The customer reports: A colleague of mine produced a PDF file with Adobe InDesign using a green background, a color gradient, a vector map, and a photo with a transparency mask where (as I am told) only 0% and 100% transparency are used. Rendering an output of 5 x 2.65 m at 600 DPI took 5 hours of processing time on a fast computer (3 GHz P4 HT, 1 GB RAM). We have had similar files with similar processing times. Do you see a way to process files like this much faster? I've put the file at casper://home/support/689155/10794.pdf
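For scale, a rough estimate of the raster this job implies (an estimate, assuming 4 bytes per pixel as in 32-bit CMYK output; the inch figures match those quoted later in this report):

5 m    = 196.85 in -> 196.85 * 600 dpi = 118110 pixels wide
2.65 m = 104.33 in -> 104.33 * 600 dpi =  62598 pixels high
118110 * 62598 = ~7.4e9 pixels * 4 bytes = ~29.6 GB of raster

A page that size cannot be held in memory on a 1 GB machine, so it must be rendered in bands.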
Please supply the arguments used to run this job. I timed the job as follows:

time ./bin/gs -q -sDEVICE=tiff32nc -r600 -sOutputFile=689155%d.tif - < ~/10794.pdf

real 5m44.798s
user 5m41.710s
sys 0m1.100s

This is significantly faster than is reported in this bug report.
gshead r7882 takes nearly an hour on my Ubuntu AMD64 x2 machine:

marcos@amd64:[2]% time gsheadppm -r600 ./10794.pdf
GPL Ghostscript SVN PRE-RELEASE 8.57 (2007-03-15)
Copyright (C) 2007 artofcode LLC, Benicia, CA. All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
Processing pages 1 through 1.
Page 1
2937.983u 208.161s 59:26.82 88.2% 0+0k 0+0io 0pf+0w
marcos@amd64:[3]%

where gsheadppm is:

bin/gs -sDEVICE=ppmraw -sOutputFile=test.ppm -dBATCH -dNOPAUSE

(along with a couple of system-specific -I options for lib and fonts).
I ran the job first with ppmraw and then with tiff32nc (see commands and output below). With ppmraw it failed after 57+ minutes with a "File size limit exceeded" error. With tiff32nc it took under 6 minutes. I will investigate why ppmraw fails like this.

time ./bin/gs -q -sDEVICE=ppmraw -r600 -sOutputFile=689155%d.tif -f -dBATCH -dNOPAUSE ~/10794.pdf
File size limit exceeded

real 57m4.620s
user 56m18.320s
sys 0m23.720s

tim@peeves:~/gs$ time ./bin/gs -q -sDEVICE=tiff32nc -r600 -sOutputFile=689155%d.tif -f -dBATCH -dNOPAUSE ~/10794.pdf

real 5m45.923s
user 5m43.630s
sys 0m1.390s
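A likely explanation for the ppmraw failure (an assumption, not verified here): ppmraw writes 3 bytes per pixel, so the full-size output file would be roughly

118110 * 62598 * 3 bytes = ~22.2 GB

which would exceed any per-process file size limit (RLIMIT_FSIZE) in effect; hitting that limit is exactly what produces the "File size limit exceeded" message. The current limit can be checked with:

ulimit -f    # per-process file size limit; "unlimited" if none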
The reason tiff32nc is so much quicker is that tiff32nc produces an 8.5 x 11 inch page instead of the 196.85 x 104.33 inch page that ppmraw produces. This may be a bug that should be tracked separately. As this job spends approximately 44% of its time in the function gx_default_fill_linear_color_scanline, it looks to be the same bug as #687445. Moving assignment to Igor for further investigation.
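To make the two device timings comparable, one could try forcing both devices to the full pixel dimensions (a sketch, not a verified fix; it uses the ~118110 x 62598 pixel size estimated above with the standard -g and -dFIXEDMEDIA switches):

time ./bin/gs -q -sDEVICE=tiff32nc -r600 -g118110x62598 -dFIXEDMEDIA -sOutputFile=689155%d.tif -dBATCH -dNOPAUSE ~/10794.pdf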
Created attachment 3598 [details] patch.txt

This is an attempt to speed up shading by skipping parts that fall outside the clipping path. Although it gives a 20% speedup for the test case, we're not sure it should go into production. While working on it we had some other ideas which may help. Please consider this patch a small temporary improvement.
Revision 8421 is about 20% faster. BTW, please be aware that the default band buffer size gives 9-pixel-high bands for this test case. I think giving the band buffer more space with -dMaxBitmap would give some speedup. Try giving it about half of your computer's RAM (500 MB), and then reduce it if RAM/disk swapping becomes too heavy.
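The 9-pixel bands are consistent with a simple calculation (an estimate, assuming 4 bytes per pixel and a default band buffer of roughly 4 MB; the exact default may vary by version):

row size    = 118110 pixels * 4 bytes = ~472 KB per row
band height = ~4 MB buffer / ~472 KB per row = ~9 rows per band

so the renderer must replay the display list thousands of times, once per band.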
Comments #1, #2, and #3 are wrong; please ignore them. In #1 and #2 the tester ran the wrong test case. In #3 the job was prematurely terminated because the disk filled up. Regarding the 44% of time in gx_default_fill_linear_color_scanline mentioned in Comment #4: I am not sure the benchmarking covered the entire job, given the disk-full failure in Comment #3, and because I observe significant time spent on other things when running the entire job. Also, since the job runs for 6 hours, full benchmarking would take days. Was it really so? Regarding bug 687445 in Comment #4: no, it has nothing in common with this bug.
Revision 8488 looks well optimized, but for big pages such as the one in this report the default banding parameters are not good. When running with -dBufferSpace=100000000 the rendering goes 5 times faster. I have opened a separate bug, 689668, about that. At this point I believe this is a documentation problem, so I'll close this bug as WONTFIX. Support may need to explain this to the customer.
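For reference, a complete invocation with the larger band buffer might look like this (a sketch assembled from the commands quoted earlier in this report; the buffer value is the one tested above):

time ./bin/gs -q -sDEVICE=tiff32nc -r600 -dBufferSpace=100000000 -sOutputFile=689155%d.tif -dBATCH -dNOPAUSE ~/10794.pdf

With a 100 MB buffer each band is roughly 100000000 / 472440 = ~210 rows instead of 9, so the display list is replayed far fewer times.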