I have been reading up on bug #500941 and the various others that come up in a query for JOBSERVER or NOOUTERSAVE, and the corresponding archived e-mail traffic. Although things work better now, the discussion never quite hit on one key point, and hitting it clearly might simplify any further compatibility questions that come up.

The original issue was that restore would unwind global VM in situations where that was astonishing. The discussion tended to be phrased in these terms: should ghostscript behave as a job server or not by default, and what applications will be tripped up in each case? But what was really being discussed (and implemented) was whether ghostscript should behave by default as a job server encapsulated or a job server unencapsulated, and that isn't the same question. It is possible to have a PostScript interpreter that isn't a job server at all.

Only a job server has a notion of encapsulated or unencapsulated job. Only an encapsulated job has an automatic level 0 save (though nothing stops an unencapsulated job from /doing/ a level 0 save). Only in a job server does a level 0 save include global VM. In a non-job-server, nothing is special about level 0, and restore simply never ever touches global VM, period (restore operator, paragraph 2, both PLRM2 and PLRM3). In tabular form:

                       Automatic     restore affects
                       outer save?   global VM?
 ----------------------+-------------+---------------
 Not a job server      | No          | Never
 ----------------------+-------------+---------------
 A job server          |             |
                       +-------------+
   Unencapsulated job  | No          |  Outer
                       +-------------+  save
   Encapsulated job    | Yes         |  only
 ----------------------+-------------+---------------

The original problem was not about ghostscript's default behavior being astonishing to code that assumed it was or wasn't on a job server.
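To make the "Never" row concrete, here is a small probe (my own illustration, not from the original discussion): at any save level other than a job server's automatic outer save, an update to global VM must survive a restore.

```postscript
% Illustration (not from the bug report): restore of an inner save
% never unwinds global VM, per the restore operator's description in
% PLRM2/PLRM3.
true setglobal
/probe 1 def            % /probe lives in global VM
false setglobal
save                    % take an inner save (level > 0)
  true setglobal
  /probe 2 def          % update global VM under that save
  false setglobal
restore                 % local VM unwinds; global VM must not
true setglobal
probe = flush           % prints 2: the global update survived
false setglobal
```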
The problem was that ghostscript had /3/ choices for default behavior, and it defaulted to the only one of the three that was truly astonishing: the behavior of /a job server in an unencapsulated job/. No programmer would ever expect that behavior without explicitly writing true (xyzzy) startjob.

The enhancement would be to clearly distinguish (in code and docs) those /three/ contexts. It should not be hard for ghostscript to support all three, so that with a command line parameter you could get a genuine non-job-server (where restore never touches globals) or a genuine job server, encapsulated by default. I am not sure it is necessary to have a special parameter to get job-server-in-unencapsulated-job, because any code that wants that behavior is likely to have its own true (xyzzy) startjob.

As it stands right now, -dJOBSERVER doesn't quite mean what it says (it seems to mean something more like, say, APPLEJOBSEP), -dNOOUTERSAVE does mean what it says and gives a way to choose between encapsulated and unencapsulated job server behavior, and there isn't a way to get non-job-server behavior.
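For reference, the explicit request mentioned above looks schematically like this (a sketch of my own; (xyzzy) stands for whatever job-server password the interpreter was configured with):

```postscript
% Sketch: how code explicitly asks for unencapsulated behavior.
% On success, startjob ends the current job and continues reading
% this file as a new, unencapsulated job with true on the stack;
% on failure (wrong password, or no job server) it returns false.
true (xyzzy) startjob
{
  % now running unencapsulated: definitions persist past end-of-job
  /persistentSetting 42 def
}
{
  (startjob refused; still encapsulated, or not a job server) = flush
} ifelse
```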
Hmm, I just noticed this comment in pdf_main.ps:

  % It turns out that the PDF interpreter uses memory more
  % effectively if it is run under at least one level of save.
  % This is counter-intuitive, and we don't understand why it happens,

Could it be because, at save level 0 with job server behavior, updates to global VM are being logged for unwinding? I wonder...
I looked more closely at gs_init.ps and saw that the current situation is closer to what I'm suggesting than I first thought. It looks like it /is/ possible, with the right combinations of JOBSERVER and NOOUTERSAVE, to get three behaviors: encapsulated-job, unencapsulated-job, and not-quite-exactly-but-pretty-close-to-no-job:

                     JOBSERVER
             undefined       defined
            +-------------+-----------+
  undefined | no job      | encap job |
            |-------------+           |
  defined   | unencap job | encap job |
            +-------------+-----------+
  NOOUTERSAVE

The no-job case is achieved sort of obliquely, by taking a save and popping it, so the save level comes out 1 instead of 0 as it would likely be in a real no-job interpreter. But the effect is about right: logging of globals is disabled, and no restore will ever touch them, because the discarded save is unreachable. startjob (either true or false) does seem to fail in this case (probably because it can see there is a level-0 save but doesn't have it to restore), and that's fine, because startjob is supposed to fail and return false in a no-job interpreter.

So maybe the actual current behavior is close enough, even if it was arrived at sort of sideways and the names JOBSERVER and NOOUTERSAVE were not quite the best choices (you leave off JOBSERVER to get a job server in unencapsulated mode). A purist might want the save level to be 0 in no-job mode, but at this point that would only trigger all the vmstatus pop pop 0 eq { save pop } if workarounds that people put in their code because of the old behavior.

If the current behavior can be regarded as close enough, this report converts to a documentation bug for the places in the docs where the notions of encap-job, unencap-job, and non-job really are muddled up (-dJOBSERVER and -dNOOUTERSAVE in Use.htm especially). I'll suggest some rewording when I have something I'm happy with.
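The workaround quoted above, spelled out (this is just the one-liner from the text with comments added, nothing new):

```postscript
% Defensive prologue seen in application code: if we appear to be at
% save level 0 (a job server's outer level), take a save and discard
% it so later restores cannot unwind global VM.
vmstatus                % -> save-level bytes-used bytes-max
pop pop                 % keep only the save level
0 eq { save pop } if    % at level 0: bump the level to 1
```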
Hmm, just had a tangential thought: if the GC determines that a save object has become unreachable, can it disable the logging of /local VM/ changes with respect to that object too? They'll never be needed, and there could be a performance benefit. I haven't delved into the code to see if it already does something like that.
Maybe there is a way to get clearer command parameters without having to break anything. gs_init.ps could check for -sJOBSERVER=ENCAPSULATED, UNENCAPSULATED, or NONE, with the same behavior as the current -dJOBSERVER (empty value), -dNOOUTERSAVE, or nothing, respectively; the old forms could still be recognized and merely documented as deprecated.

There's still the issue that JOBSERVER is also used elsewhere to enable the special error handlers and the ^D Apple job separators. Right now that means encapsulated jobs have these features but unencapsulated jobs don't, unless you start them as encapsulated jobs and then do true (xyzzy) startjob, in which case they do. So maybe this doesn't completely reduce to a doc bug.
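A rough sketch of how gs_init.ps might fold the string form onto the existing booleans (entirely hypothetical; I'm glossing over which dictionary the -s switch actually lands in):

```postscript
% Hypothetical sketch only. -sJOBSERVER=... would arrive as a string
% where -dJOBSERVER arrives as the boolean true, so dispatch on the
% type and translate the string onto the existing switches.
/JOBSERVER where {
  pop
  JOBSERVER type /stringtype eq {
    JOBSERVER (ENCAPSULATED) eq {
      /JOBSERVER true def               % same as -dJOBSERVER
    } {
      JOBSERVER (UNENCAPSULATED) eq {
        /NOOUTERSAVE true def           % same as -dNOOUTERSAVE
      } if
      currentdict /JOBSERVER undef      % UNENCAPSULATED or NONE:
                                        % no boolean JOBSERVER
    } ifelse
  } if
} if
```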
After 8 years with no activity, no specimen file and no apparent actual bug, I'm going to close this one. Any relevance to current Ghostscript is more than debatable.