Bug 696506 - clean up genarch logic for cross-compiling
Summary: clean up genarch logic for cross-compiling
Status: RESOLVED DUPLICATE of bug 696508
Alias: None
Product: Ghostscript
Classification: Unclassified
Component: Build Process
Version: master
Hardware: PC Linux
Importance: P4 normal
Assignee: Chris Liddell (chrisl)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2016-01-08 16:10 UTC by Mike Frysinger
Modified: 2016-06-09 07:25 UTC
CC List: 1 user

See Also:
Customer:
Word Size: ---


Attachments
Draft: autogenerate most of arch.h via autoconf (4.91 KB, patch)
2016-01-12 15:10 UTC, Brian Norris
arch.h.in draft (2.62 KB, text/plain)
2016-01-12 15:10 UTC, Brian Norris

Description Mike Frysinger 2016-01-08 16:10:51 UTC
This might not cover all the defines, but it should cover many. Feel free to split this up into multiple bugs if it's too much for one.

ARCH_ALIGN_xxx_MOD - autoconf has AC_CHECK_ALIGNOF

ARCH_LOG2_SIZEOF_xxx - should be able to use AC_CHECK_SIZEOF to build this value

ARCH_SIZEOF_xxx - autoconf has AC_CHECK_SIZEOF

ARCH_MAX_xxx - these are constants ... genbase.c doesn't actually calculate them. They can be moved straight to config.h or wherever.

ARCH_IS_BIG_ENDIAN - bug 696498 + AC_C_BIGENDIAN should provide the WORDS_BIGENDIAN define you can key off of

ARCH_FLOATS_ARE_IEEE - there is no standard currently for this.  the autoconf guys suggest:
https://lists.gnu.org/archive/html/autoconf/2003-05/msg00028.html
   #if (__STDC_IEC_559__ \
        || (FLT_RADIX == 2 && FLT_MANT_DIG == 24 \
            && FLT_MIN_EXP == -125 && FLT_MAX_EXP == 128))
   # define ARCH_FLOATS_ARE_IEEE 1
   #else
   # define ARCH_FLOATS_ARE_IEEE 0
   #endif

Once you have that define, ARCH_FLOAT_MANTISSA_BITS and ARCH_DOUBLE_MANTISSA_BITS are easy to set up.

ARCH_PTRS_ARE_SIGNED - you can calculate this by compiling:
  #include <stdint.h>
  int main(void) {
    /* array size is -1 (an error) if pointers compare as signed */
    char b[(char *)(uintptr_t)-1 < (char *)(uintptr_t)0 ? -1 : 1];
    return sizeof(b);
  }
If it compiles, then ARCH_PTRS_ARE_SIGNED=0; otherwise ARCH_PTRS_ARE_SIGNED=1. This is the same trick that autoconf uses to calculate type sizes.

ARCH_ARITH_RSHIFT - I don't have a good answer for this. Since it looks like a workaround for some old compilers, maybe make it default to the sane answer?

ARCH_DIV_NEG_POS_TRUNCATES - you can probably write a test like the pointer-signedness check above.
Comment 1 Brian Norris 2016-01-12 15:10:10 UTC
Created attachment 12231
Draft: autogenerate most of arch.h via autoconf

Attaching a preliminary hack that gets most of arch.h autogenerated for me on my host machine. I haven't tried out cross-compiling yet.

Caveats:
 * I haven't done any of the plumbing to use the autogenerated arch.h instead of genarch.c
 * Haven't figured out a good plan for the fallback cases where GX_COLOR_INDEX_TYPE isn't determined by autoconf (a very small corner case, I think?)
Comment 2 Brian Norris 2016-01-12 15:10:43 UTC
Created attachment 12232
arch.h.in draft
Comment 3 Chris Liddell (chrisl) 2016-01-12 23:55:04 UTC
We aren't going to stop using genarch.
Comment 4 Chris Liddell (chrisl) 2016-01-13 01:23:28 UTC
Preliminary comments:

Your IEEE floating point checks are a) glibc-specific, and hence not portable (especially bad, as this is supposed to improve cross-compilation support), and b) not safe according to the glibc documentation:

"Of these macros, only FLT_RADIX is guaranteed to be a constant expression. The other macros listed here cannot be reliably used in places that require constant expressions, such as `#if' preprocessing directives or in the dimensions of static arrays."

https://www.gnu.org/software/libc/manual/html_node/Floating-Point-Parameters.html

TBH, I wonder if we shouldn't simply default to assuming IEEE fp, and have a "--without-ieee-fp" option. I can't imagine many platforms don't conform to at least IEEE fp standards these days.


The GX_COLOR_INDEX_TYPE should ultimately fall back to "unsigned long int" as that really should be supported everywhere, and should be a 32 bit value. It should probably warn if that happens when the large color indices were requested, and should also fail if "unsigned long int" isn't at least 32 bits.

arch.h.in should go in the arch directory with the other non-dynamically created arch definition headers. I'd prefer if it was something like "arch/config-arch.h.in". Having the build actually use the generated file should be as simple as having configure set the "ARCH_CONF_HEADER" to point to the generated header file.
Comment 5 Mike Frysinger 2016-01-13 11:44:22 UTC
(In reply to Chris Liddell (chrisl) from comment #4)

Why do you think my (well, actually Paul Eggert's) IEEE proposal is glibc specific? If you look at the linked discussion, Paul explicitly says "This works on all implementations that I know of", and he's talking about way more than glibc. In fact, those defines have nothing to do with glibc, as it is gcc that sets them up, and I can't find glibc doing those #defines anywhere. If the glibc manual covers them, it's as part of the complete environment, and that includes things coming via gcc.

If GX_COLOR_INDEX_TYPE should be a 32-bit value, why not just use uint32_t and be done?
Comment 6 Chris Liddell (chrisl) 2016-01-14 01:12:36 UTC
(In reply to Mike Frysinger from comment #5)
> (In reply to Chris Liddell (chrisl) from comment #4)
> 
> Why do you think my (well, actually Paul Eggert's) IEEE proposal is glibc
> specific? If you look at the linked discussion, Paul explicitly says "This
> works on all implementations that I know of", and he's talking about way
> more than glibc. In fact, those defines have nothing to do with glibc, as
> it is gcc that sets them up, and I can't find glibc doing those #defines
> anywhere. If the glibc manual covers them, it's as part of the complete
> environment, and that includes things coming via gcc.

My apologies, you are right: those instances in glibc appear to be related to their C++ support - but it's hard to be sure. I don't think the documentation is terribly clear on it.

I do think it would be preferable to allow it to be overridden on the configure command line to either force IEEE floats, or force non-IEEE floats, but that's not going to be difficult.

> If GX_COLOR_INDEX_TYPE should be a 32-bit value, why not just use uint32_t
> and be done?

GX_COLOR_INDEX_TYPE must be *at least* 32 bits to work at all. But for full DeviceN output support, it needs to be 64 bits - that is, to allow the maximum number of spot colors supported in, for example, tiffsep. So we *really* want to use a 64 bit unsigned integer data type if one is available.
Comment 7 Mike Frysinger 2016-01-14 04:34:09 UTC
(In reply to Chris Liddell (chrisl) from comment #6)

I certainly have no problem with there being knobs for people to explicitly override tests. Usually autoconf uses cache vars for these, so the user could do something like:
./configure \
  gs_cv_floats_are_ieee=yes \
  ...
but if you prefer classic configure flags instead, that's up to you.

Since pretty much every standard out there requires a 64-bit type via uint64_t, what systems that you care about lack it? I don't think I've come across one yet, but I've only been programming since ~2003.
Comment 8 Chris Liddell (chrisl) 2016-01-14 09:17:52 UTC
(In reply to Mike Frysinger from comment #7)
> (In reply to Chris Liddell (chrisl) from comment #6)
> 
> I certainly have no problem with there being knobs for people to explicitly
> override tests. Usually autoconf uses cache vars for these, so the user
> could do something like:
> ./configure \
>   gs_cv_floats_are_ieee=yes \
>   ...
> but if you prefer classic configure flags instead, that's up to you.
> 
> Since pretty much every standard out there requires a 64-bit type via
> uint64_t, what systems that you care about lack it? I don't think I've
> come across one yet, but I've only been programming since ~2003.

C89 (ANSI C) does not include a 64 bit integer type and, in theory, that's the standard we support - Ghostscript is compiled for a wide array of systems, including *very* legacy systems.

There's also the fact that some (mostly embedded) systems have no hardware support for 64 bit integer types, so they are implemented in software, and operations on them are commensurately slower. Those types of system tend to be heavily memory limited, too, and there is a slight memory overhead in using 64 bit color indices.

It's preferable to offer those looking to embed Ghostscript the flexibility to use the smaller data type.

At least we dropped support for 16 bit color indices!!
Comment 9 Chris Liddell (chrisl) 2016-06-09 07:25:27 UTC
I'm going to continue this under the other, closely related, bug.

*** This bug has been marked as a duplicate of bug 696508 ***