1993.08.17 00:06 "byte swapping 16- and 32-bit data", by Sam Leffler

1993.08.17 16:51 "Re: byte swapping 16- and 32-bit data", by Dan McCoy

I believe I'm the one who brought up the problem.

I think the reason that this has not come up before is that there are currently very few people who do both of:

  1. use a BitsPerSample greater than 8
  2. transfer files between big-endian (SGI, Mac, Sun) and little-endian (PC, DEC) machines.

I predict that there will be many more people doing both of these in the future.

Our products ship on Macs, PCs, and 6 different workstations.

We have tried the above and found that it is a problem.

As Sam stated, the library currently handles this problem transparently for uncompressed data, but not for compressed data.

This is clearly not right; it is inconsistent.

What Sam is suggesting is removing the byte-order independence from the uncompressed code.

I think this is the wrong fix.

Adding byte-order independence to all compression modes incurs NO extra overhead in the normal case. The extra expense of byte swapping will only be paid when the image file has been transferred to an opposite byte-order machine, such as Mac to PC.
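
(A sketch of the check involved, assuming libtiff's TIFFIsByteSwapped and TIFFSwabArrayOfShort plus a hypothetical post-decode helper, not the library's actual internals: the swap runs only when the file's byte order differs from the host's, so same-order files pay nothing beyond a single test.)

    #include "tiffio.h"

    /* Hypothetical helper: swap 16-bit samples after a codec decodes
     * them, but only when the file was written with the opposite byte
     * order; otherwise the data is never touched. */
    static void
    swab16_if_needed(TIFF* tif, uint16* buf, unsigned long nsamples)
    {
        if (TIFFIsByteSwapped(tif))              /* file order != host order */
            TIFFSwabArrayOfShort(buf, nsamples); /* swap each 16-bit sample */
    }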

I see little problem in asking application writers who want to deal with the case of >8-bit data to wrap their calls to read data with code of the form:
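
(Sam's actual snippet is not reproduced here; roughly, the wrapper he describes might look like the following sketch, which uses libtiff's TIFFIsByteSwapped and TIFFSwabArrayOfShort; the helper name and details are illustrative assumptions.)

    #include "tiffio.h"

    /* Illustrative sketch, not the code from Sam's message: the
     * application reads a scanline of 16-bit samples and swaps them
     * itself when the file came from an opposite byte-order machine. */
    static int
    read_scanline16(TIFF* tif, uint16* buf, uint32 row)
    {
        int ok = TIFFReadScanline(tif, buf, row, 0);
        if (ok > 0 && TIFFIsByteSwapped(tif))
            TIFFSwabArrayOfShort(buf, TIFFScanlineSize(tif) / 2);
        return ok;
    }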

I do.

For one, 90% of application writers will develop and test on a single platform and get it wrong.

We've got dozens of programs that read TIFF files. 99% of the time, they read files that were written on the same platform they are read on.

I don't want to spread machine-dependent code through all of those applications to handle the last 1%. It makes much more sense, and is much more "orthogonal", to just make the library handle the data the same way it handles the tags, in a machine-independent way.

I would vote for making all the compression routines handle data with more than 8 bits per sample the way that the uncompressed routines currently do.
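
(For contrast, a sketch of what the common case would then look like in application code, assuming standard libtiff calls: a plain read loop with no byte-order checks anywhere.)

    #include <stdlib.h>
    #include "tiffio.h"

    /* Sketch: if the library swaps internally for every codec, as it
     * already does for uncompressed data, the reader stays byte-order
     * independent with no machine-dependent code. */
    void
    read_image(const char* name)
    {
        TIFF* tif = TIFFOpen(name, "r");
        if (tif != NULL) {
            uint32 row, length;
            char* buf = (char*) malloc(TIFFScanlineSize(tif));
            TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &length);
            for (row = 0; row < length; row++)
                TIFFReadScanline(tif, buf, row, 0);  /* samples would arrive in host order */
            free(buf);
            TIFFClose(tif);
        }
    }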

Dan McCoy mccoy@pixar.com