2004.04.04 17:22 "Re: [Tiff] 16 bit gray scale images and tiff", by Bob Friesenhahn
The TIFF 6.0 Specification, on page 22, says "Allowable values for Baseline TIFF grayscale images are 4 and 8, allowing either 16 or 256 distinct shades of gray." Maybe the problem here is the word "Baseline". I don't know what that means. Where in that document is 16 bits per sample allowed?
I believe that "Baseline" implies that any fully-compliant TIFF reader should be able to read it. If a writer wants to produce files that any compliant reader can read, it will stick to baseline formats. "Baseline" is the lowest common denominator. Certainly specialized readers exist which only know how to read one or two subformats.
In any event, I posted a message last week because I could not understand how film scanners could be writing 16-bit grayscale images in TIFF format (for X-ray film, which is black and white), or how I could read those images.
By extrapolating from the baseline specification, TIFF may be used to support formats which are not directly defined by the TIFF specification.
I want to thank the two people who replied to me. To read a 16-bit grayscale image I was told to call TIFFReadScanline in the TIFF library, and that works. For each pixel, the first byte was the most significant byte.
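The decoding step described above can be sketched in plain C. This is a minimal sketch, assuming the scanline buffer holds 16-bit samples with the most significant byte first, as observed; `decode_msb16` is a hypothetical helper, not part of libtiff.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: interpret a scanline buffer of MSB-first
   (big-endian) 16-bit grayscale samples as native 16-bit values. */
static void decode_msb16(const uint8_t *buf, size_t npixels, uint16_t *out)
{
    for (size_t i = 0; i < npixels; i++) {
        /* First byte of each pair is the most significant byte. */
        out[i] = (uint16_t)((buf[2 * i] << 8) | buf[2 * i + 1]);
    }
}
```

In a real reader, `buf` would be the buffer filled by TIFFReadScanline and `npixels` the image width.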
Should I assume that all images use this byte order? In the GraphicsMagick source I downloaded I found a call to MSBOrderShort that has something to do with byte order, but I don't know what that function does, nor could I find it in either GraphicsMagick (but it must be there somewhere) or in the TIFF library.
MSBOrderShort is a utility function in the GraphicsMagick library. It exists because GraphicsMagick likes to handle all TIFF data in big-endian order regardless of the architecture it is running on. For example, if libtiff presents the data as little-endian shorts (because we are running on Intel x86), GraphicsMagick's MSBOrderShort converts the data to big-endian shorts so that it can then parse the data as an octet stream. This approach is less efficient, but more flexible.
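A rough analog of that conversion can be written in a few lines of C. This is a sketch of the assumed behavior, not GraphicsMagick's actual implementation: rewrite a buffer of native-order 16-bit values as MSB-first octets in place, swapping bytes only when the host is little-endian.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch of an MSBOrderShort-like routine (assumed behavior, not the
   GraphicsMagick source): make each 16-bit value in buf MSB-first. */
static void msb_order_short(uint8_t *buf, size_t nbytes)
{
    const uint16_t probe = 1;
    uint8_t low;
    memcpy(&low, &probe, 1);      /* detect host byte order at runtime */
    if (low == 1) {               /* little-endian host: swap each pair */
        for (size_t i = 0; i + 1 < nbytes; i += 2) {
            uint8_t t = buf[i];
            buf[i] = buf[i + 1];
            buf[i + 1] = t;
        }
    }
}
```

On a big-endian host the function is a no-op, which is why code written this way runs unchanged on either architecture.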
Another question: is there such a thing as an RGB image with 16 bits per sample (and so 3 samples per pixel)? What happens when a 14-bit scanner writes out a color image? Are the samples truncated to 8 bits per color?
Yes, it is possible. In fact, it appears that GraphicsMagick knows how to read/write it.
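As for the 14-bit question: one common convention (an assumption on my part, not something mandated by the TIFF specification) is to widen each 14-bit sample into a 16-bit container rather than truncate it to 8 bits, shifting left and replicating the top bits so the full range is preserved.

```c
#include <stdint.h>

/* Widen a 14-bit sample into a 16-bit container (a common convention,
   assumed here): shift left by 2 and replicate the top 2 bits, so that
   0 maps to 0 and 0x3FFF (14-bit max) maps to 0xFFFF (16-bit max). */
static uint16_t widen14to16(uint16_t v14)
{
    return (uint16_t)((v14 << 2) | (v14 >> 12));
}
```

A scanner doing this would then write a plain 16-bits-per-sample RGB TIFF, which readers like GraphicsMagick already understand.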