AWARE SYSTEMS
TIFF and LibTiff Mail List Archive

Thread

2011.06.05 12:38 "[Tiff] Questions on TIFF LZW compression", by Thomas Richter
2011.06.05 12:47 "[Tiff] Questions on TIFF LZW compression", by Thomas Richter
2011.06.06 16:17 "Re: [Tiff] Questions on TIFF LZW compression", by Olivier Paquet
2011.06.06 16:55 "Re: [Tiff] Questions on TIFF LZW compression", by Bob Friesenhahn
2011.06.07 16:45 "Re: [Tiff] Questions on TIFF LZW compression", by Thomas Richter
2011.06.07 17:06 "Re: [Tiff] Questions on TIFF LZW compression", by Bob Friesenhahn
2011.06.09 01:27 "Re: [Tiff] Patch for tif_ojpeg version 3.9.5", by Kevin Myers
2011.06.08 22:32 "[Tiff] Patch for tif_ojpeg version 3.9.5", by
2011.06.09 05:34 "Re: [Tiff] Patch for tif_ojpeg version 3.9.5", by Andreas Kleinert

2011.06.05 12:47 "[Tiff] Questions on TIFF LZW compression", by Thomas Richter

Hi folks,

 could some of the knowledgeable people here help me understand
 some details of the LZW compression specified in the TIFF spec?
 I'm not asking about the compression algorithm itself, which is
 clear enough, but rather about its integration into the TIFF
 specification.

 First, how does LZW compression work for images that are not 8
 bits/pixel? As far as I can tell from the spec, the input to the
 LZW compressor is the "raw" TIFF strip buffer, but this leaves a
 couple of corner cases open. I can see that this type of
 compression works for any bit depth that divides eight or is a
 multiple of eight, but what happens for 10 bit/pixel images? Is
 LZW really applied to the bit-packed(!) input, or is it applied
 to 16 bit data, with the 10 bits packed (left-justified?
 right-justified?) into a 16 bit word? In the former case, LZW is
 of course unlikely to compress well.
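 To make the question concrete: for depths below 8 bits, TIFF
 packs samples MSB-first into bytes, and I would assume 10-bit
 data gets packed the same way before compression. A Python
 sketch of such MSB-first packing (a hypothetical helper for
 illustration, not taken from any TIFF library):

```python
def pack_bits(samples, bits_per_sample):
    """Pack integer samples MSB-first into bytes, the bit order
    TIFF uses for sub-byte sample depths; a trailing partial
    byte is padded with zero bits."""
    acc = 0      # bit accumulator
    nbits = 0    # number of valid bits currently in acc
    out = bytearray()
    for s in samples:
        acc = (acc << bits_per_sample) | (s & ((1 << bits_per_sample) - 1))
        nbits += bits_per_sample
        while nbits >= 8:
            nbits -= 8
            out.append((acc >> nbits) & 0xFF)
    if nbits:  # pad the final partial byte
        out.append((acc << (8 - nbits)) & 0xFF)
    return bytes(out)

# Two 10-bit samples (0x3FF, 0x000) -> 20 bits -> 3 bytes: ff c0 00
print(pack_bits([0x3FF, 0x000], 10).hex())
```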

 What happens with 16 bit/pixel data? Is the input to the LZW
 compression big-endian or little-endian? Or does it depend on
 the endianness of the TIFF file? I suspect the latter, but
 couldn't find a clear indication of this.
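 If the byte order does follow the file header ("II" meaning
 little-endian, "MM" meaning big-endian), then I would expect a
 writer to serialize the samples roughly like this sketch before
 handing the bytes to LZW (hypothetical helper, just to pin down
 what I mean):

```python
import struct

def serialize_16bit(samples, big_endian):
    """Serialize 16-bit samples in the TIFF file's byte order
    ('MM' files -> big-endian, 'II' files -> little-endian);
    the resulting byte stream would be the LZW input."""
    fmt = ('>' if big_endian else '<') + str(len(samples)) + 'H'
    return struct.pack(fmt, *samples)

print(serialize_16bit([0x1234], True).hex())   # 1234
print(serialize_16bit([0x1234], False).hex())  # 3412
```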

 What about the predictor mode (the horizontal difference
 predictor)? While there is no overflow problem with 8 or 16
 bit/pixel data, what happens with 10 bit/pixel data, for
 example? The specs don't spell this out: do I need to take the
 difference modulo 2^N, where N is the bit depth? Or modulo 2^M,
 where M is the bit size of a "container" word, say 16 bits for
 10 bit data? And if I encode 10 bit/pixel data with a predictor,
 do I apply the predictor to the pixel values (sensible) or to
 the raw 8-bit values that result from bit-packing the 10-bit
 data into 8-bit containers (not very sensible, but not ruled
 out by the specs)?
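 For 8 and 16 bit data, at least, differencing modulo 2^N is the
 interpretation that round-trips losslessly, since decoding by
 cumulative summing modulo 2^N undoes it exactly. A Python sketch
 of what I mean (one sample per pixel for simplicity; the helpers
 are hypothetical):

```python
def hdiff_encode(row, bits=8):
    """Horizontal differencing (TIFF Predictor=2) on one row,
    with differences taken modulo 2**bits so that wrap-around
    on over/underflow is harmless."""
    mask = (1 << bits) - 1
    out = [row[0] & mask]
    for prev, cur in zip(row, row[1:]):
        out.append((cur - prev) & mask)
    return out

def hdiff_decode(row, bits=8):
    """Undo hdiff_encode by cumulative summing modulo 2**bits."""
    mask = (1 << bits) - 1
    out = [row[0] & mask]
    for d in row[1:]:
        out.append((out[-1] + d) & mask)
    return out

data = [10, 12, 11, 255, 0]
assert hdiff_decode(hdiff_encode(data)) == data  # lossless round trip
```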

 Can the predictor also be specified without LZW compression? It
 probably makes little sense then, but can the predictor, for
 example, be combined with the fax (CCITT, ITU) compressions?

 And finally, what about the predictor and tiles? Does the
 predictor predict across tile edges? The specs seem to indicate
 this, but that would also imply that tiles cannot be
 decompressed independently. Is this really the intent of
 prediction, or are the specs incomplete, and should I stop
 prediction at tile boundaries (which makes sense if you want to
 retain the independence of the tiles)?

 Thanks a lot for your time!

 Greetings,
      Thomas