2006.06.20 06:29 "[Tiff] CCITT compression standard and its application in TIFF", by Joris
There is confusion as to the CCITT compression standards and how they apply in TIFF. Or at least, there is in my head. So I thought I'd make an attempt at sorting it out and posting it here.
** What is white, and what is black? **
The CCITT compression standards are designed for FAX communication. That's why they define compression of white and black. That is not desirable in a TIFF compression scheme, as the compression scheme should define compression of 0's and 1's, and the TIFF photometric should define the interpretation of these values.
Thus, I propose we read "white" in the CCITT standard to mean 0, and "black" to mean 1, and next apply the TIFF photometric, either MinIsWhite or MinIsBlack, to determine what 0 and 1 mean. Thus, CCITT "white" ends up meaning black when CCITT compression is combined with TIFF photometric MinIsBlack.
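To make this two-step rule concrete, here's a minimal sketch in Python. The names MINISWHITE, MINISBLACK and displayed_color are my own illustration, not LibTiff API:

```python
# Sketch of the proposed rule: a CCITT "white" run encodes bit 0 and a
# CCITT "black" run encodes bit 1; the TIFF photometric then decides,
# in a separate step, what 0 and 1 look like when displayed.

MINISWHITE = 0  # TIFF PhotometricInterpretation: WhiteIsZero
MINISBLACK = 1  # TIFF PhotometricInterpretation: BlackIsZero

def displayed_color(ccitt_run_color, photometric):
    """Map a CCITT run color ('white' or 'black') to the displayed color."""
    bit = 0 if ccitt_run_color == "white" else 1   # step 1: CCITT run -> bit
    if photometric == MINISWHITE:                  # step 2: bit -> color
        return "white" if bit == 0 else "black"
    else:  # MINISBLACK
        return "black" if bit == 0 else "white"

# The "normal" case: CCITT "white" displays as white...
assert displayed_color("white", MINISWHITE) == "white"
# ...but with MinIsBlack, CCITT "white" displays as black.
assert displayed_color("white", MINISBLACK) == "black"
```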
Some people will argue that this is confusing. It is. However, the alternative is even more confusing, as it would mean ignoring the TIFF photometric entirely and having the compression scheme define the color interpretation. It would mean that switching the PhotometricInterpretation tag value from MinIsBlack to MinIsWhite would not change the image interpretation. Clearly, that alternative is much worse.
My proposal furthermore seems to be consistent with...
- ...the TIFF 6.0 specification, even if it is a bit vague. It clearly states near the top of page 50:
The "normal" PhotometricInterpretation for bilevel CCITT compressed data is WhiteIsZero. In this case, the CCITT "white" runs are to be interpreted as white, and the CCITT "black" runs are to be interpreted as black. However, if the PhotometricInterpretation is BlackIsZero, the TIFF reader must reverse the meaning of white and black when displaying and printing the image.
- ...most current practice, even if not all current practice
- ...the fact that other parts of the CCITT specification, too, need to be ignored in favor of the TIFF encapsulation. For example, the T.4 specification says near the top of the "Coding scheme" chapter:
- A total of 1728 picture elements represent one horizontal scan line of 215 mm length.
This definition of density, too, needs to be ignored, in favor of the resolution tags inside the TIFF IFD. So, clearly, as a general rule, it seems we must take from the CCITT spec only that which concerns us, i.e. the details of the actual compression, and ignore all else.
One last note: writers can best avoid the confusion by using 0 to represent white, i.e. the MinIsWhite photometric. Thus, CCITT "white", read to mean 0, is interpreted according to the TIFF photometric as actual white, and all is consistent. Though it is valid to use 0 to represent black by stating the MinIsBlack photometric, it is confusing, as black then gets encoded as CCITT "white". The only good reason for doing so would be to get the best possible compression ratio on images that have a black background with white text, since CCITT compression is optimized for a CCITT "white" background with CCITT "black" text.
** What is proper k value in two-dimensional T.4 encoding? **
The T.4 spec says writers should pick a k value depending on resolution of the image. The k value determines the maximum number of lines that are encoded with the two-dimensional compression scheme after a line with one-dimensional encoding (the "key" line, to use video compression speak).
Note that a writer is free to insert fewer than k-1 two-dimensionally encoded lines at any point. Also note that the choice between one-dimensional and two-dimensional coding is signalled at the start of each line. Thus, a reader does not need to know what k was used; it simply reads which coding mode is in effect from the first few bits of each line.
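To illustrate why the reader can stay ignorant of k: in two-dimensional T.4 every EOL code (eleven 0 bits followed by a 1) is followed by a single tag bit, 1 meaning the next line is one-dimensionally coded, 0 meaning two-dimensionally coded. A toy sketch, operating on a bit string for clarity (line_modes is my own helper, not LibTiff API, and the run data between EOLs is dummy filler):

```python
# Recover each line's coding mode from the tag bit after each EOL,
# without knowing what k the writer used.

EOL = "000000000001"  # T.4 end-of-line code: eleven 0 bits, then a 1

def line_modes(bitstring):
    """Return '1D' or '2D' for each line, read from the tag bit after each EOL."""
    modes = []
    i = bitstring.find(EOL)
    while i != -1:
        tag_pos = i + len(EOL)
        if tag_pos >= len(bitstring):
            break  # trailing EOL with no line after it
        modes.append("1D" if bitstring[tag_pos] == "1" else "2D")
        i = bitstring.find(EOL, tag_pos + 1)
    return modes

# EOL+1 (a 1D "key" line), then EOL+0 (a 2D line); run data is dummy bits.
assert line_modes(EOL + "1" + "10110" + EOL + "0" + "01101") == ["1D", "2D"]
```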
In my head, here again, we have a mix of compression scheme responsibilities and IFD responsibilities that may be perfectly desirable in a full FAX communication setting, but is undesirable in TIFF. Note that in TIFF the densities are not always known; the resolution tags may simply be absent.
I humbly propose writers would do best using the fixed value of 8 for k in TIFF, always, independent of resolution. Less does not seem very useful: one may as well use pure one-dimensional T.4 instead. More does not seem very useful either: one may as well sacrifice error-recoverability completely and use the fully two-dimensional T.6 compression instead. And a flexible value based on actual resolution seems a mixup of departments that should be independent in TIFF, and is not always possible anyway.
** What is CCITT RLE and RLEW compression? **
The ITU T.4 and T.6 specifications help define TIFF compression 3 (T.4, aka Group 3 FAX) and TIFF compression 4 (T.6, aka Group 4 FAX). They do not directly define TIFF compression 2 (CCITT RLE), nor TIFF compression 32771 (CCITT RLEW). Googling around to double-check my understanding of these last two, there was surprisingly little I could find. So, here's to the best of my knowledge what these compression schemes are:
Both resemble T.4 compression. Neither uses the T4Options tag. The only difference from classic one-dimensional T.4 compression is the total absence of EOL codes. In place of the EOL there are potentially some fill bits with value 0, ensuring the start of each new line sits on a byte boundary in RLE compression, or on a two-byte word boundary in RLEW compression. As the start of each data block in TIFF sits on a word boundary anyway, there is no ambiguity in this description. Nevertheless, as each compressed image data block is totally self-contained in TIFF, I would propose that the offset of each compressed line relative to the offset of the compressed data as a whole is what counts; so for bad writers that incorrectly dump RLEW compressed data at an odd byte offset, each compressed data line ends up at an odd byte offset too.
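The padding rule above boils down to a little modular arithmetic. A sketch of my own (fill_bits is a hypothetical helper, not from either spec or from LibTiff), counting bit offsets from the start of the compressed block as proposed:

```python
# After each compressed line, append zero fill bits until the next line
# starts on a byte boundary (RLE) or a two-byte word boundary (RLEW),
# with offsets counted from the start of the compressed data block.

def fill_bits(bits_written, word_aligned):
    """Number of zero fill bits needed after a line.
    bits_written: bits emitted so far, from the start of the block.
    word_aligned: False for RLE (8-bit boundary), True for RLEW (16-bit)."""
    boundary = 16 if word_aligned else 8
    return (-bits_written) % boundary

assert fill_bits(13, word_aligned=False) == 3   # pad 13 bits up to 2 bytes
assert fill_bits(16, word_aligned=False) == 0   # already byte-aligned
assert fill_bits(20, word_aligned=True) == 12   # pad 20 bits up to 2 words
```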
I've double-checked my understanding of these compression modes by writing data with my own proprietary encoder and reading it back with LibTiff. This works for RLE compression, but does not seem to work for RLEW compression. Does anyone know whether my understanding of RLEW is incorrect, or whether there's a bug in the RLEW reader (entirely possible, since the compression scheme is not widely used, to say the least)? Also, if anyone has RLEW TIFFs produced by something other than LibTiff, please send them to me, and I'll try to use them to clarify this issue and post a follow-up.
I hope this helps clear up some current and future misunderstanding on the black/white issue at least. If anyone knows anything here to be a misunderstanding on my part, I gladly stand corrected.
Joris Van Damme