2006.09.27 13:31 "[Tiff] Horizontal prediction for 16bit data", by Joris Van Damme

2006.09.27 22:56 "Re: [Tiff] Horizontal prediction for 16bit data", by Chris Cox

Joris;

You need to do differencing in the host machine byte order, not the file byte order (otherwise it won't work too well!).

And you must do differencing on the native size (16 bit, 32 bit, etc.) for the horizontal differencing predictor.
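In rough terms, differencing on the native size means treating the row as 16-bit samples and replacing each sample with the delta from its left neighbour, something like the sketch below (plain C, my own names, not libtiff code; it assumes the row is already in host byte order):

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch only: horizontal differencing on native 16-bit samples,
       for one row of 'count' samples already in host byte order. */
    static void difference_row_16(uint16_t *row, size_t count)
    {
        /* Walk right to left so each sample becomes the delta from its
           left neighbour; the first sample stays untouched. */
        for (size_t i = count; i-- > 1; )
            row[i] = (uint16_t)(row[i] - row[i - 1]);
    }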

One of these days I need to draw out the TIFF pipeline and put it in a technote (something I thought was obvious from the spec, but other people keep missing).

Reorder planes -> difference -> byte order swap to file order -> compress

And the reverse:

Decompress -> byte order swap to host order -> undo-difference -> reorder planes
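Concretely, the decode side for one row of 16-bit samples would look roughly like the sketch below (the helper names are illustrative, not libtiff API, and plane reordering is left out):

    #include <stdint.h>
    #include <stddef.h>

    static int host_is_big_endian(void)
    {
        const uint16_t one = 1;
        return *(const uint8_t *)&one == 0;
    }

    /* Step 1 after decompression: swap the row from file order to host order. */
    static void swap_to_host_order_16(uint16_t *row, size_t count,
                                      int file_is_big_endian)
    {
        if (file_is_big_endian == host_is_big_endian())
            return;                               /* nothing to do */
        for (size_t i = 0; i < count; i++)
            row[i] = (uint16_t)((row[i] << 8) | (row[i] >> 8));
    }

    /* Step 2: undo the horizontal differencing on native 16-bit samples. */
    static void undo_difference_row_16(uint16_t *row, size_t count)
    {
        for (size_t i = 1; i < count; i++)
            row[i] = (uint16_t)(row[i] + row[i - 1]);
    }

    /* Usage per decompressed row, in this order:
           swap_to_host_order_16(row, width, file_is_big_endian);
           undo_difference_row_16(row, width);
       and only then reorder planes if PlanarConfiguration requires it. */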

Chris

On 9/27/06 6:31 AM, "Joris" <joris.at.lebbeke@skynet.be> wrote:

I'm having some trouble with a test image that uses prediction on 16bit integer grayscale data.

If I remember correctly (I've not double-checked, but will do so if nobody can confirm or contradict my memory), the spec says horizontal prediction is only valid for 8bit data. But we're forced to 'logically extend' the spec in many areas now, so we might as well look into this.

I see three possible 'logical extensions', and all seem to have some drawbacks.

  1. Differentiate 16bit values after resolving file byte order. This is a logical problem, as file byte order resolution in decoding needs to come after prediction resolution in my model. So some hack would be required: resolving file byte order, depredicting, and then unresolving file byte order again. I've not yet investigated whether I arrive at logical impossibilities if I turn my model around, but I could do so if required.
  2. Differentiate 16bit values regardless of file byte order. I think this wouldn't be very efficient with regard to the resulting compression ratio.
  3. Differentiate 8bit values, always, just differentiating twice as many. This solves the byte order issue (see the sketch after this list), but we would still be giving up entirely on any possibility of extending to arbitrary bit depths, and we would be limiting prediction to multiples of 8 bits from this point on.
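For comparison, option 3 would amount to ignoring the sample size entirely and differencing byte by byte, so file byte order never enters into it. A sketch (again with made-up names, not existing libtiff code):

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch of option 3: undo the differencing byte-wise over the raw
       row, regardless of BitsPerSample, so no byte swapping is needed. */
    static void undo_difference_bytes(uint8_t *row, size_t nbytes)
    {
        for (size_t i = 1; i < nbytes; i++)
            row[i] = (uint8_t)(row[i] + row[i - 1]);
    }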

Interestingly, when I decode my test image with option 1 or 2 (there's no difference between 1 and 2 as far as this image is concerned, since file byte order and machine byte order are the same), I get results identical to those I get from the LibTiff RGBA interface. Those results are clearly wrong for this particular test image. When I implement option 3, I get clearly good results on this image. There's no indication in any tags of where this test image came from.

Any comments would be hugely appreciated. I'm not aware of how exactly this is handled in LibTiff. Does LibTiff support writing this stuff? If so, our only option would be to follow that. Does LibTiff not support writing it, but resolve it when reading nonetheless? In that case, we ought to try and find out what flavors are 'out there', I think, as any decision we make will at least have to try and take that into account. Do many of us have such files in our test image libraries? If they are rare at best at this point, our options may perhaps be considered more open.