2009.12.07 16:47 "[Tiff] Full strip allocation in TIFFWriteScanline", by Lars Uebernickel
TIFFWriteScanline always allocates memory for a complete strip. When writing large TIFF files in a single strip, this can quickly exhaust memory.
I am aware that the whole point of dividing the image data into strips is to be able to read and write in manageable chunks of memory. However, some applications (e.g. many fax applications) seem to have problems with TIFF images with more than a single strip.
I am aware of libtiff's strip-based API, which can write strip data incrementally with TIFFWriteRawStrip. Unfortunately, it expects already-compressed data and therefore cannot make use of the built-in compressors. TIFFWriteEncodedStrip, on the other hand, compresses data before writing it, but it does not work incrementally, i.e. it expects the full data of each strip up front.
I assume this is because libtiff's implementations of the compression algorithms cannot operate on streamed data. I think it would make sense to support this, since the amount of data most compression schemes need to buffer should be substantially less (and never more) than the smallest strip size feasible for each algorithm.
What are your general thoughts on changing the compression implementations in this regard? Do you think it is doable, or am I overlooking something? Is somebody already working on it or planning to?
P.S.: A bit of context: we ran into these issues with ghostscript, after porting all of its tiff output devices to use libtiff instead of writing output files manually (using ghostscript's streamable implementations of some of the compression schemes). To preserve backwards compatibility, the output files are written in a single strip.