2017.02.06 20:43 "[Tiff] Qs about support for more than 2^16 IFDs and writing performance", by Dinesh Iyer

2017.02.21 15:16 "Re: [Tiff] Qs about support for more than 2^16 IFDs and writing performance", by Dinesh Iyer

Hi everyone,

I managed to get in touch with the author of the patch. His name is Martin. He was OK with his work being used in the TIFF library. He also had the following to say about his patch.

I developed this patch really with only incrementally writing a new file in mind; I have not considered at all how well this would work with editing existing files, etc. In that sense I am not sure whether this can easily be merged into the actual library.

But he is OK with this patch being used as a starting point for any fix that will work in all scenarios. Please do let me know if you require any additional information.

Regards,
Dinesh

On Mon, Feb 13, 2017 at 11:22 AM, Olivier Paquet <olivier.paquet@gmail.com> wrote:

2017-02-13 9:35 GMT-05:00 Bob Friesenhahn <bfriesen@simple.dallas.tx.us>:

We are still needing a minimal test-case (in portable C code) which exhibits the slow writing problem so that the quality of any solution can be evaluated.

Here's my contribution (slow.c):

#include <tiffio.h>
#include <stdlib.h>
#include <stdio.h>

int main( int argc, const char *argv[] )
{
       if( argc < 2 )
       {
               fprintf( stderr, "usage: %s <directory count>\n", argv[0] );
               return 1;
       }
       int n = atoi( argv[1] );
       TIFF* tif = TIFFOpen( "test.tif", "w" );
       if( !tif )
               return 1;
       /* Write n empty directories, one TIFFWriteDirectory call each. */
       for( int i = 0; i < n; ++i )
       {
               TIFFWriteDirectory( tif );
       }
       TIFFClose( tif );
       remove( "test.tif" );
       return 0;
}

Not a valid TIFF but you did say minimal ;-) It clearly shows bad behavior:

> /usr/bin/gcc slow.c -ltiff -ljpeg -llzma -pthread
> time ./a.out 1000
0.056u 0.203s 0:00.29 86.2%
> time ./a.out 10000
3.563u 21.436s 0:25.05 99.7%
> time ./a.out 20000
14.189u 86.316s 1:40.57 99.9%