2019.10.01 18:01 "[Tiff] TIFF_IO_MAX too large for Windows XP", by David C. Partridge

2019.10.10 22:24 "Re: [Tiff] TIFF_IO_MAX too large for Windows XP", by David C. Partridge

OK I totally understand that you folks don't wish to add XP specific code (though that code's pretty trivial).

I can tell you I didn't change our builds to reinstate XP support just for fun - I was getting a lot of pressure to do so, and it only took me about 45 minutes to change the build options and the code to allow it. I only had to add special-case code to test the Windows version at run time in one place.

However, I'd argue that the current setting of TIFF_IO_MAX is unreasonably high.

Given that the actual limit in XP (for network shares) appears to be a bit under 64MB, or a bit under 32MB for x64, I really see no harm in reducing it to 16MB, 1MB or even 64K! My experience has been that reading files with ever-increasing buffer sizes gives diminishing returns that typically tail off after 64KB, and beyond 1MB there's little or no benefit in increasing the chunk size. However, if you want to go bigger, by all means set the limit to 16MB.

Cheers

David

-----Original Message-----

From: Tiff [mailto:tiff-bounces@lists.osgeo.org] On Behalf Of Bob Friesenhahn

> That said, I'm against including code of this nature, when we don't
> have adequate testing or CI coverage. It's a recipe for future
> regressions if the XP codepaths are not tested. It's difficult for a
> volunteer-driven project to adequately support and maintain dead
> platforms. There is a cost to adding and maintaining this support,
> which needs to be borne in mind. As is the quality of the support if
> it's not being actively and routinely tested.

While I am willing to test, use and support libtiff on a wide range of contemporary platforms, Windows XP is not such a platform.

There is no implied warranty. :-)

I think it is worth discussing if this rather high default limit should be reduced to something much smaller like 32k or 128k.

There is reason to believe that performance may improve (and not suffer) if I/Os use the underlying filesystem block size. In the case of ZFS this might be 128k, for NFS it might be some varying size, and who knows what for SMB/CIFS. However, it may be that using the filesystem block size is only an improvement if the write offset is also aligned to the filesystem block size, and this might not be easily feasible with TIFF (unless a short write is done to achieve a desired offset, followed by full writes).

On a Unix (POSIX) type system, the filesystem block size can be determined using statvfs() or fstatvfs() and checking the f_bsize member. More than likely there are similar interfaces for Windows to obtain the filesystem block size.

Absent consideration of underlying storage block sizes, there are optimum read/write sizes when dealing with network I/O.

bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriese