2019.10.01 18:01 "[Tiff] TIFF_IO_MAX too large for Windows XP", by David C. Partridge

2019.10.10 11:28 "Re: [Tiff] TIFF_IO_MAX too large for Windows XP", by Edward Lam

On 10/10/2019 6:28 AM, David C. Partridge wrote:

I wrote a crude binary search to determine the actual limit.

Good idea!
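(The probing code itself isn't included in the thread, but a crude binary search along those lines might look roughly like the sketch below. The file name "test.bin", the 128 MB upper bound, the monotonicity assumption, and the requirement of a full read on the first call are all assumptions for illustration, not David's actual test.)

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Probe: does a single ReadFile() of `size` bytes from offset 0 succeed in full? */
static int try_read(HANDLE h, char *buf, DWORD size)
{
    DWORD got = 0;
    SetFilePointer(h, 0, NULL, FILE_BEGIN);
    if (!ReadFile(h, buf, size, &got, NULL))
        return 0;               /* the call itself failed */
    return got == size;         /* treat a short read as a failure too */
}

int main(void)
{
    const DWORD hi_limit = 128u * 1024 * 1024;  /* assumed upper probe bound; test.bin must be at least this big */
    DWORD lo = 1, hi = hi_limit, best = 0;
    char *buf = malloc(hi_limit);
    HANDLE h = CreateFileA("test.bin", GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    if (buf == NULL || h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "setup failed\n");
        return 1;
    }
    /* Assumes monotonic behaviour: if a size works, every smaller size works too. */
    while (lo <= hi) {
        DWORD mid = lo + (hi - lo) / 2;
        if (try_read(h, buf, mid)) {
            best = mid;
            lo = mid + 1;       /* works: try larger */
        } else {
            hi = mid - 1;       /* fails: try smaller */
        }
    }
    printf("largest single ReadFile: %lu bytes\n", (unsigned long)best);
    CloseHandle(h);
    free(buf);
    return 0;
}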

However, the number is *much* lower when reading from a mapped network drive (in this case connecting to the host disk while running in a VM under VMware). There the limit appears to be 67,076,032 bytes (0x3FF7FC0), i.e. 32,832 bytes less than 64 MB.

Was this on 32-bit Windows? If so, I think it confirms that the note in the WriteFile() MSDN documentation [1] found by Roger applies to ReadFile() as well. 67,076,032 / 1024^2 = 63.96868896..., which matches their "63.97" figure for 32-bit Windows. Unfortunately, they rounded up instead of truncating down. :(

-Edward

1. https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-writefile

I don't quite know why it would be that value, but it is likely a restriction in NetBIOS.

For my purposes I've changed my modification so that it never tries to read more than 16MB in a single gulp on XP.
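(That patch isn't shown in the thread; a wrapper that splits one logical read into capped chunks might look roughly like this. The helper name chunked_read and the exact cap are illustrative assumptions, not David's actual code.)

#include <windows.h>

#define MAX_SINGLE_READ (16u * 1024 * 1024)   /* assumed 16 MB per-call cap */

/* Hypothetical helper: reads `total` bytes in chunks of at most MAX_SINGLE_READ,
 * returning the number of bytes actually read. */
static ULONGLONG chunked_read(HANDLE h, void *dst, ULONGLONG total)
{
    unsigned char *p = (unsigned char *)dst;
    ULONGLONG done = 0;

    while (done < total) {
        ULONGLONG want = total - done;
        DWORD ask = (want > MAX_SINGLE_READ) ? MAX_SINGLE_READ : (DWORD)want;
        DWORD got = 0;

        if (!ReadFile(h, p + done, ask, &got, NULL) || got == 0)
            break;              /* error or end of file: stop early */
        done += got;
        if (got < ask)
            break;              /* short read: likely end of file */
    }
    return done;
}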

It would be interesting to hear from people trying to read large TIFF files from mapped network drives - I suspect they may also hit problems with the current built-in limit value.

And by the way, the performance hit of using smaller buffers is barely noticeable, at least not with TIFF files of a couple of hundred MB.

Note that TIFF_IO_MAX is an absolute per-call limit: it only comes into play when an exceptionally large read/write (larger than that value) is requested. Before any I/O of TIFF_IO_MAX bytes can be done, at least that much memory must have been obtained in a single allocation, since the data needs somewhere to go. To me it raises a red flag if TIFF_IO_MAX is being hit at all; such a high limit is unlikely to be reached unless BigTIFF is being used on an extremely large file, or the input file is corrupt. 32-bit Windows applications normally cannot access more than 2GB of memory in the first place due to address-space limitations, and the memory allocator will often refuse to allocate anything close to that much.
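(For illustration, a simplified sketch - not the verbatim libtiff read proc - of how a read callback can clamp each underlying read() call against TIFF_IO_MAX. The value shown is the one I believe recent libtiff uses in its POSIX-style I/O; treat it as an assumption.)

#include <unistd.h>
#include <stddef.h>
#include <sys/types.h>

#define TIFF_IO_MAX 2147483647U   /* assumed per-call cap; exact value depends on libtiff version/platform */

/* Clamp each read() to TIFF_IO_MAX; the caller must still own a buffer
 * large enough for the whole request, which is why hitting the cap at all
 * is suspicious. */
static ssize_t clamped_read(int fd, void *buf, size_t total)
{
    size_t done = 0;

    while (done < total) {
        size_t io_size = total - done;
        if (io_size > TIFF_IO_MAX)
            io_size = TIFF_IO_MAX;          /* never ask the OS for more per call */
        ssize_t n = read(fd, (char *)buf + done, io_size);
        if (n < 0)
            return -1;                      /* propagate the error */
        if (n == 0)
            break;                          /* end of file */
        done += (size_t)n;
    }
    return (ssize_t)done;
}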

I agree that doing file I/O in smaller chunks is rarely harmful. In fact, some operating systems behave better with smaller chunks: when the chunks exactly match the filesystem block size, when the additional I/O requests wake up sequential-read optimizations in the kernel, or when the kernel implementation has buffering issues.