2017.06.28 13:49 "Re: [Tiff] Excessive memory allocation while chopping strips", by Olivier Paquet
2017-06-28 6:09 GMT-04:00 Even Rouault <firstname.lastname@example.org>:
I wouldn't call the tool "broken". It is a reasonable behaviour in most cases
I would. It's reporting problems for code which runs fine. It's causing "fixes" to be written which may introduce new security issues, the very thing the tool is meant to detect in the first place.
That does not mean we might not still be better off with the tool than without. But we have to be careful about what we do based on its reports. It reminds me a lot of the early days of valgrind, where I'd "fix" things which were not real bugs just to be able to keep using the tool. They were simpler fixes, mostly uninitialized memory, but even those could have unwanted consequences (e.g. calloc() is a whole lot slower than malloc()).
For strip reading, there's the mechanism I already put in place and described in my previous email. Small files that declare a large strip byte count will no longer cause the full allocation to be done before beginning to read the strip content.
Looks like a reasonable fix but:
- It's not that simple (it should ideally have a few pairs of eyes go over it).
- It made reading the large strips a little slower.
- It may have increased the peak amount of memory required (in case realloc needs to copy).
- It does not help with compressed strips, as I understand it.
But otherwise, it looks like something which might be useful to someone required to deal with unsafe input.
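For anyone following along, the idea (and the trade-offs listed above) can be illustrated with a minimal sketch — this is not the actual libtiff patch, just an assumed shape of "grow the buffer incrementally while reading instead of trusting the declared strip byte count up front". The chunk size and function name are made up:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch, not the real libtiff code: grow the buffer in
 * fixed increments while reading, rather than allocating the full
 * declared strip size before any data has been seen. A tiny truncated
 * file that declares a huge strip then fails (or stops) after a small
 * allocation instead of a multi-gigabyte one. */
#define CHUNK ((size_t)1024 * 1024)  /* 1 MB increments, arbitrary */

static unsigned char *read_strip_progressive(FILE *f, size_t declared_size,
                                             size_t *out_size)
{
    unsigned char *buf = NULL;
    size_t alloc = 0, got = 0;

    while (got < declared_size) {
        if (got == alloc) {
            size_t next = alloc + CHUNK;
            if (next > declared_size)
                next = declared_size;
            /* realloc may copy: this is where the peak-memory and
             * speed concerns mentioned above come from. */
            unsigned char *tmp = realloc(buf, next);
            if (tmp == NULL) {
                free(buf);
                return NULL;
            }
            buf = tmp;
            alloc = next;
        }
        size_t n = fread(buf + got, 1, alloc - got, f);
        if (n == 0)
            break;  /* EOF or read error: return what we actually got */
        got += n;
    }
    *out_size = got;
    return buf;
}
```

Note how a file with only a few bytes of real data never allocates more than one chunk, no matter what the declared strip byte count claims — which is the whole point, and also why it doesn't help with compressed strips, where the decoded size is a separate question.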
Given the number of teams that fuzz libtiff / tiff tools and report issues in Bugzilla, it seems there's a lot of interest for such use cases.
I don't know about that. Just because people like to use their shiny new fuzzer on everything they can get their hands on does not mean code will be written to use the new API. I might be wrong but I get the feeling that in many cases libtiff is being dragged along indirectly with a bunch of other dependencies, typically through some flavor of an abstract image handling library. Do you know better from reading the bug reports?
If we're going to have this, I think it should be the default behavior across the board. We should first fix the cases needed by well-formed large files (i.e. progressive allocation for large tags, as you did for the strips). Then simply refuse strips/tiles larger than X (say, 1 GB or even 100 MB) and add a flag to TIFFOpen() to override that for the rare few projects which really need to deal with such files and don't care about the potential consequences. Making it the default forces us to make it work well and ensures it will actually be used. It's also more work, unfortunately.
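To make the proposal concrete, here is a rough sketch of the cap-with-override idea. The limit value, struct, and function are all hypothetical — no such TIFFOpen() flag exists today, and this is not libtiff API:

```c
#include <stdint.h>

/* Hypothetical sketch of the proposed default: reject strips/tiles whose
 * declared size exceeds a fixed limit, unless the caller explicitly
 * opted in (e.g. via an imagined extra TIFFOpen() mode flag). All names
 * here are illustrative. */
#define DEFAULT_STRIP_LIMIT ((uint64_t)100 * 1024 * 1024)  /* 100 MB */

typedef struct {
    int allow_large_strips;  /* set when the override flag was passed */
} OpenOptions;

static int strip_size_acceptable(uint64_t declared, const OpenOptions *opt)
{
    if (declared <= DEFAULT_STRIP_LIMIT)
        return 1;
    /* Over the limit: only the rare projects that really need huge
     * strips, and accept the consequences, get through. */
    return opt->allow_large_strips;
}
```

The point of making the strict behavior the default is that hostile input is rejected without every downstream image library having to know about a new opt-in API.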