2001.12.31 04:15 "Partial Extraction of a Scanline?", by Bad Badtz

2002.01.02 17:25 "Re: Partial Extraction of a Scanline?", by Peter Montgomery

Bad,

Take, for example, a case where I would like to view part of a striped image with a scanline size of 32,000 pixels on a 1024 by 768 screen.

Using ReadScanline returns all 32,000 pixels when in fact only 1024 pixels can be seen on screen, so the other 30,000+ pixels are of no use to me. Wouldn't it be better if I could get just those 1024 pixels instead of the whole scanline? How should I go about modifying the code to read a partial scanline? Please enlighten me!

I'm afraid I don't believe the library is set up that way. Remember, the library has to provide consistent access to the underlying data no matter how it's stored, and to that end some compromises will always have to be made. For example, suppose you have a TIFF with LZW compression. Pretty much the only way to get to the middle of a compressed scanline (and this goes for most if not all compression schemes) is to start at one end and work your way in; it's almost impossible to figure out what constitutes "the middle" of a scanline while it is still in compressed form. So the library could hand you partial scanlines from uncompressed images but not from compressed ones, which would mean inconsistent access to the data and would not make for a very good library.

This brings us back to your original problem. My first question is: have you tried it yet, or is this still a theoretical problem? It sounds like a theoretical problem, where you believe that performance will be unacceptable or the code "won't feel right" if you write the obvious solution. If that is the case, then my advice is to write the obvious solution first and then see whether it really is unacceptable. A common mistake programmers make is to optimize too early. I may be getting off subject here, but if you haven't read "Code Complete" by Steve McConnell, I recommend you do so right away. He goes into much greater detail than I can in a posting as to why early optimization is a bad idea.
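To make the "obvious solution" concrete, here's a rough, untested sketch of what it looks like with the standard TiffLib scanline call: read the whole row, then keep only the slice you can display. The names xOffset, viewWidth, and dest are just placeholders for whatever your viewer tracks, and it assumes 8-bit samples:

#include <string.h>
#include <tiffio.h>

/* Sketch only: read one full scanline, then keep the visible window.
 * Assumes 8-bit samples; xOffset/viewWidth/dest are made-up names for
 * the on-screen window and its destination buffer. */
int readVisiblePixels(TIFF *tif, uint32 row, uint32 xOffset,
                      uint32 viewWidth, unsigned char *dest)
{
    tsize_t lineSize = TIFFScanlineSize(tif);   /* bytes in a full row */
    unsigned char *line = (unsigned char *) _TIFFmalloc(lineSize);
    uint16 spp = 1;

    if (line == NULL)
        return -1;
    TIFFGetField(tif, TIFFTAG_SAMPLESPERPIXEL, &spp);

    /* The library decompresses and hands back the entire row... */
    if (TIFFReadScanline(tif, line, row, 0) < 0) {
        _TIFFfree(line);
        return -1;
    }

    /* ...and we simply throw away everything outside the window. */
    memcpy(dest, line + (tsize_t) xOffset * spp,
           (tsize_t) viewWidth * spp);
    _TIFFfree(line);
    return 0;
}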

So, it seems you're pretty limited in your options. You can:

  1. Use only uncompressed images and write your own library to read them (or possibly modify TiffLib).
  2. Use only tiled images, plus TiffLib and some glue routines, to read partial scanlines (a sketch of the glue follows this list).
  3. Use any style image and accept that you'll have to read the entire scanline.
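For option 2, the glue is roughly the following. Again this is only an untested sketch, assuming an 8-bit, single-sample image; the point is that only the tiles overlapping the visible window ever get decoded:

#include <string.h>
#include <tiffio.h>

/* Sketch of option 2: assemble a partial scanline from a tiled TIFF.
 * Assumes 8-bit, single-sample pixels; xOffset/viewWidth/dest are
 * made-up names for the on-screen window. */
int readPartialRowFromTiles(TIFF *tif, uint32 row, uint32 xOffset,
                            uint32 viewWidth, unsigned char *dest)
{
    uint32 tileWidth, tileLength, x;
    unsigned char *tileBuf;

    if (!TIFFIsTiled(tif))
        return -1;
    TIFFGetField(tif, TIFFTAG_TILEWIDTH, &tileWidth);
    TIFFGetField(tif, TIFFTAG_TILELENGTH, &tileLength);

    tileBuf = (unsigned char *) _TIFFmalloc(TIFFTileSize(tif));
    if (tileBuf == NULL)
        return -1;

    /* Walk across just the tiles that cover [xOffset, xOffset + viewWidth). */
    for (x = xOffset; x < xOffset + viewWidth; ) {
        uint32 inTileX = x % tileWidth;          /* column within this tile  */
        uint32 count = tileWidth - inTileX;      /* pixels available in tile */

        if (count > xOffset + viewWidth - x)
            count = xOffset + viewWidth - x;

        /* TIFFReadTile decodes the whole tile containing pixel (x, row). */
        if (TIFFReadTile(tif, tileBuf, x, row, 0, 0) < 0) {
            _TIFFfree(tileBuf);
            return -1;
        }

        /* Copy the slice of the tile's row that falls inside the window. */
        memcpy(dest + (x - xOffset),
               tileBuf + (row % tileLength) * tileWidth + inTileX,
               count);
        x += count;
    }
    _TIFFfree(tileBuf);
    return 0;
}

The win is that a 32,000 pixel row stored as, say, 256 by 256 tiles only costs you the four or five tiles under a 1024 pixel window instead of the whole row.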

I wish I could offer a better solution, but those are pretty much the options I see. Perhaps someone else on the list can offer a solution I am not seeing. In the meantime, my advice is to write your app with the existing TiffLib scanline routines and see if it works. I would think that with 32,000 pixel wide images, blazing speed isn't a realistic expectation anyway. Furthermore, if you are writing an image viewer, I would assume that someone will want to scroll around the image anyway.

If that's the case, then reading only what you can see from disk would be pretty pokey performance-wise. I would read the full width of the image plus a little buffer of rows above and below. That way, the user could quickly scroll horizontally (since the data would already be in RAM), and only vertical scrolling beyond that little bit of padding would require disk access. This is a good compromise between speed and memory usage when handling large images.
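Something like this is what I have in mind, again as a rough untested sketch; pad, firstVisibleRow, and the rest are just placeholders for whatever your viewer tracks:

#include <tiffio.h>

/* Sketch of the row-band cache: read the full width of the image for the
 * visible rows plus 'pad' rows above and below, so horizontal scrolling
 * and small vertical moves never touch the disk.  All the names here are
 * illustrative, and error handling is minimal. */
unsigned char *readRowBand(TIFF *tif, uint32 firstVisibleRow,
                           uint32 visibleRows, uint32 pad,
                           uint32 *bandStart, uint32 *bandRows)
{
    tsize_t lineSize = TIFFScanlineSize(tif);   /* bytes in a full-width row */
    uint32 imageLength, start, stop, row;
    unsigned char *bandBuf;

    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &imageLength);

    start = (firstVisibleRow > pad) ? firstVisibleRow - pad : 0;
    stop = firstVisibleRow + visibleRows + pad;
    if (stop > imageLength)
        stop = imageLength;

    bandBuf = (unsigned char *) _TIFFmalloc((tsize_t)(stop - start) * lineSize);
    if (bandBuf == NULL)
        return NULL;

    /* Full-width scanlines: cheap horizontal scrolling at the cost of RAM. */
    for (row = start; row < stop; row++)
        if (TIFFReadScanline(tif, bandBuf + (tsize_t)(row - start) * lineSize,
                             row, 0) < 0)
            break;

    *bandStart = start;
    *bandRows = stop - start;
    return bandBuf;
}

For a 32,000 pixel RGB row that's roughly 96 KB per scanline, so even a couple hundred rows of padding stays in the tens of megabytes.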

Thanks,
PeterM