AWARE SYSTEMS
TIFF and LibTiff Mail List Archive

Thread

2007.07.12 15:21 "[Tiff] how to rotate an image", by Oliver Geisen
2007.07.12 15:30 "Re: [Tiff] how to rotate an image", by Andy Cave
2007.07.14 12:45 "Re: [Tiff] how to rotate an image", by Oliver Geisen
2007.07.14 13:47 "Re: [Tiff] how to rotate an image", by Andy Cave
2007.07.13 16:19 "[Tiff] Re: Bit shifting and rotating of TIFF images", by Richard Nolde
2007.07.14 12:39 "Re: [Tiff] Re: Bit shifting and rotating of TIFF images", by Oliver Geisen
2007.07.14 16:03 "Re: [Tiff] Re: Bit shifting and rotating of TIFF images", by Bob Friesenhahn
2007.07.16 04:57 "[Tiff] Image Rotation by 180 degrees", by Richard Nolde
2007.07.18 00:33 "Re: [Tiff] Re: Bit shifting and rotating of TIFF images", by Chris Cox
2007.07.17 09:32 "Re: [Tiff] Re: Bit shifting and rotating of TIFF images", by Oliver Geisen
2007.07.17 12:10 "Re: [Tiff] Re: Bit shifting and rotating of TIFF images", by Ron
2007.07.20 04:26 "[Tiff] Bit shifts vs lookup tables", by Richard Nolde

2007.07.18 00:33 "Re: [Tiff] Re: Bit shifting and rotating of TIFF images", by Chris Cox

That could be on an Intel processor with slow shifts (worst on the Pentium 4).

On most processors, the shift would be considerably faster than a memory lookup.

Chris

On 7/17/07 5:10 AM, "Ron" <ron@debian.org> wrote:

> On Tue, Jul 17, 2007 at 11:32:35AM +0200, Oliver Geisen wrote:
>>>> Simulated image of 16300x27501 pixels, bilevel:
>>>>
>>>> Results:
>>>>  * plain reading/writing (no bit-manipulation): 0.294 sec
>>>>  * using bit-shift operator ("<<" resp. ">>"):  1.920 sec
>>>>  * using lookup-table:                          0.380 sec

>>>
>>> That is interesting. It seems that you are right that, for the CPU you
>>> are using, the lookup-table approach is much faster. Maybe it is always
>>> faster.
>> I think this is true for images beyond a specific size (number of pixels).
>

> I suspect you'll find the size (and perhaps even layout) of your
> lookup table is a significant factor. Along with the data you
> look up in it. If the entries you need don't fit in the cache
> that's when the performance penalties kick in.

>
> Also if your algorithmic method isn't the best it can be, and
> not coded in hand optimised (or at least audited) assembler,
> then what you are comparing may not be a 'fair' contest.

>

> I know of at least one fairly recent benchmark where an algorithmic
> method supposedly beats a lookup table, but its implementation also
> relies on fast bit manipulations that are available to the processor,
> but not exposed to people coding in C.

>

> Since your current results are only about 25% slower than a raw copy,
> whether it's worth pursuing that really depends on how big a number
> you are adding 25% to ;-)... and whether your performance figures
> really do stay around this level with varying input data.

>
> Cheers,
> Ron