Understanding HDR from a photographer's perspective.

Moglex

It's apparent from looking at a few of the HDR posts of the last week or so that there are some serious misunderstandings of what HDR actually is, at least as far as photographers use the term. I want to try and give a very simple overview of what HDR attempts to achieve for the photographer wanting to create a screen or paper representation of a scene. I want to try and do this without obfuscating the issue with details of the non-linear response of our eyes and other sensors.

I hope it will stimulate discussion and perhaps others will want to expound on some of the deeper technicalities.

The problem.

When we talk about the range of a photographic film or sensor we are already creating a certain confusion with terminology, because there really isn't a problem with range. If you photograph a scene with very bright elements and completely dark elements, then provided you use something vaguely close to the correct exposure you will end up with a photograph where the bright parts are white and the dark parts are black.

What we are really concerned with is the fact that because of the tonal resolution of the sensor (and, indeed, the medium on which we view the resultant photograph) we cannot see detail in both the darkest and the lightest areas at the same time.

As I've already said, explaining this properly would involve a detailed discussion of the non-linearity of the sensors and the methods used to overcome the problems this causes, including the possibility of using floating-point numbers in the internal representation of the scene.

Instead of that I'd like to try and explain what an HDR photograph on a screen or a piece of paper actually is by showing how you could create an HDR image without the use of anything more than an Instamatic and some glue.

Deep breath!

Imagine that you want to take a photograph of one of those doll's houses where the front opens and you can see all the rooms laid out inside.

Imagine there are nine rooms and that each room is lit differently, from the brightest ballroom in (apparently) even steps down to the darkest, moonlit bedroom.

Your eyes can probably make out the detail in each room all at once, and certainly if you shield them you can see each room clearly, one at a time.

If, however, you take a photograph of the whole house, exposing for the mid point, the darkest room is black and the brightest almost plain white.

So what you do is photograph each room individually, exposing for its own lighting (your Instamatic has unusually close focusing capabilities).

You now send off your cassette to the lab, get back a set of prints, and paste them in the correct positions onto a sheet of paper. Voila! You have a (rather odd) HDR representation of the interior of the doll's house where you can see the detail of each room just as well as any other, all at the same time.

Of course, this is not what we actually do with photographic HDR as we want to retain an idea of the relative brightness of the rooms.

So now suppose you take the exposures with your digital camera and this time print them yourself, making each room a little darker in proportion to its actual illumination. You will have sacrificed some detail but will now have a pretty much manually produced 'HDR' picture of the doll's house, which you can photograph, compress using the JPEG algorithm (or any other), and display as either a print or a screen image.


This is effectively what HDR software does, but at a much finer scale, using an intermediate internal representation of the image that has a much greater bit depth than we typically use, followed by a mapping back to a suitable output bit depth.
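As a rough illustration of that "wide intermediate, then map back" idea, here is a toy sketch in Python. All the names, the one-stop exposure spacing, and the merge-by-averaging are my own simplifying assumptions for illustration; real HDR software uses far more sophisticated weighting and response-curve recovery.

```python
# Toy sketch of the "wide intermediate representation" idea.
# Two 8-bit exposures of the same pixel, one stop apart, are merged
# into a floating-point radiance estimate (the high-bit-depth
# intermediate), which is then mapped back to an 8-bit output value.

def merge_pixel(short_exp, long_exp):
    """Combine a 1x and a 2x exposure of one pixel into a float."""
    estimates = []
    if long_exp < 255:                 # long exposure not burned out
        estimates.append(long_exp / 2.0)
    if short_exp < 255:                # short exposure not burned out
        estimates.append(float(short_exp))
    if not estimates:                  # both clipped: saturate
        return 255.0
    return sum(estimates) / len(estimates)

def map_back(radiance, scene_max):
    """Linearly compress the float intermediate into 0-255."""
    return min(255, int(radiance * 255 / scene_max))

# A pixel reading 100 in the short exposure and 200 in the long one
# merges to a radiance estimate of 100.0.
print(merge_pixel(100, 200))   # 100.0
```

The point of the float intermediate is that it can hold values the 8-bit shots cannot; `map_back` is the crude stand-in for the final mapping to output bit depth.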
 
I think that you need to explain where HDR ends and tone mapping begins - that seems to be the main cause of confusion and based on your post above, you appear to be just as confused as many other people.

What we are really concerned with is the fact that because of the tonal resolution of the sensor (and, indeed, the medium on which we view the resultant photograph) we cannot see detail in both the darkest and the lightest areas at the same time.

I also think that your use of the term 'tonal resolution' is a little misleading - perhaps you need to define it, because it sounds like you are using it for 'dynamic range' - the relative limits of luminance that are recorded. I would interpret 'tonal resolution' to mean the smallest difference in luminance that is recorded or stored, not the overall range.

Best,
Helen
 
I think that you need to explain where HDR ends and tone mapping begins - that seems to be the main cause of confusion and based on your post above, you appear to be just as confused as many other people.

Perhaps you could explain where the doll's house example falls down.

As I cannot see any evidence of confusion that would help to pinpoint it, if the confusion is in my mind rather than yours.

I also think that your use of the term 'tonal resolution' is a little misleading - perhaps you need to define it, because it sounds like you are using it for 'dynamic range' - the relative limits of luminance that are recorded.

Again, perhaps you could explain how you get that from what I wrote as that is precisely what I do not mean.

I'm not quite sure how you can make the jump from 'resolution' to 'range' as the two things are quite different and I risked using a term (resolution) that has an already clearly understood meaning specifically to make that point.

I would interpret 'tonal resolution' to mean the smallest difference in luminance that is recorded or stored, not the overall range.

Well done!

You are 100% correct.

It just remains for you to explain why you thought I meant anything else.
 
I think that you need to explain where HDR ends and tone mapping begins

Best,
Helen

I was trying to avoid getting this technical but as I seem to have confused Helen, I'll have a stab at explaining where this boundary lies as simply as possible.

I'm going to completely ignore non-linearity here as I can guarantee it will confuse the issue for a lot of people.

Suppose you have a sensor that can resolve 100 levels of brightness.

And you have a printer that can display 100 levels of brightness.

And you have a scene that has 150 levels of brightness.

So you take one shot where the bottom 100 levels are resolved and the top 50 are burned out.

You take another shot where the top 100 are resolved and the bottom 50 are all black.

Your software then uses these two images to create an internal representation with the full 150 levels of brightness.

That is the HDR part of the process.

However, as your output device cannot handle all 150 levels, the software now performs some tone mapping. For example, it could simply divide each value by 1.5, resulting in an image that compresses detail from the full brightness range but is displayable on a device that can only resolve 100 levels.
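That two-step process can be sketched directly in code. This is a minimal model of the toy numbers above (100-level sensor and output device, 150-level scene); the function names are my own, and the even 50-level offset between the two exposures is assumed for simplicity.

```python
# Toy model: a 150-level scene (0-149), with a sensor and an output
# device that each resolve only 100 levels (0-99).

def expose(scene, offset):
    """One exposure: shift the scene down by `offset` levels and
    clip to the sensor's 0-99 range."""
    return [max(0, min(99, level - offset)) for level in scene]

def merge_hdr(shot_low, shot_high, offset):
    """HDR step: rebuild the full 0-149 internal representation
    from the two exposures."""
    merged = []
    for lo, hi in zip(shot_low, shot_high):
        if lo < 99:                    # not burned out: trust it
            merged.append(lo)
        else:                          # burned out: use shifted shot
            merged.append(hi + offset)
    return merged

def tone_map(hdr, factor=1.5):
    """Tone-mapping step: compress 150 levels into the 100 the
    output device can handle."""
    return [int(level / factor) for level in hdr]

scene  = [0, 50, 99, 120, 149]        # pixels spanning all 150 levels
shot_a = expose(scene, 0)             # bottom 100 levels resolved
shot_b = expose(scene, 50)            # top 100 levels resolved
hdr    = merge_hdr(shot_a, shot_b, 50)
print(hdr)                            # [0, 50, 99, 120, 149] - full range
print(tone_map(hdr))                  # [0, 33, 66, 80, 99] - displayable
```

Note that `hdr` recovers the full 150-level scene (the HDR part), while `tone_map` throws detail away again so the result fits the output device (the tone-mapping part).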

As I said above, that is an absurdly simplified explanation, mainly because it ignores the non-linear nature of things. The problem with a full explanation is that it requires several pages, and the maths gets quite involved for the non-mathematically inclined. I hope it helped at least someone.
 
So what does all this tell us about making an HDR image from one RAW exposure 'developed' different ways?
 
Perhaps you could explain where the doll's house example falls down.

OK.

So what you do is photograph each room individually, exposing for its own lighting (your Instamatic has unusually close focusing capabilities).

You now send off your cassette to the lab, get back a set of prints, and paste them in the correct positions onto a sheet of paper. Voila! You have a (rather odd) HDR representation of the interior of the doll's house where you can see the detail of each room just as well as any other, all at the same time.
I wouldn't call that an HDR image at all. It has already had a form of tone mapping applied to render it as an LDR image. The information about the total dynamic range has been lost, even though it was made from the total dynamic range. Had each print been illuminated at different, appropriate levels then it would be an HDR image. That's really the key: the HDRI file contains information about the full range of scene luminance.

Of course, this is not what we actually do with photographic HDR as we want to retain an idea of the relative brightness of the rooms.

So now suppose you take the exposures with your digital camera and this time print them yourself, making each room a little darker in proportion to its actual illumination. You will have sacrificed some detail but will now have a pretty much manually produced 'HDR' picture of the doll's house, which you can photograph, compress using the JPEG algorithm (or any other), and display as either a print or a screen image.
That's where I suggest that tone mapping gets mentioned. The HDR image is turned into an LDR image by tone mapping because of the impracticality (but not impossibility) of displaying the HDR image. It's no longer an HDR image, but an LDR image produced from an HDR image by tone mapping. It's often called an HDR image, of course, because many photographers call tone mapping 'HDR'. An HDR-originated image that is shown as an LDR image with simple tonal compression instead of tone mapping would probably not be recognised as an HDR-originated image by many photographers.

What we are really concerned with is the fact that because of the tonal resolution of the sensor (and, indeed, the medium on which we view the resultant photograph) we cannot see detail in both the darkest and the lightest areas at the same time.
That sounds very much like a dynamic range problem, not a tonal resolution problem. If the dynamic range of the sensor was adequate, you would be able to see detail in both the brightest and darkest parts.

Best,
Helen
 
What if you did not tone map the HDR image?

Could you post an example of a before tone-map and after tone-map image please?

Thanks.
 
I wouldn't call that an HDR image at all. It has already had a form of tone mapping applied to render it as an LDR image. The information about the total dynamic range has been lost, even though it was made from the total dynamic range. Had each print been illuminated at different, appropriate levels then it would be an HDR image. That's really the key: the HDRI file contains information about the full range of scene luminance.
All I can really say to that is: If you want to continue to use HDR in a way that is at odds with the way it is generally used photographically, go ahead. No one can stop you.

Do not, however, expect the rest of the photographic world to go along with you. That isn't going to happen. At best you will be considered a tiresome pedant and at worst simply wrong.

That's where I suggest that tone mapping gets mentioned. The HDR image is turned into an LDR image by tone mapping because of the impracticality (but not impossibility) of displaying the HDR image. It's no longer an HDR image, but an LDR image produced from an HDR image by tone mapping. It's often called an HDR image, of course, because many photographers call tone mapping 'HDR'. An HDR-originated image that is shown as an LDR image with simple tonal compression instead of tone mapping would probably not be recognised as an HDR-originated image by many photographers.

I think you are confusing yourself because you still don't seem to understand the difference between range and resolution.

Photographers know what they mean by HDR. People who use computer representations of HDR to do analyses that do not involve physical output of the HDR image mean something else, but this is a photography forum not, for example, a nuclear explosion analysis forum.

That sounds very much like a dynamic range problem, not a tonal resolution problem. If the dynamic range of the sensor was adequate, you would be able to see detail in both the brightest and darkest parts.

LOL.

You've just demonstrated with perfect clarity that you don't understand the relationship between range, resolution, and the ability to record an image!

Clue: what you say above is exactly wrong.

You've further muddled up the distinction between the source range, the recorded range, and the output range.

I would also suggest that you have a bit of a think about exactly what is involved in originating an HDR image from multiple exposures, to get a more holistic view of the whole shebang.
 
What if you did not tone map the HDR image?

Could you post an example of a before tone-map and after tone-map image please?

I'm afraid that wouldn't be very helpful as, whilst you could display the 'after' image, the only way you could display the 'before' image would be to use some process that involved silently dumping the very information that makes the HDR image HDR.
 
All I can really say to that is: If you want to continue to use HDR in a way that is at odds with the way it is generally used photographically, go ahead. No one can stop you.

Do not, however, expect the rest of the photographic world to go along with you. That isn't going to happen. At best you will be considered a tiresome pedant and at worst simply wrong.



I think you are confusing yourself because you still don't seem to understand the difference between range and resolution.

Photographers know what they mean by HDR. People who use computer representations of HDR to do analyses that do not involve physical output of the HDR image mean something else, but this is a photography forum not, for example, a nuclear explosion analysis forum.



LOL.

You've just demonstrated with perfect clarity that you don't understand the relationship between range, resolution, and the ability to record an image!

Clue: what you say above is exactly wrong.

You've further muddled up the distinction between the source range, the recorded range, and the output range.

I would also suggest that you have a bit of a think about exactly what is involved in originating an HDR image from multiple exposures, to get a more holistic view of the whole shebang.

We're going to have to differ on this entire subject because further debate is pointless. I'll stick to my version and I guess that you will stick to yours.

Best,
Helen
 
We're going to have to differ on this entire subject because further debate is pointless. I'll stick to my version and I guess that you will stick to yours.

That's fine.

I'll stick to mine, as will the rest of the photographic community (since it's actually theirs).

I really would recommend that you bone up on what tonal resolution and range really mean and how they interrelate, though. Once you get a proper handle on that things should become a lot clearer. :wink:
 
...

I really would recommend that you bone up on what tonal resolution and range really mean and how they interrelate, though. Once you get a proper handle on that things should become a lot clearer. :wink:

I've stopped arguing with you. Why don't you stop the condescension?

Best,
Helen
 
Guys, this is a classic case of both sides having valid and indeed true information, but you are divided by common language and its uses.

Helen is right that without the full luminance values the tone-mapped output file is technically not a true HDRI.
But... I still maintain my personal view that HDR, as it relates to us photographers (and not computer game designers or graphic artworkers), can be used to describe the tone-mapped representation of the finished HDRI.

How else would we as photographers share our HDR images?... we couldn't.

It would be way too painful to have to go around correcting everyone who displays their HDR images... and to give their images a different name... and what would those images be called? This is where a huge amount of confusion would begin...

A guy uses one RAW to create an 'HDR'... this can be called tone mapping, as all the information used to create the image came from one file...

Another guy uses seven exposures to create an HDR... you're going to tell him his image is merely tone mapped as well? It may (very) technically be true, but if the image is a representation of the actual finished HDRI he has at home (provided he used all the correct methods), then how can it not be called an HDR? It would be very confusing to put this in the same basket as the example above.

Anyway, this is my thinking on the subject, and I have maintained this idea for over two years, since I first started to experiment with HDR... until another term is invented to name a tone-mapped representation of a true HDR, I will continue to use these terms as I see them.

I would always expect, though, to see people with different approaches to language and the use of specific terms disagree... but like you have done, you will just have to agree to disagree... no harm done.
 
I've stopped arguing with you. Why don't you stop the condescension?

I'm sorry. I really wasn't trying to be condescending.

When you posted:

That sounds very much like a dynamic range problem, not a tonal resolution problem. If the dynamic range of the sensor was adequate, you would be able to see detail in both the brightest and darkest parts.

It really did demonstrate that you are confused about the part played by resolution and the part played by range and how they interrelate.

I shall cease trying to discuss the matter with you so as not to cause further offence.
 