Moglex
It's apparent from looking at a few of the HDR posts of the last week or so that there are some serious misunderstandings of what HDR actually is, at least as far as photographers use the term. I want to try and give a very simple overview of what HDR attempts to achieve for the photographer wanting to create a screen or paper representation of a scene. I want to try and do this without obfuscating the issue with details of the non-linear response of our eyes and other sensors.
I hope it will stimulate discussion and perhaps others will want to expound on some of the deeper technicalities.
The problem.
When we talk about the range of a photographic film or sensor we are already creating a certain confusion of terminology, because there isn't really a problem with range. If you photograph a scene with very bright elements and completely dark elements, then provided you use somewhere vaguely near the correct exposure, you will end up with a photograph where the bright parts are white and the dark parts are black.
What we are really concerned with is that, because of the limited tonal resolution of the sensor (and, indeed, of the medium on which we view the resulting photograph), we cannot see detail in both the darkest and the lightest areas at the same time.
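To put a rough number on that (the ten-stop scene and purely linear 8-bit encoding here are my own toy assumptions, just for illustration):

```python
# Count how many distinct 8-bit codes fall inside each photographic stop
# of a ten-stop scene, assuming a purely linear encoding where white = 255.
codes_per_stop = []
top = 255.0
for stop in range(10):
    bottom = top / 2                       # one stop down = half the brightness
    # Integer codes c with bottom < c <= top belong to this stop.
    codes_per_stop.append(int(top) - int(bottom))
    top = bottom

print(codes_per_stop)  # [128, 64, 32, 16, 8, 4, 2, 1, 0, 0]
```

Half of all the available codes land in the single brightest stop, while the darkest couple of stops get none at all: that shadow detail simply isn't recorded.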
As I've already said, explaining this properly would involve detailed discussion of the type of non linearity of the sensors and the methods used to overcome the problems this causes, including the possibility of using floating point numbers in the internal representation of the scene.
Instead of that I'd like to try and explain what an HDR photograph on a screen or a piece of paper actually is by showing how you could create an HDR image without the use of anything more than an Instamatic and some glue.
Deep breath!
Imagine that you want to take a photograph of one of those doll's houses where the front opens and you can see all the rooms laid out inside.
Imagine there are nine rooms and that each room is lit differently, from the brightest ballroom in (apparently) even steps down to the darkest, moonlit bedroom.
Your eyes can probably make out the detail in every room all at once, and certainly if you shield them you can see each room clearly, one at a time.
If, however, you take a photograph of the whole house, exposing for the midpoint, the darkest room comes out black and the brightest almost plain white.
So what you do is photograph each room individually, exposing for its own lighting (your Instamatic has unusually close focusing capabilities).
You now send off your cassette to the lab, get back a set of prints, and paste them in the correct positions onto a sheet of paper. Voila! You have a (rather odd) HDR representation of the interior of the doll's house, where you can see the detail of each room just as well as any other, all at the same time.
Of course, this is not what we actually do with photographic HDR as we want to retain an idea of the relative brightness of the rooms.
So now suppose you take the exposures with your digital camera, and this time you print them yourself, with each room a little darker in proportion to its actual illumination. You will have sacrificed some detail, but you now have a pretty much manually produced 'HDR' picture of the doll's house, which you could photograph, compress using the JPEG algorithm (or any other), and display as either a print or a screen image.
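As a toy sketch of that proportional darkening (the nine rooms one stop apart and the compression exponent are my own arbitrary choices):

```python
# Nine rooms, each one stop brighter than the last: an 8-stop span.
true_brightness = [2 ** s for s in range(9)]      # 1x .. 256x

# Printed at its own correct exposure, every room would come out the same
# mid-grey (value 128).  Instead, darken each print by a gentle power of
# the room's real brightness: enough to suggest relative brightness, but
# far less than the full linear ratio, so detail stays visible everywhere.
gamma = 0.3                                       # compression strength (arbitrary)
prints = [round(128 * (b / 256) ** gamma) for b in true_brightness]

print(prints)  # [24, 30, 37, 45, 56, 69, 84, 104, 128]
```

Note the darkest room still prints at 24 rather than the 0.5 a fully linear rendering would give it (128 / 256), so its detail survives while the ordering of the rooms is preserved.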
This is effectively what HDR software does, but at a much finer scale, using an intermediate internal representation of the image with a much greater bit depth than we typically use, followed by a mapping back to a suitable output bit depth.
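For the curious, here is a minimal sketch of that pipeline in Python/NumPy: merge bracketed 8-bit exposures into a floating-point intermediate, then map back down to 8 bits. The scene values, exposure times, and log tone curve are my own toy choices, not any particular program's algorithm:

```python
import numpy as np

# A six-"pixel" scene spanning five orders of magnitude of luminance,
# and three bracketed exposures, each clipped and quantised to 8 bits.
scene = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])
exposure_times = [100.0, 1.0, 0.01]               # longest first

captures = []
for t in exposure_times:
    raw = np.clip(scene * t, 0.0, 1.0)            # sensor saturates at 1.0
    captures.append(np.round(raw * 255) / 255)    # quantise to 8 bits

# Build the high-bit-depth intermediate: for each pixel, take the longest
# exposure that did not clip, and divide by its exposure time.
radiance = np.full_like(scene, np.nan)
for t, cap in zip(exposure_times, captures):
    use = np.isnan(radiance) & (cap < 1.0)
    radiance[use] = cap[use] / t
radiance = np.nan_to_num(radiance, nan=1.0 / min(exposure_times))

# Tone-map back to 8 bits with a log curve, so detail survives in both
# the darkest and the brightest regions at once.
log_r = np.log10(np.maximum(radiance, 1e-6))
mapped = (log_r - log_r.min()) / (log_r.max() - log_r.min())
out = np.round(mapped * 255).astype(np.uint8)

print(out)  # [  0  54 102 156 204 255]
```

A single mid-point exposure of this scene would leave the two darkest pixels at 0 and the two brightest at 255; after the merge and the log mapping, every pixel lands on a distinct, usable output level.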