Garbz
I get what you're saying, but that is not what I understand the term "dynamic range" to mean. As I understand it, dynamic range is the difference between the lightest and the darkest point of an image.
Now it is trivial to convert this to a lower bit depth: as I said, from 12-bit to 8-bit by dividing by 16, or, as you said, from 30-bit to 8-bit by dividing by about 4.2 million. However, the end result is that the brightest point originally recorded is still the brightest point in the 8-bit file, and the darkest point recorded is still the darkest point in the 8-bit file. Thus, assuming nothing more than a gamma correction is done, there is absolutely no loss in dynamic range. There is a huge loss in detail for processing, but no loss in dynamic range.
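A quick sketch of what I mean, with made-up 12-bit sample values: dividing by 16 keeps the brightest point brightest and the darkest point darkest, so the span between them is untouched.

```python
# Hypothetical 12-bit samples, darkest to brightest.
raw_12bit = [0, 512, 2048, 4095]

# Integer-divide by 16 to get 8-bit values (4096 / 256 = 16).
jpeg_8bit = [v // 16 for v in raw_12bit]

print(jpeg_8bit)  # [0, 32, 128, 255]
# The endpoints still map to the endpoints: 0 -> 0 and 4095 -> 255.
# What is lost is the fine gradation in between (16 RAW values collapse
# onto each JPEG step), not the distance between white and black points.
```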
Same with your HDR example. RAW 0-255 becomes JPEG 0-255, which contains the lowest values with zero loss of information, and RAW 3840-4095 becomes JPEG 0-255, also with zero loss of information. Great, all detail is preserved going into the tonemapping that brings the result down to 8-bit. BUT: the whitest point is still RAW 4095 and the darkest is still RAW 0, which could also be achieved by simply dividing all values in the RAW file by 16.
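To spell that out (using invented 12-bit values; note the 12-bit ceiling is 4095, so I'm taking 3840-4095 as the top slice): each exposure preserves its slice perfectly, but the endpoints of the scene don't move.

```python
# Exposure A keeps the shadows: RAW 0..255 copied straight to JPEG 0..255.
shadows = list(range(0, 256))
jpeg_a = shadows  # identity mapping: no information lost

# Exposure B keeps the highlights: RAW 3840..4095 shifted down to 0..255.
highlights = list(range(3840, 4096))
jpeg_b = [v - 3840 for v in highlights]  # offset mapping: also lossless

# Both JPEGs are lossless over their own slice, but the extremes of the
# scene are unchanged: the darkest sample is still RAW 0 and the brightest
# is still RAW 4095 -- the same endpoints a plain divide-by-16 would keep.
print(jpeg_a[0], jpeg_a[-1], jpeg_b[0], jpeg_b[-1])  # 0 255 0 255
```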
The difference between the white and black points stays the same. Whether or not the result looks good is beside my point, which is that the dynamic range is still the same, provided nothing extra is done to clobber it.
I agree with what you're saying, just not that this is called dynamic range in terms of the range of light a camera can capture. It potentially clashes with "dynamic range" as the term is used in digital signal processing, since that definition assumes each bit records one step of data and is therefore directly linked to bit depth.
To put it another way:
The bit depth of our cameras is limited by the A/D conversion, not by the sensor, which is what limits the dynamic range of light available. So if an 8-bit file describes the range between 0 lx and 255 lx (pulling these numbers out of thin air), a 12-bit RAW will still describe the range between 0 lx and 255 lx; it just gives you a lot of decimal places in between.
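In numbers (still using my invented 0-255 lx range): both files span the identical range of light, and the only thing the extra bits buy is a finer step size.

```python
# Same hypothetical 0-255 lx range, two different bit depths.
full_range_lx = 255.0

step_8bit = full_range_lx / (2**8 - 1)    # lx per code value at 8 bits
step_12bit = full_range_lx / (2**12 - 1)  # lx per code value at 12 bits

# The range spanned is identical; the 12-bit file just slices it into
# roughly 16x finer steps (~1 lx vs ~0.06 lx per step).
print(round(step_8bit, 4), round(step_12bit, 4))  # 1.0 0.0623
```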
I am always interested in being proven wrong. Do you have a link to a resource that says otherwise? I'm extrapolating these ideas from the dictionary definition of "dynamic range" and from the fact that while we have gone from 10-bit to 14-bit sensors, modern cameras still offer less than one additional stop of data compared to their six-year-old predecessors.