Details about RAW format

I get what you're saying, but that is not what I understand the term "dynamic range" to mean. Dynamic range, as I understand it, is the difference between the lightest and the darkest point of an image.

Now it is trivial to convert this to a lower bit depth, as I said, from 12bit to 8bit by dividing by 16. Or, as you said, from 30bit to 8bit by dividing by about 4.2 million. However, the end result is that the brightest point originally recorded would still be the brightest point in the 8bit file, and the darkest point recorded would still be the darkest point in the 8bit file. Thus, assuming nothing more than a gamma correction is done, there is absolutely no loss in dynamic range. There is a huge loss in detail for processing, but there is no loss in dynamic range.
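To make that concrete, here's a rough NumPy sketch (the sample values are made up, it's just the division):

[code]
import numpy as np

# Made-up 12bit sample values (0..4095) standing in for a RAW frame
raw = np.array([0, 7, 250, 3900, 4095], dtype=np.uint16)

# Crude 12bit -> 8bit conversion: integer-divide by 16 (4096 / 256)
eight_bit = (raw // 16).astype(np.uint8)

print(eight_bit)   # [  0   0  15 243 255]
# The darkest recorded value is still the darkest and the brightest is still
# the brightest; what is lost is the fine gradation in between (7 collapses
# onto 0, for example).
[/code]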

Same with your HDR example. RAW 0-255 becomes JPEG 0-255, which contains the lowest values with zero loss of information. RAW 3841-4096 becomes JPEG 0-255 with zero loss of information. Great, all detail preserved, off to tonemapping to bring the result to 8bit, BUT: the whitest point is still RAW 4096, and the darkest is still RAW 0, which could also be reached by simply dividing all values of the RAW file by 16.
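Roughly, in code (I've shifted the second slice to 3840-4095 so both windows are exactly 256 values, and the ramp itself is synthetic):

[code]
import numpy as np

raw = np.arange(4096, dtype=np.uint16)   # synthetic full 12bit ramp

# The two "windowed" conversions above: each 256-value slice of the RAW maps
# onto the full 0-255 JPEG range with no loss inside that slice.
shadows    = raw[:256].astype(np.uint8)             # RAW 0-255     -> JPEG 0-255
highlights = (raw[3840:] - 3840).astype(np.uint8)   # RAW 3840-4095 -> JPEG 0-255

# The straight global division keeps the same overall black and white points,
# it just spends far fewer codes on each region.
global_8bit = (raw // 16).astype(np.uint8)
print(global_8bit.min(), global_8bit.max())   # 0 255 - endpoints unchanged
[/code]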

The difference between white and black points stays the same. Whether or not this result looks good is beyond what I am saying, which is that the dynamic range is still the same providing nothing extra is done to clobber it.

I agree with what you're saying, just not that this is called dynamic range in terms of the range of light a camera can capture. It potentially clashes with "dynamic range" as it is known in digital signal processing, as that definition of dynamic range assumes that each bit records one step of data and thus is directly linked to the bitdepth.


To put it another way:
The bit depth of our cameras is limited by the A/D conversion, not by the sensor, which is what limits the dynamic range of light available. So if an 8bit file describes the differences in range between 0lx and 255lx (pulling these numbers out of thin air), a 12bit RAW will still describe the difference between 0lx and 255lx, it will just give you a lot of intermediate steps in between.
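Or as a quick sketch (the lux figures are the same made-up ones as above):

[code]
import numpy as np

# The same imaginary scene, spanning 0 lx to 255 lx either way
lux = np.linspace(0.0, 255.0, 1000)

# Quantise the identical physical range at two different bit depths
codes_8bit  = np.round(lux / 255.0 * 255.0).astype(int)    # 256 possible steps
codes_12bit = np.round(lux / 255.0 * 4095.0).astype(int)   # 4096 possible steps

# Both encodings still run from 0 lx to 255 lx; the 12bit one just has
# sixteen times finer steps between the same endpoints.
print(codes_8bit.min(), codes_8bit.max(), codes_12bit.min(), codes_12bit.max())
# 0 255 0 4095
[/code]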

I am always interested in being proven wrong. Do you have a link to some resource that says otherwise? I'm extrapolating these ideas from the dictionary definition of "dynamic range" and applying it to the fact that while sensors have gone from 10bit to 14bit, we still have less than one additional stop of data available in modern cameras compared to their 6-year-old predecessors.
 
*blinks* This is almost worse than colour management. :lmao:
 
Hahhah, this is just semantics. I think Stosh and I are stuck on a matter of definition more than anything. He is definitely right by the digital signal processing definition of dynamic range.

Colour management on the other hand is genuinely confusing.
 
To put it another way:
The bit depth of our cameras is limited by the A/D conversion, not by the sensor, which is what limits the dynamic range of light available.

This is definitely 100% correct. (Unless I'm wrong :meh:)

The problem I have is actually finding the relevant data to determine what the dynamic range of the sensor actually is.

My impression, from making tone-mapped HDRs from a single exposure, is that the sensor's dynamic range is higher than that which would give a natural-looking end result on a screen or printer, and thus when the RAW is processed it uses only part of the range of values potentially present.

When you 'manually' process RAW you can move this sub-range which gives you the 'latitude'.

It seems likely that the sensor dynamic range has increased only very slightly over the years and the extra a/d conversion depth is used to get more accurate intermediate tones.
 
I'm gonna hop back in here ^_^

What I was referencing earlier about the camera favoring the brights wasn't wrong, actually: when viewing your JPEG on camera, it crops in favor of the brighter values. Or so say my lab notes. This was specifically for the Canon PowerShot series, so I can't tell you how other cameras work, but because the eye is naturally attracted to the brighter parts of a scene it makes sense to crop off the bottom end and preserve detail in the brights.

This discussion about raw is confusing me quite a bit though, I don't know what is being discussed at all o.o

Your original question was about knowing if raw captured a nice range for you to work with.

I have an answer for this question! And it is a link, because I could explain it, however I'll wait until you click on it.

Dynamic Range: Digital Imaging: Glossary: Learn: Digital Photography Review

Ok! Now that you clicked on it I will sum up in my own words, what they wrote.

What I said before about clipping is in fact the key element to this. RAW does not give you more physical space to work with but rather a more descriptive way to record the same thing. Instead of saying the bit depth is larger, think of it as the bit depth being more descriptive. This is why to do a true-to-life HDR you need 3 exposures, because regardless of whether it is RAW you simply cannot create more dynamic range, end of story. However, what you can do is take one JPEG and use a technique film photographers use to properly expose their photos (ours is just digital and much easier :D ). But this is not expanding the dynamic range in a healthy way; it is the fast food of HDR. That is why it is called the poor man's HDR. The reason JPEGs suck so much is because they are the equivalent of scanning a print into your computer... they are nothing more than pictures.

So, your camera captures the full dynamic range it can every time. By setting your exposure, you are selecting the range. My understanding is that the preview JPEG does in fact have the dynamic range of the original, just not as many steps in between.

Your RAW photos are better because they are digital negatives and can be treated in a similar fashion to film negatives. The limitation is that digital processors are linear, and therefore cannot completely match their analog counterparts.

I'm sorry if I got anything wrong, however this is a lot of information for me to be processing at 5:30 am!! I think I did a decent job though, my hand written notes are HORRID D=!!

**Edit**

I'm gonna keep this post up in case some of it is right xD But looking back I think I confused some things, my apologies :) But maybe something I said is helpful :p
 
I forget what we were talking about too lol.

After reading your posts Garbz and Shmne I think I see where our major difference is. I think you guys are assuming that higher bits means additional steps inside a given dynamic range, and I'm saying that higher bits don't give additional steps, they give you the same steps, but a higher dynamic range. Without doing some homework I don't have any sites to quote to support my opinion. I look at things at the hardware level. If you have more bits on a sensor, doesn't it *define* a higher dynamic range because the pixels can count more photons? The definition of dynamic range from Shmne's site (thank you) is the largest possible signal divided by the smallest possible signal. 8 bit would be 256/1 and 12 bit would be 4096/1. Doesn't this go along with what I'm saying?
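Just to put numbers on that definition (this treats one bit as one doubling of signal, which I realise is exactly the equivalence we're arguing about):

[code]
import math

def dynamic_range(largest, smallest=1):
    """Express a largest/smallest signal ratio as a ratio, in stops and in dB."""
    ratio = largest / smallest
    return ratio, math.log2(ratio), 20 * math.log10(ratio)

for bits in (8, 12):
    ratio, stops, db = dynamic_range(2 ** bits)
    print(f"{bits}-bit: {ratio:.0f}:1 = {stops:.0f} stops = {db:.1f} dB")

# 8-bit: 256:1 = 8 stops = 48.2 dB
# 12-bit: 4096:1 = 12 stops = 72.2 dB
[/code]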

If we take the other side (that higher bit depth means more intermediate steps), I'm not even sure how to represent that on the hardware level. How would the 12 bit sensor collect a half or a quarter of a photon that the 8 bit sensor couldn't? I don't think it's possible (this is assuming they both have the same efficiencies). I don't think there's any way possible to represent an intermediate step when you're counting individual photons. Now like Garbz said we move on to ADC which is another matter completely. And that's where I'm going to claim lots of non-understanding.

After re-reading what I just posted, maybe we're both right? When we see 8 bit or 12 bit cameras or sensors, does that relate to the bucket depth or to the analog to digital conversion? If it's the conversion I think you guys are right. If it's the sensor bucket depth, then I think I'm right.

Just one more thing about the HDR discussion. I can see how you would call it the poor man's HDR, but let's say that down the road they do invent a 14 bit or even 16 bit bucket depth (and let's ASSUME I'm right on the above discussion lol), then each bit higher in bucket depth basically gives you an extra stop in image recording. At some point, with its extreme dynamic range capability, you'd have to agree that it would be the same as taking many 8 bit exposures at many different levels, wouldn't you? But again, if you guys are right on today's usage of "bits", then I can see why you would give me one of "those" looks when I say you can do HDR with a single image!

Same with your HDR example. RAW 0-255 becomes JPEG 0-255, which contains the lowest values with zero loss of information. RAW 3841-4096 becomes JPEG 0-255 with zero loss of information. Great, all detail preserved, off to tonemapping to bring the result to 8bit, BUT: the whitest point is still RAW 4096, and the darkest is still RAW 0, which could also be reached by simply dividing all values of the RAW file by 16.
Now you've completely confused me. This isn't how HDR is done, as far as I'm aware. The reason it's HDR is because it's possible to have values in your final image of 255 that were much, much darker than the brightest pixels of your scene. Let's say there was an area in the original exposure that was in deep shadow. Your #1 JPG would probably show that area perfectly: the darkest areas in those shadows would be near 0 and the lightest areas in that shadow would be 255. But those lightest areas in your RAW were only 255, which when divided by 16 your way would yield a very dark 16. You would create layers, choose which JPGs have properly exposed areas, and combine them that way in your final HDR image. Wouldn't that be the exact same thing as taking 16 different 8 bit images all at different exposures?
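Here's roughly how I picture the layer-combining, as a toy sketch (the weighting and the frames are made up, it's not how any particular HDR program actually does it):

[code]
import numpy as np

def merge_exposures(jpegs, exposure_factors):
    """Very rough HDR merge: map each 8bit JPEG back to scene-referred values
    using its relative exposure, then average only where each frame is
    reasonably well exposed (not clipped at either end)."""
    acc = np.zeros(jpegs[0].shape, dtype=np.float64)
    weight = np.zeros_like(acc)
    for img, factor in zip(jpegs, exposure_factors):
        img = img.astype(np.float64)
        usable = (img > 5) & (img < 250)          # trust only unclipped pixels
        acc += np.where(usable, img / factor, 0)  # undo the exposure difference
        weight += usable
    return acc / np.maximum(weight, 1)

# Usage with placeholder frames: three exposures a stop apart (dark, mid, bright)
dark, mid, bright = (np.random.randint(0, 256, (4, 4), dtype=np.uint8) for _ in range(3))
scene_estimate = merge_exposures([dark, mid, bright], [0.5, 1.0, 2.0])
[/code]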

Like you said Garbz, I think this all boils down to definitions. I need to do some homework. I am really enjoying the discussion though. :clap:
 
...but my question is more like "do you always have the same latitude of + and - on exposure compensation when you're manually processing your own RAW images when comparing to the on-board processed images and/or the histogram?"...

Of course not. You are using different software with different levels of control. In theory they could deliver the same results, but in practice it's almost a certainty that there will be some degree of difference.
 
I forget what we were talking about too lol.

After reading your posts Garbz and Shmne I think I see where our major difference is. I think you guys are assuming that higher bits means additional steps inside a given dynamic range, and I'm saying that higher bits don't give additional steps, they give you the same steps, but a higher dynamic range. Without doing some homework I don't have any sites to quote to support my opinion. I look at things at the hardware level. If you have more bits on a sensor, doesn't it *define* a higher dynamic range because the pixels can count more photons? The definition of dynamic range from Shmne's site (thank you) is the largest possible signal divided by the smallest possible signal. 8 bit would be 256/1 and 12 bit would be 4096/1. Doesn't this go along with what I'm saying?

Firstly, sensors do not count photons. That's a metaphor. Their output is, however, an analogue of the number of photons that have interacted with them.

As Garbz has said, the dynamic range of the sensor is completely independent of the number of bits that an attached a/d converter can provide.

If you have a sensor (no matter what it senses) that has a noise level of 1mV and a maximum output of 1V, then it will have a dynamic range of 1000 to 1.

If you attach an eight bit a/d converter then you will get that dynamic range split across 256 values. If you attach a 16 bit a/d converter you will get exactly the same dynamic range split across 65,536 values.
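Putting numbers on that example (nothing camera-specific, just the arithmetic):

[code]
# Sensor from the example above: 1 mV noise floor, 1 V full scale -> 1000:1
noise_floor_v = 0.001
full_scale_v = 1.0
sensor_dr = full_scale_v / noise_floor_v          # 1000:1, about 10 stops

for adc_bits in (8, 16):
    codes = 2 ** adc_bits
    step_mv = full_scale_v / codes * 1000         # size of one ADC code in mV
    print(f"{adc_bits}-bit a/d: {codes} codes, {step_mv:.3f} mV per code, "
          f"sensor dynamic range still {sensor_dr:.0f}:1")

# 8-bit a/d: 256 codes, 3.906 mV per code, sensor dynamic range still 1000:1
# 16-bit a/d: 65536 codes, 0.015 mV per code, sensor dynamic range still 1000:1
[/code]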

What seems to be very hard to find out is what the actual dynamic range of camera sensors is, and how this relates to the subset of values that can be used at any one time as the RAW is processed (which will be affected by exposure and contrast adjustments as the RAW is 'developed').

If we take the other side (that higher bit depth means more intermediate steps), I'm not even sure how to represent that on the hardware level. How would the 12 bit sensor collect a half or a quarter of a photon that the 8 bit sensor couldn't?

You need to step back from the 'counting photons' metaphor.

After re-reading what I just posted, maybe we're both right? When we see 8 bit or 12 bit cameras or sensors, does that relate to the bucket depth or to the analog to digital conversion? If it's the conversion I think you guys are right. If it's the sensor bucket depth, then I think I'm right.

The number of bits actually refers to the a/d converter.

You cannot measure the actual dynamic range as a number of bits.

Just one more thing about the HDR discussion. I can see how you would call it the poor man's HDR, but let's say that down the road they do invent a 14 bit or even 16 bit bucket depth (and let's ASSUME I'm right on the above discussion lol), then each bit higher in bucket depth basically gives you an extra stop in image recording. At some point, with its extreme dynamic range capability, you'd have to agree that it would be the same as taking many 8 bit exposures at many different levels, wouldn't you? But again, if you guys are right on today's usage of "bits", then I can see why you would give me one of "those" looks when I say you can do HDR with a single image!

Some of us have actually done 'single shot' HDR experiments (which worked) and there are tutorials available (one was mentioned here a few weeks ago).
 
Thanks for adding Moglex. After re-reading everything today I had missed your earlier post. It seems like you're agreeing with me that a RAW has a higher dynamic range than the JPG. I think I now see why Garbz was saying that the JPG should have the same dynamic range. I think he was talking about the JPG image itself, not the dynamic range of the original scene. If your JPG has at least one pixel with a 0 value and at least one pixel with a 255 value, you've got the maximum dynamic range you can possibly have with that image. If the RAW has a 0 value pixel and 4095 value pixel it also has the maximum dynamic range it can have, but I think you and I agree that this dynamic range is a much larger subset (not just more detailed) of the original scene than the JPG would be.

It seems I was wrong with my bit theory. The bits are in the a/d conversion, not the dynamic range (well depth?) of the sensor. I'm glad I learned something new today. I too would like to find where the specs are for these values. But I suspect, just like you said, that the dynamic range hasn't really changed much over the years, probably because there's not much use for it. So what if you can record 20 stops in a single RAW file? If your output JPG displayed that entire range it would be ridiculously pale and bland. Thanks again for your contributions.
 
*patiently waits for HDR printing and viewing techniques*

About this single-image HDR stuff, isn't that more accurately called tone-mapping? You're essentially just bringing up the shadows and bringing down the highlights a bit and boosting the contrast to get more detail out of your RAW. Topaz Adjust does the same thing (and boy does it ever look cool). But that doesn't necessarily equate to a higher dynamic range than you would normally have, right? You're just making better use of what's already there.
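Roughly what I mean, as a toy curve (completely made up, not what Topaz Adjust or any other tool actually does):

[code]
import numpy as np

def lift_shadows_roll_highlights(img8, strength=0.6):
    """Toy single-image 'tone map': a gamma lift for the shadows, a soft
    shoulder for the highlights, then a mild contrast boost. Illustrative only."""
    x = img8.astype(np.float64) / 255.0
    lifted = x ** (1.0 - strength * 0.5)           # gamma < 1 opens up the shadows
    rolled = lifted / (lifted + 0.1) * 1.1         # soft roll-off on the highlights
    contrast = np.clip((rolled - 0.5) * 1.2 + 0.5, 0.0, 1.0)
    return np.round(contrast * 255).astype(np.uint8)

# A plain ramp gets its shadows opened up, but the darkest and brightest
# pixels of the image are still the darkest and brightest pixels.
ramp = np.arange(256, dtype=np.uint8)
mapped = lift_shadows_roll_highlights(ramp)
print(mapped[0], mapped[-1])   # endpoints stay at the extremes (0 and 255)
[/code]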
 
A single exposure HDR is tone mapping, and yes, they can be very successful. HDR is also tone mapping, though; it is an identical process.

From what I've seen with the files I have handled through class, it did not effectively increase dynamic range anywhere near as much as a properly bracketed HDR where you have multiple original exposures. This is because when you let the camera expose for the brights, the darks, and then the neutrals, you have effectively extended your range from complete black to complete white without having a computer do any kind of calculation to manufacture those exposures.

The exercise involved being handed a "neutral" shot to create a single exposure HDR with. Then later we got the other two exposures; just eyeing the histogram you could flip through them and see the jump in difference. Much more information in the proper HDR than in the single exposure.

The rest of the discussion? I'm still confused... I wouldn't mind someone finding a final answer from Canon or Nikon or something xD

As far as printing and viewing...ummm...next poster?
 
Thanks for adding Moglex. After re-reading everything today I had missed your earlier post. It seems like you're agreeing with me that a RAW has a higher dynamic range than the JPG. I think I now see why Garbz was saying that the JPG should have the same dynamic range. I think he was talking about the JPG image itself, not the dynamic range of the original scene. If your JPG has at least one pixel with a 0 value and at least one pixel with a 255 value, you've got the maximum dynamic range you can possibly have with that image. If the RAW has a 0 value pixel and 4095 value pixel it also has the maximum dynamic range it can have, but I think you and I agree that this dynamic range is a much larger subset (not just more detailed) of the original scene than the JPG would be.

I agree with everything except that last part. Why would the dynamic range of the JPEG have to be a subset? I will accept that it may need to be a subset to clear out nonlinearities in what the sensor is recording, but let me put it another way.
In Photoshop: File -> New -> and make a 16bit image. Do a gradient fill from white to black. Now you have a 16bit file with steps going from 0 to 65,535. Then click Image -> Mode -> 8 Bits/Channel, and the result looks the same, but now goes from 0 to 255.
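The same experiment in numbers (a quick NumPy stand-in for those Photoshop steps; the sample count is arbitrary):

[code]
import numpy as np

# A 16bit white-to-black ramp, then the conversion down to 8bit
grad16 = np.linspace(0, 65535, 1024).astype(np.uint16)
grad8 = (grad16 // 257).astype(np.uint8)   # 65535 / 255 = 257

# The brightest and darkest points survive the conversion unchanged;
# only the number of distinct steps in between drops.
print(grad16.min(), grad16.max(), grad8.min(), grad8.max())   # 0 65535 0 255
print(len(np.unique(grad16)), len(np.unique(grad8)))          # 1024 256
[/code]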

So while in signal processing "dynamic range" is used for the size of the set of possible values (which is larger in higher-bit files), in photography terms the "dynamic range" of a picture is the difference between the brightest point and the darkest point (which is the same for both the 8 and 16bit gradients above). This leads me to believe an appropriately processed JPG would look identical to the RAW. It may not look as good as using a subset, but the full visual dynamic range of what's converted by the A/D converter fits into an 8bit file.

To throw in more food for thought, Sony cameras reduce the A/D converter bit depth and processing bit depth when shooting at 8fps, but the dynamic range in the images remains the same as when shooting at the A/D converter's full bit depth.

eh I'm confusing myself now.

Sorry to totally derail this thread.
 
About this single-image HDR stuff, isn't that more accurately called tone-mapping?

At the moment, any external representation of an HDR image is tone mapped. There is no currently available consumer technology to display HDR without compressing the image tonewise.

You're essentially just bringing up the shadows and bringing down the highlights a bit and boosting the contrast to get more detail out of your RAW. Topaz Adjust does the same thing (and boy does it ever look cool). But that doesn't necessarily equate to a higher dynamic range than you would normally have, right? You're just making better use of what's already there.

Not necessarily.

It depends on the actual dynamic range of the sensor; not the number of bits that the attached a/d converter can provide.

(Ignoring gamma correction to keep things simple - obviously the real situation is even more complex)

If the sensor has a dynamic range of 1000 to 1 then in order to display the image it creates on a monitor with a dynamic range of 256 to 1 you will need to discard data at either the top or bottom of the range (or both).

When you adjust the exposure during RAW development you move the part of the sensor's range that you are using.

Thus, if you develop the RAW using different exposure adjustments and combine these using suitable software you can actually extract a greater dynamic range than normal (which, of course, will have to be tone mapped for viewing).
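A crude simulation of that idea (toy numbers, a made-up 1000:1 'sensor', and a deliberately naive development step):

[code]
import numpy as np

# Toy scene from a hypothetical 1000:1 sensor: linear values from 1 to 1000
scene = np.geomspace(1.0, 1000.0, 64)

def develop(signal, ev, full_scale=1000.0):
    """Crude stand-in for RAW 'development': shift the exposure by `ev` stops,
    map the sensor's full scale onto 8 bits, and clip."""
    out = signal * 2.0 ** ev / full_scale * 255.0
    return np.round(np.clip(out, 0, 255)).astype(np.uint8)

plus = develop(scene, +2)    # shadows become visible, bright values clip to 255
minus = develop(scene, -2)   # highlights survive, deep shadows round down to 0

# Each single 8bit rendering loses one end of the sensor's range...
print((plus == 255).sum(), (minus == 0).sum())
# ...so merging differently-developed versions (as HDR software would, with
# proper weighting) covers more of the sensor's range than either one alone.
[/code]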

What nobody seems to know, and despite a lot of searching I have been unable to find out, is what the dynamic range of the sensor in a DSLR actually is.

When I get the time I may find out experimentally.
 
I agree with everything except that last part. Why would the dynamic range of the JPEG have to be a subset?

I agree. It doesn't.

To simplify things to the ultimate level, the dynamic range of an image on white paper is related to the relative reflectivity of the deepest black and the plain white of the paper.

Now, if you use a thick black marker, you have effectively a 1 bit display.

Using any number of other techniques you can get a vast number of shades thus achieving a display of many bits depth. The dynamic range, however, remains the same.
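To put the analogy in numbers (made-up reflectances, obviously):

[code]
import numpy as np

# The 'page': reflectance from the deepest black ink (0.04) to plain paper (0.90)
reflectance = np.linspace(0.04, 0.90, 100)

def quantise(values, bits):
    """Snap continuous reflectances onto 2**bits evenly spaced shades between
    the page's black and its white - more bits means more shades, nothing else."""
    levels = 2 ** bits
    lo, hi = values.min(), values.max()
    idx = np.round((values - lo) / (hi - lo) * (levels - 1))
    return lo + idx / (levels - 1) * (hi - lo)

marker = quantise(reflectance, 1)   # thick black marker: two shades
fine = quantise(reflectance, 8)     # 256 possible shades

# Both renderings span the same black-to-white range of the paper (~0.04 to
# ~0.90); only the number of distinguishable shades in between differs.
print(marker.min(), marker.max(), fine.min(), fine.max())
print(len(np.unique(marker)), len(np.unique(fine)))
[/code]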
 
:waiting:


:banghead: My head.
 
