Details about RAW format

Stosh

I am very familiar with what RAW format is in general terms: simply what the CCD or CMOS sensor recorded, which has higher dynamic range than the processed final picture file. My question is more specific: how is the camera's own processed picture made from the RAW?

Is the (on-board) processed image always "centered" in the dynamic range of the RAW image? In other words, will I always have latitude to adjust *my* processed images both lighter and darker? Or does the on-board image-processing chip decide where in the dynamic range to place the final image based on its analysis?

The reason I'm asking: when I take a pic and then review it on the on-board LCD, say it's one of those difficult backlit shots where you have some underexposed areas in the foreground, yet the sky has some washed-out highlights too. I'm assuming the LCD is displaying a "processed" image and not a RAW one. How do I know if the RAW recorded a wide enough dynamic range to get my own nice processed shot?
 
This is where you need to be able to read a histogram.
It will tell you everything you need to know.

The preview is nothing more than a JPEG file your camera creates for display; it does not contain anywhere near the full data. That is where the histogram comes in. If you can read a histogram you will know where your exposure is and whether you can recover it. Also, if your camera has an RGB histogram, you can look per channel and evaluate your exposure that way.
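If you want to poke at the per-channel idea on the computer side, here's a minimal sketch in Python (NumPy and Pillow assumed; the file name is just a placeholder) of checking each channel for blown pixels:

```python
# A minimal sketch, assuming an 8-bit RGB image loaded with Pillow and NumPy:
# build a per-channel histogram and report what fraction of pixels are clipped.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg"))  # hypothetical file name

for i, channel in enumerate("RGB"):
    values = img[..., i]
    hist, _ = np.histogram(values, bins=256, range=(0, 256))
    clipped = hist[255] / values.size * 100
    print(f"{channel}: {clipped:.2f}% of pixels at 255 (blown)")
```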
 
Yup, it displays a processed image; really the JPEG preview of the RAW file. As for knowing if the RAW has a wide enough range to save your butt, well, that has always involved some guesswork for me. You can get about one stop of highlight recovery and three from the shadows, sorta, ish... kinda. It all depends on the sensor.

Note the histogram is also displaying the histo of the JPEG preview. ;)
 
...My question is more specific as to how the camera's own processed picture is made from the RAW.

The camera's processed image, which is used for the display and possibly the histogram, is processed using settings in the camera. How these relate, in detail, to the actual data that can be gleaned by the RAW converter on your PC will vary from camera to camera and from RAW converter to RAW converter.

The camera's displayed image is, of course, NOT a JPEG. JPEG images, as such, exist only as data files and can't be displayed directly by any graphics software. In order to display an image, the running software must convert the file format into the display format used by the computer's display device. These in-memory bitmap formats are uncompressed, and their dynamic range is limited by the display hardware and its driver software. This is true regardless of whether the computer is a standalone personal computer or an embedded device in a camera, cell phone, or whatever.
 
SpeetTrap: Yes, I do know how to read a histogram (and that's extremely valuable), but it always seemed like the histogram represented the preview image I was looking at, not necessarily the RAW image. My "review mode" always shows the histogram, not just the image. MusicaleCA seems to agree (that the histogram represents the "processed" image). I did scan through the 5D Mk II manual and couldn't find specific details on the histogram. MusicaleCA: do you know this for certain, or are you assuming as I am?

Dwig: I agree with you 100% that each piece of display hardware needs to convert to its own hardware-dependent display signal, but I'm not sure how that's relevant to my question. The first half of your response confirms that the displayed review image is an on-board processed one, and the histogram may or may not be, but my question is more like: do you always have the same latitude of + and - exposure compensation when manually processing your own RAW images, compared to the on-board processed images and/or the histogram?

For instance, if I see just a few pixels flashing (indicating that they're blown out), and the histogram and preview image are the same, I should still be fine: with a stop of latitude I should be able to make a perfect, unclipped processed image. Obviously if half the picture is flashing white, you have no idea how many stops over the clip level you are.
 
...MusicaleCA: do you know this for certain, or are you assuming as I am?

It is fact. This is the same for all cameras that shoot RAW, not just Canons. The histograms are based on the JPG, not the RAW data file.

Besides, if you understand your camera, these small differences become moot. For example, on Nikon D700s the reds come in a touch too hot, so I have a preset that is applied to all images as they are imported that corrects that for me. Skin tones come out WAY nicer.

Now, I did not base this correction on the histogram (nor the JPG thumbnail), but on the RAW preview on a calibrated monitor.

As far as trusting the histogram, well, I trust it a LOT more than I trust the JPG preview, but I do not trust histograms 100%. That said, if I do see blinking in my highlights in the luminance histogram, I do back down on things because they *are* there... but I trust the RGB histograms MORE than the luminance histogram alone. If the camera is showing blown-out areas, it is up to the photographer to KNOW what he is shooting and whether that blown-out area was done on purpose (e.g. shooting a back-lit person, you WILL blow out the background, but get the proper exposure on your subject's face).

In the end, it is a LOT easier for me to recover detail from something that is not blown out than from something that is. Once blown out, data is lost and unrecoverable. The very nature of using a RAW file minimizes that, but only so much (it's camera dependent; some cameras have greater dynamic range than others)... it helps a lot more for bringing out underexposed areas than for recovering areas that are blown out completely. Once that happens, there is nothing you can do; the data is forever gone.
 
Wowowo. RAW does not have greater dynamic range if the image is properly converted to JPEG. I'd worry if your camera's default action were to clobber the available dynamic range of an image; after all, extended dynamic range is the holy grail camera manufacturers are chasing.

What the RAW has is extra bit depth (more information describing the existing dynamic range). What RAW converters have are intelligent highlight-recovery techniques which trick you into thinking you've pulled more dynamic range out of your picture, when in fact it is only a calculated guess at what the blown details looked like. Now, there's nothing saying your camera doesn't try to do something similar.

The camera displays the JPEG conversion of the RAW file with the current settings. If you want to see whether you have clipped the RED, GREEN, or BLUE channels even in the slightest, then go out and, while shooting RAW, set your contrast and saturation as low as they will go. This will be the closest approximation to the linear data captured by the sensor. It'll look like crap, but your RAW converter should apply its own settings anyway.
 
...If you want to see whether you have clipped the RED, GREEN, or BLUE channels even in the slightest, then while shooting RAW set your contrast and saturation as low as they will go...

Ah, yes. That'd be how I shoot all the time. Don't want my camera futzing with the output and confuddling me.

But... when we're looking at that image (or similarly the preview of a RAW file in <PP program of choice>), isn't there part of the dynamic range that's simply not being displayed? I mean, where does that extra information come from, then, for highlight recovery?
 
Wowowo. RAW does not have greater dynamic range if the image is properly converted to JPEG. I'd worry if your camera's default action were to clobber the available dynamic range of an image; after all, extended dynamic range is the holy grail camera manufacturers are chasing.

I see what you're getting at here, but I'm not sure I agree. This may come down to semantics. I guess I don't see how an image can be "properly converted to JPEG" and still contain the full dynamic range of the RAW. This can only be done using HDR techniques. Like you say, if you crank down the contrast and saturation (which is a VERY good idea for histogram review, so THANKS!), the JPG may contain close to or even all of the dynamic range of the RAW, but if it looks like crap, how can you say that's "properly converted"? A JPG with "normal" looking contrast and saturation can never show the entire dynamic range of a RAW without HDR-type techniques, or at least not until they create a JPG format with higher bit depth, and then monitors to display such things.
If you properly use the extra bit depth that the RAW format gives you with the original exposure, then by resampling down to JPG's bit depth you're going to lose something: either darks or brights with a "normal" contrast curve, or the contrast itself if you choose not to clip either end.

Ah, yes. That'd be how I shoot all the time. Don't want my camera futzing with the output and confuddling me.
I really like this idea, but don't you ever go somewhere with the family or on vacation and shoot hundreds of snapshots? I can't fathom individually post-processing all those shots. Possibly there's some kind of batch processing that makes this easier? I only PP the ones I'm going to display, print, enlarge, etc. Maybe I'll start using the user functions: shoot "normal" mode JPGs for snapshots, and switch to a super-low contrast and saturation mode for better histogram review when shooting possible keepers.
 
That is where a program like Lightroom rules. Filter out the keepers, make global adjustments in groups of shots... make touch-ups here and there on the odd shot that is off... export... done.

I can do 700-800 images in a single evening as long as I am consistent in my shooting and use RAW to give the best final results to start from. If I start from JPG, the process is very similar, but the number of keepers is always a lot lower. The only time I see JPGs anymore is on export and when viewing final results. Even for small, minor changes, the results turn out better to my eyes than making the same adjustments on a large/fine JPG.

RAW offers me no downsides of concern, so I use it exclusively.
 
Jerry, I assume that isn't 700-800 keepers, right? Otherwise I will feel very inferior. :greenpbl: (I can easily filter through 2k images in an evening after a full day of shooting, but I might only bother to process about a hundred of those fully. Meh...that's where my amateurness still really shows.)

...don't you ever go somewhere with the family or on vacation and shoot hundreds of snapshots? ...Possibly there's some kind of batch processing that makes this easier?

As Jerry said, it's pretty easy with LR. I'll admit I've shot straight JPEG a few times, but only for trivial things like creating a visual record of a piece of real estate I'm moving into (not real estate photography :greenpbl: ), or things like that, where if the WB is a little off, I really just don't care.
 
I see what you're getting at here, but I'm not sure I agree... I guess I don't see how an image can be "properly converted to JPEG" and still contain the full dynamic range of the RAW...

Think of it this way. The "dynamic range" is the brightest point minus the darkest point. When you take a RAW photo on a 12-bit sensor, the brightest point has a value of 4095 and the darkest point has a value of 0. This data is linear, which sucks because our eyes perceive light logarithmically. When converting to JPEG, all that is done to this RAW data is that a gamma curve is applied and the data values are literally divided by 16 (i.e. the least significant 4 bits are removed). The gamma correction curve looks like this:
[Gamma correction curve: output rises steeply from (0,0), then flattens toward (1,1). Image source: Wikipedia]

The critical thing to note here is that input is on the horizontal axis and output is on the vertical axis. The gamma correction curve goes from (0,0) to (1,1), with intermediate values in between. The implication is that a value which was previously 0 will stay 0, and a value that was previously 4095 will stay at 4095; i.e. no dynamic range is lost in this conversion.
The final step of the process divides everything by 16, so the maximum brightness value the sensor picked up, 4095, becomes 255, the maximum brightness point of the 8-bit file. The same happens for the 0 point, which stays 0.

From this the conclusion is that while data is rounded off and lost, this data is not visible. The difference between 255 and 254 is very difficult to see even on a perfectly calibrated monitor; the difference between 4095 and 4094, assuming we had such a screen, would be below human perception. Where the lost data does matter is in calculations such as gamma correction or increasing contrast, which can pull detail out of what would otherwise be rounding errors,

but the take-home message is: assuming your camera doesn't screw with the image to "pretty it up" by cranking the brightness, contrast, or saturation, the brightest point the sensor recorded will have a final value of 255 when converted to JPEG, meaning the dynamic range is the same!
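For anyone who wants to see that conversion concretely, here's a rough sketch in Python/NumPy. The 2.2 gamma is just a common assumption, not necessarily any particular camera's curve, and the normalize-and-scale step is equivalent (modulo rounding) to the divide-by-16 described above:

```python
# A rough sketch of the 12-bit -> 8-bit conversion: gamma-correct the linear
# data, then scale down to 8 bits. Gamma 2.2 is an assumed, typical value.
import numpy as np

raw = np.array([0, 1024, 2048, 4095], dtype=np.float64)  # linear 12-bit samples

normalized = raw / 4095.0                  # map 0..4095 onto 0..1
gamma_corrected = normalized ** (1 / 2.2)  # the curve: 0 stays 0, 1 stays 1
jpeg8 = np.round(gamma_corrected * 255).astype(np.uint8)

print(jpeg8)  # [  0 136 186 255] -- darkest still 0, brightest still 255
```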


I saw you mentioned HDR. HDR requires more than one exposure, each at a different level, to work. What I think you are talking about is the "tonemapping" process. It's not that the monitor can't display the bit depth; it's that tonemapping a 12-bit file pulls microcontrast out of shadows that would have been clobbered by rounding errors if converted to 8 bits first. There is no extra dynamic range, and the white and dark points stay the same. It just appears as though there's extra dynamic range because you've preserved more detail from rounding errors by playing with local contrast.

But... when we're looking at that image (or similarly the preview of a RAW file in <PP program of choice>), isn't there part of the dynamic range that's simply not being displayed? I mean, where does that extra information come from, then, for highlight recovery?

Assuming you're already drawing the maximum dynamic range from the RAW, it is literally assumed. There really isn't extra dynamic range when you do recovery, and the detail never comes out quite right compared to an image that simply had a lower exposure. Consider a JPEG that has been converted from the RAW. The JPEG shows 255, 255, 255 (apparently clipped on all channels), but the 12-bit RAW data may actually look like 4095, 4093, 4088, where only one channel is truly clipped. Rounding errors make the content display as clipped, and we might not even be able to tell the difference if we had a 12-bit display, but software can do wonders with these differences.

Some maths:

Convert 4095, 4093, 4088 to 8 bits: 255, 255, 255 after rounding errors.
Let's do some highlight recovery:
Increase contrast? We can't; everything is the same value. Reduce brightness by 10%: 230, 230, 230. Still clipped, no detail, just darker than white.

Now let's increase contrast on the original instead: that gives 4095 (max value), 4064 (assumed), 4003 (calculated linearly). Reduce the brightness by 10%: 3686, 3658, 3603.
Convert to 8 bits: 230, 229, 225. Voilà, DETAIL!

By doing the calculations at 12 bits you have more data to work with, and so you can pull little stunts like this that would have been rounded out if you were working directly with 8-bit data. But note that there is no additional dynamic range here to begin with: 255 = 4095 is the maximum brightness and 0 = 0 is the maximum darkness, and those two points define the dynamic range. This is purely pulling data out of details that would be lost to rounding error if converted to 8 bits first.
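Here's that arithmetic as a quick NumPy sketch. The contrast step above was hand-waved, so the code uses a crude stand-in (stretching distances from the white point by 10x); the numbers come out slightly different, but the effect is the same:

```python
# A sketch of the rounding-error argument: working at 12 bits preserves tiny
# highlight differences that vanish if you convert to 8 bits first. The 10x
# "contrast boost" is an assumed stand-in for a real highlight-contrast curve.
import numpy as np

raw12 = np.array([4095.0, 4093.0, 4088.0])  # near-clipped 12-bit highlights

def recover(values, white):
    boosted = white - (white - values) * 10  # exaggerate highlight differences
    return boosted * 0.9                     # reduce brightness by 10%

# Work at 12 bits, convert to 8 bits last:
print(np.floor(recover(raw12, 4095) / 16))  # -> [230. 229. 226.]  detail!

# Convert to 8 bits first, then do the same operations:
raw8 = np.floor(raw12 / 16)                 # -> [255. 255. 255.]  already merged
print(np.round(recover(raw8, 255)))         # -> [230. 230. 230.]  no detail
```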




I'd like to think this post is 100% right, but it was a long post, and it's a weekend so please don't eat me if I've missed something.
 
Perfectly explained Garbz, I couldn't have said it better.

JerryPH, it is my experience that because the human eye sees more highlight values than shadow values, most sensors put more information in the brighter values. Meaning it would be easier to pull back an overexposed image than an underexposed one; in either case, if you clip, you lose it. It was just my understanding that it takes more to clip a bright value than a dark value. (When I say the eye sees more "values," I just mean that the brain recognizes highlights quicker and considers anything in light to be more important, for survival reasons.)

You were saying that you found it easier to bring up the darks than to pull back the highlights; could it be that the darks just weren't as noticeably destroyed because they lacked the telltale signs of pixelation or information loss? Just curious if you could verify this for me!
 
shmne, the sensors do not record more data in the highlights. That's actually a natural result of gamma correction: it pushes many of the shadow details up toward the highlights.
 
Garbz,
Yes, that was an excellent explanation of how to convert a 12-bit image to an 8-bit image while keeping the entire dynamic range. The only problem is that's not what happens. To explain my point, let's use an extreme example. Say for some scientific research purpose they invent a 30-bit sensor. This would truly be the "holy grail" of dynamic range recording, because the maximum photon count would be about 1 billion instead of 4,095. It could certainly capture any normally blown-out highlights. But if you then converted this to 8 bits without losing any dynamic range, you would have an unrecognizable image: the conversion would divide each value by roughly 4.2 million. Assuming you expose the shot so the darkest pixel is a 0 or 1 in 30-bit terms, almost the entire picture would come out as 0s or 1s in the 8-bit JPG, because everyday scenes just don't have enough contrast for any pixel to count over 4 million photons. It would end up a black screen!
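A toy version of that thought experiment in NumPy (the scene values are made up):

```python
# Naively scaling a hypothetical 30-bit capture down to 8 bits while keeping
# the full dynamic range crushes an everyday scene to black.
import numpy as np

scene = np.random.randint(50, 5000, size=(4, 6))  # typical photon counts

naive_8bit = (scene * 255 // (2**30 - 1)).astype(np.uint8)
print(naive_8bit.max())  # 0 -- the "full dynamic range" conversion is a black frame
```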

For any 12-bit camera at "normal" settings, the JPG will NEVER show the entire dynamic range of what was captured. In other words, you can select an extremely low-contrast rendering to retain lots of dynamic range, or keep normal contrast and select an 8-bit range anywhere inside the 12-bit RAW, or any combination in between. The huge dynamic range gives you *options*. One option is highlight recovery. It's not assuming or creating any highlight info; it's adjusting the curves so as not to clip the JPG at the top end. If you can recover highlights, the highlights were recorded. Like shmne said, you can't recover clipped information.

Which leads me to the HDR part. You certainly can do HDR with only one exposure. You could make 16 separate JPG images out of a single RAW exposure. Here's how:
RAW 0-255 becomes JPG #1 (0-255)
RAW 256-511 becomes JPG #2 (0-255)
RAW 512-767 becomes JPG #3 (0-255)
RAW 768-1023 becomes JPG #4 (0-255)
RAW 1024-1279 becomes JPG #5 (0-255)
And so on; you get the picture (sketched below). So without truly stretching or inventing any data, you can have a seriously high-contrast HDR image from your single RAW. But because, like you mentioned, the eye sees in logs, each successive JPG up to #16 becomes less and less useful.
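Here's a quick NumPy sketch of that slicing (the raw array is just stand-in data):

```python
# Carve a 12-bit RAW (values 0..4095) into 16 consecutive 8-bit slices of
# 256 levels each; values outside a slice pin to its black or white point.
import numpy as np

raw = np.random.randint(0, 4096, size=(4, 6))  # stand-in for real RAW data

slices = []
for n in range(16):
    lo, hi = n * 256, (n + 1) * 256 - 1        # e.g. slice 0 covers 0..255
    s = np.clip(raw, lo, hi) - lo              # shift the slice down to 0..255
    slices.append(s.astype(np.uint8))

print(len(slices), slices[0].dtype)  # 16 uint8 slices covering the full range
```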

Let's do an example of an 8-bit sensor vs. a 12-bit sensor. I think you may be confusing how sensors record an image. All they do is count photons of light. Take a scene with low contrast and pretty even illumination, like a cloudy day with no sky in the picture. Assuming both sensors have equal sensitivity and both are given the same exposure, they will record EXACTLY the same numbers. And I don't mean the same numbers after dividing by 16; I mean exactly the same numbers. The only thing the 12-bit sensor has over the 8-bit one is that it can count higher. That's it. If there's nothing higher than 255 to count, they will both end up with the same counts. Now if you've followed me to this point, it should be obvious why the 12-bit sensor can't just divide each number by 16: if it did, the output JPG would only have values from 0 to 15, and that would be a terrible looking picture.
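And a toy illustration of this in NumPy (made-up scene values; the point is just that both "sensors" record identical counts):

```python
# For a low-contrast scene where no photosite counts past 255, an 8-bit and a
# 12-bit counter record the same numbers; extra bits only matter when there
# is more to count.
import numpy as np

scene_counts = np.random.randint(40, 200, size=(4, 6))  # made-up counts, all < 256

sensor_8bit = np.clip(scene_counts, 0, 255).astype(np.uint8)
sensor_12bit = np.clip(scene_counts, 0, 4095).astype(np.uint16)

print(np.array_equal(sensor_8bit, sensor_12bit))  # True: identical counts
```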

So, to really oversimplify my summary, one could say that a 12-bit camera uses processing to select an 8-bit range inside its 12-bit capture. And this leads me back to my OP, which asks where inside this range it creates its JPG image. Is it always the same region, with say 1 extra stop of highlights clipped and 2 stops of shadows clipped (like MusicaleCA suggests), or is it intelligently figured out on a case-by-case basis?
 
