Using Adobe Camera Raw: why am I only seeing 8-bit values?

astroskeptic

I recently started working in RAW using Adobe Camera Raw (which comes with CS3). A few days ago I tried, unsuccessfully, to shoot the moon surrounded by faintly illuminated clouds. The challenge of the shot was its high dynamic range (DR) (faint clouds, bright moon), so this got me paying more attention to the issue of DR with the RAW format.

My question is this: how can I verify I'm getting the DR I think I am? My Nikon D300 is set to 12-bit NEF. If Adobe behaved as I expect, I'd see values between 0 and 4095 in my image.

Yet as I mouse over regions of the image in the Camera Raw software, the pixel values displayed are clearly only 8-bit values (the max value in the highlights is 255). Why am I not seeing a range consistent with the RAW DR? Obviously the software has to scale the image to the DR of the monitor for display purposes, but I would expect it to report the correct underlying numeric values. Setting the color space preference has no effect on this, by the way (e.g. 16-bit Adobe RGB).

Based on this behavior, I'm concerned that I'm doing something wrong and not actually working with raw data. Or am I just somehow misinterpreting the software's display?

Thanks for your help.
 

1. Bit depth has NOTHING to do with dynamic range, so don't connect them that way.

2. I suspect the image shown on the ACR screen is an 8 bit rendering regardless. And until you choose to import the image into PSCS3 with a selected bit depth, it does not know what depth you'll use. I would not get hung up on this. And regardless of what ACR shows you, your Nikon raw image will be 12 or 14 bits depending on the model of your camera.

3. Again, DR has NOTHING to do with bit depth. Lots of people think this but it is not so. For example value 0 in 8 bits is the same as value 0 in 12 or 14 bit depth color. Value 255 in 8 bit depth is the same value as 4,095 in 12 bit.

4. What bit depth does is cut a "pie" into 4,096 pieces in the case of 12 bits, while 8 bits cuts the same exact pie into 256 pieces, yet both pies are exactly the same size (DR). Bit depth is about tonal gradations, not DR. Only the sensor determines the DR. The A/D converter simply defines how many pieces that DR gets "cut into". The deeper the bit depth, the finer the gradations the analog data is cut into when it becomes a digital file. (There is a small sketch of this at the end of this post.)

5. If you want to extend the DR of that moon/cloud shot, then take two or more raw shots in quick succession using a tripod: the first exposing for the moon's surface, the next 1/3 stop slower, the next another 1/3 stop slower, etc. Depending on the scene, 1/2-stop differences might be better; test it.

6. Then bring all the images into PSCS3 and blend them via layers and masks.

There are other ways to do this, but this one could work. When you use ACR to bring in each exposure, make sure you select 16 bits and the ProPhoto RGB color space, and you're set.
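A minimal Python sketch of the "pie" idea from point 4 above (the numbers are illustrative, not anything ACR does internally): the same normalized signal fills the whole range at either bit depth; only the number of steps changes.

```python
import numpy as np

# The same analog signal, normalized to 0.0-1.0 (the sensor's full range).
signal = np.linspace(0.0, 1.0, 9)

# Quantize it to 8-bit and to 12-bit code values.
codes_8 = np.round(signal * 255).astype(int)     # 256 possible steps
codes_12 = np.round(signal * 4095).astype(int)   # 4096 possible steps

print(codes_8)    # 0 .. 255  (black .. white)
print(codes_12)   # 0 .. 4095 (same black .. same white)

# Both cover the identical span; 12 bits just slices it ~16x finer.
print(1 / 255, 1 / 4095)   # step size as a fraction of the full range
```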
 
In any case, image editing applications do not natively support 10-, 12-, or 14-bit files. When editing, they work internally in 16-bit formats, which is why Camera Raw exports like this. This is transparent to the user because on screen it has no practical advantage: few people could tell the difference between 135,45,60 and 134,45,60, so there's no point in giving users more than 16.77 million colour combinations to work with.

However, deep down inside it makes a hell of a difference: having 69 billion colour combinations is the difference between knowing a value after a filter is applied and merely estimating it.

Camera Raw processes at the native bit depth of the file regardless of how it's set, and exports the way you tell it to, at 8 or 16 bits. Photoshop then opens the file at either 8 or 16 bits depending on the bit depth of the incoming file, unless explicitly set otherwise.
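A small Python sketch of that "knowing versus estimating" point, using the 135,45,60 example above (the divide-and-restore edit is hypothetical, chosen only to show rounding loss): an edit and its inverse lose a level in an 8-bit pipeline but survive a 16-bit one.

```python
value_8 = 135  # one channel of the 135,45,60 example

# Darken by a factor of 2.2, then brighten back, rounding to whole numbers
# at every step -- which is all an 8-bit-per-channel pipeline can store.
darker_8 = round(value_8 / 2.2)       # 61
restored_8 = round(darker_8 * 2.2)    # 134 -- one level of information is gone

# The same edit in a 16-bit pipeline (the 8-bit value scaled up by 257).
value_16 = value_8 * 257              # 34695
darker_16 = round(value_16 / 2.2)     # 15770
restored_16 = round(darker_16 * 2.2)  # 34694
back_to_8 = round(restored_16 / 257)  # 135 -- the original value is recovered

print(restored_8, back_to_8)          # 134 135
```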
 
Thanks for your replies.

SilverGlow, when you say "bit depth" it seems like you are referring to the storage size used by the processing software, in which case I agree that it has nothing to do with DR. Conceptually, DR refers to the ratio between the largest and smallest measurable inputs to a system. In the absence of noise, this is dictated by the bit resolution of the sensor, which is the sense in which I was using the term.

So it seems that ACR doesn't give the user access to the actual raw data values, only to the scaled display values (presumably it scales linearly?). I wanted access to the raw values for no reason other than a sanity check that I was actually getting 12 bits of data from the camera, but I guess I'll just have to trust the software.

I did, by the way, try taking multiple exposures of the moon scene and combining them in Photoshop, but my rudimentary skills were not adequate for the task.
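For what it's worth, the sanity check is possible outside of ACR. Here is a sketch assuming Python with the rawpy package (a LibRaw wrapper; the filename is hypothetical), which exposes the undemosaiced A/D counts directly:

```python
import rawpy

raw = rawpy.imread("DSC_0001.NEF")  # hypothetical file from the D300

# raw_image holds the raw, undemosaiced sensor values (one per photosite).
data = raw.raw_image
print(data.dtype)                   # stored as uint16, but only part of it is used
print(data.min(), data.max())       # a 12-bit NEF should top out near 4095
print(raw.white_level)              # the nominal clipping value LibRaw reports
```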
 

No, when I speak of bit depth, I speak of the depth of each channel written to a raw file. Not storage, as you say.

In addition, a sensor does not have bit depth. And how could it? A sensor is an analog device, not digital. Only after the A/D converter creates the raw file does bit depth come into play. DR is not to be calculated from what an A/D outputs. The DR is a function of, and only of, the analog sensor. In other words, a sensor does not have, as you say, "bit resolution", because it is an analog device (not digital).

The DR is never defined by bit depth, nor by the A/D converter. DR is a function of the sensor, an analog device.

For example, you can take a sensor, take its analog data, and run it through A/D converter A or A/D converter B. The first converter outputs 12-bit raw data, and the second outputs 14-bit raw data. Two different bit depths and two different files, yet the DR IS THE SAME because the data came from the same sensor.

So how could ACR show you a raw file in native form? A raw file is not even mapped to a color space, and if it is not mapped to a color space, it can't be rendered directly on your monitor. Therefore the image you see on the ACR screen is a "rendition" of the corresponding raw file.

And in fact, when you take a raw picture with your camera, that image on the LCD is a jpg, and not the raw image. See what I mean?

In summary, I assure you that if you import a raw image via ACR and specify 16 bits, the resulting native PSCS3 file will contain 16-bit data.
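To illustrate the rendering point: the raw mosaic has to be demosaiced, white-balanced, and mapped into a colour space before it can be put on a screen. A sketch using the same hypothetical rawpy/Python setup as earlier in the thread (this is LibRaw's renderer, not ACR's, so it is only an analogy):

```python
import rawpy

raw = rawpy.imread("DSC_0001.NEF")  # hypothetical filename

# What the camera recorded: a single plane of photosite values, no colour space.
mosaic = raw.raw_image
print(mosaic.shape)                 # (height, width) -- one number per photosite

# What a viewer can display: demosaiced and mapped into a colour space,
# i.e. a "rendition" of the raw data rather than the raw data itself.
rendered = raw.postprocess(output_bps=16, output_color=rawpy.ColorSpace.ProPhoto)
print(rendered.shape)               # (height, width, 3) -- an RGB image
```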
 
SilverGlow, how do you define DR?

Previously I misspoke when I referred to the bit depth of the sensor, but I stand by my claim that DR is dictated by the size of the A/D. As I said before, DR is conceptually the ratio of the largest to smallest measurable inputs to a *system*. For some reason, you seem to be excluding the A/D from the relevant camera system. Why?

Yes, it is true that we can talk about component-level DR. In this case, we can say the DR of the sensor is defined by the ratio of the maximum to the minimum amount of charge that a pixel site can store. But this seems to me a not very useful definition of DR for a camera because the overall system DR could be less due to the effects of the A/D.

Suppose we have an ideal sensor that can perfectly count photons and it can measure anywhere from 0 to 2^20 photons in a single pixel site. In this case the sensor's DR is 2^20/1 which I'll call 20 bits (ignoring the usual convention of measuring in dB).

Now, that electrical charge is output as an analog signal to the A/D, which quantizes it into its range. At best, a 12-bit A/D will produce non-zero outputs in the range 1-4095, so your sensor's DR is now reduced from 2^20 down to 2^12, or from 20 bits down to 12 bits, by the A/D. A 14-bit A/D will give you a larger output range, but still less than the range output by our hypothetical sensor.

So please explain how the A/D is not relevant to *system* DR (which is what we as photographers care about, right?)

The previous example implicitly assumes that the sensor's DR is larger than the DR of the A/D. Of course if that is not the case then the A/D cannot lower it and the overall system DR will be equal to the sensor DR.
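Putting numbers on the idealised example above (a noiseless photon-counting sensor feeding an ideal A/D), the chain can't do better than its narrowest stage:

```python
import math

def dr_stops(ratio):
    """Dynamic range in stops (powers of two) for a given max/min ratio."""
    return math.log2(ratio)

def dr_db(ratio):
    """Dynamic range in decibels for a given max/min ratio."""
    return 20 * math.log10(ratio)

sensor = 2**20       # ideal sensor: 1 to 2^20 photons per pixel site
adc_12 = 4095        # 12-bit A/D: smallest non-zero code 1, largest 4095
adc_14 = 2**14 - 1   # 14-bit A/D

for name, ratio in [("sensor", sensor), ("12-bit A/D", adc_12), ("14-bit A/D", adc_14)]:
    print(f"{name}: {dr_stops(ratio):.1f} stops, {dr_db(ratio):.0f} dB")

# The system is limited by whichever stage has the smaller range.
print(f"system: {dr_stops(min(sensor, adc_12)):.1f} stops")
```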
 
SilverGlow is not going to answer... but DR, in simple terms, is "dynamic range". As far as the part you are interested in, it is the range of tones your camera can capture from white to black without either losing detail in the darks or blowing out detail in the whites.

Better cameras will have a greater range before either happens than cameras of lesser quality.
 
Yeah, astroskeptic, you are right. A/D converters are one aspect of determining the dynamic range, because even though the ADC's actual bit depth is not related to it, the ADC still has a reference point that it calls the maximum value. If that reference is 1 V, then the range tops out at 1 V; if it's 5 V, then it tops out at 5 V.

If you want to get gritty, though: in digital cameras the ADC is not the limiting factor for dynamic range, as it is set at a point where it generally covers the usable linear region of the photodetectors. Increasing ADC dynamic range is trivial: use a higher reference and throw more bits at it. Increasing the linear region in which a photodiode/amplifier operates without introducing noise is almost black magic. So in most cameras it's the characteristics of the sensor itself that ultimately define the dynamic range, and the ADC is just set accordingly.
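A tiny sketch of the reference-voltage point (the voltages are made up for illustration): the reference sets the largest signal an ideal ADC can represent, while the bit count only sets how finely that span is sliced.

```python
def lsb_size(v_ref, bits):
    """Size of one code step for an ideal ADC with the given reference voltage."""
    return v_ref / (2 ** bits)

# Same 12-bit converter, two different references: the top of the range moves.
print(lsb_size(1.0, 12))   # 1 V full scale  -> about 0.24 mV per step
print(lsb_size(5.0, 12))   # 5 V full scale  -> about 1.2 mV per step

# More bits at the same reference: same full scale, just finer steps.
print(lsb_size(1.0, 14))   # 1 V full scale  -> about 0.06 mV per step
```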
 
