Another "Trying to wrap my head around raw bit-depth" question.

I'm still trying to get my head wrapped around bit depth and how it works when shooting / editing in raw.

Hypothetical situation: I take a photo, in jpeg, and the sky is a light blue. For the sake of this discussion, let's say I have nine consecutive pixels in a straight line, and I'll reference only the blue channel. My Color Eyedropper tells me their RGB colors are:

xxx,yyy,219
xxx,yyy,219
xxx,yyy,219
xxx,yyy,220
xxx,yyy,220
xxx,yyy,220
xxx,yyy,221
xxx,yyy,221
xxx,yyy,221


If I try to darken the blue sky, I may end up with the following RGB numbers for those same nine pixels:

xxx,yyy,187
xxx,yyy,187
xxx,yyy,187
xxx,yyy,190
xxx,yyy,190
xxx,yyy,190
xxx,yyy,194
xxx,yyy,194
xxx,yyy,194


I may well see 'banding' or 'posterization' of the sky for lack of data because the image is 8-bit.
Question one: Does this sound reasonable?


Question two: If I took the shot in raw instead, is it reasonable to state that I have more colors available between xxx,yyy,219, xxx,yyy,220 and xxx,yyy,221? Darkening the sky in post might then give RGB values for the same nine pixels like this:

xxx,yyy,186
xxx,yyy,187
xxx,yyy,188
xxx,yyy,189
xxx,yyy,190
xxx,yyy,191
xxx,yyy,192
xxx,yyy,193
xxx,yyy,194
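
To make the arithmetic behind this hypothetical concrete, here is a minimal Python sketch. The tone curve, its slope, and the fractional "raw" values are all invented for illustration; the point is only that one-step input differences get stretched into multi-step output differences when the source has no in-between values.

```python
# Hypothetical tone curve over this narrow range: maps blue 219 -> 187 and
# 221 -> 194, i.e. a slope of 3.5. This is what stretches one-step input
# differences into multi-step output differences.
def darken(b):
    return 187 + (b - 219) * 3.5

# 8-bit source: only the whole numbers 219, 220, 221 exist, so the output
# can only land on three widely spaced levels.
eight_bit = [219, 219, 219, 220, 220, 220, 221, 221, 221]
print([round(darken(b)) for b in eight_bit])
# -> [187, 187, 187, 190, 190, 190, 194, 194, 194]   (visible steps)

# Higher-precision source (what a 12/14-bit raw conversion preserves, shown
# here as fractional 8-bit equivalents): in-between values survive the curve.
high_precision = [219.00, 219.25, 219.50, 219.75, 220.00,
                  220.25, 220.50, 220.75, 221.00]
print([round(darken(b)) for b in high_precision])
# -> [187, 188, 189, 190, 190, 191, 192, 193, 194]   (much smoother ramp)
```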
 
Yes, but only if you work in 16-bit color, meaning 16 bits per channel. ('24-bit color' is 8 bits per channel: 8 red, 8 green, 8 blue.)

14 bits per channel would give 16,384 shades per channel; 16 bits per channel color would give 65,536 shades per channel.
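
The shade counts above follow directly from the bit depth (2 raised to the number of bits); a quick sketch of the pattern:

```python
# Shades per channel is simply 2 to the power of the bit depth.
for bits in (8, 12, 14, 16):
    print(f"{bits:>2} bits per channel -> {2 ** bits:,} shades per channel")
#  8 bits per channel -> 256 shades per channel
# 12 bits per channel -> 4,096 shades per channel
# 14 bits per channel -> 16,384 shades per channel
# 16 bits per channel -> 65,536 shades per channel
```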
 
I may well see 'banding' or 'posterization' of the sky for lack of data because the image is 8-bit.
Question one: Does this sound reasonable?

Yes, but you haven't just darkened the sky, you have increased the local contrast quite considerably (even assuming that you are viewing the values in a colour space such as sRGB or Adobe RGB).


Question two: If I took the shot in raw instead, is it reasonable to state that I have more colors available between xxx,yyy,219, xxx,yyy,220 and xxx,yyy,221?

If you had shot in 12- or 14-bit raw and were working in '16-bit', then the 0-255 values don't apply. But, as you say, there would be more 'colours' in the original between the high-bit equivalents of 219 and 221, so when you increased the contrast the gradient between the equivalents of 186 and 194 would be smoother.

What editing software do you use? Can it display the pixel values in high bit when working in high bit? For example, if you are working in 15-bit + 1 in Photoshop (called 16-bit, and taking up 16 bits of space per pixel per channel) you can read pixel info in 15-bit + 1 values (ie from 0 up to 32768, 32769 values in all).
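
For what it's worth, here is a minimal sketch of that scale, assuming a straight proportional mapping between the familiar 0-255 values and Photoshop's 0-32768 range (the helper names are made up for illustration):

```python
# Hypothetical helpers illustrating Photoshop's "16-bit" scale of 0..32768
# (15 bits + 1), assuming a straight proportional mapping to 0..255.
def to_photoshop_16(v8):
    return round(v8 * 32768 / 255)

def to_8bit(v16):
    return round(v16 * 255 / 32768)

for v in (219, 220, 221, 255):
    print(v, "->", to_photoshop_16(v))
# 219 -> 28142, 220 -> 28270, 221 -> 28399, 255 -> 32768
# Roughly 128 internal values sit between each pair of adjacent 8-bit levels.
```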
 
What editing software do you use?

Capture NX2 for 99% of my work. Raw Therapee and UFRaw on occasion.

Can it display the pixel values in high bit when working in high bit?

Not that I can find. It's displaying an 8-bit image on the monitor, AFAIK. Although it works in 16-bit in the background, it can only display 0-255 8-bit data.
 
16 bits per channel color would give 65,536 shades per channel.
I am not aware of any device that produces 16-bit per color channel files.

14-bit is the max as far as I know.

Photoshop's 16-bit mode only uses 32,768 values of the full 65,536 values of true 16-bit.

Converted Raw files are 16-bit files, but only 14 of those bits are image data. The extra bits are for image metadata and parametric line commands.
 
Converted Raw files are 16-bit files, but only 14 of those bits are image data. The extra bits are for image metadata and parametric line commands.

Are you sure about that? Isn't the extra space just padded to fill out two bytes and allow for true 16-bit (or 15-bit + 1) editing? Isn't the metadata stored in its own area rather than inside the bytes of image data?

PS I think that there may be true 16-bit scanners - the Eversmart Supreme perhaps? What about the Phase One backs? What is 16-bit Opticolor?
 
Yes, my statement is a simplification.

Some cameras only produce 12-bit Raw files, so 4 bits get added at conversion.

Raw image format - Wikipedia, the free encyclopedia

The structure of raw files, including the ISO standard raw image format ISO 12234-2 (TIFF/EP), often follows a common pattern:
  • A short file header which typically contains an indicator of the byte-ordering of the file, a file identifier and an offset into the main file data
  • Camera sensor metadata which is required to interpret the sensor image data. This includes the size of the sensor, the attributes of the CFA and its color profile
  • Image metadata which is required for inclusion in any CMS environment or database. This includes the exposure settings, camera/scanner/lens model, date (and, optionally, place) of shoot/scan, authoring information and other data. Some raw files contain a standardized metadata section with data in Exif format.
  • An image thumbnail
  • Optionally a reduced-size image in JPEG format, which can be used for a quick and less computing-intensive preview.
  • In the case of motion picture film scans, either the timecode, keycode or frame number in the file sequence which represents the frame sequence in a scanned reel; this is the most important metadata item, because it allows the file to be ordered in a frame sequence (without relying on its filename).
  • The sensor image data
 
Exactly. The metadata and parametric commands are not stored in the 'extra bits'; they are in a separate part of the file. 'The extra bits are for image metadata and parametric line commands' - no, they are not.
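
As a rough illustration of that layout, here is a minimal sketch assuming a TIFF/EP-style container (NEF and DNG follow this pattern). It reads only the short file header described in the list above: byte order, magic number, and the offset to the first IFD where the metadata tags live, in their own section of the file rather than in spare bits of the pixel data.

```python
# Reads only the short TIFF/EP file header: byte order, magic number, and
# the offset to the first IFD (the directory of metadata tags). Everything
# else - Exif data, thumbnails, the sensor image data - lives at offsets
# this structure points to, not inside the bits of the pixel values.
import struct

def read_tiff_header(path):
    with open(path, "rb") as f:
        byte_order = f.read(2)             # b"II" = little-endian, b"MM" = big-endian
        endian = "<" if byte_order == b"II" else ">"
        magic, first_ifd_offset = struct.unpack(endian + "HI", f.read(6))
        return byte_order, magic, first_ifd_offset

# Example with a hypothetical filename:
# print(read_tiff_header("DSC_0001.NEF"))  # e.g. (b'II', 42, 8)
```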
 
It should also be mentioned that you'll never get banding in the process of reducing the brightness of an image. In that case you're going from more information to less, and the remaining values will still be one level apart, i.e.

252, 253, 254, 255 going down to

127, 127, 128, 128 is not banding, since your eye has little perception of the difference between 127 and 128 (one level).

If on the other hand we went the other way:

127, 127, 128, 128 becomes 253, 253, 255, 255; now neighbouring values are two levels apart, which could lead to banding.
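
A quick numeric check of that point, using mappings chosen to mirror the example (squeeze 0-255 down to 0-128, and stretch 0-128 back up to 0-255):

```python
# Squeezing the full 0-255 range down into 0-128 merges neighbours, while
# stretching 0-128 back out to 0-255 turns one-level steps into two-level
# steps - which is where banding can appear.
darkened = [round(v * 128 / 255) for v in (252, 253, 254, 255)]
print(darkened)      # [126, 127, 127, 128] - neighbours at most one level apart

brightened = [round(v * 255 / 128) for v in (127, 127, 128, 128)]
print(brightened)    # [253, 253, 255, 255] - one-level steps became two-level steps
```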
 
Thanks, Garbz!

Taking your lead, I conducted the following experiment:

I took an image I had in .NEF that I had not previously edited. I chose one with a dark, dark blue sky (slightly underexposed, and shot with a CPL). I created a jpeg of the same image, then loaded both of them into Capture NX2 and selected a portion of the dark blue sky for the comparison.

[Image: BandingUCRorigresized.jpg]

I zoomed in to the same part of each image (the white box), with the two laid out side-by-side on my monitor. I then placed a Control Point in the middle of the raw file's frame and increased the Brightness to 60, and repeated the same adjustment on the jpeg.

The results:

Raw:
[Image: BandingSampleraw.jpg]

Jpeg:
[Image: BandingSamplejpeg.jpg]

As you can see, the raw image maintains a nice smooth transition across the sky, while the jpeg starts to become mottled & spotty.


Bear in mind that these screen captures have already been compressed twice: the first time when I saved them as jpegs, the second when they were uploaded to PhotoBucket. So the raw sample will appear to have some banding issues when in fact it does not. The full-size uploads (still compressed twice!) can be seen here:

Raw
Jpeg
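
This isn't a re-run of the Capture NX2 experiment, just a minimal numpy approximation of the same idea: the same brightness boost applied to an 8-bit copy and a higher-precision copy of one smooth sky gradient (the ramp and the boost factor are invented for illustration):

```python
import numpy as np

# One smooth "dark sky" ramp, kept at two precisions.
fine = np.linspace(40.0, 44.0, 1000)         # fractional values, stand-in for raw precision
as_8bit = np.round(fine).astype(np.uint8)    # what a jpeg keeps: whole 0-255 steps only

boost = 2.5                                  # hypothetical brightness increase
from_raw  = np.round(fine * boost)
from_8bit = np.round(as_8bit * boost)

print(len(np.unique(from_raw)))              # 11 distinct output levels: near-smooth ramp
print(len(np.unique(from_8bit)))             # 5 distinct output levels: visible steps
```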
 
Are you familiar with JPEG MCUs (Minimum Coded Units)?

One of the JPEG compression steps converts 8x8 pixel, 8x16 pixel, or 16x16 pixel blocks into MCUs.

Notice that all the "mottled & spotty" areas have straight edges.
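
For anyone curious, the block sizes follow from the chroma-subsampling factors recorded in the JPEG header; a minimal sketch (the sampling factors shown are the usual conventions, not values read from the images above):

```python
# MCU width/height = 8 pixels times the largest horizontal/vertical sampling
# factor among the components (Y, Cb, Cr). Typical factor sets shown below.
def mcu_size(sampling_factors):
    max_h = max(h for h, _ in sampling_factors)
    max_v = max(v for _, v in sampling_factors)
    return 8 * max_h, 8 * max_v

print(mcu_size([(1, 1), (1, 1), (1, 1)]))    # 4:4:4 -> (8, 8)
print(mcu_size([(2, 1), (1, 1), (1, 1)]))    # 4:2:2 -> (16, 8)
print(mcu_size([(2, 2), (1, 1), (1, 1)]))    # 4:2:0 -> (16, 16)
```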
 
Are you familiar with JPEG MCUs (Minimum Coded Units)?

If the question is directed toward me, then "No."
 
What KmH is getting at is that what you are seeing is JPEG encoding artefacts, not actual image banding.

Also, not only are they compressed twice, but it's the worst possible colour to choose. The eye is least sensitive to blue, meaning banding is harder to see on blue gradients, and, taking advantage of that fact, JPEG algorithms typically mangle the blue channel of an image horrendously during compression. The end result is that a picture that is completely blue will look worse compressed with JPEG than an identical picture in green.
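
A rough way to see why blue gets the short end, assuming the standard JFIF/BT.601 RGB-to-YCbCr conversion used by baseline JPEG:

```python
# Blue contributes the least to luma (Y), and blue-difference detail lands
# in the Cb channel, which chroma subsampling then stores at reduced
# resolution. Coefficients are the standard JFIF/BT.601 ones.
def rgb_to_ycbcr(r, g, b):
    y  =   0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

print(rgb_to_ycbcr(0, 0, 255))   # pure blue:  Y ~ 29, so almost all its detail sits in chroma
print(rgb_to_ycbcr(0, 255, 0))   # pure green: Y ~ 150, most detail survives in luma
```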

Want a really bad example of this? Both these images were saved with identical settings and then uploaded to PhotoBucket:
[Image: Untitled-1-3.jpg]

[Image: Untitled-2-1.jpg]
 
