Thick Headed

I really appreciate everyone's patience and explanations.

Let me rephrase my befuddlement.

For discussion's sake, let's say I have a 16-bit image representing 100 shades of grey. I convert it to 8-bit. Do I now have 50 shades of grey, and are they the first 50 shades?
In the end, as you say, it doesn't matter because we can't see it anyway. I'm just trying to understand.
 
Subscribing to this thread. It's something I've never 100% grasped either, but I can feel it about to click! :)
 
I've actually had some difficulty grasping some of it myself. Walking through this thread, and working out some examples has cleared a few things up for me.

You're getting the idea, but the limiting number would be higher than 50. As far as which shades are used and which aren't, I'm guessing that would depend on the software used to convert the file.
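A quick way to see it (my own sketch in Python, not anything from the thread): spread 100 grey shades across the full 16-bit range, drop them to 8-bit by discarding the low byte, and count what survives. Shades only merge when they were closer together than one 8-bit step (256 sixteen-bit levels):

```python
# 100 grey shades spread evenly across the 16-bit range (0..65535).
spread_16 = [round(i * 65535 / 99) for i in range(100)]
spread_8 = [v >> 8 for v in spread_16]          # 16-bit -> 8-bit
print(len(set(spread_8)))                        # 100: all shades survive

# 100 *consecutive* 16-bit shades, all within one 8-bit step.
packed_16 = list(range(30000, 30100))
packed_8 = [v >> 8 for v in packed_16]
print(len(set(packed_8)))                        # 1: they collapse to one shade
```

So the answer isn't "the first 50 shades": how many shades survive depends on how far apart they are in the 16-bit range, not on which ones come first.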

Again, in my example above, I can't see any difference. Other images, such as those that contain a lot of gradient may have a noticeable banding in the 8-bit file. Grey scale is another way the difference might be visible. Since the color channels (red, green and blue) are all equal in shades of grey, there are only 256 "colors" in an 8-bit grey scale image, and 65,536 in a 16-bit image.

I'll try a few more image conversions and see what I get - I like science experiments!
 
Great. I've tried that a few times but can't see a difference. I'm sure in some circumstances banding may be a problem. I'll try some pure sky shots where there is a light-to-darker gradient.
 
[Attachment: 8_bit.png]

[Attachment: 16_bit.png]
I can see differences in these two. At least I think I can; I may have looked at them too long and just convinced myself that I do.
 
So you are saying, in 16 bit, there are more colors to choose from and the conversion to 8 bit chooses those colors. Whereas in an 8 bit original, there are fewer colors to choose from? If that is right, then I now understand.
16-bit has the potential to display more colors than 8-bit.
Post-processing RGB color palettes only offer 256 (8-bit) choices per color channel to choose from even if you are editing a 16-bit image. They usually also offer an HSL palette too. (Hue, Saturation, Luminosity)

In 12-bit depth there are 16 levels per color channel for every one 8-bit level, i.e. 15 additional shades of color between consecutive 8-bit shades.
In 14-bit depth there are 64 levels per color channel for every one 8-bit level, i.e. 63 additional shades of color between consecutive 8-bit shades.

Any of those in-between colors that existed in the original get lost in the conversion to 8-bit. In most images the loss is undetectable. As I mentioned before, the loss can become detectable in gradients as banding or posterization, where 256 colors per channel is not enough to smoothly reproduce the gradient. Image Posterization

In 8-bit, pure red is r=255, g=0, b=0. The next shade of red down is r=254, g=0, b=0.
In 12-bit there are 15 additional shades of red between those two 8-bit shades, and 63 additional shades in 14-bit.
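The counts above fall straight out of the bit depths. A quick arithmetic check (my own sketch, assuming the usual conversion that simply drops the low bits):

```python
# How many n-bit levels share one 8-bit level when converting by bit shift?
for bits in (12, 14, 16):
    levels_per_8bit_step = 2 ** (bits - 8)   # n-bit codes per 8-bit code
    in_between = levels_per_8bit_step - 1    # shades strictly in between
    print(f"{bits}-bit: {in_between} extra shades between 8-bit neighbors")
# 12-bit: 15, 14-bit: 63, 16-bit: 255
```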

It might be helpful to back up to where it all starts - the image sensor in the camera - and consider what the image sensor does.
The image sensor in a digital camera is color blind, and is incapable of recording color. Each pixel on the image sensor develops an analog voltage proportional to how much light (how many photons) hit the pixel during the time the shutter was open. Active pixel sensor - Wikipedia, the free encyclopedia

Color has to be inferred (interpolated). Most digital cameras have a Bayer array filter in front of the image sensor. The arrangement of the red, green and blue elements of the Bayer array is used by the algorithms that interpolate what color of light any one pixel probably recorded. Bayer filter - Wikipedia, the free encyclopedia

To decide what color should be assigned to any single pixel, the color interpolation algorithm considers the Bayer Array pattern and voltage level of adjacent pixels.
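Here's a minimal sketch of that idea (my own illustration, not how any particular camera does it): each sensor pixel records one value, and the two missing color channels are estimated by averaging the nearest neighbors whose filter element is the wanted color. Real converters use far more sophisticated algorithms.

```python
# A 4x4 patch of raw sensor values under an RGGB Bayer pattern:
# R G R G
# G B G B
# R G R G
# G B G B
raw = [
    [100, 80, 102, 82],
    [ 60, 40,  62, 42],
    [101, 81, 103, 83],
    [ 61, 41,  63, 43],
]

def bayer_color(y, x):
    """Which filter color covers pixel (y, x) in an RGGB pattern."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def estimate(raw, y, x, want):
    """Average the adjacent pixels (including diagonals) whose filter is
    `want`; the pixel's own value is used if it already records it."""
    if bayer_color(y, x) == want:
        return raw[y][x]
    vals = [raw[j][i]
            for j in range(max(0, y - 1), min(len(raw), y + 2))
            for i in range(max(0, x - 1), min(len(raw[0]), x + 2))
            if (j, i) != (y, x) and bayer_color(j, i) == want]
    return sum(vals) / len(vals)

# Full RGB guess for the blue-filtered pixel at row 1, column 1:
pixel = tuple(estimate(raw, 1, 1, c) for c in "RGB")
print(pixel)  # (101.5, 70.75, 40)
```

The blue pixel only measured blue (40); its red and green values come entirely from its neighbors, which is exactly the "considers the Bayer array pattern and voltage level of adjacent pixels" step described above.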

The pipeline from the image sensor to a photo that looks like what we see involves quite a bit of software manipulation - and that's true even for Raw files.

Raw converters demosaic (interpolate color), and they also apply a gamma curve, tone-map, and anti-alias before you see the photo and can then apply your own edits.
To make a TIFF (16-bit or 8-bit), or a JPEG even more is done to the photo before it is available to the photographer for additional editing.
 
KmH---I offer you the highest PhotoForum award for patience and clear explanations. Mostly patience. I really thank you for your time and effort.
I'm afraid that most of the time the people who answer questions, especially for hard-headed people like me, are never thanked enough for their efforts.

Thank you again
 
Thanks!
We live in a 10-second sound bite world, and any forum post longer than 10 words is considered 'long'. :lmao:

It helps me clarify my own understanding.

It looks like the video does a good job of showing visually what can happen.
 
