Convert 8bit to 16bit before editing

Garbz

I was inspired by a PM earlier today to explain a bit about what you can and can't do with a 16bit file, and how Photoshop actually works with images when you set them to 8bit or 16bit. Hopefully this will answer questions such as "is there a point to working in 16bit if the image was 8bit?" The usual answers start an argument, whereas the correct answer, as always, is: it depends.

First, a background for those who don't know. The bit depth of an image determines the number of discrete possible values a single pixel can have. The end points are still the same and some colours map directly to each other. Black can be represented as RGB8(0,0,0) or as RGB16(0,0,0); white is RGB8(255,255,255) or RGB16(65535,65535,65535); and so on. Every 8bit value can be converted to a 16bit value by simply multiplying by 257 (that is, 65535 / 255), which maps 0 to 0 and 255 to 65535 exactly, so middle grey RGB8(128,128,128) becomes RGB16(32896,32896,32896).
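As a quick sketch of that mapping (minimal Python, illustrative only; the exact factor is 257 rather than 256, since 65535 / 255 = 257, which keeps white at white):

```python
# 8-bit to 16-bit channel conversion: multiply by 257 (= 65535 / 255).
# This maps the endpoints exactly: 0 -> 0 and 255 -> 65535.
# (A plain x256 would leave 255 at 65280, just short of full white.)
def to_16bit(v8):
    return v8 * 257

print(to_16bit(0), to_16bit(128), to_16bit(255))  # 0 32896 65535
```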

"But you don't get any extra detail!" they say. And this is true: if you convert an 8bit file to a 16bit file you don't get any extra detail. You'll get discrete steps in the colour values, such that the next level below white will be RGB8(254,254,254) or RGB16(65278,65278,65278). All the values between 65278 and 65535 are therefore basically wasted when you convert an 8bit file to a 16bit file. So the conversion is pointless. Maybe.

But suppose you start with a 16bit file and convert to an 8bit file after making an adjustment. Take a really dark picture with a few RGB8 values: 0,1,1,2,2,3,3. Double the brightness and you get 0,2,2,4,4,6,6.
Now let's do the same with a 16bit file: RGB16 0,128,240,440,512,660,768, which converted to RGB8 would be 0,1,1,2,2,3,3. Take this 16bit file and double the brightness and you get 0,256,480,880,1024,1320,1536. I'm going somewhere with this: now convert it to 8bit and the result is RGB8 0,1,2,3,4,5,6. EXTRA DETAIL.
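The round trip above can be sketched in Python (a minimal illustration; `to8` is a hypothetical helper that scales 16bit down to 8bit with round-half-up, which reproduces the post's numbers):

```python
import math

def to8(v16):
    # scale a 16-bit value down to 8-bit, rounding half up
    return math.floor(v16 / 256 + 0.5)

vals16 = [0, 128, 240, 440, 512, 660, 768]

# Convert to 8-bit first, then double: detail is lost
as8 = [to8(v) for v in vals16]             # [0, 1, 1, 2, 2, 3, 3]
doubled8 = [min(255, 2 * v) for v in as8]  # [0, 2, 2, 4, 4, 6, 6]

# Double in 16-bit first, then convert: every step survives
doubled16 = [to8(2 * v) for v in vals16]   # [0, 1, 2, 3, 4, 5, 6]
print(doubled8, doubled16)
```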
Ok, but we knew that already: working in 16bit keeps more data in the in-between values than 8bit. And by in-between I mean that 8bit can already display every visibly discernable colour step in the sRGB gamut. So it really all comes down to this: working with 16bit files that were recorded as 16bit files avoids rounding errors causing a loss of detail.

ROUNDING ERRORS! Of course! And this brings me to the meat of my ramblings. Converting an 8bit file to a 16bit file in Photoshop produces no additional detail, but it will eliminate rounding errors when you start stacking changes on top of each other. For this example we'll use an image with a black to white gradient map applied:
Image1.jpg

In Photoshop a levels adjustment layer is made with the values 0, 0.15, 255 (the middle figure is the gamma slider).
Now another levels adjustment layer is made attempting to reverse that, with the values 0, 5, 255.
The result in 8bit:
Image2.jpg

Eeek, nasty posterisation. Too late, image ruined... or is it? Because these were done as adjustment layers, the processing has not been applied to the image in the lowest layer. So switch the image to 16bit:
Image3.jpg

Much nicer. So there is still benefit to working with a 16bit image even if it started as 8bit. Sure, this is an extreme example, but if the HDR crowd is anything to go by, we are dealing with an extreme art with some extreme post processing. So use 16bit regardless of whether the image was 8bit. It doesn't create what wasn't there, but it saves you from destroying what was there when you're feeling like some extreme processing.
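For the curious, the levels round trip can be sketched numerically too. This is a minimal Python illustration (function names are mine), assuming the middle levels value acts as a simple gamma, out = in ** (1/gamma); Photoshop's exact maths may differ, but the posterisation effect is the same:

```python
# Levels round trip from the post: gamma 0.15 followed by gamma 5,
# quantising to the working bit depth after every step.
def levels_gamma(v, gamma, maxv):
    return round(((v / maxv) ** (1.0 / gamma)) * maxv)

def distinct_levels(maxv):
    # count how many distinct output values survive the round trip
    out = set()
    for v in range(maxv + 1):
        out.add(levels_gamma(levels_gamma(v, 0.15, maxv), 5.0, maxv))
    return len(out)

print("8-bit round trip: ", distinct_levels(255), "distinct levels")
print("16-bit round trip:", distinct_levels(65535), "distinct levels")
```

Far fewer than 256 distinct levels survive the 8bit round trip (that's the banding), while the 16bit version keeps many more, which is why the gradient stays smooth.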

Do you know this about Photoshop CS4 and earlier?
Ok, we have our 16bit image, now let's make a JPEG to put on the internet to show off our talent. File, Save... Now in CS5 you can select JPEG, but in CS4 you can't, because JPEG is an 8bit-only format. Ok, so convert the image to 8bit: Image -> Mode -> 8 Bits/Channel... ****! The image now looks like it did when we started in 8bit, so undo.

Solutions:
Flatten Layers – Yeah, it's not pretty, but it does work. Don't forget, if you want to keep your layers, to undo the flatten afterwards.
Save For Web and Devices – This is the ticket, the money maker. All these functions in one simple window. It converts to 8bit automagically, shows you what the image looks like with the JPEG settings you selected, has a check box to convert to sRGB (so I'll never upload the wrong colourspace to the net again), and oh, a resize box, all non-destructive to my original work. I suggest everyone get intimate with this tool.


So that’s it. Do you work in 16bit?
 
Garbz
So well explained!!!
 
Garbz I think you need to write a book on all this colourspace stuff you know - it'll save your finger in the long run and line your pocket a little ;)

Seriously though, thanks for writing this out and putting it up - I've gleaned bits here and there and mostly understood the overall concept, but it's nice to get some clarification and confirmation on the actual workings.


Sadly I've only got Elements, so working in 16bit is not a practical option (it will mostly just save and crop a 16bit image and not much more than that), so all my RAWs are output as 8bit by default (since otherwise my first step in Elements has to be converting to 8bit). Were I to have CS5 or similar I'd certainly use the 16bit working mode!
 
Thanks Garbz for taking the time to discuss this with me offline and then post the results of the gradient file.

I have only shot in TIFF or RAW and never use JPEG, but was told many years ago (in a class) to change JPEG to 16bit TIFF as it was not destructive and contained more information. It seems part one is true, part two is not.

This question came up in my class recently and I needed to find an expert I could trust to give me the correct information, as we all know about the amount of BS on the internet. Garbz was kind enough to help a stranger via email.
 
One caveat about 'Save for Web' is that you need to make sure to select the embed metadata option otherwise it will strip your file of metadata in order to make your file leaner. If people don't know, metadata is where your copyright information, contact info, licensing info, etc. is held.

Most feel the Orphan Works bill will be passed in the US, allowing anyone to use your images commercially if they lack copyright information and the interested party practices due diligence in trying to locate the creator without success.
 
"The bit depth of an image determines it’s dynamic range. By that we mean the number of discrete possible values a single pixel can have"

Uh, NO, that's an incorrect usage of the term dynamic range. Wrong, wrong, wrong. The dynamic range is the total, overall spread of the values; adding more steps to a ladder does not make it into a taller ladder. Imagine a 16-foot ladder: will it have a greater height between its highest value (the top of the ladder, or the highlight values) and its lowest step (the ground, or the shadow values) if we add more steps to it? Or will it STILL be a 16-foot tall ladder if it has fewer steps?

Greater bit depth does NOT increase dynamic range.
 
You're right, but I'm right too. The reason is that the term dynamic range takes slightly different meanings for continuous and sampled data systems. In a continuous system, such as the light we see, the dynamic range is the difference between the lightest and the darkest point. In a sampled system, the largest value is determined by the bit depth and the smallest by the quantisation step.

These overlap where the largest number that can be represented is limited by the clipping point of the analogue to digital converter, and the lowest point is limited by quantisation noise. But to really screw with your mind: provided the analogue signal noise is lower than the quantisation noise, an increase in bit depth causes an increase in dynamic range in the photographic sense too, as a result of seeing more of the darker values closer to the noise.

The use of the term is quite valid, but now that I've thoroughly confused everyone I'll reword it. It is a photography forum after all, and not a signal processing lecture :)
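To put rough numbers on the sampled-system sense of the term (a back-of-envelope Python sketch, nothing more):

```python
import math

# Dynamic range of an ideal n-bit quantiser: 20 * log10(2 ** n) dB,
# i.e. about 6 dB (one doubling of signal) per bit.
def dr_db(bits):
    return 20 * math.log10(2 ** bits)

print(f"8-bit:  {dr_db(8):.1f} dB")   # 48.2 dB
print(f"16-bit: {dr_db(16):.1f} dB")  # 96.3 dB
```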
 
Since our digital cameras only have a dynamic range of 5 stops, isn't this all irrelevant? 8bit or 16bit, does it really matter other than reducing file size? After all, the output will always be JPEG. I foresee this being much more relevant when the digital age brings us cameras with a dynamic range of 8 stops.
 
Ok, can you please translate into layman english? I forgot my signal-processing translator at the office.... :lol:
 
Isn't the whole point of HDR photography to go beyond the 5 stop dynamic range we typically see on our cameras?

BTW, nice camera.

And one other question for anyone to answer: How the he#& do I know if my image is 8 bit or 16 bit to begin with?
 

If you are using a D-SLR your camera probably has around 12 stops of latitude. Find your camera at:

DxOMark - Sheet view

Our eyes can see many times the dynamic range of camera sensors. That is why photography can be difficult--to make that translation. That's why film crews need countless grip trucks full of 5K, 10K lights. That is why supplemental lighting exists, even in natural light portraits. HDR will increase the dynamic range of your image, but if your lighting looks bad, it'll be a bad looking image with a high dynamic range.

Dynamic range - Wikipedia, the free encyclopedia

Like Derryl pointed out, if we use the term 'dynamic range' to express the top and bottom of a scale, bit depth has no relationship to dynamic range. His analogy was perfect: if dynamic range is the height of a ladder, your bit depth is how many steps that ladder has. If it has more steps, the ladder doesn't get taller, it just gives you more steps. In photography, if all you had was a 1-bit image, you would have pure black and pure white to work with. If it became a 2-bit image you would have pure black, pure white, a lightish gray and a darkish gray. An 8-bit image has 256 steps from black to white. A 16-bit image has 65,536 steps. Not a larger dynamic range, just more intermediate steps.

If you are shooting RAW files you are probably capturing 12-bit images. Your RAW converter natively works in 16-bit. I think this highlights Garbz's point: the RAW converter could theoretically work in 12-bit, but by working in 16-bit it allows you to push tones into the vacant steps when you adjust your image, essentially creating gradated tonal detail.

Right, Garbz?
 
Man I regret using that term. :lol:

Honestly I don't know what bit depth RAW converters work at, but it's quite possibly native (10, 12 or 14 depending on the camera). The reason is that what I said doesn't matter if only a single correction is applied to the image. Photoshop layers don't work that way: the corrections are applied in order. You can nest layers, which groups them together, but all in all it's a simple sequence of discrete steps applied to your image.

Now a predefined set of sliders, on the other hand, can be combined into one adjustment function. Say you slide the contrast slider all the way up and then go down to the curves adjustment and put a backwards S bend on the curve (reducing the contrast); those adjustments could effectively be added together to form a curve which doesn't change much, and THEN that curve is applied to the image. Nothing is lost, as the alteration to the image is done in one step with a single function.

I'm not sure, but I'm led to believe that some groups of Lightroom adjustments work like that. For instance, you can take your exposure slider and slide it to the point of clipping half the image, then take the brightness all the way down to try and recover it, and it'll look HORRID (which implies one adjustment was made after the other). Yet you can clip the wazoo out of the image with the exposure slider and still recover all the clipped highlights by moving the recovery slider up (which implies these functions are somehow combined before being applied).

And one other question for anyone to answer: How the he#& do I know if my image is 8 bit or 16 bit to begin with.

If it's a JPEG it was 8bit. If it's a RAW file you opened in Photoshop via Adobe Camera Raw, the bit depth it will transfer into Photoshop at is written at the bottom of the Camera Raw dialogue. In Photoshop itself the title bar says it all:

Untitled-1 @ 100% (RGB/16*)*

Untitled-1 = Image Name
100% = Current zoom level
RGB = Current colour mode (Indexed, RGB, CMYK, Greyscale etc)
16 = Bitdepth of the image
* = Image is using a non-standard gamut (not sRGB colour space)
2nd * = Image is not saved.
 
I don't quite understand your last entry, Garbz, but check this out and let me know if this is what you are talking about.

I created a 3 tone image:
Picture11-1.png

This is inside Camera Raw. The histogram displays 3 spikes for the black, white and gray.

This image is essentially between a 1 and 2 bit image. Very limited.

Now I played with the parametric sliders and tried to draw out the tonal information. It worked.

Picture2-4.png

If you look at the histogram, I was able to shift the gray to different values. The histogram is no longer 3 solid spikes, but rather solid white and black spikes with a slightly sloped gray spike (it is subtle, but look at the base). In a way I am manufacturing detail. Is this what you are driving at, Garbz?
 
Not quite sure I follow what you're trying to say with this example. Showing what happens to 3 values is useless, since you're not demonstrating clipping, and each slider has a different action (Exposure +4 is not the same as +150 Brightness). But here's a mathematical example to describe what I was saying. For all intents and purposes, assume we're limited to 8 bits, that is 0-255.

Let's start with three brightness values: 120, 160, and 200.
Raise the brightness (multiply all values by 2) and you get 240, 255, and 255 (clipping).
Now lower the brightness (divide all values by 3) and you get 80, 85, and 85. Notice that due to clipping the last two values are the same?

But what if we combine the actions? Multiply all values by 2/3: 80, 107, 133. Detail is preserved.
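The same three values, as a minimal Python sketch:

```python
def clip8(v):
    # clamp a value to the 8-bit range 0-255
    return max(0, min(255, v))

vals = [120, 160, 200]

# Sequential: brighten (x2, clips at 255), then darken (/3)
step1 = [clip8(2 * v) for v in vals]        # [240, 255, 255]
sequential = [round(v / 3) for v in step1]  # [80, 85, 85]  detail lost

# Combined into one pass: x(2/3)
combined = [round(v * 2 / 3) for v in vals] # [80, 107, 133] detail kept
print(sequential, combined)
```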


To see this in action, open a normal image. Crank the exposure to +4 (notice all the clipping?), now crank Recovery up and notice all your detail is back again? These two actions are clearly pre-calculated before being applied to the image.

Looking closer at it, I think I was wrong about the effect of the brightness slider. I notice you can't change any clipped values with it. I think the brightness slider just has a function that doesn't linearly adjust all values up or down, i.e. got an image with clouds: increasing the exposure will clip them, increasing the brightness won't. It's possible most of Lightroom's adjustments work in some pre-calculated way; again, I'm inferring here, not certain.
 
