8 bits vs 16 bits

There's basically no point in shooting Raw in 16 bit (65,536 values) and converting to 8 bit (256 values), usually as a JPEG. Might as well shoot JPEG.

If you don't use any plug-ins that might have an issue with it, why are you exporting to 8 bit? If you find one random plug-in that's not compatible, wouldn't it make more sense to just change the mode for that single one to 8 bit rather than compress them all on the off chance you'll use a third-party plug-in that might not work?

This statement is kind of like saying that ALL JPEGs of the same image would always be the same. That's not true. You are adjusting the output from the raw in Lightroom. What you export from that post processing will look VERY different from what the camera would have processed, which is the beauty of shooting raw. You control what that output into your JPEG is.
I do not keep my raws or DNGs. Along this line of thinking it was silly for me to shoot in raw to begin with. I can tell you that what the camera puts out in JPEG mode is VASTLY different from what I put out in a JPEG after editing. I have no NEED to have that 16 bit in Photoshop... the only reason I don't change it is because I don't need to. However, what I am putting out at the end of my PS action is an 8-bit, sRGB JPEG for both print and internet use.

I'm talking about what the OP is doing: pulling files straight from the camera into Lightroom, converting there, then working on 8-bit files. I'd be surprised if there was that big a difference between the camera's compression and LR's compression. We're not talking about post processing at this point.

We're talking about the same thing, just different stages.

OH! I missed that. Or I made assumptions, I guess. I assumed he'd open in LR, apply his raw editing and THEN go to PS. You are totally right there. And in fact the LR conversion from raw to JPEG will look like crap compared to the camera version. An unprocessed raw image is flat and almost fuzzy because of the amount of data there. If you just export it to JPEG it looks like the total crap that a raw file is with no processing.
 
I do most of the major adjustments to contrast and color in ACR or DPP where I'm working with (I think) a 14-bit raw file, then export to PS as an 8-bit tiff (uncompressed). I've tried exporting 16-bit tiffs and found they just slowed me down without making any difference in the final psd file (I never use jpg except for web). If you're going to do a lot of drastic adjustments in PS (or LR I guess) then you'd be better off with 16-bit files, but if you can do most of this in conversion there's no need to create such large files.
 
The process of photography from the original scene to the final product is a process of reduction. The final product is likely a print reproduced in ink on paper, a print reproduced photo-chemically, or an image for electronic distribution via the internet. Whatever that final product is, it will contain vastly less information than the original scene, which contains more information than the sensor capture (RAW), which contains more information than the 16 bit RGB conversion, which contains more information than the 8 bit RGB reduction, which contains more information than the compressed JPEG.

A compressed 8 bit JPEG typically contains less than 20% of the data in a RAW file and less than 10% of the data in the original scene. Under the circumstances it's really quite amazing that you have anything at all worth looking at when you finally commit to that 8 bit RGB JPEG. Once you make that commitment, is there any chance you'll want to reconsider? There's no going back if you haven't saved the 8 bit uncompressed RGB file. There's no going back from there unless you saved the 16 bit RGB file. There's no going back from there unless you've saved the RAW capture.

There's no advantage to having more data than your final output medium can handle. If your final output medium is a print then an 8 bit RGB to print process will satisfy your output requirements. Even a compressed (JPEG) 8 bit RGB to print should do the trick if the compression is high quality. So it's reasonable to target an 8 bit JPEG to print as a final step in the process. As you work through that process of reduction in which you must discard more than 80% of the data that you originally captured, will you control the data manipulation and inevitable data loss or will you allow software to do that for you?

My workflow:

1. RAW capture.
2. Conversion to 16 bit RGB (uncompressed).
3. Cleanup editing of the 16 bit RGB file as needed.
4. Conversion to 8 bit RGB (uncompressed).
5. Print.
OR
5. Conversion to 8 bit RGB JPEG.
6. Internet.
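For anyone who wants to see those steps in code form, here is a minimal sketch of the same reduction chain using Python with the rawpy and Pillow libraries. These are not the tools Joe names, and the file names and quality setting are placeholders, so treat it as an illustration of the sequence rather than his actual process:

```python
# Sketch of the RAW -> 16-bit RGB -> 8-bit RGB -> JPEG reduction chain.
# Assumes rawpy, numpy and Pillow are installed; paths and settings are placeholders.
import rawpy
import numpy as np
from PIL import Image

# 1. RAW capture (already on disk).
with rawpy.imread("capture.nef") as raw:
    # 2. Convert to 16 bit RGB (uncompressed, in memory).
    rgb16 = raw.postprocess(output_bps=16)   # numpy uint16 array, H x W x 3

# 3. Cleanup editing would happen here, while the data is still 16 bit.

# 4. Reduce to 8 bit RGB by dropping the low byte of each channel.
rgb8 = (rgb16 >> 8).astype(np.uint8)

# 5. Print from the 8 bit file, or save an 8 bit JPEG for the internet.
Image.fromarray(rgb8).save("final.jpg", quality=90)
```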

The OP's original question is addressed in my steps 2 to 3. If I'm going to do any additional adjustments to the photo after the RAW conversion I want to have full 16 bit access to the data. I may choose to manipulate that data further, for example a local color alteration. My photo will then benefit from the retention of the full 16 bit data set until the final reduction to 8 bit. I'm going to get a better quality end result if I'm manipulating twice as much data to start with.

In the end there's no advantage to the 16 bit data set as the final output can't handle that much information and so the reduction process must continue. The question then really is: This process of reduction -- do you control it or do you let software control it?

Joe
 
I agree with Joe, although I happen to think that there is an advantage in keeping in 16-bit all the way to the print in some circumstances, particularly monochrome images on glossy paper. There are 16-bit printer drivers, of course.

One thing that hasn't been mentioned is the relationship with colour space. sRGB may be OK to work on in 8-bit, and Adobe RGB might be marginal, but any colour space larger than those two should be used with 16-bit.

Here is my digital workflow using Lightroom (I also use Raw Developer, Capture NX2 and Oloneo PhotoEngine). It shows a multi-purpose workflow, with an attempt to maintain high quality for as long as possible:

1) Take original in 14-bit losslessly compressed Raw
2) Import into LR, make basic adjustments
3) Import Raw into Photoshop, via ACR, as 16-bit ProPhoto
4) Edit image in 16-bit, all except final sizing and sharpening
5) Save 16-bit ProPhoto PSD master file

split into web (W) and print (P) branches:

6W) Convert to sRGB 8-bit
7W) Save as high-res high quality JPEG for free distribution via the web, non-commercial use licence
8W) Keep the image dimensions but quarter the nominal ppi (downsampling) to create a low-res version for page assembly; save as JPEG.
9W) Downsample further to a pixel size suitable for web use, so that no browser resizing is necessary. Save as a medium-quality JPEG.

6P) Convert to Adobe RGB
7P) Adjust for printing
8P) Save as TIFF, sometimes layered, sometimes flat; 16-bit if for printing with a 16-bit capable process, 8-bit if not.
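As a rough illustration of the web branch (6W-9W), here is a minimal Pillow sketch. The file names, target width, dpi and JPEG quality values are placeholders rather than Helen's actual settings, and it assumes the ProPhoto master has already been converted to an 8-bit sRGB copy:

```python
# Sketch of steps 6W-9W, assuming an 8-bit sRGB version of the master already exists.
# File names, target width, dpi and quality values are placeholders.
from PIL import Image

im = Image.open("master_srgb.tif")        # 8-bit sRGB copy of the PSD master

# 7W: high-res, high-quality JPEG for free distribution
im.save("web_highres.jpg", quality=95)

# 8W: same print dimensions, quarter the nominal ppi, i.e. 1/4 the linear pixel size
w, h = im.size
lowres = im.resize((w // 4, h // 4), Image.LANCZOS)
lowres.save("page_assembly.jpg", quality=90, dpi=(75, 75))

# 9W: downsample to a fixed pixel width so the browser never has to resize
target_w = 1200
small = im.resize((target_w, h * target_w // w), Image.LANCZOS)
small.save("web_small.jpg", quality=75)
```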
 
I read a great article in 'Digital Photo Pro', several years ago, that had a good example to show the difference between editing in 16 bit vs editing in 8 bit.

Try this out....

Open an image in Photoshop, then open the levels dialog. Make some adjustment (move the white point, mid and black point sliders) then hit OK. Now open the levels dialog again and look at the histogram. It will likely look like a comb, with gaps throughout the graph.
[attached image: hillers-comb.jpg, a levels histogram showing the comb-like gaps]


When you make those changes on a 16 bit image, the gaps are smaller...and when you make those changes to an 8 bit image, the gaps are larger.

So what does that mean to how the image looks? As I understand it, the more/larger the gaps you see, the fewer transitional tones you'll have in your image. Worst case scenario...where you should see a smooth gradient in the image, it becomes stepped because you've lost the tones in between the steps.
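If you want to see the same effect numerically rather than in the Levels dialog, here is a rough numpy sketch (not Photoshop itself; the black and white points are arbitrary placeholders) that applies the same stretch to an 8-bit and a 16-bit gradient and counts how many distinct 8-bit output levels survive:

```python
# Apply the same "levels" stretch in 8-bit and in 16-bit, then compare how many
# distinct 8-bit output values remain. Fewer than 256 means gaps, i.e. the comb.
import numpy as np

def levels_stretch(values, black, white, max_val):
    """Clip to [black, white] and stretch back out to the full range."""
    stretched = (values.astype(np.float64) - black) / (white - black)
    return np.clip(stretched, 0.0, 1.0) * max_val

# 8-bit path: make the adjustment on 8-bit data and stay in 8 bits
g8 = np.arange(256, dtype=np.uint8)
out8 = np.round(levels_stretch(g8, 20, 235, 255)).astype(np.uint8)

# 16-bit path: same adjustment on 16-bit data, reduced to 8 bits only at the end
g16 = np.arange(65536, dtype=np.uint16)
out16 = np.round(levels_stretch(g16, 20 * 257, 235 * 257, 65535)).astype(np.uint16)
out16_as_8 = (out16 >> 8).astype(np.uint8)

print("distinct 8-bit levels after the 8-bit edit: ", len(np.unique(out8)))        # under 256: gaps
print("distinct 8-bit levels after the 16-bit edit:", len(np.unique(out16_as_8)))  # all 256: no gaps
```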

Histogram from here...Bit-Depth
 
One thing that hasn't been mentioned is the relationship with colour space. sRGB may be OK to work on in 8-bit, and Adobe RGB might be marginal, but any colour space larger than those two should be used with 16-bit.

Absolutely. Conversion from RAW to 16 bit RGB should target the ProPhoto color space. Otherwise you have to ask yourself why you're trying to stuff X amount of data into a container that can only hold, say, 0.6X. As you eventually move to sRGB or Adobe RGB the process of reduction continues.
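As an illustration only (the converter is not named here): with a scriptable converter such as LibRaw via the rawpy Python binding, asking for that 16-bit ProPhoto output looks like this; the file name is a placeholder:

```python
# Requesting a 16-bit ProPhoto conversion so the container is big enough for the data.
# rawpy/LibRaw is used purely as an example converter; the path is a placeholder.
import rawpy

with rawpy.imread("capture.nef") as raw:
    rgb16_prophoto = raw.postprocess(
        output_bps=16,                           # 16 bits per channel
        output_color=rawpy.ColorSpace.ProPhoto,  # wide-gamut working space
    )
```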

Helen, I would appreciate your further thoughts and expertise on this topic. We use color spaces (ProPhoto) that can't be realized on our hardware -- so it's an issue of practice versus theory. Most of us really can't afford a 24 inch ColorEdge display, and even that can't handle the ProPhoto space.

The theory is telling me that I want to keep the RAW converter working space ProPhoto and the 16 bit RGB image ProPhoto, so that if I make adjustments I won't mangle my data trying to twist and bend it inside a container that's too small to process my changes. I think I get the theory, but I struggle with doing anything in photography that I can't directly SEE! It bothers the #$*&%& out of me to think that I'm working with a photo and my window on that photo is partially obscured while I'm trying to adjust the image characteristics.

I've had the horrible experience of trying to edit a photo on a laptop, for example, and the even more horrible experience of later viewing the result of that effort on a good display. I personally don't use LR, but if I had a nickel for every photographer who's asked me why their photo looks different after they've exported it from LR, well, I am retired.

So I have this gut response that makes me want to RAW convert straight to an 8 bit sRGB and view the image using a good sRGB display (that I can afford) so I can SEE what I've done. When I'm working on the photo in the RAW converter, the same practical issue applies again; my viewing window is limited! And so my gut again wants to set the working space in the converter to sRGB and convert straight to 8 bit sRGB -- no surprises!

When I'm working with a 16 bit photo in the ProPhoto color space on a display that barely manages 95% of sRGB, I can't shake this uneasy feeling that something is going to jump up and bite me in the butt. I'm basically happy with my end result, but I do get an occasional surprise as I make the ProPhoto to sRGB reduction, and then I find myself tweaking the photo yet again. My confidence is subtly shaken by this nagging thought that if I adjust my photo while in the ProPhoto color space (including in the RAW converter), something is happening that I can't SEE, and wouldn't I be better off preventing that earlier in the process?

Thanks,
Joe
 
When you make those changes on a 16 bit image, the gaps are smaller...and when you make those changes to an 8 bit image, the gaps are larger.

This assumes that you're starting with a raw file and exporting it to PS in 16-bit mode. If you start with a jpeg (which is 8-bit, of course), converting to 16-bit for editing won't bring back any of the data that has already been discarded.
Yes, of course.
 
A compressed 8 bit JPEG typically contains less than 20% of the data in a RAW file and less than 10% of the data in the original scene.

I'd be interested to know how you arrived at these figures. I tested a raw file that is 7.74 MB. When saved as a jpeg directly from ACR with no editing, the file size is 4.25 MB. That's 54 percent of the raw file -- a big loss, to be sure, but considerably less than 80+ percent.


I gladly admit that I just roughed those figures, but I'd argue that if anything the loss is more extreme than my rough estimate suggests. We don't want to compare RAW file size with the file size of a compressed JPEG in this case.

Depending on the vendor, the data in a RAW file is itself compressed.

To begin with, the RAW file in fact contains more information than the converted RGB output. If that converted RGB file is a 16 bit RGB file, then the loss in the 8 bit conversion is 50%. Assuming some loss from RAW to 16 bit RGB, then we're at greater than 50% loss just to get to an 8 bit RGB uncompressed file. Now add on the JPEG compression.
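To put that chain in rough code form, here is a trivial per-pixel sketch of the arithmetic (illustration only; the JPEG stage depends on content and quality, so it isn't reduced to a single number here):

```python
# Per-pixel arithmetic for the reduction chain described above.
bytes_16bit = 3 * 2   # three channels, two bytes each
bytes_8bit  = 3 * 1   # three channels, one byte each

print(bytes_8bit / bytes_16bit)   # 0.5 -> the 50% loss from 16 bit to 8 bit
# JPEG compression then shrinks the 8-bit data further (content- and quality-dependent),
# so the end-to-end loss from RAW is well past 50%.
```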

Joe
 
Helen, I would appreciate your further thoughts and expertise on this topic. We use color spaces (ProPhoto) that can't be realized on our hardware -- so it's an issue of practice versus theory. ... My confidence is subtly shaken by this nagging thought that if I adjust my photo while in the ProPhoto color space (including in the RAW converter), something is happening that I can't SEE, and wouldn't I be better off preventing that earlier in the process?
As I understand it, if you are going to be making edits/changes, it's best to be working in the larger color space. If you're just going to be making a few tweaks (resize, sharpen, etc.), then you don't risk much by choosing the smaller space earlier in the workflow.

One analogy is that the size of your color space is like the size of a bucket. As you pour your image into the bucket, any colors that are outside the gamut get spilled and are lost. But if you use a larger bucket (color space), all the colors will fit into it, so nothing is lost.
Yes, eventually you'll likely have to downgrade to a smaller space anyway... but it makes sense to do that later in the workflow, rather than earlier.

And yes, there is a bit of a disconnect since you/we can't even see all the colors displayed on our monitors. But soft proofing should still be able to tell us which colors are out of gamut.
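To make the "convert late" idea concrete, here is a minimal sketch of that final wide-gamut-to-sRGB step using Pillow's ImageCms module. The ICC profile path and file names are placeholder assumptions, not anyone's actual setup, and Pillow's ImageCms path shown here works on 8-bit data (a 16-bit master would need other tooling):

```python
# Sketch of converting a wide-gamut working file to sRGB as the last step.
# "ProPhoto.icm" is a placeholder path to a ProPhoto RGB ICC profile on disk.
from PIL import Image, ImageCms

im = Image.open("edited_master.tif")   # 8-bit RGB file edited in the ProPhoto space

prophoto = ImageCms.getOpenProfile("ProPhoto.icm")
srgb = ImageCms.createProfile("sRGB")

# Relative colorimetric keeps in-gamut colors exact and clips the out-of-gamut ones;
# perceptual would compress the whole gamut instead.
converted = ImageCms.profileToProfile(
    im, prophoto, srgb,
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
)
converted.save("for_web.jpg", quality=90)
```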
 
I'd argue that if anything the loss is more extreme than my rough estimate suggests.

I'm a confirmed raw shooter, so I'm not defending jpeg over raw. But your "rough" estimate is a long way from that of Bruce Fraser and Jeff Schewe in their book Real World Camera Raw. They say that when shooting jpeg, "you trust the camera to throw away one-third of your data ...."

Well clearly they're wrong then. 16 bit to 8 bit is 1/2 not 1/3 -- 16/8 = 2. And no doubt, to convert to 16 bit from RAW, we don't invent data we didn't start with so RAW contains more info than 16 bit. The loss then has to be more than 1/2 just to get to 8 bit which is more like the flip side of 1/3.

I don't think struggling to quantify this with precision is particularly useful. Bruce and Jeff were probably roughing it just like I was, because you have to throw into this the question of how useful the discarded data really was in the first place. You can't see all the data contained in a 16 bit RGB file. We have no technology capable of showing that to us, so if we discard something we couldn't see anyway... it's kind of like the tree falling in the forest and making a sound when you're not there. So let's say Bruce and Jeff rough-erred to one side and I rough-erred to the other, and in the middle is 2/3.

I could claim that the JPEG compression further discards huge amounts of data and prove it with the numbers, but that's not really fair. My RAW files convert to 75 MB 16 bit files. Assume loss has already occurred. Convert to 8 bit and I've got a 37 MB file. Save as a high quality JPEG and I've got an 8 MB file: 8/75 = 0.11, or arguably an 89% loss, without accounting for the RAW to 16 bit loss. Now I've got numbers to prove a greater than 90% loss from RAW to JPEG. Those are hard numbers, but that's not really fair because it's not a random loss. "Loss" is a negative word and carries all kinds of bad baggage, including suggestions of loss of control. Random data loss and controlled data loss are very different things. When Michelangelo carved a statue he lost marble. What JPEG achieves is in fact a triumph of digital technology, and it's not right to quantify it in that way.

So in that spirit of fairness let me say that in the reduction process from RAW capture to 8 bit JPEG we discard a sh*t load of data. I'll stick by that quantity. The far more critical questions are how useful/useless was the discarded data and by what process was it discarded. For example back to the quote posted above by Bruce and Jeff; I can throw my camera a whole lot farther than I trust its JPEG engine (at least 90% farther).

Joe
 
Well clearly they're wrong then. 16 bit to 8 bit is 1/2 not 1/3 -- 16/8 = 2.........

A 16-bit channel has 2^16 values, while an 8-bit channel has 2^8. Therefore (2^16) / (2^8) = 65536 / 256 = 256: working with a 16-bit image gives you 256 times as many possible values per channel as an 8-bit image.
 
So are you all saving your files as 16 bit TIFF or PSD? How could you possibly store all those files? Personally, when I up-sample from 14 bit camera RAW to PSD I have a 500 MB file out of my FF cameras (D700 and D3S). I see what everyone here is saying and agree that the output is much better, but how big are you guys actually printing to see this difference?

Please don't take this wrong, but it sounds like a lot of pixel peeping to me. Do you really need 16 bit files? Seriously? I may just be missing the point here...
 
So are you all saving your files as 16 bit TIFF or PSD? How could you possibly store all those files? Personally, when I up-sample from 14 bit camera RAW to PSD I have a 500 MB file out of my FF cameras (D700 and D3S). I see what everyone here is saying and agree that the output is much better, but how big are you guys actually printing to see this difference?

Please don't take this wrong, but it sounds like a lot of pixel peeping to me. Do you really need 16 bit files? Seriously? I may just be missing the point here...

How many layers do you have to get 500 MB? You should have about a 70 MB file if there is only one layer.

12 Mpixels, two bytes per pixel per channel = 6 bytes per pixel = 72 MB.
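As a quick back-of-envelope check of that arithmetic, here is a trivial sketch (a hypothetical helper; the megapixel count and bit depths are just the numbers already quoted in this thread):

```python
# Uncompressed, single-layer image size = pixels x channels x bytes per channel.
def uncompressed_mb(megapixels, channels=3, bits_per_channel=16):
    bytes_total = megapixels * 1_000_000 * channels * (bits_per_channel / 8)
    return bytes_total / 1_000_000  # size in MB

print(uncompressed_mb(12, bits_per_channel=16))  # ~72 MB: a flat 16-bit D700/D3S file
print(uncompressed_mb(12, bits_per_channel=8))   # ~36 MB: the same file reduced to 8-bit
```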

For me it is all about maximising the potential future use of the original. Each image costs a lot to get, so no point in penny-pinching later. Disk space is cheap.
 
