Challenge: Noticeable differences between RAW and a jpeg edited in 16 bit mode?


May 9, 2013
1) The human eye can't even see all of the colors possible with an 8-bit jpeg.
2) Considering #1, the only real advantages of RAW are to be able to do edits with the full data range (thus avoiding banding, etc.), or for LOWERING the dynamic range (e.g., you took a photo with almost entirely midtones, and wish to stretch it out to have full contrast).
3) The issue with lowering of dynamic range can be fixed in-camera in most models by selecting desired contrast settings, etc., and the camera will use the RAW to do this simple calculation without any extra loss.
4) For more complex edits like curves, dodging, burning, etc., Photoshop can convert an image and perform all edits at 16 bit, and many types of edits at 32 bit. This does not magically create data that isn't there, but it DOES quite successfully prevent things like banding, even if the original image only has 8 bits of data. Pretty much any edit within the realm of reasonableness will not visibly degrade quality.

Conclusion: There shouldn't really be any technical reason why it is more useful to edit a RAW than to edit a jpeg after first converting it to 16 or 32 bit.

The only reason I can think of for why RAW would have an advantage would be if people have just bothered to write better software for RAW converters than Adobe has done for 16 bit mode filters and adjustments. But I somewhat doubt this is the case, since Adobe is a company with vastly more experience at this sort of thing than random proprietary RAW software developers.

I tried this myself just now, shooting several sample images in RAW + JPG save mode, and editing some with RAW software and others with Photoshop (converting first to 16 bit mode). No normal edit I tried yielded results where I could visually tell the difference.

Can I tell the difference between RAW edits and 8 bit unconverted jpg edits? Yes. But not with the 16 bit edits.

I even went to various websites that explain the difference between RAW and jpg, and used their demo images: the very images they took specifically to show the difference between the two formats in the most dramatic ways possible (things like pure gradients, and images over- or underexposed by several stops). Again, I was able to achieve quite comparable results either way. (Note that this is WITHOUT the benefit of being able to set appropriate contrast settings, etc. in-camera.) Differences were noticeable in the shots that were poorly exposed by 2+ stops, but not in the slightly badly exposed ones.

I am curious how I might be proven wrong, in reasonable situations where you didn't completely botch your photo.
When you capture a photo in raw format the data will be saved in 12 or 14 bit (in Nikon cameras at least), meaning that there will be more data in the shadows/highlights which can be brought back in post processing.
The jpg file will not have this data because it is only 8 bit, so it will not be possible to bring back these details.

Sure, the camera can do some processing itself according to some settings which will make sure these details are sort of brought back before saving the actual jpg... However, this is done in a process determined by the company that developed the camera, and it might not be done the way I want.
I'd like full control over my files. Doing the post processing myself will make sure I get photos in my own style.

Converting a jpg file to 16 bit will not bring back the detail in blown out or heavily underexposed areas, that data will still be lost forever because it was clipped during the in-camera jpg save.
This is why I work with raw.
You misunderstand the in-camera settings I'm suggesting.

I'm saying that you should set your camera to have NO auto adjustments (in my Canon T2i, this is done by setting "picture mode" to "faithful" or "neutral"). This makes it no longer do the sharpening and contrast, etc. that it normally does by default in converting to jpg. Or only minimally.

Then you do your own sharpening, color, and everything in post processing in photoshop, using 16 bit mode, and advanced tools that will do a better job of sharpening, etc. than your camera would (for example, sharpening copy layers to do it nondestructively and channel-specific, or whatever you want to do).

You're right that changing it to 16 bit will not bring back any detail, but what it will do is stop you from LOSING any of the detail you still have. And since the human eye is not precise enough to detect even half the level of detail captured by a jpeg, all you really need to do is to preserve the 8 bit detail you have.

For example, here are two images of black-to-white gradients:
The first one was generated at 8 bit, then squashed into a smaller tonal range using levels, then expanded back out again, all at 8 bit.
The second one was also generated at 8 bit, but then converted to 16 bit mode, and the same edits were done.
Notice a difference? The bottom one is still perfectly smooth. In fact, if you were to make a 16 bit gradient in photoshop (I can't display one here, because the file uploader won't upload 16 bit formats), you would not visibly see ANY difference, because your eye is not precise enough to do so.

However, you can indeed see a huge difference vs. the first image, because doing your edits in 8 bit is lossy. When you smash down the tonal range, there aren't enough "slots" to hold all the data, so it gets banded into bins. Then when you expand out again, the data can't be recovered.

Bottom line: Editing in 8 bit creates banding and issues. But the original 8 bit itself is just as precise as 16 bit as far as our eyes can see. In the bottom image here, I didn't create any new data. I simply protected the sufficient data that 8 bit already provides.
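If you'd rather reproduce this gradient test numerically than squint at images, here's a rough sketch using NumPy. The 25% squash factor is just an arbitrary stand-in for the levels edit described above; it's not a claim about what Photoshop does internally:

```python
import numpy as np

# A 256-tone black-to-white gradient, like the demo images.
grad = np.arange(256, dtype=np.float64)

def squash_then_expand(tones, bits):
    """Compress the tonal range to 25% with a levels-style edit,
    rounding to the working bit depth, then stretch back out."""
    levels = 2**bits - 1
    squashed = np.round(tones / 255 * 0.25 * levels)
    return np.round(squashed / (0.25 * levels) * 255)

u8 = squash_then_expand(grad, 8)    # edit carried out at 8-bit precision
u16 = squash_then_expand(grad, 16)  # same edit at 16-bit precision

print(len(np.unique(u8)))   # 65 distinct tones remain -> visible banding
print(len(np.unique(u16)))  # 256 distinct tones remain -> still smooth
```

Same source data, same edit; the only difference is the precision of the intermediate buffer.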

So by setting your camera to do minimal or no processing while converting to jpeg, all it will do is throw out data that your eye cannot physically see anyway, and then by editing in 16 bit, you avoid losing any of that remaining data. And the final result is something that (if you are good at editing) should be able to always achieve equally high PRINT quality as a RAW.

The only time you should "need" RAW is if you are cropping out 50-80% of your image, or if you shot at the *completely* wrong white balance, or if you overexposed by like 3 stops or something. In other words: if you horribly, horribly, HORRIBLY botched all your images. In which case, you have much bigger issues to worry about than file format...

If you only slightly botched your image though (1 stop off on exposure, 10-20% off on zoom/composition, etc.), RAW shouldn't offer any visible differences vs. clever camera settings and 16 bit editing.


While you are sort of right, this goes wrong on scenes with higher dynamic range.
If you have a scene with dark patches of shadow and very bright highlights, you have no other option than to underexpose the shadows or overexpose the highlights (or in extreme cases both).
With a jpg file, all the data in those under- and overexposed areas would be lost. With a raw file you could pull back a lot of that data.
It's basically what you do with HDR imaging, only HDR allows for an even broader range than raw files.

Yes, you can do pretty nice edits with jpg files, I've worked with loads of jpgs and editing in general works just fine. It's just that raw will enable you to capture more in terms of dynamic range.
Canon : Picture Style <-- Neutral picture mode in Canon, for example, is specifically designed with the main priority of not throwing away that dynamic range data when converting to jpeg, to let you deal with situations just like you describe.

On default settings, the difference between RAW and jpeg often matters, a lot. Because shooting jpegs in standard mode, your camera will auto-boost the contrast so that you have full representation from blacks to whites. In doing so, it will often clip off the ends and lose a lot of the extreme detail, whereas RAW doesn't do this.

But if you simply tell it to not do that (by setting neutral picture mode on canon), then it will have the same default grayness that RAWs have, and will shy away on the histogram from both highlights and shadows, allowing you to choose in post-processing what threshold you want for clipping both ends, depending on what balance you want between detail and contrast.

If you set it correctly in settings + exposure, so that all photos just baaaaaarely don't reach either the black or the white end of the histogram, then you will have all the data you need later (with just jpeg) to render whatever details you want, with only a 10-20% adjustment, which is not enough to bring the images under the resolution capacity of the human eye (i.e., you won't visibly be able to notice the adjustment).
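As a rough sanity check on that claim, here's a small sketch. The 26..230 span is just an illustrative histogram that stops a bit short of both ends; the exact numbers are an assumption, not measured from any camera:

```python
import numpy as np

# An 8-bit exposure whose histogram just barely avoids both ends,
# spanning tones 26..230 instead of the full 0..255.
tones = np.arange(26, 231, dtype=np.float64)

# Stretch back out to full contrast in post (at high internal precision).
stretched = np.round((tones - 26) / (230 - 26) * 255)

# 205 of the 256 output levels end up populated; adjacent tones differ
# by at most 2 levels, far finer than the eye resolves in a print.
print(len(np.unique(stretched)))  # 205
```

A ~20% stretch like this leaves single-level gaps scattered through the histogram, nothing like the wide bands you get when a heavy edit is done entirely in 8 bit.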

RAW does give you more options to go in extremes in one direction or the other in post processing, but it won't help you more than jpeg in BOTH directions at once (see above: if you sample evenly with neutral mode, then jpeg will already have more than the human eye's resolution from shadows to highlights). Thus, this feature is only a benefit if either:
1) You make a dramatic mistake like overexposing by 2-3 stops (hopefully this is very rare if you aren't clueless), or
2) You don't know what you want the lighting of your photo to look like when you shoot it, which is very, very bad practice anyway. A good photographer should consider the light before anything else, before even composing the shot. They certainly should not leave that decision to post processing (that encourages you to largely ignore light completely, and will thus lead to poor choices in subject matter and composition, and poorer image quality overall).
Note: using custom options, you can even set contrast to LOWER than canon's "neutral" picture mode, by quite a lot.

Whereas zero (neutral) will sample evenly at all lightness values, negative values will actually start to bunch the pixels up toward the midtones a bit. In other words, when deciding what to include in the jpeg, it will sample more heavily from the high-density RAW data at the white and black ends than in the middle. The opposite of what it does by default when it tries to make all your jpegs look shiny and pop-y for you.

So let's say for some reason you actually want to make your image grayer than it is in reality. Perhaps you are shooting something like a person silhouetted against a bright window from indoors, and want to be able to see some detail both outside and on the person (probably a bad choice without a flash, but whatever, it's an example). You can switch to a custom picture mode with -4 contrast and zero everything else, and you will retain oodles of shadow and highlight data in your jpegs (i.e., higher than 8 bit detail, even though it's only an 8 bit format, allowing you to spread it out a lot without visible artifacts), at the expense of a minimal amount of true midtone detail that you don't need.
Before we get into an example, why not start with a thought experiment. Let's start with 3 numbers somewhere around the middle grey region.

32760, 32770, and 32780. Adjacent values here are a whole 10 shades apart. Now let's boost the contrast around middle grey with the function (a-32767)*100+32767, effectively a big increase in contrast centered on middle grey.
32067, 33067, and 34067. Adjacent values are now 1000 shades apart. Let's convert to 8bit: a/2^8
125, 129, 133. Our final values, nice and separate thanks to the increase in contrast.

Now let's do it again.
32760, 32770, and 32780. Why not try converting to 8bit first: a/2^(16-8)
128, 128, and 128. And now let's go back to 16bit: a*2^8
32768, 32768, and 32768. And apply the same settings again: (a-32767)*100+32767
32867, 32867, and 32867. And back to 8bit we go: a/2^8
128, 128, 128.

Now these are just numbers, but they illustrate a point. Yes, photoshop doesn't clobber its own functions, and yes, you should always edit in 16bit regardless of what you start with, but it helps to have real, meaningful data to work with to prevent posterisation. Is it field-relevant to an image that was exposed absolutely perfectly? Maybe not. But we live in a world of HDR, of boosting incredible tones from shadows, of shooting into the sun, and above all, in order to prevent overexposing we shoot to the left, which often requires brightening up the most butchered data (dark values, thanks to the gamma curve). One day you may actually need it.
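The thought experiment above is easy to run as plain Python, using round-to-nearest for the bit-depth conversions:

```python
def contrast(a):
    # The big contrast boost from the thought experiment,
    # centered on middle grey.
    return (a - 32767) * 100 + 32767

def to8(a):   # 16-bit -> 8-bit
    return round(a / 2**8)

def to16(a):  # 8-bit -> 16-bit
    return a * 2**8

vals = [32760, 32770, 32780]

# Path 1: apply the contrast boost in 16-bit, then convert down.
path1 = [to8(contrast(a)) for a in vals]
print(path1)  # [125, 129, 133] -- the three tones survive, distinct

# Path 2: convert to 8-bit first, then do exactly the same edit.
path2 = [to8(contrast(to16(to8(a)))) for a in vals]
print(path2)  # [128, 128, 128] -- the tones were merged and can't come back
```

Once the three values have been rounded to the same 8-bit level, no amount of contrast will pull them apart again.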

More to the point, the extra data allows a lot of latitude for playing with colours. This may not normally be a problem, unless your camera fouls up the auto white balance, or you had the wrong manual white balance set. There are very clear side-by-side differences between a JPEG and a RAW shot in daylight and then post processed to tungsten white balance.

But hey, why not do a real-world example from my life:

Let's start with a perfectly exposed image of the Great Carina Nebula. Perfectly exposed in that there's no clipping on black (noise) and only very slight clipping on one channel (24 pixels on the blue channel), i.e. I can't shoot any brighter without clipping data.

Now let's process this bad boy, but first convert to 8bit and then back to 16bit.

Uah, YUK. Let's try again in 16bit all the way through.

Not bad!

By the way, being the perfectionist I am, I ended up redoing this 5 minute exposure some 48 times over a few hours and then stacking the frames together to create one whopping big 64bit file. Here's the end result of my editing:

Many more stars are visible now.
I'm saying that you should set your camera to have NO auto adjustments (in my Canon T2i, this is done by setting "picture mode" to "faithful" or "neutral"). This makes it no longer do the sharpening and contrast, etc. that it normally does by default in converting to jpg. Or only minimally.

I'm concerned about the accuracy of your statement. For starters, unless you have access to both the internal chip-level microcode and the firmware within the camera, there is no basis upon which to state this as fact. Having written internal chip-level logic, I can tell you that there may be an incredible amount of 'unseen' processing taking place between the RAW and JPG images produced, regardless of user-settable options.

I'm not out to 'bust your chops' on this, but rather attempting to keep what seems to be a highly accurate, well thought out discussion from going astray. Unfortunately, my level of 8-bit, 14-bit, whatever-bit depth image knowledge is somewhere near zero. All I know is that there are more 'tricks' available in Photoshop (or whatever) with RAW files than with JPGs. So I do most of my editing with the RAW.
1) The human eye can't even see all of the colors possible with an 8-bit jpeg.

That isn't even close to being correct if you are referring to sRGB or Adobe RGB.

2) Considering #1, the only real advantages of RAW are to be able to do edits with the full data range (thus avoiding banding, etc.), or for LOWERING the dynamic range (e.g., you took a photo with almost entirely midtones, and wish to stretch it out to have full contrast).

Raw is often used to increase the dynamic range above what is represented in the JPEG. As Judobreaker says, there is more (in many cases much more) luminance information in a raw file. You are probably going to say that an in-camera JPEG conversion could be selected that would include the entire usable dynamic range of the sensor, but if it was in sRGB or Adobe RGB gamma it would be very low contrast and there would be bad banding if you tried to restore contrast, whether you worked in 8 bit or 16 bit. If you know how to use a raw converter these effects will be easy to recreate.
What about JPG artefacts and such? Sure, you're editing in 16 bit, but JPG is still a lossy format, and if you're editing a JPG that'll become an issue, won't it?
16 bits might not be enough either. It turns out you can put practically unlimited dynamic range to use for internal calculations. The trouble is that as you apply processing steps, you might push a pixel value outside of the dynamic range available, and then bring it back with another step.

So, you need lots and lots of dynamic range internally.
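A toy illustration of that headroom problem, with a single pixel. The ×1.4 brightening is an arbitrary example edit, not any particular tool's behavior:

```python
pixel = 240.0  # a bright tone in an 8-bit (0..255) buffer

# Step 1: brighten. In a clamped 8-bit buffer the true result (336)
# doesn't fit and clips at 255; a wide internal range keeps it.
bright_8bit = min(pixel * 1.4, 255.0)
bright_wide = pixel * 1.4

# Step 2: a later processing step darkens by the same amount.
back_8bit = bright_8bit / 1.4  # ~182 -- the original tone is gone
back_wide = bright_wide / 1.4  # 240 -- recovered (up to float rounding)

print(back_8bit, back_wide)
```

The two edits cancel mathematically, but only the representation with headroom survives the round trip; the clamped buffer has permanently lost the highlight.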

Also, JPEG is, as noted, a compression format (although there MIGHT be an option somewhere in some versions to do no compression?), so in general the image is being disturbed. It is being disturbed in ways that are generally invisible, or at any rate hard to see, for the human visual system.

As a general rule, a good 8 bit JPEG is just fine as an output format. The dynamic range and color gamut available are generally as good as or better than those of most output media. The JPEG compression applied at this stage, if applied lightly, is invisible to the human eye.

JPEG is NOT a good choice for internal/intermediate images, for reasons similar to why 8 or 16 bit is a poor choice internally. Applying it once is no problem; applying it over and over again can be. Just as arithmetic errors can pile up, leading to problems, compression artifacts can pile up. You open the file, you edit a bit. You blow out a couple of pixels without noticing. You save it. JPEG compresses it a little, introducing some perturbation. You open it again, fix the thing that blew out the pixels (but that data is gone now), do some more work, and save again; JPEG recompresses it.

Do this half a dozen times.

At some point, depending on a tremendous number of variables, visible artifacts will begin to appear.
Regardless of whether my eyes can sense differences between adjacent values in any color space, I will never stop shooting RAW. When I bought my $3000 camera, and paid it off over 6 months, it was to produce the highest quality images possible within technological and creative limits. Why, then, would I ever consider moving to jpg? If even 1 out of 500 images could benefit from raw, losing that one to jpg is too many. I'm not lazy in my editing either, meaning I'm fine importing hundreds of 75 MB RAW files and editing, or at least fully viewing, each and every one.

If there was REALLY no benefit to RAW editing, I doubt it would be so prevalent.
