16 bit (or more) RAW files?

As for the number of distinguishable colors, well.

Remember that the RGB value in the image file has to convey value information as well as color information, AND there are issues with color gamut that I do not fully understand but which imply that the colors we're representing on any given output medium are not all the colors we can see. So you really need to encode a fair bit of extra, um, somehow or something, to account for various output media. Or something. I told you I didn't really get the gamut issues.
 
So I've got a question that I've been wondering about.

Most computers and electronic devices use bytes. You have an 8 bit processor, a 16 bit processor, a 32 bit processor.

But never a 14 bit processor.

Cameras however, tend to be 12 or 14 bit. My Nikon D700 uses 12 bit raw files, unless I specifically select 14 bits... Canon appears to have the same problem if I do a quick google.

Hasselblad, however, uses 16 bits. And if you look at the skin tones from a Hassy, they're much better than from a Nikon or Canon.

So what's the deal? Why don't Nikon and Canon use 16 bit files?

And why stop there? My computer is 64 bit; wouldn't even a 32 bit depth provide a much better photo?

OK, a primer.

Computers work in bytes because that's the way processor registers are set up. Mathematical functions on an 8bit CPU are done with an 8bit arithmetic logic unit on sets of 8bit registers. Through the use of carry and overflow flags, however, the system scales quite well. You want to add 16bit numbers? That works the same way we add multi-digit numbers by hand, with a carry. An 8bit CPU like the AVR microcontrollers I work with has no problem doing maths on 16bit numbers. The problem is the amount of effort involved. Adding two 8bit registers on an 8bit processor takes 2 instructions and 3 clock cycles; adding two 16bit numbers takes 3 instructions and 5 clock cycles. The easiest way to get that back down to 3 clock cycles is to upgrade the ALU to handle a pair of registers at the same time and add a new instruction that automatically works on 2 registers at once, i.e. a 16bit instruction. And it grows from there.
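To make the carry trick concrete, here's a minimal C sketch (not actual AVR assembly, just the same idea in C) of how an 8bit machine adds two 16bit numbers: add the low bytes, catch the carry, then add the high bytes plus that carry.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the ADD/ADC pattern an 8-bit CPU uses for a 16-bit addition:
   low bytes first, then high bytes plus the carry from the low-byte add. */
static uint16_t add16_on_8bit(uint16_t a, uint16_t b)
{
    uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
    uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

    uint8_t sum_lo = a_lo + b_lo;              /* "ADD": low bytes               */
    uint8_t carry  = (sum_lo < a_lo) ? 1 : 0;  /* carry flag set by that ADD     */
    uint8_t sum_hi = a_hi + b_hi + carry;      /* "ADC": high bytes plus carry   */

    return ((uint16_t)sum_hi << 8) | sum_lo;
}

int main(void)
{
    printf("%d\n", add16_on_8bit(300, 500));   /* prints 800 */
    return 0;
}
```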

Now looking at the analogue world, our useful data is not limited by bits, but by the noise floor. What's the point of having 16 bits of data if statistically the bottom 4 bits will be 100% random? It's a waste of electronics and valuable chip space (remembering that on a CMOS sensor the analogue to digital conversion is done on the sensor). In a camera where every tiny component is using valuable space, and every bit of data from every pixel takes valuable processing time, the goal is not to waste time or space processing zeros or processing random data. By making custom circuits that work directly with the amount of *useful* data available, the system becomes faster. It's just like my AVR microcontrollers, which have a 10bit ADC: I often don't bother reading the low ADC register, because reading it only adds to the time it takes to process the data.
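For what it's worth, this is roughly what that looks like on an ATmega-style AVR. The register names are the standard avr-libc ones, but treat this as a sketch rather than production code: left-adjust the 10bit result so the useful top 8 bits land in ADCH, and never touch ADCL.

```c
#include <avr/io.h>
#include <stdint.h>

/* Left-adjust the 10-bit ADC result (ADLAR) so the top 8 bits land in ADCH,
   and skip reading ADCL entirely when 8 bits of real signal is enough. */
static void adc_init(void)
{
    ADMUX  = (1 << REFS0) | (1 << ADLAR);               /* AVcc reference, left-adjust, channel 0 */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1); /* enable ADC, clock prescaler /64        */
}

static uint8_t adc_read8(void)
{
    ADCSRA |= (1 << ADSC);              /* start a single conversion            */
    while (ADCSRA & (1 << ADSC)) { }    /* wait for it to complete              */
    return ADCH;                        /* top 8 bits only; ADCL is never read  */
}

int main(void)
{
    adc_init();
    for (;;) {
        uint8_t sample = adc_read8();   /* do something useful with the sample */
        (void)sample;
    }
}
```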

Now how does this relate to the real world? Well, you may be happy with your Hassy in a studio, but frankly I would be supremely pissed if my DSLR had a max continuous firing rate of 0.7fps like the Hasselblads, or the sub-2fps of the Leica M8. And now for the real kicker: this has nothing to do with skin tones. The gamut and colour depth of even a 10bit sensor is enough to render skin tones correctly on a screen. It all depends on how your camera processes the data, or one step further, how your RAW processor processes the data. No amount of bits in a file will change the fact that some algorithms just don't look quite right (Adobe Standard, I'm looking at you and your excessively purple skin tones).

Finally, is it worthwhile? Well, if DxOMark results are to be believed, the 14bit D800 beats both the 16bit Hasselblad H3DII-39 and the Leica M8 in every metric. Colour doesn't look right to you? Maybe you need to calibrate, or adjust the colour profile in the RAW software you use.


8-bit RGB color depth creates more colors than the (average) human eye is capable of perceiving. What makes you think 16- or 32- or 64-bit depth would somehow be better?

Perception only matters for the final product. As for whether a 64bit image is "better", the answer is a resounding yes. More data is better for post processing. You can pull a lot of data from the dark shadows of a 14bit image. HDR software can do a world of wonder when you take multiple exposures, convert them to a 32bit file, and then compress the tonal range back down to 8bit for display. But for me even this isn't good enough. The software I use for astrophotography stacks literally hundreds of images and creates a 64bit file. It's a pain to work with 400MB files, but that data is actually necessary to get results.
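As a rough illustration of why a stacked file needs that many bits, here's a toy C sketch: the frame data is faked with rand() and the "image" is only a few pixels, but it shows how averaging 200 noisy 8bit exposures produces per-pixel values with far finer gradation than 8 bits can hold, which is why the result goes into a 64bit floating point buffer.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define FRAMES 200    /* number of exposures in the stack (made-up figure) */
#define PIXELS 4      /* tiny "image" so the sketch stays readable         */

int main(void)
{
    double stack[PIXELS] = {0};   /* 64-bit accumulator per pixel */

    for (int f = 0; f < FRAMES; f++)
        for (int p = 0; p < PIXELS; p++) {
            uint8_t sample = 10 + rand() % 6;   /* fake dark pixel plus noise */
            stack[p] += sample;
        }

    for (int p = 0; p < PIXELS; p++) {
        double mean = stack[p] / FRAMES;
        /* Values like 12.435 exist in the 64-bit buffer; an 8-bit file
           would round them all to 12 and throw that detail away. */
        printf("pixel %d: %.3f\n", p, mean);
    }
    return 0;
}
```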

8-bit color depth gives you over 16 million colors. The human eye can only distinguish around 10 million of them.

Which 10 million? With an 8bit file we can display 6.7 million more colours than the eye can see, yet somehow when we show the result on a display with a larger gamut than sRGB we see visible banding. The human eye has an incredible ability to detect subtle shades of saturated colours, especially around the greens. It's excellent at distinguishing tones in the shadows but sucks at identifying colours in darkness. So while the human eye may have difficulty distinguishing between (0,1,2) and (0,1,1), it can most definitely tell the difference between (0,254,0) and (0,255,0), and most likely you'll be able to pick out shades in between. The numbers we can display are evenly distributed between the red, green and blue channels; unfortunately our eyes don't work like that, so quite simply 16.7 million colours isn't enough on wide gamut monitors.
 
this is fascinating stuff

so, if i choose to use 12 bit raw files will i get the same picture quality as 14 bits?

will 12 bits have less noise but also less shades?
 
Noise will be the same, but:
12-bits = 4096 possible discrete values (0-4095) per color channel
14-bits = 16,384 possible discrete values (0-16,383) per color channel

So yes: per color channel, 12-bit gives you 12,288 fewer possible shades than 14-bit can show.
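If you want to see those numbers fall out, here's a trivial C check. It's purely an illustration of the counts, not of how the camera actually digitises: it also shows what the missing shades mean in practice, since every four neighbouring 14-bit codes collapse onto a single 12-bit code.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int levels12 = 1 << 12;   /* 4096  */
    int levels14 = 1 << 14;   /* 16384 */
    printf("12-bit levels: %d, 14-bit levels: %d, difference: %d\n",
           levels12, levels14, levels14 - levels12);

    /* Four neighbouring 14-bit values all land on the same 12-bit value. */
    for (uint16_t v = 8000; v < 8004; v++)
        printf("14-bit %d -> 12-bit %d\n", v, v >> 2);

    return 0;
}
```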
 
this is fascinating stuff

so, if i choose to use 12 bit raw files will i get the same picture quality as 14 bits?

will 12 bits have less noise but also less shades?

The two questions are independent of each other. You won't be able to tell the difference between a 12bit and a 14bit recording of a scene; in fact your video card will clobber it down to 8 bits to display it on the screen anyway.
However, start boosting the shadows in any kind of extreme way and things start looking VERY different. As KmH has said, you have more possible shades to represent an image. So when you want to stretch the colours in an image, you need some data that isn't normally visible. To illustrate this, look at the following example:

This first image is the direct result of a stack of 200 images of the Orion nebula. This shows what my (32bit in this case) file looks like when I first open it on the computer:
[Image: Autosave020_zps4e9f59ad.jpg (the unprocessed 32bit stack as opened)]


So after we brighten, brighten again, apply a few layers of tonemapping, brighten again for good measure, increase saturation, and fix the colour balance:
[Image: Autosave020-64bit_zpsc136c749.jpg (the same stack after the above processing)]


The result below came from a direct duplicate of all the above steps with all the same settings. The only difference is that the file was dropped to 8bit at the start. When we did that, there was no visible change on the computer screen. But have a look at what happens after we apply all the above corrections:
[Image: Autosave020-8bit_zpsdab8295d.jpg (the same processing applied after dropping to 8bit)]


So as you can see, data that may not be visible is sometimes quite important.
In case you're interested, this is what it looks like when I dedicate more than 5 minutes to processing the same image: http://dafaq.garbz.com/photography/space/images/M42.jpg
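If you'd rather see the same effect in numbers than in pictures, here's a toy C sketch: quantise a very dark gradient once at 14 bits and once at 8 bits, then "boost the shadows" by a factor of 40. The signal values are made up; the point is that the 14bit copy still ramps smoothly while the 8bit copy jumps in coarse steps, which is exactly the posterisation you see above.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    for (int i = 0; i <= 5; i++) {
        double signal = i * 0.001;                 /* a very dark tone     */
        int raw14 = (int)round(signal * 16383);    /* 14-bit quantisation  */
        int raw8  = (int)round(signal * 255);      /* 8-bit quantisation   */

        /* "Boost the shadows" by 40x and map back to a 0-255 display range. */
        double boosted14 = (raw14 / 16383.0) * 40.0 * 255.0;
        double boosted8  = (raw8  /   255.0) * 40.0 * 255.0;

        printf("signal %.3f -> boosted from 14-bit: %5.1f, from 8-bit: %5.1f\n",
               signal, boosted14, boosted8);
    }
    return 0;
}
```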


The noise, on the other hand, depends on other factors, and as I alluded to above, the important part is that camera manufacturers don't waste processing power and silicon on processing nothing but noise. You can do analogue to digital conversion at any bit depth you want. The question is whether the least significant bits will be relevant data or just noise.
 
