16-bit (or more) RAW files?

Vautrin

So I've got a question that I've been wondering about.

Most computers and electronic devices work in powers of two: you have an 8-bit processor, a 16-bit processor, a 32-bit processor.

But never a 14-bit processor.

Cameras, however, tend to be 12 or 14 bit. My Nikon D700 produces 12-bit raw files unless I specifically select 14 bits... Canon appears to have the same limitation if I do a quick Google.

Hasselblad, however, uses 16 bits. And if you look at the skin tones from a Hassy, they're much better than from a Nikon or Canon.

So what's the deal? Why don't Nikon and Canon use 16-bit files?

And why stop there? My computer is 64 bits; wouldn't even a 32-bit depth provide a much better photo?
 
The sensors tend to be 12 or 14 bits deep. That's all the bits the sensor has per pixel.

You can trade off megapixels for bit depth, in general terms (the math works out, that is; technically you need some noise for it, but there's always noise, so you're probably OK). I strongly suspect this is why the Hasselblad images look better, if indeed they do and it's not just post-processing.
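To see why that trade-off works, here's a minimal sketch in Python (the signal and noise values are invented purely for illustration): averaging a block of noisy, quantized pixels recovers tones finer than a single code step.

```python
import numpy as np

rng = np.random.default_rng(0)

# A scene value that falls between two 12-bit code values, so a single
# noiseless sample can never represent it exactly (made-up numbers).
true_signal = 1000.4   # in 12-bit ADU
noise_sigma = 2.0      # read noise in ADU, also illustrative

# One pixel: quantized to a whole code value, e.g. 1000 or 1001.
one_sample = round(true_signal + rng.normal(0.0, noise_sigma))

# Average a 2x2 block: trading 4 pixels of resolution for ~1 extra bit,
# because the noise dithers the signal across neighboring code values.
block = np.round(true_signal + rng.normal(0.0, noise_sigma, size=4))
averaged = block.mean()

print("single pixel:", one_sample)
print("2x2 average :", averaged)   # tends to land nearer 1000.4
```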
 
8-bit RGB color depth creates more colors than the (average) human eye is capable of perceiving. What makes you think 16- or 32- or 64-bit depth would somehow be better?
 
The 'blad sensors might be 16 bits deep. Not sure, and I'm not sure it actually has any value. Mainly more bits give us more exposure latitude, but you COULD crush the range a bit and get finer tonal gradations within 12 stops or whatever, if you wanted to build your sensor that way.
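Some rough arithmetic on that idea (a sketch assuming a simple linear ADC, which is roughly how camera A/Ds behave): the brightest stop always takes half the code values, so extra bits mostly buy finer steps down in the shadows.

```python
# With a linear ADC, the brightest stop occupies the top half of the
# code values, the next stop half of the remainder, and so on down.
for bits in (12, 14, 16):
    total = 2 ** bits
    print(f"{bits}-bit: {total:>6} codes total, "
          f"{total // 2:>5} in the brightest stop, "
          f"{total // 2 ** 12:>2} in the 12th stop down")
```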
 
The Blad is a Fuji. Just sayin'.

The usual reason medium format looks better is the sensor-size-to-focal-length ratio, along with the fact that medium format lenses are usually great optics.
 
Hasselblad, however, uses 16 bits. And if you look at the skin tones from a Hassy, they're much better than from a Nikon or Canon.

I shot with a 39MP Hasselblad and I didn't notice any difference in colors from my D7000. I'm looking at the files now, and I don't see it.
 
And why stop there? My computer is 64 bits; wouldn't even a 32-bit depth provide a much better photo?
One aspect is the practical reality. A 32-bit or 64-bit color depth would ostensibly record finer color, but a 24-megapixel image would produce a file close to 100 MB uncompressed at 32 bits per pixel and nearly 200 MB uncompressed at 64 bits. Data transfer within the camera would be enormous, an 8 GB SD card would only hold about 85 images (at 32 bits per pixel; half that at 64 bits), any software trying to handle the file would be horrendously slow, etc. Plus, the color depth would be such overkill that it's unlikely anyone would really see an improvement over 14-bit.
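Here's that arithmetic as a quick Python sketch (uncompressed sizes only; real raw files add headers and, usually, compression):

```python
MB = 1_000_000  # decimal megabytes, as card makers count them

def frame_size_mb(megapixels, bits_per_pixel):
    """Uncompressed size of one frame, ignoring headers."""
    return megapixels * 1_000_000 * bits_per_pixel / 8 / MB

for bpp in (14, 32, 64):
    size = frame_size_mb(24, bpp)
    print(f"{bpp:>2} bits/pixel: {size:5.0f} MB/frame, "
          f"about {8_000 / size:3.0f} frames on an 8 GB card")
```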
 
8-bit RGB color depth creates more colors than the (average) human eye is capable of perceiving. What makes you think 16- or 32- or 64-bit depth would somehow be better?

Well, if I look at my computer monitor, it says it's using 32-bit color.

32-bit color looks much better than the old graphics that used 16-bit or 8-bit...

On top of that, I haven't seen color depth get raised since most computers standardized on 32-bit color...

Plus, if 8-bit were really enough, we'd never have to worry about going out of color gamut...

Thus my hunch is there is a perceivable difference...
 
I shot with a 39MP Hasselblad and I didn't notice any difference in colors from my D7000. I'm looking at the files now, and I don't see it.

Which Hassy and which raw file format?

B&H seems to imply some of the older models could save to 8-bit TIFF files, which makes me wonder about the difference in color

Hasselblad H3DII-39 SLR Digital Camera Kit with 80mm Lens
 
Which Hassy and which raw file format?

B&H seems to imply some of the older models could save to 8-bit TIFF files, which makes me wonder about the difference in color

Hasselblad H3DII-39 SLR Digital Camera Kit with 80mm Lens

DNG and Hasselblad 503cw with a CFV-39 back.
 
The ratio of the brightest signal that can be captured (well capacity) to the noise-limited signal of the sensor sets the maximum and minimum of what can be digitized meaningfully. 14 bits does a good job with most high-end sensors on the market. Beyond 14 bits, you are recording noise.
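Putting rough numbers on that, with purely illustrative figures rather than any particular sensor's specs:

```python
import math

# Illustrative figures only, not any real sensor's spec sheet:
full_well  = 80_000   # electrons a pixel can hold before it clips
read_noise = 5        # electrons of noise floor

stops = math.log2(full_well / read_noise)
print(f"usable dynamic range: about {stops:.1f} stops")  # ~14.0

# An ADC with meaningfully more bits than this is digitizing noise.
```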
 
Thus my hunch is there is a perceivable difference...

8-bit color depth gives you over 16 million colors. The human eye can only distinguish around 10 million of them.

12-bit gets you 68,719,476,736 colors. 14-bit gets you 4,398,046,511,104 colors.

16-bit depth ups it to 281,474,976,710,656. Seriously... 281 trillion.

If you want to go to 32-bit depth, you're looking at roughly 7.92 × 10^28 colors. 64-bit? Hold on to your slide rule... 6.28 × 10^57.
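Those counts are just 2^(3 × bits); a quick check in Python:

```python
# Distinct colors at n bits per channel, three channels (R, G, B):
for bits in (8, 12, 14, 16, 32, 64):
    print(f"{bits:>2} bits/channel: {2 ** (3 * bits):.4g} colors")
```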

That's a helluva lot of distinct colors!

You're saying you can perceive all those?
 
Well, if I look at my computer monitor, it says it's using 32-bit color.

Don't get confused - the 32 bits your monitor reports covers all the channels together, typically 8 bits each of red, green, and blue plus 8 bits of alpha. Maybe 10 bits per color if you are lucky.
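Here's a sketch of how such a 32-bit pixel is typically packed, assuming an ARGB layout (byte order varies by system):

```python
# One "32-bit color" pixel, assumed packed as ARGB:
pixel = 0xFF336699

alpha = (pixel >> 24) & 0xFF   # 0xFF = 255 (opacity, not color!)
red   = (pixel >> 16) & 0xFF   # 0x33 = 51
green = (pixel >> 8)  & 0xFF   # 0x66 = 102
blue  =  pixel        & 0xFF   # 0x99 = 153

print(alpha, red, green, blue)  # still only 8 bits of depth per color
```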
 
8-bit color depth is per channel, and is also known as 24-bit color - 8 bits × 3 color channels = 24 bits.
The 32 bits used to describe an electronic display is likewise describing multiple channels.

Image sensors are analog and have no bits at all. Image sensors cannot actually record color, either.

When a camera is said to make 12-bit or 14-bit depth Raw files, that's talking about the output of the Analog-to-Digital converter (A/D). Analog-to-digital converter - Wikipedia, the free encyclopedia

The A/D converts the analog voltage values the pixels develop when exposed to light into digital numbers.

8 bits = 256 possible discrete values (0-255)
12 bits = 4,096 possible discrete values (0-4,095)
14 bits = 16,384 possible discrete values (0-16,383)
16 bits = 65,536 possible discrete values (0-65,535)

Here is the kicker, though - Photoshop's 16-bit mode only uses 15 bits plus one level - 32,769 values (0-32,768).

32,768 tonal steps per channel is, for human vision purposes, way more than sufficient to describe the data coming off a digital device.
From an engineering perspective, the 0-32,768 range gives an exact integer midpoint value, and having a precise midpoint is important for blending.
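A two-line illustration of why that midpoint matters:

```python
lo, hi = 0, 32768        # Photoshop-style range: 15 bits plus one level
print((lo + hi) // 2)    # 16384 -- the midpoint is an exact code value

lo, hi = 0, 65535        # a true 16-bit range
print((lo + hi) / 2)     # 32767.5 -- midpoint falls between two codes
```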

Olympus Microscopy Resource Center | Digital Imaging in Optical Microscopy - Introduction to CMOS Image Sensors
 