High ISO vs Underexposing

I thought this was interesting, from the Hogan review of the earlier Sony A7 models:

"But let me lay out the basics: the D800E will shoot 14-bit raw files with no underlying artifacts and fully recoverable data. The A7r will shoot 11-bit raw files with potential posterization issues in the data. The same is true of the A7 versus a D610, too.

Let’s start with the 11-bit thing. Sony always uses compression in storing raw files. The way they do that is quite clever. They slice each pixel row into 32-pixel blocks. In a Bayer sensor, that means two colors (each with 16 data points). For each 16 pixels of a color, Sony looks at the minimum and maximum pixel values and stores those. For the other 14 pixels they store a 7-bit value that is offset from the minimum value. In essence, they get 32 pixel values stored in 32 bytes, when normally 11-bit storage for that data should take 44 bytes.

This is not lossless compression. It is highly lossy. Nor is it visually lossless. That’s because when you have an extreme set of values in the 32-pixel block (e.g. sun peeking out from behind a tree edge), you get posterization of data. Don’t believe me? See this article, which describes it better than I can in the limited space of a review. Indeed, every A7/A7r owner should probably have a copy of RawDigger so that they can understand exactly where the issues in their raw files lie. Even Nikon’s optional visually lossless compression scheme does a better job at this, as it hides its posterization only in very bright values that our eyes just don’t resolve."
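To make the scheme Hogan describes a bit more concrete, here's a rough Python sketch of that min/max-plus-7-bit-offsets idea. The linear step size (span / 127) is my own assumption, since the quote doesn't spell out the exact scaling Sony uses, but it shows why a huge min-to-max span inside one block wrecks all the in-between values:

```python
import numpy as np

def compress_group(samples):
    # Toy model of the 16-samples-per-color scheme quoted above: keep the
    # exact min and max of the group and squeeze every other sample into a
    # 7-bit offset from the min.  The linear step (span / 127) is an
    # assumption; the real ARW2 format has its own scaling, but the effect
    # is the same: a wide min-to-max span means a coarse step for everything
    # in between.
    lo, hi = float(samples.min()), float(samples.max())
    step = max((hi - lo) / 127.0, 1.0)          # 7 bits -> 128 possible offsets
    deltas = np.round((samples - lo) / step).astype(int)
    return lo, step, deltas

def decompress_group(lo, step, deltas):
    return lo + deltas * step

# A smooth 14-bit gradient sitting next to a blown highlight
# ("sun peeking out from behind a tree edge"):
group = np.array([1200, 1210, 1220, 1230, 1240, 1250, 1260, 1270,
                  1280, 1290, 1300, 1310, 1320, 1330, 1340, 16000], dtype=float)
lo, step, deltas = compress_group(group)
restored = decompress_group(lo, step, deltas)
print("step per code:", round(step, 1))          # ~116 raw levels per step
print("original:", group[:6].astype(int))
print("restored:", restored[:6].astype(int))     # neighbouring values collapse together
```

With a quiet block the step stays near 1 and nothing is lost; it's only the extreme-contrast blocks where the step balloons and posterization shows up.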

So... when is "RAW data" not really unaltered data, right off the sensor? Has Sony adopted this same 14-to-11-bit squish-down in later cameras in the A7 line, or in other cameras?
 
"But let me lay out the basics: the D800E will shoot 14-bit raw files with no underlying artifacts and fully recoverable data. The A7r will shoot 11-bit raw files with potential posterization issues in the data. The same is true of the A7 versus a D610, too

I read through this until I started to get brain pain (out of practice at concentrating that hard anymore), LOL: Noise, Dynamic Range and Bit Depth in Digital SLRs -- page 3, by Emil Martinec, Professor of Physics at the University of Chicago. I'll leave it to others to debate the validity of his statements.

Basically the paper claims that none of the 14-bit cameras (circa 2010) could utilize all the data, because "in the absence of noise, the quantization of an analog signal introduces an error, as analog values are rounded off to a nearby digitized value in the ADC. In images, this quantization error can result in so-called posterization as nearby pixel values are all rounded to the same digitized value". The author goes on to say that there is enough loss in the data that, in practice, there wouldn't be much difference between an 11-bit file and a 14-bit one, and unless you have something besides a standard monitor, 8 bits would be sufficient. Applying this to the OP's original comment about underexposing: doing so effectively reduces the bit depth of the raw file.
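If I follow that last point, the arithmetic is simple. These are hypothetical numbers just to illustrate it, not measurements from any particular camera:

```python
# A 14-bit ADC records 2**14 = 16384 levels at full scale.  Underexpose by
# three stops and the brightest thing in the scene only reaches 1/8 of full
# scale, so it gets recorded with at most 16384 / 8 = 2048 levels --
# effectively an 11-bit file before noise is even considered.
full_scale = 2 ** 14
for stops_under in range(5):
    usable = full_scale // (2 ** stops_under)
    print(f"{stops_under} stops under: ~{usable} levels (~{usable.bit_length() - 1} bits)")
```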

As to the differences in compression, he says: "The 'lossy' form of NEF compression is a clever use of information theory to save space by eliminating redundant raw levels. The noise which is unavoidably present in light effectively dithers tonal transitions so that the compression is lossless in that the image is still encoded without loss of visual information. In this sense, 'lossy' compression is perhaps an inappropriate appellation. Amusingly, Nikon engineers seem to have forgotten the logic behind the thinning of raw levels when upgrading to 14-bit tonal depth -- the NEF compression table has roughly four times as many entries (2753) for the 14-bit table as it does (689) for the 12-bit table in the D3 and D300, even though there is no purpose to the extra values given the relation between noise and quantization step in efficient data encoding".
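Here's a rough sketch of the level-thinning idea Martinec is describing. The square-root companding curve is my own stand-in, not Nikon's actual NEF lookup table, but it shows how reconstruction error can stay hidden below the shot noise:

```python
import numpy as np

# Photon shot noise grows roughly as sqrt(signal), so raw levels can be spaced
# further apart in the highlights without losing visual information: the noise
# already dithers the transitions.  The sqrt curve below is an illustrative
# assumption; only the table size (2753) comes from the quote above.
max_raw = 2 ** 14 - 1
table_size = 2753

def encode(raw):
    # Compress the 14-bit range into ~2753 codes with a sqrt companding curve.
    return np.round(np.sqrt(raw / max_raw) * (table_size - 1)).astype(int)

def decode(code):
    # Expand back; error in the highlights stays well below shot noise there.
    return (code / (table_size - 1)) ** 2 * max_raw

raw = np.array([50.0, 500.0, 5000.0, 15000.0])
err = np.abs(decode(encode(raw)) - raw)
shot_noise = np.sqrt(raw)                      # crude shot-noise estimate, in raw units
print("round-trip error:", np.round(err, 1))
print("shot noise:      ", np.round(shot_noise, 1))
```

The round-trip error is a few raw levels at most, while the shot noise at those signal levels is tens of levels, which is the sense in which the compression is "visually lossless".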
 
If you read the Thom Hogan article, it seems that theory from 2010 and reality diverge, and there is an actual, observable loss of quality in the resulting photos made from these heavily-compressed files, at least to an observer who is trained in noticing fine details and subtle differences. Not that posterization is all that tough to spot...

Hogan's point is that the D800 and the A7 bodies were using vastly different raw-file-writing strategies. Look at the horrible posterization from an 11-bit Sony "RAW" file that's been heavily compressed, not with Nikon's much better lossy compression but with Sony's own routines, and not on Nikon .NEF files but on Sony files. Different beasts.

RawDigger: detecting posterization in SONY cRAW/ARW2 files | RawDigger
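For anyone curious what RawDigger is actually flagging, one crude way to spot this kind of posterization is to look for combed gaps in the raw values of a patch that should be a smooth gradient. This is just an illustration of the idea, not RawDigger's method, and the gap threshold is an arbitrary assumption:

```python
import numpy as np

def missing_level_runs(raw_patch, min_gap=8):
    levels = np.unique(raw_patch.astype(int))   # raw values actually present
    gaps = np.diff(levels)
    return gaps[gaps >= min_gap]                # suspicious holes in the tonal scale

smooth = np.arange(1200, 1800)                            # every level populated
posterized = np.repeat(np.arange(1200, 1800, 116), 10)    # combed, like the heavy-compression case
print(len(missing_level_runs(smooth)))       # 0
print(len(missing_level_runs(posterized)))   # several gaps of ~116 levels
```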

Now this shot (image02.png, from the RawDigger article)?

That is some flat-out AWFUL posterization... which makes the 2010 theoretical article little more than theorizing about an ideal system, and flat-out wrong about an actual product sold by a real manufacturer some years later.

Nikon's lossy NEF compression is excellent, but it's clear that Sony taking a full sensor capture and crushing it into an 11-bit, heavily compressed raw file is not the same thing as a Nikon .NEF file.
 
"If you read the Thom Hogan article, it seems that theory from 2010 and reality diverge, and there is an actual, observable loss of quality in the resulting photos made from these heavily-compressed files, at least to an observer who is trained in noticing fine details and subtle differences. Not that posterization is all that tough to spot..."

Wouldn't be the first time theory and reality ran on separate tracks. The truth likely lies somewhere between the two.
 
