"That is true for a "linear" encoding scheme, which the RAW file is but the JPEG is not."

Nothing is "linear" here... that's the whole point. Everything is exponential, or in some cases some other curve, but never linear.
My apologies if I used common technical terminology that you are not familiar with. In fact my statement was precisely correct.
Linear encoding refers, of course, to a linear gamma curve. Virtually all RAW data files use linear encoding, and virtually everyone who is familiar with digital data encoding knows that meaning and uses it consistently and commonly.
Different digital encodings have different advantages. Gamma compression of digitally encoded analog data is commonly used (photography is hardly unique here) to preserve a higher SNR at the expense of dynamic range while using fewer bits. The most common example (though perhaps not one most users realize) is the use of Mu-Law encoding for voice traffic in the Public Switched Telephone Network (PSTN).
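The Mu-Law trade-off mentioned above can be sketched in a few lines. This is a minimal illustration of the standard G.711 mu-law companding formula (the function names are mine, not from any library): small-amplitude samples are stretched to occupy more of the coded range, which is exactly how gamma-style compression buys SNR for quiet signals out of a fixed bit budget.

```python
import math

MU = 255  # mu-law parameter used in the North American PSTN (G.711)

def mu_law_compress(x: float) -> float:
    """Compand a linear sample in [-1, 1] into a companded value in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y: float) -> float:
    """Invert the companding, recovering the linear sample."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A quiet sample at 1% of full scale is mapped to roughly 23% of the
# companded range, so it survives coarse quantization far better than
# it would under linear encoding.
print(round(mu_law_compress(0.01), 3))
print(round(mu_law_expand(mu_law_compress(0.01)), 4))
```

The same shape of argument applies to gamma-encoded JPEGs: the nonlinear curve spends the available code values where the signal (or the eye) needs them most.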
Some good sources of information, for photographers:
Understanding Gamma Correction
Learn about RAW, JPEG, and TIFF with the digital photography experts at Photo.net.
A more technical article:
Linear Encoding
I think it's abundantly clear by now what everybody meant, apaflo. You're talking in the language of digital data encoding, where "linear" apparently refers to actual luminance.
I'm talking in the language of photography, where exponential increases in actual physical light are usually described in linear terms (+/- EVs, or stops).
The miscommunication was cleared up a page ago, and no matter which community's language you use, the answer to the OP's question is the same, so who cares?
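The two vocabularies above can be reconciled in one line of arithmetic. A tiny sketch (the helper name is mine, not from the thread): photographers count stops linearly, but each stop doubles the physical light, so the "linear" photographic scale is exponential in luminance.

```python
# Each +1 EV ("stop") doubles the amount of physical light, so the
# linear-sounding EV scale is exponential in actual luminance.
def ev_to_ratio(ev: float) -> float:
    """Linear light ratio corresponding to an exposure change of `ev` stops."""
    return 2.0 ** ev

for ev in (0, 1, 2, 3):
    print(f"+{ev} EV -> {ev_to_ratio(ev):g}x the light")
```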