Let's see if I understand this about raw files.

480sparky

I've been trying to understand raw camera files a bit more, and I think I've got my head wrapped around it. But I want to make sure I have it straight.

So if any of my lines of thinking are wrong, please let me know:

1. Each camera sensor pixel is colorblind, so either a red, blue or green filter is placed over it (typically a Bayer array, but other methods are used). This causes the pixel to only record one color of the spectrum.
2. When the shutter opens, light strikes the pixels, and each pixel records its red, green or blue value based on how many photons strike it during the exposure.
3. A given pixel can record 4,096 shades of its pre-assigned color, assuming a 12-bit depth.
4. The pixel will create an electrical charge, increasing it as more photons strike the pixel.
5. When the shutter closes, the camera's processor queries each pixel for its value, which can range from 0 to 4,095 based on how many photons struck the pixel.
6. If the camera is capable of recording raw, and that is the option chosen by the user, the values created by each sensor pixel are recorded.
7. The processor will then use adjacent pixels of red, green and blue values to create an RGB pixel to display in the final image (a rough sketch of this demosaicing step follows the list). It will compress the 12-bit data to 8-bit to create an embedded jpeg thumbnail in the raw file. It will also compress the 12-bit data to an 8-bit full-size jpeg to record on the memory card if that is an option chosen by the user.
8. Once the raw file is saved, the data itself (i.e., the values recorded from each pixel) cannot be altered. It can be manipulated many ways and many times, but the original data obtained from the sensor pixels are never changed even with heavy and multiple post-processing.
9. In-camera settings, such as sharpening, white balance, contrast, saturation, etc. are saved along with the raw data, but not applied to it (unless the image is to be saved as a jpeg).
10. All digital cameras create the raw data, but not all of them are capable of recording it. Those that don't, or where the user hasn't chosen to record it, discard the sensor pixel data once the 8-bit jpeg is created.
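
To make point 7 concrete, here is a minimal demosaic sketch in Python. It assumes an RGGB Bayer layout and fills the missing colors by simple neighbor averaging; real in-camera pipelines are far more sophisticated and, as a later reply notes, often proprietary.

```python
# Toy bilinear demosaic for an RGGB Bayer mosaic (illustrative only).
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """mosaic: 2-D float array of raw values under an RGGB pattern."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((3, h, w))
    masks[0, 0::2, 0::2] = 1        # red photosites
    masks[1, 0::2, 1::2] = 1        # green photosites (even rows)
    masks[1, 1::2, 0::2] = 1        # green photosites (odd rows)
    masks[2, 1::2, 1::2] = 1        # blue photosites
    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]])
    for c in range(3):
        # Weighted average of known same-color neighbors fills the two
        # missing color values at every photosite.
        num = convolve(mosaic * masks[c], kernel, mode='mirror')
        den = convolve(masks[c], kernel, mode='mirror')
        rgb[..., c] = num / den
    return rgb

bayer = np.random.randint(0, 4096, (4, 4)).astype(float)  # toy 12-bit data
print(bilinear_demosaic(bayer).shape)                      # (4, 4, 3)
```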


Have I got it right?
 
That all sounds right to me. A little detail to add to 5: the charge at the photosite is not recorded directly. It passes through an amplifier, whose gain is controlled by the ISO value, and once amplified, is read by the A/D converter.
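
As a rough illustration of that amplify-then-digitize step, here is a Python sketch. The linear ISO-to-gain mapping, full-scale voltage, and base ISO are illustrative assumptions, not a model of any real sensor.

```python
# Photosite voltage -> ISO-controlled gain -> 12-bit A/D code (sketch).
def adc_read(voltage, iso=100, base_iso=100, full_scale=1.0, bits=12):
    gain = iso / base_iso                        # amplifier gain set by ISO
    amplified = min(voltage * gain, full_scale)  # clips at saturation
    levels = 2 ** bits                           # 4096 levels at 12 bits
    return round(amplified / full_scale * (levels - 1))  # code 0..4095

print(adc_read(0.25, iso=100))  # -> 1024
print(adc_read(0.25, iso=400))  # same light, 4x gain -> 4095 (clipped)
```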
 
I've been trying to understand raw camera files a bit more, and I think I've got my head wrapped around it. But I want to make sure I have it straight.

So if any of my lines of thinking are wrong, please let me know:

1. Each camera sensor pixel is colorblind, so either a red, blue or green filter is placed over it (typically a Bayer array, but other methods are used). This causes the pixel to only record one color of the spectrum. The pixel only records the luminosity of that color, not the color itself, so yes, pixels only record grayscale values.
2. When the shutter opens, light strikes the pixels, and each pixel records its red, green or blue value based on how many photons strike it during the exposure. Yes.
3. A given pixel can record 4,096 shades of its pre-assigned color, assuming a 12-bit depth. No. The pixels are analog and are not limited to 4096 values.
4. The pixel will create an electrical charge, increasing it as more photons strike the pixel. Essentially. Pixels can become saturated, at which point they cannot record any more photons that may strike them.
5. When the shutter closes, the camera's processor queries each pixel for its value, which can range from 0 to 4,095 based on how many photons struck the pixel. No. The voltage each pixel developed has to first be amplified, then the amplified signal is run through an analog-to-digital converter. The output of the AD converter determines the bit depth: 12-bit output is 4,096 gradations of grayscale and 14-bit is 16,384 gradations of grayscale.
6. If the camera is capable of recording raw, and that is the option chosen by the user, the values created by each sensor pixel are recorded. Essentially. But it's the AD converter output that gets recorded, not the analog voltage the pixel developed.
7. The processor will then use adjacent pixels of red, green and blue values to create an RGB pixel used to display in the final image. Maybe. In many cameras the actual process is proprietary.
It will compress the 12-bit data to 8-bit to create an embedded jpeg thumbnail in the raw file. It will also compress the 12-bit data to an 8-bit full-size jpeg to record on the memory card if that is an option chosen by the user. Yep.
8. Once the raw file is saved, the data itself (i.e., the values recorded from each pixel) cannot be altered. No. The values each pixel recorded got altered twice: first in the amplifier circuits, then in the AD conversion. It can be manipulated many ways and many times, but the original data obtained from the sensor pixels are never changed even with heavy and multiple post-processing. The Raw data can indeed be permanently altered. Parametric raw converters are favored because they do not alter Raw pixel data: parametric means all changes are stored as edit instructions in an .XMP (XML) metadata file appended to, or kept alongside, the Raw image data file (a small sketch of this follows the list).
9. In-camera settings, such as sharpening, white balance, contrast, saturation, etc. are saved along with the raw data, but not applied to it (unless the image is to be saved as a jpeg). Camera makers' Raw converters can be configured to apply those settings to the Raw files made by that camera.
10. All digital cameras create the raw data, but not all of them are capable of recording it. Yes. Those that don't, or where the user hasn't chosen to record it, discard the sensor pixel data once the 8-bit jpeg is created. The pixel data is not really discarded, but it is no longer accessible on a pixel-by-pixel basis.
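
To illustrate the parametric idea from point 8, here is a small Python sketch. The file name, the "exposure_gain" parameter, and the use of JSON in place of real XMP/XML are all made up for illustration.

```python
# Non-destructive ("parametric") editing: raw values are never modified;
# edits live in a sidecar metadata file and are applied only at render time.
import json

raw_values = [512, 1023, 2047, 4095]   # stand-in for untouched raw data

def save_edits(edits, sidecar_path="IMG_0001_sidecar.json"):
    # A real converter writes XMP (XML); JSON keeps the sketch short.
    with open(sidecar_path, "w") as f:
        json.dump(edits, f)

def render(raw, edits):
    gain = edits.get("exposure_gain", 1.0)
    return [min(int(v * gain), 4095) for v in raw]

edits = {"exposure_gain": 1.5, "white_balance": "daylight"}
save_edits(edits)
print(render(raw_values, edits))  # adjusted output
print(raw_values)                 # original data, still intact
```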

Have I got it right?
Yes, and no.

A 3 transistor (3T) CMOS pixel:


Active pixel sensor - Wikipedia, the free encyclopedia

The 3T pixel comprises the same elements as the 4T pixel except the transfer gate and the pinned photodiode. The reset transistor, M_rst, acts as a switch to reset the floating diffusion, which acts in this case as the photodiode. When the reset transistor is turned on, the photodiode is effectively connected to the power supply, V_RST, clearing all integrated charge. Since the reset transistor is n-type, the pixel operates in soft reset. The read-out transistor, M_sf, acts as a buffer (specifically, a source follower), an amplifier which allows the pixel voltage to be observed without removing the accumulated charge. Its power supply, V_DD, is typically tied to the power supply of the reset transistor. The select transistor, M_sel, allows a single row of the pixel array to be read by the read-out electronics.
 
Raw converters do quite a bit to a Raw file to make it look like what we see.

The image sensor is linear and has a Gamma response of 1.0, but our eyes aren't linear and have a Gamma response between 2.0 and 3.0. If you'd like to explore the math and other concepts involved: Gamma encoding - Bing

Here is an approximate example of what the image sensor sees before a Raw converter alters it:

[Attached image: RawLinear.jpg]
and after the Raw converter has demosaiced, sharpened, and applied a non-linear curve to that image, which is what it presents to us for likely further editing:

[Attached image: Converted.jpg]
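
In code terms, the non-linear curve step looks roughly like this sketch (a plain 2.2 gamma is assumed; real converters apply more elaborate tone curves):

```python
# Gamma-encode linear sensor values for display (illustrative).
linear = [0.05, 0.18, 0.50, 1.00]           # normalized linear values
encoded = [v ** (1 / 2.2) for v in linear]  # gamma 2.2 encoding
for lin, enc in zip(linear, encoded):
    print(f"linear {lin:.2f} -> encoded {enc:.2f}")
# Middle gray (0.18 linear) maps to about 0.46: midtones are lifted,
# which is why the converted image looks natural rather than dark.
```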
 
The reason I'm asking this is I have yet to find one, single source that truly explains raw files that satisfies my curiosity.

What I'm really interested in is what happens at the sensor-pixel level, between the shutter opening and post-processing. How the data is collected, manipulated, transferred and stored. And not just catch-phrases like, "Raw is the digital equivalent of film negatives" or "Shooting jpeg means you throw away a lot of data". While those statements may be true, unless I can really get a grip on the entire process, step by step (without getting as technical as Keith's schematic), I have a hard time with it.

What I'm working on is to create this one, single source I have yet to find.
 
Don't understand the difference between raw and jpeg. I noticed my camera will fit a lot fewer pictures on the memory card in raw.
For the average hobby photographer, what is the benefit of raw pictures vs jpegs?
 
Don't understand the difference between raw and jpeg. I noticed my camera will fit a lot fewer pictures on the memory card in raw.
For the average hobby photographer, what is the benefit of raw pictures vs jpegs?

This is precisely why I'm trying to collect all this information scattered across thousands of websites into one source.

Basically, when you take an image, the raw data (that's why it's called raw) from the sensor pixels is processed by the camera. For most people, shooting jpeg means all that data is processed and then thrown away when the jpeg is created. Shooting raw means you retain that information. There are advantages to doing so, but also disadvantages.
 
What I'm really interested in is what happens at the sensor-pixel level, between the shutter opening and post-processing. How the data is collected, manipulated, transferred and stored.

...

What I'm working on is to create this one, single source I have yet to find.

You're going to have problems there. I have a set of three full-sized textbooks that you would need to read to fully understand this from start to end: one on optoelectronics and photonics, which describes the process of capturing photons; one on advanced analogue circuit design, which details the amplification and control circuits for sensors; and one on digital signal processing. That last book has maths in it that will probably require you to get a fourth book on advanced engineering mathematics.

There is a whole world of knowledge behind each individual step. For instance, even the process of converting a signal from analogue to digital incorporates concepts such as aliasing, quantisation, and all that wonderful mathy stuff that will alter the final value of the raw pixel.

But on the upside, along with kmh's pedantic corrections, you pretty much know the process in a nutshell :)
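
As a tiny taste of the quantisation point, here is a sketch of how the A/D step alone alters a value (the bit depth and full-scale voltage are illustrative assumptions):

```python
# Quantisation: the A/D converter can only output whole codes, so the
# analogue value is altered by up to half a step.
def quantise(signal, bits=12, full_scale=1.0):
    step = full_scale / (2 ** bits - 1)
    code = round(signal / step)
    return code, code * step - signal   # stored code and its error

code, err = quantise(0.300017)
print(code, f"{err:+.6f}")  # the stored code differs slightly from the input
```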
 
What I'm really interested in is what happens at the sensor-pixel level, between the shutter opening and post-processing. How the data is collected, manipulated, transferred and stored.

...

What I'm working on is to create this one, single source I have yet to find.

You're going to have problems there. I have a set of three full-sized textbooks that you would need to read to fully understand this from start to end: one on optoelectronics and photonics, which describes the process of capturing photons; one on advanced analogue circuit design, which details the amplification and control circuits for sensors; and one on digital signal processing. That last book has maths in it that will probably require you to get a fourth book on advanced engineering mathematics.

There is a whole world of knowledge behind each individual step. For instance, even the process of converting a signal from analogue to digital incorporates concepts such as aliasing, quantisation, and all that wonderful mathy stuff that will alter the final value of the raw pixel.

But on the upside, along with kmh's pedantic corrections, you pretty much know the process in a nutshell :)

The nutshell version is what I want. I'm not looking to describe the process of moving an electron along an electronic circuit, or the subtleties of demosaicing algorithms.
 
What I'm really interested in is what happens at the sensor-pixel level, between the shutter opening and post-processing. How the data is collected, manipulated, transferred and stored.

...

What I'm working on is to create this one, single source I have yet to find.

You're going to have problems there. I have a set of three full-sized textbooks that you would need to read to fully understand this from start to end: one on optoelectronics and photonics, which describes the process of capturing photons; one on advanced analogue circuit design, which details the amplification and control circuits for sensors; and one on digital signal processing. That last book has maths in it that will probably require you to get a fourth book on advanced engineering mathematics.

There is a whole world of knowledge behind each individual step. For instance, even the process of converting a signal from analogue to digital incorporates concepts such as aliasing, quantisation, and all that wonderful mathy stuff that will alter the final value of the raw pixel.

But on the upside, along with kmh's pedantic corrections, you pretty much know the process in a nutshell :)

Good thing they make that button that says Auto so I don't have to know all that fancy schmancy stuff. :thumbup::lol:
 
Good thing they make that button that says Auto so I don't have to know all that fancy schmancy stuff. :thumbup::lol:

Wait till you find out about the "P" or "Professional" mode on your camera. It will blow your mind.

There's a cool flowcharty thing with a quick description in this article that might interest you: Advantages of Fourteen Bit Cameras-- Part I

Good article, but I did find one thing I disagree with: his comment on bit depth vs dynamic range. Unfortunately for him, his views from the camera world do not match those from the digital signal processing world. While I understand where he's coming from, I have about three textbooks that disagree with him and say that dynamic range is very much a function of bit depth, just not under his description of what "dynamic range" means. The answer is, as always, in the middle: both uses of the term are correct.
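
For what it's worth, the DSP-textbook version of that claim works out to roughly 6.02 dB of quantisation-limited dynamic range per bit:

```python
# Quantisation-limited dynamic range as a function of A/D bit depth.
import math

for bits in (8, 12, 14, 16):
    dr_db = 20 * math.log10(2 ** bits)   # ~6.02 dB per bit
    print(f"{bits}-bit: {2 ** bits:6d} levels, ~{dr_db:.1f} dB")
# The sensor's own analogue dynamic range is a separate limit, which is
# the camera-world usage of the term.
```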
 
The article also perpetuates the myth that linear Raw files are 'dark'. They aren't anything, of course; they are simply lists of numbers to be interpreted. They only look 'dark' when interpreted incorrectly: for example, when displayed on a graphics/monitor system that applies a reverse tone curve to balance the tone curve applied when converting from linear to logarithmic. If they were displayed linearly they would look perfectly natural: it's the display-system mismatch that causes the 'darkness', not the nature of the data itself.
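
A quick sketch of that mismatch, assuming an idealized display that decodes its input with a plain 2.2 gamma:

```python
# Feed a gamma-decoding display linear data and midtones come out dark;
# gamma-encode first and they display correctly.
linear_midgray = 0.18
shown_raw = linear_midgray ** 2.2                  # ~0.02: looks 'dark'
shown_ok = (linear_midgray ** (1 / 2.2)) ** 2.2    # ~0.18: correct
print(f"{shown_raw:.3f} vs {shown_ok:.3f}")
```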
 
