What is a pixel?

KmH

In memoriam
Supporting Member
Joined
Apr 9, 2009
Messages
41,401
Reaction score
5,706
Location
Iowa
Website
kharrodphotography.blogspot.com
Can others edit my Photos
Photos OK to edit
From Wikipedia.org:

In digital imaging, a pixel, or pel (picture element), is a single point in a raster image, or the smallest addressable screen element in a display device; it is the smallest unit of picture that can be represented or controlled.

Most pixels are square, which is why they can't be called dots. Some camera image sensors, though, have had rectangular pixels; the Nikon D1X image sensor is one example.

The D3s and D300s I use have 12.1 and 12.3 megapixel (MP) image sensors, respectively. In round numbers, that's 12 million picture elements on each image sensor.

There are 2 kinds of image sensor: Charge-Coupled Devices (CCDs), which were invented in 1969, the year I graduated from high school (Class of '69 forever! Yep, I'm an old guy), and Complementary Metal-Oxide-Semiconductor (CMOS) active-pixel sensors.

Both types of sensor do essentially the same thing: they capture light and convert it into electrical voltages (signal). CMOS image sensors use less power, and because CMOS uses less power, CMOS generates less heat. Heat is one source of image noise, so less heat means less image noise.
Part of the reason long digital exposures have more noise is that the image sensor gets hotter the longer power is applied to it.

Make a note here: the image sensor in a digital camera, whether it uses CCD or CMOS pixels, isn't a digital device; it's an analog device. Make another note here: neither type of image sensor can record color.

OK, so we now have the basics of what a pixel is.

About now you ask, "OK! But how does the voltage (pixel) get changed into a piece of a picture, and how come we can make color photographs if a camera image sensor can't record color?"

I'm glad you asked.

First, let's handle where the color comes from. The color is mathematically interpolated. For our purposes, only part of the Dictionary.com definition of interpolate is needed:
in·ter·po·late
2. Mathematics. to insert, estimate, or find an intermediate term in (a sequence).
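
To make the mathematical sense of the word concrete, here's a minimal sketch in Python (not the camera's actual algorithm, just plain linear interpolation) of estimating an in-between value from two known neighbors:

Code:
# Minimal linear interpolation: estimate an unknown value that sits
# between two known neighbors. Demosaicing uses the same idea, just in
# two dimensions and with cleverer weighting.
def interpolate(left, right, fraction=0.5):
    return left + (right - left) * fraction

print(interpolate(100, 120))  # estimates 110.0 for the halfway point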

Yep, the color is estimated, but the estimate is pretty accurate because of a filter array that is placed in front of the image sensor, called a Bayer Array:

[Image: Bayer Array color filter pattern]
Note that each array segment has 3 colors, red, green, and blue (RGB), and the array is passive. It just sits there in front of the pixels and uses no power.

Digital images are made using the RGB color model. A single Bayer Array segment has 2 green squares because human eyes are most sensitive to green light. The red square covers a single pixel, each green square covers a single pixel, and the blue square covers a single pixel. A 12 MP image sensor has 3,000,000 of those 4-pixel Bayer Array segments (4 x 3,000,000 = 12 million pixels).
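
Here's that arithmetic as a rough Python sketch, assuming a sensor of exactly 12,000,000 pixels laid out in 2x2 red-green-green-blue tiles (the round numbers are for illustration only):

Code:
# Back-of-the-envelope Bayer arithmetic for a hypothetical 12 MP sensor.
total_pixels = 12_000_000        # 12 MP, in round numbers
pixels_per_segment = 4           # one red, two green, one blue per 2x2 tile
segments = total_pixels // pixels_per_segment
print(segments)                  # 3,000,000 four-pixel Bayer segments

red_pixels   = segments * 1      # 3,000,000 pixels behind red filters
green_pixels = segments * 2      # 6,000,000 pixels behind green filters
blue_pixels  = segments * 1      # 3,000,000 pixels behind blue filters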

The light falling on any 4 pixels grouped right together like that is almost certainly all the same color and the same intensity, because those pixels are really, really small.

But not all red light is exactly red; more often it is some subtle shade of red. In the RGB color model, different shades of color can be made by adding differing amounts of the three colors in the model.

Pure red is R=255, G=0, B=0. Pure green is R=0, G=255, B=0. Pure blue is R=0, G=0, B=255.

Yellow is a mix: R=255, G=255, B=0. Cyan is a mix: R=0, G=255, B=255. Any shades of red, yellow, green, cyan, or blue in between will have some of all 3 RGB colors.

White is a mix of all 3 at maximum value: R=255, G=255, B=255.
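
Written out as (R, G, B) triplets, those mixes look like this; a quick Python sketch using 8-bit values (0-255):

Code:
# 8-bit RGB triplets for the pure and mixed colors mentioned above.
colors = {
    "red":    (255,   0,   0),
    "green":  (  0, 255,   0),
    "blue":   (  0,   0, 255),
    "yellow": (255, 255,   0),  # red + green
    "cyan":   (  0, 255, 255),  # green + blue
    "white":  (255, 255, 255),  # all three at maximum
}
for name, (r, g, b) in colors.items():
    print(f"{name:6s} R={r:3d} G={g:3d} B={b:3d}")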

So even though the image sensor can't record colors, with the Bayer Array in front of the pixels the voltage each pixel generates is determined in part by the color of the light falling on it, and the colors in the image can be mathematically interpolated.

The voltages are still analog information, though, and the mathematical interpolation can only be performed on digital data. The voltages the pixels generate are also really small, and they need to be amplified. How much the voltages get amplified is determined by the camera's ISO setting.
Once amplified, the voltages are input to an Analog-to-Digital (A/D) converter.
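
Here's a toy Python model of that chain: a tiny pixel voltage, amplified by a gain tied to the ISO setting, then quantized by a 12-bit A/D converter. The voltage, base ISO, and full-scale figures are invented purely for illustration; real cameras differ.

Code:
# Toy model of the analog signal chain: pixel voltage -> ISO gain -> 12-bit A/D.
# All of the numbers below are illustrative, not real camera values.
FULL_SCALE_VOLTS = 1.0   # assumed input range of the A/D converter
BIT_DEPTH = 12           # 12-bit conversion gives 4096 output levels

def iso_gain(iso, base_iso=200):
    # Assume amplification scales linearly with ISO above a base ISO.
    return iso / base_iso

def analog_to_digital(voltage):
    # Quantize an analog voltage into one of 2**BIT_DEPTH integer levels.
    levels = 2 ** BIT_DEPTH
    clipped = min(max(voltage, 0.0), FULL_SCALE_VOLTS)
    return round(clipped / FULL_SCALE_VOLTS * (levels - 1))

pixel_voltage = 0.0004                      # a very small sensor voltage
amplified = pixel_voltage * iso_gain(800)   # ISO 800 -> 4x gain in this model
print(analog_to_digital(amplified))         # a small integer out of 0..4095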

If the camera has been set up to record only Raw image data files, the output of the A/D converter is written to the memory card. The image data is not yet a photo you can see; it's all just 1's and 0's, or Raw data. The Raw image data file has to be converted into a photo outside the camera using any of many Raw converters.

If JPEG, TIFF, or Raw + JPEG has been selected for output, the JPEG and TIFF files have to be made in the camera.

In the camera, a demosaicing algorithm (an algorithm is a set of rules for solving a problem in a finite number of steps) is applied to the digital data. It interpolates the digitized voltages the image sensor and Bayer Array captured, and it further processes the image data to complete the JPEG or TIFF conversion before the image files are written to the memory card.
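
To show the principle, here's a bare-bones demosaicing sketch in Python, assuming an RGGB Bayer layout and nothing smarter than averaging same-colored neighbors in a 3x3 window. In-camera and Raw-converter algorithms are far more sophisticated, but they solve the same problem:

Code:
# Naive demosaicing sketch for an RGGB Bayer mosaic: each output pixel's
# missing color channels are estimated by averaging same-colored
# neighbors in the surrounding 3x3 window.

def bayer_color(row, col):
    # Which color filter sits over this pixel in an RGGB layout.
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def demosaic(mosaic):
    # mosaic: 2-D list of raw sensor values. Returns an (R, G, B) tuple per pixel.
    height, width = len(mosaic), len(mosaic[0])
    image = [[None] * width for _ in range(height)]
    for r in range(height):
        for c in range(width):
            samples = {"R": [], "G": [], "B": []}
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < height and 0 <= cc < width:
                        samples[bayer_color(rr, cc)].append(mosaic[rr][cc])
            image[r][c] = tuple(sum(v) / len(v) for v in
                                (samples["R"], samples["G"], samples["B"]))
    return image

# A tiny 4x4 mosaic of made-up raw values:
raw = [[10, 200, 12, 210],
       [190, 30, 180, 28],
       [11, 195, 14, 205],
       [185, 29, 175, 31]]
print(demosaic(raw)[1][1])  # full (R, G, B) estimate for one pixel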

Since JPEG is a lossy, compressed, final, ready-to-print file type, JPEG files require less memory card space. Unfortunately, because so much image data is discarded in making a JPEG file, JPEGs can't be edited very much, if at all.
 
Great read, Keith. Being the technically minded person I am, always wanting to know the nitty-gritty of my gear, it's review for me, but you put it in a nice, concise package. It should be very useful for people who want to understand how and why their cameras' sensors work the way they work.
 
Looks like few care to know how their image sensor works.

Less than 60 views so far.

Maybe the mods should move the thread to 'Beyond the Basics' or the new 'Camera Forums' section.
 
Thanks for taking the time to put this up. Great read.
 
Learned about the bayer array on Cambridgeincolor. Thanks Keith!
 
For some technical fun, read up on the image sensor that doesn't use a Bayer Array for color interpolation: the Foveon image sensor used in Sigma cameras.
 
Some people don't care much about the technicalities, but I'd really like to know. Thanks for a nice, short article. Being familiar with the terms will help me google them and absorb even more information :thumbsup:
 
Thanks for taking the time to put it together.

Great read. Would it be possible to link it in the tutorial thread? :D
 
Interesting read, thanks.

Just a thought / question: if it takes 4 'pixels' on the sensor to determine a single unit of color, that would suggest there are 4 times as many sensor pixels as output pixels, which you confirmed when you said a 12 MP sensor has approximately 3 million Bayer arrays. That in turn would at first suggest that a 12 MP sensor creates a 3 MP image natively, but a quick calculation using output image dimensions suggests my 10 MP 40D creates an image with 10 MP. From this I am 'interpolating' that the demosaicing algorithm includes some sort of comparison of surrounding colors that allows the single point of calculated color to be converted back into 4 pixels (probably with slightly different RGB values). Is this the case, or am I way off the mark?
 
Looks like few care to know how their image sensor works.

Less than 60 views so far.

Maybe the mods should move the thread to 'Beyond the Basics' or the new 'Camera Forums' section.
And you're surprised? Most people don't care how things work; they just want them to work. Take a poll; how many people do you think could tell you what the four components of a car engine's combustion cycle are?
 
Interesting read, thanks.

Just a thought / question: if it takes 4 'pixels' on the sensor to determine a single unit of color, that would suggest there are 4 times as many sensor pixels as output pixels, which you confirmed when you said a 12 MP sensor has approximately 3 million Bayer arrays. That in turn would at first suggest that a 12 MP sensor creates a 3 MP image natively, but a quick calculation using output image dimensions suggests my 10 MP 40D creates an image with 10 MP. From this I am 'interpolating' that the demosaicing algorithm includes some sort of comparison of surrounding colors that allows the single point of calculated color to be converted back into 4 pixels (probably with slightly different RGB values). Is this the case, or am I way off the mark?
Well, the color at each of the 4 pixels under the Bayer Array is represented by the voltage recorded at each of those 4 pixels, and it is translated into digital numerical values in 3 color channels during demosaicing.

Put another way, the demosaicing algorithm generates an RGB value for each pixel - R=something, G=something, B=something. So there are 12 MP worth of RGB color data.

Sometimes people confuse image-sensor megapixels (MP) with file-size megabytes (MB).

That each Raw converter uses a unique demosaicing algorithm is why each Raw converter outputs photos that look somewhat different from all the other Raw converters.

Also at issue is the bit depth the A/D conversion uses. Most Raw files have a 12-bit color depth, or 4,096 gradations of tone per pixel, per color channel (68,719,476,736 possible colors). Newer cameras use, or offer the option of choosing, a 14-bit color depth: 16,384 gradations of tone per pixel, per color channel (4,398,046,511,104 possible colors).
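
Those gradation and color counts are easy to verify; a quick Python check:

Code:
# Tonal gradations per channel and total possible colors at 12-bit and 14-bit depth.
for bits in (12, 14):
    levels = 2 ** bits      # gradations per color channel
    total = levels ** 3     # R x G x B combinations
    print(f"{bits}-bit: {levels:,} levels per channel, {total:,} possible colors")
# 12-bit: 4,096 levels per channel, 68,719,476,736 possible colors
# 14-bit: 16,384 levels per channel, 4,398,046,511,104 possible colors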
 
Some people don't care much about the technicalities, but I'd really like to know. Thanks for a nice, short article. Being familiar with the terms will help me google them and absorb even more information :thumbsup:
For those who ignore the technicalities, a large portion of how to do photography well will remain a mystery, because they will never know or understand how to get the tools (camera, lens, light) to do what they want the tools to do.
 
