Bayer interpolation and downsampling

Pier

Hi, I've read a bit about Bayer interpolation and how the actual resolution of a camera sensor is only about 50% of what the manufacturers advertise.
If I take my camera and shoot at half of its resolution, do I counterbalance the Bayer interpolation in some way?
Pier
 
I prefer Advil to Bayer. Hope that helps.
 
I'm not sure how lower resolutions are obtained in camera (e.g. binning), but you could always downsample in post-processing if you think you need it.
However, since a careful conversion from the Bayer pattern to RGB also involves noise reduction and sharpness enhancement, it may not be worth discarding pixels a priori.
 
I prefer Advil to Bayer. Hope that helps.

Is it something you can choose on your camera? Or do you mean when converting from raw?

I'm not sure how lower resolutions are obtained in camera (e.g. binning), but you could always downsample in post-processing if you think you need it.
However, since a careful conversion from the Bayer pattern to RGB also involves noise reduction and sharpness enhancement, it may not be worth discarding pixels a priori.


Any suggestions on how to convert from Bayer to RGB? Links or books?
 
The camera or the raw converter does that. The image sensor in a digital camera is an analog device that isn't capable of recording color; it can only record luminosity (grayscale).

Of course RGB is an additive color model, while photographic prints are made using the subtractive color model CMYK.

Reading -
http://wwwimages.adobe.com/www.adob...ly/prophotographer/pdfs/pscs3_renderprint.pdf
http://wwwimages.adobe.com/www.adob...e/en/products/photoshop/pdfs/linear_gamma.pdf

http://en.wikipedia.org/wiki/Active_pixel_sensor
http://en.wikipedia.org/wiki/Bayer_filter
http://en.wikipedia.org/wiki/Rgb
http://en.wikipedia.org/wiki/Cmyk
http://en.wikipedia.org/wiki/Color_space

http://www.cambridgeincolour.com/color-management-printing.htm
 
Hi, I've read a bit about Bayer interpolation and how the actual resolution of a camera sensor is only about 50% of what the manufacturers advertise.
If I take my camera and shoot at half of its resolution, do I counterbalance the Bayer interpolation in some way?

It's not technically half the resolution. One thing about Bayer interpolation is that it's not the same as typical upscaling. Take a look at the following Bayer matrix:
RGRG
GBGB
RGRG
GBGB

Taken in non-overlapping groups, this represents a 2x2 set of true-colour pixels: combine each R with a B and two Gs, and you end up with four pixels' worth of colour data from the sixteen photosites. However, because of the way this grid is arranged, every individual photosite actually contributes to the surrounding output pixels. If you instead take every overlapping 2x2 window of this 4x4 grid, you can extract 9 individual pixels, each with some genuinely unique data (helped by the green sites contributing more to luminance than red or blue). This is not the same as normal interpolation, where data is simply guesstimated by some algorithm from the bits beside it; this data is an educated guess, for want of a much better term.
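To make the overlapping-window idea concrete, here is a minimal sketch in Python/NumPy (my own illustration, not any camera's actual demosaicing code), assuming the RGGB mosaic shown above. Every output pixel is built from one 2x2 window, so a 4x4 mosaic yields 3x3 = 9 output pixels, and each photosite feeds up to four of them:

```python
import numpy as np

def demosaic_overlapping(mosaic):
    """Naive demosaic of an RGGB Bayer mosaic using overlapping 2x2 windows.

    An HxW mosaic yields an (H-1)x(W-1) RGB image, so every photosite
    contributes to up to four output pixels. This only illustrates the
    sharing idea; it is not a production demosaicer.
    """
    h, w = mosaic.shape
    out = np.zeros((h - 1, w - 1, 3))
    for y in range(h - 1):
        for x in range(w - 1):
            win = mosaic[y:y + 2, x:x + 2]
            # Which colour sits at each corner depends on window parity;
            # for RGGB the even/even window is [[R, G], [G, B]].
            if y % 2 == 0 and x % 2 == 0:    # R G / G B
                r, g, b = win[0, 0], (win[0, 1] + win[1, 0]) / 2, win[1, 1]
            elif y % 2 == 0:                 # G R / B G
                r, g, b = win[0, 1], (win[0, 0] + win[1, 1]) / 2, win[1, 0]
            elif x % 2 == 0:                 # G B / R G
                r, g, b = win[1, 0], (win[0, 0] + win[1, 1]) / 2, win[0, 1]
            else:                            # B G / G R
                r, g, b = win[1, 1], (win[0, 1] + win[1, 0]) / 2, win[0, 0]
            out[y, x] = (r, g, b)
    return out

# 16 photosites in, 9 RGB pixels out: the "9 pixels" case described above.
mosaic = np.arange(16, dtype=float).reshape(4, 4)
print(demosaic_overlapping(mosaic).shape)  # (3, 3, 3)
```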

As such, except for the rows of pixels at the edge of the sensor, each pixel in the final image is actually made from data that is partly unique and partly shared with the pixels next to it. This can be seen most clearly on cameras without an anti-aliasing filter: when a detail gets small enough that it falls on only one photosite, the camera will actually produce a colour cast on that one pixel in the final image.

In practical terms, what are you trying to achieve?
You want a tack-sharp image? In a practical sense you don't need to reduce your resolution by half. Maybe by a third, or maybe by a quarter, but if you reduce by half you're likely throwing away some unique data.
You want a perfect scientific accounting of which photons hit which pixel? Then you need software like that used by astronomers, which creates a superpixel from each non-overlapping 2x2 group without interpolating (a sketch of this follows below). I.e. your 4x4 grid gives you 4 pixels in the end, and each of those 4 pixels has a unique set of photons hitting it.
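For reference, a sketch of that superpixel binning (again my own NumPy illustration, assuming an RGGB mosaic; the superpixel or "half" modes in astro and raw software do essentially this, with more bookkeeping):

```python
import numpy as np

def superpixel_bin(mosaic):
    """2x2 superpixel debayering of an RGGB mosaic, with no interpolation.

    Each non-overlapping 2x2 cell [[R, G], [G, B]] becomes exactly one RGB
    pixel, so an HxW mosaic yields an (H/2)x(W/2) image and every photon
    is counted in exactly one output pixel.
    """
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # average the two greens
    b = mosaic[1::2, 1::2]
    return np.dstack([r, g, b])

mosaic = np.arange(16, dtype=float).reshape(4, 4)
print(superpixel_bin(mosaic).shape)  # (2, 2, 3): 4 pixels from the 4x4 grid
```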
 
In practical terms, what are you trying to achieve?
You want a tack-sharp image?

Yes, sharpness. In practical terms I wanted to know if shooting at less than the full resolution of your sensor makes any sense from a technical/scientific point of view. Your explanation helped, but I guess that in the end this is not a common technique for enhancing sharpness.
 
It should be noted that without an anti-aliasing filter, moiré in images becomes much more likely.
 
Yes, Bayer interpolation reduces resolution.

No, I don't think it's 50%. More like 30%.

For example, according to a test done by the German magazine "Chip", my D5100 can resolve 2972 lines at ISO 100; that's about 13 megapixels (of 16 megapixels officially). However, some of this might be faked: the sharpening in the algorithms hides the fact that information actually got lost. I've seen tests where a camera showed better "resolution" than its sensor actually has, thanks to that.
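(For what it's worth, assuming a true 3:2 frame: 2972 resolved lines vertically would pair with about 2972 x 1.5 = 4458 horizontally, i.e. 2972 x 4458 ≈ 13.2 million pixels, which is presumably where the roughly 13 megapixel figure comes from.)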

Either way, I usually reduce the resolution of my pictures as the last step before conversion to JPEG.


P.S.: I checked on my camera; it's actually not a true 3:2 picture format, so my previous numbers are wrong.
 
Yes, 30% is probably more accurate. The reason is that while neighbouring photosites are interpolated into composite pixels, each site still records information at its own position in two-dimensional space.

My tests with the "half" mode in RPP, which does not interpolate the data, also confirm that there is more resolution in an interpolated processed image than in a non-interpolated processed one. In the non-interpolated image, however, noise and color reproduction are improved.
 
Yes, sharpness. In practical terms I wanted to know if shooting at less than the full resolution of your sensor makes any sense from a technical/scientific point of view. Your explanation helped, but I guess that in the end this is not a common technique for enhancing sharpness.

It's used quite a bit in astrophotography. Not only does combining adjacent pixels help with sharpness (think of a detail that used to be blurred across two pixels now being confined to one), but also with noise (combining multiple samples with random noise biases them towards their average).
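A toy demonstration of that averaging effect (my own numbers, nothing astro-specific): the noise in the mean of N independent frames falls as 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0                                           # "true" pixel value
frames = signal + rng.normal(0, 10, size=(16, 100_000))  # 16 noisy exposures

stacked = frames.mean(axis=0)   # average the 16 frames pixel by pixel
print(frames[0].std())          # ~10.0: noise of a single frame
print(stacked.std())            # ~2.5:  10 / sqrt(16) after stacking
```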

The question is, would you want to throw away data you have? Remember, based on my explanation, the data is interpolated but it is unique to each pixel, unlike with upscaling. Also, downsampling does not normally simply throw away data unless you use the nearest-neighbour method, and that method only cleanly discards samples if you downsample by a factor of 1/2 in each direction (1/4 of the total image resolution). For any non-square factor, each resulting pixel ends up as a roughly Gaussian-weighted blend of the original pixels, usually followed by some kind of sharpening algorithm. A toy comparison follows below.
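Here is that comparison (my own sketch; real resamplers use Gaussian or Lanczos kernels rather than this plain box average):

```python
import numpy as np

img = np.arange(64, dtype=float).reshape(8, 8)  # stand-in grayscale image

# Nearest-neighbour 2x downsample: keeps one sample per 2x2 block and
# simply throws the other three away.
nearest = img[::2, ::2]

# Box (area) 2x downsample: every source pixel contributes to the result.
box = (img[0::2, 0::2] + img[0::2, 1::2] +
       img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

print(nearest.shape, box.shape)  # (4, 4) (4, 4)
```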

Really, rather than trying to achieve absolute sharpness (distinct data in each pixel), I suggest trying to achieve good visual acuity (how sharp an image appears to the eye). The latter can be done by sharpening, using any of the million methods modern image-editing programs offer. The human eye perceives sharpness as high edge contrast; that's why a technically unsharp image can appear sharper (after sharpening) than an image in which each pixel is perfectly sharp.
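For illustration, a bare-bones unsharp mask in NumPy (my own sketch; real tools use a Gaussian blur plus radius, amount and threshold controls). The trick is exactly the edge-contrast boost described above: subtract a blurred copy, then add the difference back:

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Boost edge contrast, which the eye reads as sharpness.

    Uses a crude 3x3 box blur as the low-pass step; the "detail" layer
    (original minus blur) is scaled by `amount` and added back.
    """
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return img + amount * (img - blurred)

# A soft edge: note how contrast across it increases after sharpening.
edge = np.tile([10.0, 10.0, 12.0, 18.0, 20.0, 20.0], (6, 1))
print(unsharp_mask(edge, amount=1.5)[0].round(1))
```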
 
