Why smaller sensors beat full-frame sensors for wildlife photography

freixas

Update 6/14/2019: The article at http://www.clarkvision.com/articles/does.pixel.size.matter/ covers the technical issues I brought up about 10,000x better than I do. Read that instead.

Update 6/13/2019:
Before responding to the long post below (or, more likely, my thread title), read response #63.


I was reading the TechTips columns of the June 2019 issue of Outdoor Photographer. A question was raised about whether it makes any difference if one shoots a scene using an APS-C sensor or using a full-frame sensor and later cropping to APS-C size. When shooting birds, for example, one often fails to come close to filling an APS-C sensor, much less a full-frame sensor.

The TechTips response was incorrect. The authors claimed that a 30 MP full-frame sensor has about the same resolution as a 20 MP APS-C sensor and that a 40 or 50 MP full-frame sensor has more.

There is an easy way to compare: just check the pixel pitch.

The Canon EOS 5D Mark IV has a full-frame, 30 MP sensor. The Canon EOS 70D has an APS-C, 20 MP sensor. The respective pixel pitches are 5.36 µm and 4.09 µm. Therefore, the 20 MP APS-C camera has 1.31x (5.36 / 4.09) the linear resolution of the 30 MP full-frame camera.

The Canon EOS 5DS is a 50 MP full-frame camera. The pixel pitch for the 5DS is 4.13 µm, still slightly larger than the 70D's. So even a 50 MP full-frame sensor will capture slightly less detail over the cropped area than a 20 MP APS-C sensor, and a 40 MP full-frame sensor even less.
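
If you want to check these numbers yourself, here is a minimal sketch in Python. The sensor widths and pixel counts are approximate published figures (sources vary by a tenth of a millimeter or so), which is why the computed pitches land within a hair of the values above:

```
# Back-of-the-envelope pixel pitch comparison. Sensor widths and pixel
# counts below are approximate published figures and vary slightly by source.

def pixel_pitch_um(sensor_width_mm, pixels_across):
    """Pixel pitch in micrometers: sensor width divided by horizontal pixel count."""
    return sensor_width_mm * 1000 / pixels_across

ff_pitch = pixel_pitch_um(36.0, 6720)    # Canon 5D Mark IV: 6720 x 4480 (30 MP)
apsc_pitch = pixel_pitch_um(22.4, 5472)  # Canon 70D: 5472 x 3648 (20 MP)

print(f"full-frame pitch: {ff_pitch:.2f} um")   # ~5.36 um
print(f"APS-C pitch:      {apsc_pitch:.2f} um") # ~4.09 um
print(f"APS-C linear resolution advantage: {ff_pitch / apsc_pitch:.2f}x")  # ~1.31x
```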

Of course, the full-frame sensor will capture more overall detail, which is great for landscapes, portraits and other shots. But we are just looking at an APS-C-sized portion of a full-frame sensor image.

I started thinking about the quality advantage of the bigger pixels and reached some counter-intuitive conclusions.

Let's go back to the 30/20 MP comparison and say that a bird captured by the APS-C sensor fills a box whose dimensions are 131 x 131 pixels. Shot with the same lens, the bird would fit a 100 x 100 box on the 30 MP full-frame camera. Scale the 131 x 131 image down to 100 x 100 and the photon noise should be the same.

Think of photons as drops of water and pixels as buckets. The image area covered by the bird is the same for both cameras; therefore, the total drops of water collected by all the buckets in this area are the same. In one case, the buckets are smaller, but if we carefully re-distribute the water into the larger buckets, we should get the same water levels as if we had started out with the larger buckets. This is clearest when the bucket sizes differ by an integral scale factor (as in 1 large bucket vs. 4 smaller buckets covering the same area--i.e., a 2x pitch difference). So, there isn't even a clear quality advantage for the larger full-frame pixels, at least for photon noise.
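
The bucket picture is easy to simulate. Below is a minimal sketch in Python/NumPy; the photon count is invented, and photon arrival is assumed to be Poisson-distributed (which is how shot noise behaves):

```
# A sketch of the bucket argument: simulate photon (shot) noise on a grid of
# small pixels, pour each 2x2 block into one "large" bucket, and compare with
# a sensor that had large pixels to begin with.
import numpy as np

rng = np.random.default_rng(42)
mean_photons = 250                 # assumed average photons per small pixel
n = 512                            # small-pixel grid size

small = rng.poisson(mean_photons, size=(n, n)).astype(float)
binned = small.reshape(n // 2, 2, n // 2, 2).sum(axis=(1, 3))  # 2x2 binning

# A native large-pixel sensor collecting the same light per unit area.
large = rng.poisson(4 * mean_photons, size=(n // 2, n // 2)).astype(float)

for name, img in [("binned small pixels", binned), ("native large pixels", large)]:
    print(f"{name}: mean = {img.mean():.1f}, SNR = {img.mean() / img.std():.1f}")
# Both print SNR ~ 31.6 (sqrt of 1000): shot noise is the same either way.
```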

I started thinking about other factors as well, and it seems that sensors with smaller pixel pitches might actually win.

Each pixel has a color filter in front of it (the Bayer filter). Squares formed of four pixels will have two green filters, one red filter and one blue filter. A demosaicing process tries to determine the actual color at each pixel through interpolation.

Let's say that the pixel pitch of two sensors differs by a factor of 2. The smaller pixels have four filters in the same space that larger pixels have one. If we scale the image from the smaller pixels to match the larger pixels (i.e., we scale the image by half in each dimension), the color accuracy of the smaller pixels should exceed that of the larger pixels.
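
Here's a toy illustration of that counting argument in Python/NumPy, assuming the common RGGB filter layout. Real demosaicing is far more sophisticated; this only shows that each downscaled pixel gets all three channels measured rather than interpolated:

```
# Bin a tiny RGGB Bayer mosaic 2x2: every cell contains one R, two G, and
# one B measurement, so each output pixel gets a measured value for all
# three channels. The numbers are stand-ins for raw sensor values.
import numpy as np

mosaic = np.arange(16, dtype=float).reshape(4, 4)

r = mosaic[0::2, 0::2]                             # R at (even, even)
g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2  # average the two greens
b = mosaic[1::2, 1::2]                             # B at (odd, odd)

rgb = np.stack([r, g, b], axis=-1)
print(rgb.shape)  # (2, 2, 3): a 4x4 mosaic becomes a 2x2 full-color image
```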

Let's consider noise from the electronics. Assume that the electronics are equivalent. Let's say that the ideal pixel value for a particular scene is 100 and that the electronics noise alters it at random by +/- 2. So the actual pixel value will range from 98 to 102, with each of the five possible values (including the ideal value) having a probability of 20%.

For simplicity, we again consider a 2x pixel pitch difference. A single large pixel will record something in the 98-102 range. If we combine four of the smaller pixels into one large one (by averaging them), we still have the 98-102 range, but the probability distribution is different. To get a value of 98, for example, all four pixels have to be 98, which is 20% times 20% times 20% times 20%, or 0.16%. If we mapped out all of the possible combinations of values, we would see that the distribution concentrates around the ideal value of 100: the spread of the averaged noise is half that of a single pixel.
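
That distribution is small enough to enumerate outright. A minimal sketch in Python, using the made-up 100 +/- 2 numbers from above:

```
# Enumerate every combination of four small-pixel readings, each 100 plus
# uniform noise in {-2..+2} (20% per value), and average them into one
# "large pixel" value. All numbers here are the toy values from the text.
from itertools import product
from collections import Counter
from statistics import pstdev

values = [98, 99, 100, 101, 102]
averages = [sum(combo) / 4 for combo in product(values, repeat=4)]  # 5^4 = 625 cases
dist = Counter(averages)

print(f"P(average = 98)        = {dist[98.0] / 625:.4f}")  # 0.0016, as in the text
within = sum(c for v, c in dist.items() if abs(v - 100) <= 0.5)
print(f"P(within 0.5 of ideal) = {within / 625:.3f}")      # ~0.61, vs 0.20 for one pixel
print(f"std, single pixel      = {pstdev(values):.3f}")    # ~1.414
print(f"std, 4-pixel average   = {pstdev(averages):.3f}")  # ~0.707: half the spread
```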

My assumption here is that pixels produce the same signal strength (voltage, presumably) given the same number of photons hitting a given area--i.e., the size of the pixels does not matter. It could be, however, that signal strength is proportional to the total photons hitting a pixel; this would mean that smaller pixels produce a weaker signal that requires a boost to reach the same value as a larger pixel. The increased boost might introduce more noise, negating the advantage I described above.

For example, let's say the signal strength is proportional to the pixel area. The smaller pixels in our 2x scenario would have one quarter the area and would need a 4x boost. This could boost the noise from +/- 2 to +/- 8. For the ideal value of 100, the pixels would now range from 92-108. Someone who is better at probability theory than I am could draw the probability distribution resulting from combining the four pixels into one. The error range is clearly wider, but it's not clear if the probability of reaching the ideal value is higher or not.
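
The same enumeration answers that question, at least for these toy numbers and under the stated assumption (signal proportional to pixel area, so a 4x gain scales the noise to +/- 8):

```
# If the small-pixel noise is boosted 4x (readings of 100 +/- {0, 4, 8}),
# averaging four pixels narrows the spread again, but only by half, so the
# result is still about twice as noisy as the large pixel.
from itertools import product
from statistics import pstdev

boosted = [92, 96, 100, 104, 108]
averages = [sum(combo) / 4 for combo in product(boosted, repeat=4)]

print(f"std, one boosted pixel = {pstdev(boosted):.2f}")   # ~5.66
print(f"std, 4-pixel average   = {pstdev(averages):.2f}")  # ~2.83
print("vs ~1.41 for a single large pixel with +/- 2 noise")
```

Under this particular assumption, then, the averaged small pixels come out about twice as noisy: averaging recovers a factor of 2, but the 4x boost costs a factor of 4. Whether real sensors behave this way is a separate question.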

My suspicion is that, at worst, once the data from the smaller pixels are scaled to match the larger pixels, the total noise would be no worse. Feel free to pipe in if you actually know how this works. Alternatively, one could try to determine the difference empirically (shoot the same scene with two cameras; be sure to use the same lens, aperture, exposure, lighting, etc.).

My current camera is a Canon EOS 80D (24 MP) with a pixel pitch of about 3.7 µm. To match that resolution, a full-frame sensor would need roughly 9600 x 6400 pixels (about 61.5 MP).
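
For the record, the arithmetic (using an approximate 3.75 µm pitch; published figures for the 80D hover around 3.7 µm):

```
# How many full-frame pixels does it take to match a given pixel pitch?
pitch_um = 3.75                    # approximate Canon 80D pixel pitch
w = round(36.0 * 1000 / pitch_um)  # full-frame width:  36 mm
h = round(24.0 * 1000 / pitch_um)  # full-frame height: 24 mm
print(f"{w} x {h} = {w * h / 1e6:.1f} MP")  # 9600 x 6400 = 61.4 MP
```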

Corrections welcome.
 
I am so going to get tarred and feathered for this.

But there is a factor that I am going to throw out and then let the fight begin.

Aspect Ratio Distortion.

I'll let everyone figure that out.
 
simple answer: reach.
 
meaning?


apply some thought here.
 
To be blunt, the argument is academic and nonsensical.

I am not arguing about whether great shots can be made with either one, but here is the reality:
It doesn't matter.
It's the photographer, not the equipment.

In the 1970s there was a guy publishing in magazines various very beautiful photographs he took with a 110 film camera (Instamatic), which had a frame size 1/4 the size of 35mm.
If you know what you're doing, the format size and pixel size are irrelevant.
As for Aspect Ratio, I guess you have to understand Projection to grasp that.
 
To be blunt, the argument is academic and nonsensical.
I would suggest irrelevant vice nonsensical, but certainly academic and without any real-world importance. None of this factors in the single most important factor, which is the lens. Further, a whole host of other real-world conditions such as lighting, shutter speed, aperture, ISO, etc. are far more relevant to a better image. While your postulation might be true in a purely theoretical sense, its real-world application is, IMO, approximately the sum of the square root of four fifths of bugger all!
 
To be blunt, the argument is academic and nonsensical. [...]

Again, I have no idea what you're fired up about, but it sounds completely unrelated to anything I talked about. Who said a thing about "great shots"? I gather that you have little experience with wildlife photography.

As for your "aspect ratio" rant, when using the same lens, the projection of an image onto a portion of a sensor is independent of the sensor size. But you'd have to understand projection to understand that.
 
Methinks the argument is mostly based on some scientific aspects involving cluster grouping of the sensor pixels, etc.
Fine.
But again, my main point, and I have had this taught to me multiple times, is that it's not the equipment overall but the photographer.
I seem to remember some info coming out that some images in various magazines involving sports had been shot with Fujifilm instamatics.
Or something to that effect, and no one noticed.
 
What about diffraction at smaller f-stops? The smaller the pixels, the more spillover, as I understand it.
 
Again, I have no idea what you're fired up about, but it sounds completely unrelated to anything I talked about. [...]

As for your "aspect ratio" rant, when using the same lens, the projection of an image onto a portion of a sensor is independent of the sensor size. But you'd have to understand projection to understand that.
Aspect Ratio.
There is plenty on it.

No, I am not fired up; it's simply that the argument that a "better" photo is made with a smaller sensor is, at least in my experience, a non-issue.
 
While your postulation might be true in a purely theoretical sense, its real-world application is, IMO, approximately the sum of the square root of four fifths of bugger all!

I have no idea if you are talking to me or replying to Soocoom1.

If you are talking to me, then I would say that what I wrote is of practical relevance to wildlife photographers.
 
Serious question... Have you ever shot square format?

It's relevant to the image quality aspect here.
 
Aspect Ratio.
There is plenty on it.

No, I am not fired up; it's simply that the argument that a "better" photo is made with a smaller sensor is, at least in my experience, a non-issue.

When you are dealing with the high levels of cropping that occur in most people's wildlife photographs, you can bet that it is an issue. One can either increase resolution or buy a bigger lens. A bigger lens is ideal, but the price jump from 400mm to 800mm might be from $1,000 to $20,000. Plus, those lenses weigh a ton.

Again, I think you have no idea what the issues are in wildlife photography.
 
Again, I think you have no idea what the issues are in wildlife photography.
And I would submit that you are postulating theoretical/academic points, which may be accurate, but have no practical relevance because the real-world, OBSERVABLE difference is undetectable.
 
