Update 6/14/2019: The article at http://www.clarkvision.com/articles/does.pixel.size.matter/ covers the technical issues I brought up about 10,000x better than me. Read that instead.

Update 6/13/2019: Before responding to the long post below (or, more likely, my thread title), read response #63.

I was reading the TechTips column of the June 2019 issue of Outdoor Photographer. A question was raised about whether it makes any difference if one shoots a scene using an APS-C sensor or using a full-frame sensor and later cropping to APS-C size. When shooting birds, for example, one often fails to come close to filling an APS-C sensor, much less a full-frame one.

The TechTips response was incorrect. The authors claimed that a 30 MP full-frame sensor has about the same resolution as a 20 MP APS-C sensor and that a 40 or 50 MP full-frame sensor has more. There is an easy way to compare: just check the pixel pitch. The Canon EOS 5D Mark IV has a full-frame, 30 MP sensor. The Canon EOS 70D has an APS-C, 20 MP sensor. The respective pixel pitches are 5.36 µm and 4.09 µm. Therefore, the 20 MP APS-C camera has 1.31x the linear resolution of the 30 MP full-frame camera over the same subject.

The Canon EOS 5DS is a 50 MP camera. Its pixel pitch is 4.13 µm, still slightly larger than the 70D's. So a 50 MP full-frame sensor will capture slightly less detail over an APS-C-sized crop than a 20 MP APS-C sensor, and a 40 MP full-frame sensor even less. Of course, the full-frame sensor will capture more overall detail, which is great for landscapes, portraits and other shots. But here we are looking only at an APS-C-sized portion of the full-frame image.

I started thinking about the quality advantage of the bigger pixels and reached some counter-intuitive conclusions. Let's go back to the 30/20 MP comparison and say that a bird captured by the APS-C sensor fills a box of 131 x 131 pixels. Shot with the same lens, the bird would fit in a 100 x 100 box on the 30 MP full-frame camera.
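That ratio and the bird-box sizes are easy to check with a few lines of Python (the pitches are the figures quoted above; the 100-pixel bird box is just my illustrative number):

```python
# Pixel pitches quoted above (micrometres, centre-to-centre).
pitch_ff = 5.36    # Canon EOS 5D Mark IV, 30 MP full frame
pitch_apsc = 4.09  # Canon EOS 70D, 20 MP APS-C

# Linear resolution over the same subject scales inversely with pitch.
ratio = pitch_ff / pitch_apsc
print(round(ratio, 2))  # 1.31

# A bird spanning 100 pixels on the full-frame sensor spans about
# 100 * 1.31 = 131 pixels on the APS-C sensor with the same lens.
bird_ff = 100
bird_apsc = round(bird_ff * ratio)
print(bird_apsc)  # 131
```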
Scale the 131 x 131 image to 100 x 100 and the photon noise should be the same. Think of photons as drops of water and pixels as buckets. The image area covered by the bird is the same for both cameras; therefore, the total drops of water collected by all the buckets in that area are the same. In one case the buckets are smaller, but if we carefully redistribute the water into the larger buckets, we should get the same water levels as if we had started out with the larger buckets. This is clearest when the buckets differ by an integral scale (e.g., 1 large bucket vs. 4 smaller buckets covering the same area--i.e., 2x). So there isn't even a clear quality advantage for the larger full-frame pixels, at least for photon noise.

I started thinking about other factors as well, and sensors with smaller pitches might actually win. Each pixel has a color filter in front of it (the Bayer filter). Squares formed of four pixels have two green filters, one red filter and one blue filter. A demosaicing process tries to determine the actual color at each pixel through interpolation. Let's say that the pixel pitch of two sensors differs by a factor of 2. The smaller pixels have four filters in the same space that the larger pixels have one. If we scale the image from the smaller pixels to match the larger pixels (i.e., we scale the image by half in each dimension), the color accuracy of the smaller pixels should exceed that of the larger pixels.

Now let's consider noise from the electronics. Assume that the electronics are equivalent. Let's say that the ideal pixel value for a particular scene is 100 and that the electronics noise alters it at random by +/- 2. So the actual pixel value will range from 98 to 102, with each of the five possible values (including the ideal value) having a probability of 20%. For simplicity, we again consider a 2x pixel pitch difference. A single large pixel will record something in the 98-102 range.
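The bucket argument can be checked with a quick Monte Carlo sketch. Photon arrival follows a Poisson distribution; the specific counts here (an average of 100 photons per large pixel, so 25 per quarter-sized small pixel) are made-up numbers purely for illustration:

```python
import math
import random
from statistics import pstdev

random.seed(1)

def poisson(lam):
    """Knuth's simple Poisson sampler (fine for the modest rates used here)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

TRIALS = 20000
# One large pixel catching an average of 100 photons per exposure...
big = [poisson(100) for _ in range(TRIALS)]
# ...vs. four small pixels (one quarter the area each) summed together,
# i.e. the water redistributed into one large bucket.
small_binned = [sum(poisson(25) for _ in range(4)) for _ in range(TRIALS)]

# Both show the same photon (shot) noise: stdev close to sqrt(100) = 10.
sd_big = pstdev(big)
sd_small = pstdev(small_binned)
print(round(sd_big, 1), round(sd_small, 1))
```

The two standard deviations come out essentially identical, because the sum of independent Poisson counts is itself Poisson with the combined mean.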
If we combine (average) four of the smaller pixels into one large one, we still have the 98-102 range, but the probability distribution is different. To get a value of 98, for example, all four pixels have to read 98, which is 20% times 20% times 20% times 20%, or 0.16%. If we mapped all of the possible combinations of values, we would see that the probability of landing close to the ideal value of 100 increases: the averaged result clusters much more tightly around 100 than a single large pixel does.

My assumption here is that pixels produce the same signal strength (voltage, presumably) given the same number of photons hitting a given area--i.e., the size of the pixels does not matter. It could be, however, that signal strength is proportional to the total photons hitting a pixel; this would mean that smaller pixels produce a weaker signal that requires a boost to reach the same value as a larger pixel. The increased boost might introduce more noise, negating the advantage I described above.

For example, let's say the signal strength is proportional to the pixel area. The smaller pixels in our 2x scenario would have one quarter the area and would need a 4x boost. This could boost the noise from +/- 2 to +/- 8. For the ideal value of 100, the pixels would now range from 92 to 108. Someone who is better at probability theory than me could draw the probability distribution resulting from combining the four pixels into one. The error range is clearly wider, but it's not clear whether the probability of reaching the ideal value is higher or not.

My suspicion is that, at worst, once the data from the smaller pixels are scaled to match the larger pixels, the total noise would be no worse. Feel free to pipe in if you actually know how this works. Alternatively, one could try to determine the difference empirically (shoot the same scene with two cameras; be sure to use the same lens, aperture, exposure, lighting, etc.).

My current camera is a Canon EOS 80D (24 MP) with a pixel pitch of 3.72 µm.
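Since I asked for someone better at probability: the toy model above is small enough to enumerate exactly in Python. This is only a sketch of that model (uniform +/-2 noise, four small pixels averaged into one, plus the pessimistic 4x-boost variant), not a claim about real sensor electronics:

```python
from itertools import product
from statistics import pstdev

offsets = [-2, -1, 0, 1, 2]  # the +/-2 noise, each value equally likely (20%)

# Single large pixel: value = 100 + offset.
sd_single = pstdev(100 + o for o in offsets)  # sqrt(2) ~ 1.41

# Average of four small pixels, each with the same per-pixel noise.
means = [100 + sum(t) / 4 for t in product(offsets, repeat=4)]
sd_avg = pstdev(means)  # ~0.71: averaging four pixels halves the noise

# Chance the averaged value lands within +/-0.5 of the ideal 100,
# vs. the 20% chance that a single pixel reads exactly 100.
p_near = sum(abs(m - 100) <= 0.5 for m in means) / len(means)  # ~0.61

# 4x-boost variant: per-pixel noise becomes +/-8 before averaging.
boosted = [100 + 4 * sum(t) / 4 for t in product(offsets, repeat=4)]
sd_boosted = pstdev(boosted)  # ~2.83: twice as noisy as the single large pixel
```

So under this toy model the unboosted small pixels, once averaged, beat the single large pixel; but if the noise really does scale with the full 4x gain, the combined result is noisier than the large pixel, not merely "no worse."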
To match the resolution, a full-frame sensor would need 9600 x 6400 pixels (about 61.4 MP). Corrections welcome.
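P.S. That last figure is just the 1.6x Canon crop factor applied to the 80D's 6000 x 4000 pixel grid:

```python
crop = 1.6                   # Canon APS-C crop factor
apsc_w, apsc_h = 6000, 4000  # EOS 80D, 24 MP

# Scale each dimension by the crop factor to keep the same pixel pitch
# on a full-frame sensor.
ff_w, ff_h = round(apsc_w * crop), round(apsc_h * crop)
print(ff_w, ff_h, ff_w * ff_h / 1e6)  # 9600 6400 61.44
```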