Megapixel Quantity versus Sensor Quality

Leica's reasoning was that if you eliminate all the "color stuff," then every pixel is dedicated to black and white and each pixel on the sensor corresponds directly to a pixel in the image file, meaning higher resolution.

Not entirely accurate. While a 40 MP b/w sensor will have modestly higher resolution than if those pixels sat behind an RGBG array, you don't get 4x the resolution either. Each of those pixels contains information about the scene in the spatial domain, and this information is not discarded when the image is interpolated.

A larger benefit to greyscale is sensitivity, since the color filter array is subtractive. Though again, because the filtered pixels all still contribute to luminance (and you get an extra green pixel per block), I don't think the advantage would even be that substantial. I can think of one hypothetical processing technique in particular that could possibly even yield greater dynamic range from a full-color scene with no substantial impact from interpolation.
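Just to put a rough ceiling on the sensitivity argument, here's a toy back-of-the-envelope in Python. The 1/3-of-the-spectrum transmission per filter is an idealized assumption (real CFA dyes pass more than that, and luminance is weighted toward green), so the real-world gap is smaller than this suggests:

```python
import math

# Toy estimate of the sensitivity gap between a mono sensor and a Bayer sensor.
# Assumption: each color filter passes roughly 1/3 of the visible band.
# Real filters pass more than this, so the estimate overstates the gap.
transmission = {"R": 1 / 3, "G": 1 / 3, "B": 1 / 3}
bayer_block = ["R", "G", "G", "B"]  # one 2x2 RGGB block

bayer_signal = sum(transmission[c] for c in bayer_block) / len(bayer_block)
mono_signal = 1.0  # no filter over the pixel

advantage = mono_signal / bayer_signal
print(f"Bayer block collects ~{bayer_signal:.0%} of the light a mono sensor would")
print(f"Idealized mono advantage: ~{advantage:.1f}x (~{math.log2(advantage):.1f} stops)")
```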
 
I'll take your word for it. I was simply recalling the write-up in whichever photo mag reviewed it when it came out... I've never seen nor used the Leica.
 
OP - with Leica you're paying for the name. They're good cameras, I'm sure, and the glass is very consistent. But a camera can only be so good. A Fuji X2 with the Fuji lenses is probably equal to Leica in every practical way, and if you really want Leica, get an M adapter and spend the money where it matters ... on the lens. Even a plastic-fantastic consumer body will give you 95% of what a professional body does for all but a few users (and that last 5% has less to do with image quality and more to do with frame rate and AF speed ... given that you're thinking about Leica, I can assume this doesn't matter). Just about anyone will tell you the biggest factor in image quality is the lens.

The Leica M is a goofy camera, and I do not think limiting yourself to b/w is necessarily what you want. Sure, it has a nice sensor, but that limitation is pretty substantial unless you're old and moldy and just want to shoot film but can't find anyone to process it for you. That's sort of what the Leica M is for: old rich people who think that by spending a lot of money and limiting themselves to b/w they'll be validated. It's an appeal to tradition with some modest improvement in resolution and sensitivity ... and that's about it.

But it's mostly just an appeal to tradition.

Now, as for your question about resolution: in my experience 12 megapixels is plenty, and the real limitation is sensitivity and noise. Larger pixels mean greater signal relative to system noise, which means you can shoot at a significantly higher ISO. If mirrorless is your thing, the Sony A7S is probably the best option. That thing is pretty much night vision.
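A toy shot-noise-only model makes the pixel-size point concrete (the pixel pitches and photon flux below are made-up illustrative numbers, not from any datasheet):

```python
import math

# Shot-noise-only sketch: photons collected scale with pixel area,
# photon shot noise is sqrt(signal), so SNR scales with pixel pitch.
def snr(pixel_pitch_um: float, photons_per_um2: float) -> float:
    photons = photons_per_um2 * pixel_pitch_um ** 2
    return math.sqrt(photons)  # signal / sqrt(signal)

small, large = 4.0, 8.0   # pixel pitch in microns (illustrative)
flux = 100.0              # photons per square micron for some exposure (illustrative)

print(f"{small} um pixel: SNR ~ {snr(small, flux):.0f}")
print(f"{large} um pixel: SNR ~ {snr(large, flux):.0f}")
# Doubling the pitch quadruples the area and doubles the SNR, which is
# roughly two stops of ISO headroom before the noise looks the same.
```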

If you still want to spend 10K, take a gander at Cooke, Leica, and Zeiss. The CP.2 series will satisfy your desire to needlessly spend money. Though I doubt you'd be any better off than if you bought a solid 'Made in Japan' lens for a fraction of the cost.

---
Have you thought about medium format at all? If you really have the budget for Leica, take a look at Fuji GFX...
 
While a lens can be a limiting factor w.r.t. the ability to resolve fine detail, there's also a physics problem that can't be overcome.

You want to learn about "diffraction limited photography": Diffraction Limited Photography: Pixel Size, Aperture and Airy Disks

Due to the 'wave' nature of light (it doesn't travel in a straight beam... it's a wave), the ability to resolve fine detail is related to the physical diameter of the lens (larger is better). This puts limits on the level of detail that any lens can possibly resolve.

The limits were originally discovered by astronomers (George Airy gets the credit for working out the details), who wanted to keep increasing the magnification of the objects they were studying, only to discover that while reasonable levels of magnification did make objects larger, there's a point where you get into unreasonable magnification where the object does get larger... but it also gets blurrier... and you ultimately can't see any more detail by increasing magnification.

As an amateur astronomer, one of the telescopes I own is fairly large... a 14" aperture. I'm often asked by guests if they'd be able to see the flag on the moon. Never mind the flag... I point out that you can't even see the moon buggy (lunar rover). The rover is roughly 3 meters long, but that's much too small to be seen from Earth... by any telescope in existence.

The math is described in a simple equation called "Dawes' Limit" (see: Dawes' limit - Wikipedia), and when you do the math, you find that a telescope would need an aperture roughly 70 meters in diameter to resolve detail as small as a 3 meter object located about 384,000 km away. The largest telescopes on Earth are currently just barely larger than 10 meters in diameter.
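If you want to sanity-check that figure, the arithmetic is short (Python, using the common Dawes formula of ~116/D arcseconds with D in millimetres and the mean Earth-Moon distance):

```python
# Dawes' limit back-of-the-envelope: what aperture resolves the ~3 m lunar
# rover from Earth? 384,400 km is the mean Earth-Moon distance.
ARCSEC_PER_RAD = 206_265

rover_size_m = 3.0
moon_distance_m = 384_400e3

angle_arcsec = rover_size_m / moon_distance_m * ARCSEC_PER_RAD
print(f"Rover subtends ~{angle_arcsec * 1000:.1f} milliarcseconds")

# Dawes' limit: resolvable angle (arcsec) ~ 116 / aperture (mm)
required_aperture_m = 116 / angle_arcsec / 1000
print(f"Aperture needed: ~{required_aperture_m:.0f} m")  # ~70 m
```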

If you read that first article I linked above, what it describes is that the light from a single point on a subject does not land on a single point on your sensor. It lands in some "area" due to its wave nature. If that area lands entirely within the boundaries of a single pixel, great. If not... then the light is landing on an area that overlaps several pixels, so it doesn't matter that you have "more" pixels; you will not be able to get a sharper image unless you have a significantly larger physical aperture diameter.
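To see how fast that blur spot (the Airy disk) overtakes the pixels, here's a rough table (Python; the 550 nm wavelength and 4.5 µm pixel pitch are assumptions for illustration, not specs for any particular camera):

```python
# Toy Airy-disk check: at what f-number does diffraction blur outgrow a pixel?
# Airy disk diameter ~ 2.44 * wavelength * f-number.
wavelength_um = 0.55   # ~550 nm green light (assumption)
pixel_pitch_um = 4.5   # high-resolution full-frame-ish pixel (assumption)

for f_number in (2.8, 5.6, 8, 11, 16, 22):
    airy_diameter_um = 2.44 * wavelength_um * f_number
    print(f"f/{f_number:<4} Airy disk ~{airy_diameter_um:5.1f} um "
          f"(~{airy_diameter_um / pixel_pitch_um:.1f} pixels wide)")
# Once the disk spans several pixels, adding megapixels can't add detail.
```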
 
