Please help me confirm my conclusion on DX lens compatibility with FX body

DxO Mark is annoying. How do they arrive at their "sharpness" score? Nobody knows. And that's pretty much the only number on which the two lenses in question differ significantly; it's creating almost the entire difference in total score.

DxO Mark says they derive their sharpness score somehow from MTF charts mixed with some subjective human visual acuity function. The two I just pasted above look pretty much identical to me out to the limits of the DX sensor size. How the hell are they arriving at a sharpness score TWICE as high for one as for the other? It must be coming almost entirely from their subjective component, which they never bother to explain in any detail. When you go to the page about perceptual megapixels, it can basically be summarized as: "You, reader, are probably really dumb and don't know what MTF scores mean. After all, there are 4 whole lines! And numbers! Who understands those?! Hah! Right? Right. So trust us instead to boil it down to one number... somehow. It's scientific though. It's very scientific, that much we can tell you. We have, like, all these top scientists doing sciencey cutting-edge stuff. You wouldn't understand, you being a lowly photographer, but you should feel very confident about the final pretty number. Did we mention science was involved in calculating it?"

Sigh. Every time I actually try to use DxO Mark for anything or am directed there by anybody else, I get more and more disappointed in them.
 
"Most" modern lenses are adequate performers for "most" people I think. At f/8 I joke that ,"All Canon lenses give L-glass performance."

MTF charts have never taken a single good picture. I think if a user wants to pop a DX Nikkor on an FX camera, well, it's pretty much a WTF kinda' deal to me.
 
MTF charts have never taken a single good picture.
What do you mean by that? I disagree, unless I'm misunderstanding. If you are a photographer who actually NEEDS your full camera's resolution (which is by no means everyone, which speaks to your first sentence that I totally agree with)--for example, a bird photographer who routinely crops most of their image--then MTF provides very useful and meaningful data, which can help you make a better lens purchase and quite likely better photos as a result.

Or, alternatively, it might tell you two lenses are about the same, and save you some money. Which doesn't make better pictures, but does make a happier photographer.
 
MTF charts do NOT make pictures.

PEOPLE make pictures. Hence my statement, "MTF charts have never taken a single good picture."

I once had a guy tell me about the awful MTF charts for a lens I own. I've made hundreds of good images with that lens.

Even the chittiest lens can make good images. How does a Lensbaby, original, test out in terms of MTF?

How does a 1965 Nikkor 85 1.8 H-Nikkor test out? Probably every bit as crappy as a 50mm 1.4 Nikkor from 1966. "Chit lenses" in MTF terms, but with character.

MTF charts can be either actual, tested, or "computed". They tell you jack-squat about how a lens actually SHOOTS in the real world.

The 105 DC Nikkor with the rear defocus cranked to MAX has horrible MTF...but it makes beautiful, dreamy images. If a guy wants to slap a DX Nikkor on an FX camera, why not??? Any lens made since the early 1980s is amply good to work with.

There is more to a lens than its MTF score. MTF scores do not tell us the character a lens has, or how well it focuses in bad light, or what its actual usefulness is. Take the bird photographer scenario: which lens will be a better lens? 1) A lens with FABULOUS MTF performance, but which tends to focus very slowly, or erratically? Or 2) A lens with 30% lower MTF, but which absolutely NAILS focus 95% of the time?
 
Lol, now I really want to see a lensbaby double glass optic MTF chart!

But anyway, *trying hard to focus now* that's why I said "for people who actually need the resolution." If you do, it matters. E.g., amateur bird photographers who can't afford $10,000 glass and can't approach very closely may routinely crop away most of their image, and thus need every pixel of resolution they can get to make reasonable-looking prints. Such people are not artsy portrait photographers. And they aren't using Lensbabies. Astrophotographers would be another example of somebody who would and should care about MTF charts.

These are very much "real world" shooting situations, and MTF tells you plenty of critical information about them. Are there also OTHER real-world situations that MTF tells you jack squat about? Yeah, of course. Remember, I'm the one in the other thread who is currently building homemade lenses out of PVC pipes and crappy pieces of $3 plastic originally manufactured primarily for scamming elderly people. If anybody here is actively appreciating the concept of lenses having charming, non-sharp character at the moment, it's me! But that doesn't make the sharpness data useless across the board. Plenty of people desire maximally sharp images.



Nobody ever won a Pulitzer Prize for having the sharpest photo. But then again, nobody ever won a Nobel Prize for having the cleanest test tube, either. That doesn't mean clean test tubes don't matter in biology, or that technical resolution doesn't matter in photography.
 
I'm not confident that FX Nikkor lenses are automatically "better" than DX lenses...in fact, I think the opposite might be true. Since DX lenses are designed for smaller sensors, they need to deliver higher MTF than FX lenses, which can be lesser in resolving ability due to FX's significantly larger image capture area. This is a lot like the way 35mm lenses simply MUST have higher MTF than medium format lenses, because 35mm negatives are so, so tiny compared to medium format, with image sizes as large as 6x9 being pretty common, and 6x6 and 6x7 also common. The small-format Olympus lenses designed for the 4/3 sensor size have simply OUTSTANDING optical performance...but then they absolutely need to offer stellar resolving capabilities, since 4/3 is like a postage-stamp-sized sensor.

Thom Hogan's lens reviews usually specifically discuss how DX lenses cover the FX image size. In addition to the FX and DX sizes, the pro Nikon bodies (D3 and D4 series) also offer the 5:4 aspect ratio, which is, of course, between FX and DX in size and more squarish; it is often called 8x10 crop, sometimes referred to as 1.2x FOV mode.

Which has what, exactly, to do with one's personal finances?
 
DxO Mark is annoying. How do they arrive at their "sharpness" score? Nobody knows.
As long as they do it the same with each product the values are comparable.

DxOMark - Lens scores
DxOMark - DxOMark testing protocols
DxOMark - Sensor scores
Tutorials - Sharpness

Yes, I've read all their pseudoscience mumbo-jumbo webpages, and I know what things people typically test to get at "sharpness." And I'm not saying they are inconsistent. None of that is the point.

The point is that they don't TELL you how they arrive at the score (if you saw where they shared actual equations on their pages and I missed it, please let me know), and they plainly say that subjectivity is a part of the score in addition to the MTF for lens sharpness (but not how or why). Since the MTFs for those two lenses are almost completely identical, their dramatically different scores must be based almost entirely in the subjective part of their sharpness score.
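For what it's worth, the standard way to collapse an MTF curve into a single number is to weight it by a human contrast sensitivity function (CSF) and average. DxO presumably does something in this family, but since they don't publish it, everything below is a made-up illustration (toy CSF, toy lens curves), not their actual method:

```python
import numpy as np

def weighted_sharpness(mtf, csf):
    """Collapse an MTF curve into one number: the average MTF,
    weighted by how much the eye cares about each frequency."""
    return float((mtf * csf).sum() / csf.sum())

# Spatial frequencies in cycles/mm on the sensor (illustrative range).
freqs = np.linspace(1, 120, 200)

# Toy contrast sensitivity function: peaks at mid frequencies.
# Constants are invented for illustration; they are not DxO's.
csf = freqs * np.exp(-freqs / 20.0)

# Two hypothetical lenses with nearly identical MTF curves.
lens_a = np.exp(-freqs / 80.0)
lens_b = np.exp(-freqs / 75.0)

print(weighted_sharpness(lens_a, csf))
print(weighted_sharpness(lens_b, csf))
```

The point of the toy: any fixed weighting of two nearly identical MTF curves yields nearly identical scores, so a 2x gap has to come from something outside the published charts.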

Why should anybody base their purchasing decisions on some random company applying secret AND subjective tests as the deciding factor between lenses?

Just because they (probably, though maybe not) do it consistently doesn't mean it is meaningful, or that I should care. As far as I'm concerned, if they can't be bothered to tell me how they did it, then to be conservative I should assume the laziest possible test going on behind the scenes, until notified otherwise. Which would be something along the lines of taking a photo of a static scene with each lens, squinting at them, and saying "yeah, that looks about twice as sharp."

Real scientists share their methodologies. That is a worldwide reality. Those who don't usually have something to hide, or nothing to show.
 
Also, the very concept of having to type in which body and lens you want to pair together for your scores is all wrong.

They should be testing every single lens from every company on the same exact literal camera every time. To do this, you would choose the camera with the shortest flange focal distance from amongst the ones you wish to test. This would probably be a Pentax Q. Then you would buy, machine yourself, or commission adapters from Pentax Q to each brand of lens that you want to test. Since it has the shortest flange focal distance, none of those adapters would have to add optics into the image path. Also conveniently, the Pentax Q has a 1.5 micron pixel pitch, which means it is capable of detecting MUCH higher resolution than pretty much any SLR sensor. So if the resolution is in the lens, you can detect it with that sensor.
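The pixel-pitch claim checks out with simple arithmetic: a sensor's Nyquist limit is 1/(2 × pixel pitch), since resolving one line pair takes at least two pixels. A quick sketch (the FX pitch used for comparison is an assumed round figure, not any specific body):

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """Sensor Nyquist limit in line pairs per mm:
    one line pair needs at least two pixels."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

print(nyquist_lp_per_mm(1.5))  # Pentax Q-class pitch: ~333 lp/mm
print(nyquist_lp_per_mm(4.9))  # assumed typical FX pitch: ~102 lp/mm
```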

Then put each lens on, and run your tests under exactly equal and optimal conditions. In order to test the edge performance of lenses designed for larger sensors, all you have to do is shift the lens however many millimeters to one side, so that the image circle moves and what would be the edge of it on a larger sensor falls on the Pentax Q sensor, then test again. You would have to build a little universal shift rig to do this and use different adapters, but the flange focal distance is so minuscule that this could probably be machined for $1,000 or so (if you did this, you would use the rig unshifted with the same adapters for center testing, of course).

DxO Mark is clearly doing nothing anywhere close to any of these things, as evidenced by the fact that you can only choose Nikon bodies in the drop-down menu when you compare lenses.

This is all beyond the capabilities of a garage reviewer, but for a company that touts itself as the premier testing company for consumer optics, and talks up its awesome sciency scientists in every paragraph, to not have any rigs like that, OR seemingly any consideration for body vs. lens interactions other than "here guys, you figure it out," is almost laughable.



(Similarly, by the way, to test bodies most fairly, you'd buy the sharpest, highest-quality large format 200mm-or-so lens you can find, and test to confirm it, so that you know the lens is not the bottleneck, if it is possible for it not to be. Then you can adapt it to any brand of body easily, with enough image circle to cover any sensor, and use this same literal lens to test all of them. If LF lenses of sufficient resolution cannot be found, you may have to drop down to MF or T-mount, or whatever.)




Edit: You know, actually, you wouldn't need adapters at all. Just put the camera in a light-tight box and have the front panel switch out for different lens mounts, with just air in between the back of the lens and the camera. That makes shifting to test image edges trivial as well. This probably IS within the ability of a garage reviewer. Hm...
 
HERE. READ THIS. And then please, stop all your bleating and bitching about how unscientific they are at DxO Mark.

Proprietary information is seldom discussed in today's business climate. You should understand that. Your constant whining about DxO Mark's alleged sloppiness is pretty typical of the pseudointellectual photo forum types who shoot Canon and are soooo terribly bent out of shape because Canon scores so poorly on repeated tests... I've heard it sooo often ever since Canon's sensor technology was passed over beginning back in 2007...

DxOMark Camera Sensor

Then when you are done, go back and read the ORIGINAL article detailing how they do their work at DxO Mark, and make an effort to learn what the numbers mean.
http://www.luminous-landscape.com/essays/dxomark_sensor_for_benchmarking_cameras2.shtml#_end1

Have you never, ever heard of Google searching?

Maybe put a little effort into reading about and understanding how DxO Mark works, mmkay? Maybe consider paying for a nice reality-check consultation service.

Oh, and while you're at it, please ask the beverage scientists group at Coca-Cola to send you their full recipe for regular Coke, Diet Coke, and Cherry Coke. It's what good scientists do. And, while you're at it, send a very firm e-mail to Boeing, and demand some plans to build a new aircraft. Surely they will provide alllllll their data to you, free. And, to finish off your day, maybe you could demand that Microsoft convert all of their operating system source code to free, open-source code. IMMEDIATELY! It's what good scientists do...they reveal EVERYTHING they do, to any Tom, Dick, or Harry.

Oh....I almost forgot.... :)
 
Derrel, this is a thread about lenses. I'm not entirely sure why you're linking me to DxO Mark camera sensor information. Their sharpness score is the thing in question. They say they have used it to replace MTF charts, and that it is based on their perceptual megapixel rating. Nowhere do they actually define how they measure that.

And additionally, the P-Megapixel rating is tied to a camera-lens combination, so it's not even a real measure of pure lens sharpness, which is what I obviously care about when buying lenses. I may own my lenses for 30 years, and continue using them on future cameras that might have 10x the resolution of what is on the market now. It is plenty meaningful to me to know whether a lens has resolution capabilities beyond the sensor I have it on, so I know whether I'm actually paying for something with more future compatibility and quality in an absolute sense.
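One common rule of thumb (among several in use) for why that headroom matters: lens and sensor resolutions combine roughly in quadrature, so a lens that out-resolves today's sensor delivers visibly more on a denser future sensor. All lp/mm figures below are hypothetical:

```python
def system_resolution(lens_lp_mm, sensor_lp_mm):
    """Rule-of-thumb combination of component resolutions:
    1/r_sys^2 = 1/r_lens^2 + 1/r_sensor^2."""
    return (1.0 / (1.0 / lens_lp_mm**2 + 1.0 / sensor_lp_mm**2)) ** 0.5

# The same hypothetical 150 lp/mm lens on two sensor generations.
print(system_resolution(150, 100))  # today's body: ~83 lp/mm delivered
print(system_resolution(150, 200))  # denser future body: ~120 lp/mm delivered
```

Under this rule, the lens's extra resolving power sits mostly unused on today's sensor, then pays off on the future one, which is exactly the future-proofing argument above.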

And I can't just test the lens on a smaller pixel pitch camera on their website, because they don't even offer that on the dropdown menu (only bodies that are supposed to be compatible right now).

This also raises issues like the fact that most camera companies' RAW files are not entirely raw, but have some pre-filtering applied between the sensor and the RAW file even before any editing, which is another reason not to use different cameras for all your tests.

Which is why I said that they should be testing all of these lenses against a single standardized camera, like the Pentax Q, so that we can get completely controlled LENS-ONLY numbers. That's major issue #1 with their lens ratings. Major issue #2 is that nobody seems to know how they even come up with perceptual megapixels in the first place.







If you wish to talk about sensor scores, an entirely different subject, that's a bit more defined, but still all messed up. And the author of your own link disagrees with most of the points you made.
Low ISO bias

If you compare the DxOMark data in Figure 7 for a number of prominent cameras you would get a more balanced impression about which camera to buy than by just looking at the overall DxOMark Sensor score. If you focus on the latter, you would strongly prefer the Nikon D800 with its excellent low ISO dynamic range. But this emphasizes one aspect of the sensor (essentially the ability to do single shot HDR) that provides a capability we never had in the past. It is a feature which we may infrequently need – and one that some types of users may never see (e.g. if you shoot JPG).
However, at sufficiently high ISO, other models win. High ISO usage may be a more relevant usage for many users than HDR ability at low ISO.
One can therefore ask whether DxOMark hasn’t overstressed low ISO noise[75]. This may explain why some reviewers arrive at different conclusions about the image quality of the Canon 5D3 (or 1Dx) compared to the Nikon D800 (or D4).
To DxOMark’s credit, the user does get three detailed scores to choose from. So you can focus on “dynamic range” if you need single-shot HDR like capability and “low light ISO” if you need to boost your ISO settings often.
The author specifically addresses that the DxO Mark score may be abnormally biased in a way that would explain the 6D/5D3 vs. D800 ISO performance issue that comes up a lot, by emphasizing parts of the ISO range that don't really matter much between cameras in real usage. Which was the topic of the last thread where I commented on DxO, to which I assume you are referring.

He goes on to make more than one other complaint about their ISO measuring techniques, which I must say I agree with. They bizarrely cut off the ISO quality metric when the body falls below any of three separate thresholds (signal-to-noise ratio, dynamic range, and color depth), which seems like a pretty terrible way to do anything. Two of the three measures could be way above threshold while one falls below, and you call it there, whereas another camera could be right at threshold for all three, go a little higher, and get a better score, despite having an average of the three values that is much lower than the first camera's!

Also, why is dynamic range even INCLUDED AT ALL in a noise measurement? They already have a dynamic range measurement, listed separately, and now they are counting it yet again in the ISO noise measurement as well? Noise can be (and often is) measured without any dynamic range at all, just by taking pictures of black screens. The two variables are not necessarily dependent, so why are they being made dependent here? Color depth is also questionable, since luminosity noise is almost always a dealbreaker before chroma noise is, so I'm not sure why I should care much about color depth dropoff at super-high ISOs where I have bigger problems to worry about.
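The unfairness of the any-metric cutoff is easy to show with toy numbers (every figure and threshold below is invented for illustration; these are not DxO's actual values):

```python
# Invented thresholds mimicking a "cut off when ANY metric falls
# below its floor" rule. None of these are DxO's real numbers.
THRESHOLDS = {"snr_db": 30.0, "dr_ev": 9.0, "color_bits": 18.0}

def survives(metrics):
    """A camera 'survives' an ISO step only if EVERY metric clears
    its threshold -- one weak metric ends the whole test."""
    return all(metrics[k] >= floor for k, floor in THRESHOLDS.items())

def average(metrics):
    # Averaging mixed units is physically meaningless; it's only
    # here to make the fairness point from the text.
    return sum(metrics.values()) / len(metrics)

# Camera A: excellent SNR and dynamic range, color depth a hair under.
cam_a = {"snr_db": 40.0, "dr_ev": 12.0, "color_bits": 17.9}
# Camera B: scrapes past every threshold with nothing to spare.
cam_b = {"snr_db": 30.1, "dr_ev": 9.1, "color_bits": 18.1}

print(survives(cam_a), average(cam_a))  # cut off, despite the higher average
print(survives(cam_b), average(cam_b))  # keeps going, so it "wins" this ISO
```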

They should just publish all three values as curves at each tested ISO. Then they can make up whatever thresholds they want, while I can just look at the actual data if I disagree. It seems ridiculous to me to gather a bunch of data that would be useful to people, and then not publish it, forcing them to go by your arbitrary threshold interpretations of that data. That can, and probably does, lead to really strange and maybe misleading results. Maybe NOT misleading, too. But isn't the purpose of a benchmark to tell us things objectively, not to leave us with even more questions than we began with?

If they did this, then I could just look at signal-to-noise ratio alone (which is all I think they should be including in high-ISO noise scores), apply my own threshold for acceptability based on past shots with the lesser camera I already own, and get exactly what I need to make a decision, instead of exactly what DxO thinks I need + 5 other things all wrapped together mysteriously.

(Note: I'm not talking about the three sensor scores. I'm talking about the three subscores that go into their "Sports" rating, apparently.)

The author of your link also agrees with me that they should share their actual formulas for blending things:
DxO does not document how the final DxOMark Camera Sensor score is computed from the individual Dynamic Range, Color Sensitivity and Low-Light ISO scores. I feel it should be provided as the overall score gets a lot of attention.
He focuses on the master score, but from what I have read so far, I see a similar lack of formulae for the other intermediate component scores, and even less information for many of the raw individual measures. The same logic would apply to those.



Boeing, Coca-cola, etc.

DxO Mark is not making any products. They are not Boeing or Nikon or Coca-Cola. They are a testing agency that provides scientific comparisons.

An equivalent company for other products would not be any manufacturer, but some place like Underwriters Laboratories. Who DO, in fact, make all or most of their standards publicly available. They don't sit there and tell you exactly how they built their rigs and crap, but if you're looking at, say, a roofing fire-resistance test, you can look up that standards code and see that Test #1 requires exactly "1400-degree heat source application in 12 mph winds for 10 minutes, with an allowable flame spread on the material of 6 feet or less during that time," or whatever. Test #2 allows 12 feet of fire spread, etc., etc.

On many product tests, they actually provide full-on data curves, usually whenever there is actually a curve to be provided. The roof test is just pass/fail, for instance, but some other things may have different % failures at different loads, etc., and they seem to usually provide all the data they collect.

It's not hidden away at all. Yet somehow, magically, UL stays in business!
 
Okay, that ended up overly long-winded. TL;DR version:

It makes no sense for a company whose entire business model is "provide benchmark data to the public" to withhold benchmark information (methodology or raw component data) from the public. DxO Mark seems to do this all the time, however, and often to detrimental effect: being confusing, oddly biased, or just not giving you what you need to make a decision, even though the data you need is already in one of their file cabinets.
 
