Derrel, this is a thread about lenses. I'm not entirely sure why you're linking me to DxO Mark camera sensor information. Their sharpness score is the thing in question: they say they use it to replace MTF charts, and that it is based on their perceptual megapixel rating, which they nowhere actually define or explain how they measure.
And additionally, the P-Megapixel rating is tied to a camera-lens combination, so it's not even a real measure of pure lens sharpness, which is what I obviously care about when buying lenses. I may own my lenses for 30 years and keep using them on future cameras with 10x the resolution of anything on the market now. It is plenty meaningful to me to know whether a lens can out-resolve the sensor I currently mount it on, because that tells me whether I'm actually paying for something with more future compatibility and quality in an absolute sense.
And I can't just test the lens on a smaller-pixel-pitch camera on their website, because they don't even offer that in the dropdown menu (only bodies that are supposed to be compatible right now).
This also raises the issue that most camera companies' RAW files are not entirely raw: some pre-filtering happens between the sensor and the RAW file even before you touch it, which is yet another reason not to run every lens test on a different camera body.
Which is why I said that they should be testing all of these lenses against a single standardized camera, like the Pentax Q, so that we can get completely controlled LENS-ONLY numbers. That's major issue #1 with their lens ratings. Major issue #2 is that nobody seems to know how they even come up with perceptual megapixels in the first place.
If you wish to talk about sensor scores, an entirely different subject, that's a bit more defined, but still all messed up. And the author of your own link disagrees with most of the points you made.
Low ISO bias
If you compare the DxOMark data in Figure 7 for a number of prominent cameras you would get a more balanced impression about which camera to buy than by just looking at the overall DxOMark Sensor score. If you focus on the latter, you would strongly prefer the Nikon D800 with its excellent low ISO dynamic range. But this emphasizes one aspect of the sensor (essentially the ability to do single shot HDR) that provides a capability we never had in the past. It is a feature which we may infrequently need – and one that some types of users may never see (e.g. if you shoot JPG).
However, at sufficiently high ISO, other models win. High ISO usage may be a more relevant usage for many users than HDR ability at low ISO.
One can therefore ask whether DxOMark hasn’t overstressed low ISO noise[75]. This may explain why some reviewers arrive at different conclusions about the image quality of the Canon 5D3 (or 1Dx) compared to the Nikon D800 (or D4).
To DxOMark’s credit, the user does get three detailed scores to choose from. So you can focus on “dynamic range” if you need single-shot HDR like capability and “low light ISO” if you need to boost your ISO settings often.
The author specifically addresses the possibility that the DxO Mark score is biased in a way that would explain the frequently-raised 6D/5D3 vs. D800 ISO performance issue: it emphasizes parts of the ISO range that don't really differ much between cameras in real usage. That was the topic of the last thread where I commented on DxO, to which I assume you are referring.
He goes on to make more than one other complaint about their ISO measuring techniques, and I must say I agree with him. They bizarrely cut off the ISO quality metric as soon as the body falls below any one of three separate thresholds (signal-to-noise ratio, dynamic range, and color depth), which is a pretty terrible way to score anything. Two of the three measures could be way above threshold while one falls just below, and the score stops right there; meanwhile another camera could sit barely at threshold on all three, hang on a little higher up the ISO range, and get a better score, despite averaging far worse across the three values than the first camera!

And why is dynamic range even INCLUDED AT ALL in a noise measurement? They already publish a dynamic range measurement separately, and now they are counting it yet again in the ISO noise measurement as well. Noise can be (and often is) measured without any dynamic range at all, just by taking pictures of black frames. The two variables are not necessarily dependent, so why are they being made dependent here? Color depth is also questionable, since luminance noise is almost always a dealbreaker before chroma noise is, so I'm not sure why I should care much about color depth dropoff at super-high ISOs where I have bigger problems to worry about.
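To make the complaint concrete, here is a minimal sketch of how that kind of threshold cutoff can rank cameras backwards. The numbers for both cameras are completely made up, and the scoring function is my reading of their published definition (highest ISO at which SNR, dynamic range, and color depth all still meet their floors), not DxO's actual formula:

```python
# Hypothetical illustration of the "score ends at the first missed threshold" problem.
# All measurements below are invented; the floors roughly follow DxO's published
# definition of the Low-Light ISO score (SNR >= 30 dB, DR >= 9 EV, color >= 18 bits),
# but the scoring logic is my interpretation, not their actual code.

FLOORS = {"snr_db": 30.0, "dr_ev": 9.0, "color_bits": 18.0}

# (ISO, SNR in dB, dynamic range in EV, color depth in bits)
camera_a = [  # two metrics comfortably above their floors, one dips early
    (800,  38.0, 11.5, 22.0),
    (1600, 35.0, 10.5, 17.5),  # color depth misses 18 bits -> score stops at ISO 800
]
camera_b = [  # all three metrics barely scrape by, but hang on one stop longer
    (800,  31.0,  9.5, 18.5),
    (1600, 30.2,  9.1, 18.1),
    (3200, 28.0,  8.5, 17.0),
]

def low_light_score(curve):
    """Highest ISO at which ALL THREE metrics still meet their floors."""
    passing = [iso for iso, snr, dr, bits in curve
               if snr >= FLOORS["snr_db"]
               and dr >= FLOORS["dr_ev"]
               and bits >= FLOORS["color_bits"]]
    return max(passing) if passing else None

print(low_light_score(camera_a))  # 800
print(low_light_score(camera_b))  # 1600 -- "wins" despite far worse SNR and DR
```

On these invented numbers, camera B posts the higher score even though camera A is dramatically better on SNR and dynamic range throughout the range.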
They should just publish all three values as curves across every tested ISO. Then they can apply whatever thresholds they want, and I can go look at the actual data if I disagree. It seems ridiculous to me to gather a bunch of data that would be useful to people and then not publish it, forcing everyone to rely on your arbitrary threshold interpretations of that data. That can and probably does lead to really strange and possibly misleading results. Maybe NOT misleading, too, but isn't the purpose of a benchmark to tell us things objectively, not to leave us with even more questions than we began with?
If they did this, I could just look at the signal-to-noise ratio alone (which is all I think they should be including in a high-ISO noise score), apply my own threshold for acceptability based on past shots from the lesser camera I already own, and get exactly what I need to make a decision, instead of exactly what DxO thinks I need plus five other things all wrapped together mysteriously.
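As an illustration, if the per-ISO SNR curve were published, reading off your own floor would be trivial. The curve below is invented, and the 31 dB floor is just a stand-in for whatever you'd derive from your own camera's acceptable shots:

```python
# Sketch of what a published per-ISO SNR curve would let a user do: apply their
# own floor and read off the highest acceptable ISO. All numbers here are invented.
import numpy as np

iso    = np.array([100, 200, 400, 800, 1600, 3200, 6400])
snr_db = np.array([42.0, 40.5, 38.0, 35.0, 32.0, 29.0, 26.0])  # hypothetical curve

MY_FLOOR_DB = 31.0  # e.g. derived from shots I already find acceptable on my current body

# SNR falls monotonically with ISO here, so invert the curve and interpolate.
# np.interp requires ascending x values, hence the [::-1] reversals.
max_acceptable_iso = np.interp(MY_FLOOR_DB, snr_db[::-1], iso[::-1])
print(f"Highest ISO meeting my floor: ~{max_acceptable_iso:.0f}")  # ~2133 here
```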
(Note: I'm not talking about the three headline sensor scores. I'm talking about the three sub-scores that apparently go into their "Sports" rating.)
The author of your link also agrees with me that they should share their actual formulas for blending things:
DxO does not document how the final DxOMark Camera Sensor score is computed from the individual Dynamic Range, Color Sensitivity and Low-Light ISO scores. I feel it should be provided as the overall score gets a lot of attention.
He focuses on the master score, but from everything I have read, there is a similar lack of formulae for the other intermediate component scores, and even less information for many of the raw individual measurements. The same logic would apply to those.
DxO Mark is not making any products. They are not Boeing or Nikon or Coca-Cola. They are a testing agency that provides scientific comparisons.
An equivalent company for other products would not be any manufacturer, but some place like Underwriters Laboratories, who DO, in fact, make all or most of their standards publicly available. They don't sit there and tell you exactly how they built their rigs and crap, but if you're looking at, say, a roofing fire resistance test, you can look up that standard's code and see that test #1 requires exactly "1400 degree heat source application in 12 mph winds for 10 minutes with an allowable flame spread on the material of 6 feet or less during that time" or whatever, test #2 allows 12 feet of fire spread, etc.
On many product tests, they actually provide full data curves, usually whenever there is actually a curve to be provided. The roof test is just pass/fail, for instance, but other products may show different failure percentages at different loads, etc., and they seem to usually publish all the data they collect.
It's not hidden away at all. Yet somehow, magically, UL stays in business!