
Middle Grey? Really?

I'm trying to get my brain wrapped around this concept...I will work on the three paper deal tonight. I'd like to be able to have that sink in a good bit more.

Think about it this way. The camera does not know the reflectivity of your subject (i.e., whether it is white, grey, or black). All it can do is assume that most scenes have whites and blacks in them and average out to some pre-programmed level: the mythical 18% grey, or 12% grey, or whatever the manufacturer chose.

So, basically, your meter is assuming you are looking at an average scene. It has a pre-defined idea of what it thinks is average or what will fit most circumstances.

You really throw it for a loop if you are metering a completely black or completely white object (like the paper experiment). In the case of the white paper, it will underexpose the shot, dragging the white down to a middle grey. In the case of the black paper, it will overexpose the shot, lifting the black up to a middle grey. Your grey shot should come out pretty close, depending on what shade of grey you use.

This is actually what Derrel showed in his 'example'. When he metered something white, it gave him a faster shutter speed. When he metered a black subject it gave him a much slower shutter speed. Basically, his meter was trying to turn what he was metering into grey because that's the only thing it knows how to do.
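The behaviour described above can be sketched numerically. This is a minimal illustration, not anything a real camera runs: the function name and the reflectance figures (roughly 90% for white paper, 18% for a grey card, 3% for black paper) are assumptions picked for the example, and the meter is assumed to be calibrated to plain 18% grey.

```python
import math

def meter_offset_stops(reflectance, calibration=0.18):
    """Stops by which a reflected-light meter will shift exposure,
    assuming it tries to render whatever it sees as middle grey.
    Positive means the meter underexposes (white subjects),
    negative means it overexposes (black subjects)."""
    return math.log2(reflectance / calibration)

# Illustrative reflectance values (assumed, not measured):
for name, r in [("white paper", 0.90), ("grey card", 0.18), ("black paper", 0.03)]:
    print(f"{name}: meter shifts exposure {meter_offset_stops(r):+.1f} stops toward grey")
```

So under this toy model the white paper gets pulled down by about 2.3 stops and the black paper gets pushed up by about 2.6, which matches the faster/slower shutter speeds in Derrel's example.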
 
Ok. Digesting this. And thank you for your comments.

So now how does all this play out with spot metering and evaluative metering?

At the end of the day, is the overall objective to have colors (and I am thinking of color, not B&W) render true and accurate?

For example, how does all this affect white, and choosing what the highlight is and what color it "should" be?

Do different manufacturers' systems handle this differently, or are they all pretty close to an industry-standard grey calibration? If so, how do they translate that over to the colors in an image, sticking with raw for example? Do some manufacturers accomplish this differently, or better?

Thinking out loud now.
 
Current metering systems do more than just meter everything to grey, in case you didn't know.
 
That depends a lot on how you have your metering set up and what type of scene you are metering. In the white, grey, and black piece of paper case, the meter will return a middle grey. The camera actually does a much better job metering complex scenes than very simple ones.

But regardless of how you have your metering set up, a very bright scene or a very dark scene will fool your meter. You have to understand how your meter works well enough to compensate for it. Derrel showed that in his example. The lighting didn't change, but when the color of the subject changed, the meter changed drastically.
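Compensating for a fooled meter is just powers-of-two arithmetic on the shutter time. A minimal sketch, assuming shutter-priority-style compensation; the function name and the +2-stop figure for a white subject are illustrative, not a rule:

```python
def compensate_shutter(metered_shutter_s, stops):
    """Open up (positive stops) or close down (negative stops)
    from the meter's suggestion by scaling the shutter time
    in powers of two. One stop = double or halve the light."""
    return metered_shutter_s * (2 ** stops)

# Meter reads 1/500 s off a white wall; adding roughly +2 stops
# of exposure compensation keeps the wall white instead of grey.
print(compensate_shutter(1 / 500, 2))  # 0.008 s, i.e. 1/125 s
```

The same call with negative stops handles the black-subject case, where the meter has overexposed and you need to dial exposure back down.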
 
Personally, I do not use evaluative/matrix metering; I use spot metering 90+% of the time. I intentionally dumb down my meter so that I don't have to work as hard at predicting how it will react in a given circumstance.

If only there were a metering mode that metered only the brightest part of the scene.
 
All meters are calibrated to some specific reference, and then compare the value being metered against it. The supermeters in today's cameras just make intelligent decisions about those values over a wide area, to best fit the exposure into a generic, one-size-fits-all program.

I think there is a big misconception about meters and what exactly they are doing, and what "null" specifically means.

If only there were a metering mode that metered only the brightest part of the scene.

This would be very cool. In my 'Unified Zone System' for digital exposure and processing I've established an index which runs from the highest to the lowest recordable value, starting at 0. So one stop under maximum is indexed at 1, two stops under, 2, and so on.

I've daydreamed of a matrix-style meter that scales its readings this way.
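The index described above is easy to express in code. A small sketch, assuming the description as given (0 = at the maximum recordable value, each whole number = one stop under); the function name and the 14-bit raw scale are my assumptions for the example:

```python
import math

def unified_index(value, max_value):
    """Stops below the maximum recordable value, per the
    'Unified Zone System' indexing described above:
    0 = at maximum, 1 = one stop under, 2 = two stops under."""
    return math.log2(max_value / value)

# Assuming a 14-bit raw scale (max value 16383):
print(unified_index(16383, 16383))      # 0.0 -> at clipping
print(unified_index(16383 / 2, 16383))  # 1.0 -> one stop under
```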
 
If only there were a metering mode that metered only the brightest part of the scene.

There is, sort of. It takes about a second to take a test shot and look at the histogram.

Not really accurate at all.
How is a graph that outlines exactly where every pixel is located on the entire recordable scale in all three channels not accurate?

Yes, white balance, tone curves, contrast, and in camera settings influence the histogram, but it's not that difficult to set your settings to get a fairly representative histogram.

If you can't interpret it properly, that is much different than just saying it's not accurate.
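What the test-shot-plus-histogram check is really asking is "how much of the frame is piled up at the top of the scale?" A minimal sketch of that idea; the function name, the 8-bit scale, and the sample pixel values are all made up for illustration:

```python
def clipped_fraction(pixels, max_value=255):
    """Fraction of pixels sitting at the top of the recordable
    scale -- the blown highlights a histogram check looks for."""
    return sum(1 for p in pixels if p >= max_value) / len(pixels)

# Toy sample of 8-bit luminance values:
sample = [12, 200, 255, 255, 90, 255, 180, 30]
print(clipped_fraction(sample))  # 0.375 -> 3 of 8 pixels clipped
```

A real camera does this per channel, which is why the three-channel (RGB) histogram is more trustworthy than the single luminance one.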
 
In my experience the histogram is fairly accurate, provided white balance is set consistently in both the camera and the raw processor.

Shadows tend to read conservatively, but usually by that point there is so much noise that the detail is pretty obscured anyway.

Regardless, understanding your camera's abilities is essential.
 
The issue is that the histogram is generated from a processed JPEG preview, not from the raw data recorded. But generally the in-camera processing settings don't affect the highlights much.

Regardless, it's easy to find the brightest point with the spot meter; it's then just a matter of knowing how much exposure to give it.
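That "how much exposure" step can be sketched too: spot-meter the brightest point (which the meter wants to render as middle grey), then open up by however much highlight headroom the sensor has. A hedged sketch only; the function name is invented and the 2.5-stop default is an assumed, camera-dependent figure, not a measured one:

```python
def expose_for_highlight(spot_metered_shutter_s, headroom_stops=2.5):
    """After spot-metering the brightest part of the scene, open up
    by the sensor's highlight headroom so the highlight lands just
    below clipping instead of at middle grey. The headroom value is
    camera-dependent; 2.5 stops here is an assumption for the sketch."""
    return spot_metered_shutter_s * (2 ** headroom_stops)

# Spot meter says 1/1000 s on the brightest highlight; with an
# assumed 2-stop headroom you'd actually shoot at 1/250 s.
print(expose_for_highlight(1 / 1000, 2))  # 0.004 s, i.e. 1/250 s
```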
 
And does that then take us right back to Zone?
 
Yes. It's just another scale. The difference is that it is referenced to the camera's recordable latitude rather than to a meter-specific reference.

In my system you use the Unified index for exposure and the zone system index for processing.
 
Just to clarify, you can translate between the two scales:

(attached image: 6766787155_58e5c73366_z.jpg)
 
