luminance histogram

malbec

I was asked in a photography workshop: considering the luminance histogram, what does it imply about the dynamic range of the scene?

I answered that the histogram is the average of the RGB channels, and that it implies the dynamic range of the scene allows images to represent more accurately the wide range of intensity levels found in the scene.

Does that sound about right?
 
I don't know if it's more accurate...if it's an average.
I think an RGB histogram would be more accurate because you have three exact histograms to look at.
 
I don't know if it is the average. I had always thought that if you took the R, G, and B histograms and laid all three on top of each other you would have the normal histogram. But now I would like to know.
 
If you are only looking at the luminance histogram, it may not look like you have any blown highlights...when you might actually have some.

For example, the red and green channels might be well within the dynamic range while the blue channel is blown out and clipped. So what would the luminance histogram show? If it's an average, it might show that you haven't clipped any highlights at all.
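To make that concrete, here's a tiny sketch (the pixel values are made up, and the weights are the ones quoted later in the thread) showing that an averaged or weighted luminance value can sit well below 255 even when one channel is clipped:

# Hypothetical pixel: blue channel fully clipped, red and green comfortable.
r, g, b = 180, 200, 255

avg = (r + g + b) / 3                    # simple average      -> ~211.7
luma = 0.299 * r + 0.587 * g + 0.114 * b  # weighted luminance -> ~200.3

# Either way, nothing is anywhere near 255, so the luminance histogram
# gives no hint that the blue channel is blown out.
print(round(avg, 1), round(luma, 1))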
 
Nah, the histogram is a nice way of seeing whether you're fooking up really badly or not, by either severely underexposing or overexposing. Other than that, it's really anyone's guess; the way a histogram is calculated for any particular image varies depending on what's being used to view it. In the case of RAW files, the histogram is usually generated from the embedded JPEG preview rather than from the raw data itself.
 
If the actual question was, "What does [the luminance histogram] imply about the dynamic range of the scene?", then I don't think you are correct.

The histogram will show you what "window" you have captured of the dynamic range of the scene. I don't remember what most cameras have these days, but I think the sensor records around 12 bits, meaning it can distinguish 2^12 (4096) different intensity levels, or roughly 12 EV. By comparison, the human eye sees about 10-14 EV. A true 16-bit detector would see about 16 EV of dynamic range.
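A quick sketch of that bits-to-EV reasoning, under the simplifying assumption that each extra bit doubles the recordable intensity range (real sensors lose some of this to noise):

import math

def theoretical_ev(bit_depth: int) -> float:
    """Theoretical dynamic range in EV (stops) for an ideal linear sensor."""
    levels = 2 ** bit_depth      # e.g. 12 bits -> 4096 levels
    return math.log2(levels)     # 4096 levels -> 12 stops

print(theoretical_ev(12))  # 12.0
print(theoretical_ev(16))  # 16.0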

A scene may contain much more or much less. The full moon is a good example. The reflectivity across its surface ranges from roughly 4% to 10%, so its dynamic range is fairly small, only around 6 EV (I'm estimating that from experience). If you look at the luminance histogram of a properly exposed shot of just the full moon, you will see a nice curve with near-zero tails on both the black and white sides (left and right, respectively). That tells you that, for what you shot, you have captured the dynamic range (yay!). If you over-exposed, you will see the histogram pushed up against the right side and that smooth tail will be gone. That tells you that you did not properly capture the range.

But it really doesn't say much about the actual dynamic range of a scene, just what you've captured relative to that "window" of dynamic range you're looking at. For example, let's say you're in a canyon with brightly lit walls right next to walls in shadow. The difference between the sun-lit walls and the shadowed walls may be 20 EV. You cannot capture that in a single exposure with a modern digital camera. When you take the photo and properly expose for, say, the sun-lit part, there may not be enough shadowed pixels to show you a hump pushed up against the left (black) edge of the histogram. You may see just a hint of it, while you'll see a nice hump in the middle from the sun-lit wall pixels and a tail up towards the white end.

In other words, the luminance histogram will only show you how well you have captured the dynamic range within the window you're looking at. You cannot use it to really know what the dynamic range of the entire scene was without also looking at the resulting image to see whether there are a few completely saturated or under-exposed pixels. And even if the luminance histogram does show those pixels, squished against the left or right edge, you will only be able to say that the dynamic range of your capture was not enough for the scene; you can only set a lower bound on the scene's range.
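As a rough illustration of that last point, here is a minimal sketch (assuming an 8-bit luminance image already loaded as a NumPy array) that only tells you whether pixels are piled up at the edges, not how far beyond them the scene actually extended:

import numpy as np

def clipping_fractions(lum: np.ndarray):
    """Fraction of pixels piled up at the black and white ends of an
    8-bit luminance image. A non-zero result only proves the capture
    window was too small; it says nothing about how many extra stops
    the scene itself contained."""
    return float(np.mean(lum == 0)), float(np.mean(lum == 255))

# Hypothetical usage, with `lum` loaded from a greyscale image elsewhere:
# shadows, highlights = clipping_fractions(lum)
# print(f"crushed shadows: {shadows:.2%}, blown highlights: {highlights:.2%}")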
 
I was intrigued by whether the luminance histogram really is the average of the RGB channels, so I spent the last 20 minutes playing in Photoshop, and here's what I have come up with. A histogram is just a count of pixels at each possible value. The red, green, and blue histograms, which are the simplest ones to understand, may still surprise you.

Assume you have an image made of 3 stripes: one pure red, one pure green, one pure blue, each taking up a third of the image. Here is what the histograms show:

Each channel histogram (R, G, and B) has a spike at 255 that is half the height of its spike at 0. This may sound surprising at first, but you need to realise that in the red channel two thirds of the pixels are 0 and only one third are 255, and the same goes for green and blue.
This should instantly show why the per-channel histograms can't give you the distribution of tone in an image. There is no black anywhere in the image, yet every channel's histogram shows far more zero values than it does colour.

The combined RGB (colour) histogram is effectively the three channel histograms averaged together. Again it shows twice as many pixels at pure dark as at pure light, which is misleading, but it tells us the counts come straight from the individual channels. So yes, if you lay each channel histogram on top of the others and average them you end up with a histogram, the RGB histogram, but it is not a "normal" tonal one: it doesn't take into account that the three stripes don't look equally bright to my eye.
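A little sketch of that three-stripe experiment, using NumPy instead of Photoshop and treating the "RGB" histogram as the three channel histograms simply combined (which is what the counts above suggest is happening):

import numpy as np

h, w = 300, 300  # 300x300 test image, three 100-pixel-wide vertical stripes
img = np.zeros((h, w, 3), dtype=np.uint8)
img[:, :100, 0] = 255     # left third:   pure red   (255, 0, 0)
img[:, 100:200, 1] = 255  # middle third: pure green (0, 255, 0)
img[:, 200:, 2] = 255     # right third:  pure blue  (0, 0, 255)

for name, ch in zip("RGB", range(3)):
    counts, _ = np.histogram(img[..., ch], bins=256, range=(0, 256))
    # Each channel: two thirds of the pixels at 0, one third at 255.
    print(name, counts[0], counts[255])   # 60000 at 0, 30000 at 255

# "RGB" histogram as the channel histograms laid on top of each other:
combined = sum(np.histogram(img[..., c], bins=256, range=(0, 256))[0] for c in range(3))
print(combined[0], combined[255])  # twice as many counts at 0 as at 255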


The luminosity histogram, however, doesn't concern itself with the value of any single channel. Instead it looks at the perceived (note that word, it's very important) brightness of each individual pixel. In this histogram you will see 3 distinct peaks, all sitting towards the low-to-middle part of the range. None of them are at white and none are at black, since none of the 3 colours is perceived as pure white or pure black, which is exactly what you'd expect.

What may not be expected is that pure blue is assigned a value of about 28, pure red 76, and pure green 150. This is easily explained: the eye perceives the brightness of each primary differently. There are a few different standards for colour-to-brightness conversion used by various television and media formats, some more accurate than others (I feel for Americans and your NTSC), but they all boil down to a weighted sum of the channels.


Anyway, each pixel is counted by the weighted sum Y = 0.299*R + 0.587*G + 0.114*B, which reflects how each channel contributes differently to perceived brightness. It may come as no surprise that 0.299 + 0.587 + 0.114 = 1, so pure white (255, 255, 255) gives Y = 255 on the histogram, and any other colour gives its perceived brightness.
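A quick check of those peak positions with the same weights (the small differences from the 28/76/150 values reported above come down to rounding):

def luma(r: int, g: int, b: int) -> float:
    """Perceived brightness using the Rec. 601-style weights above."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma(255, 0, 0))      # ~76.2  -> the red peak
print(luma(0, 255, 0))      # ~149.7 -> the green peak
print(luma(0, 0, 255))      # ~29.1  -> the blue peak
print(luma(255, 255, 255))  # 255.0  -> pure white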

In a quick google I also found out that this linear formula is only an approximation; perceived brightness is better modelled by a weighted root-mean-square, sqrt(0.299*R^2 + 0.587*G^2 + 0.114*B^2). However, since that involves 3 squarings and a square root per pixel, no one would want to wait for Photoshop or their camera to calculate that histogram on a 15 MP image :)
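A sketch comparing the two, under the assumption that the more accurate formula is the weighted root-mean-square version described above:

import math

def luma_linear(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_rms(r, g, b):
    # Weighted root-mean-square: 3 squarings and a square root per pixel.
    return math.sqrt(0.299 * r**2 + 0.587 * g**2 + 0.114 * b**2)

# The two agree for greys but diverge for saturated colours.
print(luma_linear(128, 128, 128), luma_rms(128, 128, 128))  # 128.0 vs 128.0
print(luma_linear(255, 0, 0), luma_rms(255, 0, 0))          # ~76.2 vs ~139.4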

In addition, the formula above is what Photoshop appears to use. Cameras are much more likely to use yet another approximation: Y = 0.375*R + 0.5*G + 0.125*B. It's a bit less accurate but has the distinct advantage of involving very basic maths. It can be written as Y = (3*R + 4*G + B) / 8, which in micro-controller code is Y = (R + R + R + G + G + G + G + B) >> 3 and is a VERY efficient integer operation compared to the more accurate methods above.
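For completeness, a sketch of that shift-based approximation next to the 0.299/0.587/0.114 version (what any particular camera actually uses internally is, of course, an assumption):

def luma_fast(r: int, g: int, b: int) -> int:
    # (3R + 4G + B) / 8 using only adds and a bit shift.
    return (r + r + r + g + g + g + g + b) >> 3

def luma_float(r: int, g: int, b: int) -> float:
    return 0.299 * r + 0.587 * g + 0.114 * b

for rgb in [(255, 0, 0), (0, 255, 0), (0, 0, 255), (200, 180, 90)]:
    print(rgb, luma_fast(*rgb), round(luma_float(*rgb), 1))
# (255, 0, 0)    -> 95  vs 76.2
# (0, 255, 0)    -> 127 vs 149.7
# (0, 0, 255)    -> 31  vs 29.1
# (200, 180, 90) -> 176 vs 175.7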


You probably didn't want to know half of that. I need a life.
 
