Exposing to the right

Status
Not open for further replies.
ETTR seems a mixed bag. The original idea came from the technically minded: push everything as far right as it will go (possibly into overexposure, but not clipping), so that in later processing it can be pushed back down to where it should be, which also pushes any noise down, off-scale on the left. I think no one in this thread has mentioned that.
The idea is that data in a raw file is not distributed linearly and that most of the data (bits) recorded is at the high end. By pushing the exposure to the high end (without clipping) you thereby record the most usable data... and then push it back left/down for "proper exposure."
For me, it's too much hassle for "gains" I don't see the need for (and often don't see as real). IME, and for what I do, protecting highlights is of more value.

A lot of the DR increases in modern sensors are in the shadow areas... areas that are not of a lot of use generally, but the same areas that ETTR was meant to benefit.
 
It's about increasing the amount of signal over noise.

But in practice, for me anyway, it's about capturing the widest possible amount of useable ("not-noisy") data so that I can make choices about how it should be rendered in the end photograph. By providing sufficient headroom I have more information to choose how it best reflects the scene.

The problem with ETTR has always been its arbitrary implementation. The technique is sound, and is pretty much the same in concept as the Zone System.
 
I don't really see the point in checking the raw file histograms... it's not something you can "work with." Maybe to get an understanding of how the camera histograms relate to the raw data so that you can more accurately interpret the camera histograms. But that's not much different than just working with the files in your conversion software and determining what's there (recoverable) that wasn't shown in the camera.

It's information. You noted earlier that the JPEG histograms displayed on the camera are not accurate -- you're right, and as you also noted, UniWB is a hassle if not a kludge. I'm not an ETTR fanatic, but I do make exposures carefully with the goal of getting maximum value from my equipment. I learned over 40 years ago that you don't do anything important with untested equipment. Examining the raw histograms lets me precisely test my equipment. Because your raw converter is demosaicing the raw file and applying a tone curve and WB to the RGB data, the information you get there isn't as clean as looking at the raw files prior to demosaicing. It's not something I do with every photo, but I do run periodic checks on my equipment, and at my age on myself, to make sure everything is working.

Joe
 
ETTR is of no value when shooting JPEG only, and the histogram is exactly what it should be. It always very accurately shows data for the JPEG.
Orly?

I suppose these two images look about equally high quality to you?
[Attachments: A.webp, B.webp]
Left: image exposed to the right (ETTR) in JPEG only, then darkened in Photoshop (random ACE Hardware spray bottle on my desk)
Right: image exposed to the left (ETTL) in JPEG only, then brightened in Photoshop
So both were matched in lightness in the end, both received exactly one edit, and both had their full curve within the camera's range.

The ETTL is abysmal, the ETTR looks fine.

I did not touch the focus wheel or move an inch in between them, only changed the exposure compensation. Neither one clipped on either side of the histogram. And the one on the right was at 1/1600th of a second with a 135mm lens, so that is not motion blur (the other one was at 1/100 for comparison).





This is not a surprising result to me. The reason ETTR works is equally applicable to JPEG mode as it is to RAW: the file format has more data slots available in the higher stops of the histogram than in the lower ones, since the number of lightness values per stop is an exponential function of the stop, not a linear one.

The bit depth (8-bit JPEG or 16-bit RAW, whatever) is irrelevant, because this fact is still true at any bit depth, and the number of values available at the left is going to be terrible no matter what your bit depth (maybe if you had a 500-bit camera or something it would be fine, but nothing within the realm of reasonableness).


By the way, this image posted earlier is a brilliant depiction of why to ETTR (if light allows):

View attachment 66354
See how those bars get super spaced out over to the left? That's because the lower stops of the range ONLY HAVE 1, or 2, or 4 lightness values available to them. Look at the numbers just below each graph. That is as precise as the data gets at the left of the histogram! If you were to expose a narrow dynamic range image into the darkest (leftmost) 3 stops, your entire image would be posterized to 7 lightness values total, which would be absolutely horrendous. The spray bottle image above is almost that bad -- that's what ETTL looks like in RAW or JPEG.

Notice that to the right, though, the data comes in thick and plentifully. That's where you want your image to be. Not chopped up into the sparse wasteland of data on the left. As far to the right as possible, where the most data is, short of clipping your highlights.

The rightmost stop in this image (for what I'm guessing is an 11-bit camera) has 1024 lightness values within it. The leftmost has... 1. Which of those sounds like where you want the bulk of your image to be?
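The per-stop arithmetic being described can be sketched in a few lines. This is a toy model of an idealized n-bit linear encoding; it ignores noise, black level, and everything else a real sensor does, and `levels_per_stop` is a made-up helper name:

```python
# Count the distinct integer code values inside each stop of an
# idealized n-bit linear encoding, brightest stop first.
# Stop k (0 = brightest) covers codes in [2**(bits-1-k), 2**(bits-k)).
def levels_per_stop(bits):
    counts = []
    hi = 2 ** bits
    while hi > 1:
        lo = hi // 2
        counts.append(hi - lo)   # integers in [lo, hi)
        hi = lo
    return counts

print(levels_per_stop(11))  # [1024, 512, 256, ..., 4, 2, 1]
```

For 11 bits this gives 1024 values in the brightest stop and a single value in the darkest, matching the graph being discussed.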
 
ETTR is of no value when shooting JPEG only, and the histogram is exactly what it should be. It always very accurately shows data for the JPEG.
Orly?

I suppose these two images look about equally high quality to you?

They appear to be very poor quality images in terms of exposure and editing choices.

This is not a surprising result to me. The reason ETTR works is equally applicable to JPEG mode as it is to RAW: the file format has more data slots available in the higher stops of the histogram than in the lower ones, since the number of lightness values per stop is an exponential function of the stop, not a linear one.

That is true for a "linear" encoding scheme, which the RAW file is but the JPEG is not. The RAW file has twice as many levels at each brighter stop. A JPEG can actually encode 20 stops, though only about 9 are useful. The 9th stop has 6 levels (which means it will appear posterized), the 8th has 7, then 11, then 14, then 19, then 27, 37, and 50 for the next-to-brightest, while the brightest stop still has only 69 levels. Clearly there are not enough even in the brightest stop to allow significant expansion or compression, and in no case does any stop have twice the levels of the stop one below it.
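Those per-stop counts are easy to reproduce. A sketch, assuming a pure 2.2 power-law gamma (a simplification: real sRGB adds a short linear segment near black, so the mid-stop counts land within one or two of the figures above depending on the boundary convention used):

```python
# Count how many of the 255 nonzero 8-bit code values land in each
# photographic stop, assuming a pure 2.2 power-law transfer curve.
import math

def jpeg_levels_per_stop(gamma=2.2, bits=8, stops=10):
    mx = 2 ** bits - 1
    counts = [0] * stops
    for v in range(1, mx + 1):
        linear = (v / mx) ** gamma      # decode back to linear light
        k = int(-math.log2(linear))     # stop index, 0 = brightest
        if k < stops:
            counts[k] += 1
    return counts

print(jpeg_levels_per_stop())  # -> [69, 51, 36, 27, 20, 14, 10, 8, 6, 4]
```

The brightest stop gets 69 of the 255 codes and the 9th stop down gets 6, as quoted; the counts shrink toward black, but far more slowly than a linear encoding's factor of 2 per stop.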

The bit depth (8-bit JPEG or 16-bit RAW, whatever) is irrelevant, because this fact is still true at any bit depth, and the number of values available at the left is going to be terrible no matter what your bit depth (maybe if you had a 500-bit camera or something it would be fine, but nothing within the realm of reasonableness).

But the reason it is not irrelevant is that with JPEG the number of values at the right is also terrible. There just is no room for significant changes to brightness or contrast.

View attachment 66354
See how those bars get super spaced out over to the left? That's because the lower stops of the range ONLY HAVE 1, or 2, or 4 lightness values available to them. Look at the numbers just below each graph. That is as precise as the data gets to the left of the histogram! If you were to expose a narrow dynamic range image to the darkest (leftmost) 3 stops, your entire image would be posterized to 7 lightness values total, which would be absolutely horrendous.

Has anyone suggested doing that?

What is significant though, is that with a 14 bit RAW file (and clearly this is worse by 2 stops with a 12 bit RAW file), there are almost as many levels in the 8th stop of the RAW file as there are in the 1st stop of the JPEG file. That is why it isn't a good idea to try editing JPEG images to correct for any significant amount of exposure error. Thus, ETTR is only valid for RAW files.
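That last comparison is simple arithmetic for an idealized linear raw file: the k-th stop down from the top of an n-bit linear encoding holds 2^(n-k) code values. A sketch (`raw_stop_levels` is a made-up helper name; this ignores black level and noise):

```python
# Levels in the k-th stop down from the top of an n-bit linear raw
# file (k = 1 is the brightest stop): the stop spans code values
# [2**(n-k), 2**(n-k+1)), i.e. 2**(n-k) distinct values.
def raw_stop_levels(bits, k):
    return 2 ** (bits - k)

print(raw_stop_levels(14, 1))  # 8192 levels in the brightest stop
print(raw_stop_levels(14, 8))  # 64 levels in the 8th stop, close to
                               # the ~69 in a JPEG's brightest stop
```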
 
They appear to be very poor quality images in terms of exposure and editing choices.
Obvious avoidance of the evidence is obvious.
This isn't a submission to an art gallery. It's a two-minute ETTR demo with junk on my desk. And it does that quite effectively.
And yes, the whole point is that one of them is the worse choice of exposure and editing. Whenever you don't ETTR, you're making a poor exposure and editing choice (unless you simply don't have the light or time to do so).

That is true for a "linear" encoding scheme, which the RAW file is but the JPEG is not. The RAW file has twice as many levels at each brighter stop. A JPEG can actually encode 20 stops, though only about 9 are useful. The 9th stop has 6 levels (which means it will appear posterized), the 8th has 7, then 11, then 14, then 19, then 27, 37, and 50 for the next-to-brightest, while the brightest stop still has only 69 levels. Clearly there are not enough even in the brightest stop to allow significant expansion or compression, and in no case does any stop have twice the levels of the stop one below it.
If you read carefully you will discover that I never claimed the JPEG has twice as many per stop there. I said that both formats have more slots as you go higher, that it is an exponential function, and that because of this, the logic of why you ETTR is applicable to both formats.


As you yourself just said, the lowest stops of JPEG are not as useful, because they lead to posterization.
Which is a very succinct explanation of why ETTR matters in JPEG. So, thanks!


Has anyone suggested doing that?
No, nobody talks about exposing to the left, but I thought the logic was pretty obvious: it's a continuum. The further to the left you go, the more of the problem you have. ETTL is just the easiest extreme case with which to demonstrate and describe the problem.


In reality, yes, people either ETTR, or they leave the histo in the middle. But the middle is still more to the left, and thus you are throwing data away. And still quite a bit of it, too, depending on the image (low dynamic range images suffer more by comparison, since they could have been ETTR'ed that much more).


What is significant though, is that with a 14 bit RAW file (and clearly this is worse by 2 stops with a 12 bit RAW file), there are almost as many levels in the 8th stop of the RAW file as there are in the 1st stop of the JPEG file. That is why it isn't a good idea to try editing JPEG images to correct for any significant amount of exposure error. Thus, ETTR is only valid for RAW files.
...wat


If you're shooting JPEG, you don't have a RAW file, so the number of levels per stop in RAW files is utterly irrelevant to you. RAW files obviously only matter to people who shoot RAW files. Those who do JPEG start out with only the levels in JPEGs, amongst which ETTR matters almost as much as for anybody else.

Also, it really isn't that bad to edit JPEGs to be DARKER, for the same reasons we are both talking about: you're starting out with more data than you need for your final image if you ETTR and then darken. Massive posterization in JPEG editing primarily arises when you have a dark image and you BRIGHTEN it (in other words, exposing to the left, usually unintentionally).
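That asymmetry (darken-after-ETTR is benign, brighten-after-ETTL posterizes) can be shown with a toy model: quantize a smooth gradient to 8 bits at two exposures and push the dark one back up. This is a deliberate simplification: linear capture, no gamma, no noise, no JPEG compression.

```python
# Toy model: quantize a smooth gradient to 8 bits at two exposures,
# then brighten the underexposed one by 3 stops (x8) to match.
scene = [i / 9999 for i in range(10000)]        # smooth 0..1 gradient

ettr = [round(255 * s) for s in scene]          # exposed to the right
ettl = [round(255 * s / 8) for s in scene]      # 3 stops under
ettl_pushed = [min(255, v * 8) for v in ettl]   # brightened in post

print(len(set(ettr)))         # 256 distinct tones
print(len(set(ettl_pushed)))  # 33 distinct tones: heavy posterization
```

The brightened ETTL version can never get back the tones it never recorded; it ends up with about an eighth of the distinct levels, which is exactly the banding visible in the spray-bottle comparison.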
 
They appear to be very poor quality images in terms of exposure and editing choices.

That is true for a "linear" encoding scheme, which the RAW file is but the JPEG is not. The RAW file has twice as many levels at each brighter stop. A JPEG actually can encode 20 stops though only about 9 are useful.

I think that is misleading (and not about ETTR and then backing it back off). That is only about gamma, of course (and applies to any RGB image, not just JPG). Gamma encodes RGB file data as approximately square-root values, so yes, while encoded, the file's values are spread differently across the coordinate range, but the data still has the same range as in the linear original.

The data in our histograms does show these new gamma values (not our real data, he he he). But our video screens do not show it; they present to our eyes only a reproduction of our original linear data.

I have never understood why Poynton claimed this to be any range advantage for digital data (like you mentioned). One, these are not new data values, not more values with more range in dark data, but just the same tones (one for one) as we already had. But two, and this is the really big deal, humans NEVER look at gamma data. RGB is always decoded back to linear (one way or another, CRT losses or LCD software) before human eyes ever see it. We would not like looking at gamma data. So it is always (hopefully) restored to exactly the same linear data as before. The best possible result is absolutely no change. Anything else is an error.

I can see gamma was an advantage for analog TV broadcast: noise added during RF analog transmission was decoded back out to some slight extent (the lowest values, squared and separated more, something like ETTR in that respect). But this is not applicable to digital data, since noise is not added in the file (and CRC is another issue for digital HDTV broadcast). The standards still do gamma only for data compatibility (all RGB data in the world already has gamma in it, and for old CRT monitors too).

But gamma is just a confusion here, and I certainly do not see it as any argument for JPG. :)
 
That is true for a "linear" encoding scheme, which the RAW file is but the JPEG is not.

linear original.
Nothing is "linear" here... that's the whole point. Everything is exponential. Or in some cases some other things, but never linear.

JPEG is not linear. The data slots available go up exponentially, at roughly 1.35^(stop)
RAW is not linear. The data slots also go up exponentially, at 2^(stop)
Your brain/eyeballs are not linear. On average, they're about 2^(stop), though sensitivity rises more slowly than that in dark light and faster than that in bright light.
F stops are not linear. They go up at 2^(stop)
Histograms aren't linear. They go up at 2^(stop)
Theoretical maximum data available is not linear. Actual discrete information in photons goes up at 2^(stop)
Gamma adjustments on monitors are not linear. It's a power law function.

Gamma also has nothing to do with why you should expose to the right.
The reason is based on the fundamental quantum nature of light: 2 photons per second = 1 stop less than 4 photons per second = 1 stop less than 8 photons per second, etc.
8 photons is only 2 stops higher than 2 photons, yet it has 4x the data precision. So data density/precision goes up faster than stops do, and thus you get the highest data integrity by recording the image at the brightest levels your camera can handle.
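The same point can be made with the standard shot-noise argument: photon arrival is Poisson-distributed, so the noise grows as the square root of the photon count, SNR = N/sqrt(N) = sqrt(N), and each extra stop of exposure buys a factor of sqrt(2) in SNR. A minimal sketch (toy numbers, not tied to any real sensor):

```python
import math

def shot_noise_snr(photons):
    """SNR of a Poisson photon count: signal N over shot noise sqrt(N)."""
    return photons / math.sqrt(photons)   # equals sqrt(photons)

# Two stops (4x) more light doubles the SNR:
print(shot_noise_snr(400) / shot_noise_snr(100))  # 2.0
```

Either way you count it (quantization slots or SNR), more captured light means cleaner data to work with, which is the practical core of ETTR.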



It doesn't matter what technology they invent in the future, or how you crunch the numbers, or what formats you use.
Exposing to the right will ALWAYS forever and ever give you higher data precision than exposing to the middle, as long as photons remain quantized (which is presumably for the rest of the age of the universe)


By the way, ETTR is also valid logic in film photography (as it is, to some extent, in any photographic process): when you pull-process film (overexpose, then underdevelop, i.e. ETTR), you increase the effective latitude / dynamic range of the film, allowing greater detail and more data to survive unclipped in some shots. It isn't used as often as in digital, because it's hard to aim the exposure just right without blowing the highlights when you don't have a histogram to fine-tune in the field, and because film usually has more latitude to work with in the first place, so it just isn't as often necessary. But in certain scenes it is sometimes done on purpose, and it works from the exact same concept: more silver grains in the film react at high exposure, in a NON-LINEAR relationship to stops, and thus more data precision is possible by exposing to the right.
 
Nothing is "linear" here... that's the whole point. Everything is exponential. Or in some cases some other things, but never linear.

You should have stopped earlier.

In video processing, linear is the word used to mean "before gamma encoding", and nonlinear means after gamma. Because, all RGB video has gamma in it.

RAW is not linear. The data slots also go up exponentially, at 2^(stop)

In the math sense, linear means that a 2x change makes a 2x result change (instead of an exponential change).

Raw data is linear in that math sense (because it has not yet been encoded in the video sense). You seem to be claiming that a 2x-per-stop progression is not linear, but the data itself is. We call 2x a stop (twice as much), but the only reason it is twice as much is because the data is linear.

Shutter speed is linear, ISO is linear. So is aperture area, but f-stop numbers come from a circular-area computation, which is not numerically linear.

Raw data is linear (no gamma), but then RGB conversions have gamma, and are not linear.
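The encode-then-decode round trip described above is easy to illustrate with a pure power-law gamma (a simplification of real sRGB, which adds a linear toe near black; the function names here are for illustration only):

```python
# Gamma-encode linear light for storage, then decode back to linear
# for display; the round trip is (up to float error) the identity.
GAMMA = 2.2

def encode(linear):          # linear scene light -> file value
    return linear ** (1 / GAMMA)

def decode(encoded):         # file value -> linear light again
    return encoded ** GAMMA

mid_grey = 0.18                     # ~18% linear reflectance
stored = encode(mid_grey)           # ~0.46: pushed "brighter" in the file
print(round(decode(stored), 6))     # 0.18: restored to linear
```

The file's values are redistributed by the encode step, but nothing new is created: decoding hands back the same linear tones, which is the point being made about gamma being orthogonal to ETTR.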
 
Yes, in terms of actual raw luminance, it is linear.

But I'm not talking in terms of raw luminance, because humans don't see raw luminance, so why should we care about that?
Our perception changes exponentially. We see twice as much light as "one linear step brighter."
Our brains see things as if stops were linear. So what I mean is that there is not a linear relationship between human perception and data value distribution, the two things that actually matter in practice to us as photographers.

Ideally, due to our perception, you would want an equal distribution of data values in each and every stop. That's the assumption people always intuitively make. And that's the assumption that is inherent in the concept of exposing to the middle.

Sorry for any confusion in terms.




Regardless of what one labels "linear" or not from different rhetorical perspectives, though....
Bottom line = exposing to the right still encapsulates more data than exposing to the middle (thus maximum editing latitude), which is the main issue of the thread, and the take-home message for practical photography purposes.
 
Yes, in terms of actual raw luminance, it is linear.

But I'm not talking in terms of raw luminance, because humans don't see raw luminance, so why should we care about that?
Our perception changes exponentially. We see twice as much light as "one linear step brighter"

Yes, of course our human eyes see linear light. That is all there is to see; light is linear.

Our perception may not be, but our non-linear eyes obviously require that we see the original analog linear data (or a faithful reproduction), so that our brain, doing its thing, won't think it was distorted.

The original light is linear. Our raw sensors are linear. By definition. Our RGB is encoded with gamma for other reasons, but it is always decoded back to linear before human eyes see it. Linear is a pretty big deal.
 
Yes okay, so in some situations, you convert it to something else and then convert it back to the original again. And the incoming light is linear to your eyes. So what?
I still fail to see the point of what any of this has to do with the topic of the thread: exposing to the right.

Gamma in between, not gamma in between, brain eyes, whatever, you still get the most data precision by exposing to the right...

The answer to the OP is yes regardless.
 
No. The light that the raw file records is exponential, but the data values that represent it are linear relative to one another. A linear relation is a mathematical fact and not subject to rhetoric.

A lot goes on behind the closed doors of your RAW processor. I have a raw processor that permits no gamma correction to be applied to any of the channels (UniWB with a gamma set to 1.0).

The results are very dark and overwhelmingly green. ETTR means the gamma correction need not be applied as severely to get the desired result. In most RAW processors this is "undone" by a RAW curve adjustment; in RPP it can be set firsthand. In practice it may not really matter, though it would depend on the processing pipeline. As for the native color balance, I believe that a magenta filter would be beneficial, though I haven't tried it yet and cannot say to what extent IQ would be impacted.

What I do know is that a daylight correction pushes the red channel significantly, WAY more than most of us would be comfortable with. So my point is that, no matter how you look at it, your camera sensor sees the world very differently from the way we do, and the raw processor must do all sorts of funny business to hide that.
 
No. The light that the raw file records is exponential, but the data values that represent it are linear relative to one another. A linear relation is a mathematical fact and not subject to rhetoric.

A lot goes on behind the closed doors of your RAW processor. I have a raw processor that permits no gamma correction to be applied to any of the channels (UniWB with a gamma set to 1.0).

The results are very dark and overwhelmingly green. ETTR means the gamma correction need not be applied as severely to get the desired result. In most RAW processors this is "undone" by a RAW curve adjustment; in RPP it can be set firsthand. In practice it may not really matter, though it would depend on the processing pipeline. As for the native color balance, I believe that a magenta filter would be beneficial, though I haven't tried it yet and cannot say to what extent IQ would be impacted.

What I do know is that a daylight correction pushes the red channel significantly, WAY more than most of us would be comfortable with. So my point is that, no matter how you look at it, your camera sensor sees the world very differently from the way we do, and the raw processor must do all sorts of funny business to hide that.

None of which changes the answer to the OP, either, which is still "yes, ETTR if possible within time constraints and without going to too slow a shutter speed."

Anybody can try it themselves and go take a photo exposed to the right, left, center, edit them to be the same exposure, and compare to see the obvious and sometimes massive (in the case of low dynamic range scenes) quality differences.
 
