Why Bother?

superhornet59 · TPF Noob! · Joined Apr 24, 2006 · Messages: 60 · Reaction score: 0 · Location: Ontario, Canada
Hey all, I don't want to create an uproar from all you HDR lovers out there, but seriously, what exactly is the point of using multiple exposures? The only thing I can imagine is lower noise in shadows.

See, if you shoot JPEG, you have an 8-bit-per-channel image x 3 (RGB) for a total of 24 bits. Now people will tell you, 'Well, that gives you 16 million colors, what more do you need?' The problem is that you only get 256 brightness levels per channel. In terms of tonal range (dynamic range) that sucks. Now, if you shoot RAW and have a lower-end SLR, you have 12 bits per channel, or 4096 brightness levels (and roughly 68.7 BILLION colors!). If you've gone the extra mile and bought something high end, you're getting 14 bits per channel, or 16384 brightness levels (for a whopping 4.4 TRILLION colors!).

My point is, say you try to capture the shadows, the midtones, and the highlights with 3 JPEGs: each gives you 256 brightness levels to work with. Let's take the best-case scenario, where the images have ZERO overlap (the highlights cover all 256 levels of one photo while everything else is flat black, the shadows cover all 256 of another while everything else is white, and the 3rd captures the midtones in between). You still only have 768 brightness levels, which is not even a fifth of the brightness levels a single 12-bit RAW captures.
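The arithmetic above is easy to check for yourself; here's a quick sketch (my own illustrative helper names, nothing camera-specific):

```python
# Brightness levels and total color counts for common per-channel bit depths.
def levels(bits):
    return 2 ** bits          # distinct brightness levels per channel

def colors(bits):
    return levels(bits) ** 3  # three channels: R, G, B

print(levels(8))    # 256 levels, 16,777,216 colors (8-bit JPEG)
print(levels(12))   # 4096 levels (12-bit RAW), ~68.7 billion colors
print(levels(14))   # 16384 levels (14-bit RAW), ~4.4 trillion colors

# Best case for three non-overlapping 8-bit JPEG exposures:
jpeg_bracket = 3 * levels(8)      # 768 distinct levels total
print(jpeg_bracket / levels(12))  # 0.1875, under a fifth of a 12-bit RAW
```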

Details are changes in luminosity in each color channel. By using more bits per channel, you get a wider range, allowing you to capture more shadow detail and highlight detail.

That's why we see these 'HDR from one photo' images: because there is so much data in that one single RAW photo. You might say combining 3 RAW files gives 'even more' quality, but what kind of color space are you going to handle the final file in? All those extra tones are just going to go to waste, as there is no medium to view/print them on, so you have to compress in the end anyway.

The only problem with a single-image HDR is noise in the shadows when they are brightened... but if you're in a situation where you need to use a high ISO (i.e. moving subjects/camera), you probably don't have the luxury of taking multiple exposures anyway.

So my question: why does everyone still bother? Pop a photo into Photoshop RAW and play around with the exposure. Look how much you *really* have in the highlights and shadows that your MONITOR can't display all at once (that's where 'recover highlights' and 'fill shadows' come into play, compressing the dynamic range into one that you can display/print).

Make the most of your camera's amazing sensor. Technology has come a long way and can capture all the detail you will ever really need (unless you are trying to photograph the surface of the sun and the stars around it at the same time). Multi-exposure HDR should be a thing of the past... just a 'look what I can do in Photoshop' novelty.
 
A lot of people (maybe most) won't like your post, but yeah, I've read in many HDR tutorials that one RAW file will do the trick for an HDR shot. I've also read not to use one JPEG, though, as it won't give half as good a result.

Come to think of it, here's a video of it: http://www.youtube.com/watch?v=NkVozL3QEx4
 
You are missing the point of what HDR is for. It is meant to capture a range beyond what 1 or even 2-3 exposures can give you.
That range of light is too large for an image sensor to capture, RAW or otherwise, but it is only needed in situations where the contrast is that large, e.g. the inside of a church.

'HDRs' made from 3 images or a single RAW file usually don't need to be HDR in the first place.
 
If the shadows/dark areas go to black or the bright areas go to white there will be no information to recover. In those situations you will need to create additional exposures to capture that information. Situations where you can get all of the information from one exposure are shots that don't require HDRi in the first place. No amount of PP work will bring detail back to a sky that was rendered pure white or a tree that was rendered black.
 

Even at a low ISO, if you have dark shadows you will get noise in them when you try to brighten them up. Digital (even RAW) still has less dynamic range than film (at present at least; it's improving over time), so there is even more reason for digital shooters to use filters like ND grads, as well as methods like HDR, to capture scenes that contain wide dynamic ranges.

Arch's example is a great one to show this effect: if you expose for the dark indoor shadows, the stained glass windows will overexpose far beyond what RAW can pull back; similarly, if you expose for the windows, the shadows will be very dark to black. And where the camera records total over- or underexposure, it records only black or white data: no contrast changes, no details, just white or black.

So the method is still needed. Of course there are those who will shoot and work with multiple exposures when they don't need to (no harm in a little insurance, though), and those who will use the HDR method in ways and for situations that others would not, but the method is still very valid. Heck, you don't even have to use it "perfectly", and often the best uses are where it's there to give a little edge, but is subtle enough to blend in and not "look like an HDR cartoon".
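The merging step being defended here can be sketched in a few lines. This is a toy version of the classic weighted-average approach to combining bracketed exposures into relative radiance (scalar pixels, a hat-shaped weight of my own choosing, not any particular library's implementation):

```python
# Minimal HDR-merge sketch: combine bracketed exposures into relative
# radiance. Pixels are floats in [0, 1]; times are exposure times in
# seconds. The weight favors well-exposed mid-tone pixels and ignores
# clipped ones.

def weight(p):
    # Hat function: 0 at the clipped ends (0 and 1), 1 at mid-gray.
    return 1.0 - abs(2.0 * p - 1.0)

def merge(exposures, times):
    """exposures: list of equal-length pixel lists; times: matching exposure times."""
    merged = []
    for pixels in zip(*exposures):
        num = sum(weight(p) * (p / t) for p, t in zip(pixels, times))
        den = sum(weight(p) for p in pixels)
        merged.append(num / den if den > 0 else pixels[0] / times[0])
    return merged

# Same scene point captured at 1/100 s (dark) and 1/25 s (bright):
radiance = merge([[0.2], [0.8]], [0.01, 0.04])
print(radiance)  # both observations agree on roughly 20 units of radiance
```

The point of dividing each pixel by its exposure time is that differently exposed shots of the same scene point should agree on its underlying brightness, which is exactly the information a single clipped exposure loses.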
 
Superhornet, have you tried making an HDR? If you have, why not post it? If you never have, then why the concern over whether it's a thing of the past or not? It seems to me you've read a few articles, got a few numbers in your head, and interpreted them all wrong. While there would be many advantages to a true single-shot HDR image, it just isn't possible at this time. So multiple shots are still necessary.
 
I'd be glad to upload some images sometime; I've been doing this for a long time (though, I admit, I haven't been visiting TPF). I have images back on my home computer (from which I am temporarily away) that I will gladly display. But in the meantime I will appeal to your logic:

Let me put it this way: when you take the multiple shots for your HDR image, what do you bracket? Shutter speed? Aperture? Well, what about ISO? Would that work? But what is ISO? The amount of signal amplification the sensor applies when capturing a given image. And what does that mean in a RAW file? It means about as much as white balance: nothing. See, if you were to single out one photoreceptor on your sensor, and say that during a given photo it detected 18 photons, then whether you are at ISO 100 or 1600, you still only detected 18 photons. The brightness you see in the RAW viewer is caused by the same thing as contrast settings and tonal curves (whether you shot 'VIVID' or 'PORTRAIT'). It is not actual RAW data, just an additional 'filter' which results in the image you see on your screen. Yes, signal amplification in camera is often an analogue process, but there is much information out there on ISO settings vs post-capture 'pushing', and the final result is actually very similar. In other words, adding exposure, or amplifying the RAW signal post-capture in parts of the image, is the same as using 3 different images with different ISO settings.
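The poster's 18-photon argument can be written out as a toy model. All the numbers and names here are illustrative, and note that this is the argument's idealization: on real cameras analog gain is applied before the ADC, so ISO does affect what the RAW file records.

```python
# Toy model of the post's claim: the photon count at a photosite is
# fixed by the scene; ISO is modeled as a pure multiplier applied to
# that count. (Real sensors amplify before digitization, so treat this
# as the idealized version of the argument, not actual camera behavior.)

FULL_WELL = 4096  # hypothetical saturation ceiling, in counts

def develop(photons, gain):
    # Amplify and clip at the sensor/ADC ceiling.
    return min(photons * gain, FULL_WELL)

photons = 18                                 # what the photosite saw
iso_1600_in_camera = develop(photons, 16)    # gain applied "at capture"
iso_100_pushed = develop(photons, 1) * 16    # gain applied in post
print(iso_1600_in_camera == iso_100_pushed)  # True in this idealization
```

The model also shows where the claim breaks down: once `photons * gain` hits `FULL_WELL`, the in-camera and pushed versions diverge, because the clipped value can't be un-multiplied.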


You know, most LCD monitors (CRTs are only a bit better) only display 8 bits of information per channel. If you read my OP you know that 256 levels is just 1/16th of what your 12-bit sensor captures, let alone a 14-bit sensor. While on the camera's LCD or on your monitor the highlights may seem blown out and the shadows flat black in high-contrast photos, it's only because the monitor cannot display the full range; the data is there, in the RAW file. All you need to do is 'compress' the curve back down to 8 bits, by brightening the shadows and darkening the highlights so it can 'fit' in the 8 bits the monitor shows. All of which can be done in any good RAW editor.
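The 'compress the curve' step looks something like this. A sketch only: the gamma exponent is an illustrative choice, not what any particular RAW editor uses, but it shows why a curve preserves shadow separation that a plain linear rescale destroys:

```python
# Squeezing a 12-bit tonal range (0..4095) into the 8 bits (0..255) a
# monitor can show. A linear scale crushes nearby shadow values into
# the same output level; a gamma-style curve keeps them distinct,
# which is roughly what shadow/fill sliders are doing.

def linear_to_8bit(v12):
    return round(v12 / 4095 * 255)

def curved_to_8bit(v12, gamma=2.2):
    return round((v12 / 4095) ** (1 / gamma) * 255)

deep_shadows = [10, 20, 40]  # three distinct 12-bit shadow values
print([linear_to_8bit(v) for v in deep_shadows])  # [1, 1, 2]: detail merged
print([curved_to_8bit(v) for v in deep_shadows])  # three distinct values
```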


Like I said, you're only seeing 1/16th of the available tonal levels at any given exposure... now tell me... do you really think bracketing the exposure by 1 or 2 stops is going to make a difference in real-life shooting? You must be kidding; that's sheer ignorance.

The multiple-exposure HDR technique may be necessary, but only in *extreme* cases which many photographers will never face. I'm referring to something like photographing the stars and a spot-lit subject at the same time. You will almost never need THAT wide a tonal range. The only other reason I can imagine is if you need extremely low noise in the shadows, but if you need that kind of extreme accuracy you should be using a medium format camera anyway.


Don't believe me on all this? Search 'HDR from single RAW' in Google Images.

Let's face it, a lot of the 'established' guys aren't familiar with digital technologies (and likely get headaches talking about bits per channel), and who do the new, young photographers learn from, hmm? I'm posting this because I come from an engineering background and know my way around these technologies.

If you like the novelty of messing around with Photoshop, go ahead. But I hate to see great potential moving-subject HDR shots get missed because the photographer didn't think he had the tonal range available that he needed, which is completely untrue!
 
Why bother indeed!

Why bother bitching about it? Certain people like certain things.
Some people like automatic transmissions while others prefer manual.
Some like blue while others like pink.
Some use filters and others use software.
Some even *gasp* use different methods of processing!

At the end of the day, who really gives two peanut decorated ****s how the image was created? If you like it, awesome! If not, move along. Give your 2 cents on why you like it or why you hate it.

When I send something in for repair I really don't care what they did to fix it. As long as it works I'm cool with it. I apply the same thought to an image.

If you disagree, I really don't care. I'll consider what you have to say and we can agree to disagree:)
 
No one is disagreeing that producing a fake HDR from a single RAW file is possible. Heck, many times it's as you say: an action shot with a high dynamic range benefiting from processing the same RAW two or even three times to pull the best data out of the darks and the whites, so as to give a final improved version of the shot.

However, you appear to be ignoring any situation where the scene before the camera cannot be recorded with its full tonal range. You are also ignoring the noise generated by adding light to the darker shadows of a shot: RAW or JPEG, you will get noise there, and if you don't want that noise you have to expose for that area. That means a second shot for those darker spots, which might well blow out brighter areas far beyond a RAW file's ability to store the data.

Remember: full over- or underexposure is recorded as pure white or black on the sensor, and thus in the RAW file. No data to restore, no details to uncover.
 
I'm not sure how you can call those "fake" HDR if they do indeed pull more detail from both the shadows and the highlights than was *visible* (though it was recorded in the RAW) beforehand. That sounds like increasing the perceived dynamic range.

Anyway, it's a courtesy thing. I can capture images of very high contrast which you would often not dare to, simply because I know how to manipulate my images better. I'm only trying to broaden your knowledge of image-capture techniques. If you would like to stick your nose up high, then feel free to press the "Back" button of your browser and never look back at this thread. I deal with enough ignorance in my line of work, and get paid a healthy salary to know better than naysayers and improve old ideas. The 'why bother nagging' technique goes both ways; the difference is I provided data and supported my argument with proof, whereas you can only criticize. It's an engineering thing.

I posted this to inform open-minded photographers about the capability of the camera they have, not to argue with old fools who don't know how to create an Excel spreadsheet, let alone understand the difference between floating-point vs integer color data storage on computers.

There certainly are situations where multiple exposures are required, but what I am saying is that you underestimate the capability of your camera and take such measures when they are not necessary, and are therefore missing out on opportunities for fantastic photographs.

As for the 'noise in the shadows' argument, I already went over that earlier. If noise is a large concern, you are likely using a very low ISO, which means speed (and therefore motion capture) is not your biggest priority, and yes, you might as well take multiple exposures.

EDIT: There is no 'lack' of detail, just too poor a signal-to-noise ratio. Never will your sensor record a pixel having ZERO photons hit it. It's only the generated noise that interferes, and that's where post-processing is also important. Only highlights can be blown, when the recorded data (in the form of voltage) exceeds what the sensitive electronics can measure.
 
You seem to be ignoring a point made often in this thread: if the sensor records pure white because something was overexposed, or pure black because something was underexposed, there is no data to recover; adjusting the exposure levels will only change the shades, but will not "recover" the image. Even though you may not be able to "see" the full tonal range of the image on your monitor, the colors are represented by numerical values, and once you've hit the numerical value that represents pure white, it doesn't matter how much or how little you clipped that highlight: it is gone. You can argue and be pompous about it, but I have just provided you (for the second time) with exactly the reasoning why HDRi is still in use. If you are bracketing a single exposure in post, then there are other ways to bring back the detail without going through the effort of making a single exposure into an HDR image (although for some it may be easier).
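The clipping argument is easy to demonstrate numerically. A minimal sketch, using an 8-bit ceiling for simplicity (the same logic applies at the sensor's full-well limit):

```python
# Why clipped highlights can't be "recovered": once a channel stores
# its maximum value, different scene intensities collapse to the same
# number, and no exposure slider can tell them apart afterwards.

WHITE = 255  # ceiling of an 8-bit channel

def record(scene_value):
    return min(scene_value, WHITE)  # clipping at capture

bright_sky = record(300)
brighter_sun = record(900)
print(bright_sky == brighter_sun)  # True: both stored as 255

# "Pulling back" exposure in post just darkens the same stored value:
print(record(300) * 0.5 == record(900) * 0.5)  # still identical
```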
 
If anyone is interested in this, I suggest you look up the definition of troll and trolling. Superhornet hasn't got a bloody clue what he is talking about, and encouraging him to babble on will only encourage him to keep trolling.
 

In that case I am afraid I will have to explain, for the third time, that I understand there are situations where multiple exposures are necessary, but I maintain that, based on many of the HDR images I see, it was not necessary in the situation it was employed in. Obviously no sensor can capture the infinite dynamic range that exists in the universe, but often the detail is there; it just cannot be displayed on a monitor with a more limited range.

I challenge you to go pop an image into Photoshop RAW and move the exposure slider around, and see how much you can recover from the highlights and how much you can bring out of the shadows. Just because the highlights are blown out and the shadows blacked out on your screen does not mean they are so in the RAW file, and playing with that exposure slider will show you that.

And of course I do not play with exposure to create 3 versions of one image; I just use the functions in the RAW workspace I use.

Here is one example why: 16-Bit Vs. 8-Bit Workflow

See how much more shadow detail the 16-bit version has? Now look at the original photo and see if you can find that same detail. You can't; your screen simply cannot show it. I would have sworn it was 'lost, just flat black' as well, but surprisingly it's there, in the RAW, which is why it could be brought out in 16-bit but not 8-bit.
 

I've been a member here twice as long as you. That should say enough.
 
