Probably stupid, but can you do this?

NP, there are more detailed, properly presented pages from both Ward and Debevec from SIGGRAPH 97, 98, and 2001. That's just kind of a quick-read intro thingy. :D It's actually part of the HDRShop utility download page. ;)

Yep, but it's a good example of the difference between lightening/darkening in post and getting the correct exposure straight from the camera, in a roundabout way, so it's quite pertinent to the discussion: you can't rescue lost detail, basically.
 
The "fake HRDIs" you guys are talking about are actually the results of a process called "Tone Mapping".

Thanks :) so we finally arrived back at my post number 8 :) and have reached agreement :)
 
So how are my three chopper pictures above not "real" exposures? The end result has a significant increase in shadow and highlight detail over the first.
 
So how are my three chopper pictures above not "real" exposures? The end result has a significant increase in shadow and highlight detail over the first.

As well as considerable aura and detail loss, since the image information isn't actually there and has to be simulated or guessed by the processing software. Multiple exposures will capture that information accurately where the processing software cannot.

------------------------------------------------

Since everyone else is doing it, below are a couple of my false HDR images done using the same process, give or take; the difference is that the information was there. These are one shot processed at five different exposure settings.

001_2_2.jpg


002_post_2.jpg
 
So my question is: is there any way to do the tone mapping without adjusting the picture to three different "fake" exposures and using an HDR program? I.e., something that can directly do the tone mapping and produce the same or similar results from one file?

By doing this, it will not only prove that using one image is tone mapping, but also make it easier to produce the richer images in the future. At the same time, the difference between a one-file fake HDR and a three-file real HDR could be demonstrated.

-DATAstrm
 
Hopefully this can clarify things by restating what K_Pugh already explained:

The easiest way to know if the image can benefit from HDR is by looking at the histogram. If you look at the darkest shadows and they're completely black (i.e. crawling up the left edge of the histogram) and if you look at the brightest highlights and they're completely white (i.e. crawling up the right edge of the histogram), the image could use HDR.
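
To put that in concrete terms, here's a rough Python sketch of the histogram-edge check (the 1% threshold and the synthetic image are made-up illustrations, not a standard rule):

```python
# Rough sketch: flag an image that might benefit from bracketing/HDR by
# checking how much of it is piled up at the histogram edges.
# Assumes an 8-bit grayscale image as a NumPy array; threshold is a guess.
import numpy as np

def needs_hdr(img, threshold=0.01):
    """Return True if more than `threshold` of pixels are clipped at either end."""
    total = img.size
    crushed_blacks = np.count_nonzero(img == 0) / total    # left edge of histogram
    blown_whites = np.count_nonzero(img == 255) / total    # right edge of histogram
    return crushed_blacks > threshold or blown_whites > threshold

# Example: a synthetic frame with a big blown-out "sky" region
img = np.full((100, 100), 128, dtype=np.uint8)
img[:30, :] = 255                  # top third clipped to pure white
print(needs_hdr(img))              # True
```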

The sections of the image that register on the far edges of the histogram are lost and beyond the point of recovery. The value of the darkest shadow, if it is 0, will still be 0 even if the RAW is saved with EV -2 (the software can't assume/interpolate a value less than 0 because it has no idea what the original ratio was to its neighboring colors in the original file); likewise, the value of the brightest highlight (assuming a 12-bit image), if it is 4095, will still be 4095 even if the RAW is saved with EV +2. (I like to think of the computer as having only 4096 fingers and toes; it can't count higher than that.)
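
A toy example of the "fingers and toes" point, with made-up sensor values:

```python
# Once a 12-bit value is clipped to 0 or 4095, no EV adjustment in software
# can bring the detail back. All numbers here are hypothetical.
import numpy as np

true_scene = np.array([0.5, 2.0, 6000.0, 9000.0])   # "real" light hitting the sensor
recorded = np.clip(np.round(true_scene), 0, 4095)   # sensor clips: [0. 2. 4095. 4095.]

# Trying to "recover the highlights" with -2 EV in software just divides by 4:
pulled = recorded / 4.0
print(pulled)   # [0. 0.5 1023.75 1023.75] -- the two bright values are still identical
```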

By doing a 'true' HDR, capturing real bracketed exposures in the camera, the range genuinely grows: a true EV +2 frame gives a value to everything that would have been crushed to 0 in the EV 0 image, and a true EV -2 frame records what would have clipped above 4095. By capturing values that would have been forever lost (always 0 or always 4095) in the camera, dynamic range is increased.
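
Here's a minimal sketch of that merge, assuming linear 12-bit sensor data and a hat-shaped weighting of the kind Debevec's SIGGRAPH '97 paper describes (real software is fancier; this is just the idea):

```python
# Each frame is divided by its relative exposure, so the EV -2 frame supplies
# valid radiance estimates where the EV 0 frame clipped at 4095.
import numpy as np

def weight(v, lo=0, hi=4095):
    """Trust mid-tones most; distrust values near the clip points."""
    mid = (lo + hi) / 2.0
    return np.where(v <= mid, v - lo, hi - v).astype(float)

def merge_bracket(frames, exposures):
    """frames: arrays of 12-bit values; exposures: relative exposure (1 = EV 0)."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for f, t in zip(frames, exposures):
        w = weight(f)
        num += w * (f / t)      # this frame's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-6)

# One bright point: clipped at EV 0 and EV +2, but valid in the EV -2 frame.
ev_minus2 = np.array([1800.0])     # relative exposure 0.25
ev_0      = np.array([4095.0])     # clipped
ev_plus2  = np.array([4095.0])     # clipped
print(merge_bracket([ev_minus2, ev_0, ev_plus2], [0.25, 1.0, 4.0]))  # ~7200, beyond 4095
```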

Tone mapping is a part of creating an HDR because there needs to be a mechanism to blend the three exposures together. The breadth of the dynamic range is already determined by the time one gets to the tone-mapping stage.

So... to answer your question, Syndac: no, they are not 'real' exposures (it's a nice exposure, mind you). With your technique you have exploited the benefits of tone mapping, but in the true sense of HDR your images did not expand the breadth of data collected by the sensor. So: a well tone-mapped image, but not HDR.

Your technique is possible because, even though the sky may look blown out (and only a little bit seems to be in your image), especially in JPEG (range 0-255), the RAW file has a lot more latitude (range 0-4095) in terms of storing the exact color. Where the JPEG would render a point in the brightest sky as 255,255,255, the RAW may record it as 4090,4090,4090. That's as good as white for a JPEG, but it may be the beginning of a silver lining of a cloud in RAW.
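
A quick illustration of that latitude, with hypothetical values:

```python
# Two highlight values that are distinct in 12-bit RAW collapse to the same
# 255 once squeezed into 8 bits.
import numpy as np

raw = np.array([4090.0, 4093.0])                    # faint cloud texture, 12-bit
jpeg = np.round(raw / 4095.0 * 255).astype(np.uint8)
print(jpeg)        # [255 255] -- identical in 8 bits, the texture is gone

# The RAW still separates them, so pulling exposure can reveal the detail:
print(raw * 0.5)   # [2045.  2046.5]
```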
 
So how are my three chopper pictures above not "real" exposures? The end result has a significant increase in shadow and highlight detail over the first.


"real exposures"???

I'll assume you mean real HDR images, as every photograph is a real exposure. ;)

And the answer is that technical terms apply. See above for the definitions of the terms being discussed (I guess posts 8 and 38), namely "tone mapping", HDR or HDRI, RAW, and JPEG.

But maybe this will help: what happens if you try to upload a RAW file to this website? You can't view it, right? (Unless you're on a Mac, but...) Same thing with an HDRI. The full width of an HDR image isn't displayable on most monitors at all and can't be viewed in a web browser. But you can save one exposure level of it as a JPEG, which is viewable.

You can also put it through a process called tone mapping before you save it as a JPEG. Tone mapping attempts to "mix" one or more exposure levels and then sandwiches them into a single viewable width (bit depth) that can be saved as a non-HDR image and uploaded here for display. But tone mapping is a process, while HDR is a file format; the most popular extensions for HDRIs are .hdr and .exr. So tone mapping is like sharpening or color balancing in that it's just a process. You don't need to tone map an HDR; in fact, I'd guess 99% of HDR files are never tone mapped. They look just like a well-exposed JPEG when viewed on your monitor with an HDR viewer, the difference being that the exposure adjustment slider can move farther to the right or left.
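
If you want to see what that "sandwiching" looks like in code, here's a bare-bones sketch using the Reinhard global operator, which is just one of many tone-mapping curves (real tools use fancier local operators):

```python
# Compress unbounded floating-point HDR radiance into a displayable 0-255
# range with the simple global curve L / (1 + L).
import numpy as np

def reinhard_tonemap(radiance):
    """Map [0, inf) radiance into [0, 1)."""
    return radiance / (1.0 + radiance)

hdr = np.array([0.05, 1.0, 50.0, 7200.0])            # HDR values, wildly different scales
ldr = np.round(reinhard_tonemap(hdr) * 255).astype(np.uint8)
print(ldr)   # [ 12 128 250 255] -- everything now fits in 8 bits
```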

Additionally, there are several ways to create an HDR file. Rendering applications can output them directly (see www.lightwave3d.com among others). Some highly specialized camera equipment can create them on the fly in a single take; I guess such equipment costs about the same as a house. ;) And you can assemble them from multiple exposures from a regular camera like you and I have. Now, here maybe is where your question gets answered: since we're dealing with different ranges or bit depths, there will be little or no advantage in creating an HDR file from three identical (small-range) exposures after just adjusting each one in Photoshop, which doesn't change the range, just the weighting.

Your camera's image sensor is sensitive to a much larger range of light (light and dark) than it is able to capture in one picture. If you try to use just one picture to make the HDR, then you're still limited to the range of that one picture; your camera didn't expose for the brighter or darker areas. So you're just shifting the weight around within one range and using those shifted values to assemble something four times the width (8 bpp --> 32 bpp). This isn't optimal, and some say it's not effective at all, but I haven't done the math so I don't know firsthand. There's a much better, more effective way that's almost as easy, and I always use it, so I never bothered to figure out the lesser way; it would only be academic for me to.
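
To see why that shifting adds nothing, here's a toy sketch: the faked exposures are pure functions of the one clipped file, so no merge can pull new detail out of them:

```python
# Three "exposures" derived from one 8-bit file carry no new information;
# each pseudo-exposure is a deterministic function of the same clipped data.
import numpy as np

original = np.array([0, 40, 255, 255], dtype=float)   # one shot, highlights clipped

pseudo_under = np.clip(original * 0.25, 0, 255)       # "EV -2" faked in post
pseudo_over  = np.clip(original * 4.0,  0, 255)       # "EV +2" faked in post

# The two clipped highlight pixels remain indistinguishable in every version:
for frame in (pseudo_under, original, pseudo_over):
    print(frame[2] == frame[3])   # True, True, True -- no new detail, just re-weighting
```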

The better way is to take three or more separate exposures (more is better; 12 to 15 is great if you can manage it and have good enough software), thus utilizing the full sensitivity range of the image sensor in your camera. Now you're not simply shifting the same range of a single image around; you have multiple ranges recorded and ready to assemble into the HDRI. All the highs and lows that would have been clipped in the single picture are captured in the various exposures of the bracket. Assembling those into an HDRI is very effective.

Did that make sense or did I just confuse you more?

The HDR file itself is not useful for web display and sharing purposes. For that we need to choose an exposure range we like and save it as something that is useful, like a JPEG file. Tone mapping, or even sharpening for that matter, works better if we are working with a file that contains a larger range. So tone mapping an HDR file before saving it as a JPEG is "better" than tone mapping a file with less range. After you tone map it, sharpen it, whatever, you save it as a JPEG, and the newly saved .jpg file ceases to be an HDRI.
 
No need to go that far. Here I've demonstrated the difference between doing 3 separate exposures 2 stops apart and doing 3 exposures from one RAW file.

1.
hdrcomp_2over.jpg


2.
hdrcomp_2under.jpg


I think that pretty much sums it up: you can't get back any lost detail. Tone mapping has nothing to do with high dynamic range as such; it can be fun to play with, but it doesn't gain you any detail you never had.
 
Here you can see another similar technique called exposure blending. Very similar to tone mapping, but using multiple individual files without creating an HDRI first; it's also a slightly different process (rough sketch below the link). In the same thread there are also tone-mapped examples processed into JPEG files after assembling interim HDR files.

http://thephotoforum.com/forum/showthread.php?t=127657
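
For the curious, here's roughly how exposure blending works as I understand it: weight each frame per-pixel by how well-exposed it is, then average, with no 32-bit HDR intermediate. Real tools also weight by saturation and contrast; this is the bare-bones idea:

```python
# Minimal exposure-blending sketch: favor well-exposed (mid-tone) pixels
# from each LDR frame and take a weighted average.
import numpy as np

def blend_exposures(frames):
    """frames: list of float arrays in [0, 255]. Weight peaks at mid-gray."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for f in frames:
        w = 1.0 - np.abs(f - 127.5) / 127.5 + 1e-6   # ~1 at mid-gray, ~0 at the clips
        num += w * f
        den += w
    return num / den

under  = np.array([10.0, 60.0])     # holds highlight detail
normal = np.array([120.0, 200.0])
over   = np.array([255.0, 255.0])   # blown out
print(blend_exposures([under, normal, over]))   # mid-tone-dominated result
```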
 
See, I was right, go me! When I said Photoshop I meant PP...
 
Aggressor, K_Pugh, Bifurcator,

Thanks for the info. I actually did my own little experiment right before K_Pugh's last post, where I took a pic purposely underexposed by 4 stops in RAW format and attempted to recover it. Yes, you are correct that the data was lost. I blame the many resources that have led me to believe otherwise while dealing with RAW files. I have several magazines stating that the advantage of shooting RAW is that if your shot is over/underexposed, it can be corrected without any loss of data. That's the point I was basing my argument on, which, as I've just discovered from my own experiment and K_Pugh's, is not true.

I'll still continue with my current method on any moving subjects though, as it's the only way to capture them (such as the previous chopper example).
 
Alex, prodigy, battou, and nynfortoo are all correct.

YES, there are advantages to making an "HDR" out of one image, like when you won't be able to get the same composition due to movement or whatever, but that is not true HDR, and getting different exposures BEFORE uploading your pictures to your computer is ALWAYS better than forcing different exposures with one RAW.
 
So how can I make a "fake" HDR?
 
I have several magazines stating that the advantage of shooting RAW is that if your shot is over/underexposed, it can be corrected without any loss of data. That's the point I was basing my argument on, which, as I've just discovered from my own experiment and K_Pugh's, is not true.

Yeah, I've seen that kind of thing in a lot of magazines. I wouldn't go so far as to say it's not true, just that it's a bit simplistic. PP exposure adjustment can recover detail from over- and underexposed shots, but not if they're too far gone in either direction. Blacks will stay black and whites will stay white. Photoshop can do a lot of things, but it can't save a shot that was wrecked to begin with.
 
Aggressor, your answer makes the most sense from a technical standpoint and I appreciate the post. Pugh, yours shows the actual difference... This was the answer to my original post, because I was wondering what the difference was between the way I did the image and the true way an HDR is done. So if anything, the way I was doing it was good tone mapping, and it's good to know I can do that if I don't have a tripod to do an HDR exposure... Thanks for the detailed explanations, everyone...
 
