HDR may have ruined me!

High Dynamic Range.

In the best of cases, you take at least three photos of the same thing, with the camera never moving, in three different exposure steps. One photo exposes for the brightest parts (which throws all dark parts into underexposure), one exposes for the midtones ("normal" exposure), and one exposes for the shadows (throwing everything bright into glaring overexposure). You can achieve even better results if you increase the number of photos you take (keeping it an odd number).

Later, in post processing, you layer all your exposures one above the other, and then comes this (to me still unexplored and therefore kind of "magic") thing called "tone mapping", where you adjust all the newly visible highlight and shadow areas so that you end up with a very dynamic (in terms of how the light is distributed) picture. "Dynamic" meaning: the range of areas that are correctly exposed is much wider than what the camera could ever manage in a single photo.
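For the curious, the merging and tone-mapping step can be sketched in a few lines of Python with OpenCV. This is only a rough illustration of the idea, not anyone's actual workflow; the file names and shutter speeds below are placeholders:

```python
import cv2
import numpy as np

# Three bracketed shots of the same scene, tripod-mounted so nothing moves.
files = ["under.jpg", "normal.jpg", "over.jpg"]          # placeholder file names
times = np.array([1/500, 1/60, 1/8], dtype=np.float32)   # shutter speeds in seconds
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into one 32-bit HDR image.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Tone mapping compresses that wide range back into something a screen can show.
ldr = cv2.createTonemapDrago(gamma=1.0).process(hdr)
cv2.imwrite("result.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
```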

That is the idea as I understand it.
I don't understand the MAKING-OF HDRs too well so far, though.
I have created a few, and was pleased with ONE, but I often lack both time and patience for them - and my tripod is too sketchy for the "real" HDRs (with three, five, seven or more photos), so I end up with "where-are-my-glasses" pics. And, as I understand it, changing the exposure values of a single RAW file later in the RAW programme to create three or five different exposures is not considered the "true" HDR technique ...
 
This sounds like something a Photoshop instructor was telling me about. Is there another name for it in Photoshop?
 
Not that I know of, no. It is even the same term in my language. It seems to be an international term, kind of.
 
The only terms I know of are HDR, HDRI, tone mapping, radiance map, cross probe, and light probe.
In photography it's mainly called HDR because your goal is a High Dynamic Range photo.

In 3D design it's usually called HDRI to describe the process. But the individual shots are known as cross probes or light probes, because what you're doing is essentially probing a scene for its real-world light model to apply to a computer-generated scene.
You can see this lighting method in CG movies like Shrek, Cars, and hybrids like Star Wars, Lord of the Rings, etc...
You do that by taking a series of photos at varied exposures of a mirrored ball suspended in front of your camera, the way a golf ball sits on a tee. You take the photos from all sides of the mirrored ball to capture the light readings from the entire scene, not just from the perspective of a camera pointed in one direction the way a normal image is taken.
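If it helps to picture the geometry, here is a rough sketch (my own, not part of any standard tool) of how a pixel on the photographed mirror ball maps back to a direction in the surrounding scene, assuming the camera looks straight down the -Z axis at the ball (an orthographic approximation):

```python
import numpy as np

def probe_pixel_to_direction(x, y):
    """x, y in [-1, 1] across the ball's silhouette; returns a unit world direction."""
    r2 = x * x + y * y
    if r2 > 1.0:
        return None                      # pixel lies outside the ball
    nz = np.sqrt(1.0 - r2)               # surface normal of the sphere at this pixel
    n = np.array([x, y, nz])
    view = np.array([0.0, 0.0, -1.0])    # ray from camera toward the ball
    # Mirror reflection: the direction the ball is "seeing" at that pixel.
    return view - 2.0 * np.dot(view, n) * n
```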
Seems that in both photography and 3D rendering it is considered a very advanced technique and really does require some technical knowledge and experience to accomplish.
With photography you don't need to take as many photos to cover the dynamic range. In 3D design you have to be strict about getting ALL the shots right to cover the dynamic range in a technically correct manner, because that extra dynamic range is used to apply the light to the rendering.
Without the entire range captured correctly, your rendering will look synthetic and incorrect in its illumination.
Once you have your HDRI for your 3D rendering, you map the image to the luminance channel of your material on a sphere created in your modeling app and put your objects inside the sphere.
The computer calculates light rays inside the sphere according to the luminance values of the HDRI and gives the synthetic scene the exact same light readings that you recorded with your camera.
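A very reduced sketch of the lookup the renderer keeps doing: given a ray direction inside that sphere, fetch the recorded radiance from the HDRI. I'm assuming an equirectangular (lat-long) map layout here, and the file name is made up:

```python
import numpy as np
import cv2

# OpenCV can read Radiance .hdr files; channels come back in BGR order, float32.
env = cv2.imread("studio_probe.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
height, width = env.shape[:2]

def sample_environment(direction):
    """Return the HDR radiance recorded along a direction vector (x, y, z), y up."""
    x, y, z = direction / np.linalg.norm(direction)
    u = 0.5 + np.arctan2(x, -z) / (2.0 * np.pi)    # longitude -> horizontal coordinate
    v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi   # latitude  -> vertical coordinate
    col = min(int(u * width), width - 1)
    row = min(int(v * height), height - 1)
    return env[row, col]                           # linear radiance, can exceed 1.0
```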

The terminology for the actual physics of what the light is doing is, in CG terms, 'global illumination'; in physical terms it is known as 'diffuse interreflection'. There is also a cheat method called 'ambient occlusion'.
In physical reality:
Diffuse interreflection is a process where light reflected from an object strikes other objects in the surrounding area, illuminating them. Diffuse interreflection specifically describes light reflected from objects which are not shiny or specular. In real life terms what this means is that light is reflected off non-shiny surfaces such as the ground, walls, or fabric, to reach areas not directly in view of a light source. If the diffuse surface is colored, the reflected light is also colored, resulting in similar coloration of surrounding objects.
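As a toy illustration of that gather (again my own sketch, not taken from any particular renderer), the diffusely reflected light at a point can be estimated by sampling directions over the hemisphere above the surface, weighting each by the cosine of its angle to the normal and tinting by the surface colour:

```python
import numpy as np

rng = np.random.default_rng(0)

def incoming_radiance(direction):
    # Placeholder: in a real renderer this would be the HDRI lookup or another bounce.
    return np.array([1.0, 1.0, 1.0])

def diffuse_gather(normal, albedo, samples=256):
    """Monte-Carlo estimate of the light a matte (Lambertian) surface reflects."""
    total = np.zeros(3)
    for _ in range(samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0:        # keep only directions above the surface
            d = -d
        total += incoming_radiance(d) * np.dot(d, normal)
    # Uniform hemisphere sampling: pdf = 1/(2*pi); Lambertian BRDF = albedo/pi.
    return (albedo / np.pi) * total * (2.0 * np.pi / samples)
```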

Radiance mapping is somewhat like using a light box - of course, a very advanced and technically difficult-to-set-up light box that exists only within a coordinate grid inside the computer.
It took me about 3 months to be able to accurately use the HDRI method in my renderings.
And the application of it is constantly evolving so it's important to keep up with it.

Sound complicated? It is! But it's SO interesting. IMO, anyway.
Understanding these things gives you a broader understanding of how light interacts with matter.
I believe that this allows me to understand the physics of light much better, which in many cases allows me to push my standard images above and beyond, because I know more about what the light is doing and needs to be doing to get the result I'm after.
 
I'm in the same boat as the OP. I am still experimenting with HDR freeware and some of the results are astounding. No doubt the purists would be up in arms, as a lot of the images border on fantasy; however, I find myself admiring the strong contrasts and deep saturated colors. Hopefully this newfound fad will eventually wear off and I can get back to regular two-step post processing :)
 
The explanation you're after is a couple of posts above you.
 
