Question on interpreting the histogram in LR3.

I've not used the specific software in the comparison above, but the software that I do use, UFRAW, provides separate input and output histograms. If the input histogram taps the right edge, you've overexposed; if you're close without touching, you've done fine.

Even here you're assuming that the input histogram was created without software interpretation. You can't assume that. Just to get that input histogram, your RAW file has to be decoded and mapped onto an 8-bit (0 through 255) scale. C1 has an option to show what they call a linear histogram of the photo (without their applied tone curve) -- ACR has the same option, as does DPP. In theory this would be the input histogram, and for the same file I would expect the same result from each program, BUT they're different.
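
Just to make the point concrete, here's roughly what has to happen before any converter can draw even an "input" histogram. The numbers below (black level, white-balance gain, the image itself) are made up for illustration; they're not what any particular program uses:

```python
import numpy as np

# Toy stand-in for 14-bit sensor data -- not a real RAW decode.
raw = np.random.randint(0, 2**14, size=(1000, 1000))

# 1. Subtract the black level and normalize to 0..1. Already a choice:
#    the black/white levels come from metadata the converter has to trust.
black_level = 512              # assumed, varies by camera
white_level = 2**14 - 1
linear = np.clip((raw - black_level) / (white_level - black_level), 0, 1)

# 2. Apply white balance (a single made-up gain here; real converters use
#    per-channel multipliers after demosaicing). Another choice.
balanced = np.clip(linear * 1.2, 0, 1)

# 3. Quantize onto the 8-bit (0 through 255) scale the histogram is drawn on.
eight_bit = (balanced * 255).astype(np.uint8)

# 4. Bin into 256 buckets -- only now do you have an "input" histogram.
hist, _ = np.histogram(eight_bit, bins=256, range=(0, 255))
```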

You can't look at the negative anymore and the histograms are an interpretation of your data. We're forced to place some trust in the interpreters. My trust comes with a wary eye.

Joe
 
This problem has existed since photography was invented. The image is a product of the photographer's intention but mediated by the technology. As the technology has become increasingly sophisticated, the photographer has been losing control. In the past the problem may have been that the technology was too crude to realize the photographer's intention. Now the photographer doesn't have a clue what the technology is doing.

I have to disagree with this; I feel like we have more control than ever. The technology used to be chemicals and timings and temperatures, and if you got it wrong, it took a while to find out and you couldn't "undo". Now you have complete control over the process of getting the image from the camera to its delivery medium, coupled with instant feedback and the ability to try anything any number of times at any step in the process.

You say that the photographer doesn't have a clue what the technology is doing... I think there are two points to be made about that. The first is, how much does the photographer need to know? Understanding the intricacies of computer programming languages and data management may provide a certain intuitive advantage, but I wouldn't consider it necessary in order to understand completely how to use the software tools available. Just as film photogs didn't _need_ six years of college chemistry to understand the development process, though it likely helped.

The other side of photographers not understanding is that it's not the technology's responsibility to be self-explanatory. Every photographer has the choice to learn the technology or not. If all someone is doing is poking at a piece of software without learning in depth why it does what it does and what options are available, then that person has simply opted to "noodle around" instead of learn. It's not the developers' fault that people don't have a clue what the software is doing; all the information to that end is readily available to anyone with an internet connection. That's a situation vastly superior to what a person had to do in order to learn how to properly develop film.

Fair enough. I was referring to photographers in a general sense. I have much more control now than I used to have. However, the learning curve has steepened, and we are awash with photographers (this forum is evidence enough) who don't know why their photos change color when they upload them to the internet.

Joe
 
We're forced to place some trust in the interpreters. My trust comes with a wary eye.

This is the point I was trying to make... you don't have to trust the interpreters; all you need to do is learn where they get their data from. If a company is so worried about piracy that they keep their algorithms secret, and no one has provided an explanation of their behavior, that's sort of a different game. As I said before, I use UFRAW, which is open source and community supported.
 
Yeah, and I use RT, which is also open source. However, I haven't picked through the source code and considered modifying it -- yet. Have you? And if you have, you're certainly not representative of most photographers. And that means for most photographers, the histogram (input or output) that they see in a RAW file converter is an interpretation of their data. And that's the point I'm making. Open source or not, UFRAW has to decode the RAW file and apply processing to get that histogram. That's an interpretation of the data whether the method is transparent or not. Publishing the rules applied to achieve the interpretation doesn't make it not an interpretation, and to all but a select few photographers, publishing those rules is pretty meaningless.

Joe
 
The point about them being open source is that someone else has already picked them apart and written about them. If you're lucky you can even get comments on design intention from the original developer. We're getting a bit lost in specifics... My point is, any time you act on any piece of photographic data, the process is nuanced but explainable. In modern digital technology and software, the nuances are well explored and documented, and explanations are traded constantly online. When you say that publishing details about software is meaningless to all but a select few photographers, you are distinguishing the photographers who choose to understand why their tools work from those who just use the tools without researching them. The "select few" you mention are the folks who would've had their own darkrooms and spent hundreds of hours in them. Now that attention to detail and understanding of process is no longer so difficult to obtain. Anyone can, at no cost, learn about every step of the process in detail.

Specifically about histograms: they are an interpretation of the data, yes. That is the nature of a graph; it simplifies an otherwise incomprehensibly large pool of data. Reading the individual 14-bit outputs of 18 million pixels or whatever is impractical, so instead we have histograms, complete with methods of manipulating them to make them most useful to us. You're right, the histogram is not the same as the data; it's a format that is less robust but more readable. Different software engineers have different interpretations of what kind of data is most useful to photographers, so their algorithms are biased to draw histograms that make what they consider the most useful data the most readable. They are explicit about how they achieve their goals though, so if you're interested, you can figure out exactly why they're drawn the way they are, and determine for yourself if information that you find useful is not well represented.
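
To illustrate (just a toy numpy sketch, not any vendor's actual algorithm), here are two honest but different-looking presentations of identical pixel data -- per-channel counts on a linear scale versus a single luminance histogram with log-scaled counts:

```python
import numpy as np

# Identical pixel data, two honest but different-looking histograms.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(1000, 1000, 3))   # stand-in 8-bit image

# Presentation choice 1: three per-channel histograms, linear vertical scale.
per_channel = [np.histogram(rgb[..., c], bins=256, range=(0, 255))[0]
               for c in range(3)]

# Presentation choice 2: one luminance histogram with log-scaled counts.
# The luminance weights (Rec. 709 here) are themselves a design decision.
lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
lum_hist = np.histogram(lum.astype(np.uint8), bins=256, range=(0, 255))[0]
lum_hist_log = np.log1p(lum_hist)   # compresses tall peaks so thin shadow/highlight detail stays visible
```

Neither presentation is wrong; they just emphasize different things, and that's before anyone has even touched a tone curve.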

What would you propose as an alternative to interpretive histograms? Is there something for you that you feel would make the data more readable? Or bring more useful data forward? Software developers love new ideas :). It sounds as if you expect the software to provide the photographer with all the information they need up front, to the extent that they don't need to do additional research. I don't think any photographic process has ever been that transparent, digital or otherwise. Photography is the merging of a number of separate fields of technology, and the photographers who best understand the technologies involved are the ones with the most control over their tools. My point is that, over time, the knowledge required to master all the associated technologies has become increasingly available. And now that we've replaced chemistry with information technology, the cycles of experimentation > mistakes > innovation can happen quite quickly, at minimal cost, for many more photographers.
 
They are explicit about how they achieve their goals though, so if you're interested, you can figure out exactly why they're drawn the way they are, and determine for yourself if information that you find useful is not well represented.

I don't think so; with open source software, yes, but not with the commercial companies. I use (and have to use) those commercial products.

What would you propose as an alternative to interpretive histograms? Is there something for you that you feel would make the data more readable?

What I would like is standardization. Go back to the image I posted. In that example I allowed each converter to open the file and apply its standard tone curve. Compare the two commercial products, ACR and C1. There's a big difference, with C1 showing that I have clipped highlights. The C1 image is lighter and has more contrast. This is the photo as shot.

I know both programs are applying adjustment curves to the photo. You noted that UFRAW has an input histogram. ACR and C1 both allow you to remove the adjustment curves, and they both use the same term for the alternative: linear -- a fair assumption that linear would be the same as UFRAW's input histogram. So I go back to that same image and open it again in ACR and C1 and select the linear option. What do I get? A reversal, in fact. Now the ACR image is lighter than the C1 image, and they are still not nearly the same.

I don't use UFRAW, but I regularly use DPP, ACR, C1, and RT. I am very familiar with Capture NX and SilkyPix and slightly familiar with Bibble. That's seven different RAW file converters. And I'm confident that they will all open the same DNG file and all show you something at least slightly different, and that's with their tone curves disabled. I find that unsettling. I am not prepared to begin a detailed investigation into how seven different software programs are interpreting my RAW files, assuming I could actually get that info.

Consider already how different I am from your average photographer. I've got four RAW file converters loaded on my computer, and I use all four of them because I have already devoted long hours to investigating their performance. I wish one of them were fully adequate. You use UFRAW to open a DNG file and then open the same DNG file in C1, and the UFRAW input histogram and the C1 linear histogram don't agree -- a fair assumption. No problem? How valuable, then, is the info from either?

We do have standards. Color spaces are standards. R=123, G=87, B=166 in sRGB is a specific color we have all agreed upon. If I set that color in all seven of those RAW converters, they'll all show me the exact same thing. However, when they first open one of my RAW files, a pixel that one of them displays as R=128, G=128, B=128 is displayed by one of the others as R=102, G=102, B=102, and by one of the others as R=143, G=143, B=143, and by one of the others as ......etc. Assuming that was my grey card in the photo, it would sure be nice if there were a way I could get my RAW file converters to open that file to a standard starting place so I'd get the same value from the grey card in each converter.
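
The reason those numbers can all be honest is the tone curve each converter lays over the linear data. As a rough sketch (these curves are generic stand-ins, not the actual curves ACR, C1, DPP, or RT use), here's how one and the same linearly recorded 18% grey value lands on different 8-bit numbers depending on the curve:

```python
# One linearly recorded 18% grey value pushed through three generic transfer
# curves. None of these is the actual curve any converter applies; they only
# show why the same pixel can land on different 8-bit numbers.

grey_linear = 0.18

def srgb_encode(x):
    # Standard sRGB transfer function.
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def gamma_2_2(x):
    # Plain 2.2 gamma, no linear toe.
    return x ** (1 / 2.2)

def contrast_curve(x):
    # Made-up brightening/contrast curve, a stand-in for a vendor "look".
    return x ** (1 / 2.2) * 1.1 - 0.03

for name, curve in (("sRGB", srgb_encode),
                    ("gamma 2.2", gamma_2_2),
                    ("vendor-ish curve", contrast_curve)):
    value = max(0.0, min(1.0, curve(grey_linear)))
    print(f"{name:16s} -> {round(value * 255)}")
# Three different 8-bit values for the same grey card.
```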

Going all the way back to the OP's question: I'd like valuable feedback on my exposure. I'd like to photograph a grey card, then open my RAW file and read a value for that grey card and know it meant something -- right now I know it doesn't. The same file opened in all four of my converters gives me four different values for that grey card. I used to be able to get that info with film and a densitometer. The densitometer gave me a standard. I want that standard back.
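
About the closest I can get to a personal standard today is to script the decode myself so the grey card is always read the same way. Here's a rough sketch using the rawpy/LibRaw Python bindings (the file name and patch coordinates are just placeholders) -- not a densitometer, but at least it's repeatable:

```python
import rawpy   # Python bindings for LibRaw; assumed installed (pip install rawpy)

# Decode the grey-card frame the same way every time: linear gamma, no
# auto-brightening, camera white balance, 16-bit output. File name and
# patch coordinates are placeholders.
with rawpy.imread("greycard.dng") as raw:
    rgb16 = raw.postprocess(gamma=(1, 1),
                            no_auto_bright=True,
                            use_camera_wb=True,
                            output_bps=16)

patch = rgb16[1000:1100, 1500:1600]                    # where the card sits
print("grey card mean (16-bit):", patch.reshape(-1, 3).mean(axis=0).round())
```

Of course that only standardizes my own reading -- it doesn't make the seven converters agree with each other.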

Joe

P.S. I suspect you and I agree far more than not.
 
