Once the RGB photo is created, the amount of data is fixed. But that RGB photo is a reduction from the amount of data available in a raw file. That's the point of the first set of photos -- the backlit river. The raw file contains far more data than will eventually fit into the final RGB photo, and we have to get to that final photo one way or another. In that backlit river scene the camera software couldn't do a reasonable job of it -- I had to do it myself. That photo is an extreme case -- direct backlight.
Normally the camera will do better, but it still jumps straight to an 8 bit JPEG, so getting the camera to get it right in that one step becomes critical. From a raw file you can work your way down to the final 8 bit RGB photo in stages: raw conversion to 16 bit RGB, then tweak that down to the final 8 bit RGB. It's a process of reduction from a raw capture that starts with more data than we can use. To get the most out of it you need to manage that reduction process.
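If it helps to picture that staged reduction, here's a rough sketch in Python. The numbers are placeholders (I'm assuming a 14-bit sensor channel), and the plain gamma curve just stands in for the curves/levels edits you'd make by hand; a real raw converter also demosaics, white balances, and applies its own tone curves.

    import numpy as np

    # Hypothetical 14-bit linear sensor values (0..16383), one channel of a raw capture.
    raw = np.random.randint(0, 2**14, size=(4, 6)).astype(np.float64)

    # Stage 1: raw conversion into a 16-bit working space (0..65535).
    working16 = np.round(raw / (2**14 - 1) * 65535).astype(np.uint16)

    # Stage 2: tweak the 16-bit working data down to the final 8-bit range (0..255).
    # A simple gamma curve stands in for the hand edits; it compresses highlights
    # and lifts shadows instead of just truncating bits.
    normalized = working16.astype(np.float64) / 65535
    final8 = np.round((normalized ** (1 / 2.2)) * 255).astype(np.uint8)

    print(final8)

The particular curve isn't the point; the point is that you decide how the extra raw data collapses into the 8 bit result, instead of letting the camera decide for you.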
Joe
Well, I'm shooting JPEG + raw on the DSLR and just JPEG on my bridge camera. Usually I use the bridge; anything more serious I shoot with the DSLR, but I still use the JPEG unless it needs some drastic change. I've had the backlight issue and couldn't fix it; it killed that entire section. Seems to happen with the JPEGs.
Not quite sure what you're talking about with "manage that reduction process".
I'm thinking you mean the processing: clicking the conversion box and hitting start in my program?
Manage the reduction process: this entire photographic process is still, at bottom, print targeted. In other words, you start with whatever you're photographing, and when the process ends you have a print to nail up on the wall. Many of us now stop before we get to the print, but the print is still implied, and it continues to determine the target characteristics. This is important because:
The range from black to white in a print is a fixed range, and it's not really all that much. The 8 bit RGB data structure that a JPEG, for example, conforms to is a good match for that target print. And so the limits of that 8 bit RGB data container are our end target.
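To put rough numbers on that container (the usual bit depths, nothing specific to any one camera):

    # Tonal steps per channel, assuming the usual bit depths.
    jpeg_levels = 2 ** 8    # 8-bit JPEG: 256 steps per channel
    raw_levels = 2 ** 14    # many raw files: 12-14 bits, i.e. 4096-16384 steps
    print(jpeg_levels, raw_levels)  # 256 16384

Everything the raw file recorded has to end up inside those 256 steps per channel.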
Do this: lay a one-foot ruler on the table in front of you. Call that the total, real, physical range of dark to light that's possible in a print -- better yet, call it an 8 bit RGB photo. Next lay a yardstick on the table above the ruler. That's the total range of dark to light that the sensor in your camera can record. It's almost a fair analogy: depending on your specific camera, the sensor can capture a tonal range between double and triple the range you can squeeze onto a print or stuff into an 8 bit RGB data structure.
Next get a carpenter's tape measure and pull it open to 4 feet. That's a backlit scene, and it could go to 5 feet. The fun thing about the carpenter's tape measure is that it's variable: it can also contract to 2 feet. It expands and contracts all the time. That's the lighting contrast out there in the real world.
The trick to photography, then, is to evaluate the carpenter's tape measure, make sure the right segment of that tape measure is recorded on the yardstick, and then squeeze what the yardstick captured onto the ruler so that when you're done you have a good photo.
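Here's the same ruler/yardstick/tape-measure idea as a toy sketch, with the ranges expressed in stops. The specific numbers are only illustrative; real scenes, sensors, and prints all vary.

    import numpy as np

    SCENE_STOPS = 16    # the carpenter's tape measure: a contrasty backlit scene
    SENSOR_STOPS = 13   # the yardstick: what the sensor can record
    OUTPUT_STOPS = 7    # the ruler: roughly what a print / 8 bit photo can show

    # Scene brightnesses, in stops above the deepest shadow.
    scene = np.linspace(0, SCENE_STOPS, 9)

    # Step 1 (exposure): pick which segment of the scene lands on the sensor.
    # Anchoring to the top protects the highlights; the deepest shadows fall off.
    captured = np.clip(scene - (SCENE_STOPS - SENSOR_STOPS), 0, SENSOR_STOPS)

    # Step 2 (the reduction): squeeze the captured range onto the output range.
    rendered = captured / SENSOR_STOPS * OUTPUT_STOPS

    print(np.round(rendered, 1))

A camera's JPEG engine does both steps with one fixed recipe; working from the raw file lets you decide where the clipping happens and how hard the squeeze is.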
That's a process of reduction. Consider the first set of photos I posted of the backlit river. Again, the middle and right photos are the same exposure. The camera software attempted that process and failed in disgrace by blowing out the highlights. I attempted the same process and did much better; my version at least has a blue sky. The blue sky really was there (carpenter's tape measure), the sensor really did record it (yardstick), and I managed the reduction process to retain it (ruler).
Joe