Why do clouds look white?

NortT

Hi guys,

I'm a beginner in photography and just trying to understand the basics. I believe this question has been asked many times, but I can't find a good solution. Attached are two photos taken with my Nikon Z5 and Google Pixel 6 Pro. As you can see, the clouds are just a white spot in my Nikon photo. If I decrease the shutter speed, I can't see the building but get nice clouds. At the same time, my phone captures all the detail in both the clouds and the building. I understand that the clouds are overexposed, but how do I solve this problem with my camera? Btw, I use a CPL filter with my Z5 (you can see there is no reflection on the window). I can change the exposure level for the clouds in Lightroom, but I don't want to process every single photo.
Attachments: photo_2022-05-21_14-14-51.jpg, photo_2022-05-21_14-14-55.jpg
 
The main difference here is in how your camera and your phone take and process an image.

Nikon: press the shutter and one image is taken with the settings you entered. One exposure for the entire image.

Pixel: as long as the camera app is open, it is evaluating focus, shutter speed, and ISO, and it is doing this for multiple points throughout the frame in order to produce the best overall image.

To do this with your Nikon you will either have to shoot in an HDR mode or shoot and stack a few bracketed images manually.
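If you go the manual bracket-and-stack route, the merge step is handled by most HDR software. As a rough sketch of the idea (assuming OpenCV and three already-aligned frames with made-up filenames), exposure fusion looks roughly like this:

```python
# Rough sketch: fuse a -2 / 0 / +2 EV bracket with OpenCV's Mertens
# exposure fusion. Filenames are placeholders; the frames must be the
# same size and aligned (a tripod, or cv2.createAlignMTB, helps).
import cv2

exposures = [cv2.imread(name) for name in
             ("bracket_minus2ev.jpg", "bracket_0ev.jpg", "bracket_plus2ev.jpg")]

fusion = cv2.createMergeMertens()   # weights pixels by contrast, saturation, exposedness
fused = fusion.process(exposures)   # float result, roughly in the 0..1 range

cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```

This is exposure fusion rather than true HDR tone mapping, but for a sky-plus-building scene the result is similar: detail from the dark frame's clouds and the bright frame's building end up in one image.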
 
Thanks for your answer, mate. What is the reason a camera doesn't do it the way the Pixel does? I mean, why doesn't a camera process multiple points at once?
 
Because the camera is far more likely to respect the fact that a photographer is behind the camera and operating it. I would consider interference from the camera along the lines of what your phone camera did as unacceptable to the point of discarding the camera as unusable. My first response to what your phone camera did would be, "there better be a way to turn this bleep bleep off or I can't use this thing."

The problem you identified has been with us since the day photography was invented. We know how to handle it, and I would handle it better than any computer algorithm could; if that algorithm got in my way, it would be unwelcome interference.

Here's a similar condition in a photo I took earlier this Spring:

process-required.jpg


The image on the left is straight from the camera with only the camera's standard processing. My photo is obviously much different. If my camera had the software that's in your phone camera installed, it would have processed the image closer to what I did and not nuked the sky. But, and it's a big but and this is the point, the phone camera's software would not have matched the image I produced, and therefore it would not be the image I wanted. Put another way: I know what I want, I know how to get what I want, and I do not want to be interfered with. My camera respects that, so I'll keep it.
 
Most phone cameras have HDR on by default, meaning they will compress the dynamic range. Notice that while the sky is darker in your phone picture, the building is actually a bit lighter.

Your camera will not do that, although if you take multiple images, bracketed for exposure, you can combine them with software to get a similar result. The camera can only make a decision based on its metering of a single spot, a larger center-weighted area, or an average of several points around the frame. It cannot darken the sky without darkening the entire image. If you look at Ysarex's example, his final image is generally darker overall than the original, although there's more to it than just lowering the exposure.
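To make the "one reading, one exposure" point concrete, here is a toy illustration (the pixel values and weighting are entirely invented, not what any real meter does):

```python
# Toy illustration: spot, center-weighted, and whole-frame average metering
# each reduce the scene to a single number, so the camera still sets one
# exposure for the entire frame. All numbers here are made up.
import numpy as np

frame = np.vstack([np.full((50, 100), 240.0),   # bright sky in the top half
                   np.full((50, 100), 60.0)])   # darker building in the bottom half

spot = frame[75, 50]                             # small patch on the building

yy, xx = np.mgrid[0:100, 0:100]
weights = np.exp(-((yy - 50) ** 2 + (xx - 50) ** 2) / 1500.0)
center_weighted = (frame * weights).sum() / weights.sum()

average = frame.mean()

print(f"spot {spot:.0f}, center-weighted {center_weighted:.0f}, average {average:.0f}")
# Exposing for any one of these numbers brightens or darkens the whole frame;
# the camera cannot darken the sky without darkening the building too.
```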

Cameras have a much more limited dynamic range than your eyes. As you look at a scene, like looking at the building and then at the sky in your example, your eyes adjust to exactly what you're looking at. When you look at the sky, your pupils contract, like stopping down the camera's aperture. When you look at the building, they open back up. A photograph can't do that; the camera can only expose for whatever its meter sees of the scene as a whole. Extra-bright areas can wash out, like you're seeing, and extra-dark areas can disappear into full black. The camera's limited dynamic range is something you have to be aware of as a photographer. Your phone compensates for it with HDR, which stands for High Dynamic Range. The result is actually a reduction in the dynamic range of the image, as it pulls down highlights and lifts shadows to bring the whole scene within what can be presented in a single frame.
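The arithmetic behind "limited dynamic range" is just a base-2 log of the brightness ratio. A tiny worked example, where the contrast ratio and the sensor figure are illustrative assumptions rather than measurements of any particular camera or scene:

```python
# Worked example: dynamic range in stops = log2(brightest / darkest).
# Both numbers below are illustrative guesses, not measured values.
from math import log2

scene_ratio = 30000 / 2        # sunlit clouds vs. shaded facade, made-up ratio
scene_stops = log2(scene_ratio)
sensor_stops = 12              # rough single-exposure range, an assumption

print(f"scene spans about {scene_stops:.1f} stops")
print(f"a sensor holding about {sensor_stops} stops must clip one end,")
print("which is why the clouds blow out or the building goes black")
```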

As for "processing every single photo," I think you'll find that you'll rarely shoot an image you're satisfied with without some amount of work in Lightroom or Photoshop.
 
Thank you guys for your comments and advice. I think I understand the reasons 👍
 
The point being made is that the phone algorithms satisfy the vast majority of the users. This is the intent of the phone manufacturers.

The fact that a digital camera captures far more usable data that can be accessed and manipulated in post-processing is the intent of the camera manufacturers.

This probably explains why photo quality has become such a selling point for the phone manufacturers. Let's face it, convenience sells product. Hence, phone cameras have replaced almost all small-camera sales.

Fortunately, for those who need to optimize their photos, or just like to diddle with them, the camera makers continue to upgrade their products, as does the wizardry available in post-processing.
 
