Dynamic Range of Film still better than Digital Cameras?

DR for digital cameras likewise varies from one camera to the next and tends to correlate with sensor size -- bigger sensor = more DR.

Correct me if I'm wrong, but isn't this statement also dependent on sensor pixel quantity, pixel size, and camera software?

I started on film in the '60s and had a nice commercial darkroom in the '70s at the newspaper, processing 35mm B&W and graphic arts film. It was great and gave me time to experiment and learn, but to me, unless you go straight from negative to print, you lose the best that film can offer.
 
Correct me if I'm wrong, but isn't this statement also dependent on sensor pixel quantity, pixel size, and camera software?
It's definitely a hardware property -- the sensor. Software would only play a role if it interferes with access. Single pixel size I believe is a factor and bigger is better. Larger sensors tend to have larger pixels. But larger sensor size alone is also a factor, in that larger sensors gather more light = less noise, and DR from a digital sensor is noise limited. The size correlation is a tendency rather than a hard-and-fast rule. For example, the Nikon D5 (FX) and the Nikon D3400 (DX), both released in 2016, are rated by Photons to Photos with a stop difference in DR -- and it's the D3400 that has the greater DR. It's an uncommon exception -- I assume other hardware factors are involved.

 
Single pixel size I believe is a factor and bigger is better. Larger sensors tend to have larger pixels.

This is what I've been led to believe. Generally, medium format has touted its bigger pixels as being able to gather more light, but I guess I don't understand how the number of pixels relates to the megapixels of the image or to the DR. I had assumed that more pixels on the sensor = more megapixels and better DR, but given the size limitations of the sensor, adding more pixels has to mean a corresponding reduction in pixel size, and that conflicts with bigger = better. Would that not also mean a reduction in DR?
 
It's more than just pixel size -- it is also sensor size. So it's a bit more complicated to measure.

Digital DR is noise limited. In other words, we measure DR from the sensor's saturation threshold down to the noise floor. The noise floor is the point where you can no longer discern, measure, or separate the signal from the noise. If you can't tell the difference between signal and noise, you no longer have a usable signal. Take my Z7: Bill Claff at Photons to Photos lists the Z7's DR as 11.5 stops. (I think it's 12 stops, but I know I'm pushing hard.) Where that noise floor occurs is in part a direct function of sensor size. As sensor size increases we gather more light over the sensor area, which reduces noise and so lowers the noise floor -- and vice versa. Look at these two graphs from Bill's site: Photographic Dynamic Range versus ISO Setting

That's the exact same sensor, the exact same pixels, showing a stop difference in DR simply as a result of cropping the sensor -- FX versus DX. Making the sensor smaller reduced the DR.
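To put rough numbers on both points -- a quick Python sketch, with made-up electron counts (not actual Z7 measurements) and nominal sensor dimensions:

import math

# Hypothetical sensor values -- illustrative only, not real Z7 measurements.
saturation_e = 60000      # full-well capacity, in electrons
noise_floor_e = 20        # smallest signal still separable from the noise

print(f"DR: {math.log2(saturation_e / noise_floor_e):.1f} stops")   # ~11.6 stops

# Cropping FX to DX shrinks the light-gathering area (nominal sizes in mm).
fx_area = 36.0 * 24.0     # full frame
dx_area = 24.0 * 16.0     # DX crop, ~1.5x crop factor

print(f"FX over DX: {math.log2(fx_area / dx_area):.2f} stops more light")   # ~1.17

The area ratio alone predicts just about the one-stop difference the graphs show.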
 
It's more than just pixel size -- it is also sensor size. So it's a bit more complicated to measure.
Yes, but what? Sony came out with a 100 MP crop sensor. Fuji has a new 100 MP medium format. In neither case have I seen a corresponding increase in sensor size. If pixel count = larger image size, then in order to cram more pixels into any given space you'd have to reduce the individual pixel size, or there's some software wizardry going on. If bigger is better in terms of DR, wouldn't smaller be worse? Or is it a case of software being able to better utilize the DR available?
 
Given a fixed-size sensor, cramming in more pixels has to mean smaller pixels, yes. That's unrelated to the effect on DR due to overall sensor size -- sensor size, not pixel count. I have two FX-sensor cameras, one with 45 MP and the other with 24 MP. They are both the same size and very similar in terms of DR -- the 45 MP body has a 1/2- to 2/3-stop DR advantage, but it's also 3 years newer tech.

Pixel size is a factor influencing DR, but it is not THE factor, and I don't think it's the dominant one. Sensor size determines DR independent of pixel size.

Bigger (sensor, not pixels) is better in terms of DR, and yes, smaller (sensor, not pixels) is worse -- and that's physical hardware, not software.
The Photons to Photos chart linked below graphs DR over ISO for cameras with different sensor sizes, from medium format down to cell phone. They line up in size order relative to DR: 12.3 stops of DR for the medium format sensor and 6.7 stops for the cell phone (ignoring the ISO variances): Photographic Dynamic Range versus ISO Setting
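As a sanity check on that line-up, here's a sketch using typical published sensor dimensions (assumed here -- and sensor area is not the only variable in play):

import math

# Approximate sensor areas in mm^2 -- typical dimensions, assumed for illustration.
areas = {
    "medium format (44x33)": 44 * 33,
    "full frame (36x24)": 36 * 24,
    "APS-C (24x16)": 24 * 16,
    "1/2.3-inch phone (6.2x4.6)": 6.2 * 4.6,
}

phone = areas["1/2.3-inch phone (6.2x4.6)"]
for name, area in areas.items():
    print(f"{name}: +{math.log2(area / phone):.1f} stops of light vs the phone")

Area alone predicts about a 5.7-stop spread between the medium format sensor and the phone; the chart's 12.3 vs 6.7 stops is a 5.6-stop spread. Close.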
 
Given a fixed-size sensor, cramming in more pixels has to mean smaller pixels, yes. That's unrelated to the effect on DR due to overall sensor size -- sensor size, not pixel count.
Okay, you're losing me here. The pixels, not the physical sensor, record the light. The sensor is nothing more than a mounting point for pixels and a circuit board to transmit analog signals from the pixel to the edge.
 
DR is noise limited. Sensor size (not pixel count) is a primary determinant of noise; therefore sensor size determines DR to the extent it determines noise. Noise increases or decreases with the total light gathered by the sensor, regardless of pixel count.

Consider this analogy: you have two cookie pans, one 11 x 14 inches and the other 8 x 10 inches. Place them both outside together in a nice even rain (f-stop). After X time (shutter speed), take them in and measure the depth of the collected water. It will be the same for both pans. That's exposure (light per unit area), and both cookie pans got the same exposure. Now pour the water from the 11 x 14 pan into beaker A and the water from the 8 x 10 pan into beaker B. Do you have the same volume of water in both beakers? No. Beaker A has more water because the 11 x 14 pan gathered more water -- in the same way a larger sensor, because of its larger area, gathers more light than a smaller sensor (regardless of pixel count). With more total light gathered you have less noise, and less noise means more DR. Light is the signal, and the noise (shot noise) is in the signal as a percentage of the signal -- the signal-to-noise ratio. Gather more total light and the percentage of noise in the signal goes down, and vice versa.

Pixel size is also a determinant of DR, but it is not the only factor. Sensor size is a DR determinant independent of pixel size. I can't tell you if it's a 30/70 split or a 60/40 split or a 20/80 split, and I suspect it can in fact vary between sensors. But DR is not just about how many pixels there are and how big they are. It's also about the total area of the sensor collecting more or less total light.
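Putting toy photon numbers on the cookie-pan arithmetic (illustrative only -- real sensors collect far more photons than this):

import math

# Same "rain" (exposure, photons per mm^2) on two different collecting areas.
flux = 1000.0             # photons per mm^2 -- identical exposure for both
big_area = 36.0 * 24.0    # the 11 x 14 pan -- full-frame sensor, mm^2
small_area = 24.0 * 16.0  # the 8 x 10 pan -- crop sensor, mm^2

for name, area in (("big", big_area), ("small", small_area)):
    total = flux * area        # total photons gathered -- the beaker
    snr = math.sqrt(total)     # shot-noise-limited SNR: N / sqrt(N) = sqrt(N)
    print(f"{name}: {total:.0f} photons, noise = {100 / snr:.2f}% of signal")

Same exposure, but the bigger collector ends up with noise that is a smaller percentage of the signal.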
 

The flaw in your analogy is that the sensor (cookie sheet) doesn't collect any light; the pixels on the sensor do. Using your example, you could put 10 evenly spaced pixels on an 11x14 sensor and 1000 pixels on an 8x10 sensor, and assuming the pixels on both were equal in size, you're saying the 11x14 sensor would have more DR?? You're still dealing with the light-gathering ability of the individual pixels. Now, I could agree with you if pixel density per square inch were equal.
 
Pixel density and pixel size are a factor. If you want them to be the only factor, that's wrong. Sensor size, independent of pixel size and density, is also a factor. The amount of total light collected by the sensor is a noise determinant and therefore a DR determinant.

If not, explain why there's a DR difference in the example I listed in this post: Dynamic Range of Film still better than Digital Cameras?

How does the DR of the very same sensor change? In the Photons to Photos graph I linked in that post, the DR of a Nikon Z7 changes when you use the very same sensor -- same pixel size, same pixel density -- but switch to DX mode. Switching to DX mode doesn't change the pixels on the sensor. It just changes the total light gathered.
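You can even test the 10-pixels-versus-1000-pixels thought experiment from above with a toy simulation (made-up photon counts). Shot noise is Poisson, so at the whole-image level the noise depends only on the total light gathered, not on how it's divided among pixels:

import numpy as np

rng = np.random.default_rng(0)
total_photons = 1_000_000    # toy number: total light hitting the sensor
trials = 2000

for n_pixels in (10, 1000):
    # Split the same total light across n_pixels; each pixel gets Poisson shot noise.
    frames = rng.poisson(total_photons / n_pixels, size=(trials, n_pixels))
    whole_image = frames.sum(axis=1)     # whole-image signal, one total per trial
    print(f"{n_pixels:4d} pixels: image-level noise = {whole_image.std():.0f} "
          f"(sqrt of total = {total_photons ** 0.5:.0f})")

Both cases come out with image-level noise of about 1,000 photons -- the square root of the total light collected.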
 
I also do not like to engage in these discussions because, for all intents and purposes, they are useless to me as a dedicated film guy. And kudos for keeping it civil -- a nice change of pace.

But I thought you guys might like this article on The Darkroom's web site...
They begin the article by admitting they're biased. I think they're biased well past that point, to the point of willingly presenting misinformation. The opening photo is a real cheap shot, as if the Canon 6D doesn't allow in-camera control of the output JPEG settings -- please.
 
We should probably get the terminology right.

On an image sensor, “pixels” do not capture light. A “pixel” is only present in a reconstructed image.

A “photosite” responds to light, but it takes more than one photosite to create a full-color pixel, and pixels are not made up of equal numbers of the 3 primary-color photosites (a typical Bayer array uses two green sites for every red and blue one).


The physical size of a photosite directly relates to the minimum noise of that site. Bigger photosites respond to more photons, and less noise per site is the result.

Because cameras all now use extensive post-processing, there is only a rough relationship between the size of an image sensor's photosites and the sensor's noise versus temperature. Processing is a game changer. But in general, as far as the sensor goes, smaller photosites = higher noise (pre-processing).

RE: film vs digital, a photosite responds linearly to light: twice the light, twice the charge on the sensor. Eyes respond non-linearly to light, as all senses do to their corresponding stimuli. Twice the light is not perceived as twice as bright, and twice the sound pressure is not perceived as twice the loudness. Human senses compress their response for increased, and incredibly wide, dynamic range. Digital image sensors by themselves do not do this. But film sort of does. Film does not have a linear response to light.

You might be going to “advantage: film,” but hang on. IF you have enough digital information, you can apply a correction curve (gamma) to the raw data and get that same type of compression. More bits are an advantage here. That's because, if you just look at how many digital “levels” exist per stop, half of the available levels are used in the very top stop before highlights clip. That's in the raw sensor data. But the post-processed views (the JPEG preview you see on the back screen, along with its histogram) are pre-corrected, so you don't ever see it exactly that way until you process a raw image -- and then some assumptions have already been made for you.
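You can see that level distribution with a couple of lines of Python (assuming a 14-bit linear raw file):

# Levels available per stop in a 14-bit linear raw file.
levels = 2 ** 14                  # 16,384 total levels

for stop in range(1, 8):
    in_this_stop = levels // 2    # each stop down holds half of what's left
    print(f"stop {stop} below clipping: {in_this_stop} levels")
    levels //= 2

Stop 1 (the brightest) gets 8,192 levels, stop 2 gets 4,096, and by stop 7 you're down to 128.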

This is where the Expose To The Right (ETTR) idea comes from: deliberately over-expose and use more of the available data. It's not wrong, but it's also not done much.

So when it comes to the dynamic range of cameras and film, we are comparing apples and oranges unless we look carefully at the final, corrected image -- which, BTW, we absolutely cannot see in most comparisons of the two, especially in the video posted by the OP in post #1. Post-processing is a very big and important part of digital imaging; it cannot, and should not, be ignored or discounted. Add to that the fact that you can't practically scan film's full dynamic range without excessive trouble and expense.
 
Let's look at the issue from a different perspective -- it might be helpful. The base ISO on my Nikon Z7 is 64; let's just call it 100 so it's easier to count. At base ISO I should be able to take a photo that fully utilizes the recording capacity of the sensor. Let's not do that. Instead, let's set the ISO to 25,600. That's 8 stops above the base ISO. In other words, let's start by whacking 8 stops of DR right off the top of the sensor. What I've got left to use is the bottom of the sensor, a few stops from the sensor's noise floor.

I selected an indoor scene that's purposely low contrast and, given the ISO setting, exposed as much as possible. I'm also taking advantage of state-of-the-art noise filtering technology. So I've started by removing more DR capacity from my sensor than the total DR capacity of any color transparency film.

Consider that ISO 800 (Kodak Portra 800) is the highest-ISO color film available. To use it at ISO 25,600 you'd have to underexpose it 5 stops and develop it so long that the grain would probably create a textured surface in the emulsion you could feel -- well, almost. I took the photo below 40 minutes ago, processed it, and wrote this post. I don't typically use my cameras at ISO 25K, but I can -- and look at the result.
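The stop arithmetic, for anyone who wants to check it (base rounded to 100, as above):

import math

base_iso, shooting_iso = 100, 25600           # base rounded to 100 as above
print(math.log2(shooting_iso / base_iso))     # 8.0 stops above base

film_iso = 800                                # Kodak Portra 800
print(math.log2(shooting_iso / film_iso))     # 5.0 stops of underexposure (push)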

[Attached image: DR-ISO-test.jpg]
 
The physical size of a photosite directly relates to the minimum noise of that site. Bigger photosites respond to more photons, and less noise per site is the result.

Because cameras all now use extensive post-processing, there is only a rough relationship between the size of an image sensor's photosites and the sensor's noise versus temperature. Processing is a game changer. But in general, as far as the sensor goes, smaller photosites = higher noise (pre-processing).
You're talking about read noise. And you're correct that we've removed it in modern cameras -- a moot issue. DR is limited by shot noise, not read noise. Shot noise is in the signal (light). Here's a good explanation: What's that noise? Part one: Shedding some light on the sources of noise. Also look at this concerning the size of photosites: The effect of pixel size on noise
 
We should probably get the terminology right.

On an image sensor, “pixels” do not capture light. A “pixel” is only present in a reconstructed image.
Dang, between you and Joe I feel like I'm back in school. LOL. You are correct, they aren't pixels, though laymen commonly refer to them that way. They are more correctly called micro lens arrays. From Cambridge In Color: "Real-world camera sensors do not actually have photosites which cover the entire surface of the sensor. In fact, they may cover just half the total area in order to accommodate other electronics. Each cavity is shown with little peaks between them to direct the photons to one cavity or the other. Digital cameras contain 'microlenses' above each photosite to enhance their light-gathering ability. These lenses are analogous to funnels which direct photons into the photosite where the photons would have otherwise been unused."

I should probably thank you both for forcing me to answer my own question as to why smaller "photosites" can actually have less noise. From the same article: "Well-designed microlenses can improve the photon signal at each photosite, and subsequently create images which have less noise for the same exposure time. Camera manufacturers have been able to use improvements in microlens design to reduce or maintain noise in the latest high-resolution cameras, despite having smaller photosites, due to squeezing more megapixels into the same sensor area."
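A toy model of what that microlens funnel buys you -- all the numbers here are made up, not from any real sensor:

# Toy model only -- made-up numbers, not from any real sensor.
def captured(photons, fill_factor, microlens_gain):
    # fill_factor: fraction of the photosite area that is light-sensitive
    # microlens_gain: fraction of the light hitting the dead area that the
    # microlens redirects into the photodiode (0 = no lens, 1 = perfect)
    return photons * fill_factor + photons * (1 - fill_factor) * microlens_gain

print(f"{captured(1000, 0.50, 0.0):.0f} photons")   # older design, no microlens
print(f"{captured(1000, 0.45, 0.9):.0f} photons")   # smaller site, good microlens

A slightly smaller photosite with a good microlens can capture more of the incoming light than a bigger site without one -- which is how noise can hold steady as pixel counts climb.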

And now my brain is tired, this old man is on recess.😁
 
