ISO discussion, digital SLR compared to film SLR

Do most cameras come with ISO set to auto as a default setting?
 
Do most cameras come with ISO set to auto as a default setting?
I don't know. Because it's so easy to change that setting, what they come with out of the box is irrelevant.
 
These ISO invariant sensors I have been hearing about...are they primarily only full frame cameras?
 
These ISO invariant sensors I have been hearing about...are they primarily only full frame cameras?

No. They come in any size. Fuji X-system cameras use APS-C size sensors, and the Fuji X-Trans II sensor is, for practical purposes, ISO invariant.

Going back through this thread there's a lot of good information, but there's also a lot of murky "grey" information that has developed over time as colloquial jargon but is in fact spurious.

First some of the best info:

You are correct in thinking that the sensitivity of the sensor does not change.
YES!!! Changing the ISO value does not alter the light sensitivity of the sensor.

And again:

Some photographers fiddle with the setting as if it were the third leg of an "exposure triangle", but in reality, changing the ISO has no effect whatsoever on the exposure.[emphasis mine]
This needs to be a little clearer, but what's critical here is that ISO is not a determinant of exposure. Photographic exposure is the amount of light per unit area that reaches the sensor and is a function of the scene illumination, attenuated through the lens (aperture) over time (shutter speed). Three factors, and ISO is not one of them. ISO informs the choice of exposure but does not otherwise alter the exposure. This is a huge point of contemporary confusion.
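If it helps to see those three factors with no ISO anywhere in them, here's a toy sketch (my own illustration, arbitrary units, not any camera's actual math):

```python
# Toy illustration: photographic exposure depends on scene light,
# aperture, and shutter time -- ISO appears nowhere in it.
def exposure(scene_luminance, f_number, shutter_seconds):
    # relative exposure H ~ L * t / N^2 (constant factors omitted)
    return scene_luminance * shutter_seconds / (f_number ** 2)

scene = 1000.0  # arbitrary luminance units
print(exposure(scene, 2.8, 1/60))  # ~2.13
print(exposure(scene, 5.6, 1/15))  # ~2.13 -- 2 stops less aperture, 2 stops more time, same exposure
```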

The above becomes important in understanding the next topic: noise. Raising the ISO value on a digital camera typically reduces noise. I'm going to repeat that for effect: raising the ISO value on a digital camera typically reduces noise. What's happened in the photo jargon is the adoption of a spurious correlation. As the light level drops and photographers need to reduce exposure in order to keep the camera shutter speed high, they raise the ISO. (I took photos in the garden today under a gloomy sky and set the ISO to 1600.) Back with film we would switch to a high-ISO film in the same circumstance. Now, as we already know, raising the ISO does not increase the light sensitivity of the sensor. It does, however, bias our camera meters to calculate a reduced exposure. AND noise is the direct result of underexposing the sensor. That's where the noise is coming from. You get noise as a direct result of reduced sensor exposure; ISO doesn't cause the noise, the underexposure causes the noise, and as we know ISO is not an exposure determinant.

Now once you've underexposed the sensor (and have lots of noise) you also have a very dark image. You're going to have to brighten that image, and you can do it two ways: digitally and/or electronically. Some cameras do a combination of both. The electronic method, which you'll often hear referred to as gain or amplification, is applied to the analog sensor signal before it's converted into numbers. If you do the number conversion first, then the brightening of the image is done digitally by multiplying the numbers. This is digital camera ISO performing its primary function. Most of our cameras (the vast majority) use the electronic method of ISO brightening, and for good reason -- it helps suppress the noise that occurred from the sensor underexposure (see statement above repeated for effect).
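A crude sketch of the digital method (my own toy numbers, assuming a 14-bit raw ceiling):

```python
import numpy as np

raw = np.array([12, 40, 150, 900])          # made-up 14-bit sensor counts from an underexposed frame
two_stops = np.minimum(raw * 4, 2**14 - 1)  # "multiply the numbers" = digital ISO brightening
print(two_stops)  # [  48  160  600 3600] -- same data, just bigger numbers
```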

Your ISOless or ISO invariant cameras (just a few, built around Sony sensors) show no noise-suppression benefit from the electronic brightening method. As a result we can wait on the digital brightening method and do that later at the computer, where we can bring substantially greater processing power to bear. In the field, using the same camera, we can ignore the ISO setting; however, that can make chimping your image a bit tricky, as the camera JPEG could be very dark.

Joe
 
the trouble with pedantry is that it turns into an infinite regress of "well no, actually"

for example, "well no, Ysarex, decreased exposure doesn't cause more noise, it causes a lower s/n ratio by reducing the numerator"

so what we actually do in practical terms is work with a simplified mental model that's good enough. if instead we pedantically insist that the sensor is the array of sensels, the bayer array, and the support circuits but not the amplifiers, then we get to do the dramatic bit about how changing the ISO doesn't change the sensor's sensitivity, and then the noobs look on in confused wonder, and then we can wander off into the weeds mixing up noise and signal-to-noise ratio, but nobody will notice because so so so very many words

if instead you include the amplifiers that implement ISO in what you mean by the word sensor, which is a perfectly reasonable thing to do (I am an ex-systems guy), then by god changing the ISO setting does indeed make your sensor (or "sensor system" if you prefer) into a more sensitive, albeit noisier, sensor(-system)

which is a perfectly good mental model to use, it works fine for millions of people successfully making billions of photographs every year.

tl;dr - yes, yes indeed, make ISO bigger makes them pichers brighter but noisier
 
the trouble with pedantry is that it turns into an infinite regress of "well no, actually"

for example, "well no, Ysarex, decreased exposure doesn't cause more noise, it causes a lower s/n ratio by reducing the numerator"

so what we actually do in practical terms is work with a simplified mental model that's good enough. if instead we pedantically insist that the sensor is the array of sensels, the bayer array, and the support circuits but not the amplifiers, then we get to do the dramatic bit about how changing the ISO doesn't change the sensor's sensitivity, and then the noobs look on in confused wonder, and then we can wander off into the weeds mixing up noise and signal-to-noise ratio, but nobody will notice because so so so very many words

if instead you include the amplifiers that implement ISO in what you mean by the word sensor, which is a perfectly reasonable thing to do (I am an ex-systems guy), then by god changing the ISO setting does indeed make your sensor (or "sensor system" if you prefer) into a more sensitive, albeit noisier, sensor(-system)

No. If sensitivity were increased then additional data would be recorded. That doesn't happen. Amplifying existing data doesn't record more data. Raising ISO on a digital camera does not increase light sensitivity -- it doesn't allow the "system" to record data deeper into the shadows. The trouble with oversimplification is that it can encourage incorrect assumptions about cause and effect and false assumptions about what's possible. The mental model that the sun revolves around the earth worked for millions of people for a long time. I think you can still join, in fact: The Flat Earth Society

Joe

which is a perfectly good mental model to use, it works fine for millions of people successfully making billions of photographs every year.

tl;dr - yes, yes indeed, make ISO bigger makes them pichers brighter but noisier
 
well no it can record more data. suppose the sensor records volts, rounding down to the nearest volt, with an optional "amplify by 2x" circuit. this is simplified, of course.

consider a single sensel with a readout of 1.6 volts

without the amplifier, you read out a 1, which is off by 0.6 volts. with the amp you read a 3, which you interpret as 1.5 volts because you know about the amplifier, for an error of 0.1 volts. but most of this is irrelevant without considering the noise floor of the system, which is determined in part by the support circuitry etc etc etc
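here's that toy adc as a few lines of python, same numbers, in case anyone wants to fiddle:

```python
import math

def readout(volts, amplify=False):
    # toy adc: rounds down to the nearest volt, optional 2x amp in front
    if amplify:
        return math.floor(volts * 2) / 2  # quantize the amplified signal, then undo the 2x
    return float(math.floor(volts))

print(readout(1.6))                # 1.0 -> off by 0.6 volts
print(readout(1.6, amplify=True))  # 1.5 -> off by 0.1 volts
```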

see, you can play the "I know more fiddly details" game endlessly, but in the end it doesn't actually change the images you take one bit, it's just posturing. the fact that with some, but not all, modern sensors you can just shoot at ISO 100 and fix it in post is a mere curiosity.

"knowing" that changing the ISO "doesn't change the sensitivity" of the sensor has exactly zero practical impact on actually taking images

but i admit it sure makes you sound authoritative
 
Sometimes we lose sight of the fact that this is beginners forum, not a place for scientific dissertations.
 
Subsections are guidelines not rules in general - the OP asked a question in a way that leads into a higher level of discussion; there's no harm in that at all.
 
well no it can record more data. suppose the sensor records volts, rounding down to the nearest volt, with an optional "amplify by 2x" circuit. this is simplified, of course.

consider a single sensel with a readout of 1.6 volts

without the amplifier, you read out a 1, which is off by 0.6 volts. with the amp you read a 3, which you interpret as 1.5 volts because you know about the amplifier, for an error of 0.1 volts. but most of this is irrelevant without considering the noise floor of the system, which is determined in part by the support circuitry etc etc etc

That's just you being guilty of the same pedantry you're complaining about. Raising the ISO on a digital camera does not increase sensitivity -- because you do not record more data. If you raise the ISO 3 stops on a digital camera you do not record 3 stops of additional shadow detail. You get nothing more.

see, you can play the "I know more fiddly details" game endlessly, but in the end it doesn't actually change the images you take one bit, it's just posturing. the fact that with some, but not all, modern sensors you can just shoot at ISO 100 and fix it in post is a mere curiosity.

"knowing" that changing the ISO "doesn't change the sensitivity" of the sensor has exactly zero practical impact on actually taking images

This is where you're really wrong. Put all the "fiddly details" away and pragmatically let's go take some "pichers." My last camera was a Fuji X-E2 which is for all PRACTICAL purposes ISO invariant. I didn't buy it because of that and initially I didn't concern myself with that aspect of the camera. But eventually I tested it. I was impressed and so I tested it more rigorously and convinced myself that apart from a brighter image to chimp on the LCD the ISO function of the camera was without any real value.

Eventually I started to take advantage of that. I started to leave the ISO at base and didn't bother with the trouble of raising it when I was forced to reduce exposure. There are some interesting PRACTICAL advantages to that. Taking photos is simpler when you have fewer buttons and dials to deal with. But there's also the fact that if you really must reduce exposure, then withholding the ISO boost keeps your highlights from any threat of clipping. When ISO brightens underexposed sensor data it does so equally for all data and can clip highlight detail. Any increase in ISO reduces sensor DR. If we don't increase ISO, sensor DR remains at max.
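To make the highlight argument concrete, here's a toy sketch (my own made-up raw values, assuming a 14-bit ceiling and an ISO-invariant sensor):

```python
import numpy as np

CEILING = 2**14 - 1  # 14-bit raw maximum (assumed)
raw = np.array([20.0, 85.0, 400.0, 15000.0])  # deep shadows plus a window highlight

in_camera = np.minimum(raw * 16, CEILING)  # +4 stops of ISO in camera: x16 with a hard ceiling
print(in_camera)  # [  320.  1360.  6400. 16383.] -- the window highlight clips

# leave ISO at base instead: the raw file still holds 15000 unclipped,
# so the shadows can be pushed +4 stops in the converter while the
# window data is rendered without clipping
```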

I used the X-E2 for 4 years before upgrading and eventually became accustomed to the PRACTICAL advantages I could access by understanding how it really worked. For example I was asked by friends to take some snaps at an indoor event -- be unobtrusive, no flash please. The venue was poorly lit and large windows added a unique complication. Here's a photo:

[Image: party_04.jpg]

I left the ISO at base and the gentlemen at the table are basically 4 stops underexposed. Here's the camera processed JPEG:

[Image: party_01.jpg]

And this is what I would have gotten if I had made that 4 stop ISO adjustment in camera:

[Image: party_02.jpg]

Raising the ISO 4 stops on the camera would have reduced sensor DR by 4 stops. I would have gotten bupkis more data in the shadows but the highlights out the window and the highlight on the side of the man's face (blue shirt) would have been terminally clipped. Go back up to the first photo and look at the data out the window and that face highlight.

Under the circumstance in the above photo where I was forced to reduce exposure, my understanding that raising ISO does not increase light sensitivity but does reduce sensor DR had a huge PRACTICAL impact on the photo I was able to produce.

Joe

but i admit it sure makes you sound authoritative
 
if i am understanding ysarex right, he is asserting that increasing exposure (eg extending shutter speed) will get more shadow detail whereas increasing the ISO doesn't. this is probably true for newer cameras with ISO invariance.

testing my camera, which is about 7 years old, ISO works pretty much like film. taking identical images, one with a longer shutter speed at ISO 100 and the other with a shorter shutter speed at ISO 1600, reveals zero difference in shadow details. diffing the images shows differences on the edges and in the midtones, likely due to increased noise in the ISO 1600 shot and some noise reduction code.

in other words my camera was engineered so that the ISO setting worked much like film ISO. the sensor is by modern standards pretty noisy, so the readout at ISO 100 leaves a lot of low-level signal behind (below the LSB) in favor of getting a clean signal. upping the ISO does indeed recover more shadow detail at the expense of reading out additional noise. then the software does god knows what, and in the end, it's a wash and ISO works a lot like film's ISO.

point is you can engineer these things to work any way you like. modern systems favor ISO invariance which, as a consequence, means that upping the ISO will treat shadow detail differently from opening the aperture or increasing the shutter speed. which, if you want, I suppose you can define "sensitivity" to mean "not the thing ISO does" and then you get to grump at videos.

if you care about shadow detail, or color fidelity, or highlight detail, or haptics, or how easy it is to clean the sensor, or any of those things it will behoove you to learn about your camera a bit. if you don't care about one thing or another, just let it slide.

ISO make pichers brighter.

if that's not enough detail for you, do figure out what ISO does on your camera until you know enough to satisfy you. every camera is gonna be a lil different.
 
so you exposed for the highlights at ISO 100 and, because your sensor system is ISO invariant, were able to recover the men.

if instead you had set the ISO to 1600 and then exposed for the men rather than the highlights you'd have blown out the highlights, ok. that is unsurprising. again: if you'd set the ISO up and metered in a completely different way you'd have lost the highlights.

if you have set the ISO to 1600 and exposed for the highlights, that is, exposed in the same way you did originally rather than metering completely differently, you'd still have to recover the men the same way, and they'd be all noisy. but you could have used a faster shutter speed or a smaller aperture.

so.. let's see. setting the ISO to 1600 would have made the camera more sensitive, but noisier. right? did I get that right?
 
so you exposed for the highlights at ISO 100 and, because your sensor system is ISO invariant, were able to recover the men.

if instead you had set the ISO to 1600 and then exposed for the men rather than the highlights you'd have blown out the highlights, ok. that is unsurprising. again: if you'd set the ISO up and metered in a completely different way you'd have lost the highlights.

if you have set the ISO to 1600 and exposed for the highlights, that is, exposed in the same way you did originally rather than metering completely differently, you'd still have to recover the men the same way, and they'd be all noisy.

yes.

but you could have used a faster shutter speed or a smaller aperture.

no. The reason I exposed the way I did originally was to get the faster shutter speed.

so.. let's see. setting the ISO to 1600 would have made the camera more sensitive, but noisier. right? did I get that right?

no.

Joe
 
far be it from me to judge how a person's thought process leads them to the answer, and if yours takes a trip through details of sensor electronics then so be it, more power to ya

me, i just expose for whatever highlights i want to keep, same as with slide film, same as any digital camera
 
I too tried to watch the video and almost stopped just one minute in when most of what he said was wrong. I forced myself to keep watching because sometimes to teach a concept, it's easier to fib a little to get a concept across ... and then clear up the fib later. If you flood the person with too many facts up front, you'll lose them. But at about 2/3rds of the way through the video it was still wrong and nothing was being clarified. I realized that someone watching this video will probably come away knowing all the wrong stuff.



To clear a few things...

Sensors don't have "pixels"

(photos have "pixels" but sensors do not). Sensors have "photosites". The photosite is a single light-sensitive cell that reads the luminosity of light at that position. In other words, think of it as a black & white camera and not as a color camera.

To get color, the camera uses a "color filter array" (CFA). The most common type is the Bayer mask (see: Bayer filter - Wikipedia).

Think of this as a chess-board type filter where each square is a red, green, or blue transparent tile. A photosite that happens to be located behind a "green" colored tile on the CFA is sensitive to "green" wavelength light, but not to "red" or "blue". But most light is not a pure single color... for example, teal light would be partially detected by "blue" photosites as well as "green" photosites.

Still... the actual RAW data from the camera is a file which provides the light value for each photosite in single-color channel form. It's a mosaic of color tiles... no blended color. To get the blended color values, software has to "de-mosaic" the tiles.

Suppose I'm a "green" photosite. I know how "green" I'm supposed to be because that's the only color I can detect. What I could not detect was how "red" or "blue" I was supposed to be. But I have neighbors and I can borrow information from them. If I'm green, then I'll have two red neighbors and two blue neighbors. Suppose my red neighbors are above and below me and my blue neighbors are left and right. If my red neighbor above has a value of 180 and my red neighbor below has a value of 178 then that tells me that if only I could have been able to detect "red" myself, I probably would have recorded a value of 179. I can do the same with the blue.
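If you like, here's the same neighbor-borrowing idea in a few lines of Python (the red values are from the example above; the blue and green values are made up for illustration):

```python
# one "green" photosite borrowing from its neighbors to become a full-color pixel
red_above, red_below = 180, 178    # values from the example above
blue_left, blue_right = 90, 96     # made up for illustration
green_here = 140                   # the only channel this photosite measured directly

red_guess = (red_above + red_below) / 2     # 179.0 -- what red "probably would have been"
blue_guess = (blue_left + blue_right) / 2   # 93.0
print((red_guess, green_here, blue_guess))  # the de-mosaiced RGB pixel
```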

It turns out the algorithms are a bit more involved than this simple example... there are several algorithms. But that's the basic idea. Most programs don't let you pick the algorithm (my PixInsight software that I use for astrophotography DOES let me pick from among several RAW decoding algorithms). This means if you open a RAW file in one program and save the output to a non-lossy TIFF file... then do the same using a completely different program... then compare the two TIFF files, you'll find that they may have different results when you do a pixel-by-pixel comparison.

BTW, compare the file size of your RAW to the file size of the non-lossy TIFF... you'll notice the TIFF is MUCH larger, and that's because it is made up of 3-color-channel "pixels" whereas the RAW is just single-color-channel data from which the 3-channel color will mathematically be derived when it is processed.



As for the ISO...

Sometimes it's easier to teach a concept by simplifying it. The simplification can sometimes involve telling a fib. Hopefully someone goes back and explains that "remember when we told you X was true... well, we weren't entirely honest... we skipped the details for simplicity's sake, but in reality... Y is true."

So yes, it's easier to say that changing the ISO changes the sensitivity of the sensor but that's not actually what it does.

When enough photons hit a photosite, it will bump up its recorded light level by 1.

All this happens before any ISO adjustment is applied. In other words, whatever number of photons were collected is what it was, and that's the original data.

Changing ISO really just manipulates the data values that are reported (after the fact).

There are two main ways to do this... there's an "upstream" method and a "downstream" method (and a combination of both methods). The camera manufacturer won't tell you what they do for any given sensor but there are ways to work it out via testing.

The sensor produces an analog signal (not digital). The camera then performs an analog-to-digital conversion (ADC). That results in digital output, so we call it a "digital" camera even though the light captured was analog data.

It turns out in the analog world, you can simply amplify the signal. This is referred to as "upstream" amplification if it happens prior to the ADC step.

You can also take the digital output from the ADC step (so now you just have numbers) and multiply those numbers to increase the value. If you do this, it's called "downstream" amplification.

Some cameras rely almost exclusively on "downstream" amplification and others rely on a combination of "upstream" and "downstream" amplification.

Keep in mind that digital cameras have a maximum number of bits they use for each integer that represents the amount of light they counted. An 8-bit JPEG file has just 256 values (0-255). A 16-bit integer can store 65,536 values (0-65,535). But most modern cameras create 14-bit RAW files with 16,384 values (0-16,383).
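Quick arithmetic, if you want to verify those ranges:

```python
for bits in (8, 14, 16):
    print(f"{bits}-bit: {2**bits:,} values (0-{2**bits - 1:,})")
# 8-bit: 256 values (0-255)
# 14-bit: 16,384 values (0-16,383)
# 16-bit: 65,536 values (0-65,535)
```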

So here's a problem: suppose I shoot at ISO 100 and, picking just two of the photosites, one reported a value of 500 and another reported a value of 9000. This is no problem because both values are below the maximum value of 16,383.

Now I dial the camera up to ISO 200 and my camera only performs "downstream" gain. This means it will perform simple multiplication and multiply all values by 2. My photosite that reported a value of 500 will now be saved as if it reported 1000. So far so good. But my photosite that reported a value of 9000 will now have to save a value of 18,000, and this is a problem because you can't store a value of 18,000 in a 14-bit integer. That's an overflow problem and it results in clipped data (loss of information).
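The same overflow, sketched in code (same 500 and 9000 values, 14-bit ceiling):

```python
CEILING = 2**14 - 1  # 16,383

def downstream_gain(value, multiplier=2):
    # naive post-ADC doubling for a one-stop ISO bump
    return min(value * multiplier, CEILING)

print(downstream_gain(500))   # 1000  -- fine
print(downstream_gain(9000))  # 16383 -- 18,000 doesn't fit; the highlight clips
```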

This isn't just a loss of information, it's also a loss of the camera sensor's dynamic range.

Some cameras have a feature to protect those highlights (e.g. "highlight tone priority"). This scales the multiplication so that while the dark pixels are all multiplied by 2.0x, the lighter pixels are multiplied by some smaller value (say... 1.8x) so that they don't overflow and clip data... and then we linearly interpolate how much to boost each mid-range value (some are boosted by 1.81x, 1.82x, etc., all the way down until we reach the pixels that we multiply by the full 2.0x). Again, a simplification... but that's the idea.

This means we protect against full data loss... but we do lose part of the information. Technically we compress the dynamic range into a smaller space to try to avoid losing data and the result looks pretty good so we're happy with it.
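Here's a toy version of that scaled multiplication (my own simplification of the interpolation, not any camera's actual math):

```python
def tone_priority_gain(values, full_gain=2.0, ceiling=2**14 - 1):
    # shadows get the full multiplier; the brightest value gets only as much
    # gain as fits under the ceiling; everything between is interpolated
    peak = max(values)
    top_gain = min(full_gain, ceiling / peak)  # 16383/9000 ~ 1.82x here
    return [round(v * (full_gain + (top_gain - full_gain) * v / peak)) for v in values]

print(tone_priority_gain([500, 4500, 9000]))  # [995, 8596, 16383] -- compressed, not clipped
```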

If you had previously believed that a change of ISO simply increased sensitivity, and didn't realize the math behind what it is really doing (mathematically multiplying values), then you might have missed that you are losing dynamic range and should take steps to protect against loss of data, such as shooting bracketed images to perform an HDR merge... or enabling features such as highlight-tone-priority (which may have a different name depending on camera model).

But back to that "upstream" amplification... this results in a gain being applied BEFORE digital conversion (before ADC) and as a result it doesn't lose much in the way of dynamic range.

If you were to test the dynamic range of a camera (using a test target) what you'd probably notice is that as you boost ISO, you don't seem to be losing much dynamic range... but there's a limit to this... and then suddenly you hit an ISO where you get a linear drop off in dynamic range for each boost in ISO beyond that point.

Most cameras that do both "upstream" and "downstream" have some magic point where this trade-off occurs. If you want to protect your DR then this is the highest ISO you should use.
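A sketch of how you might spot that magic point from test data (the DR numbers below are made up for illustration):

```python
# hypothetical measured DR (stops) from a test target, by ISO
dr = {100: 12.0, 200: 11.9, 400: 11.8, 800: 11.7,
      1600: 10.8, 3200: 9.8, 6400: 8.8}

isos = sorted(dr)
for lo, hi in zip(isos, isos[1:]):
    if dr[lo] - dr[hi] > 0.5:  # crude test for the ~1-stop-per-stop dropoff
        print(f"upstream gain seems to run out around ISO {lo}")
        break
# -> upstream gain seems to run out around ISO 800
```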




Noise

When you boost ISO, you boost noise right along with it. The reality is the noise was always there, you just didn't notice it. Noise happens when a photosite reports a higher value than it should have (I'll skip discussions on why this happens, but will mention that some of the reasons have to do with the quantum nature of the universe... no amount of electronic wizardry will make that go away, because a sensor that reports this "noise" is actually reporting what really happened). Cameras that do try to make this problem go away are "cooking" the RAW data (quite a number of modern sensors on the market provide "cooked" RAW files).

For some interesting reading... look up the Sony "star eating" issue (you'll find lots of hits on that search). The summary for those who don't want to search is that in Sony's effort to produce sensors that have lower noise, they're averaging out data that they "think" is noise (because it seems to exhibit properties of noise) even when it's actually real data. Astrophotographers started noticing that stars were missing from their images that really are supposed to be there. The images look cleaner than they should because the computer assumed it was noise and "cooked" the RAW data to get rid of it (and lots of Sony owners think they have better sensors because they don't see as much noise -- unaware of what the camera sensor is really doing).

Just remember that noise is "additive" in that it always results in a photosite reporting a HIGHER numeric value than it should have reported. There is no "anti-noise" where the photosite reported a LOWER numeric value (at least if there is, I have yet to encounter it.)

This is significant because it implies you're more likely to notice the noise in darker areas of your image and less likely to notice noise in whiter areas of your image. If a pixel is nearly "white" already and there's a "noisy" pixel nearby, the noisy pixel won't be able to be that much brighter than the pixel that is already quite bright. But when an anomalous high-value pixel shows up in a "dark" area, it really stands out.

But there is an exception... suppose you shoot a photo in a very well-lit "green" room, so you're near saturating the green photosites but the red and blue photosites are reporting very low values. I can have noise that spikes the value of, say, a "red" photosite, and this creates a bright red photosite where it should be dark. My de-mosaicing algorithm then "blends" that spiked red value, and it results in my green photosite being blended with red to create a yellow or orange pixel in the output. In other words, the "noise" didn't just make my pixel brighter... it shifted the color value of the pixel.

You are far more likely to notice noise in dark areas.
You are also far more likely to notice noise in regions of the image that have some relatively flat tone (non-contrasty areas).

It isn't that there is no noise in the bright or contrasty areas... it's more that your eye is much less likely to notice it because it isn't so significantly different than the neighboring pixels. (Sort of a "Where's Waldo" problem... when the detail in an area of your image is very complicated, your eye is not likely to notice the flaw. The flaw can hide in plain sight.)

Knowing this, you can smartly deal with noise by using software that goes after darker regions more aggressively than lighter regions.

You can also create a mask that finds edges of contrast and protect those areas against de-noising ... and more aggressively smooth out the areas that lack contrast.
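Here's a rough sketch of that masked approach (my own toy version, assuming a grayscale float image in 0..1, using scipy for the blur and edge detection):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def shadow_weighted_denoise(img, sigma=2.0):
    # blur strongly in dark, flat regions; leave bright or contrasty regions mostly alone
    blurred = gaussian_filter(img, sigma=sigma)
    shadow_weight = 1.0 - img  # 1 in deep shadow, 0 in highlights
    edges = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    edge_weight = 1.0 - edges / (edges.max() + 1e-9)  # back off near contrast edges
    w = shadow_weight * edge_weight
    return img * (1 - w) + blurred * w

noisy = np.random.rand(64, 64)  # stand-in for a grayscale image in 0..1
smoothed = shadow_weighted_denoise(noisy)
```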
 
