Using long exposure to measure low luminance on a display

Hi all,

First post here. I know next to nothing about photography, and the only camera I own is the one on my smartphone. However, I do have an interest in displays, and am considering investing in a camera in the near future.

While it is true that I am not even a beginner when it comes to photography, I have chosen to post this question in this forum instead of the beginners forum, as it deals with some rather technical issues.

I have an interest in display calibration, and have some experience calibrating Sony Trinitron CRTs (if anyone is interested, check out my white point balance calibration guide here).

One of the challenges that I, and a few others, are facing is characterizing the luminance response in the near-black region. The CRTs I work with are capable of producing black levels that are quite a bit below the measurable range of any of the instruments I use (I have an i1 pro, an i1 display pro, and a few DTP-94s). The lowest reading I can achieve is with the i1 display pro, and that is about 0.002 cd/m2. A fellow Trinitron user and I have been brainstorming ideas on how to overcome this limitation. One idea involved a parabolic reflector that would focus light from the display. Another, more reasonable suggestion that came up was to use a DSLR in a long exposure mode to sum luminance information over time.

The idea would run something like this:

1: Display a full field test pattern at a luminance measurable by our instruments, say 10 cd/m2.

2: In a dark room, set up the DSLR so that the frame encompasses a large region of the display. Set the exposure time (perhaps 5 minutes), and start capturing.

3: Examine the RAW file, and calculate the average pixel value (this could be done on a single channel basis, for example, the green channel only).

4: Calculate the relationship between the average pixel value (at the chosen exposure) and the actual luminance. For example, suppose the average pixel value for green turns out to be 500 (out of a possible 1023 in a 10-bit file). In that case, the scaling factor would be 10/500 = 1/50 cd/m2 per pixel value.

5: Re-measure at a few other measurable luminances, and ensure that this scaling factor is consistent across different luminances. I'm assuming RAW encodes luminance linearly, so there shouldn't be a need for gamma correction.

6: Measure test patterns that are too dark to be measured directly, and use the scaling factor to infer their actual luminance (a rough code sketch of steps 3-6 follows below).
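To make steps 3-6 concrete, here's a rough Python sketch of what I have in mind. It assumes the rawpy library for reading the RAW data; the file names and the 10 cd/m2 reference value are placeholders, and both captures are assumed to use the same ISO, aperture, and exposure time (otherwise the scaling factor would need to be corrected by the exposure ratio).

```python
import numpy as np
import rawpy  # assumed available for reading RAW files


def mean_green(path):
    """Average raw value of the green photosites, with the black level subtracted."""
    with rawpy.imread(path) as raw:
        data = raw.raw_image.astype(np.float64)
        colors = raw.raw_colors        # per-photosite color index: 0=R, 1=G, 2=B, 3=second G
        black = np.mean(raw.black_level_per_channel)
        return data[(colors == 1) | (colors == 3)].mean() - black


# Step 4: derive the scaling factor from a patch of known luminance.
reference_luminance = 10.0             # cd/m2, measured with the i1 (placeholder)
scale = reference_luminance / mean_green("patch_10cdm2.cr2")

# Step 5: repeat at other measurable luminances and confirm 'scale' stays constant.

# Step 6: infer the luminance of a patch too dark for the meter to read.
print(scale * mean_green("patch_near_black.cr2"), "cd/m2 (inferred)")
```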

That's the basic idea, and here are the questions:

a) Is this general approach completely misguided? Or is there a chance it could work?

b) At what point should I be concerned about dynamic range? In steps 1-5, I need to ensure that the sensor isn't saturated, else my calculations will be meaningless, yet I need an exposure time long enough to capture usable information when measuring the very low luminance patterns.

c) Are there any ISO or aperture considerations I should take into account here? My goal is to create an image that has a reliable relationship to the luminance of the display, yet retains good sensitivity in low light. I don't care about spatial resolution, and I imagine a certain amount of noise can be tolerated, given that the calculations are averaging over millions of pixels (and many images can be acquired for each luminance level).

I'd really appreciate any suggestions and guidance!
 
Interesting attempt. I can't help much, but be careful: with long exposures you also get thermal noise. And check whether the linearity assumption is true... I don't know if it holds in practice, particularly at the extremes.
 
Yep, will definitely check the linearity assumption, and at multiple exposures. It may be the case, for example, that at normal shutter speeds, light is encoded linearly, but with longer exposures, things change. I don't know enough about the physics of the sensors.

Either way, I will report the results here (or at least link to a post where they're reported).

I might even try the CHDK firmware on a cheaper camera.
 
hello,
I am the other guy interested in this.
a) Is this general approach completely misguided? Or is there a chance it could work?

b) At what point should I be concerned about dynamic range? In steps 1-5, I need to ensure that the sensor isn't saturated, else my calculations will be meaningless, yet I need an exposure time long enough to capture usable information when measuring the very low luminance patterns.
well my plan is to just increase exposure time until the sensor shows a reasonably bright image

c) Are there any ISO or aperture considerations I should take into account here? My goal is to create an image that has a reliable relationship to the luminance of the display, yet retains good sensitivity in low light. I don't care about spatial resolution, and I imagine a certain amount of noise can be tolerated, given that the calculations are averaging over millions of pixels (and many images can be acquired for each luminance level).
afaik, using the lowest iso and the largest aperture is what we want to maximize the signal-to-noise ratio.

read through
Image noise - Wikipedia, the free encyclopedia

from what i can tell, the important sources of noise for us are shot noise and salt-and-pepper noise. the first shrinks relative to the signal as exposure time increases: quadruple the exposure and the shot noise, as a fraction of the signal, halves. the second (i am not 100% sure) is proportional to the exposure time. however, salt-and-pepper noise can be eliminated (completely? not sure) by subtracting off a dark frame (a picture taken with the shutter completely closed).
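something like this numpy sketch is what i mean by the dark frame correction (it assumes the raw frames have already been loaded as 2-D arrays; the names are placeholders):

```python
import numpy as np

def dark_corrected_mean(light_frames, dark_frames):
    """subtract a master dark frame, then average over pixels and exposures.

    light_frames: list of 2-D raw exposures of the test pattern
    dark_frames:  list of equally long exposures taken with the shutter/cap closed
    """
    master_dark = np.mean(dark_frames, axis=0)       # removes hot pixels / dark current
    corrected = [frame - master_dark for frame in light_frames]
    # averaging over many frames and millions of photosites beats down the
    # remaining shot and read noise by roughly 1/sqrt(N)
    return float(np.mean(corrected))
```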

as for linearity, sensors should be perfectly linear for low luminances, assuming they don't get saturated.


but first, i'm going to take some pictures with my phone to get an order-of-magnitude estimate of how long an exposure we need.
 
afaik, using the lowest iso and the largest aperture is what we want to maximize the signal-to-noise ratio.

read through
Image noise - Wikipedia, the free encyclopedia

from what i can tell, the important sources of noise for us are shot noise and salt-and-pepper noise. the first shrinks relative to the signal as exposure time increases: quadruple the exposure and the shot noise, as a fraction of the signal, halves. the second (i am not 100% sure) is proportional to the exposure time. however, salt-and-pepper noise can be eliminated (completely? not sure) by subtracting off a dark frame (a picture taken with the shutter completely closed).

Interesting. That makes things simple.


as for linearity, sensors should be perfectly linear for low luminances, assuming they don't get saturated.

I was just reading this, and yes, the physics appear to preserve a very linear response to photons.

My only concern would be that there may be a leakage of the accumulating charge at each photosite during the long exposure, and that the amount of leakage varies depending on the flux of the light source (so more leakage with higher luminance test patterns). This might complicate the scaling factor idea, but then again, perhaps there is no leakage worth worrying about. Anyway, there's a wide range of luminances we can verify with.

Also, I was initially concerned about bit depth limitations, but I think it's not an issue. Because of the long exposures, we are "zooming" into a particular luminance range and essentially assigning all available bits to that range.
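To put some illustrative (entirely made-up) numbers on the "zooming" idea:

```python
# Suppose 10 cd/m2 produces an average of 500 DN at a 1 s exposure (made-up numbers).
dn_per_cdm2_at_1s = 500 / 10.0              # 50 DN per cd/m2
exposure = 300.0                            # a 5-minute exposure, in seconds
dn_per_cdm2 = dn_per_cdm2_at_1s * exposure  # 15000 DN per cd/m2, if the response is linear
print(0.002 * dn_per_cdm2)                  # a 0.002 cd/m2 black level lands at ~30 DN
# The 10 cd/m2 reference would saturate long before 300 s, which is why it would be
# captured at a shorter exposure and related to the dark captures via the exposure ratio.
```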
 
another thought about iso: if quantization noise becomes a problem, bumping it up shouldn't be a problem.

so my iphone 5 with a 1/3.2" sensor could easily "see" 0.1 nits at 1/15s with iso 3200. by "see", i mean able to distinguish from background in spite of noise.

exposure time could be increased by 100-1000 times. using a dslr with lens removed should allow a few hundred times more light onto its sensor than the iphone5's tiny aperture does.

so i'll estimate that the minimum a dslr can "see" by removing the lens and pressing the body against the screen would be around 10^-5 to 10^-6 nits.

ultimately the limitation will be the maximum exposure time before dark/leakage/salt-and-pepper noise completely saturates the sensor.
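roughly, as a back-of-the-envelope calculation (every number here is a guess, not a measurement):

```python
iphone_floor = 0.1     # nits "seen" at 1/15 s, iso 3200 (from above)
exposure_gain = 300    # a few hundred times longer exposure
aperture_gain = 300    # lensless dslr pressed against the screen vs. the phone's tiny aperture
print(iphone_floor / (exposure_gain * aperture_gain))   # ~1e-6 nits, the ballpark above
```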

My only concern would be that there may be a leakage of the accumulating charge at each photosite during the long exposure, and that the amount of leakage varies depending on the flux of the light source (so more leakage with higher luminance test patterns). This might complicate the scaling factor idea, but then again, perhaps there is no leakage worth worrying about. Anyway, there's a wide range of luminances we can verify with.
doubt it. i don't see any physical cause for this, especially at our luminances. if there were a lot of light, that could heat up the sensor, which would increase the dark/leakage noise, but in that case the exposure would be short.

Also, I was initially concerned about bit depth limitations, but I think it's not an issue. Because of the long exposures, we are "zooming" into a particular luminance range and essentially assigning all available bits to that range.
yup
 
another thought about iso: if quantization noise becomes a problem, bumping it up shouldn't be a problem.

Just so I'm following: quantization noise would occur when actual differences in charge are encoded as the same tonal value, and increasing the ISO increases the gain of the sensor (less charge is required to encode each tonal value), thus reducing quantization error. Ok I think I got it.
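With some toy numbers (not from any real sensor) just to check my understanding:

```python
# Hypothetical sensor: 60,000 electron full well read out through a 14-bit ADC.
full_well_electrons = 60_000
adc_levels = 2 ** 14
print(full_well_electrons / adc_levels)      # ~3.7 electrons per code value at base ISO
# Raising the ISO 8x amplifies the signal before the ADC, so each code value now spans
# ~0.46 electrons, meaning smaller charge differences survive quantization
# (at the cost of clipping bright signals sooner).
print(full_well_electrons / 8 / adc_levels)
```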


doubt it. i don't see any physical cause for this, especially at our luminances. if there were a lot of light, that could heat up the sensor, which would increase the dark/leakage noise, but in that case the exposure would be short.

Yea, perhaps the concern is unfounded:

From the previous link:

The recording response of a digital sensor is proportional to the number of photons that hit it. The response is linear. Unlike film, digital sensors record twice the signal when twice the number of photons hit it. Digital sensors also do not suffer from reciprocity failure like most films.


so i'll estimate that the minimum a dslr can "see" by removing the lens and pressing the body against the screen would be around 10^-5 to 10^-6 nits.

That's very exciting. If this works, a lot of calibrators are gonna be very happy (assuming they haven't already thought of your idea).
 
The green channel seems to be the one to use (at least on sensors that use a Bayer array), as there are twice as many photosites with a green filter as there are with red or blue (this is very similar to the 4:2:2 chroma subsampling idea).

Understanding Digital Camera Sensors

also see HowStuffWorks "Demosaicing Algorithms: Color Filtering"

Ideally, it would be nice if the filters could somehow be removed, so we'd get a full resolution grayscale image :)

edit: hah, look at this:

http://petapixel.com/2013/08/04/scr...ayer-off-a-dslr-sensor-for-sharper-bw-photos/

http://www.jtwastronomy.com/tutorials/debayer.html

http://www.iceinspace.com.au/forum/showthread.php?t=109439
 
A CCD sensor may be better than a CMOS sensor:

Starizona's Guide to CCD Imaging

CCD cameras are up to 50 times more sensitive than standard digital SLR
CCD cameras have a greater dynamic range than digital SLRs, meaning they can more easily capture both faint and bright detail in a single exposure
Most (but not all) CCDs have a linear response, which means they can be used for photometry--studying the brightness of objects such as variable stars or asteroids

something like this may be a good option (tiny sensor, but we're not interested in number of pixels so much).

Also see http://www.thephotoforum.com/forum/photography-beginners-forum/368940-what-real-definition-dslr.html

edit: just realized something: This technique can be used to characterize the (luminance) uniformity of the display also!
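A rough sketch of how the uniformity measurement might work (assuming the green photosite values have already been extracted as a 2-D array, and ignoring lens vignetting for the moment; the names are placeholders):

```python
import numpy as np

def uniformity_map(green, rows=3, cols=3):
    """Mean green value per region of the frame, relative to the brightest region."""
    h, w = green.shape
    cells = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cells[r, c] = green[r * h // rows:(r + 1) * h // rows,
                                c * w // cols:(c + 1) * w // cols].mean()
    return cells / cells.max()
```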


 
wow debayering...
and I thought removing the antiglare off my lcd was scary
A CCD sensor may be better than a CMOS sensor:

Starizona's Guide to CCD Imaging

CCD cameras are up to 50 times more sensitive than standard digital SLR
CCD cameras have a greater dynamic range than digital SLRs, meaning they can more easily capture both faint and bright detail in a single exposure
Most (but not all) CCDs have a linear response, which means they can be used for photometry--studying the brightness of objects such as variable stars or asteroids

something like this may be a good option (tiny sensor, but we're not interested in number of pixels so much).

Also see http://www.thephotoforum.com/forum/photography-beginners-forum/368940-what-real-definition-dslr.html

This technique can be used to characterize the (luminance) uniformity of the display also!



i wouldn't read too much into "50 times more sensitive". that's very vague, and cmos technology has also improved significantly over the years.
the most important things are a large sensor size and a low amount of dark noise.

for uniformity, it's not as simple as taking a single picture, since lenses have vignetting, and there is also natural vignetting. it would be possible to just move the camera around the surface of the monitor though.

my plan is to get a cheap-ish casio ex-zr700, since it can record high speed videos (and that's something i've been wanting to do for a while) as well as record raw files. if it isn't good enough for the purposes mentioned in this discussion, then i'll think about getting a used dslr body or some scientific/non-general-purpose camera.
 
Why not look into astrophoto cameras? They are almost always grey-scale by nature, have a quantified response to light under long exposures, and many have electronic temperature reduction features, which become important when exposure times range into hours.
 
Why not look into astrophoto cameras? They are almost always grey-scale by nature, have a quantified response to light under long exposures, and many have electronic temperature reduction features, which become important when exposure times range into hours.

Yea, I was thinking maybe this would be a good option (although I'm not sure whether the tiny image sensor, relative to larger CMOS sensors, is an acceptable tradeoff). I haven't looked too deeply, but it seems that astrophoto cameras are quite expensive, due to the CCD sensors.
 
With astrophoto cameras you get access to much lower level data compared to what DSLR's give you, and without all the proprietary processing done by the firmware. If the issue is that of measuring luminance levels, then the sensor size is pretty irrelevant, at least if I understand your original set of requirements. Plus, given the very low signal to noise ratios present in most deep-sky photography, there are many tools available for the cameras designed to pull out a meaningful signal.
 
Wouldn't a larger sensor be better for measuring very low luminance levels? All else being equal, a larger sensor will have larger photosites, which means more photons can be captured per pixel per unit time. I think larger photosites would also have a higher dynamic range, which means more exposure range before saturation (which is useful when working out the scaling factor with higher luminance targets). But I do like the idea of avoiding proprietary processing.
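As a rough illustration with made-up but plausible pixel pitches:

```python
# Photons collected per photosite scale with photosite area (pitch squared).
small_pitch = 1.4   # microns, typical of a small phone/compact sensor (assumed)
large_pitch = 6.0   # microns, typical of a larger DSLR sensor (assumed)
print((large_pitch / small_pitch) ** 2)   # ~18x more photons per pixel per unit time
```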
 
What you're looking for is quantum efficiency, the percentage of photons collected that result in a signal. 100% efficiency is a signal for every photon captured. Many astronomical sensors get quantum efficiencies in the 75-90% range. See the following link for a manufacturer who is showing the quantum efficiencies of various popular sensor chips: CCD cameras for astronomy. Most DSLR's have quantum efficiencies of 30-40%. There's a reason why those applications that need the signal delivered by every photon use the sensors designed for astrophoto gear.
 
