A Sensor that needs no Lens

"A planar Fourier capture array (PFCA) is a tiny camera that requires no mirror, lens, focal length, or moving parts.[1][2] It is composed of angle-sensitive pixels, which can be manufactured in unmodified CMOS processes."

Planar Fourier capture array - Wikipedia, the free encyclopedia

"ASPs can be used in miniature imaging devices. They do not require any focusing elements to achieve sinusoidal incident angle sensitivity, meaning that they can be deployed without a lens to image the near field, or the far field using a Fourier-complete planar Fourier capture array. They can also be used in conjunction with a lens, in which case they perform a depth-sensitive, physics-based wavelet transform of the far-away scene, allowing single-lens 3D photography[2] similar to that of the Lytro camera."

Angle sensitive pixel - Wikipedia, the free encyclopedia

Could tomorrow's cameras do away with lenses entirely?
 
The UI sounds like it would be pretty complicated.
 
Yes. And they might well. Maybe. We assume technology will continue to march on.
 
What's the FOV on it? What is the quality of the bokeh?

I have seriously no idea about any of that. I'd imagine that FOV would be limited by the response of the sensor, so if it can generate a signal for 180° then it will be 180°; it's then a matter of processing to get any particular FOV under that, with the minimum FOV determined by resolution?

But there are a lot of things I am not getting. The Wikipedia article illustrates the sensor operating in only one dimension. How do these sensors measure the angle of incidence in the other direction? The sensor detects angle and outputs the value onto a sinusoid, but it's not 1:1 - so how does it know whether it's looking at two angles with the same corresponding phase?
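To put a number on the "not 1:1" part: if the response is something like output ∝ 1 + m·cos(bθ + α) (a toy version of the sinusoidal angle sensitivity the article describes, with made-up m, b and α), then two angles a full period apart give exactly the same reading:

```python
import numpy as np

# Toy angle-sensitive-pixel response: reading ∝ 1 + m * cos(b * theta + alpha)
# (m, b, alpha are made-up illustrative values, not figures from the papers)
m, b, alpha = 0.8, 10.0, 0.0

def asp_reading(theta):
    """Relative output of one pixel for light arriving at angle theta (radians)."""
    return 1 + m * np.cos(b * theta + alpha)

theta1 = 0.05
theta2 = theta1 + 2 * np.pi / b                  # a full period away on the sinusoid
print(asp_reading(theta1), asp_reading(theta2))  # identical readings, different angles
```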

As for bokeh, I'm thinking that it must pull everything into focus all at once, since it's gathering useful information by measuring the angle of incidence of incoming rays? Could OOF be emulated in software?

Like most things in the wibbly wobbly world of computational imaging, it's all very, very strange.
 
It is 2d.

Imagine a one-pixel sensor that just measures the average tone of the scene. This produces a one-pixel image. Not very interesting.

Now imagine that it's a weighted average instead. The middle of the scene is important, but moving outwards it becomes less important, then more important, and back again, in concentric rings.

The importance varies as a sine as you move outwards.

This is roughly what a one pixel angle sensitive sensor does. It's an average but weighted like an Airy disc. Roughly.
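Something like this toy sketch (Python/NumPy), where the scene, the ring spacing and the phase are all made up; it only illustrates the "sinusoidally weighted average" picture, not the actual device:

```python
import numpy as np

# Toy "scene": any 2D intensity map will do.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))

# Concentric-ring weighting: importance rises and falls sinusoidally with
# distance from the centre (ring spacing and phase are made up).
y, x = np.mgrid[0:64, 0:64]
r = np.hypot(x - 31.5, y - 31.5)
weight = 0.5 * (1 + np.cos(2 * np.pi * r / 8.0))   # a ring roughly every 8 pixels

# One pixel of this kind reports a single number: the weighted average.
reading = np.sum(weight * scene) / np.sum(weight)
print(reading)
```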

If you vary the frequency and... stuff... of those concentric rings (I understand 1d signal processing, but not 2d) and take a bunch of these weighted average measurements with a bunch of pixels, then you can recover the image with math.

So that diffraction grating business varies from pixel to pixel across the array.

This gives you something like the 2d Fourier transform of your picture. Which you can turn into the picture.
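Continuing the toy sketch: if each pixel's weighting is a different 2D sinusoidal pattern (one spatial frequency and phase per pixel), the full set of readings amounts to the scene's 2D Fourier transform, and inverting it gives the picture back. The FFT below is just a compact stand-in for computing all of those weighted averages at once, not the real PFCA reconstruction pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))

# Pretend each "pixel" measures one 2D Fourier coefficient of the scene,
# i.e. a weighted average against one sinusoidal pattern. The FFT computes
# that whole set of weighted averages in one call.
measurements = np.fft.fft2(scene)            # one complex number per "pixel"

# Reconstruction: invert the transform to get the picture back.
recovered = np.fft.ifft2(measurements).real

print(np.allclose(recovered, scene))         # True: 64x64 readings -> 64x64 image
```

The `allclose` check is the toy version of the resolution point below: 64×64 readings unwind into roughly a 64×64 image.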

FOV will depend on the specific properties of the grating business.

Effective resolution of the final image should be pretty close in theory to the underlying sensor array. That is, a 1k by 1k angle sensitive sensor should be able to be mathematically unwound into about a 1k by 1k image file.

Not sure how color would work here. Also things are likely to get sketchy at the edges.
 
Nope, definitely not.

A lens collects light from a range of angles and focuses it to a single point. This limits sharpness to a focal plane: everything not on that plane is, mathematically, not in focus. In practice, the result is a more or less limited depth of field.

But it also means that you can collect a LOT of light for every image point! Wide open, the light from the subject point, hitting the front lens at any point, will be focused on the image point.

However, an insect eye - and that's basically what the technology described here is - only has a straight line to the subject, much like a camera obscura. Thus, just like a camera obscura, you cannot collect much light and you cannot get much sharpness. On the plus side, you will also have unlimited depth of field.

This technology will be helpful for making very small cameras, like for a nanodevice... but any higher image quality is completely out of the question.

Okay, maybe you can make a whole wall of these devices and then shrink the resulting image down brutally until it appears sharp. Then you'd achieve with a lot of sensor area what a much smaller sensor with a lens can do just as well, because the lens will collect a lot more light.
 
What? I don't think you have the faintest idea how this thing works. Or a camera obscura, for that matter.
 
Oh how nice, I got called an idiot by a person unable to explain why.
 
This thing is nothing like a pinhole camera, but that is an interesting observation. One could easily imagine that it is and extrapolate a number of incorrect things.

Not sure where Solarflare is seeing 'idiot'.
 
So at each pixel they record an intensity and the incident direction of the light. This is what a Lytro camera does, just by a different method. The Lytro camera gets the info by splitting the light into multiple images on a conventional sensor. This is more promising: since it doesn't rely on conventional sensor technology, it can be considerably smaller.

I don't think this is quite as useful with conventional lenses though, since many lenses change the angle of the light hitting the sensor.
 
Actually it is using a conventional sensor.
 
No, it does not record direction and intensity.
 
Hmm, I clearly missed the point there. So the sensitivity is sinusoidal with respect to the angle of incidence?

Again, this will limit the use of lenses which change the angle of incidence of the light.
 
