The ideal sensor

Solarflare

The Ideal Sensor

IMHO:

The ideal sensor size for you is the largest size at which your desired camera and lens are neither too expensive nor too heavy.

Other than that, I hope one day we'll get "3CCD on the sensor", i.e. every pixel has some optical color-splitter device and records the three RGB values separately. This would triple the data but also give a much better efficiency factor. Right now we throw away at least 50% of the light that falls on the sensor just so we can get color information.
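Just to put numbers on that "throw away at least 50%" point, here is a back-of-envelope sketch; the filter transmission and splitter efficiency figures are assumptions for illustration, not measured values:

```python
# Back-of-envelope comparison of photon utilization: a Bayer mosaic
# versus a hypothetical per-pixel color splitter ("3CCD on the sensor").
# The transmission numbers below are illustrative assumptions, not measured data.

BAYER_FILTER_TRANSMISSION = 1 / 3   # assume each color filter passes roughly 1/3 of the light
SPLITTER_EFFICIENCY = 0.95          # assume a near-lossless prism/splitter per pixel

def photons_used(photons_in: float, scheme: str) -> float:
    """Return the number of incoming photons that actually contribute to the signal."""
    if scheme == "bayer":
        return photons_in * BAYER_FILTER_TRANSMISSION
    if scheme == "splitter":
        return photons_in * SPLITTER_EFFICIENCY
    raise ValueError(f"unknown scheme: {scheme}")

incoming = 90_000  # photons hitting one pixel site during the exposure (made-up number)
print(f"Bayer:    {photons_used(incoming, 'bayer'):,.0f} photons recorded")
print(f"Splitter: {photons_used(incoming, 'splitter'):,.0f} photons recorded")
# With these assumptions the splitter records roughly 3x the photons,
# i.e. about 1.5 stops more light per pixel site.
```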

Another biggie is a global shutter. It allows "perfect" video and reduces the amount of mechanics in the camera; on a mirrorless body, the mechanics could even be reduced to zero.

Finally, there's phase-detection AF on the sensor, at least for mirrorless cameras. Ideally I would want quad-pixel technology for that.

The last is the usual (rough SNR sketch below the list):
- Have as low a base ISO as possible so signal-to-noise can be optimized further.
- Have as high an efficiency factor as possible. Silicon seems to be stuck at around 40%? So maybe other materials would be better?
- Reduce any other noise, too.
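As a rough illustration of where the efficiency factor and read noise enter the signal-to-noise ratio, here is a minimal shot-noise model; the photon counts, QE values and read-noise figures are made up for the example:

```python
# A minimal shot-noise model, just to show where quantum efficiency and
# read noise enter the signal-to-noise ratio. All numbers are illustrative.
import math

def snr(photons: float, quantum_efficiency: float, read_noise_e: float) -> float:
    """SNR of one pixel: detected electrons over shot noise plus read noise."""
    electrons = photons * quantum_efficiency          # signal actually detected
    shot_noise = math.sqrt(electrons)                 # Poisson statistics of light
    return electrons / math.sqrt(shot_noise**2 + read_noise_e**2)

photons = 10_000
print(f"QE 40%, 3 e- read noise: SNR = {snr(photons, 0.40, 3.0):.1f}")
print(f"QE 80%, 3 e- read noise: SNR = {snr(photons, 0.80, 3.0):.1f}")
print(f"QE 80%, 1 e- read noise: SNR = {snr(photons, 0.80, 1.0):.1f}")
# Doubling the efficiency factor gains roughly sqrt(2) in SNR in the
# shot-noise-limited regime; read noise mostly matters in the shadows.
```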
 
one that captures how your eye renders a scene.

everything else is just legacy.
 
one that captures how your eye renders a scene.

everything else is just legacy.
I hope not, otherwise everything would be fuzzy.
THOUGH one that captures a scene as your eyes SHOULD render it would be great.
 
one that captures how your eye renders a scene.

everything else is just legacy.

Hope you don't wear glasses

What you objectively see and what you think you see (how you feel about what you see) don't really line up without that extra work, though. I hate that there are some scenes you just can't make a great photograph from, because reproducing the experience simply doesn't work in a single photograph.
 
I agree, and there's something about the way film/sensors capture light that makes photography more interesting.

I'd just like to see something new is all.
 
one that captures how your eye renders a scene.
What a VERY limited view!

Nearly all my cameras allow the photographer to go well beyond that, even if just by varying shutter speed.
Most allow varying FOV, DOF and even dynamic range well beyond what the eye renders at any instant. Your brain's interpretation of the scene, built up from multiple instantaneous views, will generally surpass these options from current cameras, but not in all cases.
The camera I use most frequently can, I suspect, capture more detail in a scene than anyone with unaided 20/20 vision will see.
A small number of the cameras I use can also capture wavelengths the eye can't see.
In the right place, each of these parameters can be worth exploring in rendering a scene!
 
Other than that, I hope one day we'll get "3CCD on the sensor", i.e. every pixel has some optical color splitter device and records the three RGB values separately.
Isn't that a description of the Foveon sensor, where the depth of penetration into the silicon indicates a photon's energy and hence its color?
Potentially the technology could provide extra channels above and below RGB for UV and NIR, which would be great for me, but I doubt the market for it would ever make it practical to develop (or cheap enough for me to try!).
 
Your brain's interpretation of the scene, built up from multiple instantaneous views, will generally surpass these options from current cameras, but not in all cases.

Yeah, so I'd like to try one that can.

OK here: I want a large sensor, with lots of megapixels, and lots of DR, and lots of ISO handling.

Basically nothing new. Better?
 
I'd be happy with a full-frame MF (6x6 or 6x7) sensor that I could afford(ish).
No argument there.


one that captures how your eye renders a scene.
Aside from the arguments already mentioned, I would add that this is more a request about post-processing than about the sensor itself, which I would want to simply record the light that's there at maximum efficiency. Besides, I think current cameras are already pretty good at rendering a natural-looking image anyway.


2tp 4x5 for me, thank you very much.
Um ... 2 terapixels on an area of ca. 10x12.5 cm works out to about 16 gigapixels per cm², i.e. a pixel pitch of roughly 0.08 µm, smaller than a wavelength of visible light, so pixel quality and overall image quality wouldn't be high. Even 2 gigapixels on that area already means 2.5 µm pixels (16 megapixels on every cm²); I'd rather vote for 8x10 sensors with 5 µm pixels if that kind of resolution is really wanted.
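For anyone who wants to double-check the pixel-pitch arithmetic, here is a small sketch; the sensor dimensions are the approximate ones discussed above:

```python
# Quick check of the pixel-pitch arithmetic for hypothetical large-format sensors.
import math

def pixel_pitch_um(total_pixels: float, width_cm: float, height_cm: float) -> float:
    """Pixel pitch in micrometres, assuming square pixels tiling the whole area."""
    area_um2 = (width_cm * 1e4) * (height_cm * 1e4)   # 1 cm = 10,000 um
    return math.sqrt(area_um2 / total_pixels)

# 2 terapixels on a ~10 x 12.5 cm (4x5") sensor
print(f"{pixel_pitch_um(2e12, 10, 12.5):.3f} um")   # ~0.08 um: below a wavelength of visible light
# 2 gigapixels on the same area
print(f"{pixel_pitch_um(2e9, 10, 12.5):.2f} um")    # ~2.5 um
# 2 gigapixels on an 8x10" (~20 x 25 cm) sensor
print(f"{pixel_pitch_um(2e9, 20, 25):.2f} um")      # ~5 um
```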

The highest resolution I ever saw reason to demand for anything is about 200 megapixels. For 98% of applications, 12 megapixels is enough; anything more is just extra room for cropping.


Isn't that a description of the Foveon sensor, where the depth of penetration into the silicon indicates a photon's energy and hence its color?
No. The Foveon sensor doesn't actually record RGB separately. That's the simplified explanation usually given, but it's not actually what is happening.

Instead, it records the combined signal of all three colors in the upper layer, an already attenuated red+green signal in the middle layer, and only a weak residual red signal in the lowest layer, and then computes the color information from these three signals. That computation is quite noisy, since two of the channels have been attenuated.
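To illustrate why that unmixing step is noisy, here is a toy numerical sketch; the layer-response matrix and photon counts are invented for illustration and are not real Foveon characteristics:

```python
# Toy illustration of why unmixing stacked-layer signals amplifies noise.
# The layer response matrix below is purely illustrative, not real Foveon data:
# top layer sees everything, middle a dimmed red+green, bottom a weak red rest.
import numpy as np

# rows: top / middle / bottom layer; columns: R, G, B contribution to that layer
layer_response = np.array([
    [1.0, 1.0, 1.0],   # top layer: combined signal of all three colors
    [0.5, 0.5, 0.0],   # middle layer: attenuated red + green
    [0.2, 0.0, 0.0],   # bottom layer: weak residual red
])

true_rgb = np.array([800.0, 1200.0, 400.0])          # electrons per channel (made up)
rng = np.random.default_rng(0)

signals = layer_response @ true_rgb                   # what the three layers actually record
noisy = signals + rng.normal(0, np.sqrt(signals))     # shot noise on each layer

recovered = np.linalg.solve(layer_response, noisy)    # unmix back to RGB
print("true RGB:     ", true_rgb)
print("recovered RGB:", recovered.round(1))
# The inverse has large coefficients (e.g. red comes from 1/0.2 = 5x the bottom
# layer signal), so the per-layer noise is amplified in the reconstructed colors.
```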

What I want instead is a sensor that splits the colors of every pixel BEFORE recording and records them cleanly at maximum efficiency.

The efficiency of Foveon X3 sensors is actually below that of traditional Bayer color-filter sensors, even though Foveon X3 evaluates all photons falling on the sensor while traditional Bayer sensors evaluate at best 50%, the rest being stopped by the color filter.

My solution, splitting the light with a dedicated prism, would improve on traditional Bayer color-filter sensors by at least a stop and could also improve overall color detection quality, which is what 3CCD / 3CMOS cameras are known for.
 
I hope one day we'll get "3CCD on the sensor", i.e. every pixel has some optical color-splitter device and records the three RGB values separately

I've got one even better: why not have a spectral camera? Each site could pick up a range of colors, including intermediates. This could be accomplished with traditional imaging by splitting the incoming light and recording it onto multiple pixels, or it could be done with some sort of technology that records color directly.

Once captured, it could then be processed into RGB using a spectral approach rather than color mixing.
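As a rough sketch of what that spectral-to-RGB step could look like: integrate the sampled spectrum against color matching functions to get XYZ, then convert to RGB. The Gaussian curves and the sample spectrum below are crude stand-ins, not the real CIE data; only the pipeline shape is the point.

```python
# Rough sketch of a spectral pipeline: sampled spectrum -> XYZ -> linear sRGB.
# The Gaussian curves are crude stand-ins for the CIE 1931 color matching
# functions, and the captured spectrum is made up.
import numpy as np

wavelengths = np.arange(400, 701, 10)                  # 31 spectral bands, one per 10 nm

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Illustrative color matching functions (NOT the real CIE tables)
x_bar = gaussian(600, 40) + 0.35 * gaussian(445, 20)
y_bar = gaussian(555, 45)
z_bar = 1.7 * gaussian(450, 25)

# A made-up captured spectrum for one pixel (something greenish)
spectrum = 0.2 + 0.8 * gaussian(530, 35)

# Integrate the spectrum against the CMFs to get XYZ
xyz = np.array([np.trapz(spectrum * cmf, wavelengths) for cmf in (x_bar, y_bar, z_bar)])
xyz /= np.trapz(y_bar, wavelengths)                    # normalize so a flat spectrum gives Y ~ 1

# Standard XYZ -> linear sRGB matrix
xyz_to_rgb = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])
rgb = np.clip(xyz_to_rgb @ xyz, 0, None)
print("XYZ:", xyz.round(3), " linear RGB:", rgb.round(3))
```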

I think there may be a day when we see this. I'd imagine there must certainly be demand for spectral imaging in materials science and security.

Imagine having 16 primaries ... or heck, 256 primaries. Beyond that you could probably start figuring out what the stuff in your photograph is made of.

The last is the usual:
- Have as low a base ISO as possible so signal-to-noise can be optimized further.
- Have as high an efficiency factor as possible. Silicon seems to be stuck at around 40%? So maybe other materials would be better?
- Reduce any other noise, too.

There will be avalanche photodiode arrays in our cameras someday. I'm very sure of that. There are already low-resolution (like 64-pixel) and linear arrays available, though they're more than just a little bit expensive. This will provide PMT-like performance in a solid-state device.
 
one that captures how your eye renders a scene.
Can't happen because photos are 2D and we see in 3D.

BTW - Looks like your shift key is broken.

Not to mention all the processing our brains do that is entirely dependent on circumstance and past experience. 3D is one thing, consciousness is another.
 
I want my sensor/processor to be self-aware.
 
