Old Lenses, Chromatic Aberration, Monochrome, Filtering

VidThreeNorth

Since last September I have used two high-quality old lenses. One is my Nikkor 28mm F3.5 AI and the other is my Asahi SMC Takumar 300mm F4.0. The 300mm is the older design; according to "PentaxForums.com", the optical design goes back to 1965, but this "SMC" version came out in 1971, incorporating full-aperture metering and Asahi's "Super Multi-Coating".

According to Ken Rockwell's website, my 28mm F3.5 Nikkor AI was produced from 1977 to 1981, which implies that it was probably being designed around 1976. Its 6-elements-in-6-groups optical design and multicoating were contemporary for its day, but not really class leading. Nikon had brought out a 28mm F2.8 around 1974 with 7 elements in 7 groups, which was overall larger, better and higher priced. Moreover, Nikon had 24mm F2.8 lenses as far back as its earliest 28mm F3.5 lenses. So all 28mm F3.5's could be looked on as targeting a more "popular priced" market. But this is a "relative" idea, and no doubt, if you checked the bags of a lot of Pro photographers, some would have carried a 28mm F3.5 in their kit.

What the two lenses have in common is fairly high chromatic aberration. Later versions of the 300mm reduced this problem. Nikon offered higher-quality models which might have had less chromatic aberration even when mine was introduced, but I don't know; I haven't checked around for any recorded tests.

But it is fair to say that chromatic aberration was likely to be less of a priority in "pre-1970's" lenses. The reason is that colour photography, though available, was not as popular as "black and white" until around the mid-1960's. The cost of colour prints, and their limited availability, made them prohibitive for casual photography until around this 1960's-to-1970's period. Even for Pros, black and white was the staple for news (including sports), corporate photography (including portraits), and architecture.

So, chromatic aberration was not going to be as serious an issue even until "later", and "later" varied from company to company. Again, I will point out that the Nikkor 28mm F3.5 AI was not Nikon's top lens in its focal length, so the more expensive lenses might already have been better.


Optimizing Monochrome With Filters

Since monochrome was widely used by Pros, there were well-known methods of getting the most out of it. These were no secrets. If you asked any camera store salesman, you'd be taught this stuff in about an hour. A good salesman could give you a good education, and had a good motivation, because s/he could legitimately advise you to spend a little money and buy some filters.

A new professional or hobbyist could be shown examples of using Red, Green, Blue and Yellow filters. Most stores had example pictures. You would be taught that for sharper pictures, you would do well to remove the blue spectrum which scatters more from haze and fog. "Anything but blue" was good for this. You would also see that each of the RGB filters resulted in distorted representations of the scene. Red was generally the most dramatic, darkening the sky and plants. Green was often called the most "natural", though green leaves tended to be brightened. Yellow was called the "fog or mist" filter. Blue was, well, actually "kind of useless", but if you bought one, the salesman would take the money. . . .

But it might also be mentioned that this was also a good way to sharpen a picture, because of chromatic aberration. The salesman might not mention this, because you'd already expect the picture to be sharper, so it was to an extent "redundant information". But a more experienced expert or Pro would probably carry red, green and yellow. The red was "sharpest"; the green was "almost as sharp".

If there was a "Pro trick", then it was the yellow filter. Yellow only got rid of the blue content, so it did not sharpen quite as much as red or green, but it also did not affect the exposure as much, so shutter speeds could stay higher or apertures smaller, either of which can also contribute to sharper pictures. And it also resulted in pictures that looked comparatively natural.
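As a worked example of that exposure trade-off, here is a small Python sketch using typical published filter factors; real filters vary, so treat the numbers as illustrative:

```python
# Typical filter factors (illustrative): the factor is the multiple
# of exposure the filter costs, i.e. factor = 2 ** stops.
filters = {"yellow": 2, "green": 4, "red": 8}

base = 500  # unfiltered shutter speed denominator: 1/500 sec

for name, factor in filters.items():
    print(f"{name}: 1/{base // factor} sec at the same aperture")
# yellow: 1/250 sec -- only one stop slower
# green:  1/125 sec
# red:    1/62 sec  -- three stops slower than unfiltered
```

On those numbers the yellow filter keeps you two stops ahead of the red, which is the point of the "Pro trick" above.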


Translated to Digital

If I take a picture with normal settings, I can convert it to monochrome. That much will probably not affect the sharpness of the picture, but some people might prefer it simply because it gets rid of the chromatic aberration. However, some (probably most) processing programs can make colour separation files. They might support RGB, or CMYK, or both. So it is possible to simulate using these black and white filters. The red and green files will be fairly straightforward. The yellow is not so clear -- at least to me. The cyan, magenta and yellow files are meant to be used for colour printing, but they are meant to be used along with the others, including the black ('K') file. There might be a useful way to combine some of these to create a good monochrome image, but I have never tried to do so. Not yet anyway. . . .
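For anyone who wants to reproduce the separation step in software, here is a minimal Python sketch using Pillow. The filename is a placeholder, and Corel's own grey conversion and separations may use different weights:

```python
from PIL import Image

im = Image.open("P9110009.jpg").convert("RGB")  # placeholder filename

# Default grey conversion. Pillow uses the ITU-R 601-2 luma transform:
# L = 0.299 R + 0.587 G + 0.114 B
im.convert("L").save("grey.jpg")

# Colour separation: each channel saved as its own greyscale file,
# roughly what a "split to RGB" function does in most editors.
for name, channel in zip(("red", "green", "blue"), im.split()):
    channel.save(f"{name}.jpg")
```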


Simulated v Real Filters

One last issue is whether using the colour separation files is really the same as using real filters. At this point, all I can say is "maybe". I can go further and say that if there is a difference, then I do not expect it to be very big, and it probably differs from body to body.

Most colour sensors have small filters which define their sensitivity as red, green or blue. But the purity of the filter is not absolute, and in fact, the degree of overlap might differ from one model to the next within a camera line.

If this is true, then the colour separation files might also not be "pure". So yes, there might be a chance that a "better" file could be made if you start by adding a purer filter on the lens. The only way to know for sure is to do the tests, and for now, I am not interested in doing so. If you are, then go ahead.
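To make the "purity" point concrete, here is a toy calculation with invented response numbers; they are not measurements from any real sensor:

```python
# Hypothetical Bayer-channel responses to narrow-band test lights.
# Rows are sensor channels; columns are the test lights.
response = {
    "red":   {"red": 0.90, "green": 0.15, "blue": 0.03},
    "green": {"red": 0.10, "green": 0.85, "blue": 0.12},
    "blue":  {"red": 0.02, "green": 0.10, "blue": 0.88},
}

# With this much overlap, pure green light still contributes to the
# "red" separation file, so that file is not a pure red record.
leak = response["red"]["green"] / response["red"]["red"]
print(f"green leakage into the red file: {leak:.0%}")  # ~17%
```

A real filter on the lens removes the unwanted wavelengths before they reach the sensor, which is why it could, in principle, give a "purer" file.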


Uploaded Files:

This is a file created on the Yi M1 with my Nikkor 28mm F3.5 AI on my Shift Adapter, so the chromatic aberration is representative of the lens as far off-axis as would be expected when it is used on a 35mm film camera. It isn't a "horrible" amount of chromatic aberration, and it could be handled by most chromatic aberration correction functions, but the alternative of converting to monochrome was also available, as was the creation of colour separation files.

P9110009.DNG
[A version of this file was previously uploaded.]

Partial EXIF:
Sept 11, 2018, 16:51:55
Image width 5200
Image height 3902
Components per pixel 1
Pixel height 3888
Pixel width 5184
Component configuration YCbCr
Color space Uncalibrated [AdobeRGB]
Exposure mode Manual exposure [actually P]
Exposure bias 1.00ev
Exposure time 1/80 sec.
F number f/0.0 [probably F11]
Max aperture f/1.0 [actually f/3.5]
Focal length 0.0mm [actually 28mm]
ISO speed 200
Metering mode Center weighted average
Gain control Low gain up

The uploaded files were made from the DNG (raw) file with only a white balance adjustment. No other changes (sharpening, noise reduction, etc.) were used. The first is the normal colour render. The "Grey" is the default grey conversion for Corel PaintShop Pro X9. The Red, Green and Blue files are the separation files that Corel makes normally.

Conversion White Balance:

Temperature 5830
Tint 27

"P9110009 -1b-Colour-C1.jpg"

"P9110009 -1c-Grey-C1.jpg"

"P9110009 -1d-Red-C1.jpg"

"P9110009 -1e-Green-C1.jpg"

"P9110009 -1f-Blue-C1.jpg"

Both the Red and Green files are visibly sharper than the straight grey conversion, but as expected, the Green file looks more "natural".
 

Attachments

  • P9110009 -1b-Colour-C1.jpg (178.9 KB)
  • P9110009 -1c-Grey-C1.jpg (155.5 KB)
  • P9110009 -1d-Red-C1.jpg (208.2 KB)
  • P9110009 -1e-Green-C1.jpg (189.1 KB)
  • P9110009 -1f-Blue-C1.jpg (170.1 KB)
Your 28mm f/3.5 has what, today, many people would call "very bad" CA issues.

The Green Filter effect looks good to me.

I saw your tests of the 300/4 Takumar in the snow last week. A lens design from 50 or so years ago, yes, and well-built no doubt, but not good compared to newer, better-corrected lenses.

The performance of MANY older, film-era Nikkor wide-angle lenses is not that great when paired with modern, high-resolution digital sensors. I applaud you for doing your own, ACTUAL tests.
 
Thanks Derrel. As I've said before, mainly I do tests because I need to know something. I keep notes as I go along, and then if I think they might be useful for others, I write them up and post them somewhere. This was different. I knew the theory, and, yes, I actually did take specific pictures for this post using the 300mm. But then I thought it would be best to avoid having to explain the "focus issues" on that lens, so I just looked at what I had from the 28mm. Aside from the focus on the 300mm, neither lens performed outside my expectations, and the 300mm can be fixed.

I agree that generally, the older lenses really are not necessarily that wonderful. We have better glasses today (Extra Low Dispersion), aspheric elements, probably better grinding and polishing, and computers to help with designing. On top of all that, many of the companies have simply been at this a long time and have decades of experience. But occasionally, you can find something "competitive" with more modern products.

Mainly, this project was just curiosity and fun. I still do not have a great need for the 300mm. But in retrospect I'm glad I got it.

One thing I didn't mention above was that I was surprised that the separation files were noisier than the straight conversion file. I had not expected that. The Red file is pretty bad. There's no point in that one. If I "noise reduce" it, all the sharpness and detail will be gone anyway. I guess that was a lesson. Maybe there is a good reason to use real filters on the lenses?
 



Excuse the longish post; it's required to explain a basic and common misconception about colour, one which actually renders the whole test quite *inconclusive*.

The nature of light is defined by *wavelength*; the visible spectrum is a linear scale from the longest wavelengths of red through to the shortest of indigo.

When Newton split light into its separate wavelengths and revealed the true nature of light, he also revealed a problem or two. Not least of these was the problem of the three primary colours, because he had just proved that they didn't really exist. So why are there three primaries of Red, Green and Blue?

It was Thomas Young who made the breakthrough realisation and connection, by understanding that the three primaries describe the way the human eye works and have nothing to do with the nature of light. The human eye doesn't have numerous types of receptor capable of detecting slight nuances in colour; it only has three. They are named after the primary colours and *not* after their spectral sensitivity, so be very careful about jumping to a conclusion simply by reading the label *red*.

In the eye the three receptors are heavily overlapped, a result of evolution and our need to gain an understanding of the space we occupy in order to survive. As an example, when life first started to evolve organic visual systems, the most useful ability was to differentiate the harsh UV-laden light of midday from the softer warm yellow/orange of the mornings and evenings. This is still hard-wired into the human eye, and yellow/blue remains the main contrast around which we understand colour. Also, if you look at the spectral sensitivity of the eye you will notice that the *red* receptors actually react a little to blue, and so it is that we understand colour as a wheel, where indigo fades through magenta (a pure hue that doesn't exist as a single wavelength) to red.

Basically, colour is not a property of light but the way our visual system differentiates wavelength. The way it has evolved to do this is not the way a calibrated scientific instrument would; it skews colour and contrast to exaggerate the differences that are most useful to us.

Now, your RGB colour system based on the three primaries uses the fact that you only have three types of receptor to produce a remarkable illusion. If you see light of the *yellow* wavelength, it triggers a pattern of response in the three receptors that your brain interprets as yellow. What a screen does is use narrow spectrums of red, green and blue that are specifically tuned to the receptors in your eye, so that when a certain combination of RGB is shone into your eye it triggers exactly the same pattern of response, which your brain interprets as *yellow*. Yellow is not a combination of red and green; rather, because of the way the receptors in your eye work, your brain can be tricked into seeing yellow by a combination of red and green.

There is not a colour reproduction system designed by man that is not purely designed around the way the human eye works; they are all *perceptual*. None of them are based on the physics of light and reproducing *absolute* wavelength. Even the spectral sensitivity of the RGB array on your camera sensor is specifically balanced as closely to the spectral sensitivity of the eye as possible. Digital cameras vary only in their solution to this problem, not their intent.
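A toy version of the yellow illusion described above, with cone responses invented for arithmetic convenience (not real physiological data):

```python
cone_response = {            # hypothetical (L, M, S) response per unit of light
    "yellow_580nm": (0.7, 0.6, 0.0),
    "red_620nm":    (1.0, 0.2, 0.0),
    "green_545nm":  (0.4, 1.0, 0.0),
}

# A half-and-half mix of the red and green lights...
r, g = cone_response["red_620nm"], cone_response["green_545nm"]
mix = tuple(round(0.5 * a + 0.5 * b, 3) for a, b in zip(r, g))

print(mix)                            # (0.7, 0.6, 0.0)
print(cone_response["yellow_580nm"])  # identical -> the brain reads "yellow"
```

The eye only ever sees the three response numbers, so two physically different spectra that produce the same triple are indistinguishable.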

Colour filters in B&W work on the absorption of *wavelength*, not the colour wheel, which is perceptual only. The *common* colour filters are:

Yellow: Because we still retain the yellow/blue perceptual contrast as a result of the way the eye evolved, we still see yellow as being brighter than blue. In fact, if you look at the spectral sensitivity of the eye, yellow gives near maximum response on both the red and green receptors and near minimum on the blue. When you reduce the image to a monochromatic one you remove that perceptual skew from the cones, and so your perception of luminosity is also altered: you finally see blue and yellow as being closer to *absolute*, and so bright blue skies look closer to white in monochrome than the perceptual darkening of blue suggests when you look at the actual colour scene. So a yellow filter is used because it restores the luminance of colours closer to the perceptual one of the eye, by darkening the blue and lightening the yellow.

Orange: More pronounced than yellow.

Red: The exact opposite end of the *linear wavelength scale of light*; it is really cutting out blue and cyan. Remember that the red/green opposition is a *perceptual contrast that comes from the cones in your eye and does not describe how light works*; a red filter doesn't darken green to the same extent, and the darkening is more pronounced towards a viridian green than a yellow/green.

Green: Lightens foliage and darkens both blue and red to some extent; more pronounced and slightly more biased than the yellow, but similar.

Blue: Really not perceptually very accurate, as it reverses the eye's tendency to see blue as the darker colour.

EDIT, having re-read the process: If you want to simulate the effect of colour filters, you must use an adjustment of *luminosity* to simulate the filtering out of certain *wavelengths* as related to the linear scale of the visible spectrum. Do not use the RGB channels, which are simply the luminosity maps of the three primaries necessary to produce the *illusion* of colour in an additive display. They do not represent anything close to the effects of a filter on the front of a lens, and are based not on the linear scale of wavelengths that filters absorb but on the way the three kinds of receptor in the eye work.
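In software terms, that means a weighted luminosity mix (a "channel mixer") rather than a raw channel. A rough Python/NumPy sketch, with the filter weights as illustrative guesses rather than calibrated filter curves:

```python
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=float)

def filtered_mono(img, weights):
    """Weighted sum of R, G, B; weights normalised so exposure holds."""
    w = np.asarray(weights, dtype=float)
    mono = img @ (w / w.sum())           # (H, W, 3) @ (3,) -> (H, W)
    return Image.fromarray(np.clip(mono, 0, 255).astype(np.uint8))

# No filter: roughly the Rec. 601 luma mix.
filtered_mono(rgb, (0.30, 0.59, 0.11)).save("mono.jpg")
# "Yellow filter": suppress the blue contribution.
filtered_mono(rgb, (0.40, 0.55, 0.05)).save("yellow.jpg")
# "Red filter": red-heavy mix, blue nearly gone.
filtered_mono(rgb, (0.80, 0.18, 0.02)).save("red.jpg")
```

Note that this still starts from perceptual RGB data, so it approximates the look of a filter, not the physics of one in front of the lens, which is the point of the comparison below.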

In actual fact, and slightly above my understanding at present, the Green channel is closely related to our perception of luminosity. But it MUST be remembered that on a computer screen the channel mimics the way the eye works, and so it is the *perceptual* difference you are shown and not the effect of a green filter in front of the lens. Below is a comparison: on the left, the effect of a green filter; note that it darkens blue and red (some of the late autumn grass has a red/brown content). On the right is the green channel, which is more of a *perceptual luminosity map* based on how the eye separates and rebuilds an understanding of colour and luminosity.

ex-1.jpg


BTW, by using stronger filters in front of the lens you bias the composition of the light passing through the lens towards a single portion of the spectrum. This light is then far less prone to chromatic aberration, simply because the range of wavelengths passing through the lens is tighter and less subject to differences in refractive index.

When you simulate this on a computer screen with a colour digital file, the chromatic aberration is already there; you simply adjust its luminosity, and in many cases you will increase the contrast, making it more visible.
 
@Tim Tucker 2: Many thanks for your input! I think that your example shows that the filters built into the sensors of your camera allow much more significant "overlap" than I had thought, and I expect that your camera is about the same as most of ours. So it does seem that if I want to reduce chromatic aberration, putting "real" photographic filters on the lenses would help more than just using the colour separation function. That becomes a hard choice for me. Earlier in my life, I did a lot of black and white photography, but now that I have returned to photography, it is hard for me to get enthusiastic about black and white. If I put the filter on, then it is a commitment that the picture will be black and white. That is not as nice as simply choosing to make a black and white version later. Well, we'll see.
 
I NEVER use yellow, orange, red, or green filters in the field with digital... filter effects in LR are good, faster, and give me the greatest number of options.
 

A lot of people make the connection (because of the words or labels, and the belief that colour explains the nature of light) that a camera sensor splits the light into its RGB components, which are then shone back out of your screen in the correct proportions to produce the colour we see.

I can't stress just how wrong this is.

Light is not composed of red, green and blue but of the full spectrum of visible wavelengths.

The sensor in your camera is designed to respond to the full spectrum; when light of a certain colour hits it, a certain proportional pattern from the three sensor channels is recorded. This only works because it mimics the way your eyes work. If you changed that significantly, a camera would not be able to record colour in the way you see it, or *correctly*. Colour describes the way the eye works, not the nature of light, so accurate colour is what the eye sees and not the literal recording of wavelength.

A screen does not then reconstruct those components in the correct proportions. The combination of narrow bands of RGB light (and it is RGB this time) does not reproduce the original wavelengths. It triggers an illusion so tuned to the way the human eye works that it doesn't work for any other animal; the colour they see would be completely wrong. If you do something that never happens with subtractive colour and shine narrow bands of red and green light into your eyes, at wavelengths carefully chosen to attune to your receptors, you can trigger the same response as yellow, and the eye simply cannot tell the difference...

Visible Light and the Eye's Response

The RGB colour separation on your computer is not a reference to the *input* data but a specific reference to the *output* data necessary to display correct colour on an additive display, in exactly the same way that the CMYK colour separation is related to the *output* data necessary to reproduce colour through a subtractive CMYK press.

I wouldn't use colour filters for colour digital; B&W film only. Digital tries to separate and record colour in a similar way to the eye, so it can reproduce that in a variety of different colour processes. By using colour filters you are working against this and skewing it in a fairly non-linear way; for instance, a camera may use the *green* channel as the main reference to rebuild the luminosity. Once the emulsion is set, B&W film no longer distinguishes or separates colour; it is just a luminosity map defined by the emulsion sensitivity and the composition of the light falling on it.

Why not just use the lens correction in the RAW editor? Or do what I do, which is shoot B&W film with an LF camera? ;);););)
 
I think the older Nikkors (with glass lenses) are better than the plastic lenses of today.
 
It is true that human vision is much more sensitive in the green range. Having worked in a mini photofinishing lab for a while, it was always the magenta/green balance that caused the greatest number of redos, hehe. But I'm not so sure that a Nikkor 28mm f/3.5 has an excessive amount of chromatic aberration. It seemed fine to me, but that was a great while ago, with Kodachrome film. Granted, the CA could still have been there, just not so noticeable due to the more forgiving film medium. Could there be other optical shortcomings besides CA that cause such a big loss of sharpness with digital?
 
I posted a thread on this some time back.
The images I shot used both real and "fake" filters (in my case, Canon DPP).
But the effect is basically the same.

Old Soul: A B&W test. Red filter or not.

Here is what I discovered:
1: The camera can create the monochrome effect in-camera, and the end result is a B&W image that, once put through contrast, brightness and other post-processing, can "mimic" the effects of the various filters.
When I shot with a filter in front of the Canon, the camera was set up to "shoot" monochrome. That was when I discovered that, in reality (at least for the camera I was using), the red was actually still recorded, because DPP can "undo" the monochrome setting provided by the camera, and it was able to take a "monochrome" image shot in camera and restore the full colour.
I realized this when I saw in the DPP program that the data from the camera showed "Monochrome", and once I switched it to "Portrait", "Faithful" and other settings, voila... full colour, except the filtered image was all red!

2: What most people who do not study photography or light fail to realise is that CMYK and RGB were based on tables and scales produced well before the 20th century, when some folks discovered that colour could be altered with various coloured glass. E.g., red glass allowed some green through (as a different shade of red) and made blue essentially black (at least mostly).
It was also here that colour temperature (the Kelvin scale) was first applied and realized, ergo why red sits at the lower end of the scale and blue higher, and why White Balance adjustment has a greater effect on the final image than most really understand.

3: What I DID discover with my experiment was that the application of red, green, blue, yellow... blah blah blah filters on a digital camera set to monochrome has in fact a direct effect on the image that is totally different from the faux filters in various programs.
This is because when the filter is actually used, guess what is happening...
The same effect as with B&W film, except the camera records mostly the red colour. I discovered that some shades of green nearly burn out to a wholly yellow tone, but that's beside the point. The filter is only allowing red through, and thus brings the digital image much closer to a "real" B&W image than pushing the effect through in post-processing.

One book in my collection is "Successful Color Photography" by Andreas Feininger, which goes into quite a lot of detail about this very subject. It is definitely worth the read. (Not digital, but the effects of filters.)
https://www.amazon.com/Successful-Color-Photography-Andreas-Feininger/dp/B000O7VX8M/ref=sr_1_1?keywords=Successful+Color+Photography+Andreas+Feininger&qid=1563162356&s=gateway&sr=8-1.

But more relevant to the OP is this: CA was obviously not a big issue with older lenses, at least to the degree that professional photographers were concerned.
But also remember that early lens coating helped save many a Hollywood movie shot in Technicolor, and I would dare someone to find bad colour rendition in prints of El Cid, The Wizard of Oz or Gone with the Wind.

However, in my copy of the aforementioned book, Mr. Feininger points out that lens coatings were in part about light transmission rather than CA, because of the amount of light lost as a result of the tunnel effect and the inverse square law. By having the coating on the lenses, overall light transmission increased, and the lenses were typically clearer and images much sharper.

But one of the aspects involved was actually diaphragm calibration and "T stops". But I digress.

The older lenses, and especially ones purchased on eBay and other sites, may have histories unknown to the current owner.
Ergo: one does not know if the lens suffered from the use of Windex on all the elements and the near-complete wiping away of its lens coatings.

Except when flare and CA suddenly become a huge issue.
But also remember that the formulations for coatings from the 1960's and 70's have greatly evolved, and even the structure of lens elements has evolved. Thus the lens itself, even if its history is not in question, could very well be a transitional one, made between coating types and applications.

As for B&W, even the use of modern lenses will at times show a dramatic increase in quality over older lenses, due to contrast and other aspects.
 
My idea was that anti-reflective coating greatly reduced light losses at air-to-glass surfaces and made it possible to design multi-element zoom lenses without a huge loss of light. In the old days it was commonly accepted that each air-to-glass surface lost around 3% of the light. Imagine today's 70-200mm zooms with 21 to 23 elements. Imagine if at each air-to-glass surface, 3% of the light was lost. A 21-element zoom lens can have perhaps 10 to 15 air-to-glass surfaces.

Today the most recent advances have virtually eliminated lens flare when shooting right into bright light sources. Nikon invented this improvement, which they called nano coating; it was only a few years before Canon released an almost-identical form of lens coating.
 

4% on a single air/glass transition.

The problem comes when you have two or more air/glass transitions, because the total reflection is then between 0% and 16%. This is a function of the relationship between the glass and the wavelength, where, for want of a better model, the *interference* of the wave function can be used to visualise it. The reflection off the second surface can cancel or amplify the reflection off the first surface. (This is of course not what actually happens: when you use a detector to measure the reflected photons, it registers an increase or decrease in the number of audible clicks of the same volume, indicating particles and not waves...)

You can see this effect in thin films such as oil on water, where the colours you see are the amplified reflections arising from the relationship between the thickness of the film and the wavelength of the light. It was also Newton who measured and detailed the effect with glass.

The effect is related to wavelength, but if you have a few special coatings of the right thickness it is possible to reduce the reflections quite considerably.
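The arithmetic behind those percentages, as a quick sketch. The per-surface losses are round illustrative numbers (real coatings vary), and this ignores the interference between surfaces described above:

```python
def transmission(n_surfaces, loss_per_surface):
    """Light remaining after n independent air/glass surfaces."""
    return (1.0 - loss_per_surface) ** n_surfaces

surfaces = 15  # the zoom-lens guess from the post above

print(f"uncoated (~4%/surface):      {transmission(surfaces, 0.04):.0%}")   # ~54%
print(f"single-coated (~1%/surface): {transmission(surfaces, 0.01):.0%}")   # ~86%
print(f"multicoated (~0.2%/surface): {transmission(surfaces, 0.002):.0%}")  # ~97%
```

On those assumptions, an uncoated 15-surface design loses nearly half its light to reflections, which is why coating mattered long before flare control did.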

2: What most people who do not study photography or light fail to realise is that CMYK and RGB were based on tables and scales produced well before the 20th century, when some folks discovered that colour could be altered with various coloured glass. E.g., red glass allowed some green through (as a different shade of red) and made blue essentially black (at least mostly).
It was also here that colour temperature (the Kelvin scale) was first applied and realized, ergo why red sits at the lower end of the scale and blue higher, and why White Balance adjustment has a greater effect on the final image than most really understand.

Hmmm...

The trouble with most photographers on forums is that they equate colour to the physics of light. Lenses are designed and work entirely in conjunction with the physics of light. Colour and colour reproduction is entirely perceptual and works by understanding the mechanics of the human eye/brain.

Walk out of a bar lit by warm artificial light and into the heavy blue of midday under blue skies. Do you see the bar as being particularly yellow, or the outside as being heavily blue? No, because your eye adjusts colour to enhance the contrast between colours. This is an evolutionary thing: we survive because we have the ability to see the orange stripes of a tiger even in heavily blue-biased light. Our eyes naturally cease to record as much blue, because blue de-saturates the orange, and therefore we see a bigger contrast between the colours.

A number of very important things are happening here, and they are mimicked in photographic images. The trouble is that because things remain constant to our eyes, we assume that what we see is constant and can be explained by what we see, rather than by how we see.

  • When we take a photographic image we equalise exposure so the SAME amount of light falls on the sensor in all conditions, (it is the calibration of exposure). We no longer reproduce absolute intensity of light, we equalise the brightness much as the eye does and record the same intensity as dictated by the sensitivity of the film/sensor and the brightness of the output media.
  • We also apply a WB. In other words we no longer record absolute colour or colour temperature but convert it to a *Reference WB* which is fundamentally derived from how the human eye tries to balance primaries to increase visual contrast.
That excessive blue de-saturates orange is entirely due to the workings of the human eye and has nothing to do with the physics of light. Combining light of the corresponding wavelengths does NOT produce grey light. Light doesn't have the property of colour; colour is entirely how the human eye interprets wavelength.

So if you combine blue and orange light, what happens is that you produce a (roughly equal) signal in the three receptors of your eye, which the brain interprets as *grey*. If you reduce the amount of blue, then the proportion between the signal generated by the **blue and the **red/**green cones shifts more towards the red/green, and so you see a colour that's more orange. (** The cones in our eyes are labelled red-green-blue, and the mistake is to apply your understanding of the word *red* to the function of the cone. The red cone actually has a much broader spectral response that includes some blue, and as a result we classify and order colour perceptually from indigo through magenta to red, where no such relationship exists in physics.)

RGB and additive colour systems are an illusion so attuned to the three receptors in the human eye that it only works with humans. Even CMYK is fundamentally based on the workings of the human eye. Because we always see yellow as the brightest colour, in any subtractive system it's impossible to produce that brightness by subtraction; yellow has to be at the top of the tree and be the colour subtracted from. (Yellow stimulates maximum signal in the *red/green* cones and minimum in the *blue*, and so yellow always triggers more cones than blue and is always seen as a brighter colour.)

Using colour filters to subtract light is essentially filtering wavelength, but the resulting *colour* we see is not an accurate recording of that wavelength but a perceptual interpretation of wavelength that is due to the mechanics and chemistry of the eye/brain.

Conclusion: Colour does NOT describe the physics of light; only wavelength does. Colour is our perceptual interpretation of wavelength, and all colour reproduction systems are based on that perceptual representation and NOT on the physics of light or the absolute wavelength or intensity of light in any scene. I have seen posters on forums *prove* the *absorption of fragile blue* by comparing jpegs on computer screens, which of course is a ridiculous concept and completely misunderstands the nature of colour and colour reproduction.

All the colour we see is composed of wavelengths, but the way they combine to produce the colour we see, the way the saturation and intensity of that colour changes, and the way we categorise and order colour are all defined by the mechanics of the human eye and cannot be understood from the physics of light alone. The complementary colours are entirely a product of the cones in our eyes and again have nothing to do with the physics of light.

All colour reproduction systems are based on how we see and interpret colour through human eyes, what is known as Perceptual Colour Theory, and not on the actual recording and reproduction of wavelength. Even the RGB of cameras is a specific reference to the cones in your eyes and has nothing to do with the actual physics of the light transmitted by the lens.

So although CA is a product of the physics of light, and of how a lens fails with different wavelengths at the edges of the field it projects, the actual representation of it in an image is a perceptual model. It is a mistake to assume that you are looking at, and can interpret, the actual physics of the light and the way a lens transmits it when you view an image on any colour reproduction system. You DO NOT SEE the actual light that passed through the lens, because you will ALWAYS view with the human eye and its perceptual interpretation of colour. Which is why it's utterly pointless for any colour reproduction system not to be based on the perceptual model.

Colour film is also a perceptually based reproduction system, and so the effects of filters have to be understood in line with the perceptual model; images cannot be interpreted in relation to the actual physics of light and the transmission of a lens.

BTW, it was Newton who correctly guessed the nature of light, and especially that white light is composed of all the colours. Before this it was assumed that white lacked the *purity* of colour, and that the purer the colour, the more divine the light.

It was Thomas Young who shortly afterwards correctly realised that the way colour and colour filters worked when combining light was actually describing the mechanics of the human eye and had nothing to do with the physics of light.
 
