Oh My God I Love It

I sometimes use Lab, but admit it's something I should use more often.
It's funny how people (and I'm guilty of this myself) think at some point that they have reached a pinnacle in Photoshop... that you couldn't know much more...
After using PS for so many years, though, the phrase I'd often say to my fellow PS users in the industry is 'the more you think you know, the more you realise you have to learn'.
 
I've been teaching Lab (that's how it is usually written in colour science and in Photoshop) along with RGB and CMYK to beginners for a few years. They seem to get it quite easily, but the thing that usually swings it is the demo of cleaning up near-neutral colours by simply using curves. That seems to help them understand how Lab works. It may also help that I've been using Lab for a very long time.
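If it helps to see what that clean-up amounts to outside Photoshop, here's a very rough Python sketch (using skimage; the file name, patch coordinates and curve numbers are all just placeholders):

    import numpy as np
    from skimage import io
    from skimage.color import rgb2lab, lab2rgb

    img = io.imread('photo.jpg')                 # placeholder file name
    lab = rgb2lab(img)

    # Sample a patch that *should* be neutral (coordinates are made up)
    patch = lab[100:120, 200:220]
    a_cast = patch[..., 1].mean()
    b_cast = patch[..., 2].mean()

    # Sliding the a and b curves so the measured neutral lands back on 0,0
    lab[..., 1] -= a_cast
    lab[..., 2] -= b_cast

    # Optionally steepen the a/b curves around zero to push apart
    # near-neutral colours - the classic Lab move
    lab[..., 1] *= 1.5
    lab[..., 2] *= 1.5

    out = np.clip(lab2rgb(lab), 0, 1)
    io.imsave('corrected.jpg', (out * 255).astype(np.uint8))

The curves dialog is doing the same thing, just interactively.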

Dan Margulis' book is indeed a good introduction to its use in Photoshop.

Best,
Helen

I certainly agree that color separation curves and basic lightness curves are easy to demonstrate and follow along with, though I am an absolute stickler for understanding why things do what they do. As Margulis points out, it's really very simple to see a photo of a canyon, or one with a consistent color cast, and know exactly what to do. In fact, one might even have a decent idea of why Lab works so well for these sorts of things. Yet without a good conceptual and practiced understanding of how the A/B channels actually work, it can be quite difficult to do more advanced editing. That is, when looking at a more complicated image, it can be very hard to figure out where to begin without a solid grip on how changes to the A/B curves will manifest themselves. I'll admit it appeared deceptively simple the first time I read the book, but when I first tried to work on a more complex image I found it quite difficult to figure out exactly what to do. Practice, practice, practice, I suppose.

Thanks for weighing in.
 
Good to see you like it. I've used Lab a lot for brightness-only adjustments. For instance, adjusting the tone of an image with curves without affecting the saturation is very easily done in Lab. It may be an imaginary space built on a mathematical model of colour rather than actual colour combinations, but damn, it is useful at times.

Great for luminance noise reduction and sometimes sharpening too.
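For anyone curious what that looks like outside Photoshop, a minimal Python sketch with skimage (the gamma value is just an arbitrary stand-in for a tone curve):

    import numpy as np
    from skimage import io
    from skimage.color import rgb2lab, lab2rgb

    lab = rgb2lab(io.imread('photo.jpg'))        # placeholder file name

    # Apply a tone curve to L only (0-100); a and b are left alone,
    # so hue and saturation stay put
    L = lab[..., 0] / 100.0
    lab[..., 0] = (L ** 0.8) * 100.0             # simple brightening curve

    # Luminance noise reduction or sharpening works the same way:
    # filter lab[..., 0] and leave the colour channels untouched

    out = np.clip(lab2rgb(lab), 0, 1)
    io.imsave('brighter.jpg', (out * 255).astype(np.uint8))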
 
+1 :thumbup:
 
So here's a noob question: how can you edit and print in Lab color when your editing media (RGB screen and CMYK printer) do not support the colors?

I understand how Lab color could really make an image more dynamic and interesting, but when your camera only shoots in RGB, your scanner only reads in RGB, and your computer monitor only displays RGB, how do you even view the colors to edit them, let alone reproduce them?

Once again, I'm sure this is a really noobish question, but I don't understand.
 
You, or the colour management software, simply converts to an appropriate RGB or CMYK space for output.
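In code terms it's a single conversion step; here's a tiny, purely illustrative Python/skimage sketch, assuming sRGB is the output space:

    import numpy as np
    from skimage.color import lab2rgb

    # One Lab colour: L=70, a=20, b=-30 (L runs 0-100, a/b roughly -128..127)
    lab = np.array([[[70.0, 20.0, -30.0]]])

    # Converting for an sRGB display or file is just:
    rgb = np.clip(lab2rgb(lab), 0, 1)
    print(rgb)       # the nearest colour sRGB can show

    # For print, colour management converts to the printer's CMYK
    # profile in the same way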

Best,
Helen
 
Okay, I'm starting to get it. So I shoot my photo in raw, or scan my photo at a higher bit depth than the typical 24 bits; now how do I edit my photo in Lab color if I can't see all of the colors on the screen? I'm not seeing an advantage to it if your monitor cannot distinguish the difference between two different colors. Is it a faith thing, where you assume it's fine until you print and see true Lab color (or Adobe RGB)?

I'm really sorry. I just feel like I'm so close to understanding the concept behind this, but it doesn't make sense if our viewing medium doesn't support the extra colors.
 
What colors are, how we perceive them, how they combine to form other colors, etc...

You are getting into the psychology of perception here.
There are only two ways that colours can combine, and there is only one way in which we can perceive them.
Colours can mix by pigment, as in an ink-jet printer:
Pigments selectively absorb or reflect the various wavelengths of light - for example grass is green because chlorophyll absorbs light at either end of the spectrum and reflects green wavelengths.
The three primary colours for pigments are magenta, cyan and yellow.
Colours can mix as light, as in your computer monitor:
The three primary colours for light are red, blue and green.
As we normally view the world by reflected light, the mixing of coloured light can produce results that we wouldn't expect, as in Clerk Maxwell's experiment. Mixing red and green light gives yellow, for example.
In the latter method there are shortcomings because although the eye works on RGB (the colour response of the cones in the eye), it can perceive a much wider range of colours than can be displayed on a computer monitor.
By the same token, the colours that can be produced by an ink jet printer do not have anywhere near as wide a range as the colours available in 'real' life.
And even worse, the range of colours that can be produced by an ink-jet is different to the range of colours that can be produced on a monitor.
The various colour spaces are just a range of approaches that attempt to deal with this - trying to maximise the colour range that can be displayed and match it to what is printed.
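If you want to see that mismatch in numbers, here's a rough Python/skimage sketch (the Lab values are just an example of a colour a monitor can't reach):

    import numpy as np
    from skimage.color import lab2rgb, rgb2lab

    # A cyan more saturated than sRGB allows
    # (sRGB's purest cyan is roughly L=91, a=-48, b=-14)
    lab = np.array([[[90.0, -70.0, -15.0]]])

    rgb = lab2rgb(lab)       # the conversion squeezes it into the sRGB gamut
    print(np.round(rgb, 3))
    print(np.round(rgb2lab(rgb), 1))   # round trip: the colour that survives is less saturated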

The whole thing gets even more complicated once you realise that colours as we perceive them only exist inside our head. It's our brain's way of making sense of the information the eye sends it.
And then there is colour constancy - we perceive colours as being the same even though the colour of the illuminating light changes (our brain can do what the White Balance setting on your camera tries to do).
And then we all see colours slightly differently anyway due to variations in physiology. And people with colour deficiency (colour blindness) see things completely differently.
(Try closing one eye whilst looking at a neutral surface and then swapping eyes. My left eye has a magenta cast of about +1 compared to my right eye, though I don't notice it when both are open.)
It's well worth getting your colour vision checked - its accuracy can be measured.
Which leads me on to the final point. As we get older general ocular degeneration means that we see colours with less intensity.
A child of six sees colours as being far more intense than a person of sixty. It's just that our brains compensate so we don't notice.

Bear all this in mind and you may realise that whatever colour space you work in it's never going to actually match reality - there will always be shortcomings.
There comes a point where the pursuit of perfection isn't worth the effort ;)
 
Okay, I'm starting to get it. So I shoot my photo in raw, or scan my photo at a higher bit depth than the typical 24 bits; now how do I edit my photo in Lab color if I can't see all of the colors on the screen? I'm not seeing an advantage to it if your monitor cannot distinguish the difference between two different colors. Is it a faith thing, where you assume it's fine until you print and see true Lab color (or Adobe RGB)?

I'm really sorry. I just feel like I'm so close to understanding the concept behind this, but it doesn't make sense if our viewing medium doesn't support the extra colors.

It's based on color and brightness/contrast being separate. The benefit is that colors can be made lighter or darker without blending other colors into them.
 
I use Lab Color to run an Unsharp Mask on just the Lightness (black and white) channel and then convert back to RGB. This stops the colour halos that sometimes appear after using the Unsharp Mask in a standard colour mode. I've automated this process so all I have to do is hit F12! But having read the above I'm going to have to start playing around with curves etc. now - although I usually adjust my curves in RAW.
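For anyone who'd rather see the same trick as code than as an action, a rough Python sketch with skimage (radius and amount are just placeholder settings):

    import numpy as np
    from skimage import io
    from skimage.color import rgb2lab, lab2rgb
    from skimage.filters import unsharp_mask

    lab = rgb2lab(io.imread('photo.jpg'))        # placeholder file name

    # Sharpen only the Lightness channel; a and b carry the colour,
    # so no colour halos can appear
    L = lab[..., 0] / 100.0                      # scale L to 0-1 for unsharp_mask
    lab[..., 0] = np.clip(unsharp_mask(L, radius=2, amount=1.0), 0, 1) * 100.0

    out = np.clip(lab2rgb(lab), 0, 1)
    io.imsave('sharpened.jpg', (out * 255).astype(np.uint8))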

EDIT: 200th post! Woo!!!
 
I hate to say it, but in most cases, doing a USM in RGB and setting the blending mode to luminosity looks very close to USM on the L channel in Lab.
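A sketch of that equivalence (Python/skimage; here the luminosity blend is approximated by swapping in the sharpened L channel, which isn't Photoshop's exact blend formula but is close in spirit):

    import numpy as np
    from skimage import io
    from skimage.color import rgb2lab, lab2rgb
    from skimage.filters import unsharp_mask

    img = io.imread('photo.jpg') / 255.0         # placeholder file name
    sharp = np.clip(unsharp_mask(img, radius=2, amount=1.0, channel_axis=-1), 0, 1)

    # 'USM then blend as luminosity': keep the colour (a/b) of the original,
    # take the lightness (L) from the RGB-sharpened version
    lab = rgb2lab(img)
    lab[..., 0] = rgb2lab(sharp)[..., 0]

    out = np.clip(lab2rgb(lab), 0, 1)
    io.imsave('sharpened_lum.jpg', (out * 255).astype(np.uint8))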
 
I must say I am more skilled at the RGB/CMYK thing in the digital world than at chemicals and stuff, but I know you will find a good number of students who are willing to be your slaves... errrr, students.
 
It's based on color and brightness/contrast being separate. The benefit is that colors can be made lighter or darker without blending other colors into them.

This intrigues me... I didn't know that adjusting the brightness of an RGB image blends other colors into it. I guess that makes sense, since you have to change each channel to adjust brightness in 24-bit color... Cool.

I'm still really new to all of this, so you'll have to excuse my ignorance. But what I lack in knowledge I make up for in desire to learn :)
 
Try watching the RGB colors change in the info palette as you make adjustments. Then it'll make more sense.
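Or, if you want the numbers without opening Photoshop, a tiny Python/skimage sketch (the starting colour is arbitrary):

    import numpy as np
    from skimage.color import rgb2lab, lab2rgb

    rgb = np.array([[[0.60, 0.30, 0.25]]])       # one pixel of a muted red
    lab = rgb2lab(rgb)

    lab[..., 0] += 15                            # lighten via L only
    brighter = np.clip(lab2rgb(lab), 0, 1)

    print(np.round(rgb, 3))                      # before
    print(np.round(brighter, 3))                 # after: all three RGB numbers move,
                                                 # even though only the lightness changed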
 
