RAW vs. JPEG

RAW or JPEG

  • RAW

    Votes: 53 93.0%
  • JPEG

    Votes: 4 7.0%

  • Total voters
    57
  • Poll closed.
I too shot both, but mostly I shot Raw.

I shot JPEG when time constraints didn't allow post-processing, like selling images on-site at an event.

The bottom line regarding the difference between the two file types is bit depth.
Tutorials on Color Management & Printing

Raw is like a film negative: an unfinished image that has to be 'developed' and then adjusted before a print is made.
JPEG was designed to be a finished, ready-to-print format. As such, JPEG has little, if any, editing headroom. JPEG is a lossy, compressed file type. About 80% of the color information the camera's image processor developed gets thrown away to make a JPEG (the lossy part). All those millions of pixels get converted into 8x8, 8x16, or 16x16 pixel blocks known as Minimum Coded Units (MCUs) (the compression part).
This is totally misleading, IMO. The color information is not 'thrown away' when using JPEG. The values from the sensor undergo a non-linear transformation (gamma correction), and only after that are they compressed, discarding only information that is not relevant to human vision. The gamma correction is a crucial step that allows almost all the relevant information to be kept. The only reason to shoot raw is if you want to do the gamma correction with your own algorithm (software) rather than the one in the camera. If raw had more 'editing headroom', then the in-camera algorithm would suck, but usually that is not the case.
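To make the gamma point concrete, here is a minimal sketch of the idea in Python/NumPy (illustrative only; the 12-bit range and the 2.2 gamma are assumptions, not taken from any particular camera):

```python
import numpy as np

# Simulated 12-bit linear sensor values for a deep-shadow gradient
# (only the bottom 1/16th of the 0..4095 range).
linear = np.linspace(0, 256, 1000) / 4095.0

# Option A: quantize the linear values straight to 8 bits.
linear_8bit = np.round(linear * 255).astype(np.uint8)

# Option B: gamma-encode first (roughly what a JPEG pipeline does), then quantize.
gamma_8bit = np.round((linear ** (1 / 2.2)) * 255).astype(np.uint8)

print("distinct shadow levels, linear 8-bit:", np.unique(linear_8bit).size)   # ~17
print("distinct shadow levels, gamma 8-bit:", np.unique(gamma_8bit).size)     # ~70+
```

The gamma-encoded version keeps far more distinct shadow levels in the same 8 bits, which is the "keep almost all relevant information" point above.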
 
The amount of information lost between Raw and JPEG is minimal; most people wouldn't even know the difference. Some consider shooting JPEGs unsuitable for professional work, but that is simply wrong. People will argue over the amount of information lost to JPEG compression, but again it is so small that to the naked eye you are likely not going to see a difference.

National Geographic's first all-digital issue was shot entirely in JPEG. What it still comes down to is this: if you have a perfect exposure in camera, there is little difference. It is a personal choice.
 
I shoot raw almost exclusively, even with my compact camera. The two main differences I find are dynamic range and noise (the ideal raw exposure is usually greater than the ideal JPEG exposure, hence a reduction in noise) with colour space occasionally being important as well (raw usually has a much larger colour space than the JPEG options of Adobe RGB and sRGB, so those options do indeed 'throw colours away').
 
I just started shooting RAW about a week ago and I don't know why I waited so long.

RAW is so much more flexible to work with. When I need lots of pics in a hurry, I use JPEG.

I prefer RAW though.
 
This is totally misleading, IMO. The color information is not 'thrown away' when using JPEG. The values from the sensor undergo a non-linear transformation (gamma correction), and only after that are they compressed, discarding only information that is not relevant to human vision. The gamma correction is a crucial step that allows almost all the relevant information to be kept. The only reason to shoot raw is if you want to do the gamma correction with your own algorithm (software) rather than the one in the camera. If raw had more 'editing headroom', then the in-camera algorithm would suck, but usually that is not the case.

Today's digital cameras record 12-bit depth (4,096 discrete tonal values per color channel, as defined by colorimetric interpretation of the Bayer array) or 14-bit depth (16,384 discrete tonal values per color channel, as defined by colorimetric interpretation of the Bayer array).
JPEG is limited to an 8-bit depth, or 256 discrete tonal values per color channel. What happens to the other 3,840 or 16,128 discrete values?
Banding and posterization can result in image gradients because JPEG lacks enough levels to render the gradient without visible steps.
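A throwaway sketch of that arithmetic (illustrative only), quantizing a smooth 12-bit gradient down to JPEG's 8 bits:

```python
import numpy as np

# A smooth gradient captured at 12-bit depth (values 0..4095).
gradient_12bit = np.arange(4096)

# Reduce it to JPEG's 8-bit depth (values 0..255).
gradient_8bit = np.round(gradient_12bit / 4095 * 255).astype(np.uint8)

print("levels available at 12 bits:", 2 ** 12)                              # 4096
print("levels available at 8 bits:", 2 ** 8)                                # 256
print("12-bit values merged into each 8-bit step:", 2 ** 12 // 2 ** 8)      # 16
print("distinct levels left in the gradient:", np.unique(gradient_8bit).size)  # 256
```

Every 8-bit step swallows about sixteen 12-bit values; whether that shows up as visible banding depends on how smooth the gradient is and how hard the file is pushed in editing.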

None of the luminosity data the image sensor records is discarded in a Raw file.

The image sensor records with a linear gamma (1.0). Human vision cannot be modeled exactly by a gamma curve, but Raw converters usually apply a non-linear gamma encoding between 1.8 and 2.2, which is close enough to match the way human eyes see the world.

http://www.adobe.com/digitalimag/pdfs/linear_gamma.pdf
Also see - Real World Camera Raw by Bruce Fraser and Jeff Schewe.

Here is an approximation of an image with a linear gamma and no colorimetric interpretation of the Bayer array, but converted to JPEG for online display:
RawLinear.jpg


The same photo with a non-linear gamma applied, colorimetric interpretation, and conversion to JPEG for online display:

Converted.jpg
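For anyone who wants to reproduce something like the two images above, here is a rough sketch using the third-party rawpy (a LibRaw wrapper) and imageio Python packages; the filename is hypothetical and the gamma numbers are common defaults, not the exact settings used for these examples:

```python
import rawpy
import imageio

# Near-linear rendering: gamma (1, 1), no auto brightening.
with rawpy.imread("photo.CR2") as raw:            # hypothetical raw file
    linear = raw.postprocess(gamma=(1, 1), no_auto_bright=True)

# Conventional rendering: a ~2.2 tone curve, similar to a camera JPEG.
with rawpy.imread("photo.CR2") as raw:
    display = raw.postprocess(gamma=(2.222, 4.5))

imageio.imwrite("RawLinear.jpg", linear)
imageio.imwrite("Converted.jpg", display)
```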
 
I have not got a clue what any of this means, and to be honest, I think even if I did, it wouldn't affect how I shoot.
 
I think of JPEG as a place I want to end up, not start. It is just the same as recording an audio master in MP3 instead of WAV. A compressed file is an OK place to end up but not a good place to start. (JMHO)
 
I think of JPEG as a file format of last resort; it's seriously terrible.

I wish cameras at least gave options for lossy and lossless output, and for 8- and 16-bit PNG. JPEG is such a 20th-century format.
 
Today's digital cameras record 12-bit depth (4,096 discrete tonal values per color channel, as defined by colorimetric interpretation of the Bayer array) or 14-bit depth (16,384 discrete tonal values per color channel, as defined by colorimetric interpretation of the Bayer array).
JPEG is limited to an 8-bit depth, or 256 discrete tonal values per color channel. What happens to the other 3,840 or 16,128 discrete values?
Banding and posterization can result in image gradients because JPEG lacks enough levels to render the gradient without visible steps.
Indeed, these 12- or 14-bit values in the RAW file encode linear intensity, as you say. But storing linear values is very inefficient, because human vision is non-linear, roughly logarithmic in intensity. If you apply a gamma curve to the linear values, you can use the available bits much more effectively with respect to how human vision works. That was my point: 8-bit gamma-corrected values store almost all of the actually useful information from the RAW file (provided the JPEG codec in the camera works properly). Banding and posterization can appear when you have an improperly exposed image and try to fix it in post-processing. Say you have an underexposed image and you try to 'fix' the exposure in an image editor. Both JPEG and RAW are susceptible to this, because if the image is not exposed properly you have already lost bits of dynamic range in the RAW file, hence the banding. Instead of a 14-bit device you have effectively used only, say, 6 bits, and 6 bits in a linear range is a banding and posterization catastrophe. A heavily underexposed RAW image contains 14 bits of trash, and nothing will help it. I will repeat that RAW is useful if you know you want to use your own conversion algorithm; that is its main application. You cannot fix badly underexposed photos in post-processing even if you use RAW. For a small amount of exposure correction (+/- 1 stop), JPEG will do just as well as RAW.
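To put rough numbers on the underexposure point (a back-of-the-envelope sketch, assuming one stop of underexposure costs roughly one bit of usable linear depth):

```python
# Each stop of underexposure halves the recorded signal; in a linear raw file
# that is roughly equivalent to giving up one bit of usable depth.
SENSOR_BITS = 14

for stops_under in (0, 1, 4, 8):
    usable_bits = SENSOR_BITS - stops_under
    levels = 2 ** usable_bits
    print(f"{stops_under} stops under: ~{usable_bits} usable bits, ~{levels} levels on the subject")

# 8 stops under leaves ~6 bits (~64 linear levels) -- the banding and
# posterization catastrophe described above, raw file or not.
```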
 
I have been experimenting with ETTR (expose to the right) in conjunction with non-standard input gamma encodings, with some success. Rather than having my raw processor encode a standard 2.2 non-linear gamma, I adjust the gamma based on shadow placement.

It works well and effectively addresses the efficiency problem, though this ability isn't available in most raw processors.
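Purely as a guess at the idea described above (not the poster's actual tool), a shadow-driven gamma choice might be sketched like this; all names, thresholds, and the target value are assumptions:

```python
import numpy as np

def pick_gamma(linear_image, shadow_percentile=5, target_shadow=0.10,
               lo=1.8, hi=2.6):
    """Guess an encoding gamma so the deep shadows land near a chosen output
    level, instead of always using 2.2 (illustrative only)."""
    shadow = max(float(np.percentile(linear_image, shadow_percentile)), 1e-6)
    gamma = np.log(shadow) / np.log(target_shadow)   # shadow ** (1/gamma) == target_shadow
    return float(np.clip(gamma, lo, hi))

# Example: an ETTR frame whose deep shadows sit around 1% of full scale.
frame = np.random.default_rng(0).uniform(0.001, 0.2, size=(100, 100))
print(pick_gamma(frame))   # roughly 1.9-2.0 for this synthetic frame
```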
 
OK, I have always been under the impression that I can only shoot Raw or JPEG with my T2i, but I have seen people in this thread mention that you can shoot both. Can anyone briefly tell me how I would go about doing both?
 
It's available from your camera menu -- RAW plus JPEG -- in which case you'll save two files on the card for each photo taken.

Joe
 
This is totally misleading, IMO. The color information is not 'thrown away' when using JPEG. The values from the sensor undergo a non-linear transformation (gamma correction), and only after that are they compressed, discarding only information that is not relevant to human vision. The gamma correction is a crucial step that allows almost all the relevant information to be kept. The only reason to shoot raw is if you want to do the gamma correction with your own algorithm (software) rather than the one in the camera. If raw had more 'editing headroom', then the in-camera algorithm would suck, but usually that is not the case.

I haven't met a camera algorithm that didn't suck.

If the scene lighting is ideal and the exposure correct, the camera software can be counted on to process the raw data to a mediocre result. I can always do better. The minute the lighting starts to deviate from ideal and becomes at all difficult, the camera software starts to really suck. That gives you two expected outcomes from the camera software: 1. mediocre and 2. really suck -- averaged together, just suck.

Suck is not a very precise term and could become a point of contention. So for clarity, my measure of "suck" for the software in the cameras is: can it do the job as well as I can? If it can't, it sucks.

(I swore I was going to stay out of this thread! One beer and all my resistance just caves!)

Some notes: for the sake of clarity, what this question is really asking is how you arrive at a finished RGB photo. Do you rely on the software in the camera, on the software in your computer, or on yourself?

1. Raw capture in camera to camera image processor to RGB JPEG.
2. Raw capture in camera to computer to automated batch-processed (using a raw converter) RGB photos -- a rough sketch follows below the list.
3. Raw capture in camera to computer to photographer processed (controlling raw converter) RGB photo.
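As a bare-bones illustration of option 2, a non-interactive batch pass might look like this sketch (again using the third-party rawpy and imageio packages; the folder name and .CR2 extension are just examples):

```python
from pathlib import Path
import rawpy
import imageio

# Convert every raw file in a folder to a JPEG with the converter's
# default, non-interactive settings (option 2 above).
source = Path("shoot_folder")                 # hypothetical folder of raw files
for raw_path in sorted(source.glob("*.CR2")):
    with rawpy.imread(str(raw_path)) as raw:
        rgb = raw.postprocess()               # default rendering, no per-image decisions
    imageio.imwrite(raw_path.with_suffix(".jpg"), rgb)
```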

There are appropriate circumstances for all three of those options. If I were taking a lot of photos and selling them I'd select option 1 or 2 because time is money and I'd want to make as much money as possible -- haven't yet figured out how to make time. The photos of course would have to be good enough to satisfy my clients.

Since I'm not selling lots of photos, I have enough time to get the very best result. In that case I choose option 3, since I'm better at processing a photo than the software in the cameras.

Raw has a whole lot more "editing headroom" in the hands of a skilled photographer than in the hands of a sucky software algorithm. The algorithm lacks flexibility and any ability to adapt to the unique characteristics of a specific image -- a good explanation for why it sucks. That flexibility and adaptability possessed by the photographer equates to "editing headroom." The photographer can adjust the conversion process before the final commitment to compressed 8-bit -- that's headroom. Once that final commitment is made, the headroom is gone.

Joe
 
