Pixel Density Question

DBY
Sorry to bring this debate up again and I promise that I researched this extensively beforehand :)
I am trying to understand the RAW/JPEG influence on sharpness and have questions on pixel density. I think I have figured this out but need someone to check my thinking. For a point of reference I am looking at a Nikon D700, which has a nominal 12 MP sensor. The D700 has three Image Size settings: Large, Medium and Small, and from the manual the settings result in pixel counts of 4.3k x 2.8k = 12 MP, 3.2k x 2.1k = 6.8 MP and 2.1k x 1.4k = 3.0 MP respectively. My first question is how does the camera accomplish this change in pixel count? It doesn't appear to me that the camera is using fewer sensor pixels with each setting, and it is still a full frame image with the same nominal image dimensions. When I pixel peep, I can see the pixel rendering degrade going from Large to Medium to Small (i.e. with each reduction in Image Size there are fewer rendering pixels attempting to render the same image, so the rendering pixels are larger). My theory, which I am seeking clarity on, is that the reduced pixel "count" of the Medium and Small settings is not based on a difference in the sensor pixels used, but is actually accomplished by a processing algorithm in the camera that re-renders the original 12 MP picture with roughly half the pixel count at each step down. Is this how it works?

Second question is on sharpness of RAW versus JPEG. I know the raw file doesn't change in size regardless of the Image Size setting. The Large Image Size is 12 MP, and I have attempted to pixel peep the raw file to be certain; the effective pixel density seems to be the same for the JPEG Large setting and the RAW image. Is this generally true? If so, am I correct to conclude that the enhanced acuity attributed to RAW over JPEG doesn't come from a different pixel count, but from the higher capture bit depth (12 or 14 bits in the case of the D700) versus the 8 bits of JPEG, the absence of JPEG compression in the RAW file, and possibly other processing subtleties? Not trying to start a RAW versus JPEG battle – the engineer in me just needs to know how this works.
 
My first question is how does the camera accomplish this change in pixel count? … My theory … is that the reduced pixel "count" of the Medium and Small settings is not based on a difference in the sensor pixels used, but is actually accomplished by a processing algorithm in the camera … Is this how it works?
I'd say yes. I don't have a D700 but it should be easy enough to test. Set the camera to save raw + JPEG and set the size to medium. Take a photo and examine the raw file -- 12mp? Then the JPEG was resized in processing to create the medium output file.
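If you want to run that check at the computer as well, here's a minimal sketch (assuming the rawpy and Pillow Python packages; the file names are hypothetical) that compares the pixel dimensions of the NEF and its paired JPEG:

```python
import rawpy              # LibRaw bindings; reads Nikon NEF files
from PIL import Image

# Hypothetical pair from one shutter press with raw + JPEG, size Medium.
with rawpy.imread("DSC_0001.NEF") as raw:
    h, w = raw.raw_image_visible.shape          # active sensor photosites
    print(f"raw:  {w} x {h} = {w * h / 1e6:.1f} MP")

with Image.open("DSC_0001.JPG") as jpg:
    w, h = jpg.size
    print(f"JPEG: {w} x {h} = {w * h / 1e6:.1f} MP")

# Expected on a D700: the raw stays ~12 MP at every Image Size setting,
# while the Medium JPEG comes out around 6.8 MP -- i.e. the resize happens
# in processing, not on the sensor.
```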
Second question is on sharpness of RAW versus JPEG. I know the raw file doesn’t change in size regardless of the Image Size setting.
You just answered the question above.
The Large Image Size is 12 MP, and I have attempted to pixel peep the raw file to be certain; the effective pixel density seems to be the same for the JPEG Large setting and the RAW image. Is this generally true?
Yes.
If so, am I correct to conclude that the enhanced acuity attributed to RAW over JPEG doesn't come from a different pixel count, but from the higher capture bit depth (12 or 14 bits in the case of the D700) versus the 8 bits of JPEG,
No.
the absence of JPEG compression in the RAW file, and possibly other processing subtleties?
It's the other processing subtleties. The camera's processor sharpens the image when creating the JPEG. In processing a raw file I can choose among several different sharpening methods and do a better job on the final image than the camera's processor can achieve.
Not trying to start a RAW versus JPEG battle – the engineer in me just needs to know how this works.
 
The Large Image Size is 12 MP, and I have attempted to pixel peep the raw file to be certain; the effective pixel density seems to be the same for the JPEG Large setting and the RAW image
You're not viewing the RAW file itself; you're viewing a JPEG preview image embedded in the file, which likely has at least some of the manufacturer's proprietary processing applied.
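You can pull that embedded preview out and look at it directly; a sketch with rawpy (the file name is hypothetical):

```python
import rawpy

# Extract the camera-generated preview JPEG from inside the raw container.
with rawpy.imread("DSC_0001.NEF") as raw:
    thumb = raw.extract_thumb()

if thumb.format == rawpy.ThumbFormat.JPEG:
    # This is what most viewers show you when you "open the raw file".
    with open("embedded_preview.jpg", "wb") as f:
        f.write(thumb.data)
```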
 
The final jpg will be the same as the raw for the first edit. If you come back to the image after you've deleted the raw file and edit further, there may be artifacts, and that will completely destroy your resolution: big ugly squares in your darker areas. And I find in some settings, trying to use a really small size jpeg will produce banding and artifacts right from the raw. I generally try to get my images down to 1 MB for posting online. Sometimes I have to go a bit bigger, as high as 1.7 MB, to avoid banding.
 
Yes, editing in jpeg is a lot like trying to edit sound using mp3 instead of .wav files. The compress/decompress cycle during editing creates more and more artifacts with each generation.
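That generation loss is easy to demonstrate. A sketch (assuming Pillow and NumPy, with a hypothetical starting image) that re-saves the same JPEG repeatedly and measures the drift:

```python
import io

import numpy as np
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")      # hypothetical input
original = np.asarray(img, dtype=np.int16)

for generation in range(1, 11):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=75)      # lossy re-encode
    buf.seek(0)
    img = Image.open(buf).convert("RGB")
    drift = np.abs(np.asarray(img, dtype=np.int16) - original).mean()
    print(f"generation {generation:2d}: mean abs error = {drift:.2f}")

# Error accumulates over the first few saves (it plateaus if nothing else
# changes); edit the pixels between saves and it keeps getting worse.
```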
 
Thanks – so I was thinking about this the wrong way: that the pixel counts for Large, Medium and Small were somehow achieved optically or photographically. As you point out, all JPEGs are algorithmic translations of the original RAW file, and Nikon's Large, Medium and Small settings are just primary inputs taken into consideration when making that translation. Strictly from a pixel density perspective a Large file does have pixel density equal to the RAW file, but then a lot of other things happen when you look at the final JPEG. Thanks
 
A lot of other things happen indeed. It's just a novelty, but you can display and look at unprocessed raw data. It's very dark, but you can make out the image, and if you zoom in real close you can see the color filter array. (Download it, put it in an editor and lighten it up.) To get some perspective on just how much needs to happen, here's an unprocessed raw file, the JPEG the camera made, and the JPEG I made.

[Attachments: really-raw.webp, camera-jpeg.webp, my-jpeg.webp]
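If you'd rather script the "lighten it up" step, here's a rough sketch with rawpy, NumPy and Pillow (the file name is hypothetical, and the gamma value is just a guess to make the linear data visible):

```python
import numpy as np
import rawpy
from PIL import Image

# Read the undemosaiced sensor data -- one value per photosite.
with rawpy.imread("DSCF0001.RAF") as raw:
    mosaic = raw.raw_image_visible.astype(np.float32)

# Raw data are linear, so most of the scene sits near the bottom of the
# range and looks almost black. Normalize, then brighten with a rough gamma.
mosaic -= mosaic.min()
mosaic /= mosaic.max()
bright = (255 * mosaic ** (1 / 2.2)).astype(np.uint8)

Image.fromarray(bright).save("really-raw.png")
# Zoomed way in, the color filter array shows up as a fine checkerboard of
# different brightnesses, since each photosite sits behind a single filter.
```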
 
Been a long time since I've sized down a photo. Do that and suddenly you can't print a decent-size photo any more. I think too many people seem to want to master the science and in the process lose the image. I use a free processing program, Picasa, and don't size down at all any more. When I get ready to print, I tap "print" and a preview comes up and tells me if it's any good. This photography stuff gets extremely involved, and I for one don't get most of it and don't miss it. I don't need to make a perfect photo, just one I like! Another thing I've come to lately is that I don't need a better camera. I've used a Nikon D7000 for a decent number of years now and recently felt the need for something smaller and lighter to carry around. I went point and shoot several years ago, and though I found they take nice photos, they don't last very long. I'm addicted to shooting pointing dogs in the field and old buildings. I got to the perfect P&S by accident: the Panasonic ZS100. Now I carry a camera everywhere with me. It only has a 25-250 lens but covers me nicely. I've often wondered how Zeiss can get $7,000 for a point and shoot; now I know! But they are certainly not in my budget! Got my present Panasonic ZS100 used and it's like new, $468 including shipping. But they are getting hard to find, being replaced by the ZS99. The thing I don't care for with that one is too much zoom; I suspect that out at 1200 on the long side the picture will be affected. I read that somewhere. One drawback: don't fall in the river with your Panasonic, as it will kill the camera. Don't ask how I know! I find these days I leave my DSLR home quite a bit, as the Panasonic can handle most of what I do. And full frame and mirrorless don't really turn me on. I have found that I can blow up photos from the Panasonic quite large and get good photos from it. I shot an old farmhouse from a long way off, blew it up to 12" x 24", and was happy as can be with it.
 
My first question is how does the camera accomplish this change in pixel count? … If so, am I correct to conclude that the enhanced acuity attributed to RAW over JPEG … comes from the higher capture bit depth (12 or 14 bits in the case of the D700) versus the 8 bits of JPEG … ?
Digital cameras employ image sensors, such as CCDs or CMOS sensors; in this case a silicon detector of roughly 4300 columns by 2800 rows. Resampling (interpolation) methods are then used to downscale that full capture to 3200 x 2100 or 2100 x 1400.

At high resolution a 14-bit RAW will be approximately twice the file size of an 8-bit jpg, but with RAW each red, green and blue value can range from 0 to 16,383, so there is much greater potential dynamic range relative to an 8-bit jpg, which has a maximum possible red, green and blue range of 0 to 255.
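As a toy illustration of that resampling step (the camera's actual resampling filter isn't published; this sketch just averages 2 x 2 blocks with NumPy, the simplest possible downscale):

```python
import numpy as np

def box_downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downscale by averaging factor x factor blocks (a box filter)."""
    h = img.shape[0] - img.shape[0] % factor   # trim to a multiple of factor
    w = img.shape[1] - img.shape[1] % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3))

full = np.random.rand(2800, 4300, 3).astype(np.float32)  # stand-in 12 MP frame
small = box_downsample(full)                              # halve each dimension
print(full.shape, "->", small.shape)   # (2800, 4300, 3) -> (1400, 2150, 3)

# Halving both dimensions quarters the pixel count (12 MP -> ~3 MP, i.e.
# "Small"). The in-between "Medium" size needs a fractional ratio, so real
# converters interpolate (bilinear, bicubic, etc.) rather than averaging
# whole blocks.
```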
 
At high resolution a 14-bit RAW will be approximately twice the file size of an 8-bit jpg … much greater potential dynamic range relative to an 8-bit jpg, which has a maximum possible red, green and blue range of 0 to 255.
A 14 bit raw file saved to disk is typically much more than twice the size of a JPEG created from that raw file.

The numbers listed are misleading because they're not comparing like units. We can't even compare one raw file with another from a different camera, because we don't have like units. For example, consider two staircases that both have 16,383 stairs. Staircase one has stairs that are 7 inches apart, while staircase two has stairs that are 7.3 inches apart. Do both staircases have the same total length? No. They have the same number of stairs, but the stairs are not like units.

In the case of raw versus JPEG, the units in a raw file are linear -- all the same size -- whereas the units in the JPEG are nonlinear, units of different sizes. So, for example, a 14 bit raw file and an 8 bit JPEG, both from the same camera, can both contain the total dynamic range the camera is capable of recording. My Fuji X-T2 has a maximum DR capability of approx. 10.5 stops. Here's a backlit photo that contains the full dynamic range my X-T2 is capable of -- it's an 8 bit JPEG. Below the photo is the RawDigger histogram for the 14 bit RAF file.

[Attachments: backlight.webp (the 8-bit JPEG), DSCF5468-Full-6032x4032.webp (the RawDigger histogram)]


Note the EV scale on the graph: just shy of EV 3 the green channel has reached saturation, and you can count stops down to between -7 and -8, where the data is getting pretty weak and we're at the camera's bottom limit -- gone by -8. Now here are two zoomed-in sections of the JPEG that correspond to the brightest (EV 2.9) and darkest (EV -7.5) sections of the photo. The sky is blue and not blown out, and there's still detail in the shadows as they culminate in black.

[Attachment: backlight-detail.webp (highlight and shadow crops)]


The full 10.5 stops of DR my X-T2 can record, and so the full 10.5 stops that can be recorded in my X-T2's 14 bit RAF files, are there and visible in the 8 bit JPEG that I created by processing the raw file. The "magic" is in the fact that the 256 levels of the JPEG are non-linear and under my processing control.

Of course raw files contain more data than JPEGs, but quoting those bit depths and numbers -- 14 bit, 8 bit, 255 and 16,383 -- which are not like units doesn't paint the full picture and in fact causes a lot of misunderstanding.
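To put rough numbers on the "non-linear units" point, here's a sketch (plain NumPy; the simple power curve is just a stand-in for whatever tone curve a raw converter actually applies):

```python
import numpy as np

stops = np.arange(0, 11)                    # 0 EV down to -10 EV
linear = 16383 / (2.0 ** stops)             # linear 14-bit counts, one stop apart

# Nonlinear 8-bit encoding: a 1/2.2 power curve as a stand-in tone curve.
nonlinear_8bit = np.round(255 * (linear / 16383) ** (1 / 2.2)).astype(int)
# Naive linear 8-bit encoding for comparison.
linear_8bit = np.round(255 * linear / 16383).astype(int)

for s, lin, nl, l8 in zip(stops, linear, nonlinear_8bit, linear_8bit):
    print(f"-{s:2d} EV: 14-bit {lin:7.0f}  8-bit curved {nl:3d}  8-bit linear {l8:3d}")

# Linearly, -10 EV is ~16 counts out of 16383 and rounds to 0 in 8 bits;
# through the curve it still lands around level 11, so all ~10 stops stay
# distinguishable in an 8-bit JPEG. The levels just aren't evenly spaced.
```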
 
Of course raw files contain more data than JPEGs, but quoting those bit depths and numbers … which are not like units doesn't paint the full picture and in fact causes a lot of misunderstanding.
The maximum possible range of RGB values in an 8-bit jpg is 0-255, while the maximum possible range in a 14-bit RAW is 0-16,383.
 
… with RAW each red, green and blue value can range from 0 to 16,383, so there is much greater potential dynamic range relative to an 8-bit jpg, which has a maximum possible red, green and blue range of 0 to 255.
If you use a 1-bit depth, you have two possible settings: 1 and 0. Say zero is the blackest black and 1 is interpreted as the whitest white. You have one bit of data depth, but still the widest possible dynamic range, with no increments in between.

The 16,383 and 255 refer to the number of gradations, not the ability to produce dynamic range. In fact, dynamic range is frequently altered in post. The number of gradations will, up to a point, affect how natural the image looks, although I've never seen a demonstration showing that 14 bit is better than 12, or even that 12 is better than 10. As noted above, jpegs achieve their file size by being non-linear. They discard data not needed in the file to be saved; it may, however, be needed in the next edit. Jpeg is for finished display, not further manipulation with photo software.
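A quick numeric sketch of that gradations-versus-range point (NumPy): quantize the same black-to-white ramp at different bit depths and the endpoints never move, only the number of steps between them:

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 100_000)     # an ideal smooth gradient

for bits in (1, 8, 12, 14):
    levels = 2 ** bits
    q = np.round(ramp * (levels - 1)) / (levels - 1)
    print(f"{bits:2d}-bit: {levels:6d} levels, "
          f"black={q.min():.1f}, white={q.max():.1f}, "
          f"largest rounding error={np.abs(q - ramp).max():.6f}")

# Every depth still spans full black to full white (the range), but 1-bit
# has nothing in between -- the missing in-between steps are what you see
# as banding in smooth gradients.
```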

This may be fascinating for those who are engineers. However, it's not necessary for most photographers. After all, the ultimate test of a photograph is not what technical spec boxes it ticks, or what kinds of graphs can be derived from it.

I've seen award-winning photos that were far less than technically perfect, and created many that looked better when I clipped parts of the dynamic range, then re-established the full visual dynamic range of part of the photo with curves in post production, without the parts I deemed unimportant. While I understand the use of graphs like the ones above, you can't tell from those graphs whether the image is any good.

Use of light to produce focus, highlights, contrast, subject isolation, compositional suggestions, etc. can all be accomplished relative to the subject matter. It could be argued that applying technical measurements to an artistic product is of theoretical value only.

Despite years of following various technical sites, I've never seen one prove that higher resolution necessarily leads to better images beyond about 12 MP. Looking at photos from various lenses, the qualities most important to me -- out-of-focus areas and transitions -- are not represented in the tech specs. Yet to me they are among the most important features of a photograph. I know engineers want it to be all about the specs, but it's not like that. In photography the specs have to be in sync with the subject, and that's not something you can measure. Ultimately, it's about what you see and how you respond to it. There is no technology I know of that can define that. Specs are interesting, but hardly essential. Once one achieves a basic understanding of them, it could be argued they are a waste of time.
 
21Limited wrote
"Despite years of following various technical sites, I’ve never seen a site prove that even higher resolution necessarily leads to better images, after about 12 MP. Looking at photos from various lenses, the most important values to me, out of focus areas and transitions are not represented in the tech specs. Yet to me they are among the most important features of photograph. I know engineers want it to be all about the specs. but it’s not like that. In photography the specs have to be in sync with the subject, and that’s not something you can measure. Ultimately, it’s about what you see and how you respond to it. There is no technology I know of that can define that. Specs are interesting, but hardly essential. After one achieves a basic understanding of them it could be argued they are a waste of time."

Well said.
 
