Laws of the Digital Darkroom

I'm just writing this in passing. Happened to be thinking about essential rules of the digital darkroom. So here are the indisputable laws of the digital darkroom.

1. Archival-quality scans are done by drum scan, in .tiff format, at 1:1 size, at the film's native resolution.
2. There is no such thing as too many pixels. They are infinitely divisible.
3. Monitors must be calibrated.
4. Drum scanners > negative scanners > flatbed scanners.
5. .tiff > jpeg2000 > jpeg
6. Chemical printing > LightJet > Inkjet
7. Post-processing reduces quality.
8. Un-calibrated inkjet printers do not make good prints.
9. If an inkjet print is not dry when it comes out of the printer, then the printer is not calibrated correctly.
10. Scanners need calibration, too.

I'll add more as I think of them.
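
To put law 1 in concrete terms, here's a small back-of-the-envelope sketch in Python. The frame size and the 4000 ppi figure are illustrative assumptions (a film's actual native resolution depends on the stock and the developing), not part of the law itself.

```python
# Rough pixel dimensions and uncompressed size of a 1:1 archival scan
# of a 35 mm frame. All numbers below are illustrative assumptions.

frame_w_mm, frame_h_mm = 36, 24   # full 35 mm frame
ppi = 4000                        # assumed "native" scanning resolution
channels = 3                      # RGB
bytes_per_sample = 2              # 16 bits per channel

px_w = round(frame_w_mm / 25.4 * ppi)
px_h = round(frame_h_mm / 25.4 * ppi)
megapixels = px_w * px_h / 1e6
size_mb = px_w * px_h * channels * bytes_per_sample / 2**20

print(f"{px_w} x {px_h} px (~{megapixels:.1f} MP), ~{size_mb:.0f} MB uncompressed .tiff")
```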
 
While I agree with most of the stuff there, I'm confused as to why post-processing reduces quality.
If you mean the quality of the actual data, I can buy that, but the artistic value of the image, or the quality of the image as a whole?

I mean, that would be like saying an Ansel Adams print is inferior to an Ansel Adams negative...
 
Bummer, I posted a long reply here about why I do not agree with some points, but it went to nirvana!!! :( :(
 
Just to summarise:

There is no unprocessed digital or film image.

And you always have to choose the "settings" or "parameters" for this processing.

Also, images from digital Bayer-pattern sensors have to be sharpened, as they are unsharp by nature.

And if your pixel resolution exceeds the optical resolution of your camera/lens system considerably, then you have too many pixels.
 
Fighttheheathens: I did, in fact, mean the quality of the digital image itself-- not artistic value.

Alex_B: Technically speaking, you're of course correct about no image being unprocessed. But in the context of the "digital darkroom," one can consider a raw image from a digital camera or a drum-scanned negative/positive to be un-post-processed.

As for the Bayer-pattern sensors, they aren't in need of sharpening, they're in need of interpolating the colors recorded by various parts of the sensor. While this means that any given color can only be captured by a maximum of 50% of the sensor (green), relatively speaking, it doesn't make the images any lower in resolution.

And as for the resolution bit, I wasn't referring to optical (lens) vs film/sensor resolution. But I'll entertain that briefly. To begin, I'm not entirely sure what you mean. The resolution of a particular image is an absolute value that is limited by the number of pixels that the film/sensor can capture. There's no such thing as too many or too few pixels when you're talking optical vs capture-able pixels.

What I'm talking about here is scanning negatives, for which there can never be too many pixels. It is correct that a true 1:1 scan is at exactly the resolution of the film (which is dependent upon developing). But scanning at a resolution higher than the native resolution of film can never be a bad thing. Remember that pixels are squares. So if you, say, double the scanning resolution along each axis, you get an image with four times as many pixels: 4 pixels for every one that was in the original image. You might also think of each original pixel as now covering a 2x2 block, twice as wide and twice as tall. So long as your viewing distance (or zoom) is scaled to match, the image looks the same: any given image will be identical to a copy with double the linear resolution viewed at 50% zoom (2 times 0.5 = 1). You could keep multiplying the number of pixels indefinitely, so long as you kept scaling the view back down. It's just math.

Therefore, you can proportionally increase the number of pixels in a photo as much as you like and the image will be the same. Conversely, you can down-sample any image (turn, say, 4 identical pixels into 1) as much as you like, so long as you don't down-sample past the image's native resolution.
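
Here's that round trip as a minimal numpy sketch (a toy array rather than a real scan): doubling the linear resolution turns each pixel into a 2x2 block, and averaging the blocks back down recovers the original exactly.

```python
import numpy as np

# Toy "image": 3x4 pixels, single channel.
img = np.arange(12, dtype=np.float64).reshape(3, 4)

# Upsample 2x per axis by pixel replication: 4 new pixels per original pixel.
up = img.repeat(2, axis=0).repeat(2, axis=1)
assert up.size == 4 * img.size

# Downsample by averaging each 2x2 block: the original comes back unchanged,
# because no new information was added by the upsample.
down = up.reshape(img.shape[0], 2, img.shape[1], 2).mean(axis=(1, 3))
assert np.array_equal(down, img)

print("round trip is lossless for replicated pixels")
```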

So to go back to your point about optical resolution, a film or sensor that can capture more pixels than your lens is just fine. But the opposite, where lens resolution is the higher of the two, is nearly always the case anyway. Like I said, it's about the weakest link here. The resolution of an image on your computer is limited by the lowest-resolution piece of equipment, whether it's lens or film/sensor or scanner.
 
7. Post-processing reduces quality.
...
I did, in fact, mean the quality of the digital image itself-- not artistic value.
I think it would be good to further define "quality" here. Quality of what? I would say that setting a correct white and black point improves the overall quality of the image. If there are no data points beyond 248 and you set your white point there, I don't see how that reduces quality, however you define it.
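
For concreteness, here is roughly what I mean by setting the white and black points, sketched in numpy with made-up values (the 12/248 endpoints are assumptions for the example): the existing tones are stretched over the full range, and nothing that was actually recorded gets thrown away.

```python
import numpy as np

def set_levels(img, black=0, white=255):
    """Linearly remap [black, white] to [0, 255], a basic levels adjustment."""
    out = (img.astype(np.float64) - black) / (white - black) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# Fake 8-bit image whose histogram happens to span 12..248.
rng = np.random.default_rng(0)
img = rng.integers(12, 249, size=(100, 100), dtype=np.uint8)

adjusted = set_levels(img, black=12, white=248)
print(img.min(), img.max())            # 12 248 for this sample
print(adjusted.min(), adjusted.max())  # 0 255
```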

Chemical printing > LightJet > Inkjet

I don't really agree with this either. From the data I've seen, inkjets can get a wider color gamut and tonal range than traditional paper. If you are going for archival longevity, some of the pigment inks are amazing. There are some that are mostly composed of just carbon.
http://www.inkjetmall.com/news/02-16-06.html (scroll to bottom)
I think it depends on what you are going after.

I completely agree with the importance of calibration.
 
one can consider a raw image from a digital camera or a drum-scanned negative/positive to be un-post-processed.

I guess no one here has ever seen a RAW image. The moment you look at a RAW image with a converter or any kind of preview programme, it is translated into something viewable (according to the default settings of that program and whatever you chose for white balance and all).

If you shoot JPG, then this is done in-camera, with the in-camera defaults or creative programme or whatever settings. And some things, like white balance, you can influence while shooting.

But these settings are arbitrary! They mainly represent what whoever defined these settings (the guys at Canon, the people that wrote your converter, whoever) felt best.

The same with film: each film has its own characteristics. Changing film is like changing RAW conversion settings.

So now tell me, why should I not change any of these parameters weeks after the shooting, when I have the chance and think my old choice was not the best? This is, strictly speaking, post-processing!
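
Just to illustrate that these parameters really are choices you can revisit later, here is a sketch using the rawpy library (a Python wrapper around LibRaw). The file name is a placeholder and the white-balance multipliers are arbitrary example values; the point is only that the same RAW data yields different images depending on the settings.

```python
import rawpy

RAW_FILE = "IMG_0001.CR2"  # placeholder path, not a real file

# Conversion 1: trust the white balance the camera recorded at shooting time.
with rawpy.imread(RAW_FILE) as raw:
    rgb_camera = raw.postprocess(use_camera_wb=True)

# Conversion 2: the same data, converted later with different choices:
# manual white-balance multipliers, no auto-brightening, 16-bit output.
with rawpy.imread(RAW_FILE) as raw:
    rgb_custom = raw.postprocess(
        user_wb=[2.0, 1.0, 1.5, 1.0],  # arbitrary R, G1, B, G2 multipliers
        no_auto_bright=True,
        output_bps=16,
    )

# Same shot, two different renderings; neither one is "the unprocessed image".
print(rgb_camera.shape, rgb_camera.dtype)  # 8-bit by default
print(rgb_custom.shape, rgb_custom.dtype)  # 16-bit because of output_bps=16
```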

As for the Bayer-pattern sensors, they aren't in need of sharpening, they're in need of interpolating the colors recorded by various parts of the sensor. While this means that any given color can only be captured by a maximum of 50% of the sensor (green), relatively speaking, it doesn't make the images any lower in resolution.

It gives a lower resolution per colour channel, and hence a lower resolution for two channels per pixel. Interpolating does a great job of giving the right colour feeling, but you lose "sharpness" here. I agree this is different from the sharpness you would lose if you were slightly out of focus. It is a different thing. But it is compensated for by sharpening.

Unless you use high-end cameras beyond 4000 USD, they all by default do a hell of a lot of sharpening when creating the JPEG; with some you cannot even switch that off properly.
The same with the RAW converters.
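
For reference, the sharpening in question is usually some flavour of unsharp masking. A minimal sketch with numpy/scipy, applied to a deliberately softened edge (the radius and amount are arbitrary, and real converters are far more sophisticated):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.0, amount=1.0):
    """Classic unsharp mask: add back the difference between the image and a
    blurred copy of it, which boosts local contrast around edges."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0, 255).astype(np.uint8)

# A hard edge that has been blurred, standing in for the softness
# introduced by interpolating fine detail during demosaicing.
edge = np.zeros((64, 64))
edge[:, 32:] = 255.0
soft = gaussian_filter(edge, sigma=1.5)

crisp = unsharp_mask(soft, radius=1.5, amount=1.5)
```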

The resolution of a particular image is an absolute value that is limited by the number of pixels that the film/sensor can capture. There's no such thing as too many or too few pixels when you're talking optical vs capture-able pixels.

Well, we have to distinguish between the image projected onto the sensor by the optical system (resolution of the optical image) and the MP of the sensor (pixel "resolution"). Those two together are responsible for the resolution of the resulting digital image. The problem is that the term resolution is defined twice: once as how much detail is resolved in an image, and once as pixels times pixels (or, more precisely, as ppi); the latter is computer slang.

The resolution you see in the digital image later (as in details which can be resolved) is mainly determined either by the pixel resolution of the image (if the pixel resolution is too small) or by the optical resolution of your lens (if the pixel resolution or ppi is large enough to reproduce the details of the image projected onto the sensor).

You could produce a 20 MP image with the poor resolution of a 1 MP mobile phone cam.

I usually shoot at 13 MP these days, and I use fairly high-quality lenses only, all fine-adjusted after I bought them. The limitations in my images in terms of resolution are determined by these lenses (as I can clearly see!) and not by the MPs.

If I was now shooting at 40 MP, that would only increase my file size, but not the overall resolution of my image ... the extra pixel density just does not give extra information.
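
One way to put rough numbers on that is the Nyquist criterion: a lens resolving N line pairs per mm needs at least 2N pixels per mm before extra sensor pixels stop adding detail. A sketch with made-up but plausible figures (an assumed APS-C sensor and an assumed 80 lp/mm lens, not measurements of any particular gear):

```python
# Compare an assumed lens resolution with sensor pixel density.
# Nyquist bookkeeping only; this ignores Bayer interpolation,
# anti-alias filters, MTF curves, etc.

sensor_w_mm, sensor_h_mm = 23.6, 15.7   # assumed APS-C sensor
lens_lp_per_mm = 80                     # assumed lens resolving power

needed_px_per_mm = 2 * lens_lp_per_mm   # two pixels per line pair
needed_mp = (needed_px_per_mm * sensor_w_mm) * (needed_px_per_mm * sensor_h_mm) / 1e6

for sensor_mp in (6, 13, 40):
    px_w = (sensor_mp * 1e6 * sensor_w_mm / sensor_h_mm) ** 0.5
    px_per_mm = px_w / sensor_w_mm
    verdict = "lens-limited" if px_per_mm >= needed_px_per_mm else "pixel-limited"
    print(f"{sensor_mp:>2} MP: {px_per_mm:.0f} px/mm vs {needed_px_per_mm} needed -> {verdict}")

print(f"this lens runs out of detail around {needed_mp:.0f} MP on this sensor size")
```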

But scanning at a resolution higher than the native resolution of film can never be a bad thing. Remember that pixels are squares. So if you, say, double the scanning resolution along each axis, you get an image with four times as many pixels: 4 pixels for every one that was in the original image.

Being a film guy I agree here.
I scanned tons of 35mm film, some of it fine-grain pro film. I scanned at 4000 ppi anyway (too lazy to translate this into MP, but it is a lot). That is well beyond what I could see as my film's resolution.
I archive the scans at this resolution, with 16 bits per channel. That gives me file sizes beyond 100 MByte per scanned image. Compression is not an option, since the grain of the film is already well resolved at 4000 ppi, so even smooth parts of the image with no detail (such as the blue sky) are full of film-grain detail.
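
As an aside, that is also why compression buys so little here: once the grain is resolved, even the "empty" areas are full of high-frequency detail. A toy demonstration in Python (a synthetic gradient standing in for sky, with Gaussian noise standing in for grain):

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)

# Smooth "sky" patch vs the same patch with grain-like noise added.
smooth = np.tile(np.linspace(100, 140, 512), (512, 1)).astype(np.uint8)
grainy = np.clip(smooth + rng.normal(0, 6, smooth.shape), 0, 255).astype(np.uint8)

for name, patch in (("smooth", smooth), ("grainy", grainy)):
    ratio = len(zlib.compress(patch.tobytes())) / patch.nbytes
    print(f"{name}: lossless-compresses to about {ratio:.0%} of the original")
```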

NOW, it would make no sense to go to, say, 8000 ppi (meaning four times as many pixels and almost half a gig of storage per image) ... since there is no more detail to be resolved.

So, there is such a thing as too many pixels, and that is when more pixels don't store more information.

The resolution of an image on your computer is limited by the lowest-resolution piece of equipment, whether it's lens or film/sensor or scanner.

Agreed.

And that, in my case, is always the lenses or the film. So going higher in sensor or scanner resolution is useless for me.

Guess we basically agree here, but you should phrase it more like "always archive your digital images at the highest sensible resolution possible, since you can always downsample depending on need".
 
Okay, I have no desire to spend pages addressing such obtuse nit-picking. You have to be right even when you concede a point! Fine, you win.

Somebody close this thread.
 
Sad to hear ... I thought this was starting to be a lively and constructive discussion about (post-)processing. And I was trying to contribute how I understand things.

There have been occasions where I have been given to understand that I should shut up because I was not precise enough.

Now I am trying to be precise, and I am told to shut up for doing so.

But OK, so I will shut up here.
 
Can you point to the sticky where it says everything needs to turn into an argument? I have no problem discussing this stuff. I'm not trying to make anyone shut up, but to the extent that everyone feels the need to be right about everything, I don't feel like participating in that discussion, because I don't think it will go anywhere constructive.
 
Can you point to the sticky where it says everything needs to turn into an argument? I have no problem discussing this stuff. I'm not trying to make anyone shut up, but to the extent that everyone feels the need to be right about everything, I don't feel like participating in that discussion, because I don't think it will go anywhere constructive.

I just tried to explain the basis on which I formed my opinion... but never mind.
 
