RAW vs JPEG

Just thought I would start a new thread after reading about various opinions in the 300D vs 10D thread.

Speaking from the perspective of someone who occasionally writes software to work on RAW files (see http://www.wonderland.org/crw/ for an old page), here is my personal view:

1. CRW+THM (collectively RAW files) are IMHO like unprocessed digital negatives, while JPEGs are prints. What I mean is that you can reprocess the RAW files over and over again, using different settings and even new algorithms and software to get viewable output. JPEGs can also be processed, but the process is always lossy in some way (except rotation and cropping with correct software).

2. 12 bits vs. 8 bits vs. 16 bits: A normal JPEG is encoded as a 24-bit image. The underlying encoding is typically NOT RGB, but the conversion from the typical YUV space to RGB is well known and understood. A D30/D60/10D/300D RAW file presents 12 bits per sensor, where each sensor records only one of red, green or blue. The (white balance first - trust me - and then) demosaicing algorithm recombines these into the more normal image you see. A 16-bit TIFF file made from a RAW file is simply presenting the combination of the colours to a precision of 16 bits per colour - think about how many digits you get when you multiply 1.5 by 1.5 (a bad analogy to anyone with more technical background, but I am trying to keep this simple). There is a rough sketch of the bit-depth difference after this list.

3. Convertors. My favorite is BreezeBrowser (www.breezesys.com) and the guy who writes it (Chris Breeze) is a UK chap who is very helpful and far easier to deal with than the others (Adobe, Phase One, Canon ...).

4. The Future. Who knows what software and algorithmic inventions will increase the quality of images from RAW files in the future? I do know for certain that once you smudge the sensor details together as in a JPEG, you can't go back...
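
Going back to point 2 for a moment, here is a rough Python sketch of why those extra bits matter when you reprocess. The numbers are toy values, not any real converter's pipeline - the point is only that pushing an 8-bit JPEG leaves far fewer shadow levels to work with than the 12-bit RAW data.

```python
import numpy as np

# A smooth, very dark gradient as the sensor might record it, on a 12-bit scale (0..4095).
raw_12bit = np.linspace(0, 255, 1000)                       # deep-shadow values only

# The same scene stored as one 8-bit JPEG channel (0..255): the 12-bit range divided by 16.
jpeg_8bit = np.round(raw_12bit / 16)

# "Reprocess" both by brightening the shadows 4 stops (x16), then write 8-bit output.
pushed_raw = np.clip(raw_12bit * 16, 0, 4095)               # brighten in 12-bit space
from_raw   = np.round(pushed_raw / 16)                      # then quantise to the 8-bit output
from_jpeg  = np.clip(np.round(jpeg_8bit * 16), 0, 255)      # brighten values already quantised to 8 bits

print(len(np.unique(from_raw)), "distinct output levels from the 12-bit data")   # ~256
print(len(np.unique(from_jpeg)), "distinct output levels from the 8-bit data")   # ~17
```

The second number is what shows up as banding or posterisation when you try to rescue shadows from a JPEG.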

It's late, and I am off to bed - my contacts are drying out - so excuse any obvious spelling or possible technical errors. Hope that some of this is helpful to someone.

rgds,
 
Peter Galbavy said:
1. CRW+THM (collectively RAW files) are IMHO like unprocessed digital negatives, while JPEGs are prints. What I mean is that you can reprocess the RAW files over and over again, using different settings and even new algorithms and software to get viewable output. JPEGs can also be processed, but the process is always lossy in some way (except rotation and cropping with correct software).

All analogies, by definition, must break down at some point. But I think I can expand on yours a bit to improve it. Let me know what you think.

The RAW file is more like exposed but unprocessed film than it is like a digital negative. If directly viewed with its true RGB values, it is almost useless. But it contains all the data recorded at exposure.

The demosaicing and related processing that takes the RAW and converts it to a more usable format (typically a TIFF or JPEG file) is akin to developing the film. Just as with film, different development can alter the results - for good or bad. Unlike film, you get about as many shots at different types of development as you care to make time for, since digital development - outside the camera - is non-destructive.

The JPEG or TIFF is more like the developed film. It is directly used to create a finished image (for print, video or computer). It can be analyzed directly. It is not like a print because it seldom represents the final output. It should be modified differently depending on the end use - possibly even depending on the particular printer and paper used when printing.

Photoshop (or other image processing programs) and the output printer are more akin to the printing process. Dodging, fine color correction, final contrast changes, retouching etc. are handled here. The "print" can be an actual physical print or it might be a JPEG that has been sized for email or the web.

As photographers, we can choose to have the camera "develop our film", or we can choose to do that later with more flexibility if we have the camera save RAW files. And this is where the rubber meets the road. Which is better? Greater flexibility in later processing your exposure, or more storage, faster camera performance and greater simplicity? There is no "right" answer.

In the days of the view camera, better photographers would custom develop their negatives, especially if they were large format negatives. In fact, with panchromatic film, they could do this by visual inspection. I used to load my own 35mm cassettes with high resolution, fine grain litho film and use special (hand mixed) developers to produce full tone images. I kept notes and might develop different rolls differently depending on what those notes showed. Film development is a part of photography that most people have little exposure to (pun not intended). The vast majority of film images have been processed using standard "bulk" methods, where automation makes it "one size fits all".

At least with our digital cameras we can customize our "development" to some degree in camera. We can even analyze the results of our choices in the field by examining the image histogram and looking at the image on the camera's LCD. We can retake the image with different "development" settings (white balance, contrast, sharpening and the like) on location if we so choose.

Peter Galbavy said:
4. The Future. Who knows what software and algorithmic inventions will increase the quality of images from RAW files in the future? I do know for certain that once you smudge the sensor details together as in a JPEG, you can't go back...

True. But doesn't the very nature of the Bayer mask mean that you will have to "smudge the sensor details together" at some point? The sensor that only records green will have to smudge in some info from surrounding sensors in order to show that particular mauve hue that was in the actual scene. The benefit of RAW is that we get to retry how we choose to do the "smudging" and other processing.
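
As a rough illustration of what "only records green" means in practice, here is a toy Python sketch (assuming an RGGB layout; real sensors and converters differ in the details):

```python
import numpy as np

# A tiny 4x4 full-colour scene, channels R, G, B, values 0..255.
scene = np.random.randint(0, 256, size=(4, 4, 3))

# The Bayer mask keeps only ONE channel per photosite. Assume an RGGB 2x2 tile:
#   R G
#   G B
bayer = np.zeros((4, 4), dtype=int)
bayer[0::2, 0::2] = scene[0::2, 0::2, 0]   # red sites
bayer[0::2, 1::2] = scene[0::2, 1::2, 1]   # green sites on the red rows
bayer[1::2, 0::2] = scene[1::2, 0::2, 1]   # green sites on the blue rows
bayer[1::2, 1::2] = scene[1::2, 1::2, 2]   # blue sites

# This single-channel mosaic is, conceptually, what the RAW file stores.
# To report red or blue at a green site, a converter has no choice but to
# borrow ("smudge in") values from the neighbouring sites.
print(bayer)
```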
 
Jay Turberville said:
All analogies, by definition, must break down at some point. But I think I can expand on yours a bit to improve it. Let me know what you think.

The more the merrier... as long as the opinions are all based on fact and experience (please!) :)

True. But doesn't the very nature of the Bayer mask mean that you will have to "smudge the sensor details together" at some point? The sensor that only records green will have to smudge in some info from surrounding sensors in order to show that particular mauve hue that was in the actual scene. The benefit of RAW is that we get to retry how we choose to do the "smudging" and other processing.

Yes and no. While in most digital cameras there is an array of microlenses that gently spread light over adjacent sensors to avoid the stair-step effect (AKA anti-aliasing filters, I believe), there are also different algorithms already, with different qualities and execution costs. I cannot find the URL of a nice presentation comparing a number of current methods, but I have this one (http://www.changed.org/pubs/3650-05_final.PDF), which is an algorithm I implemented for a CRW to JPEG project I did once. The results are better than most naive linear or bicubic interpolation schemes, as it tries to account for edges.
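
For anyone curious what the "naive linear interpolation" baseline looks like, here is a minimal sketch for the green channel only (RGGB layout assumed; this is not the algorithm in the linked paper, which looks at edges before deciding which neighbours to trust):

```python
import numpy as np

def bilinear_green(bayer):
    # Naive bilinear fill-in of the green channel from an RGGB mosaic.
    # At red and blue sites the missing green value is the mean of the four
    # orthogonal neighbours, which in an RGGB layout are all green sites.
    # Edge-aware methods would instead interpolate along, not across, edges.
    h, w = bayer.shape
    green = np.zeros((h, w), dtype=float)
    is_green = np.zeros((h, w), dtype=bool)
    is_green[0::2, 1::2] = True            # green sites on the red rows
    is_green[1::2, 0::2] = True            # green sites on the blue rows
    green[is_green] = bayer[is_green]
    padded = np.pad(green, 1, mode="reflect")
    for y in range(h):
        for x in range(w):
            if not is_green[y, x]:
                green[y, x] = (padded[y, x + 1] + padded[y + 2, x + 1] +
                               padded[y + 1, x] + padded[y + 1, x + 2]) / 4.0
    return green
```

Running bilinear_green() on a mosaic like the one in the earlier sketch gives a full green plane; red and blue would be filled in similarly, and better converters use the reconstructed green plane to guide them.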
 
Jay is quite correct in his assertion on the Bayer mask; here is a link which shows what a Bayer image looks like before interpolation produces a raw/jpeg/tiff image file: http://www.ddisoftware.com/reviews/sd9-v-bayer/.
It seems bizarre that a blurring filter is deliberately placed over the CCD, which then has another filter (the Bayer mask) made up of 2x more green than red or blue. After all that, a mathematical calculation interpolates the green/red/green/blue dots to produce a fair full colour representation of a bird.

Good birding all
Dahyon
 
dahyon said:
It seems bizarre that a blurring filter is deliberately placed over the CCD, which then has another filter (the Bayer mask) made up of 2x more green than red or blue. After all that, a mathematical calculation interpolates the green/red/green/blue dots to produce a fair full colour representation of a bird.

If you do not blur the light, then fine detail may be completely lost. Think of the chance that a fine "red" detail lands only on the diagonal green-masked sensors? If your resolution is higher than the resolving power of the lens, then you do not need this filter. This is what Kodak is implying when they leave the anti-aliasing filter off the DCS14n.
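
To put a number on "completely lost", here is a toy sketch (RGGB layout assumed, values invented): a one-photosite-wide red stripe that happens to fall on a column with no red photosites records nothing at all in the red channel, while a slightly blurred version of the same stripe does.

```python
import numpy as np

h, w = 4, 8
stripe_col = 3                      # an odd column: in RGGB, red sites sit on even rows and even columns
red = np.zeros((h, w))
red[:, stripe_col] = 255            # a fine, 1-photosite-wide pure red detail

red_sites = np.zeros((h, w), dtype=bool)
red_sites[0::2, 0::2] = True        # where the Bayer mask actually samples red

print(red[red_sites].sum())         # 0.0 - without blurring, the detail vanishes entirely

# What the anti-aliasing filter does, crudely: spread the stripe over ~3 columns.
blurred = np.zeros((h, w))
blurred[:, stripe_col - 1:stripe_col + 2] = 255 / 3

print(blurred[red_sites].sum())     # > 0 - the neighbouring red sites now catch some of it
```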
 
Peter Galbavy said:
Yes and no. While in most digital cameras there is an array of microlenses that gently spread light over adjacent sensors to avoid the stair-step effect (AKA anti-aliasing filters, I believe), there are also different algorithms already, with different qualities and execution costs. I cannot find the URL of a nice presentation comparing a number of current methods, but I have this one (http://www.changed.org/pubs/3650-05_final.PDF), which is an algorithm I implemented for a CRW to JPEG project I did once. The results are better than most naive linear or bicubic interpolation schemes, as it tries to account for edges.

I found the article on threshold-based recovery interesting, but while they use the term "recover", what they are actually doing is estimating or interpolating color values - although with an apparently better algorithm. So I guess this is the "yes" part of your reply.

I thought the microlens was placed on a per-cell basis and that its function was to concentrate light from a broader area onto the small cell of the CCD, since the actual sensitive cell areas on the CCD are spaced significantly apart and therefore don't cover a lot of area. The microlens increases the cell's effective light collection area and therefore its sensitivity.

http://www.sony.net/Products/SC-HP/sys/ccd/sensor/super_had.html

Or perhaps the antialias filters also use microlenses. But either way, isn't their function to filter out information that has a higher spatial frequency than the sensor cell spacing? In other words, not to spread detail between the cells, but to make sure that detail finer than the cell spacing gets removed. But perhaps it is inevitable that such AA filters will cause some blurring between adjacent cells and add to the blurring introduced by demosaicing interpolation.

Norman Koren has a lot of interesting articles on his site that discuss the technical aspects of digital imaging. His calculations suggest a 20-25% loss in resolution due to the need to interpolate the missing color values from adjacent cells. My resolution testing on Nikon Coolpix cameras is very much in line with this estimate.

http://www.normankoren.com/Tutorials/MTF7.html
 
Jay Turberville said:
I thought the microlens was placed on a per-cell basis and that its function was to concentrate light from a broader area onto the small cell of the CCD, since the actual sensitive cell areas on the CCD are spaced significantly apart and therefore don't cover a lot of area. The microlens increases the cell's effective light collection area and therefore its sensitivity.

You are absolutely correct, and I only plead a bad back and not enough coffee for getting my terms mixed up after all this time. Microlens: gathers light onto a sensor that is smaller than the area dedicated to it. Anti-alias filter: spreads the high-frequency detail a bit (i.e. blurs).
 