
View Full Version : Absorption Spectra of Water



MicroKid
2005-Mar-02, 08:29 AM
Below is the absorption spectrum of water and how the Pancam filters map to the spectrum:

http://img.photobucket.com/albums/v376/MicroKid/AbsorptionSpectraWater.gif

From this, water should appear darker and darker as the wavelength increases, i.e. at around 950-980nm (near IR) water should be much darker than at 480nm (blue).

Here I present a series of six images, ordered by increasing wavelength, centered on the big "Squish" just below the "Broken Pipe" site:

Left 480nm (Blue):
http://img.photobucket.com/albums/v376/MicroKid/BPLeft480nmcrop.jpg

Left 530nm (Green):
http://img.photobucket.com/albums/v376/MicroKid/BPLeft530nmcrop.jpg

Left 750nm (Near IR / Deep Red):
http://img.photobucket.com/albums/v376/MicroKid/BPLeft750nmcrop.jpg

Right 750nm (Near IR / Deep Red):
http://img.photobucket.com/albums/v376/MicroKid/BPRight750nmcrop.jpg

Right 930nm (Near IR):
http://img.photobucket.com/albums/v376/MicroKid/BPRight930nmcrop.jpg

Right 980nm (Near IR):
http://img.photobucket.com/albums/v376/MicroKid/BPRight980nmcrop.jpg

That dark "Squish" from Oppy's tyre, which broke through the surface crust, sure looks like it absorbs light the way water would. Note that Oppy's tyre imprints to the right and left of the "Squish" do not significantly change in their light absorption.

Could this significant darkening of the "Squished" soil brought up by Oppy's tyre be indicative of some form of water-based fluid mixed with the soil under the surface crust?

The images have not been altered except for cropping.

slinted
2005-Mar-02, 08:54 AM
The images have not been altered except for cropping.

It appears that you found that tire tracks exposed a dark material. It is dark in L7 and it's dark in R7...it's dark. But I really question whether what you posted in any way shows a fit to the absorption(?) spectrum of water (I assume you mean liquid?).

The images have not been altered...from what? If you wish for us to participate in a discussion with you about your posting, could you please reference the original images? If they are from the calibrated-to-radiance PDS release, have you applied the corrections to absolute units? (PDS tags radiance_offset and radiance_scaling_factor.) This is the only way in which two images from different filters can be compared in actual units. Otherwise they are just stretched for maximum contrast, and useless for comparison except to say that one material is darker than another in that wavelength.

The reason I ask is that there are a large number of 0-value pixels in the wheel tracks, which suggests that these images have been contrast-stretched. How much? Who knows. The bright patch in the upper left could actually be 10 to 25 times as bright in the near IR as the wheel tracks, as your image suggests, but I doubt it. It could also be 1% brighter for all we know, a difference which is highly exaggerated by the contrast stretching.

MicroKid
2005-Mar-02, 09:05 AM
Here is the absorption spectrum measured in a small spot in the middle of the "Squish", based on the unaltered (contrast, brightness, gamma) images. The difference between the right and left 750nm images was taken into account and adjusted to be zero.

http://img.photobucket.com/albums/v376/MicroKid/AbsorptionSpectraSquish.gif

As I said before, the images are straight off the NASA site. Nothing was done to them except cropping. NO contrast, gamma or brightness adjustments were made. As soon as the Analysis site is back up I'll download the RAD-corrected images.

Anyway here are the image links from Oppy Sol 122:

http://marsrovers.jpl.nasa.gov/gallery/all/1/p/122/1P139018947EFF2809P2266R2M1.HTML

http://marsrovers.jpl.nasa.gov/gallery/all/1/p/122/1P139019045EFF2809P2266R6M1.HTML

http://marsrovers.jpl.nasa.gov/gallery/all/1/p/122/1P139019144EFF2809P2266R7M1.HTML

http://marsrovers.jpl.nasa.gov/gallery/all/1/p/122/1P139018947EFF2809P2266L2M1.HTML

http://marsrovers.jpl.nasa.gov/gallery/all/1/p/122/1P139019045EFF2809P2266L5M1.HTML

http://marsrovers.jpl.nasa.gov/gallery/all/1/p/122/1P139019144EFF2809P2266L6M1.HTML

slinted
2005-Mar-02, 10:46 AM
I'm not sure how you are calculating an absorption spectrum without knowing the strength of the incoming light, but I get a much different relationship between the radiance values (which would relate to reflectance, if one had the exact spectrum of incoming light) for your dark spot when looking at the RAD (Radiometrically Corrected RDR) files.

These are in units of Watts / m^2 / nm / steradian:

L6@483nm 0.0086
L5@535nm 0.0118
L2@753nm 0.0208
R2@754nm 0.0208
R6@933nm 0.0118
R7@1001nm 0.0125


All the images that have come from the raw JPL site have had 'contrast stretching' in the form of different exposure lengths, the transmissivity of the different filters, and the effectiveness of the CCD at those wavelengths. This is exactly why the calibrated files are necessary; without calibration, the pictures' brightness values don't have any absolute meaning with which you could compare.

Fortis
2005-Mar-02, 12:31 PM
You definitely need to calibrate the data back to something like reflectance.

Tom Ames
2005-Mar-02, 02:32 PM
I think this is a good exercise, but there are many reasons that a feature could show increasing optical density with wavelength besides its being water-saturated.

IOW, the facts that:
1. water shows a particular pattern of absorbance/reflectance and
2. a feature shows a similar pattern

do NOT imply that the feature's spectrum is caused by the presence of water.

Bamf
2005-Mar-02, 08:55 PM
Below is the absorption spectrum of water and how the Pancam filters map to the spectrum:

http://img.photobucket.com/albums/v376/MicroKid/AbsorptionSpectraWater.gif

That plot says right on it, "Optical Density". It's not an absorption spectrum, and has absolutely no relation at all to a Pancam reflectance spectrum.

sts60
2005-Mar-02, 09:38 PM
Eh, FWIW, I've heard from a guy who works at a lab in Pasadena :) that there will be some "interesting announcements" about water on Mars at the upcoming Lunar & Planetary Science Meeting (http://www.lpi.usra.edu/meetings/lpsc2005/). No, I don't know anything more. It didn't seem to deserve a new thread, so I dropped the rumor here.

Swift
2005-Mar-02, 10:20 PM
The other thing that you have to be careful of generally (though I don't know if this is a problem in this case) is that in certain areas of the electromagnetic spectrum, different substances can have very similar spectra. For example, if you look at the OH stretching region in the infrared (around 3000-3500 wavenumbers), almost anything with an OH group (water, alcohols, certain oxide minerals with hydrogen impurities) will have similar absorptions. Things like sample form and preparation can become critical.

MicroKid
2005-Mar-03, 01:31 AM
Interesting site with reflectance spectra for quite a lot of materials. I take it the absorptive curve is the reverse?

BTW. the sea water curve looks close to that in my first post:

http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/L/seawater_coast_sw1.9616.gif

Bamf:
In regard to "Optical Density", I would assume that as the density increases, less light is passed through?

MicroKid
2005-Mar-03, 02:48 AM
Here is my analysis of the famous Spirit Sol 007 "Is it MUD?" images:

http://img.photobucket.com/albums/v376/MicroKid/SpectraSpiritSol007.jpg

Looks like the spectrum of the measured spot is close to sea water.

Bamf
2005-Mar-03, 03:21 AM
Interesting site with reflectance spectra for quite a lot of materials. I take it the absorptive curve is the reverse?

BTW. the sea water curve looks close to that in my first post:

http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/L/seawater_coast_sw1.9616.gif

So what? If you're not comparing apples, a superficial shape similarity is meaningless.


Bamf:
In regard to "Optical Density" I would assume as the density increased less light was passed through?
That is incorrect. Optical density is how much light will slow down while in the medium. It has nothing to do with reflectance or absorption.

MicroKid
2005-Mar-03, 03:37 AM
"Bamf"
If you're not comparing apples, a superficial shape similarity is meaningless.

So where do I get a Martian apple to compare against? :wink: Are you saying the absorption spectrum of Martian brine would be nothing like Earth brine's?

BTW here is the Spirit Sol 007 "Is it MUD?" image which I graphed above:

http://img.photobucket.com/albums/v376/MicroKid/SpiritSol007.jpg

And here is the original of the graph I used. I suggest you take it up with them about their use of "Optical Density" as being directly related to absorption spectra:

http://webexhibits.org/causesofcolor/5B.html

Fortis
2005-Mar-03, 04:23 AM
You could also have a look at the ASTER spectral library (http://speclib.jpl.nasa.gov/) which contains a couple of thousand spectra of man-made and natural materials. Also, if you can get the spectra back to something like reflectances, then you could use a spectral angle mapper approach (effectively treating the target spectrum and each library spectrum as vectors and finding the angle between them) to try to identify the most likely candidates. (The smallest angle corresponds to the best match.) :)
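The spectral angle mapper idea can be sketched in a few lines (a minimal, hypothetical `spectral_angle` helper; the sample values are the radiances slinted posted earlier in the thread):

```python
import math

def spectral_angle(target, reference):
    """Angle (radians) between two spectra treated as vectors,
    sampled at the same wavelengths; a smaller angle is a better match."""
    dot = sum(t * r for t, r in zip(target, reference))
    norm_t = math.sqrt(sum(t * t for t in target))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # clamp against floating-point overshoot before acos
    cos_a = max(-1.0, min(1.0, dot / (norm_t * norm_r)))
    return math.acos(cos_a)

# A uniformly brighter copy of the same spectrum has angle ~0:
# the method is insensitive to overall brightness, only to shape.
spectrum = [0.0086, 0.0118, 0.0208, 0.0208, 0.0118, 0.0125]
print(spectral_angle(spectrum, [2.0 * v for v in spectrum]))
```

Because the angle ignores overall vector length, a uniformly brighter or darker copy of the same spectrum matches perfectly, which is exactly why getting the data into reflectance-like units matters more than absolute brightness here.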

Bamf
2005-Mar-03, 04:56 AM
"Bamf"
If you're not comparing apples, a superficial shape similarity is meaningless.

So where do I get a Martian apple to compare against? :wink: Are you saying the absorption spectrum of Martian brine would be nothing like Earth brine's?


I'm saying that you're not actually doing any sort of comparison; you're just making superficial claims about the similarity of those two plots, and not only are you wrong, but what you've done is completely meaningless. You could just as well claim that "when plotted in a polar fashion, the values describe a Star of David and therefore the spectra are kosher", and you'd have an equally meaningful claim.

The whole point behind spectroscopy is that specific materials absorb (or emit) at specific wavelengths, and you have to compare things at the same wavelengths for anything to make any sense. Plotting your two data sources together, anyone can easily see that these two spectra aren't very similar (never mind that the seawater spectrum only has a 3% reflectance).

http://www.mars.asu.edu/~gorelick/plot2.png
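Before any comparison like the plot above, the reference spectrum has to be resampled onto the Pancam band centers so both curves share the same wavelengths. A minimal linear-interpolation sketch (the reference spectrum values here are made up; only the band centers come from slinted's list):

```python
def resample(ref_wl, ref_val, target_wl):
    """Linearly interpolate a reference spectrum onto target wavelengths
    so two spectra can be compared band by band. Assumes ref_wl is
    sorted and brackets every target wavelength."""
    out = []
    for w in target_wl:
        for i in range(len(ref_wl) - 1):
            w0, w1 = ref_wl[i], ref_wl[i + 1]
            if w0 <= w <= w1:
                frac = (w - w0) / (w1 - w0)
                out.append(ref_val[i] + frac * (ref_val[i + 1] - ref_val[i]))
                break
    return out

ref_wl = [400, 600, 800, 1000]      # made-up reference samples (nm)
ref_val = [0.10, 0.20, 0.30, 0.40]
bands = [483, 535, 753, 933]        # Pancam band centers (nm)
print(resample(ref_wl, ref_val, bands))
```

Only after both curves are on the same wavelength grid does a band-by-band comparison (or the spectral angle mapper mentioned earlier) mean anything.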

Bamf
2005-Mar-03, 06:33 AM
And here is the original of the graph I used. I suggest you take it up with them about their use of "Optical Density" as being directly related to absorption spectra:

http://webexhibits.org/causesofcolor/5B.html

So much for the short answer. In my haste, I misspoke.

Optical density is the base-10 logarithm of opacity (D = log10(O)),
opacity is the reciprocal of transmission (O = 1/T),
and transmission is indeed a measurement of absorption.
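As a quick numeric sketch of those relations (plain arithmetic, nothing library-specific; the helper name is just for illustration):

```python
def transmission_from_density(d):
    """Fractional transmission T from optical density D,
    using the relations above: D = log10(1/T), so T = 10**(-D)."""
    return 10 ** (-d)

# D = 1 means 10% of the incident light is transmitted; D = 2 means 1%.
print(transmission_from_density(1.0))
print(transmission_from_density(2.0))
```

So an optical density of 1 already means 90% of the light fails to make it through in transmission.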

However, things like rocks and mud and pretty much anything Pancam is going to image other than the atmosphere are all opaque, and therefore don't have absorptions in transmission (or more properly, they have 100% absorption). My original point though (however poorly stated), that the optical density of material X is unrelated (or at least not meaningfully comparable) to a Pancam reflectance spectrum, is true.

And, if you happen to have a material that isn't completely opaque (like a body of liquid or a thin film), you still can't compare reflectance spectra to transmission spectra. Here's an example that shows the difference between reflectance, emission and transmission spectra for the same (transparent) material. While they look similar, you clearly can't get from one to another.

http://minerals.gps.caltech.edu/FILES/quartz_spectra_comparison.jpg (http://minerals.gps.caltech.edu/FILES/quartz_spectra_comparison.jpg)

slinted
2005-Mar-03, 07:37 AM
Looks like the spectrum of the measured spot is close to sea water.
Again, I don't think you are using the data correctly (or I'm not), as I got wildly different values for that part of the "Magic Carpet" image taken by Spirit.

Values are in units of Watts / m^2 / nm / steradian:

L7@440nm: 0.0071
L6@483nm: 0.0102
L5@535nm: 0.0137
L4@602nm: 0.0251
L3@673nm: 0.0271
L2@753nm: 0.0231
R3@803nm: 0.0192
R4@864nm: 0.0157
R5@903nm: 0.0141
R6@933nm: 0.0129
R7@1001nm: 0.0137

These values look nothing like the curve you posted.

I understand you are using ImageJ to look at these images, but I would really like to encourage you to read the Software Interface Specification (http://pds-geosciences.wustl.edu/geodata/mer2-m-pancam-2-edr-sci-v1/mer2pc_0xxx/document/camera_dpsis.pdf) with regard to two keywords that are included in the PDS tags of every radiometrically calibrated file, RADIANCE_OFFSET and RADIANCE_SCALING_FACTOR (pages 128-129). With regard to how these are treated, I'll quote:

"RADIANCE_SCALING_FACTOR Provides the constant value by which a stored radiance is multiplied.
RADIANCE_OFFSET Provides the constant value by which a stored radiance is added.
Note: Expressed as an equation:
true_radiance_value = radiance_offset + radiance_scaling_factor * stored_radiance value"

The stored radiance value is the 16-bit number included in the RAD files. If you open them with a program capable of handling 16 bits of information (values range from 0 to 65535), like NASAView (http://starbeam.jpl.nasa.gov/tools/license.html), you can read out the stored radiance values (listed as DN in NASAView) for whatever area interests you. Then take those values and apply the above formula using the radiance_offset and radiance_scaling_factor included in the PDS tags of that file, and you'll have calibrated radiances with which you can start to honestly compare spectra. As I stated earlier, this isn't going to give you a reflectance spectrum unless you figure out the spectrum of incoming light and divide by it, and it certainly doesn't directly relate to optical depth, but it is a start toward looking at these files with a fair treatment of the data.

If you are just taking a RAD file and saving it as an image, even with something like NASAView, you are completely ignoring these two values, which differ between every frame, thus preventing you from comparing any two files with each other on an absolute scale.
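For what it's worth, applying the quoted formula is a one-liner (the helper name and the tag values below are illustrative; real RADIANCE_OFFSET and RADIANCE_SCALING_FACTOR values come from each file's own PDS label):

```python
def true_radiance(stored_dn, radiance_offset, radiance_scaling_factor):
    """Apply the formula quoted from the SIS:
    true_radiance = RADIANCE_OFFSET + RADIANCE_SCALING_FACTOR * stored_DN."""
    return radiance_offset + radiance_scaling_factor * stored_dn

# Made-up tag values, just to show the shape of the calculation:
offset, scale = 0.0, 1.0e-6
print(true_radiance(20000, offset, scale))  # radiance in W/m^2/nm/sr
```

Since offset and scale differ for every frame, the same stored DN in two different files generally maps to two different physical radiances, which is exactly why the raw pixel values can't be compared across filters.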

MicroKid
2005-Mar-03, 07:56 AM
Hi Slinted,

I use the PDS add-in for ImageJ, which does import the .IMG images as 16-bit data. I then save them as lossless, uncompressed 16-bit .PNG. Have you a suggestion for a better format to save them in?

Thanks for the time you spent to reply. I'll read the document you mentioned and get back.

I did find a spectrum for (water + clay) mud which seems to be a match, but I'll redo the plots with your suggestions.

slinted
2005-Mar-03, 08:02 AM
I use the PDS add-in for ImageJ which does import the .IMG images as 16 bit data. I then save them as lossless & non compressed 16 bit .PNG. Have you a suggestion for a better format to save them in?

I doubt the PDS add-in includes the calculation of actual radiance using those two tags I mentioned, as they are a very specific format used (I could be very wrong about this) exclusively in the MER rovers' PDS releases. If you can read out the 16-bit .png files as integers (like I said, ranging from 0 to a max of 65535, not the 255 max of an 8-bit image), then you can apply the scaling factor and offset by hand, by opening up the tags file in NASAView for each image.

MicroKid
2005-Mar-04, 12:11 AM
Hi Slinted,

I'm an electronics design engineer (embedded stuff), so I do understand binary systems. In looking at your recent site photos, I feel the images are lacking contrast due to the RAD images being scaled for bright and dark bad pixels. This compresses the real dynamic range and lowers contrast relative to the non-RAD images.

BTW, the dynamic range for 12-bit data is 0-4,095. The RAD images are stored as signed 16-bit integers, or 15 bits of effective data plus one sign bit. So if bad bright pixels are scaled as 32,767 (max of 15 bits) and bad dark pixels are scaled as 0, everything else (the real data) gets squashed up and down, and the result is lower-contrast images. Check out the histograms on the RAD images and you will see what I'm talking about. Most of the image data is in the lower half of the data range.

Here is an example:

http://img.photobucket.com/albums/v376/MicroKid/BrightPixel.jpg

ImageJ's 16-bit histogram counter found 461 pixels with a brightness of 32,766 in this image, with the real data starting at about 19,000.

How to fix?

We need to adjust the histogram dynamic range upward to eliminate the undesirable effect of the bright bad pixels and thus restore the image's dynamic range. But then how much to expand upward?

This processing error sure makes RAD images somewhat questionable.

slinted
2005-Mar-04, 01:01 AM
The main reason I love the RAD files is that they already did all the work for us. I think I understand your concern re: the histograms, as some obviously bad pixels did make it through to the final images. But this doesn't change their calibration procedures, except to say that bad pixels are just that, bad data. Let's reduce this to its simplest elements:

You have a bin on the CCD behind a filter that responds to light by filling up with electrons, which gets read out as a DN value for each pixel. They know the responsivity of the filter+CCD, which is to say that they know how the DN responds as a function of the strength of light and the length of the exposure. If you had two images side by side that look identical in their raw form, but one was exposed twice as long as the other, you could know that the second image had light that was half as strong by applying those responsivities.

Now, let's say you have an image which gets overexposed, so the DN of certain pixels goes to its max value. In the time after those 'bad pixels' (or in some cases, just a relatively very bright object) are full, the other pixels continue to fill up. In terms of histograms of the raw images, the longer the exposure, the further you push the histogram to the right. Does this change the correspondence of the DN to the actual strength of light for the 'good' pixels? No.

This is the beauty of the calibration procedures: if two images are taken of the same target one right after another, with different exposure times, the raw images would look different, but the calibrated radiance will be found to be the same for both images when one takes the exposure time into account. This is the purpose of the scaling factors: they take the data values derived from the DNs and put them in real units which can be compared.

Now, let's look at what you are suggesting we do with the histograms of images with bad pixels. If you were to take the image and stretch the histogram out such that 'good' pixels now occupy the full range, you would need to also adjust the scaling factors accordingly; otherwise you just arbitrarily changed the calibrated values. There isn't anything in this procedure that says the original images actually filled the full dynamic range; most of them probably didn't.

The only bad data that comes out of this relates to those bad pixels. In the image you posted, some pixels occupy the 32,766 value. If we apply the scaling factors to that pixel, we'd read out that some inordinately high radiance was responsible for it. This doesn't change the fact that radiance values for the rest of the pixels can still be calculated based on exposure and filter.

Another case of bad pixels would be those images which got overwhelmed with actual light, filling the bins to their max. At the moment the bins got filled, they stopped responding to the normal DN/second function. This means that radiance values for these sorts of bad pixels aren't calculated as being high enough (the DN should have continued to rise, but it didn't due to the bin being filled).

MicroKid
2005-Mar-04, 03:01 AM
Slinted,

I think the problem is that the bad pixels and the good dynamic data are used to set the scaling for the RAD signed integer data.

The procedure as described in the documents you reference on your site suggests the only adjustment for bad pixels was done at the factory via the flat-field file data set (page 8, MER/Pancam Data Processing User's Guide). Thus any new bad pixels which have developed since then are considered part of the dynamic range of the data captured via the CCD, and the RAD files are adjusted accordingly, reducing the real dynamic range.

As you point out the problem of the elimination of the newly developed bright and dark bad pixels is not so easy.

I suggest the flat-field file data sets would need to be updated to include the newly acquired bad dark and bright pixels (so they can be eliminated from the dynamic data), then reapplied against the real original raw data to get the true dynamic range of the RAD images.

Alpha_Tauri
2005-Mar-06, 05:31 AM
Hi Microkid,

You might know me as Aldebaran from another forum.

Another point that I'd make is that the Pancam emission levels, even when corrected for radiance, are not very useful for identification of water. Why even bother looking at the near infrared when TES reports are available for a much more useful range? These are already calibrated for you, and even the most casual inspection will show no emission peak at 1640 cm^-1. If there was water present as free water (as opposed to water of crystallisation), you would see a very strong peak.

I agree with the points made regarding comparing apples with pears.
The other problem is that a great many substances, hematite included, absorb strongly in the near infrared. The dark substance that you are seeing could just as easily be hematite.

I know Levin has made a similar point, and well, the least said about that the better.

- Jim

MicroKid
2005-Mar-06, 11:06 AM
"Alpha_Tauri"

Another point that I'd make is that the Pancam emission levels, even when corrected for radiance, are not very useful for identification of water. Why even bother looking at the near infrared when TES reports are available for a much more useful range? These are already calibrated for you, and even the most casual inspection will show no emission peak at 1640 cm^-1. If there was water present as free water (as opposed to water of crystallisation), you would see a very strong peak.

Hi Aldebaran,

The only problem with the mini-TES data is that it was not used at all on Oppy's Sol 122. As a side note, I wish NASA would provide mini-TES locators for all the mini-TES data sets. It would sure help to know what they know as to where it was pointed. :(

I have found spectral data for various clay and water mixtures which do get very close to the "Magic Carpet" curve I obtained. I'm working with the RAD data to get the real floating-point data adjusted for the Martian solar spectrum as per the MER Data User's Manual, to get a better data set.

Here are the curves based on the integer RAD data sets so far:
http://img.photobucket.com/albums/v376/MicroKid/MagicCarpet.png

Data from:
http://pubs.usgs.gov/of/2003/ofr-03-395/datatable.html

There are 11 hematite data sets on the above site. Which one to use? For most, the absorption seems to be heavy in the visible and decreasing across the IR bands used in the Pancam filter range.

http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/M/hematite_fe2602.3158.gif

BTW, the heavy absorption only occurred for the soil brought up from deeper underground. The rover tracks to the right and left of the "Squish" site show very little absorption in the IR, even though hematite (BBs) is visible in the images. Are you suggesting the hematite concentration increases the further down into the soil Oppy digs? Doing the IR comparisons on the dug trenches does not show increasing IR absorption the deeper Oppy digs.

01101001
2005-Mar-06, 11:18 AM
Alpha_Tauri, I didn't see anyone give you a greeting, so...

Welcome to BABB.

slinted
2005-Mar-06, 10:33 PM
Slinted,
I think the problem is that the bad pixels and the good dynamic data are used to set the scaling for the RAD signed integer data.

I don't know what to say, except that I disagree. I agree that bad pixels are bad data in the RAD files, but I fail to see how those bad pixels make the calibrated values for the good pixels bad as well.

The scaling factors are determined by hard numbers, in a bottom-up approach, that would be unaffected by the presence or absence of bad pixels. (I hope this explanation makes sense; I've been struggling with a way to get this across.)

Let's assume that the first steps in the calibration procedures are already done (8->12 bit conversion, bias, dark frame, smear and flat field). Let's also assume that some of the hot pixels that have shown up as the mission progressed were not corrected for, and remain at higher values than they should be.

Calculating the responsivity coefficients seems to be determined exclusively by 3 values:
1) the responsivity of the CCD+filter at the temperature the image was taken, which tells you how quickly the DN value will go up given a light source of radiance "x" (these were calibrated in the lab before leaving Earth, and have nothing to do with the dynamic range of the images),
2) the exposure time,
and 3) the DN value of the corrected images.

Let's take a boring made-up image, one which has values of 100 DN at every pixel, was exposed for 1 second, and has (oversimplified) a responsivity of 1 DN per second per W/m^2/nm/sr.

The calibrated radiance for all the pixels would be 100 W/m^2/nm/sr based on this calibration.

Now let's take that same boring image and give it some hot pixels. 99 percent of our image will be those same 100 DN pixels, while 1% of the image has hot pixels that weren't removed by the flat field, which have values of 32,766.

Let's assume the same 1-second exposure, and the same 1 DN per second per W/m^2/nm/sr responsivity.

The values of 100 DN are unaffected by the hot pixels, and would *still* be calibrated to the correct radiance of 100 W/m^2/nm/sr. It would only be the hot pixels that would read incorrectly, as an unreasonable value of 32,766 W/m^2/nm/sr.
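That toy example can be written out directly (a hypothetical `calibrate` helper; the point is only that each pixel is converted independently, so hot pixels can't drag the good pixels' calibrated values around):

```python
def calibrate(dn_values, exposure_s, responsivity):
    """Per-pixel calibration: radiance = DN / (exposure * responsivity).
    Each pixel is converted on its own; no histogram stretching occurs."""
    return [dn / (exposure_s * responsivity) for dn in dn_values]

# 9 good pixels at 100 DN plus one hot pixel stuck at 32766,
# 1 s exposure, responsivity 1 DN per second per W/m^2/nm/sr:
image = [100] * 9 + [32766]
rad = calibrate(image, exposure_s=1.0, responsivity=1.0)
print(rad[0])   # good pixels still come out at 100 W/m^2/nm/sr
print(rad[-1])  # only the hot pixel reads absurdly high
```

Contrast this with a histogram stretch, where the hot pixel would define the top of the range and pull every other pixel's value down with it.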

It seems like you are suggesting that they assign calibrated values to the high and low ends of the dynamic range, then just fit the rest of the pixels in between. This is not the case. They assign calibrated values to each bin individually based on its particular DN. I don't see how the bad pixels could affect the calibration without their having 'automatically adjusted the histograms' or something similar, which they don't do.

If what you were suggesting were true, then images of the same target, taken at different times, wouldn't show much similarity, since they would be stretched to whatever the hot pixels determined the dynamic range to be in that particular exposure. I haven't found that to be the case at all; those times when they have repeat-imaged a target, the differences that do exist can easily be explained by differences in brightness because of changes in time of day. Other than that, they are identical.

I know that looking at the histograms of the calibrated data might lead you to believe that something is wrong, because the hot pixels do appear strange at the very high end of the range. But as far as I can tell from reading the documentation of their calibration procedures, I can assure you that if they eliminated the hot pixels entirely, and you had a nice clean histogram that filled the entire range, the calibration factors would also change, yielding the exact same calibrated radiances.

Alpha_Tauri
2005-Mar-06, 10:47 PM
There are 11 Hematite data sets on the above site. Which one to use? For most the IR absorption seems to be heavy in the visible and decreasing in the IR bands as used in the PamCam filter range.

No, I just gave hematite as an example. Some forms of hematite do absorb in the same way in the near IR.

I should have given the example of Olivine, which shows a marked increase in absorbance with increasing wavelength, at least over the very narrow range of the Pancam. If you asked me, olivine would be much more likely than ice.

The point is that the TES instrument has not yet indicated the presence of free water, so why pursue this idea? As a previous poster has said, you need to eliminate those substances that we know are present first. I've already provided Olivine as an example that would provide this kind of absorbance spectrum.

It is likely that less-oxidised substances occur under the immediate regolith. We know for example that the material contains hydrated sulfates such as magnesium, calcium and iron sulfates. Have you eliminated these? I think you'll find that some hydrated minerals will give a similar result.

MicroKid
2005-Mar-06, 11:46 PM
Hi Aldebaran,

I have no problem with the conversion of the RAD integer data back into the real floating-point version, and yes, the bad pixels will not alter the accuracy of the conversion process. All that changes are the scaling and offset.

What does alter (and this was my point) is the apparent contrast of the integer-based RAD images if they are used directly to make RGB colour images. When bad dark and bright pixels newly acquired since the "Flat Field" data sets were created (and thus not removed by the "Flat Field" process) are mapped into the same 15-bit integer data space as the dynamic data from the imaged scene, the resultant RAD "image" will have reduced contrast (less dynamic range) due to being squashed into a tighter/smaller data range by the bad dark and bright pixels.

What I was suggesting was that RGB colour images made from the 15-bit integer-based RAD files will not give a good indication of the real contrast in the imaged scene. It was never my suggestion that the recoverable real floating-point values of the dynamic imaged-scene data are incorrect.


They assign calibrated values to each bin individually based on its particular DN. I don't see how the bad pixels could affected the calibration without their having 'automatically adjusted the histograms' or something similar, which they don't do.

This assignment is the "Flat Field" data set, which tries to eliminate pixel-to-pixel variations which existed pre-launch. It does not seem to be the unique scaling and offsets which are generated for each RAD image.

What the MER Pancam document does not describe is how the scalings and offsets are calculated, because it is this data which decides how the real floating-point data is mapped into the 15-bit integer data space of the RAD file. The scaling sets the dynamic range, and the offset the position of the dynamic data.

How are the radiance_scaling_factor and radiance_offset calculated, as they vary even for RAD images taken with the same camera/CCD and filter combo? It would appear that the scaling is dynamically calculated for each RAD image by something like (max(corrected radiance) - min(corrected radiance))/32768.
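That conjecture can be sketched as follows (entirely hypothetical; nothing here is from the actual MER pipeline, and the documents quoted in this thread don't confirm this is how the factors are derived):

```python
def pack_to_int15(radiances):
    """MicroKid's conjecture: map floating-point radiances into the
    15-bit positive range of a signed 16-bit integer, returning
    (offset, scale, stored) such that radiance ~= offset + scale * stored."""
    lo, hi = min(radiances), max(radiances)
    scale = (hi - lo) / 32767 if hi > lo else 1.0
    stored = [round((r - lo) / scale) for r in radiances]
    return lo, scale, stored

offset, scale, stored = pack_to_int15([0.0086, 0.0118, 0.0208])
# The round trip recovers each radiance to within half a quantization step
print([offset + scale * s for s in stored])
```

If the pipeline really does something like this, then a single hot pixel at the top of the range would stretch `scale`, which is exactly the contrast-squashing effect described above, without making the recovered floating-point values wrong.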

Alpha_Tauri
2005-Mar-07, 07:12 AM
01101001 oh binary encoded one, thanks for the welcome. You have a very nice Bulletin Board here.

Microkid,

I think you were replying to Slinted, but your point on the radiance correction is irrelevant. You've obviously gone to a lot of trouble with this, but I have pointed out at least one mineral commonly found on Mars that would give a similar response over this narrow range.

In short, your argument has been invalidated.

slinted
2005-Mar-07, 07:35 AM
What the MER Pancam document does not describe is how the scalings and offsets are calculated, because it is this data which decides how the real floating-point data is mapped into the 15-bit integer data space of the RAD file. The scaling sets the dynamic range, and the offset the position of the dynamic data.


It is necessary for the person making color images to scale them accordingly. As you said, the scales of the floating-point data files themselves are inappropriate. Google around for "tone mapping", as this may help you get started.

It is important to note that all filters must be scaled to an identical scale to retain true color. If you were to histogram-stretch each filter individually, you would effectively be changing the colors in the image. While this yields some of the most interesting color views, showing some of the greatest chromatic contrast, they are false-color (or enhanced) views of the scene.