
1. A human mind can extract more information from an in-focus image than from an out-of-focus one. But do in-focus images contain more information (less entropy) in the physical sense than out-of-focus images? For example, in elementary optics there are situations where a system of lenses projects an in-focus image on a screen at one location, but the image is out-of-focus at other locations. Is any information "lost" if the screen is placed where the image is out-of-focus?

2. Any plate of film will hold or store a particular amount of light; how much depends on the chemical substrate that makes up the film.

Every piece of film can hold a predetermined amount of light, and that amount is supposed to be close to the same from one picture to the next.

So it can be said that film is pretty uniform, picture to picture, in its ability to store light in image form.

Same with digital: all images can be set to a predetermined resolution and as such will have a uniform storage capacity.

Were you asking whether an unfocused image has less info than a sharp, in-focus image?

Well, when you focus light into an image you are taking the rays of light that make up that image and concentrating them into one place, so a focused beam puts more energy into a given area than an unfocused one.

I would assume that the unfocused image still carries all of the information needed to make the whole image, but the focal plane can only be a certain depth, and that depth of field can change; it is itself variable.

Imagine looking at a pencil through a camera. The eraser is close to you and you can see down to the sharpened graphite tip.

On the side you can see the name of the pencil and the lead softness; you notice it is a number two. The alphabet is written along the side, with "a" starting at the eraser and "z" at the rough cone of the tip where the sharpener stopped.

Depth of field is how much of that length is in focus at once. You can move the focal plane up and down the pencil as you focus, from a sharp, clear eraser with an ever-blurring pencil dropping away toward the tip.

If you move that focal plane down the pencil, revealing the letters as you go, the eraser goes out of focus, then a, then b, then c...

Depth of field is how many letters will be in focus: the deeper the field, the more letters you see.

So your focal plane is where the information is coming from.

So of course, if the above is correct, information is lost from the image when the focal plane of the lens does not encompass the subject's location in space.

As for total information, the lens is merely collecting the most information from whatever focal plane it happens to be set at.

Is that right? Anybody?

That is the focus: if an image is out of focus, it is carrying information from a different focal plane.

3. How exactly does one define "information" for this purpose?

4. I don't know! As an "operational" definition, I think that if you can define an algorithm that transforms the data in an unfocused image into the focused image, and perhaps a different algorithm that does the reverse, then the two images contain the same information. (I'm assuming the two images are created by moving the screen in the example to two different locations, so the "source" of the images is the same.)

5. Though I could be wrong, I think the answer is yes. The reason is, I think you can transform a fuzzy picture into a sharp one, but the opposite is not possible without tricks. Also, if you think of a piece of paper with the letters getting progressively smaller at the bottom (like the liability contract in the original Willy Wonka movie), then the sharp image will contain that information but the unfocused one will not.

6. Well, there's only so much information you can get from a low resolution image - unlike what they show you in detective shows on TV, you can't 'enhance' a pixellated image and magically bring out the detail hidden in the pixels.

I'd be inclined to say the same about blurry 'analogue' images too. The sharp image contains the maximum "information" for the scene, a blurry one contains a degraded version of that information, and therefore should be incomplete.

7. Originally Posted by EDG_
Well, there's only so much information you can get from a low resolution image - unlike what they show you in detective shows on TV, you can't 'enhance' a pixellated image and magically bring out the detail hidden in the pixels.

I'd be inclined to say the same about blurry 'analogue' images too. The sharp image contains the maximum "information" for the scene, a blurry one contains a degraded version of that information, and therefore should be incomplete.
but I think that interpolation can be done between pixels and an enhancement can be made that would be 'approximate'.

8. Originally Posted by sithum
but I think that interpolation can be done between pixels and an enhancement can be made that would be 'approximate'.
Nope. If you've got, say, a single 'big pixel' that is the equivalent of 8x8 smaller pixels, and that big pixel is just the average of the 8x8 that comprise it, there is no way you can tease out what was in the 8x8 pixels. You just don't have enough info to do it remotely accurately, or even approximately very well.
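The "big pixel" point can be sketched in a few lines of Python (a toy illustration only, not any real codec, and using 2x2 blocks as a small stand-in for 8x8): averaging is a many-to-one mapping, so no algorithm can invert it.

```python
# Toy model of one "big pixel": the average of a small block.
# Many different blocks collapse onto the same average, so the
# average alone cannot tell you which block you started with.
import itertools

def block_average(block):
    return sum(block) / len(block)

# Two very different 2x2 blocks with the same average:
checker = [0, 255, 255, 0]   # checkerboard
stripes = [255, 255, 0, 0]   # stripes
assert block_average(checker) == block_average(stripes) == 127.5

# Count how many 2x2 black/white blocks map onto each average value:
counts = {}
for block in itertools.product([0, 255], repeat=4):
    avg = block_average(block)
    counts[avg] = counts.get(avg, 0) + 1
print(counts)   # {0.0: 1, 63.75: 4, 127.5: 6, 191.25: 4, 255.0: 1}
```

Six distinct blocks all average to 127.5; the averaged data contains nothing that distinguishes them.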

This was pretty bad in the Galileo mission, which was sending back lossy compressed images that basically did exactly that - it averaged out blocks of pixels, like really bad JPEG compression. When the compression was overdone (IIRC, the imaging team had to take an educated guess at what level of compression to use before the images were taken) you could hardly make out anything useful in the image at all, and there was no way to extract that info out of the picture. And it was especially bad in the areas that didn't have much variation in brightness, because that's where the algorithm would compress things the most, so flat areas had all their detail destroyed by the compression. Fortunately they sent back other images that used lossless compression, and most of the time the lossy compression algorithms weren't too bad, but I remember several occasions while working on the images for my PhD where I was really frustrated by the compression blocks...

9. I think your mention of entropy is very relevant. The light beam of a projector contains all the information in the slide, but it is only organised when focussed on the screen. An analogue would be differently coloured beads in a glass jar. Arranged in layers, one could calculate the number of beads by measuring the depth of each layer. Shake the container, and counting the beads would require sampling the number of each colour in a given area and a statistical calculation. The result would be a probability, a rougher estimate than the layer method, because the beads are less organised.

John

10. Originally Posted by JohnD
I think your mention of entropy is very relevant.
Yes, the whole idea of information goes back to Claude Shannon's paper A Mathematical Theory of Communication, in the Bell System Technical Journal in 1948. He discusses entropy and information.
Originally Posted by Jens
Though I could be wrong, I think the answer is yes. The reason is, I think you can transform a fuzzy picture into a sharp one, but the opposite is not possible without tricks. Also, if you think of a piece of paper with the letters getting progressively smaller at the bottom (like the liability contract in the original Willy Wonka movie), then the sharp image will contain that information but the unfocused one will not.
Did you mean that the other way around?
Originally Posted by HenrikOlsen
So basically, if the blurred image could be recorded noiselessly to infinite precision, it would be possible to reconstruct the sharp image, since the information would be there?
It depends upon how the blurring is accomplished. Clearly, if two pixels had values A and B, and the resulting pixels were blurred to (A+B)/2, even with the knowledge of how they were blurred you would not be able to reproduce the original pixels in the original order with any certainty.
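The (A+B)/2 example above can be made concrete with a minimal sketch: once two pixels are replaced by their average, the original pair is just one of many candidates, and nothing in the data singles it out.

```python
def blur_pair(a, b):
    """Replace two pixel values with their average, as in the example above."""
    return (a + b) / 2

# Distinct originals, identical blurred result:
assert blur_pair(10, 20) == blur_pair(20, 10) == blur_pair(0, 30) == 15.0

# For 8-bit pixels, count how many (A, B) pairs are consistent with an
# observed average of 15.0; the blur gives no way to choose among them.
candidates = [(a, b) for a in range(256) for b in range(256)
              if blur_pair(a, b) == 15.0]
print(len(candidates))   # 31 equally valid originals
```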

11. If there were the same info in the unfocused image, it would have sufficed to process the slightly out-of-focus Hubble images by computer, rather than go to the huge expense of installing a correcting lens.

12. If someone gets you a copy of a book you want except it's in Chinese and you can't understand it, does it contain less information?

Of course not!

Let's put it another way. If the CCD on Hubble could sense the phase, polarization, and incident angle of photons the image could have been reconstructed although out of focus.

This brings up a rather obvious point: information often gets lost in translation.

13. A huge issue here in the real world is signal-to-noise. Even for a perfect detector, there is Poisson noise from the stochastic arrival of photons. This limits image reconstruction even when the blur pattern is known exactly. Blurring spreads the signal over more pixels, and simultaneously has Poisson noise from multiple pixels contributing to the error in each reconstructed image pixel.

The attached example is from a 1991 paper of mine demonstrating the performance of the sigma-CLEAN algorithm. The main point is that it works well for S/N limited only by the machine's representation, and the fidelity of reconstruction goes downhill rapidly with higher background or shorter exposure. An additional issue with deconvolution and PSFs as structured as the old HST system is that there will often be artifacts which aren't distinguishable in an obvious way from real features.

14. So basically, if the blurred image could be recorded noiselessly to infinite precision, it would be possible to reconstruct the sharp image, since the information would be there?
But when noise is present, then the blurrier the image, the more pixels each sharp pixel's information is spread across, so each contribution holds less of it and is affected more by the noise, and the information quickly disappears below the noise threshold?
And since every real-life image has at least quantization noise, there's a limit to how much sharpening can be done for real-life images.

15. Originally Posted by HenrikOlsen
So basically, if the blurred image could be recorded noiselessly to infinite precision, it would be possible to reconstruct the sharp image, since the information would be there?
Up to the limitation that there are certain patterns to which a given blur pattern (point-spread function = PSF) is "blind". This was worked out (or at least magisterially described) by Ronald Bracewell in the Fourier domain; any finite sampling of the PSF will have zeros in the Fourier domain, so it will not detect any distribution of intensity whose Fourier transform is zero except at those spatial frequencies and directions. This is a big problem for sparse sampling with interferometers, but it also enters into direct imaging. In real life, noise matters more in this application: deconvolution amplifies high-frequency noise, which is easy to see from the observation that deconvolution in the image plane is division in the Fourier domain, FT(deconvolved image) = FT(observed image)/FT(PSF). The "invisible pattern" issue shows up clearly in this case: what happens where FT(PSF) = 0? This limit applies whether in or out of focus, whether the PSF is a nice Gaussian or a complicated set of sidelobes.
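The "blind pattern" point is easy to demonstrate numerically. Below is a small sketch (using numpy, with a crude 3-pixel boxcar standing in for a PSF; the sizes are arbitrary choices): the blur's transfer function has exact zeros, and a pattern at one of those spatial frequencies is annihilated outright, so FT(observed)/FT(PSF) would be 0/0 there.

```python
import numpy as np

N = 12
psf = np.zeros(N)
psf[:3] = 1 / 3                      # crude 3-pixel boxcar "blur"
H = np.fft.fft(psf)                  # transfer function of the blur

# The transfer function has exact zeros at spatial frequencies k = 4 and k = 8:
assert np.allclose(H[[4, 8]], 0)

# A pattern concentrated at one of those "blind" frequencies...
pattern = np.cos(2 * np.pi * 4 * np.arange(N) / N)
blurred = np.real(np.fft.ifft(np.fft.fft(pattern) * H))

# ...is wiped out entirely, not merely attenuated; no deconvolution
# can bring back what was never captured.
assert np.allclose(blurred, 0)
```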

16. "Information" is measured in bits (thank you, Claude Shannon), and we can calculate the total bits in an image in any number of ways. The easiest is just to calculate the spectral flatness measure over the image, which will tell you how many dB you can recover from redundancy. The rest is information, or as close as one can get without considering nonlinear modelling.

You could do the same with high-order LPC models, adaptive models, entropy models, complex lzw coding (2 d makes it interesting), etc.

You could also take a transform (e.g. a DCT) and calculate the entropy of the resulting coefficients.

All of these are going to be convinced that more high spatial frequency content means more information, and "more high spatial frequency content" is directly equal to "sharper" images.

Also, you can only transform a fuzzy picture into a sharp one within noise limit bounds. Deconvolution of this sort can be extremely ill-conditioned if the transfer function you have to invert has near-zeros. Been there.
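For what it's worth, the flatness claim is easy to check on a toy 1-D "scan line" (a numpy sketch; the kernel width and signal are arbitrary choices). With the usual convention (geometric mean of the power spectrum over arithmetic mean, so white noise scores near 1), blurring shapes the spectrum and lowers the flatness:

```python
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.standard_normal(4096)        # detailed, white-ish "scan line"
blurred = np.convolve(sharp, np.ones(9) / 9, mode="same")   # defocus-like blur

def spectral_flatness(x):
    """Geometric mean / arithmetic mean of the power spectrum (white noise -> ~1)."""
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p[p > 0]                         # guard the log against exact zeros
    return np.exp(np.mean(np.log(p))) / np.mean(p)

# Blurring removes high spatial frequencies, so the spectrum is less flat:
assert spectral_flatness(sharp) > spectral_flatness(blurred)
```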

17. I don't know how you would carry out that programme. For example, if we record each of an unfocused and a focused image as an array of digitized pixels, why would there be more bits in one image than the other? There is also the question of what "noise" means: a systematic process like blurring is different from having independent random noise in each pixel.

18. The image itself has as much information, but more of that information is noise. If you add additional information in the form of a complete mathematical description of the distortion, you can bring the image back into focus. Since this requires extra information, the in-focus image must have more of the information you are looking for.

19. What definition of "noise" are you using?

20. Tashirosgt, "information" includes the part of the data that is not predictable from the rest.

In any data that isn't white noise, some portion of the data can be predicted from the adjacent data. This is "redundancy" and does not count as actual information.

PCM storage (which is what images use a 2d version of) does not guarantee any degree of decorrelation or correlation, it simply stores the waveform. In that sense, it is not an efficient storage, but still an accurate one.

When you calculate the "LPC gain" (strictly speaking that's only for single-dimension waveforms, but the same idea exists for 2D with some interesting complications) or the Spectral Flatness Measure (SFM), these can tell you the degree of predictability in the data, up to the limits of the predictor's size, etc. The SFM is the ratio of the geometric mean of the spectrum to the arithmetic mean of the spectrum; it measures the unpredictable fraction of a signal, so a flat (white) spectrum gives an SFM near 1 and no predictability, while a peaky spectrum gives a low SFM and lots of predictability.

These measures can quantify the amount of redundancy in your data. In the case of a soft vs. sharp image, the sharp image is going to have more high spatial frequencies (this is an unavoidable consequence of a sharper image), hence a flatter spectrum with a higher SFM, and as a result a higher Shannon entropy. (Note: Shannon entropy, or "bits", is the negative of the calculation from physics.)

The Shannon entropy of a system is sum(-p(i) log2(p(i))), where p(i) is the probability of each value and log2 is the base-2 logarithm. The unit is "bits".

For something like an image with lots of redundancy, you cannot directly measure information by calculating the probability of each value in the image (not each pixel, but each r, g, b value, or better, triplets of rgb), because that does not account for the redundancy between pixels, i.e. the predictability. The maximum entropy you will get will be the same as the number of bits used in PCM, by the way, so you will see a reduction in "information" when you calculate the individual pixel entropies; just bear in mind that's still not the actual entropy. To get the actual entropy, you must decorrelate the samples first and get rid of the predictable part of the signal. Then you can apply the same process and get the right answer.

This is not as easy as it sounds, as you may have gathered.
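As a toy illustration of that last point (a sketch only, using a 1-D ramp as an extreme stand-in for a smooth, redundant image): the naive per-value entropy looks high, but after even a crude decorrelation step (neighbour differences) the predictable part vanishes.

```python
import math
from collections import Counter

def entropy_bits(values):
    """Shannon entropy H = sum(-p(i) * log2(p(i))) over a value histogram."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# An extremely predictable "scan line": a slow ramp 0, 1, 2, ..., 255.
ramp = list(range(256))
naive = entropy_bits(ramp)          # 8 bits/sample: looks information-rich

# Crude decorrelation: predict each sample from its neighbour, keep residuals.
residuals = [b - a for a, b in zip(ramp, ramp[1:])]
decorrelated = entropy_bits(residuals)

assert naive == 8.0
assert decorrelated == 0.0          # every residual is 1: fully predictable
```

The per-value histogram says 8 bits per sample; the decorrelated view says the signal carries essentially none.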

21. I thought the scenario for Shannon entropy involved a communication over a "noisy" channel, i.e. one where there is some probabilistic phenomena at work. The distortion of an image by its being out-of-focus is a deterministic process.

22. Originally Posted by tashirosgt
I thought the scenario for Shannon entropy involved a communication over a "noisy" channel, i.e. one where there is some probabilistic phenomena at work. The distortion of an image by its being out-of-focus is a deterministic process.
The concept goes far beyond the channel capacity theorem, which is what you seem to be confusing with the general idea of information measurement.

Once again, the point is quite simple, the "information" is the part of the image that is not predictable. A wider spectrum (in spatial frequency terms for an image) makes something less predictable, and thereby raises the entropy. Were the image white noise, the entropy of the image would be maximized.

The fact that the image capture (as is true of all physical processes) is by definition noisy could enter into this, but the quantization level (bits/plane) and the spectral flatness define the actual information content.

23. Originally Posted by jj_0001
The concept goes far beyond the channel capacity theorem, which is what you seem to be confusing with the general idea of information measurement.

Once again, the point is quite simple, the "information" is the part of the image that is not predictable. A wider spectrum (in spatial frequency terms for an image) makes something less predictable, and thereby raises the entropy. Were the image white noise, the entropy of the image would be maximized.

The fact that the image capture (as is true of all physical processes) is by definition noisy could enter into this, but the quantization level (bits/plane) and the spectral flatness define the actual information content.
I don't see the connection between information and what you are talking about, which seems to involve the Fourier transform of a signal. It is true that "white noise" can be modeled as something that has a spectrum in the sense that a stochastic process has a spectrum. However, de-focusing is not a stochastic process. "White noise" is a stochastic process. It seems to me that a blurring of the image would cause sharp edges to smooth out, so I see no reason that a de-focused image would have more power in the higher frequencies of a Fourier transform than a focused image.

24. Originally Posted by tashirosgt
I don't see the connection between information and what you are talking about, which seems to involve the Fourier transform of a signal. It is true that "white noise" can be modeled as something that has a spectrum in the sense that a stochastic process has a spectrum. However, de-focusing is not a stochastic process. "White noise" is a stochastic process. It seems to me that a blurring of the image would cause sharp edges to smooth out, so I see no reason that a de-focused image would have more power in the higher frequencies of a Fourier transform than a focused image.
It doesn't. It has fewer high frequencies. That is why it has less information.

A defocused image has ***fewer*** high frequencies. The image capture has noise. If you compensate for extensive loss of high spatial frequencies in the image, in order to deconvolve (i.e. invert) the blurring, you will amplify the image capture noise along with the image information, resulting in an inferior signal to noise ratio at high frequencies, also known as "speckle" or "snow" or various other kinds of image impairment.

A defocused image has LESS high frequency energy than a focused image.

And the presence of broader spectrum (i.e. more energy at all frequencies) means an image has *more* information.

The connection between autocorrelation (which describes redundancy, right out of Shannon and what follows) and Fourier analysis is simple: the transform of the power spectrum IS the autocorrelation of the signal. I don't mean approximately, I mean exactly, as is trivially shown by considering what it means to multiply the signal transform by its complex conjugate, i.e. to calculate the power spectrum. Ergo, the power spectrum is the inverse transform of the autocorrelation. Ergo, either one tells you directly how much of the signal is redundant and, if you know the signal power, how much isn't.

If the power spectrum is white, there is zero autocorrelation at any nonzero lag. If the power spectrum has lots of low values, there is high autocorrelation somewhere. Non-zero autocorrelation means exactly that there is intersample redundancy, which is NOT information according to Shannon, who specifies that the -p log2(p) calculation is for UNCORRELATED samples.
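That power-spectrum/autocorrelation identity (the Wiener-Khinchin relation) can be verified numerically in a few lines (a numpy sketch with an arbitrary test signal; which direction is "transform" vs. "inverse transform" depends on FFT normalization conventions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
x = np.convolve(x, np.ones(4) / 4, mode="same")   # give the signal some redundancy

# One side: inverse FFT of the power spectrum.
power = np.abs(np.fft.fft(x)) ** 2
acf_from_spectrum = np.real(np.fft.ifft(power))

# Other side: circular autocorrelation computed directly, lag by lag.
acf_direct = np.array([np.dot(x, np.roll(x, -k)) for k in range(len(x))])

assert np.allclose(acf_from_spectrum, acf_direct)   # identical, not approximate
```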

You can learn all about the autocorrelation part of this in any beginning Fourier analysis book, and about the relationship of autocorrelation to redundancy in any basic signal processing text. You can get a good dose of all of this in Wozencraft and Jacobs or a more modern comm text, as well. Proakis, or Rabiner and Gold, or Rabiner and Schaeffer will all suit, as will a more modern book, I only own the oldies, I fear, being somewhat older myself.

You (and some others) forget about the noise in the image capture when you talk about backing out the defocusing. This kind of deconvolution, i.e. compensation for the loss of high frequencies due to the blurring, is not a simple process, is not perfect in the presence of noise (which in the real world is always present, especially in images!), and is sometimes theoretically impossible (if there are zeros in the forward (blurring) transfer function, which there often are).

All of this is basic signal processing theory, or modem theory (same thing, different fashion of statement), or information theory (same math as modem, but again a different lexicon).

25. If you know why the picture is out-of-focus, ie what is wrong with the lens or mirror that produced the image, in principle, shouldn't you be able to make a computer model that would take the place of a correcting lens?

What's more, if you play about with the image to find what lens would have given a sharp image, you have then arrived at the lens to model.

In this sense, the out-of-focus image is like a code, and you just need a key to break it. The information is there, but it is hidden until you have the key.

This argument applies to out-of-focus images only, not to pixel-limited blurriness.

26. Originally Posted by kzb
If you know why the picture is out-of-focus, ie what is wrong with the lens or mirror that produced the image, in principle, shouldn't you be able to make a computer model that would take the place of a correcting lens?
Not necessarily. Consider what "out of focus" means. It means that you have a substantial rolloff in high frequency response. (It is easily possible for the situation to be worse than that, but I'm picking the simplest, most easily inverted case for now.)

Ok, you say, let's put in an inverse filter (this is what you're suggesting, which is also known as the "deconvolution problem"). Fine. Now, we have quantized the signal during capture. Furthermore, there is both shot noise (photon limits) and electronic/grain/etc noise in the image as well.

When you put in the gain that this inverse filter requires (no matter if it's by direct frequency domain methods or by inverse filtering), what happens? You increase the noise level as well as the signal level.

If (as is commonly the case in out of focus images that are mildly far out of focus) the noise is larger than the actual captured signal, you do nothing but amplify noise. This does not improve the image at all.

This all assumes that the transfer function you're trying to invert does not have zeros or near-zeros, which out-of-focus transfer functions can easily have. If you have a zero, the problem is theoretically uninvertible even with infinite signal-to-noise ratio: you've removed part of the signal, and the system is both practically and mathematically uninvertible.
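The noise-amplification half of this is easy to simulate (a numpy sketch; the kernel width, noise level, and test signal are all arbitrary choices). Here the boxcar blur has no exact zeros, so it can be inverted exactly, and the tiny capture noise still blows up near the nulls of the transfer function:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
scene = np.cumsum(rng.standard_normal(N))          # smooth-ish 1-D "scene"
scene -= scene.mean()

psf = np.zeros(N)
psf[:7] = 1 / 7                                    # defocus-like boxcar blur
H = np.fft.fft(psf)                                # near-zeros, but no exact zeros

blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))
captured = blurred + 0.01 * rng.standard_normal(N)  # small capture noise

# Naive inverse filter: divide the observed spectrum by H everywhere.
restored = np.real(np.fft.ifft(np.fft.fft(captured) / H))

noise_in = np.std(captured - blurred)    # ~0.01 going in
error_out = np.std(restored - scene)     # what the "deblurred" image is left with
assert error_out > 10 * noise_in         # the inverse filter amplified the noise
```

The blur itself is undone perfectly; all of the remaining error is capture noise boosted by the inverse filter's huge gain at the attenuated frequencies.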

27. Originally Posted by kzb
If you know why the picture is out-of-focus, ie what is wrong with the lens or mirror that produced the image, in principle, shouldn't you be able to make a computer model that would take the place of a correcting lens?

What's more, if you play about with the image to find what lens would have given a sharp image, you have then arrived at the lens to model.

In this sense, the out-of-focus image is like a code, and you just need a key to break it. The information is there, but it is hidden until you have the key.

This argument applies to out-of-focus images only, not to pixel-limited blurriness.
This is a good question, and a good discussion thread.

Regardless of theory, a review of recent history shows it's not possible, at least with current technology -- even when there's extreme incentive.

E.g, the original Hubble optic flaw was eventually understood with great precision, but that knowledge did not allow completely effective de-blurring of the images.

Likewise with the STS-107 Columbia disaster, a critical launch camera which filmed vital information about the debris strike was out of focus. The camera could be disassembled and studied in minute detail. Yet that did not allow de-blurring of the out of focus images.

No amount of supercomputer time or optic modeling would allow this, even given the original flawed optics to examine first hand.

28. Originally Posted by joema
This is a good question, and a good discussion thread.

Regardless of theory, a review of recent history shows it's not possible, at least with current technology -- even when there's extreme incentive.

E.g, the original Hubble optic flaw was eventually understood with great precision, but that knowledge did not allow completely effective de-blurring of the images.

Or, simply put, you can not recover information that was not originally captured.

In particular, the frequency response of the blur is a very important point. If there is a zero in the desired frequency band, you can never, ever recover that information. It was never captured.

And for components (we're in the spatial frequency domain for this discussion) whose signal-to-noise ratio is degraded by attenuation, the amount of information is likewise reduced.

29. Originally Posted by tashirosgt
A human mind can extract more information from an in-focus image than from an out-of-focus one. But do in-focus images contain more information (less entropy) in the physical sense than out-of-focus images? For example, in elementary optics there are situations where a system of lenses projects an in-focus image on a screen at one location, but the image is out-of-focus at other locations. Is any information "lost" if the screen is placed where the image is out-of-focus?
Depends on whether or not they're looking for the proverbial needle in a haystack or an aesthetic masterpiece.
