
Originally Posted by jj_0001
It doesn't. It has fewer high frequencies. That is why it has less information.
A defocused image has ***fewer*** high frequencies. The image capture has noise. If you compensate for the extensive loss of high spatial frequencies in order to deconvolve (i.e. invert) the blurring, you will amplify the capture noise along with the image information, resulting in an inferior signal-to-noise ratio at high frequencies, also known as "speckle" or "snow" or various other kinds of image impairment.
A defocused image has LESS high frequency energy than a focused image.
And the presence of a broader spectrum (i.e. more energy at all frequencies) means an image has *more* information.
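A minimal numpy sketch of that spectral point, using a Gaussian kernel as a stand-in for defocus (an assumption on my part; a real defocus PSF is different, but any low-pass blur shows the same loss of high-frequency energy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "scan line" with sharp edges, i.e. plenty of high-frequency content.
x = np.repeat(rng.integers(0, 256, 64), 8).astype(float)   # 512 samples

# Defocus modeled here as convolution with a Gaussian kernel (an assumption;
# any low-pass blur behaves the same way for this comparison).
t = np.arange(-16, 17)
h = np.exp(-t**2 / (2 * 4.0**2))
h /= h.sum()
y = np.convolve(x, h, mode="same")

# Energy in the upper half of the spectrum before and after blurring.
def high_band_energy(s):
    spec = np.abs(np.fft.rfft(s))**2
    return spec[len(spec) // 2:].sum()

print(high_band_energy(x))   # focused: substantial high-frequency energy
print(high_band_energy(y))   # defocused: drastically less
```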
The connection between autocorrelation (which describes redundancy, right out of Shannon and what follows) and Fourier analysis is simple: the inverse transform of the power spectrum IS the autocorrelation of the signal. I don't mean approximately, I mean exactly, as is trivially shown by considering what it means to multiply the signal transform by its complex conjugate, i.e. to calculate the power spectrum. Ergo, the power spectrum is the transform of the autocorrelation. Ergo, either one tells you directly how much of the signal is redundant and, if you know the signal power, also how much isn't.
If the power spectrum is white (flat), the autocorrelation is zero at every nonzero lag. If the power spectrum has lots of low values, there is high autocorrelation at some lags. Non-zero autocorrelation means exactly that there is intersample redundancy, which is NOT information according to Shannon, who specifies that the -p log2(p) calculation applies to UNCORRELATED samples.
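A quick numerical check of that power-spectrum/autocorrelation relationship via the FFT. This is only a sketch: the white noise, the length-8 boxcar filter, and the signal lengths are arbitrary choices of mine, not anything from the argument itself.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096

# White noise: flat power spectrum, so the circular autocorrelation is
# essentially zero at every nonzero lag (no intersample redundancy).
w = rng.standard_normal(N)

# Low-pass filtered noise: lumpy spectrum, so the autocorrelation spreads
# out to nonzero lags (intersample redundancy, i.e. not new information).
c = np.convolve(w, np.ones(8) / 8, mode="same")

def autocorr_via_spectrum(s):
    spec = np.fft.fft(s)
    power = spec * np.conj(spec)    # power spectrum = X times its conjugate
    r = np.fft.ifft(power).real     # inverse transform -> circular autocorrelation
    return r / r[0]                 # normalize so lag 0 is 1

print(autocorr_via_spectrum(w)[1:5])   # close to 0 at nonzero lags
print(autocorr_via_spectrum(c)[1:5])   # clearly nonzero at small lags
```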
You can learn all about the autocorrelation part of this in any beginning Fourier analysis book, and about the relationship of autocorrelation to redundancy in any basic signal processing text. You can get a good dose of all of this in Wozencraft and Jacobs or a more modern comm text, as well. Proakis, or Rabiner and Gold, or Rabiner and Schafer will all suit, as will a more modern book; I only own the oldies, I fear, being somewhat older myself.
You (and some others) forget about the noise in the image capture when you talk about backing out the defocusing. This kind of deconvolution, i.e. compensation for the loss of high frequencies due to the blurring, is not a simple process, is not perfect in the presence of noise (which in the real world is always present, especially in images!), and is sometimes theoretically impossible (if there are zeros in the forward (blurring) transfer function, which there often are).
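To make the noise-amplification point concrete, here is a hedged 1-D sketch: a hypothetical smooth low-pass transfer function H stands in for the blur (my choice, not a real defocus PSF), and naive inverse filtering, i.e. dividing the spectrum by H, is exact without noise but blows up once capture noise is added.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 512

# The "scene": piecewise-constant, so it has real high-frequency content.
x = np.repeat(rng.integers(0, 256, 64), 8).astype(float)

# Hypothetical smooth low-pass transfer function standing in for the blur.
# A real defocus PSF (a disk) has exact zeros at some frequencies; those
# frequencies would be unrecoverable in principle, not just in practice.
H = np.exp(-np.linspace(0.0, 8.0, N // 2 + 1))

X = np.fft.rfft(x)
blurred = np.fft.irfft(X * H, n=N)
captured = blurred + 0.5 * rng.standard_normal(N)   # capture noise

# Without noise, dividing by H undoes the blur almost perfectly ...
exact = np.fft.irfft(np.fft.rfft(blurred) / H, n=N)
print(np.max(np.abs(exact - x)))    # tiny numerical error

# ... but with capture noise, 1/H multiplies the noise wherever H is small,
# and the amplified noise swamps the recovered high-frequency detail.
naive = np.fft.irfft(np.fft.rfft(captured) / H, n=N)
print(np.std(naive - x))            # error far larger than the 0.5 noise std
```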
All of this is basic signal processing theory, or modem theory (same thing, different fashion of statement), or information theory (same math as modem, but again a different lexicon).