
Thread: Do in-focus images have more information than out-of-focus images?

  1. #1
    Join Date
    Jun 2009
    Posts
    1,875

    Do in-focus images have more information than out-of-focus images?

    A human mind can get more information from an in-focus image than an out-of-focus image. But do in-focus images contain more information (less entropy) in the physical sense than out-of-focus images? For example, in elementary optics there are situations where a system of lenses can project an in-focus image on a screen at a certain location, but the image will be out-of-focus at other locations. Is there any information "lost" if the screen is placed where the image is out-of-focus?

  2. #2
    Join Date
    Mar 2004
    Posts
    15,801
    0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 1 0 0 1 0 1 1 0 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 0 1 1 0 1 0 0 1 0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 ...
    Skepticism enables us to distinguish fancy from fact, to test our speculations. --Carl Sagan

  3. #3
    Join Date
    May 2008
    Posts
    1,153
    any plate of film will hold or store a particular amount of light.
    the light that is stored depends on the chemical substrate that makes up the film.

    every piece of film can hold a predetermined amount of light.
    this amount is supposed to be close to the same from one picture to the next.

    so it can be said that all film, from picture to picture, is pretty uniform in its ability to store light in an image form.

    same with digital. all images can be set to a predetermined resolution and as such will have a uniform storage capacity.

    but you didn't ask that.
    didja?

    were you asking if an unfocused image has less info than a sharp, in-focus image?

    well, when you focus light energy into an image you are taking the rays of light that make up that image and focusing them into one place.
    so sure, a focused beam concentrates more energy in one place than an unfocused beam.

    i would assume that the unfocused image has all of the information needed to make the whole image, but the focal plane can only cover a certain depth, and that depth of field is itself variable.

    imagine looking at a pencil through a camera.
    the eraser is close to you and you can see down to the sharpened graphite tip.

    on the side you can see the name of the pencil and the lead softness. you notice it is a number two. there is the alphabet written on the side, with 'a' starting at the eraser and 'z' at the rough edge of the cone of the tip where the sharpener ceased its lunch.

    depth of field is how much of the pencil is in focus at once, i.e. how many letters are sharp.
    you can move that focal plane up and down the pencil as you focus,
    from a sharp clear eraser with an ever-blurring pencil drooping away to the tip.

    if you move that focal plane down the pencil, revealing the letters as you go, the eraser goes out of focus, then a, then b, then c..

    depth of field is how many letters will be in focus, the deeper the field, the more letters you see.

    so your focal plane is where the information is coming from.

    so of course, if the above is correct,
    there is information lost from the image
    when the focal plane of the lens is not encompassing the image's location in space..


    as for total information, the lens is merely collecting the most information from whatever focal plane it happens to be set at...

    is that right?
    anybody?
    that is the focus.
    so if an image is out of focus, it is the information from a different focal plane.

  4. #4
    Join Date
    Apr 2007
    Posts
    2,364
    How exactly does one define "information" for this purpose?

  5. #5
    Join Date
    Jun 2009
    Posts
    1,875
    I don't know! As an "operational" definition, I think that if you can define an algorithm that transforms the data in an un-focused image into a focused image, and perhaps a different algorithm that does the reverse, then the two images contain the same information. (I'm assuming the two images are created by moving the screen in the example to two different locations, so the "source" of the images is the same.)

  6. #6
    Join Date
    Jun 2005
    Posts
    13,829
    Though I could be wrong, I think the answer is yes. The reason is, I think you can transform a fuzzy picture into a sharp one, but the opposite is not possible without tricks. Also, if you think of a piece of paper with the letters getting progressively smaller at the bottom (like the liability contract in the original Willy Wonka movie), then the sharp image will contain that information but the unfocused one will not.
    As above, so below

  7. #7
    Join Date
    Apr 2007
    Posts
    2,364
    Well, there's only so much information you can get from a low resolution image - unlike what they show you in detective shows on TV, you can't 'enhance' a pixellated image and magically bring out the detail hidden in the pixels.

    I'd be inclined to say the same about blurry 'analogue' images too. The sharp image contains the maximum "information" for the scene, a blurry one contains a degraded version of that information, and therefore should be incomplete.

  8. #8
    Join Date
    Oct 2002
    Posts
    3,841
    I think your mention of entropy is very relevant. The light beam of a projector contains all the information in the slide, but it is only organised when focussed on the screen. An analogy would be differently coloured beads in a glass jar. Arranged in layers, one could calculate the number of beads by measuring the depth of the layer. Shake the container, and counting the beads would need a sample of the number of different colours in a given area, which would require statistical calculation. The result would be a probability, a rougher estimate than the layer method, because the beads are less organised.

    John

  9. #9
    Join Date
    Apr 2004
    Posts
    1,805
    If there were the same info in the unfocused image, it would have sufficed to process the slightly out-of-focus Hubble images by computer, rather than go to the huge expense of installing a correcting lens.

  10. #10
    Join Date
    Nov 2005
    Posts
    297
    If someone gets you a copy of a book you want except it's in Chinese and you can't understand it, does it contain less information?

    Of course not!

    Let's put it another way. If the CCD on Hubble could sense the phase, polarization, and incident angle of photons, the image could have been reconstructed even though it was out of focus.


    This brings up a rather obvious point: information often gets lost in translation.

  11. #11
    Join Date
    Mar 2004
    Posts
    3,163
    A huge issue here in the real world is signal-to-noise. Even for a perfect detector, there is Poisson noise from the stochastic arrival of photons. This limits image reconstruction even when the blur pattern is known exactly. Blurring spreads the signal over more pixels, and simultaneously has Poisson noise from multiple pixels contributing to the error in each reconstructed image pixel.
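
    To make the Poisson part concrete, here is a toy numpy sketch (my own made-up counts, nothing to do with the attached example) of how the per-pixel S/N scales with photon counts:
    Code:
    import numpy as np

    rng = np.random.default_rng(0)

    # A pixel that should receive N photons is actually measured as a
    # Poisson(N) draw, so its S/N goes as sqrt(N).
    for n_photons in (10000, 100, 1):
        measured = rng.poisson(n_photons, size=100000)
        snr = measured.mean() / measured.std()
        print(f"mean counts {n_photons:>5}: S/N ~ {snr:7.1f} (sqrt(N) = {np.sqrt(n_photons):.1f})")

    # Blurring spreads the same flux over more pixels, so each pixel collects
    # fewer photons and has a worse S/N -- which is what limits deconvolution
    # of a blurred, noisy exposure even when the blur is known exactly.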

    The attached example is from a 1991 paper of mine demonstrating the performance of the sigma-CLEAN algorithm. The main point is that it works well for S/N limited only by the machine's representation, and the fidelity of reconstruction goes downhill rapidly with higher background or shorter exposure. An additional issue with deconvolution and PSFs as structured as the old HST system is that there will often be artifacts which aren't distinguishable in an obvious way from real features.
    Attached Images

  12. #12
    Join Date
    Sep 2003
    Location
    Denmark
    Posts
    18,442
    So basically, if the blurred image could be recorded noiselessly to infinite precision, it would be possible to reconstruct the sharp image, since the information would be there?
    But when noise is present, the blurrier the image, the more pixels each sharp pixel's information is spread across, so each holds less of the signal and is affected more by the noise, and the information will quickly disappear below the noise threshold?
    And as every real-life image will have at least quantization noise, there's a limit to how much sharpening can be done for real-life images.
    __________________________________________________
    Reductionist and proud of it.

    Being ignorant is not so much a shame, as being unwilling to learn. Benjamin Franklin
    Chase after the truth like all hell and you'll free yourself, even though you never touch its coat tails. Clarence Darrow
    A person who won't read has no advantage over one who can't read. Mark Twain

  13. #13
    Join Date
    Apr 2005
    Posts
    11,545
    Quote Originally Posted by JohnD View Post
    I think your mention of entropy is very relevant.
    Yes, the whole idea of information goes back to Claude Shannon's paper A Mathematical Theory of Communication, in the Bell System Technical Journal in 1948. He discusses entropy and information.
    Quote Originally Posted by Jens View Post
    Though I could be wrong, I think the answer is yes. The reason is, I think you can transform a fuzzy picture into a sharp one, but the opposite is not possible without tricks. Also, if you think of a piece of paper with the letters getting progressively smaller at the bottom (like the liability contract in the original Willy Wonka movie), then the sharp image will contain that information but the unfocused one will not.
    Did you mean that the other way around?
    Quote Originally Posted by HenrikOlsen View Post
    So basically, if the blurred image could be recorded noiselessly to infinite precision, it would be possible to reconstruct the sharp image, since the information would be there?
    It depends upon how the blurring is accomplished. Clearly, if two pixels had values A and B, and the resulting pixels were blurred to (A+B)/2, even with the knowledge of how they were blurred you would not be able to reproduce the original pixels in the original order with any certainty.
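
    A two-line numpy sketch of that (A+B)/2 case, with toy numbers of my own: the blur is a rank-deficient linear map, so no algorithm can undo it.
    Code:
    import numpy as np

    # both output pixels get the average of the two inputs
    blur = np.array([[0.5, 0.5],
                     [0.5, 0.5]])
    print(np.linalg.matrix_rank(blur))       # 1, not 2: the map is not invertible

    # (A, B) and (B, A) blur to exactly the same output
    print(blur @ np.array([10.0, 20.0]))     # [15. 15.]
    print(blur @ np.array([20.0, 10.0]))     # [15. 15.]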

  14. #14
    Join Date
    Mar 2004
    Posts
    3,163
    Quote Originally Posted by HenrikOlsen View Post
    So basically, if the blurred image could be recorded noiselessly to infinite precision, it would be possible to reconstruct the sharp image, since the information would be there?
    Up to the limitation that there are certain patterns to which a given blur pattern (point-spread function=PSF) is "blind". This was worked out (or at least magisterially described) by Ronald Bracewell in the Fourier domain; any finite sampling of the PSF will have zeros in the Fourier domain, so will not detect any distribution of intensity whose Fourier transform is zero except at these spatial frequencies and directions. This is a big problem for sparse sampling with interferometers, but also enters into direct imaging. In real life, noise matters more in this application - deconvolution amplifies high-frequency noise, which is easy to see from the observation that deconvolution in the image plane is division in the Fourier domain: FT(deconvolved image) = FT(observed image)/FT(PSF). The "invisible pattern" issue shows up clearly in this case - what happens where FT(PSF)=0? This limit applies whether in or out of focus, whether the PSF is a nice Gaussian or a complicated set of sidelobes.
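
    For a concrete (if oversimplified) look at those Fourier-domain zeros, here is a 1-D numpy sketch; the uniform 4-pixel boxcar PSF is my own toy choice, not anything specific to HST or an interferometer:
    Code:
    import numpy as np

    # A unit-sum 4-pixel boxcar PSF on a 64-pixel line: its transfer
    # function has exact zeros, so FT(observed)/FT(PSF) divides by zero
    # at those spatial frequencies.
    n = 64
    psf = np.zeros(n)
    psf[:4] = 0.25
    H = np.fft.rfft(psf)
    print(np.min(np.abs(H)))   # ~1e-17: zero to machine precision

    # Any intensity pattern whose transform lives only at those frequencies
    # is blurred to nothing, and no deconvolution can bring it back.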

  15. #15
    Join Date
    Feb 2009
    Posts
    64
    Quote Originally Posted by EDG_ View Post
    Well, there's only so much information you can get from a low resolution image - unlike what they show you in detective shows on TV, you can't 'enhance' a pixellated image and magically bring out the detail hidden in the pixels.

    I'd be inclined to say the same about blurry 'analogue' images too. The sharp image contains the maximum "information" for the scene, a blurry one contains a degraded version of that information, and therefore should be incomplete.
    but I think that interpolation can be done between pixels and an enhancement can be made that would be 'approximate'.

  16. #16
    Join Date
    Apr 2007
    Posts
    2,364
    Quote Originally Posted by sithum View Post
    but I think that interpolation can be done between pixels and an enhancement can be made that would be 'approximate'.
    Nope. If you've got, say, a single 'big pixel' that is the equivalent of 8x8 smaller pixels, and that big pixel is just the average of the 8x8 that comprise it, there is no way you can tease out what was in the 8x8 pixels. You just don't have enough info to be able to do it remotely accurately, or even approximately very well.
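
    A quick numpy sketch of that, with made-up blocks: a busy 8x8 block and a perfectly flat one produce the identical 'big pixel', so nothing downstream can tell them apart.
    Code:
    import numpy as np

    rng = np.random.default_rng(1)

    block_a = rng.integers(0, 256, size=(8, 8)).astype(float)  # busy block
    block_b = np.full((8, 8), block_a.mean())                  # flat block
    print(block_a.mean(), block_b.mean())  # same average: 63 degrees of
                                           # freedom per block are simply gone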

    This was pretty bad in the Galileo mission, which was sending back lossy compressed images that basically did exactly that - it averaged out blocks of pixels, like really bad JPEG compression. When the compression was overdone (IIRC, the imaging team had to take an educated guess at what level of compression to use before the images were taken) you could hardly make out anything useful in the image at all, and there was no way to extract that info out of the picture. And it was especially bad in the areas that didn't have much variation in brightness, because that's where the algorithm would compress things the most, so flat areas had all their detail destroyed by the compression. Fortunately they sent back other images that used lossless compression, and most of the time the lossy compression algorithms weren't too bad, but I remember several occasions while working on the images for my PhD where I was really frustrated by the compression blocks...

  17. #17
    Join Date
    Aug 2008
    Posts
    447
    "information" is measured in bits, thank you, Claude Shannon, and we can calculate the total bits in an image in any number of ways. The easiest is just to calculate the spectral flatness measure over the image, which will tell you how many dB you can recover from redundancy. The rest is information, or as close as one can get without considering nonlinear modelling.

    You could do the same with high-order LPC models, adaptive models, entropy models, complex lzw coding (2 d makes it interesting), etc.

    You could do a transform (DCT) and calculate the entropy of the resulting coefficients.

    All of these are going to be convinced that more high spatial frequency content means more information, and "more high spatial frequency content" is directly equal to "sharper" images.

    Also, you can only transform a fuzzy picture into a sharp one within noise limit bounds. Deconvolution of this sort can be extremely ill-conditioned if the transfer function you have to invert has near-zeros. Been there.

  18. #18
    Join Date
    Jun 2009
    Posts
    1,875
    I don't know how you would carry out that programme. For example, if we record each of an un-focused and focused image as an array of digitized pixels, why would there be more bits in one image than the other? There is also a question of what "noise" means. A systematic process like blurring is different from having independent random noise in each pixel.

  19. #19
    Join Date
    May 2008
    Posts
    618
    The image itself has as much information, but more of that information is noise. If you add additional information, in the form of a complete mathematical description of the distortion, you can bring the image back into focus. Since this requires extra information, the in-focus image must contain more of the information you are looking for.

  20. #20
    Join Date
    Jun 2009
    Posts
    1,875
    What definition of "noise" are you using?

  21. #21
    Join Date
    Aug 2008
    Posts
    447
    tashirosgt, "information" includes the part of the data that is not predictable from the rest.

    In any data that isn't white noise, some portion of the data can be predicted from the adjacent data. This is "redundancy" and does not count as actual information.

    PCM storage (which is what images use a 2d version of) does not guarantee any degree of decorrelation or correlation; it simply stores the waveform. In that sense, it is not an efficient storage, but still an accurate one.

    When you calculate the "LPC gain" (strictly speaking that's only for single-dimension waveforms, but the same idea exists for 2D with some interesting complications) or the Spectral Flatness Measure (SFM), these can tell you the degree of predictability in the data to the extent of the predictor's size, etc. SFM is the "maximum predictable part" of a signal, and constitutes the ratio of the geometric mean of the spectrum to the arithmetic mean of the spectrum.

    These measures can measure the amount of redundancy in your data. In the case of a soft vs. sharp image, the sharp image is going to have more high spatial frequencies (this is an unavoidable consequence of a sharper image), and therefore a lower SFM, and as a result a higher Shannon Entropy. (Note, Shannon entropy, or "bits", is the negative of the calculation from physics.)

    The Shannon entropy of a system is described as H = sum( -p(i) * log2(p(i)) ), where p(i) is the probability of each value and log2 is the base-2 logarithm. The unit is "bits".

    For something like an image with lots of redundancy, you cannot directly measure information by calculating the probability of each value in the image (not each pixel, but each r, g, b value, or better, triplets of rgb) because that does not account for the redundancy between pixels, i.e. the predictability. The maximum entropy you will get will be the same as the number of bits used in PCM, by the way, so you will see a reduction in "information" when you calculate the individual pixel entropies; just bear in mind that's still not the actual entropy. To show the actual entropy, you must decorrelate the samples first and get rid of the predictable part of the signal. Then you can apply the same process and get the right answer.
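
    As a toy numpy sketch of that last step (my own made-up 1-D "scan line", with a crude neighbor-difference standing in for a real 2-D predictor):
    Code:
    import numpy as np

    def entropy_bits(values):
        # first-order Shannon entropy, -sum p(i) log2 p(i), over
        # integer-quantized values (same quantization step for both signals)
        _, counts = np.unique(np.round(values), return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(3)
    line = np.cumsum(rng.normal(0, 1, 65536))  # slowly wandering "scan line"
    residual = np.diff(line)                   # decorrelate: neighbor prediction error

    print(entropy_bits(line))      # large, but mostly redundancy
    print(entropy_bits(residual))  # ~2 bits: much closer to the true information rate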

    This is not as easy as it sounds, as you may have gathered.
    Last edited by jj_0001; 2010-Feb-10 at 08:18 PM. Reason: typo

  22. #22
    Join Date
    Jun 2009
    Posts
    1,875
    I thought the scenario for Shannon entropy involved communication over a "noisy" channel, i.e. one where there is some probabilistic phenomenon at work. The distortion of an image by its being out-of-focus is a deterministic process.

  23. #23
    Join Date
    Apr 2005
    Posts
    2,491
    If you know why the picture is out-of-focus, ie what is wrong with the lens or mirror that produced the image, in principle, shouldn't you be able to make a computer model that would take the place of a correcting lens?

    What's more, if you play about with the image to find what lens would have given a sharp image, you have then arrived at the lens to model.

    In this sense, the out-of-focus image is like a code, and you just need a key to break it. The information is there, but it is hidden until you have the key.

    This argument applies to out-of-focus images only, not to pixel-limited blurriness.

  24. #24
    Join Date
    Aug 2008
    Posts
    447
    Quote Originally Posted by tashirosgt View Post
    I thought the scenario for Shannon entropy involved a communication over a "noisy" channel, i.e. one where there is some probabilistic phenomena at work. The distortion of an image by its being out-of-focus is a deterministic process.
    The concept goes far beyond the channel capacity theorem, which is what you seem to be confusing with the general idea of information measurement.

    Once again, the point is quite simple, the "information" is the part of the image that is not predictable. A wider spectrum (in spatial frequency terms for an image) makes something less predictable, and thereby raises the entropy. Were the image white noise, the entropy of the image would be maximized.

    The fact that the image capture (as is true of all physical processes) is by definition noisy could enter into this, but the quantization level (bits/plane) and the spectral flatness define the actual information content.

  25. #25
    Join Date
    Aug 2008
    Posts
    447
    Quote Originally Posted by kzb View Post
    If you know why the picture is out-of-focus, ie what is wrong with the lens or mirror that produced the image, in principle, shouldn't you be able to make a computer model that would take the place of a correcting lens?
    Not necessarily. Consider what "out of focus" means. It means that you have a substantial rolloff in high frequency response. (It is easily possible for the situation to be worse than that, but I'm picking the simplest, most easily inverted case for now.)

    Ok, you say, let's put in an inverse filter (this is what you're suggesting, which is also known as the "deconvolution problem"). Fine. Now, we have quantized the signal during capture. Furthermore, there is both shot noise (photon limits) and electronic/grain/etc noise in the image as well.

    When you put in the gain that this inverse filter requires (no matter if it's by direct frequency domain methods or by inverse filtering), what happens? You increase the noise level as well as the signal level.

    If (as is commonly the case in images that are more than mildly out of focus) the noise is larger than the actual captured high-frequency signal, you do nothing but amplify noise. This does not improve the image at all.

    This is assuming that in fact the transfer function you're trying to invert does not have zeros or near zeros, which out-of-focus transfer functions can easily have. If you have a zero, the problem is theoretically uninvertible even if you have infinite signal to noise ratio, as you've removed part of the signal and the system is both realistically and mathematically uninvertible.
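
    Here is a 1-D numpy sketch of both failure modes; the 8-pixel boxcar standing in for the defocus blur and the Wiener-style regularizer are my own toy choices:
    Code:
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1024
    sharp = (rng.random(n) < 0.01).astype(float)  # a few point sources
    psf = np.ones(8) / 8                          # boxcar "defocus"; 8 divides 1024,
    H = np.fft.rfft(psf, n)                       # so H has *exact* zeros

    blurred = np.fft.irfft(np.fft.rfft(sharp) * H, n)
    noisy = blurred + rng.normal(0, 1e-3, n)      # small capture noise

    # Naive inverse filter: divide by H. At H's zeros this divides by zero --
    # the theoretically uninvertible case.
    with np.errstate(divide="ignore", invalid="ignore"):
        naive = np.fft.irfft(np.fft.rfft(noisy) / H, n)
    print(np.isfinite(naive).all())               # False

    # A Wiener-style regularized inverse damps the gain near the zeros instead:
    eps = 1e-2
    wiener = np.fft.irfft(np.fft.rfft(noisy) * np.conj(H) / (np.abs(H) ** 2 + eps), n)
    print(np.abs(wiener - sharp).max())           # finite, but lost frequencies stay lost
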
    Last edited by jj_0001; 2010-Feb-11 at 08:39 PM. Reason: unbogusing taggage

  26. #26
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by jj_0001 View Post
    The concept goes far beyond the channel capacity theorem, which is what you seem to be confusing with the general idea of information measurement.

    Once again, the point is quite simple, the "information" is the part of the image that is not predictable. A wider spectrum (in spatial frequency terms for an image) makes something less predictable, and thereby raises the entropy. Were the image white noise, the entropy of the image would be maximized.

    The fact that the image capture (as is true of all physical processes) is by definition noisy could enter into this, but the quantization level (bits/plane) and the spectral flatness define the actual information content.
    I don't see the connection between information and what you are talking about, which seems to involve the Fourier transform of a signal. It is true that "white noise" can be modeled as something that has a spectrum in the sense that a stochastic process has a spectrum. However, de-focusing is not a stochastic process. "White noise" is a stochastic process. It seems to me that a blurring of the image would cause sharp edges to smooth out, so I see no reason that a de-focused image would have more power in the higher frequencies of a Fourier transform than a focused image.

  27. #27
    Join Date
    Aug 2008
    Posts
    447
    Quote Originally Posted by tashirosgt View Post
    I don't see the connection between information and what your are talking about, which seems to involve the fourier transform of a signal. It is true that "white noise" can be modeled as something that has a spectrum in the sense that a stochastic process has a spectrum. However, de-focusing is not a stochastic process. "White noise" is a stochastic process. It seems to me that a blurring of the image would cause sharp edges to smooth out, so I see no reason that a de-focused image would have more power in the higher frequencies of a fourier transform than a focused image.
    It doesn't. It has fewer high frequencies. That is why it has less information.

    A defocused image has ***fewer*** high frequencies. The image capture has noise. If you compensate for extensive loss of high spatial frequencies in the image, in order to deconvolve (i.e. invert) the blurring, you will amplify the image capture noise along with the image information, resulting in an inferior signal to noise ratio at high frequencies, also known as "speckle" or "snow" or various other kinds of image impairment.

    A defocused image has LESS high frequency energy than a focused image.

    And the presence of a broader spectrum (i.e. more energy at all frequencies) means an image has *more* information.

    The connection between autocorrelation (which describes redundancy, right out of Shannon and what follows) and Fourier analysis is simple: the transform of the power spectrum IS the autocorrelation of the signal. I don't mean approximately, I mean exactly, as is trivially shown by considering what it means to multiply the signal transform by its complex conjugate, i.e. to calculate the power spectrum. Ergo, the power spectrum is the inverse transform of the autocorrelation. Ergo, either tells you directly how much of the signal is redundant and, if you know the signal power, also how much isn't.

    If the power spectrum is white, there is zero autocorrelation. If the power spectrum has lots of low values, there is some high autocorrelation somewhere. Non-zero autocorrelation means exactly that there is intersample redundancy, which is NOT information according to Shannon, who specifies that the -p log2(p) calculation is for UNCORRELATED samples.
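
    That transform relationship is easy to check numerically; a short numpy sketch with a toy white-noise signal (circular autocorrelation):
    Code:
    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.normal(size=256)

    # Wiener-Khinchin: the inverse transform of the power spectrum equals
    # the circular autocorrelation -- exactly, not approximately.
    power = np.abs(np.fft.fft(x)) ** 2
    acf_from_spectrum = np.fft.ifft(power).real

    acf_direct = np.array([np.dot(x, np.roll(x, -k)) for k in range(len(x))])

    print(np.allclose(acf_from_spectrum, acf_direct))  # True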

    You can learn all about the autocorrelation part of this in any beginning Fourier analysis book, and about the relationship of autocorrelation to redundancy in any basic signal processing text. You can get a good dose of all of this in Wozencraft and Jacobs or a more modern comm text, as well. Proakis, or Rabiner and Gold, or Rabiner and Schafer will all suit, as will a more modern book; I only own the oldies, I fear, being somewhat older myself.

    You (and some others) forget about the noise in the image capture when you talk about backing out the defocusing. This kind of deconvolution, i.e. compensation for the loss of high frequencies due to the blurring, is not a simple process, is not perfect in the presence of noise (which in the real world is always present, especially in images!), and is sometimes (if there are zeros in the forward (blurring) transfer function, which there often are) theoretically impossible.

    All of this is basic signal processing theory, or modem theory (same thing, different fashion of statement), or information theory (same math as modem, but again a different lexicon).

  28. #28
    Join Date
    Dec 2005
    Posts
    14,315
    Quote Originally Posted by tashirosgt View Post
    A human mind can get more information from an in-focus image than an out-of-focus image. But do in-focus images contain more information (less entropy) in the physical sense than out-of-focus images? For example, in elementary optics there are situations where a system of lenses can project an in-focus image on a screen at a certain location, but the image will be out-of-focus at other locations. Is there any information "lost" if the screen is placed where the image is out-of-focus?
    Depends on whether or not they're looking for the proverbial needle in a haystack or an aesthetic masterpiece.

  29. #29
    Join Date
    Dec 2005
    Posts
    14,315
    PS, I'd agree with others that the greater the resolution of the original, the greater the information that may be seen by the observer.

  30. #30
    Join Date
    Nov 2005
    Posts
    297
    Quote Originally Posted by jj_0001 View Post
    It doesn't. It has fewer high frequencies. That is why it has less information.

    A defocused image has ***fewer*** high frequencies. The image capture has noise. If you compensate for extensive loss of high spatial frequencies in the image, in order to deconvolve (i.e. invert) the blurring, you will amplify the image capture noise along with the image information, resulting in an inferior signal to noise ratio at high frequencies, also known as "speckle" or "snow" or various other kinds of image impairment.

    A defocused image has LESS high frequency energy than a focused image.

    And the presence of a broader spectrum (i.e. more energy at all frequencies) means an image has *more* information.

    The connection between autocorrelation (which describes redundancy, right out of Shannon and what follows) and Fourier analysis is simple: the transform of the power spectrum IS the autocorrelation of the signal. I don't mean approximately, I mean exactly, as is trivially shown by considering what it means to multiply the signal transform by its complex conjugate, i.e. to calculate the power spectrum. Ergo, the power spectrum is the inverse transform of the autocorrelation. Ergo, either tells you directly how much of the signal is redundant and, if you know the signal power, also how much isn't.

    If the power spectrum is white, there is zero autocorrelation. If the power spectrum has lots of low values, there is some high autocorrelation somewhere. Non-zero autocorrelation means exactly that there is intersample redundancy, which is NOT information according to Shannon, who specifies that the -p log2(p) calculation is for UNCORRELATED samples.

    You can learn all about the autocorrelation part of this in any beginning Fourier analysis book, and about the relationship of autocorrelation to redundancy in any basic signal processing text. You can get a good dose of all of this in Wozencraft and Jacobs or a more modern comm text, as well. Proakis, or Rabiner and Gold, or Rabiner and Schafer will all suit, as will a more modern book; I only own the oldies, I fear, being somewhat older myself.

    You (and some others) forget about the noise in the image capture when you talk about backing out the defocusing. This kind of deconvolution, i.e. compensation for the loss of high frequencies due to the blurring, is not a simple process, is not perfect in the presence of noise (which in the real world is always present, especially in images!), and is sometimes (if there are zeros in the forward (blurring) transfer function, which there often are) theoretically impossible.

    All of this is basic signal processing theory, or modem theory (same thing, different fashion of statement), or information theory (same math as modem, but again a different lexicon).
    What is all of that?
    You seem to be referring to the interpreted spatial resolution as the data. That is not implicit in the question.

    A blurred image of the sun still gives valid temperature, luminosity, and spectral absorption data. Spatial resolution is just one aspect that is harder to obtain.

