View Full Version : Yet another HB site to debunk (Fairly big page)

2002-Jun-02, 02:07 PM
I don't think that this one has been posted before, so .....


Gentlemen, start thy debunking!

2002-Jun-02, 02:13 PM
page 20020602.6 aka Link to Index
On 2002-06-02 10:07, Cyberspaced wrote: To:
MY baINdex (http://www.badastronomy.com/phpBB/viewtopic.php?topic=183&forum=1#LOCKTHRD)

Kaptain K
2002-Jun-02, 03:05 PM
Nothing new here. Just a rehash of Kaysing, Sibrel and Percy. Everything has already been debunked. If the web author has not bothered to look for the other side of the questions, it is unlikely that (s)he would be open to anything we could say. Why bother?

2002-Jun-02, 04:13 PM
Looking on the bright side – there is no mention of the Van Allen Radiation Belts. :D

But they do say that on the moon "…there is also an incredible amount of solar radiation, enough to kill a human being within minutes." Yet they fail to say where that "fact" comes from.

[ This Message was edited by: SpacedOut on 2002-06-02 12:13 ]

2002-Jun-02, 05:54 PM
I always find it odd to hear people whine that the Apollo 11 broadcast wasn't "live", and then go on to describe what amounts to nothing more than frame rate conversion. Listen, kiddies: if you have a signal coming down at 10 fps and you want it jacked up to 25 or 30 fps for broadcast, you're going to have to hold some frames in a frame buffer so they can be duplicated. For all intents and purposes that's still a "live" signal.
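The frame-duplication idea above can be sketched in a few lines. This is a toy illustration, not a model of the actual Apollo slow-scan converter (which worked optically); it assumes an integer ratio between the source and broadcast rates, and the function name is invented here.

```python
def upconvert(frames, src_fps=10, dst_fps=30):
    """Duplicate each buffered source frame dst_fps/src_fps times.

    Assumes dst_fps is an integer multiple of src_fps; a 10->25 fps
    conversion would need a more uneven duplication pattern.
    """
    repeat = dst_fps // src_fps
    out = []
    for frame in frames:
        out.extend([frame] * repeat)  # hold the frame, emit copies
    return out

slow_scan = ["f0", "f1", "f2"]    # three frames of a 10 fps feed
broadcast = upconvert(slow_scan)  # nine frames at 30 fps
```

Every picture in the output is a picture the camera really took; it's just shown three times as long, which is why the result still counts as "live".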

In these days of digital uplinks there is no such thing as a directly live signal. Okay, MPEG-2 for dummies:

MPEG-2, otherwise known as ISO 13818-2, is a means of encoding and compressing moving pictures digitally. It's basically composed of three types of frames: the I-frame, the P-frame, and the B-frame.

The I-frame is an entire frame of video, JPEG-ishly encoded. The P-frame is the "difference" information that tells how to take the original video frame and turn it into a subsequent video frame. That subsequent video frame is usually four or five frames down the line from the original frame. Then the B-frames contain "difference" information that tell how to create the in-between frames between the I-frame and the P-frame by interpolation.

"Difference" information may be subframes that change. The frame is spatially divided into zones, just as for JPEG compression. Clearly you can save bandwidth if, for subsequent frames, you send only the information for zones that have changed between the two frames. So if you have a reporter in a studio, the studio background zones don't change and so are not sent in P-frames or B-frames. Just the zones that correspond to the reporter's mouth.
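The changed-zones idea can be sketched like this. It's a deliberately simplified illustration, not real MPEG-2 bitstream syntax: a frame is just a list of zone values, and the function names are invented for this example.

```python
def changed_zones(reference, current):
    """Return {zone_index: new_value} for zones that differ
    from the reference frame -- the only data worth sending."""
    return {i: cur for i, (ref, cur) in enumerate(zip(reference, current))
            if ref != cur}

def apply_diff(reference, diff):
    """Decoder side: rebuild the current frame from the
    reference frame plus the transmitted differences."""
    frame = list(reference)
    for i, value in diff.items():
        frame[i] = value
    return frame

studio     = ["wall", "wall", "mouth-closed", "wall"]
next_frame = ["wall", "wall", "mouth-open",   "wall"]

diff = changed_zones(studio, next_frame)  # only zone 2 changed
```

For the reporter-in-a-studio case, the diff carries one zone instead of four, and the decoder reconstructs the new frame exactly.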

It may also contain displacement information. If the camera pans slowly, the decoder can be told simply to shift parts of the previous frame left, right, up or down to create the new frame. This information would be in a P-frame or B-frame.
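A pan can be sketched the same way. Again this is a toy, not the real motion-compensation syntax: a frame is a list of columns, and only the newly exposed edge actually needs to be transmitted.

```python
def apply_pan(previous, shift, new_edge="new-edge"):
    """Shift the previous frame left by `shift` columns.

    The decoder reuses everything it already has; only the
    `shift` columns exposed on the right are new data.
    """
    return previous[shift:] + [new_edge] * shift

prev  = ["c0", "c1", "c2", "c3"]
frame = apply_pan(prev, 1)  # three reused columns, one new one
```

One small displacement value plus a sliver of new picture replaces an entire frame's worth of data, which is exactly why slow pans compress so well.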

I-frames are sent about twice a second, or as necessary to refresh the image (i.e., the program "cuts" to a completely new scene so there is no coherence.)

But here's the catch. It's obvious that the decoder must receive an I-frame before a P-frame, otherwise the P-frame information is useless. But what's less obvious is that the P-frame must be received before the B-frames, even though the B-frames are for pictures that should be displayed before the P-frame. Therefore the decoder must hold the current I-frame and any applicable P-frames in memory while it decodes each B-frame.

So although the frames are sent in this order:

I1 P5 B2 B3 B4

they are displayed in this order:

I1 B2 B3 B4 P5
Now if I haven't put you to sleep yet, you've probably realized that at the encoding end, the encoder must store up four or five frames before it gets to the one that it's going to make into a P-frame. Those stored-up frames become the intermediate B-frames, encoded and sent after the P-frame.
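The encoder-side reordering just described can be sketched as a simple buffering loop. This is an illustration of the principle, not a real encoder: frames are labelled with their type and display position, and the function name is invented here.

```python
def to_transmission_order(display_order):
    """Reorder frames for transmission: hold B-frames in a buffer
    until the reference (I or P) frame they depend on has been sent."""
    sent = []
    pending_b = []
    for frame in display_order:
        if frame.startswith("B"):
            pending_b.append(frame)   # store up the in-between frames
        else:
            sent.append(frame)        # I or P: send it first...
            sent.extend(pending_b)    # ...then flush the buffered Bs
            pending_b = []
    return sent + pending_b

display = ["I1", "B2", "B3", "B4", "P5"]
wire    = to_transmission_order(display)
```

The four frames sitting in `pending_b` while the encoder waits for the P-frame are exactly the source of the latency discussed below: nothing can leave the transmitter until the whole group has been buffered.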

Now the efficient encoding of MPEG is something of a black art, and it requires equipment that costs more than your house, your parents' house, and your children's houses put together. Very often quite a few frames are stored up so that an optimal mix of I, P, and B-frames can be determined.

The short version is that Apollo television was actually more live than what is considered "live" today. Everybody uses MPEG encoding for uplinks, and because of the latency in the MPEG encoding process, the picture doesn't even leave the transmitter until up to a second after it actually happened.

The problem is that people like Bart Sibrel and David Percy should know this sort of thing inside and out. It's their profession. But they explain it to laymen as if it were some sort of NASA-only underhanded method of delaying the signal for some undisclosed nefarious purpose. Shame on them!