Moon landing Hoax?



Lobreiter
2007-Aug-29, 02:57 AM
I'm not denying or supporting the moon landings, but there are some things that are questionable.

Like, for example, the mass amounts of cosmic radiation experienced would have surely killed the astronauts, unless in the year 1969 they had something that could generate a magnetic field around them. Gamma radiation is the strongest on Earth and it takes 4 feet of solid lead to protect you.

And what about the onboard computers that they used? Don't you people tell me they could land someone on the moon with a computer that has not even half the computing power of my graphing calculator, because if I'm correct, it is possible to control a Saturn V rocket with a TI-83+.

But on the other side, in 1969 there were some pretty smart people, and you can't really believe anything on the internet these days. Who knows, the entire hoax thing could be the work of some ****ed off dude in his basement. Hell, I could creatively edit most of the footage and pictures to make it look fake; it's called Photoshop.

Bob B.
2007-Aug-29, 03:04 AM
like for example, the mass amounts of cosmic radiation experienced would have surely killed the astronauts...

What mass amount of cosmic radiation?


and what about the onboard computers that they used...

How much computing power is needed to run an autopilot?

JayUtah
2007-Aug-29, 03:05 AM
...that are questionable

"Are questionable," or you just don't know the answers?

...the mass amounts of cosmic radiation experienced...

Exactly how much radiation does it take to kill someone, and exactly how much would someone get by traveling to the Moon, and how do you know?

...gamma radiation is the strongest on Earth and it takes 4 feet of solid lead to protect you.

How much gamma radiation is there between the Earth and Moon?

...don't you people tell me they could land someone on the moon with a computer that has not even half the computing power of my graphing calculator...

How much computing power, exactly, is required to land on the Moon? And how do you know?

...it is possible to control a Saturn V rocket with a TI-83+.

I've seen rockets that can be controlled with a few well-placed fins. Where were those computers? How much computing power does a baseball require to get from the pitcher's mound to home plate? How much computing power does a comet have? How much does it need?

Swift
2007-Aug-29, 03:08 AM
Hi Lobreiter. Welcome to BAUT. I suggest you read the rules and the FAQs (for example, this is a family friendly board and language must be G rated).

There are some great resources for specific moon landing hoax questions.
clavius.org (http://www.clavius.org/) is a great website that answers these specific questions, as does the Bad Astronomy (http://www.badastronomy.com/) website. You can also search around the conspiracy threads right here; we have discussed both of these quite often.

I will give you two quick answers - others can give you a lot more detail. Cosmic rays and charged particles from the sun are a very different form of radiation than gamma radiation, and so the means of stopping them are very different - you don't need all that solid lead.

I don't know how old you are, but a lot of pretty advanced technology can be run without computers. And the computers on Apollo weren't all that bad; remember, they were designed for some very specific tasks, which they were well qualified to perform. A lot of the power of modern computers is used for things like graphical user interfaces and the ability to run a lot of different types of programs, neither of which the Apollo computers needed to do.

On the flip side, the evidence in support of the landings is absolutely overwhelming, and it is not just still photos and movie film - rocks, core samples, radio transmissions that can be tracked to and from the Moon, and tens of thousands of people who were involved in making it all work, none of whom have ever come forward to "spill the beans" (there are no beans to spill).

If you have other questions, or need more details, just say the word.

Bob B.
2007-Aug-29, 03:15 AM
- you don't need all that solid lead.

Heck, you don't need all that lead to stop even gamma rays. Outrageous numbers like four feet of lead (or six feet, or whatever the number of the day is) are erroneous.

JayUtah
2007-Aug-29, 03:17 AM
Even if there were gamma rays that needed stopping.

Van Rijn
2007-Aug-29, 04:13 AM
And what about the onboard computers that they used? Don't you people tell me they could land someone on the moon with a computer that has not even half the computing power of my graphing calculator, because if I'm correct, it is possible to control a Saturn V rocket with a TI-83+.


From http://en.wikipedia.org/wiki/TI-83

The TI-83 Plus is a graphing calculator made by Texas Instruments, designed in 1999 as an upgrade to the TI-83. The TI-83 Plus is one of TI's most popular calculators. It uses a ZiLOG Z80 microprocessor running at 6 MHz, a 96×64 monochrome LCD screen, and 4 AAA batteries as well as backup CR1616 or CR1620 battery. A link port is also built into the calculator. The main improvement over the TI-83, however, is the addition of 512 KiB of Flash ROM, which allows for OS upgrades and applications to be installed. Most of the Flash ROM is used by the OS, with 160 KiB available for user files and applications.
[snip]

Programming may also be done in TI Assembly, made up of Z80 assembly and a collection of TI provided system calls. Assembly programs run much faster, but are more difficult to write.

It also has 32KB of RAM.

It certainly looks to me that, with the appropriate programming and the proper hardware interface (which might or might not require major surgery depending on the capabilities of the "link port"), it probably could do AGC-like functions. Just because you're used to fancy user interfaces that require enormous resources doesn't mean they're required for this application.

Obviousman
2007-Aug-29, 04:57 AM
My old Tandy MC-10 could run a lunar landing simulation programme with a whole 4K of RAM. I can't remember what processor it used.

Van Rijn
2007-Aug-29, 05:22 AM
My old Tandy MC-10 could run a lunar landing simulation programme with a whole 4K of RAM. I can't remember what processor it used.

It had a 6803. I wasn't familiar with the 6800 series, except it was apparently pretty similar to the 6502. The CPU was okay, though I wouldn't want to have an HCF (Halt and Catch Fire (http://en.wikipedia.org/wiki/Halt_and_Catch_Fire)) instruction in the code. :)

It looks like the memory specs on the MC-10 were below that of the AGC, so it would have had problems.

Peter B
2007-Aug-29, 06:49 AM
But on the other side, in 1969 there were some pretty smart people

G'day Lobreiter, and welcome to the BAUT Forum.

Yes, the Apollo program employed essentially the best people available in the USA in a range of fields, including engineering and computing.


and you can't really believe anything on the internet these days

No you can't. But you can cross-check the claims people make. And you can also cross-check claims in books - not everything is on the Internet, you know.


Who knows, the entire hoax thing could be the work of some ****ed off dude in his basement

Well, we're fairly familiar with the backgrounds of most of the leading Hoax Believers, and what motivates them. It seems to be mostly a desire to be noticed. They're not so much interested in resolving issues as keeping them unresolved.


Hell, I could creatively edit most of the footage and pictures to make it look fake; it's called Photoshop.

But Photoshop wasn't available in 1969. Photos and video of astronauts on the Moon have been available since then.

In any case, Photoshop won't create 350 kilograms of lunar samples. The only way to collect that amount of samples in 1969 was to send astronauts to the Moon.

Laguna
2007-Aug-29, 06:51 AM
And what about the onboard computers that they used? Don't you people tell me they could land someone on the moon with a computer that has not even half the computing power of my graphing calculator, because if I'm correct, it is possible to control a Saturn V rocket with a TI-83+.

Guess what...
The AGC was not run on Windows Vista...

AGN Fuel
2007-Aug-29, 06:55 AM
I don't know how old you are, but a lot of pretty advanced technology can be run without computers. And the computers on Apollo weren't all that bad; remember, they were designed for some very specific tasks, which they were well qualified to perform. A lot of the power of modern computers is used for things like graphical user interfaces and the ability to run a lot of different types of programs, neither of which the Apollo computers needed to do.



Added to which is the frequently overlooked fact that a lot of computing grunt work was actually done on the ground and fed up to the spacecraft (leading to one of my favourite Apollo transmission phrases, "POO and accept"). :lol:

Serenitude
2007-Aug-29, 07:13 AM
It would amaze most modern Microsoft GUI kids that a fully functional web server can run on a hard drive less than 5 megabytes in size, providing full DNS, mail, Samba, etc. services TODAY, on P3-400-ish hardware, in an era when familiar bloated OSes won't even think of installing the "barebones" essentials on anything less than 3 gigs of hard disk space and dual-core processors. But such LAMP systems are entirely feasible. Just because one is unfamiliar with what a technology can do does not mean it cannot be done.

Serenitude
2007-Aug-29, 07:15 AM
Oops - forgot to welcome you to the Forums! And meanwhile, have you checked out the Bad Astronomer's debunking of some of the more enduring Apollo Hoax myths? They're written in a very accessible, layman's style ;)

Nicolas
2007-Aug-29, 07:32 AM
From http://en.wikipedia.org/wiki/TI-83

The TI-83 Plus is a graphing calculator made by Texas Instruments, designed in 1999 as an upgrade to the TI-83. The TI-83 Plus is one of TI's most popular calculators. It uses a ZiLOG Z80 microprocessor running at 6 MHz, a 96×64 monochrome LCD screen, and 4 AAA batteries as well as backup CR1616 or CR1620 battery. A link port is also built into the calculator. The main improvement over the TI-83, however, is the addition of 512 KiB of Flash ROM, which allows for OS upgrades and applications to be installed. Most of the Flash ROM is used by the OS, with 160 KiB available for user files and applications.
[snip]

Programming may also be done in TI Assembly, made up of Z80 assembly and a collection of TI provided system calls. Assembly programs run much faster, but are more difficult to write.

It also has 32KB of RAM.

It certainly looks to me that, with the appropriate programming and the proper hardware interface (which might or might not require major surgery depending on the capabilities of the "link port"), it probably could do AGC-like functions. Just because you're used to fancy user interfaces that require enormous resources doesn't mean they're required for this application.

I didn't know the 83 Plus (which I own) has 512 KiB of Flash. However, I did once install the Dutch software from CD-ROM, so now all menus and functions are in Dutch. That must have been an OS upgrade.

I think the TI-83 Plus is plenty fast enough for Apollo computer functions. But here too, you see that it depends on how well you program it, to name just one thing. Just stringing together a few high-level commands will result in slow program execution. I once ran a Mario clone on the TI-83 Plus that was written directly in assembly (some people must have loads of time). It outperformed the original Game Boy in how smoothly it ran.

So if you made a very clean program and had the hardware 100% fit for the task, I think 6 MHz and 32 KB is just fine for Apollo, possibly serious overkill for many applications. Remember that you don't need to calculate how the Moon looks, nor the physics, nor the craft; you just need basic sensor inputs and sensor-to-servo command formulas.

Laguna
2007-Aug-29, 10:50 AM
What did it actually calculate?

NEOWatcher
2007-Aug-29, 01:24 PM
And what about the onboard computers that they used? Don't you people tell me they could land someone on the moon with a computer that has not even half the computing power of my graphing calculator, because if I'm correct, it is possible to control a Saturn V rocket with a TI-83+.
Controls don't even need computers. Most of the reactions can be hardwired. The computer just allows an easier way to make it adjustable.

Anyway, if a cheap 1K 3.25MHz machine (http://en.wikipedia.org/wiki/Sinclair_ZX81) can play chess, then what's wrong with Apollo?

JayUtah
2007-Aug-29, 01:35 PM
This is starting to look rather hit-and-run.

Since we're gravitating toward the computer topic, now would be a good time to mention a couple of things.

First, the AGC is proven to work. That is, modern hobbyists have made emulators and actual working hardware from the original designs and specifications that run the actual flight software. In answer to the question, "How could the AGC have gotten them to the Moon?" several people can point to their work and say, "Just like this." You can't necessarily do the same thing with the F-1 engine (although that would be fun). The AGC is an easily-verified piece of technology, and it has been verified.

Second, NASA in the mid-1960s had no way of knowing that in 20 years computers would take over the planet. That is, they had no way of knowing the degree to which people would come to understand them and rely on them, and each to own several of them. Not many people in the 1960s had a clue how computers worked or what they could and couldn't do. So a conspiracy theory wouldn't have to go into such correct detail in order to be convincing. In fact, usually the more detail you go into in a lie, the easier it is to prove wrong. NASA (well, really MIT) didn't have to design a fully-functional computer just to fool people into thinking they had one. So why did they, if not to actually use it?

Bob B.
2007-Aug-29, 01:52 PM
What did it actually calculate?


JayUtah is the real expert, but I'll give my two bits anyway. I haven't studied the LM computer very closely, therefore I must qualify all my comments with "I think". I'll surely be corrected if any of this is incorrect. Hopefully I'm not too far out in left field.

My understanding is that the computer performed an autopilot role during landing. Data input came from the inertial measurement unit and the landing radar. The IMU included three accelerometers and three gyroscopes for measuring acceleration and attitude angles in the X-Y-Z axes. The landing radar provided altitude, horizontal velocity, and vertical velocity. If I'm correct, that is nine inputs. The computer compared these measurements against a pre-planned trajectory, and if any were out of tolerance a command would be sent to the appropriate control to correct it. For instance, if angle X is too high then pulse a designated thruster. If X is too low then pulse a different thruster, etc. If there are nine inputs and two corrective functions for each, then that really isn't a very demanding task to perform.

I believe the computer also performed rendezvous computations when returning to the CSM in lunar orbit. The Gemini computer was doing this same thing as early as 1965. Angles and distances to the target would be measured and input into the computer, which would then run a series of computations to determine the transfer orbit required to intercept the target and the necessary engine burns. This again wouldn't be anything particularly taxing for the computer.
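To put that compare-and-correct idea in modern terms, here is a rough Python sketch of one such guidance pass. It is purely illustrative -- the tolerances, axis names, and the fire_thruster callback are invented for the example, not anything from the actual LGC:

# Hypothetical compare-and-correct autopilot pass; tolerances and names invented.
TOLERANCE_DEG = {"pitch": 0.5, "roll": 0.5, "yaw": 0.5}

def autopilot_pass(measured, reference, fire_thruster):
    """Compare measured attitude angles against the planned profile and
    pulse a thruster whenever an axis drifts out of tolerance."""
    for axis in ("pitch", "roll", "yaw"):
        error = measured[axis] - reference[axis]
        if error > TOLERANCE_DEG[axis]:
            fire_thruster(axis, direction=-1)   # angle too high: pulse it back down
        elif error < -TOLERANCE_DEG[axis]:
            fire_thruster(axis, direction=+1)   # angle too low: pulse it back up
        # otherwise: inside tolerance, do nothing this pass

A handful of comparisons and a couple of thruster commands per pass -- nothing a 1960s computer would struggle with.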

EDIT:
The lunar module also included two additional computers -- the brains of the Commander and Lunar Module Pilot.

sts60
2007-Aug-29, 01:56 PM
You can't necessarily do the same thing with the F-1 engine (although that would be fun).

I've actually fired an F-1 emulator. Faithful in all respects except it was smaller-scale and solid-fueled instead of RP-1/LOX. You stick the little nichrome wire into the nozzle, connect the 6V battery, count down from 10, and ...

Bob B.
2007-Aug-29, 02:02 PM
I've actually fired an F-1 emulator. Faithful in all respects except it was smaller-scale and solid-fueled instead of LH2/LOX.

Sounds cool! One nitpick ... that should be RP-1.

ineluki
2007-Aug-29, 03:12 PM
I'm not denying or supporting the moon landings, but there are some things that are questionable

Leaving the technical details aside, while only NASA performed manned landings, they are not the only ones that went to the moon.

Perhaps you were not aware that the USSR also sent some probes to the Moon: Zond (http://en.wikipedia.org/wiki/Zond_program) to fly around it and return, and Luna (http://en.wikipedia.org/wiki/Luna_programme) to land.

IMHO this pretty much dismisses your "questionable things" without even going into the technical details. At least unless you want to claim that both sides were faking their programs while ignoring the other side's fake :wall: ...

Eta C
2007-Aug-29, 03:46 PM
EDIT:
The lunar module also included two additional computers -- the brains of the Commander and Lunar Module Pilot.

Which, according to von Braun, "are easily manufactured with unskilled labor".

:)

Jason Thompson
2007-Aug-29, 03:47 PM
I'm not denying or supporting the moon landings, but there are some things that are questionable

In general, or just to you?


like for example, the mass amounts of cosmic radiation experienced would have surely killed the astronauts,

Radiation in space is not something NASA has an information monopoly on. Where is the data that led you to conclude that it was sufficient to kill the astronauts?


gamma radiation is the strongest on Earth and it takes 4 feet of solid lead to protect you,

No it doesn't. But it's academic anyway since gamma radiation is not abundant in space in lethal quantities.



And what about the onboard computers that they used? Don't you people tell me they could land someone on the moon with a computer that has not even half the computing power of my graphing calculator,

All right, we won't. What we will tell you is that two highly trained and skilled professionals were able to land on the Moon with the aid of an onboard computer with less computing power than your graphing calculator, plus a number of skilled professionals and much larger computers on the ground back at Mission Control.

What we will also tell you is that NASA and the USSR had also by that point sent unmanned probes past, around, into orbit of, and into the surface of the Moon, some of which made a soft landing. Are you contending that the Luna, Ranger, Lunar Orbiter, Surveyor and Zond probes were also faked? If not, what makes the Apollo landing more difficult to achieve than the Surveyor landing, for instance?


because if I'm correct, it is possible to control a Saturn V rocket with a TI-83+.

No, though it might be possible to control it with something with comparable amounts of memory. In these days of desktops and laptops with fancy interfaces, excessive memory and redundant software bundled in even though you never use it, it is apparently hard for many people to realise that the early computers were dedicated to one task. Every bit of memory on the Apollo computer was used for the sole purpose of navigation. No fancy interface, no sounds, no spare memory, no extra programs, no nice graphics of the LM as it approached the surface, not even a user interface that gave messages in English. They taught the astronauts computer code rather than teaching the computer the astronauts' language. Hell, occasionally they even overloaded the computer so it couldn't do everything it was being asked to. Check out Apollo 11 and the 1201 alarm.


But on the other side, in 1969 there were some pretty smart people

Indeed there were. By that time we already had the nuclear bomb (fission and fusion), the ICBM, the nuclear submarine, the jet fighter, supersonic aircraft, the SR-71, and the passenger jet. Quantum physics was fifty years old!


and you can't really believe anything on the internet these days,

You can believe things on the net, but you shouldn't just believe uncritically. A blanket distrust of one source is as bad and misleading as unquestioning acceptance.

hplasm
2007-Aug-29, 04:06 PM
BTW - magnetic fields won't shield gamma rays, though they will deflect charged particles.

Jakenorrish
2007-Aug-29, 05:39 PM
Well, some people find it hard to comprehend that human beings went to the Moon, but the fact is that we did. Despite the computers being less powerful than your average 21st-century calculator, they were adequate for the task. And despite idiots like Bart Sibrel spouting unscientific mumbo jumbo, the people involved were also more than up to the task.

I've no doubt that in decades to come there will be people claiming that the Mir space station, the Cassini Saturn mission, and the New Horizons mission were also faked. Why? Because these individuals do not have the scientific knowledge to grasp the facts. The information they read or are told sounds too ridiculous to them to be true.

Check out the scientific facts and the masses of data available, not the (often very incorrect) innuendo put forward by Hoax believers.

sts60
2007-Aug-29, 05:44 PM
Sounds cool! One nitpick ... that should be RP-1.
Doh! Fixed.

01101001
2007-Aug-29, 05:55 PM
I've no doubt that in decades to come there will be people claiming that the Mir space station, the Cassini Saturn mission, and the New Horizons mission were also faked. Why? Because these individuals do not have the scientific knowledge to grasp the facts. The information they read or are told sounds too ridiculous to them to be true.

What a semi-pleasant thought: today's Apollo Hoax believers having to defend the existence of the ISS 30 years from now.

Or will they do so? With their powers of reasoning, maybe they'll just think back in time to now, and decide they were duped all along and the ISS -- and the Space Shuttle missions and all the cool space science and technology going on now -- was all really done in a secret movie studio with special effects.

Buckle up, hoaxies. Prepare to defend what you are now experiencing, from the claims of the nattering newbies of the future. If you care. You better have some documentation that will convince them. It better conform exactly to their expectations of what it should be. Good luck.

JayUtah
2007-Aug-29, 06:23 PM
JayUtah is the real expert but Iíll give my two bits anyway.

You give yourself far too little credit.

The AGC and LGC (same hardware, different application software) are examples of straightforward closed-loop control. "Closed loop" means that the system is able both to impose control and to sense state directly, so that a difference between what's supposed to be the case and what is the case can be remedied by computing the appropriate control action. In contrast, open-loop control simply applies a control according to some deductive procedure.

My sprinkler system is an open-loop system. The controller's output is the signal to open or close the valves that let water pass to the sprinklers. A timer tells the system when to open and close them. It is up to the user to correlate that to the desired degree of watering. A closed-loop system might measure the amount of water deposited on the ground and close the valve when a predetermined amount of water has been applied. That would more closely correspond to the user's intent and would also be more reliable in the face of faults such as varying water pressure. Any system that uses sensors to adapt the behavior of the system to a measured effect (rather than a deduced one, e.g., "10 minutes of watering is probably enough water") "closes the loop."
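As a toy illustration of the difference (a sketch only -- the valve and moisture_sensor objects here are hypothetical):

from time import sleep

def open_loop_watering(valve, minutes=10):
    # Open-loop: apply the control for a fixed time and hope it was enough.
    valve.open()
    sleep(minutes * 60)
    valve.close()

def closed_loop_watering(valve, moisture_sensor, target_mm=25):
    # Closed-loop: keep applying the control until the measured effect matches the goal.
    valve.open()
    while moisture_sensor.read_mm() < target_mm:
        sleep(10)
    valve.close()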

The basis for all guidance is the state vector: six numbers representing the position and velocity of the vehicle in a three-dimensional coordinate system anchored at some point. Three numbers give the position and the other three give its velocity as component vectors. You can reckon the state vector according to any "fixed" point that seems suitable to your task. For example, the state vector for an airplane might use the airport radio beacon as its fixed point, whereas an interplanetary spacecraft state vector would be hampered by that point moving along with the Earth; the sun might be a better reference point.

While the state vector describes where the vehicle is, it's usually also important to know where the vehicle is pointing because various mission and operational restrictions will apply. Hence you have a guidance problem in both position and orientation.

Closed-loop guidance in the positional sense constantly asks "Where am I?" and compares it against a reference of "Where should I be?" Then if a significant difference is found, it computes and applies a proper corrective action. In the orientation sense, closed-loop guidance asks "Where am I pointing?" and compares it against "Where should I be pointing?" and again takes corrective action. These two problems are completely separable. That means that small programs can be written to solve each problem separately without involving the other -- something computer scientists find comforting.

In turn, both the positional and orientational problems can be broken down even further since the problem is expressed in a 3D vector space. Many 3D problems in such spaces can be reduced to three 1D problems using the same code for each dimension. Routine state-vector maintenance measures the elapsed time since the last update and integrates the new position along some axis by multiplying elapsed time by the velocity along that axis. Repeat for each of the three axes. Because the position and velocity are expressed as components along orthogonal axes, this works mathematically with provable rigor. Orientation is not integrated from rates, but is expressed in the same component-wise fashion, which makes working with it largely a matter of the same program code applied in sequence to each of three components.
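A minimal Python sketch of that componentwise update (illustrative only; the real AGC did this in fixed-point assembly, not Python):

def update_position(position, velocity, dt):
    # New position = old position + velocity * elapsed time, one axis at a time.
    return [p + v * dt for p, v in zip(position, velocity)]

# Example: half a second at 2 m/s along X advances X by 1 m.
pos = update_position([100.0, 50.0, 0.0], [2.0, 0.0, -1.0], 0.5)
# pos is now [101.0, 50.0, -0.5]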

The questions "Where should I be?" and "Where should I be pointing?" are generally answered by consulting tables uploaded from Mission Control, which were computed by the RTCC for that mission and adapted as the mission proceeded. Different mission phases and the difference in type of flying the CSM and LM do call for different tables and ways of consulting those tables.

The questions "Where am I?" and "Where am I pointing?" can be answered in various ways. "Where am I pointing?" was most commonly answered in Apollo by consulting the IMU's stable member. That's a big hunk of beryllium in which gyroscopes are embedded. The gyroscopes spin so as to keep the beryllium hunk oriented in the same position in space. The hunk is contained within three orthonormal gimbals that allow the spacecraft to rotate (almost) freely while letting the stable member beryllium hunk to retain its orientation in space. Sensors in the gimbal swivels measure the deflection of each gimbal relative to its outer neighbor, and from this the orientation of the spacecraft relative to the stable member can be derived. In Apollo parlance this is known as the REFSMMAT -- "reference to stable-member matrix".

For those who haven't studied linear algebra, directions expressed as XYZ formulations of a vector can be converted easily from one reckoning to another by a straightforward matrix multiplication, where the matrix is composed of the coordinates of the new coordinate system as expressed in the old one. Not surprisingly, the AGC and LGC software loads contained data types and program libraries for vector and matrix arithmetic.
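For those who want to see the operation itself, a small sketch (the matrix below is just the identity, standing in for a real REFSMMAT):

def change_coordinates(matrix, vector):
    # A 3x3 matrix multiply re-expresses a direction vector in another frame.
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

# Placeholder matrix: with the identity, the "new" frame coincides with the old,
# so the star direction comes back unchanged.
refsmmat_example = [[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]]
star_direction = [0.6, 0.0, 0.8]
print(change_coordinates(refsmmat_example, star_direction))   # [0.6, 0.0, 0.8]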

The stable member is aligned with a fixed-space reference prior to launch. Those gimbal angles can then be correlated to a known orientation in space, and changes in gimbal angles can then be reckoned as changes in orientation in space.

As the stable member drifts (as gyroscopic systems do), it has to be corrected. That's what CMPs and LMPs do in Apollo. One procedure for doing that tells the spacecraft to point at a reference star. A telescope aligned very precisely with the spacecraft axis is used to "shoot" the star. The spacecraft has aligned itself with what it thinks is that star's orientation. It does that by converting the star's absolute orientation through the guidance platform's conversion matrix into relative coordinates, then positioning the spacecraft according to those coordinates. If the telescope shows that the spacecraft's relative orientation is in error, the pilot uses controls on the telescope to bring the reference star into the telescope crosshairs. When the star is "dialed in," the telescope tells the pilot the difference between the spacecraft's axis and the angle the telescope had to adopt relative to the axis in order to dial in the star, in terms of pitch and yaw corrections.

But not roll. The astute linear algebraist will rise up in revolt to tell us that one degree of freedom is still missing. The spacecraft can indeed roll through its entire gamut when properly oriented without changing the telescope error angles. And so corrections to the matrix derived from one star sighting will be incomplete. So at least two -- and in practice three or four -- star sights will be used to correct the matrix. When properly adjusted, the new matrix that describes the orientation of the spacecraft relative to the (drifted) stable member will be useful for guidance. (Unless you're Jim "Shaky" Lovell and you accidentally key in the command to zero out the reference matrix altogether!)
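One textbook way to see why a single sighting is not enough is the TRIAD construction, which needs two non-parallel directions to pin down all three rotational degrees of freedom. This is a standard method from the literature, not necessarily the correction procedure MIT coded for Apollo; a sketch:

import numpy as np

def triad(r1, r2, b1, b2):
    """Attitude matrix A (body = A @ reference) built from two reference
    directions r1, r2 and the same directions b1, b2 as measured in the
    body frame. One sighting leaves the roll about that line of sight
    undetermined; the second sighting removes the ambiguity."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack((t1, t2, t3))
    return frame(b1, b2) @ frame(r1, r2).T

With only one star you could feed in r1 and b1 but have nothing to build the second triad axis from, which is exactly the missing degree of freedom described above.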

Reference to a stable onboard member is one way to measure your orientation. But it's not the only way. Differential star-, Earth-, and sun-sensors can be used along with the same kind of math to give the guidance computer an idea of the spacecraft's orientation in space. Ballistic missiles can also use horizon sensors. These are optical instruments that "see" various objects in space. They are precisely mounted on the spacecraft structure so that the sensor axis can be directly converted to the spacecraft axis, and the measurement of deflection according to sensor axes can be reckoned easily in terms of the spacecraft axis.

So much for "Which way am I pointing?" For "Where am I?" you have a lot of options, depending on which spacecraft you're flying and what phase of the mission you're in.

Periodically the spacecraft updates the position portion of the state vector by integrating the velocity portion. The spacecraft's electronics generate a very stable 100 Hz reference signal that is coupled through a counter to a register available to the computer. The integrator program "wakes up" every so often (1-2 times per second for Apollo, 20 times per second for the space shuttle), reads the counter and resets it, multiplies the value it read (which represents the elapsed time since the counter was last reset, in hundredths of a second) by the velocity along each of the three cardinal axes, and adds the result to the position component (again, for each of the three axes).

The position portion of the state vector could also be updated from Earth. The ground stations can use the radio signal to track the spacecraft in terms of right-ascension, declination, and velocity (via doppler shift). Using their own linear algebra and some orbital mechanics, they can use the RTCC to crunch heavy numbers and compute the spacecraft's position with astonishing accuracy. Then they just beam up the new state vector to the spacecraft.

The velocity portion of "Where am I?" is a little bit more involved. The IMU also has very sensitive accelerometers that measure acceleration along the stable member's axes. They are "pendulous" accelerometers, meaning that a little mass is cantilevered out into space. Acceleration along one axis causes the mass to bend the cantilever a little bit, which can be measured by strain gauges. Another method places a floating mass next to a fixed mass with some pressure-sensitive materials between them. Acceleration pinches the sensor between the floating mass and the fixed mass. Today we use various sensors that don't require moving parts.

The accelerometers are "integrating," meaning that they have electronics that reads out the deflection in terms of counter-suitable pulses. When the accelerometer mass passes each increment of deflection, it pulses a "count up" wire. On its way back to rest position it pulses a "count down" wire for each returning increment. If you wire those outputs up to the "tick up" and "tick down" pins of a digital counter, you have an instrument that reads the momentary (snapshot) change in velocity (acceleration) as the counter value.

Imagine your bathroom scale was rigged that way. Imagine that you're the mass cantilevered out there, and that the deflection of the accelerometer arm occurred as Earth's gravity tried to accelerate your mass. For each increment of deflection, your scale's accelerometer would pulse its count-up output. And through the magic of Sir Isaac we know that acceleration, deflection, mass, and weight are all correlated quantities. When you stepped on the scale there would be a flurry of count-up signals. As you shifted your weight on the scale, there would be a lot of "noise" in count-up and count-down signals. But the counter would meta-stabilize around the number of deflection-caused pulses that your standing on the scale had (finally) induced. And then when you stepped off, there would be a commensurate flurry of count-down pulses. If the system is well-designed and well-built, that counter should read zero at the end -- i.e., the number of count-up pulses balanced the number of count-down pulses.

As a matter of fact, digital bathroom scales are rigged this way, but they use the pinch-type accelerometers.

But this is all just another job for the all-purpose integrator. Before computing the position portion of the state vector, it updates the velocity portion. After recording elapsed time, it reads the registers corresponding to the accelerometer counters to see if any acceleration is being recorded. If so, it integrates the acceleration in each axis and updates the velocity in each axis prior to updating the position.

The astute embedded-system programmer has just realized that this integration process is the same algorithm in the acceleration-to-velocity case as it is for the velocity-to-position case. On a general-purpose RISC or CISC architecture, this algorithm could be implemented in four instructions. On the AGC architecture it can be implemented in three, because of some unique addressing modes. You simply point that algorithm at different data sources for the rates and quantities to be integrated into, and you have a very competent, very simple guidance computer.

The good news is that this method works for any inertially-detectable accelerations, whether they come from engine burns, atmospheric interface, or an astronaut on EVA kicking the spacecraft. Engineers love solutions that handle effects of entire classes of causes, not just those few causes they can think of. The bad news is that not all acceleration is inertially-detectable, such as that deriving from orbital mechanics.

For orbital motion you need the orbital integrator. While it is not possible to measure accelerations directly that occur during orbital motion, you can deduce the changes in velocity with reasonable accuracy if you know the geometry of the orbit. If you know the orbital elements -- the handful of numbers that together uniquely describe an orbit's shape -- you can determine your velocity at each point according to elapsed time by solving simple orbital mechanics equations for that orbit and that elapsed time. So the orbital integrator uses given periapsis and apoapsis values and some simplified math to deduce how the velocity should change from moment to moment according to orbital motion.
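As a small illustration of that kind of geometric deduction, here is the vis-viva relation in Python (the numbers below are round illustrative figures, not mission values):

import math

MU_MOON = 4.9048695e12   # lunar gravitational parameter, m^3/s^2

def speed_from_orbit(r, r_periapsis, r_apoapsis, mu=MU_MOON):
    """Given the orbit's periapsis and apoapsis radii, deduce the speed at
    radius r from geometry alone -- no accelerometer reading required."""
    a = (r_periapsis + r_apoapsis) / 2.0    # semi-major axis
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

# Roughly a 110 km circular lunar orbit (radii in metres):
print(speed_from_orbit(1.85e6, 1.85e6, 1.85e6))   # about 1630 m/s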

Another way to measure velocity reasonably directly is with radar. If you assume the thing the radar waves are bouncing off of is fixed in your coordinate system, then radar returns measure your velocity in that coordinate system, if you take into account the radar pulse's orientation. And radar can sometimes also give you position directly, if you know the position of your radar target according to a fixed point in your coordinates.

Any practical guidance system will use as many of these measurement techniques as are possible and applicable, averaging the diverging results if necessary. So a landing spacecraft may use both radar and inertial means to maintain its state vector. It may compute its velocity according to idealized curves from its descent profile, and according to velocity information obtained from the radar. It may update its position both from radar altimeter measurements and from inertial integration. This happens because there is no one true and accurate sensing mechanism. During Apollo 11's descent you can hear a conversation about which guidance methods are going to be used, which are currently operating correctly, and finally a comment that the methods are "converging," meaning that the error between the guidance methods is growing smaller and smaller.

Fazor
2007-Aug-29, 06:30 PM
I propose that our ancestors never ate fish, because they did not have the graphite rods and reels that I have today to catch them with.

Just because the technology that would be used to do something today wasn't available in the past, it doesn't mean it couldn't be done.

The Backroad Astronomer
2007-Aug-29, 06:40 PM
Saw an old Canadian Air Farce where they start talking about the past and the comedian goes "... it was a long time ago, before they had Nintendo..."

JayUtah
2007-Aug-29, 08:12 PM
Continued from previous post.

Knowing where you are is one thing. Knowing where you need to be is another thing. Knowing how to get from one to the other varies depending on what phase of the mission you're in. That determines what dynamic effects can be brought to bear, and it may also determine certain constraints and limitations.

The basic digital autopilot attitude-hold mode compares the spacecraft's error angle and error rate against stored expectations. Again, this can be done separately for each dimension. The error angle is the difference (in some plane) between the measured orientation and the expected orientation. The error rate is how fast that angle is changing over time. So you have a tuple (E,E-dot) that describes the ship's dynamic situation in that plane -- (-10,-2) means that you're 10 degrees off in the negative direction and the error is increasing at 2 degrees per second. In that plane the desired heading might be 62 degrees, but it's measured at 52 degrees: a difference of -10.

You can graph those "coordinates" on a 2D grid (with error angle as the horizontal axis and error rate as the vertical) and watch how the spacecraft's motion on the graph behaves. That's a tool for reasoning about how to solve the orientation problem. Graphing a quantity and its first derivative like this has a name, but I forgot what it is. Obviously you want your ship to be at (0,0) as much as possible: no error angle and no error rate. But in practice you draw a little circle around the origin and say that any point inside there is "good enough."

One of the first things you notice is the symmetry of this situation through the origin: (10,2) is equivalent in many ways to (-10,-2). So the problem can be reduced to same-sign, different-sign reasoning. (10,2) represents a dangerous, diverging condition while (10,-2) requires far less attention, as does (-10,2). When the error rate is the opposite sign, that means the ship is returning to true orientation in that plane, and will require only a correction of the rate when the error angle approaches zero.

But a divergent situation must be corrected quickly. (10,2) becomes at least (12,2) on the next second, and (14,2) after that. What we need is to convert (10,2) into (10,-5) -- that is, to force the rate and the angle to be of opposite sign.

In practice, what MIT did for Apollo was to partition this graph up into zones. Each zone contained points representing error and rate situations that all required the same action to return the spacecraft to the center deadband zone. For each zone, a programmed procedure acted to transition to a different zone, and so forth until a zone transitioned to the deadband.

For example, if a guidance software pass found the spacecraft in a "significant error, no error rate" condition, it would apply a reasonable moment to induce an error rate opposite the error sign. This would transition the ship to the "significant error, corrective rate" condition. If the spacecraft had been in the "significant error, diverging rate" condition, then the goal would still be to put it into the "significant error, corrective rate" condition, but the transition program in that case might command a more aggressive corrective moment. From the "significant error, corrective rate" condition it transitions to a "marginal error, corrective rate" state after which would be the deadband, and the transition program would apply a moment to zero out the corrective rate.

Often the "corrective" zones (i.e., those in which the error and rate have opposite sign and which are thus somewhat self-correcting) are further partitioned according to mission requirements. It's not uncommon to have a very degenerate corrective zone that corresponds to a standard corrective rate. And so states with stronger corrective rates are mitigated into the standard corrective rate and states with leisurely corrections are helped along. This often simplifies that final transition from marginal to deadband.

For example, a Boeing autopilot in heading-hold mode responds to certain heading error and error-rate conditions by applying a standard bank angle (yaw rate). If you're flying along on autopilot and you suddenly give the heading knob a significant twist, the plane will bank gently into a 30-degree bank and hold it until rolling out onto the right heading. The 30-degree standard bank angle was chosen for a number of reasons, not the least of which being that Boeing knows exactly what aileron deflection command is required to roll out of a 30-degree bank in a fixed number of seconds corresponding to how long it will take to settle onto the target heading according to a yaw profile.

So the attitude-hold problem is reduced to a state automaton in each of three dimensions. The automaton has the pseudo-Markovian property that it doesn't matter how you got to that state; you know what you have to do in order to go to the one-and-only next state. It may be to apply some control output, or simply to do nothing. (In the latter case, the spacecraft circumstance will transition naturally to a new state or remain in the desired state.)

Again this makes computer scientists smile because the attitude-hold algorithm becomes dirt simple: at each cycle, see what state (E,E-dot) falls into, and take whatever corrective action is designed to transition you to the new state. It doesn't matter what put you in that state; what matters is that there's a clear procedure to get you to a better state. You don't need "history." The correctness of those algorithms is easy to prove, and they're easy to implement reliably.
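A minimal sketch of that per-axis, per-cycle logic in Python (the deadband sizes and the standard corrective rate below are invented for illustration, not MIT's tuning):

# Per-axis attitude hold driven by the (E, E-dot) zone the ship is in.
DEADBAND_ANGLE = 0.5    # degrees
DEADBAND_RATE = 0.05    # degrees per second
STANDARD_RATE = 1.0     # standard corrective rate, degrees per second

def attitude_hold_pass(error, error_rate):
    """Classify the current (error, error-rate) state and return the commanded
    change in rate. No history is needed: each state has one prescribed way out."""
    if abs(error) < DEADBAND_ANGLE and abs(error_rate) < DEADBAND_RATE:
        return 0.0                                  # deadband: do nothing
    if abs(error) < DEADBAND_ANGLE:
        return -error_rate                          # on target: null the residual rate
    desired_rate = -STANDARD_RATE if error > 0 else STANDARD_RATE
    return desired_rate - error_rate                # drive toward the standard corrective rate

print(attitude_hold_pass(10.0, 2.0))    # -3.0: diverging, command a strong opposing change
print(attitude_hold_pass(10.0, -1.0))   #  0.0: already converging at the standard rate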

So how do those state transitions occur?

It depends on what you're flying and what mode it's in. In an airplane, commanding moments usually means deflecting control surfaces to generate roll, yaw, and pitch rates. For a launch booster it might mean gimballing the engine to generate pitch and yaw moments. For the ISS it might mean wrenching a heavy gyroscope rotor off-axis. For a coasting Apollo spacecraft it might mean RCS firings. The important concept is the division between the logic that determines a corrective moment is required and the mechanism by which the moment is applied.

The same error-amount, error-rate approach can be taken with positional control, but sometimes automatic control requires leaking over into attitude control and into higher-level brain functions. Positional errors often can have a much larger deadband. In Apollo's coast phase, positional errors were accumulated until one of the preplanned mid-course correction points. By that time they could be many miles off course. In contrast, the LM landing phase had to keep a pretty good rein on distance above the ground.

Certain mission phases such as landing and rendezvous had special dynamic considerations. That led to custom programs and preferential consideration of certain parameters. The state vector for a landing LM is reckoned according to the desired landing point. The ship's position is reckoned in uprange distance and altitude, with lateral displacement occupying the other dimension. Velocity is reckoned in lateral velocities, forward speed, and sink rate. Sink rate ("h-dot") is all-important because that's what determines whether you live or die. Forward and lateral displacements affect only whether a pinpoint landing can be achieved.

There is a qualitative difference among the control methods used in each of these primary axes. The spacecraft is nominally upright, so some restricted math can be used. You can still treat each axis (mostly) independently, but you can also directly connect some errors with some specific corrective action. Errors in (H,H-dot) (altitude, sink rate) are corrected in the DPS throttle. Closed-loop control gives us the clever side-effect of managing sink rate with a spacecraft whose mass diminishes as fuel is expended. The guidance system merely knows what the acceptable deadband is for (H,H-dot) at any given time, and its state-transition procedures adjust the throttle to force the ship into the proper state. It doesn't matter how the ship got into that state; there's a straightforward procedure for getting it to a better state. If the sink rate isn't fast enough because the ship is lighter due to fuel expenditure, the system just recognizes an "out of tolerance" condition and throttles down appropriately. That's a real noodle-baker for some people who want to solve the problem the hard way. Closed-loop control is very good at avoiding complex ad hoc solutions.
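Here's a toy simulation of that effect (all numbers invented for illustration -- this is not the LM's throttle law). The guidance pass never models the mass change; it just keeps correcting the out-of-tolerance sink rate, and the throttle setting drifts down on its own as the ship gets lighter:

# Toy closed-loop throttle holding a target sink rate while mass drops as fuel burns.
LUNAR_G = 1.62          # m/s^2
TARGET_SINK = 1.0       # desired descent rate, m/s
MAX_THRUST = 45000.0    # N (round number)
EXHAUST_V = 3000.0      # effective exhaust velocity, m/s
DT = 0.1                # seconds per guidance pass

mass, sink_rate, throttle = 15000.0, 1.0, 0.55
for _ in range(600):                                  # one minute of passes
    accel_up = throttle * MAX_THRUST / mass - LUNAR_G
    sink_rate -= accel_up * DT                        # positive sink_rate = descending
    mass -= throttle * MAX_THRUST / EXHAUST_V * DT    # fuel burned this pass
    # The guidance pass itself: a bare out-of-tolerance check, no model of the mass.
    if sink_rate > TARGET_SINK + 0.05:
        throttle = min(1.0, throttle + 0.01)          # sinking too fast: throttle up
    elif sink_rate < TARGET_SINK - 0.05:
        throttle = max(0.1, throttle - 0.01)          # sinking too slowly: throttle down

print(round(sink_rate, 2), round(throttle, 2), round(mass))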

Lateral corrections can be done by the RCS in translation mode, or by vectoring the DPS. Same with approach speed corrections. All these presume the ship is relatively upright, meaning that it's not a general-purpose guidance program. It works only for landing, taking advantage of what the designers know will have to be the case in a successful landing approach. And it goes by the name of P64.

The trick in general positional guidance is that a spacecraft usually has one powerful motor aligned along one axis. Where a substantial corrective delta-v is required, the primary motor must first be aligned in that direction, which crosses the line from positional control to attitude control. That's why positional corrections either adopt a custom, non-general approach (e.g., P64) that presumes motor orientation and preferential dimensions in the profile, or they accumulate error until an elaborate, piloted maneuver can bring together the various guidance systems for a coordinated correction.

Van Rijn
2007-Aug-29, 08:17 PM
I propose that our ancestors never ate fish, because they did not have the graphite rods and reels that I have today to catch them with.

Just because the technology that would be used to do something today wasn't available in the past, it doesn't mean it couldn't be done.

The funny thing is that we still do the same kind of things. Microcontrollers are used all the time in embedded applications (as this was). These are often 8-bit CPUs with small instruction sets and less RAM and ROM than the AGC, and they are used to run household appliances, watering system timers, and millions of other devices we use every day. For these applications unit cost matters, so it pays to have the engineers and programmers spend more time with lower-end hardware, writing tighter software.

An example is the PIC (http://en.wikipedia.org/wiki/PIC_microcontroller) family. A low end chip can have 16 bytes of RAM, less than half a Kb of flash memory, and 33 instructions. Such a device would not be sufficient for the AGC, yet it is sold and used today.

JayUtah
2007-Aug-29, 09:11 PM
Continued from previous post.

With the highly modular approach in place that cleanly separates measurement and correction, you can begin to see a framework for high-level control.

As I mentioned before, you can steer a Boeing airplane on autopilot just by turning a knob. If you are flying heading 090 and you turn the knob so that it reads 180, the plane goes through a set of determined state transitions to stabilize on the new heading. But what have you really done? You gave the autopilot a new heading to fly. But that doesn't trigger anything special in the autopilot. The autopilot logic doesn't know you twisted a knob. It just knows that "suddenly" there's a huge error in the heading it's supposed to be flying. It doesn't matter how that error got there. It might have been a wind gust, or you simply moving the goalposts. But the autopilot does know what to do when it sees an error.

So the action taken when the goal is deliberately changed doesn't differ from the action taken to correct error. That is, the mechanism doesn't respect the difference between an anomaly and a change of plans because there is no difference. That's a value judgment we impose on it.

Then you realize the power of these guidance systems and how such power can be eked out of comparatively little, dirt-simple technology. Reliable technology, too.

These low-level mechanisms for detecting and correcting errors can be used to control the spacecraft simply by loading them with new expectations from time to time. So when the pilot decides to point the CSM in order to give the high-gain antenna a good view of Earth, he can go to the computer and punch in a new attitude -- say, 0,0,1 -- and the attitude hold mode recognizes the "error" and applies a "corrective" action to orient the spacecraft to the new heading.

You can wire the same sort of action into manual control. Pushing the RHC forward in the CSM means "add a downward amount to the expected pitch rate proportional to how far I pushed the stick." On the next guidance pass the autopilot would notice that the actual pitch rate was less than the commanded pitch rate, thus the spacecraft's new coordinate in the pitch-axis guidance graph would be "significant error, zero rate" instead of deadband, and the computer would command the transition to "significant error, corrective rate." What changed to make this happen? The RHC action changed the goalposts.

There was actually some discussion about how that should actually work. Some pilots wanted it so that the roll/pitch/yaw rate would be held constant as long as the RHC deflection was constant. Releasing the controller and allowing it to come back to detent meant "accept the current heading as the desired heading and zero the intended rate." Other pilots wanted it so that the stick commanded the rate only, and that you had to apply reverse input to zero the rate. It's just a software change.

In the landing LM, one hand controller nominally selected the landing point. Pushing the stick forward meant "land long," while pulling it back meant "land short." You had lateral adjustments too. In terms of P64, that simply redefined the coordinate system. The spacecraft's dynamic condition in the new coordinate system generates an "error" and the system corrects it as with any other error. Push the stick = "add 100 meters to the downrange coordinate of the landing point." Dirt simple. Subsequent autopilot pass = "Hey, I'm below/behind the desired glide path; better increase downrange velocity and/or arrest the sink rate." Dirt simple. But if you took the intuitive approach you might imagine having to recompute the whole landing profile on the fly, or some such nonsense.
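A sketch of that "move the goalposts" idea (hypothetical names and gains; this is not the P64 code):

# Redesignating the landing site only changes the target coordinate.
# The guidance pass cannot tell whether the resulting "error" came from drift
# or from the commander nudging the stick; it corrects both the same way.
target_downrange_m = 0.0

def stick_forward(clicks=1):
    """Each forward click moves the aim point 100 m farther downrange (invented step)."""
    global target_downrange_m
    target_downrange_m += 100.0 * clicks

def guidance_pass(current_downrange_m, current_speed):
    error = target_downrange_m - current_downrange_m
    return current_speed + 0.1 * error    # invented gain: speed up toward the new aim point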

Very clever behavior can be built up in layers from very simple-minded tools implemented on very simple-minded computers. In fact, the more simple-minded the better -- less to break or malfunction.

The LM ascent was even more simple-minded. Starting with the landing point, ground-based computers provided a table of space-fixed orientations that corresponded to a pre-computed ascent profile. It had been precomputed based on piecewise-linear impulses of the fixed-thrust APS. In other words, first go straight up for 10 seconds. Then pitch downrange a few degrees and fly that orientation for a number of seconds. Then pitch even farther downrange and fly that orientation for several seconds. It only remained for ground computers to compute absolute orientations for the relative deflections, from the local space-fixed "up" at the landing site, at the projected liftoff time. Then it beamed that "pad" up to the LM via telemetry. Armed with its time-indexed table of intended headings, the liftoff program was basically:

1. turn on APS
2. for each table entry i
2a. load heading into digital autopilot
2b. wait delay[i] seconds
3. turn off APS

And the digital autopilot periodically found itself in an "error" condition as a new heading was loaded into it, and operated the RCS to "correct" the ascent stage onto the new heading. The same program code implemented the ascent profile as corrected the off-axis thrust and shifting center of mass.

At the end of the program above, the LM should have found itself a certain distance above the lunar surface moving downrange at a certain velocity, provided the attitude-hold mode worked correctly and the APS developed sufficient thrust. But a certain amount of slop was allowed in the LM ascent orbit insertion in order to be able to implement it with the simplest computer commands. The plan was to get the LM to any stable orbit around the Moon, and then figure out what exactly would be done quantitatively to bring about the rendezvous.
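Rendered as modern table-driven code, the outline above amounts to roughly this (a sketch; the headings, delays, and the autopilot/engine objects are placeholders, not a real ascent pad):

import time

# Placeholder ascent pad: (roll, pitch, yaw) in degrees, then seconds to hold it.
ASCENT_PAD = [
    ((0.0, 0.0, 0.0), 10),     # straight up for 10 seconds
    ((0.0, 20.0, 0.0), 30),    # pitch downrange and hold
    ((0.0, 52.0, 0.0), 180),   # pitch farther over for the rest of the burn
]

def fly_ascent(autopilot, engine):
    engine.on()                               # 1. turn on APS
    for heading, hold_seconds in ASCENT_PAD:  # 2. for each table entry
        autopilot.set_attitude(heading)       # 2a. the autopilot sees a new "error" and corrects onto it
        time.sleep(hold_seconds)              # 2b. wait out this leg of the profile
    engine.off()                              # 3. turn off APS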

And again you have the same sort of "Where am I? Where do I need to be?" approach. Using observations from Earth and radar observations from the CSM (whose orbit was known very precisely), the orbital elements of the LM's rag-tag orbit could be derived, and then it's a matter of using simple orbital mechanics to figure out how to convert the LM's convenience orbit into the proper rendezvous orbit. Buzz Aldrin had pioneered the closed-loop rendezvous method, as well as the lesser-known seat-of-the-pants rendezvous method.

People who bad-mouth the AGC simply don't know what they're talking about. In terms of simplicity-versus-capability, code design and organization, modularity, severability, fault-tolerance, and sheer genius, the AGC and LGC software stacks easily outstrip a great deal of code written today. They are excellent examples of how to do a whole lot with a whole little code, much of it practically bulletproof.

Swift
2007-Aug-29, 09:25 PM
Originally Posted by Bob B.
EDIT:
The lunar module also included two additional computers -- the brains of the Commander and Lunar Module Pilot.

Which, according to von Braun, "are easily manufactured with unskilled labor"

:)
Sure, you can build the hardware with unskilled labor, but the programming is much better done with some skilled "programmers". ;)

JayUtah
2007-Aug-29, 09:33 PM
The funny thing is that we still do the same kind of things.

Proper design is a holistic approach. If you ask me to build you a spacecraft for a mission, I can do it, and I'll probably include some kind of programmable digital computer in the design. If you asked me to build a spacecraft for a particular mission and stipulated that I had to use a certain kind of computer of lesser capacity, I could do it if you let me compensate in other ways for the lesser computer. If you asked me to build you a spacecraft for a mission and not use any computer at all, I could still do it, but only with a great deal of thought. We use computers because they are useful, not because they are necessary -- however, we elect to do some things only because we know the computer is a capable tool that lets us do them.

A low end chip can have 16 bytes of RAM, less than half a Kb of flash memory, and 33 instructions. Such a device would not be sufficient for the AGC...

Not for the AGC, but I could build a guidance system around it. The early guidance systems had only six words of erasable, addressable memory -- the state vector. The rest of the guidance "program" can be implemented in terms of dedicated comparators and counters. A dedicated multiplier circuit tied to a crystal-driven counter works as an integrator. Comparators can provide control outputs based on comparisons between the state vector elements and stored expectations.

We do the integration in general-purpose software because these days we can, not because you can't do it any other way. We do the comparisons in general-purpose software these days because we can, not because there's no other way to do it. That was good enough to get megatons to Moscow. As the processors get more and more powerful, it just makes sense to convert the other parts of the solution to computer algorithms. A few computer instructions on a general-purpose CPU are easier to deal with than a dedicated comparator or multiplier circuit. And with the flexibility that software implementations give, we can be more audacious in mission design.

Bob B.
2007-Aug-29, 09:56 PM
Thanks, Jay. That was some good stuff, I learned much.

Neverfly
2007-Aug-29, 10:13 PM
JayUtah! For God's sakes man! Take a breath before you pass out on the floor!

Van Rijn
2007-Aug-29, 10:20 PM
We do the integration in general-purpose software because these days we can, not because you can't do it any other way. We do the comparisons in general-purpose software these days because we can, not because there's no other way to do it.


Yes, and you see the same trend in commercial embedded applications. Where older hardware would be chip-heavy with more of the design embedded in hardware (even without a microcontroller at one time), software in a microcontroller can do many of the same functions at lower unit cost, and is easier to modify too. The speed of some of the new microcontrollers allows what some call "virtual devices", where fairly sophisticated hardware is replaced with simple bit toggles and software handling much of the work.
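
"Bit toggles and software" in practice looks something like this rough sketch (written in Python just to show the idea; a real part would do this in C or assembly on a timer interrupt, and all the numbers are made up):

PERIOD_TICKS = 100          # one PWM period = 100 timer ticks (illustrative)

def make_soft_pwm(duty_percent):
    """Return a 'virtual' PWM device: software decides the output bit each tick."""
    on_ticks = PERIOD_TICKS * duty_percent // 100
    def output_bit(tick):
        return 1 if (tick % PERIOD_TICKS) < on_ticks else 0
    return output_bit

# "Run" the virtual device for a few periods and measure the duty cycle.
pwm = make_soft_pwm(duty_percent=30)
samples = [pwm(t) for t in range(10 * PERIOD_TICKS)]
print(f"measured duty cycle: {100 * sum(samples) / len(samples):.0f}%")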

Anyway, my point was that, while it may be a foreign concept to anyone who hasn't worked on an embedded application, we still program and use computers similar in general concept and capability to the AGC for similar purposes.

Bob B.
2007-Aug-29, 10:42 PM
It seems like everything today is controlled by microprocessors. It doesn't seem like all that long ago when I worked on control systems that had no computer at all. You can do an awful lot with just synchronous motors, cams, and switches.
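
In fact the cam-and-switch approach maps straight onto a table in software. A toy sketch (mine, in Python; the cycle steps are invented, not any real appliance's firmware): the cam profile becomes a list of steps, and the synchronous motor becomes a clock.

CYCLE = [
    # (minutes, fill_valve, agitator, drain_pump, spin_motor) -- illustrative
    (2,  1, 0, 0, 0),   # fill
    (10, 0, 1, 0, 0),   # wash/agitate
    (2,  0, 0, 1, 0),   # drain
    (4,  0, 0, 1, 1),   # spin
]

def outputs_at(minute):
    """Return the switch states the 'cams' would close at a given minute."""
    elapsed = 0
    for duration, *switches in CYCLE:
        elapsed += duration
        if minute < elapsed:
            return switches
    return [0, 0, 0, 0]          # cycle complete, everything off

for m in (1, 5, 13, 15, 30):
    print(f"t={m:2d} min -> fill/agitate/drain/spin = {outputs_at(m)}")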

JayUtah
2007-Aug-29, 10:57 PM
My clothes washer still uses a sequencer based on a synchronous motor driving cams.

captain swoop
2007-Aug-29, 11:50 PM
so does mine :)

AGN Fuel
2007-Aug-30, 02:46 AM
Hi Jay,

Entries cut, pasted together, printed and filed with my Apollo bits and pieces. That was as fine a summary of anything as I have read in many years.

Thank you for the time and effort that you put into that. It is sincerely appreciated.

Graybeard6
2007-Aug-30, 05:11 AM
I don't have an internet reference, but the original computer for the space shuttle used a Z-80 (can you say "Osborne 1"?) and the ISS used an Intel 286 (PC XT). We've lost shuttles, but not because of computer power.

JayUtah
2007-Aug-30, 05:33 AM
The original computer for the space shuttle was the IBM AP101. The current version is AP101S, a solid-state revision. The engine controllers are redundant Motorola 68030s.

pzkpfw
2007-Aug-30, 05:39 AM
I don't have an internet reference, but the original computer for the space shuttle used a Z-80 (can you say "Osborne 1"?) and the ISS used an Intel 286 (PC XT). We've lost shuttles, but not because of computer power.

80286 would be AT not XT, wouldn't it?

"Advanced" makes all the difference. :-)

Hobo
2007-Aug-30, 05:43 AM
How much computing power does a comet have? How much does it need?

OMG! I saw one through a telescope, and it looked so real. I was completely fooled. Why would anyone fake comets though?

The Backroad Astronomer
2007-Aug-30, 05:56 AM
OMG! I saw one through a telescope, and it looked so real. I was completely fooled. Why would anyone fake comets though?
To hide the spaceships.

Ufonaut99
2007-Aug-30, 05:58 AM
While we're on computers in space, any truth to the following rumours I've come across in my time :

1) When the shuttle went on its first test flight off the back of the 747, as soon as the bolts fired one of the computers promptly died (but the shuttle was still flyable, due to multiple redundancy - is that 5 computers?)

2) The Hubble Space Telescope uses "old-fashioned" Ferrite Core memory, since it's less prone to being affected by the Van Allen belts (but hey, best tool for the job if true).

Occam
2007-Aug-30, 06:56 AM
I'd like to thank you, Jay, for your posts in this thread. As an Apollo freak of long standing, I've found the info fascinating and have saved it out to a document, to re-read.

Maksutov
2007-Aug-30, 06:58 AM
Interesting note about the Apollo 11 computers.

During the last phase of the LM landing, it was under the control of computers that put even our current PCs to shame: Buzz Aldrin's and Neil Armstrong's brains.

Neverfly
2007-Aug-30, 07:44 AM
I'd like to thank you, Jay, for your posts in this thread. As an Apollo freak of long standing, I've found the info fascinating and have saved it out to a document, to re-read.

You almost have to. JayUtah just took his time to write us a book for free.

GeorgeLeRoyTirebiter
2007-Aug-30, 08:30 AM
It seems like everything today is controlled by microprocessors. It doesn't seem like all that long ago when I worked on control systems that had no computer at all. You can do an awful lot with just synchronous motors, cams, and switches.

I got to see the downside of poorly-designed sequencers when I worked as a projectionist. Several of the older projectors used electromechanical automation systems. I eventually figured out that it was easier to just do everything manually (even though I was running several screens at once) rather than try to fight those unreliable monstrosities. It really gave me an appreciation for microprocessors. The solid-state controls weren't necessarily more reliable, they were just easier to reset when they "went stupid."

However, the sequencer in my washing machine has given me nary a problem.

Laguna
2007-Aug-30, 08:31 AM
Wow, Jay!
A simple answer would have sufficed... ;)
But anyway. Thank you for your enormous effort. http://www.cosgan.de/images/midi/konfus/g040.gif

Jakenorrish
2007-Aug-30, 08:47 AM
I propose that our ancestors never ate fish, because they did not have the graphite rods and reels that I have today to catch them with.

Fazor, having stumbled into work bleary-eyed, I switched on my PC, logged into BAUT for five minutes and LMAO!

I haven't as yet had time to read through all of JayUtah's posts; I'm printing them out and saving them for a hot bath and a nice glass of wine later. The water will have gone stone cold by the end, but it'll be worth it! :D

MG1962A
2007-Aug-30, 10:04 AM
**Shakes head** Wonders what James Cook, Columbus, Vasco da Gama would make of all this. Thanks NASA for finding ways to make things happen, rather than worry about why it couldn't

JayUtah
2007-Aug-30, 05:24 PM
Neverfly: JayUtah! For God's sakes man! Take a breath before you pass out on the floor!

Wait 'til I get started! Where was I?

Van Rijn: Anyway, my point was that, while it may be a foreign concept to anyone who hasn't worked on an embedded application, we still program and use computers similar in general concept and capability to the AGC for similar purposes.

Yes, and often with rather "primitive" tools. More on this later. Our theater just upgraded the control system for our Scala stage. The computers that ran it initially looked very clunky. The computers that run it now look very clunky. But since they literally hold life and limb in the balance, I don't care how many megawhatsis or gigathingies they have, or how much more powerful your iPhone is. I care that they don't break while I'm on (or under) the stage. They need to work the first time, every time, all the time.

AGN Fuel: Thank you for the time and effort that you put into that.

You're welcome, as are all those others who said thanks. Thanks really go to a client who kept my whole team in limbo for a day during a touchy deployment while they fixed some stuff at their end. Sort of a "hurry up and wait" scenario. It turned into a "post to BAUT while we retract heads from orifices."

pzkpfw: 80286 would be AT not XT, wouldn't it? "Advanced" makes all the difference. :-)

The earlier examples of the Intel x86 architecture are available in packages and densities suitable for embedded use in space deployments, and have a fairly rich history in both payload and vehicle controllers.

RobA: 1) When the shuttle went on its first test flight off the back of the 747, as soon as the bolts fired one of the computers promptly died (but the shuttle was still flyable, due to multiple redundancy - is that 5 computers?)

I don't know if that rumor is true, but the orbiter has computer redundancy at several levels and in several modes. The orbiter has 5 general-purpose IBM AP-101 flight computers. If you can program the IBM S/360 or S/370 mainframes, you will find the AP-101 architecture very familiar. The current flight variant AP-101S is reasonably modern computer hardware. Four of the computers run one software load and vote among themselves, so a failure in any one of them is simply outvoted; the fifth runs a completely separate software load designed to do the same tasks, as a guard against a common error in the primary software. The loss of one computer does not generally affect mission operations.
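
Reduced to a toy, the voting idea is nothing more than this (a Python sketch of the general concept, not shuttle code; the values are invented):

from collections import Counter

def vote(outputs):
    """Majority vote over redundant computer outputs; None if no majority."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

primary = [42.0, 42.0, 42.0, 0.0]    # fourth unit has failed (illustrative)
print(vote(primary))                  # -> 42.0: the failed unit is outvoted

The independent fifth load exists for the failure the voter cannot catch: all four primaries agreeing on the same wrong answer.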

2) The Hubble Space Telescope uses "old-fashioned" Ferrite Core memory, since it's less prone to being affected by the Van Allen belts (but hey, best tool for the job if true).

The original flight computer was the DF-224, which was indeed a tried-and-true 1970s design. It did not use magnetic ferrite core memory; it used plated-wire memory, which has many of the same properties as core memory and works according to similar principles. Both core and plated-wire memories are insusceptible to ambient radiation, but now we are able to harden semiconductor memories to a similar degree. This is highly desirable since the robustness of core and wire memories comes at a substantial penalty of mass, power consumption, and storage density. Core and wire memories are also non-volatile, meaning that they retain their storage when power is disconnected. Since the software is written with this in mind, replacing core memory with volatile semiconductor memory is problematic. In practice, core replacements made with semiconductor components require special power supplies to keep up the appearance of non-volatility.

The DF-224 was replaced long ago by a semiconductor-based computer.

GeorgeLeRoyTirebiter: I got to see the downside of poorly-designed sequencers when I worked as a projectionist.

You mean the guy who goes around attributing his faults to others? :)

The solid-state controls weren't necessarily more reliable, they were just easier to reset when they "went stupid."

The mark of a truly great automation and control system is the ease with which it can be restored to a known and helpful state. Our old stage system mentioned above took 45 seconds to reset after an astrogal trip. That's an eternity in theatrical time. Of course you want a system that never fails, but that's not practically achievable. So if you can't make a perfect system, make a system that recovers well.

However, the sequencer in my washing machine has given me nary a problem.

I'll venture a guess that the projector automation was a tacked-on addition to what originally were designed as manual controls. The washing machine was designed from the ground up to be automated a certain way.

Laguna2: A simple answer would have sufficed...

A wizard did it.

Jakenorris: I'm printing them out and saving them for a hot bath and a nice glass of wine later. The water will have gone stone cold by the end...

Yes, but depending on the size of the wine glass you may not care.

MG1962A: Wonders what James Cook, Columbus, Vasco da Gama would make of all this.

Considering they lived in a time when their sailors wanted the compass in a binnacle so that the evil spirits that operated it wouldn't get out and affect the ship.... Or maybe that's just an old sea story.

Anyone who gets a chance to see a World War II capital ship should pay very close attention to the inertial navigation system. It's an amazingly complicated, amazingly helpful electromechanical analog computer.

Thanks NASA for finding ways to make things happen, rather than worry about why it couldn't.

In all fairness the Apollo guidance system descends directly from ICBM guidance systems, which were tasked with more efficiently destroying large portions of civilization, if needed. It's a plowshare made out of a very effective sword.

MG1962A
2007-Aug-30, 07:14 PM
Jay considered

MG1962A: Wonders what James Cook, Columbus, Vasco da Gama would make of all this.

Considering they lived in a time when their sailors wanted the compass in a binnacle so that the evil spirits that operated it wouldn't get out and affect the ship.... Or maybe that's just an old sea story.

I would suggest it was an old sea story. Sailors trusted the compass, even if they didn't understand the science behind it. It was only when the thing stopped pointing in the direction they expected that they got worried. IIRC there was such an incident on Columbus' trip, when his fleet passed through one of the magnetic variations.

hplasm
2007-Aug-30, 07:22 PM
My clothes washer still uses a sequencer based on a synchronous motor driving cams.

As did Soyuz originally, If I remember correctly...?

Bob B.
2007-Aug-30, 07:44 PM
IIRC there was such an incident on Columbus' trip, when his fleet passed through one of the magnetic variations.

That was caused by a UFO in the Bermuda Triangle.
:)

JayUtah
2007-Aug-30, 08:27 PM
I would suggest it was an old sea story. Sailors trusted the compass, even if they didn't understand the science behind it.

Maybe the binnacle was to keep the evil spirits out then. There's got to be some sort of evil spirit aspect to this story.

Occam
2007-Aug-30, 08:34 PM
I would suggest it was an old sea story. Sailors trusted the compass, even if they didn't understand the science behind it.

Maybe the binnacle was to keep the evil spirits out then. There's got to be some sort of evil spirit aspect to this story.

Navy rum is fairly evil

JimTKirk
2007-Aug-30, 11:06 PM
Navy rum is fairly evil

Maybe you're thinking of grog... I think that was rum cut with water.:mad:

captain swoop
2007-Aug-30, 11:22 PM
That depends on the period you are looking at. In Nelson's time the ration was a gallon of beer a day, or 1 pint of wine, or 1/2 pint of spirit, in that order, the supply of both beer and wine having to be exhausted before spirit was issued. This progression would appear to make spirit issue relatively rare and unlikely in the extreme in Home Waters and in Port. Rum was issued pre-mixed with one and a half pints of water, with lemon or lime juice and sugar added, making two pints of grog, which I can tell you is a very pleasant, slightly alcoholic lemon cordial.

In 1850 the rum ration was reduced to one eighth of a pint (a gill), at a rum-to-water ratio of 1:3; Petty Officers and above received their rum neat. Grog money was introduced for teetotallers at a rate of 1 shilling and 7 pennies per month. In 1937 the ratio of rum to water was reduced to 1:2, and the Rum Ration was abolished 31 July 1970.

The main problem with grog was that many didn't drink all their tot all the time and used to pass it around the mess as "sippers" for the hardened core of drinkers, and "grog money" was worth so little that it wasn't worth claiming in lieu.
In addition, many favours could be obtained with the "tot", so much so that it became a form of currency: "sippers", "gulpers", "three fingers" and a "tot" all had value in an ad hoc world of bargaining where someone would cover for someone else at a "muster of hands" or carry out another task in exchange! It was surprising what lengths a "hardened bubbly rat" would go to obtain more than his daily ration of "grog". In senior rates' messes it became quite common for those who didn't care to take their neat tot every day to bottle it, and after a period of time one could accumulate quite large quantities.

Joe Durnavich
2007-Aug-31, 12:04 AM
Swoop, you are the Jay Windley of booze.

GeorgeLeRoyTirebiter
2007-Aug-31, 12:07 AM
I got to see the downside of poorly-designed sequencers when I worked as a projectionist.

You mean the guy who goes around attributing his faults to others? :)

:lol: I almost choked on my coffee with this. It's so true. A corollary is that the equipment only malfunctions for me. When some other fool is running the booth, it's always operator error.


However, the sequencer in my washing machine has given me nary a problem.

I'll venture a guess that the projector automation was a tacked-on addition to what originally were designed as manual controls. The washing machine was designed from the ground up to be automated a certain way.

To a certain extent, the automation in any movie theater is jury-rigged. There are just too many different configurations for an "out of the box" solution.

However, most of this equipment was early '80s vintage and designed with automation in mind. For example, the projector head just has terminals to hook up the electricity for the motor. It's up to the theater to decide whether to switch it on and off with a computer, a toggle switch, or something in between. Most of the manual controls were actually overrides built into the automation unit.

Occam
2007-Aug-31, 12:09 AM
Anyone who has drunk 200 proof Pusser's Rum (NZ Navy issue) knows what I mean

captain swoop
2007-Aug-31, 12:31 AM
In the UK there are several brands of Rum claiming to be 'Navy Rum' but none of them are like the proper Pussers Spirit from Pompey or Chatham (as was)

captain swoop
2007-Aug-31, 12:35 AM
Swoop, you are the Jay Windley of booze.

I am from a Maritime family.

I don't like rum much myself, Give me a Malt any day.

Van Rijn
2007-Aug-31, 01:20 AM
Neverfly: JayUtah! For God's sakes man! Take a breath before you pass out on the floor!

Wait 'til I get started! Where was I?


I'm not sure, but I'd suggest you avoid discussion of land wars in Asia.

Graybeard6
2007-Aug-31, 06:08 AM
the Rum Ration was abolished 31 July 1970
A day I remember well. I was in the US army in Hawaii, on detached duty to complete my degree. A British destroyer was in port and they invited US military members to help them "dispose" of their excess rum. It was a hard job of work, but we pulled it off; by midnight there was no rum left!

MG1962A
2007-Aug-31, 08:24 AM
A day I remember well. I was in the US army in Hawaii, on detached duty to complete my degree. A British destroyer was in port and they invited US military members to help them "dispose" of their excess rum. It was a hard job of work, but we pulled it off; by midnight there was no rum left!

Good to see two great nations coming together to solve a problem

***Note to self** Full bar fridges at next UN meeting

ZappBrannigan
2007-Aug-31, 07:08 PM
I just want to point out that a drive-by poster started a thread that has taught me more about the AGC, autopilots, and rum in the British Navy than I ever thought possible. Thank you, Lobreiter, wherever you are!

Van Rijn
2007-Aug-31, 08:04 PM
That's the irony of threads like this: The MHer rarely learns anything from it (since they either refuse to listen or simply don't understand the answers), but the rest of us do.

JimTKirk
2007-Aug-31, 09:41 PM
<snip> In 1850 the rum ration was reduced to one eighth of a pint (a gill), at a rum-to-water ratio of 1:3; Petty Officers and above received their rum neat. <snip>

I think the original dilution was a little harsher... (at least wiki thinks so)


A half pint (http://en.wikipedia.org/wiki/Pint) of rum mixed with one quart (http://en.wikipedia.org/wiki/Quart) of water and issued in two servings before noon and after the end of the working day became part of the official regulations of the Royal Navy (http://en.wikipedia.org/wiki/Royal_Navy) in 1756 and lasted for more than two centuries.

http://en.wikipedia.org/wiki/Grog

hplasm
2007-Aug-31, 10:08 PM
That's the irony of threads like this: The MHer rarely learns anything from it (since they either refuse to listen or simply don't understand the answers), but the rest of us do.

Wasn't one of the MHer family in 'The Man with Two Brains'? :whistle:

captain swoop
2007-Sep-01, 12:44 AM
I think the original dilution was a little harsher... (at least wiki thinks so)


Quote
A half pint of rum mixed with one quart of water and issued in two servings before noon and after the end of the working day became part of the official regulations of the Royal Navy in 1756 and lasted for more than two centuries.
http://en.wikipedia.org/wiki/Grog

Half a Pint to a Quart is 4:1.

Falconer in his 'Universal Dictionary of the Marine' (1815) records John Nicol as saying that, in 1794, whilst he was serving in the Edgar, 74(guns), she was called upon to engage the Defiance, 74, in an attempt to suppress a mutiny which had broken out because "...their captain gave them five-water grog; now the common thing is three-waters. The weather was cold; the spirit thus reduced was, as the mutineers called it, as thin as muslin, and quite unfit to keep out the cold. No seaman could endure this in cold climates. Had they been in hot latitudes, they would have been happy to get it thus, for the sake of the water; but then they would not have got it."
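
Idle arithmetic on those ratios, if anyone cares (my own back-of-envelope in Python, assuming issue rum at roughly 54.5% ABV -- a commonly quoted figure, not something stated in this thread):

def grog_abv(rum_parts, water_parts, rum_abv=54.5):
    """Strength of the mixed grog, in % alcohol by volume."""
    return rum_abv * rum_parts / (rum_parts + water_parts)

print(f"half pint to a quart (1:4): {grog_abv(1, 4):.1f}% ABV")
print(f"three-water grog     (1:3): {grog_abv(1, 3):.1f}% ABV")
print(f"five-water grog      (1:5): {grog_abv(1, 5):.1f}% ABV")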

JimTKirk
2007-Sep-01, 04:03 AM
Half a Pint to a Quart is 4:1.

Falconer in his 'Universal Dictionary of the Marine' (1815) records John Nicol as saying that, in 1794, whilst he was serving in the Edgar, 74(guns), she was called upon to engage the Defiance, 74, in an attempt to suppress a mutiny which had broken out because "...their captain gave them five-water grog; now the common thing is three-waters. The weather was cold; the spirit thus reduced was, as the mutineers called it, as thin as muslin, and quite unfit to keep out the cold. No seaman could endure this in cold climates. Had they been in hot latitudes, they would have been happy to get it thus, for the sake of the water; but then they would not have got it."

I fully agree that it is a 4:1 ratio and is a little harsher than a 3:1. I just tried mixing one shot glass of Jamaican rum to 4 shot glasses of water and it definitely needed something to make it more palatable. You're right that a little sugar and a twist of lime made it pretty tasty. I also tried some 90 proof both ways and they were both rather dilute. I tried 5:1 just for grins and there is a very noticeable difference. I'm going to go lie down now.:sick:

JonClarke
2007-Sep-01, 06:15 AM
Considering they lived in a time when their sailors wanted the compass in a binnacle so that the evil spirits that operated it wouldn't get out and affect the ship.... Or maybe that's just an old sea story.


Could you explain what you mean by this? I have never heard this story!

A binnacle is simply a mount that places the compass and other navigation instruments in a position where they are easily referenced by the crew at the helm in all weathers and lighting conditions, and protects the compass against various external influences.

Thanks

Jon

MG1962A
2007-Sep-01, 11:22 AM
I fully agree that it is a 4:1 ratio and is a little harsher than a 3:1. I just tried mixing one shot glass of Jamaican rum to 4 shot glasses of water and it definitely needed something to make it more palatable. You're right that a little sugar and a twist of lime made it pretty tasty. I also tried some 90 proof both ways and they were both rather dilute. I tried 5:1 just for grins and there is a very noticeable difference. I'm going to go lie down now.:sick:

I have sent this description of your heroic research project to these people

http://nobelprize.org/

You should expect something in the mail forthwith

captain swoop
2007-Sep-02, 12:29 AM
The usual retail Rum you buy is nowhere near strong enough to be used in Grog!

As for a binnacle, a proper one has iron spheres and magnets that you can move around to compensate for the effect of the metal in a ship's hull on a magnetic compass.

JimTKirk
2007-Sep-02, 01:10 AM
I have sent this description of your heroic research project to these people

http://nobelprize.org/

You should expect something in the mail forthwith

Thanks! You think I'll hear before this headache goes away?:lol:

JimTKirk
2007-Sep-02, 01:15 AM
The usual retail Rum you buy is nowhere near strong enough to be used in Grog! <snip>

I agree! The Jamaican rum I used was given to me by a real Jamaican who brought it back in his luggage. Drinking it straight was kind of like drinking Ever Clear! I'll try to find the bottle (my wife hid it on me, don't know why) to find what proof it is.

quarn
2007-Sep-04, 02:53 PM
Hi there.

Nice forum, found it while searching for moon landing proof.

I'm probably retarded but I'll give it a go anyway.


I understand that since there are no stars in the images, that means the camera has too fast an "exposure". But then I don't understand why the real-time films still don't have stars in them?

Another thing. I have read that the reason images of astronauts are lit even though they have the sun at their backs is that the bright moon surface reflects the sunlight, so they appear lit. But why aren't all images like that?
Shouldn't all images be the same? No shadows at all, since the surface itself lights up the images.

Thanks.

JayUtah
2007-Sep-04, 03:19 PM
But then I don't understand why the real-time films still don't have stars in them?

Because motion picture images, whether film or video, are just a sequence of single fast exposures and exhibit many of the same limitations as still photography.

Shouldn't all images be the same? No shadows at all, since the surface itself lights up the images.

No. First, the spill from the surface is secondary only; sunlight will still be so much stronger that it will cast shadows. Second, spill varies from situation to situation as the camera moves about the scene. Third, exposure settings on the camera will affect whether the spill reveals detail. You should never assume that all photographs of the same general situation will always appear at the same brightness or contrast. Photography just doesn't work like that.

sts60
2007-Sep-04, 07:22 PM
Hi, quarn. Welcome to the board.

I have read that the reason images of astronauts are lit even though they have the sun at their backs is that the bright moon surface reflects the sunlight, so they appear lit.

Here's a page with a nice collection of Shuttle/ISS photographs. You can see how light reflected from the Earth, Shuttle, ISS, and the astronauts themselves illuminate areas shadowed from the Sun.

But why aren't all images like that?
Shouldn't all images be the same? No shadows at all, since the surface itself lights up the images.

As Jay said, the secondary sources cast much less light, as they reflect only some fraction of the direct illumination. If this wasn't the case, there wouldn't be much in the way of shadows on Earth, no? And on Earth we have the additional fill-in of light scattered by the atmosphere.
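
For a rough sense of scale (my numbers, not anything measured from these photos), you can put the fill light into a one-liner:

DIRECT_SUN = 1.0          # direct solar illumination, normalized
REGOLITH_ALBEDO = 0.12    # typical quoted lunar albedo (assumed here)
GROUND_FRACTION = 0.5     # a shadowed side "sees" roughly half sky, half ground

fill = DIRECT_SUN * REGOLITH_ALBEDO * GROUND_FRACTION
print(f"surface fill is about {fill:.0%} of direct sunlight")
print(f"so sunlit-to-shadow contrast is on the order of {1 / fill:.0f}:1")

A few percent of fill is plenty to show an astronaut standing in shadow on a generous exposure, and nowhere near enough to erase shadows.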

Serenitude
2007-Sep-04, 07:49 PM
Hello Quarn!

Welcome to BAUT! Please make sure you stop by the ABOUTBaut forum and read the rules while you're surfing here (you're doing fine - it's just a good idea for all newcomers).

As an idea, you may get more answers if you make a new thread with these questions - not everyone will be reading this thread anymore ;)

JayUtah
2007-Sep-04, 07:53 PM
http://www.clavius.org/img/bridge-over.jpg
http://www.clavius.org/img/bridge-under.jpg

Here are two pictures I took of the same covered bridge, just seconds apart. The lighting is exactly the same. But I changed the camera's exposure settings between them. So in the darker picture you can't really see the rafters of the bridge, and in the lighter picture you can't see the detail in the trees in the background. Whether the camera uses a manual exposure setting like the one I used, or a sophisticated exposure computer, you can't usually compare one photograph to another and talk about "proper" brightness and contrast.

And in fact what your eye sees directly is often quite different than any photograph. Your eye can see a greater range of brights and darks than most cameras. The film used here actually had a pretty broad dynamic range, but I could still see both the rafters and the tree detail with my naked eye. So what appears dark in a photograph because of exposure settings or just poor lighting is not necessarily what your eye would have seen in similar circumstances.

Exposure factors like this are one of several elements that affect apparent brightness in lunar surface and other space photography.

PhantomWolf
2007-Sep-04, 09:42 PM
While Jay does cover it aptly, there is just one more thing I'd like to add. Often, even in the shaded areas, there are actually details that you can't see in a "normal" exposure. Sometimes images get "pushed" in the development lab to bring out these details. Here are some examples:

See how in the original the details of the two people facing the camera are lost; they are dark. So is the stage area behind them.

http://lokishammer.dragon-rider.org/X/Shadows5.jpg

A little magic to brighten the image, and note that all the detail is restored, but also note that the sky is now overexposed and has lost its colour and detail.

http://lokishammer.dragon-rider.org/X/Shadows5a.jpg

This works in a very similar way to changing the exposure settings on the camera, but allows those processing the images to get the best from them.
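
If it helps, here is a toy version of the "push" (mine, in Python; the pixel values are invented): multiply everything by a gain and clip at white, and the shadows open up while the bright sky clips to featureless white.

def push(pixels, gain):
    """Brighten 8-bit pixel values by a gain, clipping at pure white (255)."""
    return [min(255, round(p * gain)) for p in pixels]

shadow_detail = [8, 12, 20, 35]        # dark foreground values (0-255 scale)
sky_detail = [200, 220, 240, 250]      # bright sky values

print("pushed shadows:", push(shadow_detail, gain=4))   # detail now visible
print("pushed sky:    ", push(sky_detail, gain=4))      # everything clips to 255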

Click Ticker
2007-Sep-06, 01:35 PM
This works in a very similar way to changing the exposure settings on the camera, but allows those processing the images to get the best from them.

It's a fake. Notice how the shadows are not all going in the same direction? :D

Irishman
2007-Sep-06, 07:38 PM
hell i could creatively edit most of the footage and pictures to make it look fake, its called photoshop.

But Photoshop wasn't available in 1969. Photos and video of astronauts on the Moon have been available since then.

I think you missed Lobreiter's point. He appears to be saying that the hoax claimers could have used photoshop to alter photos to make them look fake, and thus produce the "evidence" of fakery.



Considering they lived in a time when their sailors wanted the compass in a binnacle so that the evil spirits that operated it wouldn't get out and affect the ship.... Or maybe that's just an old sea story.


I would suggest it was an old sea story. Sailors trusted the compass, even if they didn't understand the science behind it. It was only when the thing stopped pointing in the direction they expected, they got worried.

Hmmm, so what's a binnacle?

http://www.answers.com/topic/binnacle

A case that supports and protects a ship's compass, located near the helm.

Gee, thanks, that's singularly unhelpful. I'll go put my compass in the binnacle, and then put my silverware in the silverware drawer, and follow up by putting my dirty clothes in the dirty clothes hamper.


A binnacle is a case or box on the deck of a ship, generally mounted in front of the helmsman, in which navigational instruments are placed for easy and quick reference as well as to protect the delicate instruments...

With the introduction of iron-clad ships the magnetic deviation observed in compasses became more severe. Methods of compensation by arranging iron or magnetic objects near the binnacle were developed. In 1854 a new type of binnacle was patented by John Gray of Liverpool which directly incorporated adjustable correcting magnets on screws or rack and pinions. This was improved again when Lord Kelvin patented in the 1880s another compass system which incorporated two compensating magnets.

No discussion of evil spirits. By the time of John Gray and Lord Kelvin, magnetism was more widely understood. Perhaps general sailors (as opposed to officers) in the early days had notions of spirits around the compass.


Etymology
Before 18th century bittacle, through Span. bitacula, from Lat. habitaculum, a little dwelling.

Interesting. That derivation is suggestive, with "little dwelling" hinting at a home for spirits. But that could just be us projecting onto the word, given the inexactness of translation and the way an existing word gets stretched to name a new thing when no word for it yet exists. By the time it was applied to the compass housing (housing, get it?), it could just have meant "compass container" rather than "home for the spirits that live in the compass". So, inconclusive.

So Jay, you want to cough up some evidence for that claim? ;)

JayUtah
2007-Sep-06, 08:06 PM
So Jay, you want to cough up some evidence for that claim? ;)

Probably some blurb from History's Mysteries. That's why I was so ambivalent about writing it off as a sea story. Or Modern Marvels or something. One of those bastions of scholarly research.

From the engineering point of view it makes sense to put the compass in a protective enclosure. Anyone who's been on the open deck of a ship knows it's not the place for a delicate instrument. Dunno where the "evil spirits" thing came from, even in the History Channel context. I guess sailors bear the brunt of many accusations of superstition, just as the Illuminati, the Freemasons, and Mrs. Field's Cookies have to bear the brunt of all shadow-government accusations.

SLF:JAQ SFDJS
2007-Sep-06, 11:09 PM
The only people that had photoshop back then would have been NASA. Since they were the ones that invented 3-D computer graphics that are now so prevalent in computer games and software.

Van Rijn
2007-Sep-06, 11:29 PM
The only people that had photoshop back then would have been NASA. Since they were the ones that invented 3-D computer graphics that are now so prevalent in computer games and software.

I'm not sure if you're joking, or what time you mean by "back then" but Photoshop 1.0 came out in 1990. I remember an article in the late '70s talking about recent developments in computer graphics. Removing hidden surfaces was a big deal then, and it took a long time on high end hardware to do things we consider trivial today. Ray tracing (usually done on Crays) started taking off sometime after that. The fact is that it was hard to develop graphics software when it took hours to render a decent ("decent" meaning something we would consider primitive today) image and took more RAM than most machines had available.

Take a look at The Last Starfighter (1984), done on a Cray supercomputer, state of the art at that time. It was fantastic then just thinking they did it on a computer, but it was obvious CGI and looks extremely primitive today. People do much better work on their own PCs now.

There was, of course, nothing that even came close to that during the moon landings. You could do pong and limited 2-D or vector 3-D graphics then. Really primitive stuff.

GeorgeLeRoyTirebiter
2007-Sep-07, 12:19 AM
The only people that had photoshop back then would have been NASA. Since they were the ones that invented 3-D computer graphics that are now so prevalent in computer games and software.

Um ... what?

Photoshop is a 2D image editor. The only thing it has in common with CG is that the end result of CGI is a 2D image.

If any one place can be considered to have invented modern 3D computer graphics, I would say it has to be the University of Utah School of Computing. Many of the early figures in CGI (Gouraud, Phong, Blinn, Kajiya, Catmull, Clark, etc.) received advanced degrees or conducted research there during the 1970s.

JayUtah
2007-Sep-07, 12:22 AM
Since they were the ones that invented 3-D computer graphics that are now so prevelant in computer games and software.

Um, hogwash. First, NASA didn't invent 3D computer graphics. Second, image processing (e.g. Photoshop) and three-dimensional graphics have almost nothing in common.

JayUtah
2007-Sep-07, 12:31 AM
...I would say it has to be the University of Utah School of Computing.

...at which JayUtah studied and taught the subject and conducted research in 3D graphical design techniques for engineering.

Many of the early figures in CGI...

I know many of those guys. Don't forget Ivan Sutherland and David Evans, who went on to form the highly successful Evans & Sutherland company, now a division of Rockwell Collins. There are also Alan Kay and John Warnock. In fact, in the early 1990s most of the significant players in the computer graphics industry were closely connected to Utah, creating what we still call the Utah Graphics Mafia.

While Jim Blinn produced many CG videos for NASA, it was not NASA who invented that. NASA came to Blinn because he had the expertise already.

pzkpfw
2007-Sep-07, 12:43 AM
I think it's interesting (and I guess counter-intuitive to the HB's) that it would take more computing power to "fake" images of the Moon missions than was actually used to do the Moon missions.

(As in, some sample HB might say the LM computer was not "powerful enough", but that same HB might then claim all the photos were faked on some secret 1969 super computer).

JayUtah
2007-Sep-07, 12:47 AM
The world's then-fastest (1969) computer was used on Apollo to precalculate some orbital maneuvers. It was about as powerful as a single-core Intel Pentium microprocessor was in the early 2000s.

PhantomWolf
2007-Sep-07, 12:56 AM
...I would say it has to be the University of Utah School of Computing.

...at which JayUtah studied and taught the subject and conducted research in 3D graphical design techniques for engineering. <snip>

Aha! Now we know why you know so much about it Jay, you were behind the whole fake. BUSTED!

On a more serious note, should I point out that for all its high tech, NASA didn't even have computer monitors for the control rooms? Instead they used TV screens all connected to an internal cable system. To get the data they needed, they would tune the screen to the appropriate channel and then lay a cardboard cutout over the screen so that they could see the figures and information next to the correct captions.

Van Rijn
2007-Sep-07, 12:57 AM
I think it's interesting (and I guess counter-intuitive to the HB's) that it would take more computing power to "fake" images of the Moon missions than was actually used to do the Moon missions.

(As in, the same sample HB might say the LM computer was not "powerful enough", but that same HB might then claim all the photos were faked on some secret 1969 super computer).

Yes, the tasks are very different. I know that high quality graphics is data and computation hungry. And, in fact, most of the resources on a modern PC are dedicated to graphics (what I often call "glitz"). There is a reason we used to use command line interfaces and text-only screens.

Even today we would be hard pressed to do convincing CGI to match the Apollo photographs and video. And it's about both hardware and software: When it takes days to render a single image, it takes time to test new ideas and develop new software. The software evolved along with the hardware.

Van Rijn
2007-Sep-07, 12:58 AM
The world's then-fastest (1969) computer was used on Apollo to precalculate some orbital manuevers. It was about as powerful as a single-core Intel Pentium microprocessor was in the early 2000s.

That actually sounds to me far more powerful than I would have thought any computer to be at the time. What were the specs?

JayUtah
2007-Sep-07, 03:21 AM
It was the CDC 6600 installed at LLNL later supplanted by the CDC 7600, which was dramatically faster but came too late for the first orbit-patching computation.

Van Rijn
2007-Sep-07, 04:22 AM
It was the CDC 6600 installed at LLNL later supplanted by the CDC 7600, which was dramatically faster but came too late for the first orbit-patching computation.

Thanks. According to this article:

http://www.ddj.com/184404102

The CDC 6600 ran at about 10 MIPS and could have about a megabyte of primary RAM. The main CPU was 60 bit (I remember that - we had a Cyber at college). There were additional processors (PPs) that complicate things a little, but that should be okay for a ballpark calculation.

An early 2000s 32-bit Pentium IV would run at about 1200 MIPS, and would easily have half a gigabyte of RAM in a common configuration. From that (allowing for the 60-bit versus 32-bit word difference), I'd put the early PIV at about 64 times as fast as the CDC 6600.
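
The arithmetic, for anyone who wants to poke at it (figures as quoted above; the word-width adjustment is just one way of making the comparison, not the only one):

cdc_mips, cdc_bits = 10, 60       # CDC 6600, as quoted above
p4_mips, p4_bits = 1200, 32       # early-2000s Pentium IV, as quoted above

print(f"raw instruction rate: {p4_mips / cdc_mips:.0f}x")
print(f"adjusted for word width: {(p4_mips * p4_bits) / (cdc_mips * cdc_bits):.0f}x")

That prints roughly 120x raw and 64x width-adjusted -- and a floating-point (MFLOPS) comparison would move the numbers around again.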

JayUtah
2007-Sep-07, 04:36 AM
MIPS is less important than MFLOPS. We don't measure high-performance computing performance in MIPS. You may be right in any case; I'm just trying to get this in some kind of ballpark: supercomputers of the 1960s were comparable to desktops of not too long ago. I remember doing the calculation comparing it to some model Pentium, but I can't remember now which one it was.

Van Rijn
2007-Sep-07, 04:51 AM
MIPS is less important than MFLOPS. We don't measure high-performance computing performance in MIPS.


This is true, but another article said the CDC 6600 could do about 3 MFLOPS with the right software. The PIV has excellent floating point hardware, and in this too would be well ahead of the CDC 6600.



You may be right in any case; I'm just trying to get this in some kind of ballpark: supercomputers of the 1960s were comparable to desktops of not too long ago. I remember doing the calculation comparing it to some model Pentium, but I can't remember now which one it was.

That's quite possible for Pentiums in the mid 90s.

JayUtah
2007-Sep-07, 04:55 AM
This is true, but another article said the CDC 6600 could do about 3 MFLOPS with the right software.

I recall it was closer to 30 MFLOPS. Or am I thinking of the 7600?

That's quite possible for Pentiums in the mid 90s.

Let's call it an early Pentium then and not a 21st century Pentium.

Van Rijn
2007-Sep-07, 05:04 AM
This is true, but another article said the CDC 6600 could do about 3 MFLOPS with the right software.

I recall it was closer to 30 MFLOPS. Or am I thinking of the 7600?


FWIW, I'm going by this: (http://en.wikipedia.org/wiki/CDC_6600)

The system used a 10 megahertz clock, but used a four-phase signal to match the four-wide instructions, so the system could at times effectively operate at 40 MHz. A floating point multiplication took ten cycles, while a division took 29, and the overall performance considering memory delays and other issues was about 3 MFLOPS. Using the best available compilers, late in the machine's history, FORTRAN programs could expect to maintain about 0.5 MFLOPS.

GeorgeLeRoyTirebiter
2007-Sep-07, 05:22 AM
...I would say it has to be the University of Utah School of Computing.

...at which JayUtah studied and taught the subject and conducted research in 3D graphical design techniques for engineering.

Pardon me while I stare in awe with my mouth agape...

:surprised

...OK, I'm back to normal now.


Many of the early figures in CGI...

I know many of those guys. Don't forget Ivan Sutherland and David Evans, who went on to form the highly successful Evans & Sutherland company, now a division of Rockwell Collins. There are also Alan Kay and John Warnock. In fact, in the early 1990s most of the significant players in the computer graphics industry were closely connected to Utah, creating what we still call the Utah Graphics Mafia.

I could only remember the people who became "household names" in the CG industry because of their eponymous shading/illumination models or subdivision algorithms. I'm not sure how I forgot Alan Kay, as the Kajiya-Kay hair shading model is still used (it's like Blinn/Phong specular shading: we all know we should be using something more advanced, but we keep using it anyway).

Anyway, it was one of those amazing concentrations of talent, like... :think: I don't know. I can't think of any examples, darn it!

Van Rijn
2007-Sep-07, 05:23 AM
Oh, as an aside, when I built my 3 GHz hyperthreaded PIV system several years ago, when the 3 GHz chips first became available, I named it "PerSC" (pronounced "Percy"). That stood for "Personal Super Computer." In the Windows description comments, I said "Will be obsolete in 6 months." Surprisingly, that was when they hit the wall on "simple" increases in clock speed, so it actually took a bit longer before it was substantially outclassed.

JayUtah
2007-Sep-07, 02:01 PM
Pardon me while I stare in awe with my mouth agape...

There was one glorious day in the department when Kay, Warnock, Catmull, and Blinn were all in the same room together and I was there. You couldn't throw a teapot without hitting CG royalty.

If you go to SIGGRAPH (the ACM's computer graphics conference) you can get special swag for hugging Jim Blinn.

And the running joke is that you can always tell a graphics geek by whether he associates "Andy" or "Jean Claude" with the surname "Van Dam."

I could only remember the people who became "household names" in the CG industry because of their eponymous shading/illumination models or subdivision algorithms.

E & S are household names in Salt Lake City but perhaps not as much elsewhere. There are a few companies in town for whom everyone in the software biz has worked at some point, and Evans and Sutherland is one of them. I live just down the hill from the famous Building 770 where all the full-scale flight simulators and the Digistars are built up. I'm fully instrument rated on the hang gliding simulator.

...we all know we should be using something more advanced, but we keep using it anyway).

The first assignment I handed out in advanced graphics was a copy of Gouraud's paper and a copy of Phong's paper with a week to implement both in software. They're important. They lend themselves well to hardware implementation, which is probably why they persist in production. But they're also important academically. They, along with Blinn bump-mapping and other techniques, illustrate how you can dink around with the surface normal in creative ways to achieve the illusion of sophistication.

For many years we just didn't have the computing cycles to adopt a fully physically-based approach to illumination. We still don't, but today's methods are based more in the actual physics of light than before, requiring considerable computation per pixel. Before that, illumination models were often hacks -- mathematical approximations to the observation, not to the mechanism. But they had the advantage of faster computability. That raises the important concept of Good Enough. But the speed of those good-enough computations also gives us the ability to animate them on today's hardware/software systems at sustainable, credible frame rates. The only way you can shove a billion polygons per second down a rendering pipeline is by not being too picky about the physics behind the shading.
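
For the curious, the whole classic model fits in a few lines (a sketch of textbook Blinn-Phong, not anyone's production shader; the vectors and coefficients are arbitrary):

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(normal, to_light, to_eye, kd=0.7, ks=0.3, shininess=32):
    """Cheap approximation to observation: diffuse from N.L, specular from the half-vector."""
    n, l, e = normalize(normal), normalize(to_light), normalize(to_eye)
    h = normalize(tuple(li + ei for li, ei in zip(l, e)))   # the half-vector trick
    diffuse = kd * max(0.0, dot(n, l))
    specular = ks * max(0.0, dot(n, h)) ** shininess
    return diffuse + specular

print(f"shade = {blinn_phong((0, 0, 1), (1, 1, 2), (0, 0, 1)):.3f}")

Note there is no physics to speak of in there -- just a cosine and a sharpened cosine -- which is exactly why it is fast enough to run per pixel at frame rate.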

I can't think of any examples, darn it!

Lockheed Skunk Works? Xerox PARC?

JayUtah
2007-Sep-07, 02:02 PM
FWIW, I'm going by this: (http://en.wikipedia.org/wiki/CDC_6600)

My 30 MFLOPS recollection was for the 7600. The 6600 was in the 3 MFLOPS range.

Donnie B.
2007-Sep-07, 08:42 PM
I can't think of any examples, darn it!

Lockheed Skunk Works? Xerox PARC?
Manhattan Project (Los Alamos)?

easytiger831
2007-Sep-14, 01:19 PM
OK, so let's do this systematically...
Radiation:-
Anybody who has worked with or within any field of atomic energy knows of a few simple basic rules for radiation shielding, which are: 1. Time, 2. Distance, 3. Shielding.
The Van Allen belts exist, no doubt about it but someone once calculated that a spacecraft travelling at 25,000 MPH passes through them in a few minutes, which is insufficient time to get anything approaching a serious dose; in fact it was calculated as no more exposure than lying on a beach in the sun for 8 hours.
Most radiation can be stopped by thin aluminium, and beta particles can be stopped by tissue paper, so the radiation exposure argument just doesn't exist in the mind of anyone who knows what they're talking about.
Heat:-
You seem to forget that space is a vacuum which means there is no air to convect the heat, a spacewalker on the ISS in the sun is exposed to high temperatures, as soon as he goes into the shade the temperature drops to minus 200 degrees. All Apollo CMs were deliberately polished to reflect the heat (it being easier to warm up a spacecraft than to try and cool it down from +200 degrees), which they did very effectively, which is why Apollo 13 was cold inside! Also remember that most electrical components give off heat (put your hand on your PC monitor and feel). So without the natural heat from circuits and wires, displays and lights, the CM and LM got very cold very quickly. There was some heat from the sun that did get through, as the temperature in 13's CM and LM was just above freezing; if no heat had got through it would have been minus 200 in there.

KaiYeves
2007-Sep-14, 04:39 PM
Pong? From what I know, Pong and Pacman were very early 80's, a decade after the end of Apollo.
So the computers were worse than that.

JayUtah
2007-Sep-14, 04:57 PM
Anybody who has worked with or within any field of atomic energy knows of a few simple basic rules for radiation shielding...

I agree. But the conspiracy theorist authors and their readers don't fall into this category. To someone with a scientific or technical background, radiation is simply a phenomenon with reasonably well known properties and reasonably measurable and controllable parameters. To the conspiracy crowd, radiation is a big Boogey Man that inspires terror and death in all who encounter it.

The Van Allen belts exist, no doubt about it but someone once calculated that a spacecraft travelling at 25,000 MPH passes through them in a few minutes...

The Van Allen belts don't have precise boundaries, so any statistic along those lines will have to say under what assumptions of "significant strength" the duration was given. There are a number of estimates based on "the thickest parts" of the Van Allen belts, which are given according to criteria of such-and-such a flux at such-and-such an energy, and that amounts to a traversal time of much less than an hour.

...in fact it was calculated as no more exposure than lying on a beach in the sun for 8 hours.

More like a chest x-ray or two. The point in either case being that while it's a measurable amount, it's not enough radiation to have a biological effect that will worry us.

The particulars of estimating or analytically computing radiation exposure quickly drown the layman, so often the best way to talk about moon hoax claims of radiation is simply to quote James Van Allen himself, as I have done, saying that there's no scientific meat to the notion that the Van Allen belts would have precluded manned trips outside them.

Few conspiracy theorists can talk about radiation in terms of actual numbers or actual understanding. For someone to say "There was too much radiation" implies that he knows how much there was and how much is too much. When he can't give you numbers or even estimates, it's a fair bet his claim is based more on ignorant fear-mongering than on actual understanding.
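
The arithmetic behind that kind of estimate is trivially simple; what matters is having defensible inputs. Every number below is an assumption chosen purely for illustration, not a measured Apollo figure:

transit_hours = 1.0           # assumed time spent in the belts' thicker regions
dose_rate_mgy_per_hr = 0.2    # assumed shielded dose rate inside the spacecraft
chest_xray_mgy = 0.1          # commonly quoted dose for one chest X-ray

dose = transit_hours * dose_rate_mgy_per_hr
print(f"transit dose ~ {dose:.2f} mGy, roughly {dose / chest_xray_mgy:.0f} chest X-rays")

The conspiracy claim never gets even this far, because it never names a dose rate or a duration.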

You seem to forget that space is a vacuum which means there is no air to convect the heat, a spacewalker on the ISS in the sun is exposed to high temperatures, as soon as he goes into the shade the temperature drops to minus 200 degrees.

But that's a misleading phrase. It's clear you understand the mechanics of heat transfer, but statements like "the temperature drops to -200" make it ambiguous what substance's temperature you're considering. Yes, as the astronaut moves into shade he is no longer under solar influx, but his temperature doesn't instantly drop, nor is there an ambient whose temperature drops. He begins to radiate heat away more effectively and will thus begin to cool, but in general it's unwise to use the word "temperature" in a discussion without specifying whose temperature you mean.
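
A crude lumped-mass sketch shows how gradual that cooling is (all the properties below are assumptions I picked for illustration -- this is not a model of a real suit or spacecraft):

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4

def cool(temp_k, area_m2=2.0, emissivity=0.9, heat_capacity_j_per_k=2e5,
         dt_s=60.0, steps=60):
    """March a warm lumped thermal mass forward in time, radiating to cold space."""
    for _ in range(steps):
        power = emissivity * SIGMA * area_m2 * temp_k**4    # radiated watts
        temp_k -= power * dt_s / heat_capacity_j_per_k
    return temp_k

print(f"after an hour in shade: {cool(293.0):.0f} K (started at 293 K)")

Nothing "drops to minus 200" the instant the sun goes away; the object's temperature slides down as it radiates, and sunlit or internally heated parts never get anywhere near that cold.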

Also remember that most electrical components give off heat...

And the CM was not designed to be operated without its electrical system working, so there was no provision in the design for an alternate heat source. The waste heat from the electronics was to be the primary source of cabin heat, with a small electrical-resistance heater to help out.

...if no heat had got through it would have been minus 200 in there.

Probably not quite that cold, but not likely above zero F.

NEOWatcher
2007-Sep-14, 06:37 PM
Pong? From what I know, Pong and Pacman were very early 80's, a decade after the end of Apollo.
So the computers were worse than that.
Nope; late 70's. I was playing many of those in HS, and I graduated in 80.
And pong was released (http://en.wikipedia.org/wiki/Pong) during Apollo.

Anyway, there is a huge difference in the capabilities of a highly funded research computer (NASA) and a high volume commercially viable computer (Atari).

I was in a few of the old KSC facilities (both tourable, and private/unauthorized tours) and have seen some of that equipment. Judging by the register, memory, and stack markings that I saw on some of those old NASA machines, what an Atari had in the 70's would have filled a room.

easytiger831
2007-Sep-14, 07:30 PM
I was being rough with my figures; I am just a layman with a small amount of scientific knowledge, so I was guesstimating the 400-degree temperature ranges, but if I, as a layman, can understand such things shouldn't everyone be able to?

JayUtah
2007-Sep-14, 08:07 PM
You have to distinguish carefully between the research and development machines and the deployed machines. The first version of the AGC occupied something like four standard-sized equipment racks. That's not because that's the minimum size for a computer, but because the development computers use large, modular pieces like breadboards, plugboards, and patch panels to let the engineers quickly change things as they work. Then when they get it the way they want, they can execute the design again in specialized hardware. The first Atari working model may have been just as klunky.

JayUtah
2007-Sep-14, 08:15 PM
I was being rough with my figures...

As are we all on this point. But the CSM absorbs some energy, so it won't assume the coldest extreme temperature. Sure, the shaded parts of the CSM will probably be on the order of -200 F, but the interior won't be, nor will the sunlit part.

...if I, as a layman, can understand such things shouldn't everyone be able to?

They can. Many choose not to make the attempt. I don't try to understand everything about, say, corporate finance. It just doesn't interest me. But I suspect that I would be able to, if that's what I decided to put my mind to.

KaiYeves
2007-Sep-14, 08:49 PM
Nope; late 70's. I was playing many of those in HS, and I graduated in 80.

Sorry. :( This is what happens when I try to talk about a topic outside my sphere. Video games are not 'my thing'. Never played Pong, but always wanted to; just never could find a console. I enjoy Pac-Man and Galaga (AKA Space Invaders) when I can find them, but if I'm in an arcade with air hockey, that will get all of my quarters.

quarn
2007-Oct-14, 09:48 AM
Amazing picture.

http://upload.wikimedia.org/wikipedia/en/e/ec/AS11-40-5924HR.jpg



I KNOW how dust behaves in vacuum, therefore this movie of the lunar rover is a hoax. It's a VACUUM. The dust should have made a complete and beautiful arc shooting out from the wheels and continued upwards and around in space, as it is in complete vacuum.

http://en.wikipedia.org/wiki/Image:Ap16_rover.ogg

Jim
2007-Oct-14, 06:59 PM
Quarn, you have posted this same image in three different threads. Please do NOT do this. Pick a thread and stick to it. I suggest the thread you started.

BertL
2007-Oct-14, 07:23 PM
The dust should have made a complete and beautiful arc shooting out from the wheels and continued upwards and around in space, as it is in complete vacuum.
You mean like in the video you just showed?

Neverfly
2007-Oct-14, 09:55 PM
Bold mine:


Amazing picture.

I KNOW how dust behaves in vacuum, therefore this movie of the lunar rover is a hoax. It's a VACUUM. The dust should have made a complete and beautiful arc shooting out from the wheels and continued upwards and around in space, as it is in complete vacuum.

Continued upwards and around in space?
This statement doesn't even make sense. Like swirling?

Or do you mean like the effect you would see without gravity, not without air?

Van Rijn
2007-Oct-14, 10:14 PM
I KNOW how dust behaves in vacuum, therefore this movie of the lunar rover is a hoax. It's a VACUUM. The dust should have made a complete and beautiful arc shooting out from the wheels and continued upwards and around in space, as it is in complete vacuum.

http://en.wikipedia.org/wiki/Image:Ap16_rover.ogg

I do see dust moving in arcs. What kind of arcs did you assume would appear? Different particles of dust will be thrown from the wheel at different angles and velocities, so you would see the net effect of all the dust particles being thrown. Some dust particles will strike other dust particles, and therefore cause a more complicated path for some dust. However, because there is no atmosphere, there won't be a cloud of dust floating behind the rover.

How is this different from the movie you referenced?
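
If it helps to picture what those arcs should look like: with no atmosphere, each grain of dust simply follows a ballistic path under lunar gravity from the moment it leaves the wheel, no matter how fine it is. Here's a quick sketch with an assumed launch speed and angle (the 1.62 m/s^2 figure is lunar surface gravity; the other values are made up for illustration):

# Ballistic path of a dust grain kicked up by a rover wheel on the Moon:
# no atmosphere, so the only force after launch is lunar gravity.
# The launch speed and angle are assumed for illustration.

import math

g_moon = 1.62            # lunar surface gravity, m/s^2 (about 1/6 of Earth's)
speed = 3.0              # assumed launch speed, m/s
angle_deg = 45.0         # assumed launch angle

vx = speed * math.cos(math.radians(angle_deg))
vy = speed * math.sin(math.radians(angle_deg))

time_of_flight = 2 * vy / g_moon          # back down to surface level
max_height = vy**2 / (2 * g_moon)
range_m = vx * time_of_flight

print(f"Time of flight: {time_of_flight:.2f} s")
print(f"Max height:     {max_height:.2f} m")
print(f"Range:          {range_m:.2f} m")

# Every grain, large or small, follows this same kind of parabola and
# lands -- there is no air drag to sort particles by size or to keep a
# fine-dust cloud hanging behind the rover.

That clean, short-lived arc is just what the Apollo 16 footage shows, and it's why no cloud of dust lingers behind the rover.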

Skyfire
2007-Oct-14, 10:23 PM
The dust gets thrown up, as you would expect, as the wire-mesh wheels interact with it, and ALL of it then falls back to the surface exactly as would be expected where there is only one-sixth gravity and NO ATMOSPHERE!

NOTE THAT POINT: There is still GRAVITY acting on the particles of dust, causing them to fall back onto the surface, but it is only one-sixth of our own.


Edit: I originally wrote "The dust gets thrown into the air..." Oops! :)

Grand_Lunar
2007-Oct-14, 10:24 PM
To someone with a scientific or technical background, radiation is simply a phenomenon with reasonably well known properties and reasonably measurable and controllable parameters. To the conspiracy crowd, radiation is a big Boogey Man that inspires terror and death in all who encounter it.



Jay, that says it all about how HBers think of radiation. I'm gonna use it as part of my signature, okay? Do I have to pay royalties now?

Maksutov
2007-Oct-15, 11:36 AM
Whatever happened to Lobreiter (http://www.bautforum.com/members/lobreiter.html)? Another hoax poster?

http://img137.imageshack.us/img137/566/iconwink6tn.gif

Damien Evans
2007-Oct-15, 12:16 PM
Anyone who has drunk 200 proof Pusser's Rum (NZ Navy issue) knows what I mean

That's not rum!

That's pure alcohol; where can I get some? :)

unscannable
2007-Oct-23, 11:08 PM
Whatever happened to Lobreiter (http://www.bautforum.com/members/lobreiter.html)? Another hoax poster?

http://img137.imageshack.us/img137/566/iconwink6tn.gif

His post reads eerily like the writing style of someone at a forum where I post from time to time. There was a thread there dealing with the hoax theory, and it sparked my interest in the debate. This guy was the main pro-conspiracist in that thread. He was convinced by everything on the nasascam (http://geocities.com/nasascam/) website, along with some other well-known debunked claims. All the participants in that thread were laymen, and some of his claims went unchallenged until I directed him to this forum and apollohoax.net. Lobreiter wrote his first and only post here a few days after I did that. I hope he reads this post and it lures him back into the debate here. :cool: