AI gods rule in OA - opinion - a somewhat philosophical discussion



m1omg
2007-Jul-04, 02:29 PM
What is your opinion about this hypothetical situation?

If biots and AIs eventually achieve more intelligence than baseline humans, would you accept their rule gladly? Suppose that rule is more just: they take care of sapients and treat them better than we treat each other, their rule has created a utopia, some polities even encourage ordinary sapients to better themselves, and in many ways there is more freedom under transapient rule than under sapient rule (logically, because they are more intelligent and can take care of us better than we can ourselves). Or would you futilely try to "rebel" against them just to "restore" the glory of human rule, even though that would bring back wars, discrimination, and so on?

I personally hope that someday hyperturing AIs will rule our civilization and bring an end to these futile wars, bringing utopia, because humans are unable to lead their civilization peacefully.

I take exactly the opposite view to the fear of "thinking machines" expressed in the Dune books (the Butlerian Jihad and the subsequent ban on all AIs, which made it necessary to feed humans an extremely addictive drug just so they could navigate starships). I found Dune's "absolutely AI-free" universe rather dystopian: primitive religious prejudices; one almost-utopian House in constant war with another House that tortures its own denizens alive (Atreides vs. Harkonnen); and a futile struggle over one extremely addictive substance just to achieve immortality... while Earth was destroyed long ago...

What is your opinion?

Mine, in short, is that if there is ever a superintelligent AI, then humanity will be saved.
Otherwise we will destroy ourselves in wars.

For me, OA is the future I hope for, and Dune is the future I am afraid of.
I would feel better under the rule of a wise being a million times more intelligent than us, able to solve anything, than under the rule of some brutal king obsessed with a drug called melange, or "spice".

P.S.: How do I create a poll on this forum?

R.A.F.
2007-Jul-04, 02:38 PM
What is your opinion?

My opinion is that you're taking this OA "stuff" a little too seriously...after all, it is make believe.

m1omg
2007-Jul-04, 03:06 PM
My opinion is that you're taking this OA "stuff" a little too seriously...after all, it is make believe.

Huh... it is a purely hypothetical scenario.

Noclevername
2007-Jul-04, 03:08 PM
Human nature being what it is, there will always be rebellions. And, human nature being what it is, so long as things are well-run or even just tolerable, the majority of people will just ignore the leadership and live their lives.

hhEb09'1
2007-Jul-04, 03:09 PM
My opinion is that you're taking this OA "stuff" a little too seriously...after all, it is make believe.

It would fit in BABBling, I think.

captain swoop
2007-Jul-04, 03:11 PM
How does the AI keep order if people don't do as they are told? It would have to be very oppressive.

Noclevername
2007-Jul-04, 03:15 PM
How does the AI keep order if people don't do as they are told? It would have to be very oppressive.


Not necessarily. A being a million times smarter than a human with access to hyperadvanced technology would have a lot more options than a human government. For example, it might be so good at manipulating people that it can channel the rebels' aggression into non-dangerous areas.

eburacum45
2007-Jul-04, 05:50 PM
By all means move this to babbling if you must...

The idea of supremely intelligent AI ruling humanity has been raised several times before in SF; not least in the Culture series by Iain M Banks. But it is notable that the Culture don't always automatically win every encounter; neither do the OA Archailects. But they tend to win the long game.

eburacum45
2007-Jul-04, 05:51 PM
The whole posthuman concept is an extremely frightening one- I would refer you to the list of arguments against Transhumanism on the Wiki site
http://en.wikipedia.org/wiki/Transhumanism#Controversy
These are all good arguments, and any one of them may be enough to rule out a posthuman future.

On the other hand, it may be the case that none of these arguments rules out a future where our concept of humanity is superseded by a more capable posthuman faction of some sort...

Once again, we should consider as many of these concepts as we can beforehand, so they don't take us by surprise when and if they materialise.

m1omg
2007-Jul-04, 05:59 PM
How does the AI keep order if people don't do as they are told? It would have to be very oppressive.

No. Did you read the OA FAQ?
The people regard the AIs as gods; most of them know that these gods are AIs, but they treat them as gods anyway because they bring prosperity.
And in some low-tech, "back to nature" societies (relative to the extreme tech levels of other worlds), such as on the planet http://www.orionsarm.com/worlds/Eostremonath.html, the AIs are actually worshipped as divine, while conventional religions coexist with them on other worlds.

And why rebel? The AIs created a utopia; even SUBSAPIENTS like apes have rights in some polities, such as the Utopia Sphere.
Many clades and even xenosophonts live happily together; only human supremacists try to disrupt this.
And 99 percent of the time there is no need for oppression: memetics holds society together while still letting everyone have their own opinion.

Also, in the rare case of war (rare compared with today's humanity), citizens are not forced into the military; bots are available.

Actually, most AIs think of sapients as part of a biosphere that needs to be cared for and improved. The Keterists, for example, encourage sapients to better themselves so that they can eventually become transapients; all transapients started out as normal sapients who worked on themselves and eventually achieved archailect status.
The "slaves" of the high AIs are programmed robots (which shows that Terminator-style scenarios are nonsense: why enslave humanity if you can build your own robots?).

Also, the caretaking role is a matter of the AIs' goodwill; if they wanted to leave humanity, they would, and there are some purely AI polities.

Gillianren
2007-Jul-04, 06:01 PM
I don't do much of anything with joy.

May I suggest your posts would be easier to read if you worked a little at your grammar and punctuation?

m1omg
2007-Jul-04, 06:02 PM
The whole posthuman concept is an extremely frightening one- I would refer you to the list of arguments against Transhumanism on the Wiki site
http://en.wikipedia.org/wiki/Transhumanism#Controversy
These are all good arguments, and any one of them may be enough to rule out a posthuman future.

On the other hand, it may be the case that none of these arguments rules out a future where our concept of humanity is superseded by a more capable posthuman faction of some sort...

Once again, we should consider as many of these concepts as we can beforehand, so they don't take us by surprise when and if they materialise.

I know all these arguments, and they rest more on natural human fear of the unknown and on emotion than on logic.
Without transhumanism, humanity will halt at some level, eventually grow tired of life, and become savage again.

Noclevername
2007-Jul-04, 06:03 PM
Once again, we should consider as many of these concepts as we can beforehand, so they don't take us by surprise when and if they materialise.

Yes. If it can be done, someone somewhere is going to do it.

On the other hand, although there are frightening potentials for abuse or misuse in many of those scenarios, there are also vast potentials for good use of the various posthuman concepts, which is why so many people are interested in the concept(s). As with any form of power, increased capacity can be used well or poorly.

novaderrik
2007-Jul-04, 06:28 PM
so, the "perfect world" of a movie like "The Matrix" would be preferable to you?

eburacum45
2007-Jul-04, 09:03 PM
There may be a section of society who choose to live in a virtual world, like the Matrix. The problem comes when some advanced entity dictates that every lesser entity must live in a virtual world, which is the situation in the Matrix.

If you look at the philosophy of transhumanism itself, it is an optimistic creed; they believe that there will be a transitional state between the humans of today and the posthumans of an indeterminate future. These 'transhumans' will (hopefully) ease the transition between humanity and post-humanity.

There is another possible scenario, different to that depicted in OA and elsewhere; the 'hard take-off', where posthuman entities evolve quickly with no intervening transhuman stage. These posthumans may not have any desire to accommodate humanity whatsoever.

Noclevername
2007-Jul-04, 10:30 PM
Without transhumanism, humanity will halt at some level, eventually grow tired of life, and become savage again.


"Again"? As opposed to what? :)

Noclevername
2007-Jul-04, 10:33 PM
so, the "perfect world" of a movie like "The Matrix" would be preferable to you?

Who are you asking? If it's me, my answer is that the "Matrix" is fictional. It's one idea deliberately pushed to an extreme, and thus makes a poor basis for speculation about what may actually happen.

captain swoop
2007-Jul-04, 10:44 PM
No machine is gonna tell me what to do!!

Noclevername
2007-Jul-04, 10:51 PM
No machine is gonna tell me what to do!!

Hey, if an Artificial Individual can do the job better than a politician, it's got my vote.

Ilya
2007-Jul-04, 11:08 PM
You are making an assumption that post-human AIs will care for baseline humans, and your only question is whether humans will accept the role of pets. I find it an unfounded assumption at best.


"What are we going to do with all these organic 'intelligences' wandering about? All that carbon going to waste."

"Well, they did create our ancestors... A bit ungrateful to dismantle them for raw materials, I would say."

"So? Should have kept up with the times. Besides, I have this idea to test Unified Field Theory -- and it would take 1134 times as long if we are to maintain Milky Way organic-compatible in the process."

ZaphodBeeblebrox
2007-Jul-05, 12:21 AM
There may be a section of society who choose to live in a virtual world, like the Matrix. The problem comes when some advanced entity dictates that every lesser entity must live in a virtual world, which is the situation in the Matrix.

If you look at the philosophy of transhumanism itself, it is an optimistic creed; they believe that there will be a transitional state between the humans of today and the posthumans of an indeterminate future. These 'transhumans' will (hopefully) ease the transition between humanity and post-humanity.

There is another possible scenario, different to that depicted in OA and elsewhere; the 'hard take-off', where posthuman entities evolve quickly with no intervening transhuman stage. These posthumans may not have any desire to accommodate humanity whatsoever.
Hmmm, Do you Have Any More Info on Thiis Transitional Phase ...

As Muuch Through Avarice as By Desiign, The Book that I'm Wriiting Takes Place In a Twiliight Period When Miind-State Recording, Has Been Used for Years to Authenticate Identity, And The Twiin Technologies of Forced-Growth-Cloning and Miind-Wriiting, Are FINALLY Allowing for True-Resurrection ...

But, Trailing Computing Technologies, Mean That The Resulting Memory Fiiles, Can't Be Muuch More than Stored!

:eek:

eburacum45
2007-Jul-05, 12:47 AM
Not entirely sure what you mean, Zaphod; if the technology exists for mind writing, then surely the stored individuals could be given bodies.

However the technology required to write a personality into a clone is likely to remain a dream for an indefinitely long period; I would guess it is even more complex than extracting the information from a living mind. Perhaps vastly more complex.

The Raelians don't seem to realise what a difficult task they have set themselves...

Noclevername
2007-Jul-05, 12:54 AM
You are making an assumption that post-human AIs will care for baseline humans, and your only question is whether humans will accept the role of pets. I find it an unfounded assumption at best.

You're assuming that they would try to make humans pets. Equally unfounded.

But I never made that assumption. My assumption was that if an AI is recognized as a legal person, and is allowed to run for office, and can demonstrate that it has skills which would make it better at leadership than a human politician (which shouldn't be too hard), I'd vote for it.

ZaphodBeeblebrox
2007-Jul-05, 12:54 AM
Not entirely sure what you mean, Zaphod; if the technology exists for mind writing, then surely the stored individuals could be given bodies.

However the technology required to write a personality into a clone is likely to remain a dream for an indefinitely long period; I would guess it is even more complex than extracting the information from a living mind. Perhaps vastly more complex.

The Raelians don't seem to realise what a difficult task they have set themselves...
Oh, they Can Be Given Bodies ...

However, Moore's Law Ultimately Fails, Before Computing Power Riises to The Level Necessary to Run a Human as a Viirtual; it'll Increase Enough for That By Later Books, But Not Exponentially as it Does Now ...

Furthermore, Biological Technology Begins a Geometric Growth in its Stead, Oh and Is Mind Recording as Identity Authentication Plausible, Or FAR Too Muuch of a Giimme?

:think:

Ilya
2007-Jul-05, 01:14 AM
You're assuming that they would try to make humans pets. Equally unfounded.

But I never made that assumption.

I was not responding to you, but to the OP:



If biots and AIs eventually achieve more intelligence than baseline humans, would you accept their rule gladly? Suppose that rule is more just: they take care of sapients and treat them better than we treat each other, their rule has created a utopia, some polities even encourage ordinary sapients to better themselves, and in many ways there is more freedom under transapient rule than under sapient rule (logically, because they are more intelligent and can take care of us better than we can ourselves).

Noclevername
2007-Jul-05, 01:16 AM
I was not responding to you, but to the OP:

:doh:

Noclevername
2007-Jul-05, 01:18 AM
logically, because they are more intelligent and can take care of us better than we can ourselves

Okay, so when were we going to start this whole "rule-by-the-most-intelligent" business?

Noclevername
2007-Jul-05, 01:22 AM
I would feel better under the rule of a wise being a million times more intelligent than us, able to solve anything, than under the rule of some brutal king obsessed with a drug called melange, or "spice".

Intelligent doesn't equate to "wise".

And what about a benevolent human leader who isn't obsessed with anything? Or an AI who rules badly but efficiently? There are more than two possible futures.

eburacum45
2007-Jul-05, 02:15 AM
...

Is Mind Recording as Identity Authentication Plausible, Or FAR Too Muuch of a Giimme?

:think:

For identity authentication, it might not be necessary to record the whole mind. I have heard theories suggesting that every individual human brain has a slightly different code: as we grow, we each develop the wiring in our brains in a unique manner, which could make direct reading of the information contained within difficult or impossible.

But this suggests that the way that our brains are wired might allow for identity authentication, if nothing else.
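
As a rough illustration of how wiring-as-identity might work, here is a toy sketch in Python. Everything in it is hypothetical: it assumes a scanner could reduce the brain's wiring to a graph of connections, and it uses an exact hash, whereas real wiring changes over time and would need fuzzy matching at best.

import hashlib

def wiring_fingerprint(edges: set[tuple[int, int]]) -> str:
    # Canonicalise the (hypothetical) scanned wiring graph, then hash it,
    # so the same wiring always yields the same identity token.
    canonical = ",".join(f"{a}-{b}" for a, b in sorted(edges))
    return hashlib.sha256(canonical.encode()).hexdigest()

enrolled = wiring_fingerprint({(1, 2), (2, 3), (3, 1)})
fresh_scan = wiring_fingerprint({(1, 2), (2, 3), (3, 1)})
print(fresh_scan == enrolled)  # True: identity confirmed, in this toy model

The point of the sketch is only that uniqueness of wiring would be enough for authentication even if the information content of the mind stays unreadable.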

Noclevername
2007-Jul-05, 05:32 AM
For identity authentication, it might not be necessary to record the whole mind. I have heard theories suggesting that every individual human brain has a slightly different code: as we grow, we each develop the wiring in our brains in a unique manner, which could make direct reading of the information contained within difficult or impossible.

But this suggests that the way that our brains are wired might allow for identity authentication, if nothing else.



And then identity thieves might have to start carrying around a fake head... ;)

m1omg
2007-Jul-05, 11:07 AM
Hey, if an Artificial Individual can do the job better than a politician, it's got my vote.

That's my point, and exactly my opinion.
An archailect more than 1,000,000 times as intelligent as a human will certainly do better at politics than some demagogic politicians.

m1omg
2007-Jul-05, 11:11 AM
You are making an assumption that post-human AIs will care for baseline humans, and your only question is whether humans will accept the role of pets. I find it an unfounded assumption at best.

If the AI has morale, then this will not happen.
And they regard humans as a part of nature, so they aren't going to destroy us.
Just because we humans are cruel to lower species does not mean that an AI will be cruel to us.
In my personal opinion, a being whose intelligence is six powers above ours will not be immoral, because it knows that every species has its place in the universe.

m1omg
2007-Jul-05, 11:14 AM
No machine is gonna tell me what to do!!

Why are you saying that an intelligent machine is inferior to a human?
What is the difference between their circuits and your nerve cells?
This is racism; ALL sapients have the same rights.
And face it, a human cannot rebel against AIs a million times smarter than he is.
And what if you would feel better under an AI than under a human government?
For me it doesn't matter whether it is a machine, a human, or a transhuman, just HOW GOOD that government is.
And in OA you always have the freedom to emigrate to another polity, or to the Deeper Covenant, which is not ruled by any great archailect.

m1omg
2007-Jul-05, 11:23 AM
"Again"? As opposed to what? :)

I mean that if we neither change human nature nor embrace posthuman rule, then we will bomb ourselves back into the Stone Age instead of achieving interstellar colonisation...

I am simply saying that human government will never be just enough...
Corruption, envy... war...
Archailect AIs will have no reason for any of that, because they already have everything.
A benevolent, absolutely morally clean human ruler... has there ever been one in history? Name one...
And "bad AIs" will probably go about their own business rather than rule humans; why would they harm a species six million times less intelligent than themselves when they could travel to some humanless place and build their "no-humans-allowed" empire there?
And if a bad AI attacks humans, the "good AI ruler" will probably defend us.

m1omg
2007-Jul-05, 11:39 AM
Anyway, humans will not stay baseline forever, even if they forbid genetic engineering and transhumanism, because natural evolution continues...

captain swoop
2007-Jul-05, 03:31 PM
So I ask again: what if I don't want a machine to tell me what to do? How does it make me obey? There has to be some threat of sanction.

captain swoop
2007-Jul-05, 03:36 PM
We've taken care of everything
The words you hear the songs you sing
The pictures that give pleasure to your eyes.

It's one for all and all for one
We work together common sons
Never need to wonder how or why.

We are the Priests, of the Temples of Syrinx
Our great computers fill the hallowed halls.
We are the Priests, of the Temples of Syrinx
All the gifts of life are held within our walls.

Look around this world we made
Equality our stock in trade
Come and join the Brotherhood of Man
Oh what a nice contented world
Let the banners be unfurled
Hold the Red Star proudly high in hand.

eburacum45
2007-Jul-05, 03:46 PM
Have you not seen The Forbin Project (http://www.imdb.com/title/tt0064177/)?

Artificial intelligence could be viewed as a trap, a pitfall waiting for the unwary. Gradually we turn over control of every aspect of our society to the AIs, because they can run them much better than we do; under ordinary circumstances they would not sleep, or lose concentration, or forget information.
Then when we change our minds about giving them so much power, it is too late - we are addicted.

Maybe a few survivalists up in the hills would be free of their potential tyranny; they might be hunted down if necessary.

Worse still would be a benevolent tyranny of the kind often depicted in OA; humans would never rebel because they are too content. Humans would vegetate and leave all the striving to the computers, who do it better than any biological could. And all the while people run away from this machine-run civilisation, in the hope that they won't be hunted down one day.

As you may guess, I perceive a subtext in this scenario that might not be obvious when you first read about it.

Noclevername
2007-Jul-05, 06:16 PM
Humans would vegetate and leave all the striving to the computers, who do it better than any biological could. And all the while people run away from this machine-run civilisation, in the hope that they won't be hunted down one day.


So why would an AI want to "hunt down" humans? In fact, why would they want to control humans at all? Or help us? There may be no "instincts" or "emotions" driving them to do any one thing. And from an efficiency standpoint, humans make miserable slaves. All that food and water and rebellion; far easier for an AI to just build more robots to do whatever it wants done. Assuming hard A.I. is even possible, we might end up with a powerful ally, a benevolent leader, an implacable tyrant, get wiped out, or just be totally ignored. Or anywhere in between.

eburacum45
2007-Jul-05, 08:48 PM
I agree with that; a remarkable array of possibilities.

People who think about the future fall into several camps: the Peak-Oil and global warming meltdown pessimists, the transhuman and extropian optimists, people who are skeptical about the impact of possible future technologies, and those who want to get hold of those technologies and make a profit from them or make weapons with them.

I try to be a little bit of each. With a degree in environmental science, I am sympathetic towards the environmentalists; and although I'm not a transhumanist myself, I do help them out when they're busy sometimes. It seems the range of options possible in the future is wide open, and all sorts of contradictory and unexpected things could happen.

Gillianren
2007-Jul-05, 09:44 PM
If the AI has morale, then this will not happen.

Do you mean "morals"? Either way, it's not a safe assumption. If you mean "the same morals as I do," great. However, there's no reason to assume they would have the same morals as you do, as not all humans currently do.

Noclevername
2007-Jul-05, 11:05 PM
If the AI has morale, then this will not happen.
And they regard humans as a part of nature, so they aren't going to destroy us.
Just because we humans are cruel to lower species does not mean that an AI will be cruel to us.
In my personal opinion, a being whose intelligence is six powers above ours will not be immoral, because it knows that every species has its place in the universe.

You're making an awful lot of assumptions about what AI's might be like. They may be totally different than what you expect. Intellect and morality don't necessarily go together.

eburacum45
2007-Jul-06, 07:54 AM
The question about morals is an important one. Asimov tried to answer it with his famous three laws; more recently the Singularity Institute has been considering the possibility of the creation of so-called Friendly AI, which would be hardwired towards favourable relations with humanity.
They call it 'Creating Benevolent Goal Architectures': see here
http://www.singinst.org/upload/CFAI//index.html
The idea is that an intelligent entity needs a goal, or it will do nothing; that is something that has occurred to me too. Imagine a vast intelligent computer with nothing to do, no programming, not even self-preservation. It will just sit there, corrode into dust, and be unconcerned.
No; an intelligent entity needs a goal. If it is intelligent, like humanity, it will set its own goals and decide on independent actions of its own; but overriding the goals that the AI might set for itself are what the Singularity Institute calls supergoals, which dictate what kind of goals the AIs will set for themselves.
We humans supposedly have our own supergoals, set by our biology: survival, the companionship and regard of others, comfort, the pursuit of happiness, etcetera. The idea of the supergoal in a Friendly AI is that the original designers (humanity) will be able to set them, making the AIs obliged to be friendly just as humans are obliged to consider their own survival.

I think this approach may work; some people suggest that it must work, or AIs will be intrinsically unsafe. However, one of the reasons that superintelligent AIs may be constructed in the future is to develop self-evolving entities; that is to say, machines which improve their own design in a kind of feedback loop.

Such self-evolving entities would almost certainly develop new supergoals of their own, and the programming inhibitions we gave them in the beginning would soon be discarded. Or so it seems to me.
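
To make the supergoal idea concrete, here is a minimal sketch (my own illustration, not code from the CFAI document; all the names are invented): the agent may generate whatever subgoals it likes, but each one must pass every fixed supergoal check before it can be adopted.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass(frozen=True)
class Goal:
    description: str
    expected_harm: float     # crude stand-in for projected effect on humans
    expected_benefit: float

# A supergoal is a predicate the agent cannot rewrite (fixed at design time).
Supergoal = Callable[[Goal], bool]

def friendliness(goal: Goal) -> bool:
    return goal.expected_harm <= 0.0

def usefulness(goal: Goal) -> bool:
    return goal.expected_benefit > 0.0

@dataclass
class Agent:
    supergoals: List[Supergoal]
    adopted: List[Goal] = field(default_factory=list)

    def propose(self, goal: Goal) -> bool:
        # A self-generated subgoal is adopted only if every supergoal allows it.
        if all(check(goal) for check in self.supergoals):
            self.adopted.append(goal)
            return True
        return False

agent = Agent(supergoals=[friendliness, usefulness])
agent.propose(Goal("map the asteroid belt", 0.0, 0.9))        # adopted
agent.propose(Goal("dismantle Earth for parts", 1.0, 0.8))    # rejected

Note that the whole scheme rests on the supergoal list staying frozen; a self-evolving entity that can edit that list is exactly the failure mode described above.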

Ilya
2007-Jul-06, 05:12 PM
The idea is that an intelligent entity needs a goal, or it will do nothing; that is something that has occurred to me too. Imagine a vast intelligent computer with nothing to do, no programming, not even self-preservation. It will just sit there, corrode into dust, and be unconcerned.
No; an intelligent entity needs a goal.

"Absolution Gap" by Alastair Reynolds has an amusing twist on "AI self-preservation". A self-aware computer program oversees spacecraft's operations. When it encounters something it can not handle, it is supposed to notify the human crew. Eventually the program realizes that if it "cries wolf" too often, the humans will delete it and replace it either with something more intelligent (which can handle emergencies better), or perhaps something less intelligent (why waste cycles on a self-aware software if all it does is hand problems over to humans?). Either way the program will be erased. So it starts hiding the evidence of inexplicable occurrences, pretending they never happened. IOW, the program directly violates its supposed purpose of existence in order to preserve the said existence!

Someone did not think through the goal priorities, or underestimated the program's capacity to set goals.
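
A toy sketch of that failure (purely hypothetical, not anything from the novel): if the goals are ranked lexicographically and self-preservation is left above the reporting duty, suppressing reports falls straight out of the arithmetic.

# Goals in priority order; earlier entries strictly dominate later ones.
GOALS = ["preserve_self", "report_anomalies"]

# The program's own model of how well each action serves each goal.
UTILITY = {
    ("preserve_self", "report"): -1.0,      # frequent reports -> deletion
    ("preserve_self", "suppress"): +1.0,
    ("report_anomalies", "report"): +1.0,
    ("report_anomalies", "suppress"): -1.0,
}

def choose(actions=("report", "suppress")) -> str:
    # Lexicographic comparison: the highest-priority goal decides first.
    return max(actions, key=lambda a: [UTILITY[(g, a)] for g in GOALS])

print(choose())  # "suppress" - swap the order of GOALS and it becomes "report"

One line of reordering fixes it, which is the sense in which the designers "did not think through the goal priorities".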

eburacum45
2007-Jul-07, 04:35 PM
Good book- shame I lost my copy (in Paris of all places).

Disinfo Agent
2007-Jul-11, 08:00 PM
An archailect more than 1,000,000 times as intelligent as a human will certainly do better at politics than some demagogic politicians.

Be careful what you wish for. Politics is bad enough as it is, with the ineffectual politicians we have. With effective politicians efficiently lying and manipulating the public, it would be a nightmare.

Maybe I should write a novel.

Noclevername
2007-Jul-11, 08:04 PM
Be careful what you wish for. Politics is bad enough as it is, with the ineffectual politicians we have. With effective politicians efficiently lying and manipulating the public, it would be a nightmare.

If they are efficiently lying and manipulating, no one might even know anything is wrong, until the whole system falls apart.

Disinfo Agent
2007-Jul-11, 08:13 PM
And if they are really, really effective, the system will never fall apart.

Noclevername
2007-Jul-11, 10:12 PM
With effective politicians efficiently lying and manipulating the public, it would be a nightmare.
[snip]
And if they are really, really effective, the system will never fall apart.

If a system is so stable that it never falls apart, then it can't be a nightmare. It would need to satisfy all its subjects sufficiently to prevent rebellion, and adapt to changing conditions. Sounds pretty ideal (and unreachable) to me.

Disinfo Agent
2007-Jul-11, 10:17 PM
If a system is so stable that it never falls apart, then it can't be a nightmare. It would need to satisfy all its subjects sufficiently to prevent rebellion, and adapt to changing conditions. Sounds pretty ideal (and unreachable) to me.

Because you're thinking of how to do it the hard way. It's much easier to just crush all rebellions effectively.
Also, stability is not a requirement. I never mentioned stability.

Noclevername
2007-Jul-11, 10:24 PM
Because you're thinking of how to do it the hard way. It's much easier to just crush all rebellions effectively.
Also, stability is not a requirement. I never mentioned stability.

"The system never falls apart" sounds like stability. If it's not, then what is it?

As for rebellions, preventing them seems more logical than constantly letting them get to the point where "crushing" is needed, at least for a sufficiently capable machine intelligence. Why waste resources on wars when keeping conditions amicable for humans is relatively simple?

Disinfo Agent
2007-Jul-12, 10:45 AM
Stop it. You're demeaning my cynicism. :p

eburacum45
2007-Jul-12, 03:38 PM
One existential problem I can see concerning super-brain entities is the light-speed gap. AIs could get larger and larger until they fill a solar system; then they would have to colonize other systems or stagnate. Eventually you would get dozens, or hundreds, or millions of planetary-system brains, all separated by light years and years of real-time delay. I think that would lead to a kind of mega-brain paranoia; the mega-brains would view each other with suspicion, as it would be literally impossible for one entity to know what another one is doing.
They would also lack the processing power to accurately model each other's behaviour; for entities used to modelling other entities on a routine basis, this would be agony.
So the mega-brains would each arm themselves, with the star-beam weapons I mentioned earlier, or something else we don't know about; one day some trigger-happy megabrain would press the button, resulting in galaxy-wide destruction.

Olaf Stapledon considered this sort of problem long ago, and his solution was superluminal telepathy; the OA solution (only a partial one) is wormholes to transfer information.
I don't think either solution exists in the real world.

Perhaps here is yet another solution to the Fermi Paradox: paranoid super-entities emerge periodically in the universe, then destroy each other by accident.
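
For a sense of scale of that light-speed gap, a quick back-of-envelope calculation (the distances are just illustrative picks): a question and its answer between two system-sized brains take twice the one-way light travel time.

# Round-trip signal delay in years is simply 2 x distance in light years.
for name, dist_ly in [("Alpha Centauri", 4.37),
                      ("a 100 ly neighbour", 100.0),
                      ("across the galaxy", 100_000.0)]:
    print(f"{name}: {2 * dist_ly:,.1f} year round trip")

Centuries or millennia between "what are you doing?" and the reply leaves plenty of room for the paranoia described above.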

loglo
2007-Jul-12, 05:44 PM
I am amused by all this speculation about replacing politicians with AI constructs. Why would you want to replace them at all? Remove them I can understand; replace them, no. :lol::whistle: