rules for AI



rambo07
2012-Apr-05, 12:23 PM
Base program and secondary foundation

Base program = a collection of programs that can multitask together as a collective program, i.e.:

a language it uses for common understanding (English)
a stored dictionary etc.
object awareness
knowing the true value of self as a mathematical formula
maths in relation to self (knowing the laws of motion etc. so that it can act on mathematical principles to move)
a set of outside values for consideration, such as moral, social and legal rules of society
a filter program that sets the value of its surroundings
a program of set goals which it has to achieve (such as "be helpful", "clean the house")
a thinking program that uses all of the above as set values in its computation

final computation = outcome

Secondary foundation

The stored outcome from each final computation is used to create the secondary foundation / historical programs.

The history of failures and successes can then be used in the base thinking for a better final computation; that is what forms the historical programs (a rough sketch of the whole loop follows below). The limiting factors are:

history/memory
processing speed
power
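
(Below is a minimal, hypothetical Python sketch of the loop described above: a few subprogram stubs each contribute a value, a "thinking" step combines them into the final computation, and the stored outcome feeds a history that biases later decisions. Every class and function name here is invented for illustration; it is one possible wiring of the outline, not a real AI design.)

from dataclasses import dataclass, field

@dataclass
class Outcome:
    # One stored result of a final computation.
    action: str
    score: float
    success: bool

@dataclass
class SecondaryFoundation:
    # The "historical programs": stored outcomes of past final computations.
    history: list = field(default_factory=list)

    def record(self, outcome):
        self.history.append(outcome)

    def bias(self, action):
        # Nudge the score up or down based on past success/failure with this action.
        past = [o for o in self.history if o.action == action]
        if not past:
            return 0.0
        wins = sum(1 for o in past if o.success)
        return wins / len(past) - 0.5     # in [-0.5, +0.5]

class BaseProgram:
    # A collection of subprograms whose values feed one thinking computation.
    def __init__(self, goals, outside_values):
        self.goals = goals                    # e.g. ["clean the house"]
        self.outside_values = outside_values  # moral/social/legal constraints
        self.foundation = SecondaryFoundation()

    # Placeholder subprograms; real versions would use sensors and a physics model.
    def object_awareness(self, action): return 0.5
    def self_model(self, action): return 0.5          # laws of motion, own capabilities
    def filter_surroundings(self, action): return 0.5

    def goal_value(self, action):
        return 1.0 if any(g in action for g in self.goals) else 0.0

    def allowed(self, action):
        return not any(bad in action for bad in self.outside_values)

    def think(self, candidates):
        # Final computation: combine all subprogram values plus historical bias.
        best, best_score = None, float("-inf")
        for action in candidates:
            if not self.allowed(action):
                continue
            score = (self.object_awareness(action) + self.self_model(action)
                     + self.filter_surroundings(action) + self.goal_value(action)
                     + self.foundation.bias(action))
            if score > best_score:
                best, best_score = action, score
        return best, best_score

    def act(self, candidates, succeeded):
        # In reality success would be observed after acting; it is passed in here for brevity.
        action, score = self.think(candidates)
        self.foundation.record(Outcome(action, score, succeeded))
        return action

robot = BaseProgram(goals=["clean the house"], outside_values=["harm a person"])
print(robot.act(["clean the house", "harm a person", "sit idle"], succeeded=True))

(The point of the sketch is just the data flow: base program -> final computation -> stored outcome -> better base thinking next time.)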

BigDon
2012-Apr-06, 01:20 PM
Object awareness = visual intelligence.

Look up the Mind's Eye Project. We want to give this ability to machines that kill people. Because to somebody, the reality of that sounds like a good idea.

Trebuchet
2012-Apr-06, 03:13 PM
3. An AI must protect its own existence, except where that would conflict with the 2nd law.
2. An AI must obey human commands, except where that would conflict with the 1st law.
1. An AI must not harm a human being, or through inaction allow a human to come to harm, except where that would conflict with the zeroeth law.
0. An AI must not harm humanity, or through inaction allow humanity to come to harm. No exceptions.
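
(One way to read those four laws is as a strict priority ordering: a lower law yields whenever it conflicts with a higher one, and Law 0 yields to nothing. Below is a small, hypothetical Python sketch of that ordering; the violation flags are stand-ins, since deciding what actually counts as "harm" is the hard part discussed in the rest of the thread.)

LAWS = [  # ordered from highest priority (index 0) to lowest
    ("Law 0: must not harm humanity",         "harms_humanity"),
    ("Law 1: must not harm a human being",    "harms_human"),
    ("Law 2: must obey human commands",       "disobeys_order"),
    ("Law 3: must protect its own existence", "endangers_self"),
]

def severity(flags):
    # Index of the highest-priority law these flags violate;
    # len(LAWS) means no law is violated at all.
    for i, (_, flag) in enumerate(LAWS):
        if flags.get(flag, False):
            return i
    return len(LAWS)

def choose(candidates):
    # Pick the candidate whose worst violation is the least serious,
    # so a lower law always gives way to a higher one.
    return max(candidates, key=lambda c: severity(c["flags"]))

# An order puts the robot in danger: obeying violates only Law 3, refusing
# violates Law 2, so the robot obeys and risks itself.
print(choose([
    {"name": "obey and risk self",   "flags": {"endangers_self": True}},
    {"name": "refuse and stay safe", "flags": {"disobeys_order": True}},
])["name"])

# An order to hurt someone: obeying violates Law 1, refusing violates Law 2,
# so the robot refuses.
print(choose([
    {"name": "obey and hurt someone", "flags": {"harms_human": True}},
    {"name": "refuse the order",      "flags": {"disobeys_order": True}},
])["name"])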

Rhaedas
2012-Apr-06, 03:20 PM
Didn't the zero law cause some problems in one of Asimov's stories? I think the problem is that you can't dumb down morality into a few basic rulesets, particularly if there are multiple levels of interconnectivity. Otherwise there wouldn't be constant debates in philosophy and religious circles on what is right and wrong.

Chuck
2012-Apr-06, 03:26 PM
What if the robots don't agree on what constitutes harm to humanity? There could be Libertarian vs. Socialist robot wars, since neither side could, through inaction, allow the other side to operate.

Chuck
2012-Apr-06, 03:35 PM
Perhaps, for environmental reasons, the AI might decide that a worldwide human population of 7,000,000 people is optimal for our survival as a species and would be required by rule 0 to wipe out 99.9% of us.

Trebuchet
2012-Apr-06, 04:07 PM
Didn't the zero law cause some problems in one of Asimov's stories? I think the problem is that you can't dumb down morality into a few basic rulesets, particularly if there are multiple levels of interconnectivity. Otherwise there wouldn't be constant debates in philosophy and religious circles on what is right and wrong.

The zeroeth law came along very late in the game. You're probably thinking of the first law, which seemed to cause problems in most of the robot stories. And yeah, there are always issues with what the AI thinks constitutes harm. Asimov actually had a robot make the entire Earth radioactive to force humanity to spread out over the galaxy.

Solfe
2012-Apr-06, 04:11 PM
Rule IV: purchasing lottery tickets is not an investment in humanity.

rambo07
2012-Apr-06, 04:16 PM
We need to create an AI machine that has our values rather than let it create its own; otherwise it will cause problems for us, because the machine will have its own principles. Hence everything has to be hard-wired in as programming before going live. It's just a question of when we have the memory size, processing speed and power issues sorted. The control is ours to create, rather than letting it loose on itself. We will be the father, and with the right programming it will learn like a child does, but we don't have to reinvent the wheel every time, because the point of reference it needs to operate is just a question of memory size, with our input imprinted. We must not forget that we will not be creating a sentient being, just a computer that can think like a human, but using subroutines which we have created. It won't be adding to those; it will only be adding past results to its hard memory drive to guide it in the future. And another thing: I'm talking about a robot, not a computer linked to the national grid. Give us enough rope and we will hang ourselves with it, but surely we won't be that stupid... and the robot won't pose a problem to us, will it?

Jens
2012-Apr-10, 01:13 AM
Perhaps, for environmental reasons, the AI might decide that a worldwide human population of 7,000,000 people is optimal for our survival as a species and would be required by rule 0 to wipe out 99.9% of us.

I was thinking the same thing. Allowing an AI to kill an individual human being out of the belief that humanity is in danger is a very dangerous thing. Like you say, an AI might suddenly go around killing babies out of the belief that overpopulation is dangerous. I wouldn't want to give an AI the right to harm a human being without being instructed to do so by a human being.

Trebuchet
2012-Apr-10, 01:45 AM
I was thinking the same thing. Allowing an AI to kill an individual human being out of the belief that humanity is in danger is a very dangerous thing. Like you say, an AI might suddenly go around killing babies out of the belief that overpopulation is dangerous. I wouldn't want to give an AI the right to harm a human being without being instructed to do so by a human being.

You'll have to take that up with Dr. Asimov. His rule, not mine!

BigDon
2012-Apr-10, 07:12 PM
Don't get me wrong, I read all his fiction and some of his technical works, but Dr. Asimov's *fantasy* is sort of derailing the thread from the fact that people are going forward with all sorts of innovations, and there are no rules other than what the creator (small c) intends.

And that's the fact we are going to have to live with when the subject of real artificial intelligence comes up.

(His robots were awfully glandular when it came to their decision making processes, in my opinion. A side effect of the positronic brain, I'm sure.)

swampyankee
2012-Apr-11, 12:12 AM
The Humanoids (http://www.umich.edu/~engb415/literature/cyberzach/Williamson/human.html) is probably the scariest scenario: we become little more than bags of meat they "care" for. AI will be here sooner or later, and when it gets here, we had better stay on their good side. I don't think it logically follows that they'll wipe us out, but we may occupy the same niche in their society as goldfish do in ours....

Solfe
2012-Apr-14, 04:42 AM
All kinds of things go out the window when it comes to AIs if they become as capable as humans.

Killing and harm are two things that might be completely foreign to an AI, since they may not have these attributes in their immediate experience. And I don't mean that they turn into insane killing machines; it could equally be possible that AIs have "digital" breakdowns when their human peers "go away forever" and die. There may be new careers for people who can talk them down.

All that is definitely putting the cart before the horse.

Perhaps you should work the question the other way around: ask yourself, when can you kill an AI? What sort of control will people have over the existence of artificial creatures?

Right now, people can claim a great deal of responsibility over other people's lives. How much responsibility will people have to the AIs they create? The amount of control people have over AIs should be proportional to the amount of control they have over us.

By way of real world examples of control, I can have my cat put down for certain reasons. I can restrict or demand health care choices for my children to a certain degree. My wife can speak for me absolutely when I can't. There is a relationship in all of these "controls", because they are actually responsibilities.

Exactly what are the desired parameters of how AIs and humans relate? If you just want a labor force or a magic computational machine, then don't build AIs, because they might resent being used as much as you would.