Page 1 of 2
Results 1 to 30 of 57

Thread: AI, Robots, & Ethics

  1. #1
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    2,558

    AI, Robots, & Ethics

    Asimov's Three Laws of Robotics meet HAL-9000.


    https://techxplore.com/news/2018-10-...-behavior.html

    A principle-based paradigm to foster ethical behavior in autonomous machines

    by Ingrid Fadelli, Tech Xplore, Nov 5, 2018

    A team of researchers at the University of Hartford, the University of Connecticut, and the Max Planck Institute for Intelligent Systems have recently proposed a case-supported, principle-based behavior paradigm toward ensuring the ethical behavior of autonomous machines. Their paper, published in Proceedings of the IEEE, argues that ethically significant behavior of autonomous systems should be guided by explicit ethical principles, determined via a consensus of ethicists.

    "This year marks the 50th anniversary of the movie 2001: A Space Odyssey," Michael Anderson, one of the researchers who carried out the study, told TechXplore. "While reading The Making of 2001: A Space Odyssey at the turn of the century, it struck me that much of HAL's capability had lost its science fiction aura and was on the cusp of being realized. It also struck me that they had gotten the ethics so wrong: If HAL was around the corner, it was time to get the ethics right."

    Anderson and his colleagues have been working on discovering ethical principles that can be embodied in autonomously functioning machines, as part of a project called Machine Ethics. They believe that the behavior of artificial intelligence should be guided by ethical principles. These principles, determined by a consensus of ethicists, should not only help to ensure the ethical behavior of complex and dynamic systems, but also serve as a basis to justify this behavior. The researchers developed an extensible, general case-supported and principle-based behavior paradigm called CPB. In this paradigm, an autonomous system decides its next action using a principle abstracted from cases for which ethicists have agreed upon the correct action to undertake.
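    The article gives no pseudocode, but the core idea of a case-supported, principle-based decision step can be sketched in a few lines: score each candidate action by how well it satisfies a set of duties, and let a "principle" (weights assumed to be abstracted from ethicist-agreed cases) arbitrate. Everything below (the duties, the weights, the actions) is invented for illustration; it is not the authors' actual model.

```python
# Minimal sketch of a principle-based action choice. All names and
# numbers are hypothetical, not taken from the Proceedings of the IEEE paper.

# Duties an eldercare robot might weigh (illustrative only)
DUTIES = ["honor_autonomy", "prevent_harm", "respect_privacy"]

# A "principle" as duty weights, standing in for what would be
# abstracted from cases on which ethicists reached consensus.
principle = {"honor_autonomy": 1.0, "prevent_harm": 3.0, "respect_privacy": 0.5}

def score(action_profile):
    """Weighted duty satisfaction for one candidate action."""
    return sum(principle[d] * action_profile[d] for d in DUTIES)

# Candidate actions with per-duty satisfaction scores in [-1, 1]
actions = {
    "remind_patient":   {"honor_autonomy": 0.5,  "prevent_harm": 0.4,  "respect_privacy": 0.0},
    "notify_caregiver": {"honor_autonomy": -0.5, "prevent_harm": 0.9,  "respect_privacy": -0.7},
    "do_nothing":       {"honor_autonomy": 1.0,  "prevent_harm": -0.8, "respect_privacy": 1.0},
}

best = max(actions, key=lambda a: score(actions[a]))
print(best)  # → notify_caregiver
```

    With these made-up weights, prevent_harm dominates, so the sketch picks notify_caregiver even though it costs some autonomy and privacy; change the weights (the principle) and the choice changes with them.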
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    — Mark Twain, Life on the Mississippi (1883)

  2. #2
    Join Date
    Sep 2003
    Posts
    12,641
    Would you believe China now has an AI news anchor? Hmmmm.
    We know time flies, we just can't see its wings.

  3. #3
    Join Date
    Jul 2012
    Posts
    289
    Quote Originally Posted by Roger E. Moore View Post
    Asimov's Three Laws of Robotics meet HAL-9000.


    https://techxplore.com/news/2018-10-...-behavior.html

    A principle-based paradigm to foster ethical behavior in autonomous machines

    by Ingrid Fadelli, Tech Xplore, Nov 5, 2018

    A team of researchers at the University of Hartford, the University of Connecticut, and the Max Planck Institute for Intelligent Systems have recently proposed a case-supported, principle-based behavior paradigm toward ensuring the ethical behavior of autonomous machines. Their paper, published in Proceedings of the IEEE, argues that ethically significant behavior of autonomous systems should be guided by explicit ethical principles, determined via a consensus of ethicists.

    "This year marks the 50th anniversary of the movie 2001: A Space Odyssey," Michael Anderson, one of the researchers who carried out the study, told TechXplore. "While reading The Making of 2001: A Space Odyssey at the turn of the century, it struck me that much of HAL's capability had lost its science fiction aura and was on the cusp of being realized. It also struck me that they had gotten the ethics so wrong: If HAL was around the corner, it was time to get the ethics right."

    Anderson and his colleagues have been working on discovering ethical principles that can be embodied in autonomously functioning machines, as part of a project called Machine Ethics. They believe that the behavior of artificial intelligence should be guided by ethical principles. These principles, determined by a consensus of ethicists, should not only help to ensure the ethical behavior of complex and dynamic systems, but also serve as a basis to justify this behavior. The researchers developed an extensible, general case-supported and principle-based behavior paradigm called CPB. In this paradigm, an autonomous system decides its next action using a principle abstracted from cases for which ethicists have agreed upon the correct action to undertake.
    I get that AI presents serious challenges, but a consensus of ethicists? Sounds scary to me.

    Would we not be better off with a panel of computer scientists, logicians, mathematicians and that ilk? To me there's little question that getting the ethics right is a matter of our survival.

  4. #4
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,608
    Quote Originally Posted by 7cscb View Post
    Would we not be better off with a panel of computer scientists, logicians, mathematicians and that ilk?
    They are notoriously poor at understanding the implications of technology and how it can be used. Look at the abuse of Facebook data, for example. It might be better if every startup was required to have an ethics panel.

  5. #5
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,162
    Quote Originally Posted by 7cscb View Post
    I get that AI presents serious challenges, but a consensus of ethicists? Sounds scary to me.

    Would we not be better off with a panel of computer scientists, logicians, mathematicians and that ilk? To me there's little question that the ethics is our survival.
    Nope. One look at Microsoft's racist AI tells you, robots NEED ethical guides, and computer experts are not it.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  6. #6
    Join Date
    Feb 2003
    Location
    Depew, NY
    Posts
    11,696
    Quote Originally Posted by Noclevername View Post
    Nope. One look at Microsoft's racist AI tells you, robots NEED ethical guides, and computer experts are not it.
    I worked at a company that had IT produce a product to randomly select employees for drug testing. The problem? IT wrote the program so that it would not select people in the IT department. They were working on the mistaken assumption that all IT employees were drug tested because before the implementation of the program, they were. They got fired and the whole program was scrapped, costing a lot of money.
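    A hypothetical reconstruction of that kind of bug, with invented names and departments: the selection looks random, but a silent filter means one group can never be drawn.

```python
import random

# All employees and departments below are invented for illustration.
employees = [
    {"name": "Ann",  "dept": "Sales"},
    {"name": "Bob",  "dept": "IT"},
    {"name": "Cara", "dept": "HR"},
    {"name": "Dan",  "dept": "IT"},
]

def select_for_testing_buggy(staff, k, seed=0):
    """Flawed version: a silent filter means IT staff can never be selected."""
    pool = [e for e in staff if e["dept"] != "IT"]
    random.Random(seed).shuffle(pool)
    return pool[:k]

def select_for_testing_fixed(staff, k, seed=0):
    """Correct version: every employee is in the pool."""
    pool = list(staff)
    random.Random(seed).shuffle(pool)
    return pool[:k]

# No matter the seed, the buggy selection never names an IT employee.
print([e["name"] for e in select_for_testing_buggy(employees, 2)])
```

    The output of the buggy version is perfectly plausible on any single run, which is why this class of bug tends to survive until someone audits who has never been picked.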
    Solfe

  7. #7
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    2,558
    Humans will have to remain in charge of AI, no matter what, for the foreseeable future. I can live with armed robots, but not AUTONOMOUS ones.

    Also, here is an article on various problems with AI and ethics. If AI/robots replace humans for jobs, retraining must be provided. You can't increase unemployment without trouble.

    https://www.weforum.org/agenda/2016/...-intelligence/

    "We shouldn’t forget that AI systems are created by humans, who can be biased and judgmental."
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    — Mark Twain, Life on the Mississippi (1883)

  8. #8
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    2,558
    https://techxplore.com/news/2018-11-...-platform.html

    A robot that is being upgraded to work with long-term elderly/disabled/injured/hospitalized people. Ethics panels had better be up to snuff.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    — Mark Twain, Life on the Mississippi (1883)

  9. #9
    Join Date
    May 2007
    Location
    Earth
    Posts
    10,209
    Quote Originally Posted by Roger E. Moore View Post
    Humans will have to remain in charge of AI, no matter what, for the foreseeable future. I can live with armed robots, but not AUTONOMOUS ones.

    Also, here is an article on various problems with AI and ethics. If AI/robots replace humans for jobs, retraining must be provided. You can't increase unemployment without trouble.

    https://www.weforum.org/agenda/2016/...-intelligence/

    "We shouldn’t forget that AI systems are created by humans, who can be biased and judgmental."
    We humans have shown ourselves to have members who are quite capable of truly horrible actions. What if the autonomous AI decides its human masters are unethical and refuses orders to, oh, fire a missile into a densely populated area because of the likelihood of innocents getting killed? See this story: https://www.tor.com/2018/10/17/ai-an...m-pat-cadigan/

    “Ethical” and “obedient” are mutually exclusive.
    Information about American English usage here and here. Floating point issues? Please read this before posting.

    How do things fly? This explains it all.

    Actually they can't: "Heavier-than-air flying machines are impossible." - Lord Kelvin, president, Royal Society, 1895.



  10. #10
    Join Date
    Dec 2004
    Location
    UK
    Posts
    9,120
    I had an idea a long time ago: if human existence, within their model, were actually part of their own programming system, then by harming or killing a human they would actually be destroying their own processing function, their own programming. I sort of see that as how it works, to some extent, with humans: by killing someone you are partially killing yourself. That doesn't work with some people, as their drive to harm and kill will even overcome their need for self-preservation, and with some they are already so damaged it wouldn't have as much effect, or none at all.
    ................................

  11. #11
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    2,558
    http://moralmachine.mit.edu/

    Just discovered this website, which is from MIT. You get to make moral choices for a robot. I started this and whoa, it was weird. Stopped me dead, made me think hard.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    — Mark Twain, Life on the Mississippi (1883)

  12. #12
    Join Date
    Dec 2004
    Location
    UK
    Posts
    9,120
    Has anyone actually thought about what it might be like to be in the same room as a humanoid machine, with the same strength as you or more, but which is completely devoid of any real consciousness? There would be no way for the machine to really check whether its actions were sane or safe for the people around it.
    It might actually be quite frightening.
    ................................

  13. #13
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    2,558
    Quote Originally Posted by WaxRubiks View Post
    Has anyone actually thought about what it might be like to be in the same room as a humanoid machine, with the same strength as you or more, but which is completely devoid of any real consciousness? There would be no way for the machine to really check whether its actions were sane or safe for the people around it. It might actually be quite frightening.
    Well, how different would this be from working in an automated factory? Not sure I understand the query.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    — Mark Twain, Life on the Mississippi (1883)

  14. #14
    Join Date
    May 2003
    Posts
    6,047
    Quote Originally Posted by swampyankee View Post
    We Humans have shown ourselves to have members who are quite capable of truly horrible actions. What if the autonomous AI decides its human masters are unethical and refuses orders to, oh, fire a missile into a densely populated area because of the likelihood of innocents getting killed?
    Here's xkcd's take on that thought.
    Conserve energy. Commute with the Hamiltonian.

  15. #15
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    2,558
    Quote Originally Posted by swampyankee View Post
    We Humans have shown ourselves to have members who are quite capable of truly horrible actions. What if the autonomous AI decides its human masters are unethical and refuses orders to, oh, fire a missile into a densely populated area because of the likelihood of innocents getting killed?
    Been there, done that: a human (not an AI) saved everyone.

    https://en.wikipedia.org/wiki/Stanislav_Petrov

    Stanislav Yevgrafovich Petrov... (7 September 1939 – 19 May 2017) was a lieutenant colonel of the Soviet Air Defence Forces who became known as "the man who single-handedly saved the world from nuclear war" for his role in the 1983 Soviet nuclear false alarm incident. On 26 September 1983, three weeks after the Soviet military had shot down Korean Air Lines Flight 007, Petrov was the duty officer at the command center for the Oko nuclear early-warning system when the system reported that a missile had been launched from the United States, followed by up to five more. Petrov judged the reports to be a false alarm, and his decision to disobey orders, against Soviet military protocol, is credited with having prevented an erroneous retaliatory nuclear attack on the United States and its NATO allies that could have resulted in large-scale nuclear war. Investigation later confirmed that the Soviet satellite warning system had indeed malfunctioned.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    — Mark Twain, Life on the Mississippi (1883)

  16. #16
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,162
    Quote Originally Posted by WaxRubiks View Post
    Has anyone actually thought about what it might be like to be in the same room as a humanoid machine, with the same strength as you or more, but which is completely devoid of any real consciousness? There would be no way for the machine to really check whether its actions were sane or safe for the people around it.
    It might actually be quite frightening.
    I drive on the highway. Surrounded by other cars. With irrational, known-to-kill, unsafe random factors behind the wheels.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  17. #17
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    2,558
    Quote Originally Posted by Noclevername View Post
    I drive on the highway. Surrounded by other cars. With irrational, known-to-kill, unsafe random factors behind the wheels.
    I drive in South Carolina, where tailgating at high speed is apparently a state requirement though I cannot find the law mandating it.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    — Mark Twain, Life on the Mississippi (1883)

  18. #18
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    2,558
    Speaking of robotic police, check out the photo with this article. It really does look like a giant toy.


    https://techxplore.com/news/2018-11-...re-summit.html

    'Robocop' on patrol at Singapore summit
    November 14, 2018

    Hi-tech Singapore has deployed an autonomous robot with a swiveling camera for a head and flashing lights to patrol a summit venue—arresting the attention of amused passers-by who stopped to snap selfies. The white, four-wheeled buggy, measuring about five feet (1.5 metres), trundled around the perimeter of a convention centre in the city-state, providing additional security at a meeting of world leaders. The so-far-unnamed robot, decked out with flashing blue and red lights, is a prototype reportedly developed by the police, which can transmit a 360-degree picture of the area it is patrolling.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    — Mark Twain, Life on the Mississippi (1883)

  19. #19
    Join Date
    Sep 2003
    Location
    The beautiful north coast (Ohio)
    Posts
    48,771
    John Varley, in a few of his short stories, had an interesting take on teaching ethics to AI robots - they are raised like humans, by humans (like how humans learn ethics). I recall one short story about a human female asteroid prospector, whose traveling companion is a "teenage" robot, going through all the emotional problems that a human teenager does ("Why do I always have to clean out the airlock?").

    Data parenting Lal (ST:NG) would be another example.
    At night the stars put on a show for free (Carole King)

    All moderation in purple - The rules

  20. #20
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,162
    Quote Originally Posted by Swift View Post
    John Varley, in a few of his short stories, had an interesting take on teaching ethics to AI robots - they are raised like humans, by humans (like how humans learn ethics). I recall one short story about a human female asteroid prospector, whose traveling companion is a "teenage" robot, going through all the emotional problems that a human teenager does ("Why do I always have to clean out the airlock?").

    Data parenting Lal (ST:NG) would be another example.
    In the webcomic Schlock Mercenary, Howard Tayler calls creating a new AI mind "growpramming".
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  21. #21
    Join Date
    Feb 2003
    Location
    Depew, NY
    Posts
    11,696
    Quote Originally Posted by Swift View Post
    John Varley, in a few of his short stories, had an interesting take on teaching ethics to AI robots - they are raised like humans, by humans (like how humans learn ethics). I recall one short story about a human female asteroid prospector, whose traveling companion is a "teenage" robot, going through all the emotional problems that a human teenager does ("Why do I always have to clean out the airlock?").

    Data parenting Lal (ST:NG) would be another example.
    Oh good. Machines that can make teenage mistakes at superhuman speeds.

    My dilemma with virtually all autonomous devices is, shouldn't they have the buyer's best interests at heart? That is almost worse than having a superhuman teenager make mistakes for, like, 100 different reasons.

    Now a product with adult reasoning would be my perfect solution. It should have the owner's best interests at heart, but it has a heart and ethics. Maybe it takes risks like a human, but at phenomenal speeds. I'd be willing to accept sudden death at the hands of a machine if it tried to dodge someone crossing the street 1,197,148 different ways but couldn't save both of us. OK, fine. I'm game for that. I just don't like hard parameters that automatically deal a bad hand to someone.

    I could envision a scenario where hard parameters (sacrifice vehicle and passengers over pedestrians every time) would work, if the speed were kept much lower to make the chance of a deadly collision far less likely. That would be a total game changer. It'd take an hour to get to a store 10 miles away, but the autonomous vehicle has a phone, TV, coffee pot, grill and an NES, so I can do something with that time. That is a fair trade. Odd, but fair.
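    The low-speed trade-off has a simple physical basis: stopping distance grows roughly with the square of speed. A rough sketch, assuming a 1.5-second reaction time and a 0.7 tire-road friction coefficient (both invented round numbers, not measured values):

```python
# Rough stopping-distance model: reaction distance plus braking
# distance, d = v*t + v^2 / (2*mu*g). Assumed constants, not data.
G = 9.81          # gravitational acceleration, m/s^2
MU = 0.7          # assumed tire-road friction coefficient
REACTION_S = 1.5  # assumed reaction time, seconds

def stopping_distance_m(speed_mph):
    v = speed_mph * 0.44704            # mph -> m/s
    reaction = v * REACTION_S          # distance covered before braking starts
    braking = v * v / (2 * MU * G)     # v^2 / (2 * mu * g)
    return reaction + braking

for mph in (10, 30, 50):
    print(mph, "mph ->", round(stopping_distance_m(mph), 1), "m")
```

    Under those assumptions, total stopping distance is roughly 8 m at 10 mph versus about 33 m at 30 mph, so even a modest speed cap buys an autonomous vehicle a great deal of margin before any "who to hit" dilemma can arise.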
    Solfe

  22. #22
    Join Date
    Dec 2004
    Location
    UK
    Posts
    9,120
    I think that within the foreseeable future, robot cars aren't going to be capable of making the moral choice to minimize deaths in a road accident. So I think the laws will have to become stricter about how humans behave, how pedestrians and cyclists behave. Stepping out onto the road to cross it, leaving a robot car with the option of hitting you or changing course and maybe hitting one or more people on the pavement (sidewalk), will fall foul of the law if it leads to an accident.

    edit: well, I don't know how the law stands on road crossers at the moment, in the UK or US... but I think the laws on pedestrians etc. will become stricter, anyway.
    Last edited by WaxRubiks; 2018-Nov-17 at 10:54 AM.
    ................................

  23. #23
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,162
    Quote Originally Posted by WaxRubiks View Post
    I think that within the foreseeable future, robot cars aren't going to be capable of making the moral choice to minimize deaths in a road accident. So I think the laws will have to become stricter about how humans behave, how pedestrians and cyclists behave. Stepping out onto the road to cross it, leaving a robot car with the option of hitting you or changing course and maybe hitting one or more people on the pavement (sidewalk), will fall foul of the law if it leads to an accident.

    edit: well, I don't know how the law stands on road crossers at the moment, in the UK or US... but I think the laws on pedestrians etc. will become stricter, anyway.
    More strict laws, or more strictly enforced?
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  24. #24
    Join Date
    Dec 2011
    Location
    Point Clear, Essex,UK
    Posts
    461
    With AI it all comes down to numbers.

    Years ago I was working on a console driving game.
    We had the conversation: Do you lose more points for driving over an old lady or a policeman, what about the kid with the balloon?
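    In practice that conversation ends up as a lookup table somewhere in the code. A toy sketch of what such a penalty table might look like; all targets and point values are invented, not from any actual game:

```python
# Toy collision-penalty table for a hypothetical driving game.
PENALTIES = {
    "old_lady": 100,
    "policeman": 200,
    "kid_with_balloon": 500,  # the design question: who costs the most points?
}

def collision_penalty(targets):
    """Total points lost for everything the player drove over."""
    return sum(PENALTIES.get(t, 50) for t in targets)  # 50 for anything unlisted

print(collision_penalty(["old_lady", "kid_with_balloon"]))  # → 600
```

    The whole "ethics" of the game lives in those three numbers, which is exactly the conversation the dev team ends up having.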
    See: 'The God Kit' -- 'The Brigadier And The Pit' -- Carl N Graham -- Sci-fi blog: The Alien Reporter

  25. #25
    Join Date
    Feb 2003
    Location
    Depew, NY
    Posts
    11,696
    Quote Originally Posted by PetTastic View Post
    With AI it all comes down to numbers.

    Years ago I was working on a console driving game.
    We had the conversation: Do you lose more points for driving over an old lady or a policeman, what about the kid with the balloon?
    Didn't Sylvester Stallone have a score card in Death Race 2000?
    Solfe

  26. #26
    Join Date
    Feb 2003
    Location
    Depew, NY
    Posts
    11,696
    Quote Originally Posted by Noclevername View Post
    More strict laws, or more strictly enforced?
    I would think less variability. My college campus has its own police force and rules, which are followed strictly. One of the rules is having to yield for pedestrians in crosswalks. That is not the law on the five-lane roads that run along the perimeter of the campus; there you need to look at the lights. I can't tell you how many times I have seen a pedestrian try to test that. Five lanes of traffic can try to stop for pedestrians, but only try.

    There is a very nasty three-way intersection in the local park less than a mile away from the school. You can enter the park, exit the park, or merge onto an expressway. Although it's only two lanes, the lanes are huge at the intersection. In the past, the expressway was 50 mph. They knocked that back to 30 mph. Very dangerous, even at low speeds. All of the green space encourages people to attempt to cross the intersection instead of at the crosswalks. Here is a map.

    If I had a plan for that area, I'd add a light, a roundabout, or several more crosswalks, plus drop the speed to 10 mph. There is a stop in every direction; there is no need for even 30 mph.
    Solfe

  27. #27
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,162
    Quote Originally Posted by Solfe View Post
    I would think less variability.
    You might get that on a campus, but nationwide, US states will be widely (and wildly) varied. Nature of the beast, I suppose.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  28. #28
    Join Date
    Jun 2005
    Posts
    13,377
    One thing about the “moral dilemma” scenarios is that I don't think it's really very common to have to make a decision like that. I think that if I were in the bridge problem I would probably just slam on the brakes and hope for the best: perhaps the baby will fall over and slide under the car, and hopefully the car won't slide off the bridge. I think that people normally try to avoid the closest danger first and then try to deal with the next one, etc., sometimes with good results and sometimes with catastrophic ones.


    Sent from my iPhone using Tapatalk
    As above, so below

  29. #29
    Join Date
    Mar 2004
    Posts
    18,245
    Quote Originally Posted by Jens View Post
    One thing about the “moral dilemma” scenarios is that I don't think it's really very common to have to make a decision like that. I think that if I were in the bridge problem I would probably just slam on the brakes and hope for the best: perhaps the baby will fall over and slide under the car, and hopefully the car won't slide off the bridge. I think that people normally try to avoid the closest danger first and then try to deal with the next one, etc., sometimes with good results and sometimes with catastrophic ones.
    I was going to say something similar. My experience is that there is usually very little time to deal with driving emergencies. Typically conscious analysis comes later, assuming things don't turn out too badly. I expect driving AI will usually do better than humans in emergencies, simply because they'll be able to work through which options would minimize conditions likely to cause injury (they don't have to make moral decisions to do that, but rather analyze how best to minimize collision speed or redirect a vehicle so there won't be a collision), while humans don't have time to do significant analysis. I can easily see where a pilot of an airplane or a captain of a ship could have time to make decisions about which choices could minimize lives lost, but rarely someone driving a car.
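    A minimal sketch of that non-moral optimization: the controller doesn't rank victims, it simply picks the maneuver predicted to shed the most speed before impact. The maneuvers and all the numbers below are invented for illustration.

```python
# Pick the maneuver minimizing predicted impact speed, using the
# constant-deceleration relation v^2 = v0^2 - 2*a*d. Invented numbers.
def expected_impact_speed(v0_mps, decel_mps2, distance_m):
    """Speed remaining after braking at `decel_mps2` over `distance_m`."""
    v_sq = v0_mps ** 2 - 2 * decel_mps2 * distance_m
    return max(v_sq, 0.0) ** 0.5

maneuvers = {
    "brake_straight":  {"decel": 7.0, "distance": 25.0},
    "brake_and_steer": {"decel": 5.0, "distance": 32.0},  # longer path, weaker braking
}

v0 = 20.0  # initial speed in m/s, roughly 45 mph
best = min(
    maneuvers,
    key=lambda m: expected_impact_speed(v0, maneuvers[m]["decel"], maneuvers[m]["distance"]),
)
print(best)  # → brake_straight
```

    Here straight-line braking wins because the harder deceleration outweighs the longer swerving path; with different numbers the comparison flips, still without any moral reasoning involved.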

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." — Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  30. #30
    Join Date
    Dec 2004
    Location
    UK
    Posts
    9,120
    Quote Originally Posted by Van Rijn View Post
    I was going to say something similar. My experience is that there is usually very little time to deal with driving emergencies. Typically conscious analysis comes later, assuming things don't turn out too badly. I expect driving AI will usually do better than humans in emergencies, simply because they'll be able to work through which options would minimize conditions likely to cause injury (they don't have to make moral decisions to do that, but rather analyze how best to minimize collision speed or redirect a vehicle so there won't be a collision), while humans don't have time to do significant analysis. I can easily see where a pilot of an airplane or a captain of a ship could have time to make decisions about which choices could minimize lives lost, but rarely someone driving a car.
    I think, in the example I suggested, that if someone steps out onto the road for some reason, then the car has to decide whether or not it mounts the pavement (sidewalk) in order to avoid them, risking hitting someone on the pavement. That might be the sort of emergency decision that becomes common. So I think that option, of mounting the pavement, might have to be disallowed for the car, and the laws about how pedestrians behave when they enter the road space made stricter or, as NCN suggested, more rigorously applied (depending on the laws of that part of the world).
    ................................
