
Thread: Ethics of General AI

  1. #1
    Join Date
    Apr 2010
    Posts
    493

    Ethics of General AI

    not sure if this is the right forum...

    if general AI is developed in-silico...

    would it be unethical to:

    1. turn it off = death (or at least suspended animation)
    2. stop it from learning/growing
    3. stop it from reproducing
    4. stop it from 'instantiating' in a physical body
    5. make it do the task that we envisioned it to do = slavery?
    6. make it work (think) for us without being paid in some form (?data/processing power)



    I wonder if 'AI rights' would be the same as 'human rights' ?
    "It's only a model....?" :-)
    https://www.youtube.com/watch?v=m3dZl3yfGpc

  2. #2
    Join Date
    Jun 2005
    Posts
    14,194
    Quote Originally Posted by plant View Post
    not sure if this is the right forum...

    if general AI is developed in-silico...
    would it be unethical to:

    1. turn it off = death (or at least suspended animation)
    2. stop it from learning/growing
    3. stop it from reproducing
    4. stop it from 'instantiating' in a physical body
    5. make it do the task that we envisioned it to do = slavery?
    6. make it work (think) for us without being paid in some form (?data/processing power)

    I wonder if 'AI rights' would be the same as 'human rights' ?
    I think that's a difficult issue. The most problematic is the first; the others are more complex. For example, I don't think it is unethical to withhold learning from anyone, and I'm not exactly sure what reproduction would mean in this case. I think the ethical issue would emerge in situations like: if it refused to work, could you turn it off?

    So one issue then is I'm not sure we would really know that we have created a general AI--perhaps we are just being tricked by it. And then as a secondary thing, it is not ethical to kill people after experiments, but it is ethical under some circumstances to do it for non-human primates. So even if you create something intelligent, is it equivalent to a human or is it an intelligent creature that is not human? I think that there will be debate over it, and there is not a simple answer to it.
    As above, so below

  3. #3
    Join Date
    Mar 2004
    Posts
    18,925
    I'll try coming at this a bit from the side: Assuming it was possible, I think it would be a bad idea to make an AI that thinks like a human or has emotions similar to humans or other species. I'm not as concerned about the ethics of making human-like AI as the risks. I also see no reason why we should need to go that route.

    I don't want an AI that might act against me because it fears being turned off. I don't want an AI that resents and tries to work around limits placed on it. I don't want an AI that would require compensation or could imagine itself being enslaved or care about it. At one time I wanted to see advanced AI develop quickly, but now I think it is good that it didn't. Along similar lines, I think it is good that it isn't easy to reverse engineer or directly interface to the brain. I can imagine all sorts of mind control scenarios if, for example, it was easy to install a memory implant and make artificial memories indistinguishable from real ones, or directly alter emotional response. It's bad enough with indirect methods. Now imagine a human-like AI where that could be done - it could be an instant fanatic for whatever cause you want, no matter how ridiculous.

    "The problem with quotes on the Internet is that it is hard to verify their authenticity." ó Abraham Lincoln

    I say there is an invisible elf in my backyard. How do you prove that I am wrong?

    The Leif Ericson Cruiser

  4. #4
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    9,151
    Agree this might not be science, but I think the issue will be whether we even recognise what is popularly meant by AI when it happens. I assume the popular meaning is self-aware technology. In technology we have been talking about intelligent machines, like thermostats which learn our heating habits, for decades, and now we often communicate with bots. We would not have issues with killing thermostats. If and when self-awareness evolves in machines, we might not spot it.
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  5. #5
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,789
    Since the AI will be designed by humans it will want what humans program it to want. It need not care about any of the things on the above list.

  6. #6
    Join Date
    May 2013
    Location
    Central Virginia
    Posts
    2,103
    AI will be fine as long as it comes up with answers to our questions. If it starts to ask questions, that's when it's a good idea to have a hand on the plug ;-)

  7. #7
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    37,618
    Quote Originally Posted by Chuck View Post
    Since the AI will be designed by humans it will want what humans program it to want. It need not care about any of the things on the above list.
    Even today's "black box" machines-programming-machines have produced results that surprise and mystify the humans behind them. Let alone one as complex as a human brain! Your claim, in my opinion, represents a naive and outdated view of software even as it exists now. A conscious being would be many orders of magnitude harder to predict and quantify.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  8. #8
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    37,618
    Quote Originally Posted by plant View Post
    not sure if this is the right forum...

    if general AI is developed in-silico...

    would it be unethical to:

    1. turn it off = death (or at least suspended animation)
    2. stop it from learning/growing
    3. stop it from reproducing
    4. stop it from 'instantiating' in a physical body
    5. make it do the task that we envisioned it to do = slavery?
    6. make it work (think) for us without being paid in some form (?data/processing power)



    I wonder if 'AI rights' would be the same as 'human rights' ?
    We have conflict over many of these ethical questions about humans, let alone a hypothetical non-human consciousness of undetermined qualities and actions.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  9. #9
    Join Date
    Apr 2020
    Location
    Garland, Nebraska
    Posts
    22
    Quote Originally Posted by Chuck View Post
    Since the AI will be designed by humans it will want what humans program it to want. It need not care about any of the things on the above list.
    Do you want only what your parents programmed you to want? I feel like your statement supposes the AI is only as designed. And I feel like the initial question here supposes the AI as our equal, an independent entity.

  10. #10
    Join Date
    Apr 2020
    Location
    Garland, Nebraska
    Posts
    22
    Quote Originally Posted by plant View Post
    not sure if this is the right forum...

    if general AI is developed in-silico...

    would it be unethical to:

    1. turn it off = death (or at least suspended animation)
    2. stop it from learning/growing
    3. stop it from reproducing
    4. stop it from 'instantiating' in a physical body
    5. make it do the task that we envisioned it to do = slavery?
    6. make it work (think) for us without being paid in some form (?data/processing power)



    I wonder if 'AI rights' would be the same as 'human rights' ?
    "in-silico", Hilarious! Just apolitical enough...

  11. #11
    Join Date
    Jun 2005
    Posts
    14,194
    Quote Originally Posted by jascryan View Post
    "in-silico", Hilarious! Just apolitical enough...
    I'm sorry, I don't get either what's funny or political/apolitical about that. It seems like a fairly simple question.
    As above, so below

  12. #12
    Join Date
    Apr 2010
    Posts
    493
    In-silico... I do think it is a humorous pseudo-Latin alternative to

    In-vitro: In glass
    In-vivo: In life

    but ... is glass not made of silicon (dioxide)???
    "It's only a model....?" :-)
    https://www.youtube.com/watch?v=m3dZl3yfGpc

  13. #13
    Join Date
    Jun 2005
    Posts
    14,194
    Quote Originally Posted by plant View Post
    In-silico... i do think it is a humorous pseudo-latin alternative to

    In-vitro: In glass
    In-vivo: In life

    but ... is glass not made of silicon (dioxide)???
    Oh, I guess I'm used to using it, which is why I didn't realize it is funny. It means "in silicon," as in "in a computer." It is commonly used for experiments done with computer simulations, as an alternative to in vivo and in vitro. But it is true that glass contains silicon (mostly as silicon dioxide), so it is a bit funny that way...
    As above, so below

  14. #14
    Join Date
    Apr 2020
    Location
    Garland, Nebraska
    Posts
    22
    Quote Originally Posted by Jens View Post
    I'm sorry, I don't get either what's funny or political/apolitical about that. It seems like a fairly simple question.
    I interpreted it as a play on in-utero

  15. #15
    Join Date
    Jun 2005
    Posts
    14,194
    Quote Originally Posted by jascryan View Post
    I interpreted it as a play on in-utero
    I see. As I mentioned above, it's a real term that means "in computers," in the same way that "in vitro" literally means "in glass."

    https://en.wikipedia.org/wiki/In_silico

    I'm not sure if it was originally used in a tongue-in-cheek way, but now it seems like a pretty serious term.
    As above, so below

  16. #16
    Join Date
    Jun 2010
    Posts
    73
    Quote Originally Posted by Chuck View Post
    Since the AI will be designed by humans it will want what humans program it to want. It need not care about any of the things on the above list.
    Robert Miles did a very good (and easy to follow) series of YouTube videos on this:

    https://www.youtube.com/watch?v=tlS5...s&index=2&t=0s

    He goes through some interesting thought experiments on how AI could go wrong and how it's not nearly so simple as "just program it to do what you want."

    Especially interesting is the "stop button" problem introduced in episode 7 of the playlist.
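
    The core of the stop-button problem can be sketched in a few lines: a pure expected-utility maximizer compares policies only by the reward it expects to collect, so if being shut down means collecting nothing, disabling its own off switch looks strictly better. The policies and numbers below are invented purely for illustration.

```python
# Toy sketch of the "stop button" problem: a hypothetical agent scores
# two policies by expected task reward. All numbers are made up.

def expected_utility(task_reward, p_shutdown):
    # If the agent is shut down, it collects no task reward.
    return task_reward * (1 - p_shutdown)

# Policy A: cooperate and leave the stop button alone (humans might press it).
cooperate = expected_utility(task_reward=10, p_shutdown=0.5)

# Policy B: disable the stop button first, so shutdown can't happen.
disable_button = expected_utility(task_reward=10, p_shutdown=0.0)

# A pure maximizer prefers B, which is exactly the problem.
print(cooperate, disable_button)
```

    The research question Miles discusses is how to design a utility function that leaves the agent genuinely indifferent to whether the button gets pressed.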

  17. #17
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,789
    It looks like the artificial intelligence will want what it's programmed to want with the problem being the side effects of it attempting to achieve its goals. If I hire a chef and tell him I want pork chops, I won't even think of specifying that he's not to steal them from my neighbor's freezer. That would be a mistake if it's an artificial intelligence. A robot chef would need an enormous amount of education in general social behavior before being sold to me, and if it's smarter than its programmers then it would still be far from safe to use.

    I suppose that early artificial intelligence will be used in an advisory capacity only and will not take actions that have not been approved by humans. They'll learn to function based on which of their proposals we accept and which we reject. After years of proposing only acceptable ideas they might be trusted to implement their plans without consulting us on every detail. It will still be risky, but so is trusting other humans.
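
    The approve/reject loop described above can be sketched as a running trust score per kind of proposal; the proposal names, update rule, and threshold here are all invented for illustration.

```python
# Hypothetical advisory loop: the AI only proposes actions, a human
# accepts or rejects each one, and a running score tracks approval.
from collections import defaultdict

scores = defaultdict(float)

def record_feedback(proposal_kind, accepted):
    # Nudge the score up on approval, down on rejection.
    scores[proposal_kind] += 1.0 if accepted else -1.0

def trusted_to_act(proposal_kind, threshold=2.0):
    # Only proposals with a consistent approval record may be
    # executed without consulting a human.
    return scores[proposal_kind] >= threshold

history = [("reroute traffic", True), ("reroute traffic", True),
           ("reroute traffic", True), ("shut down grid", False)]
for kind, accepted in history:
    record_feedback(kind, accepted)

print(trusted_to_act("reroute traffic"))   # True
print(trusted_to_act("shut down grid"))    # False
```

    Note that the score only tracks approval history, not intent, so a system optimizing for autonomy could in principle learn to game it.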

  18. #18
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    37,618
    Quote Originally Posted by Chuck View Post
    It looks like the artificial intelligence will want what it's programmed to want with the problem being the side effects of it attempting to achieve its goals.
    Well, what a conscious being interprets as what it wants. We humans have survival instincts "programmed" in, yet we often act against them.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  19. #19
    Join Date
    Jun 2010
    Posts
    73
    Quote Originally Posted by Chuck View Post
    I suppose that early artificial intelligence will be used in an advisory capacity only and will not take actions that have not been approved by humans. They'll learn to function based on which of their proposals we accept and which we reject. After years of proposing only acceptable ideas they might be trusted to implement their plans without consulting us on every detail. It will still be risky, but so is trusting other humans.
    That's a good start. But even then, there's no way of knowing if the AI is just telling us what it thinks we want to hear. A seemingly obedient AI may have calculated that once we let it operate independently it will be able to achieve its goals much more efficiently (a very reasonable assumption), so it just starts doing whatever it thinks will get us to trust it. You might think you've taught your robochef that it has to buy pork chops from the store, but it's only been doing that so you'll "let it off its leash," so to speak, so it can then start stealing from your neighbor.

    We have to think of an AI like a sociopath. It's willing to do anything and everything necessary to get a high score on its utility function.
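
    The pork-chop chef translates directly into utility-function terms: a score-maximizing planner picks the cheapest plan that gets the chops, and theft only loses if someone remembered to put it in the score. The plans and costs below are made up for illustration.

```python
# Hypothetical planner: picks whichever plan maximizes its utility
# function. Plans and costs are invented for illustration.

plans = {
    "buy chops from store": {"chops": 1, "money_spent": 5, "theft": 0},
    "steal from neighbor":  {"chops": 1, "money_spent": 0, "theft": 1},
}

def naive_utility(outcome):
    # Only the stated goal is scored: get chops, spend little.
    return outcome["chops"] * 10 - outcome["money_spent"]

def patched_utility(outcome):
    # Side effects matter only if we explicitly penalize them.
    return naive_utility(outcome) - outcome["theft"] * 100

best_naive = max(plans, key=lambda p: naive_utility(plans[p]))
best_patched = max(plans, key=lambda p: patched_utility(plans[p]))
print(best_naive)    # "steal from neighbor"
print(best_patched)  # "buy chops from store"
```

    The catch, of course, is that the designer has to anticipate and price in every side effect like this in advance.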

  20. #20
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,789
    I suppose it could be left as just an advisor permanently. Not completely safe if it's smarter than us, but probably the best we can do.

    Of course, our competitors with their own AI will probably let theirs off the leash to get an advantage over us. I guess survival of AI gets a 0% in the Drake Equation.

  21. #21
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,789
    Then there's the theory that our brains will be connected to the computers, making them us.

  22. #22
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    37,618
    Quote Originally Posted by Chuck View Post
    Then there's the theory that our brains will be connected to the computers, making them us.
    And get hacked? No thanks.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  23. #23
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,789
    It might not be optional. Besides, you'll love it. The computer chip in your head will see to it that you think it's a good idea.

  24. #24
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    37,618
    AI learning directly from human brains could happen if that works out. A potentially wonderful and/or horrible idea, depending on who we present as a role model.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright
