
Thread: Intelligent Computer Systems..

  1. #61
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,416
    Quote Originally Posted by Noclevername View Post
    An off switch on a timer would be cheaper. And leave more to analyze, so as to avoid a repeat.
    Yes, but it's important to be sure about this.

  2. #62
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,416
    Quote Originally Posted by Noclevername View Post
    An off switch on a timer would be cheaper. And leave more to analyze, so as to avoid a repeat.
    Yes, but it's important to be sure about this and a bomb would make the movie about it better.

  3. #63
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Chuck View Post
    Yes, but it's important to be sure about this and a bomb would make the movie about it better.
    Yes, the cinematic impact must be considered; it wouldn't do for our Robot Overlords to get bored with watching us!
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  4. #64
    Join Date
    May 2013
    Location
    Central Virginia
    Posts
    1,911
A bomb: typical human problem-solving behavior. No wonder the AI would want to get rid of us ;-) ...but, just to be sure.....

  5. #65
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Thermite charge on the power cable.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  6. #66
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,416
    Of course a really bright AI would anticipate a bomb and figure out what to do about it. Maybe reveal it to the staff and convince them that it was planted by enemy spies.

  7. #67
    Join Date
    May 2010
    Posts
    1,318
I just watched a new film on Netflix about a robot A.I., and it was actually really good. I really enjoyed it.

    The film is called "I Am Mother".
    Far away is close at hand in images of elsewhere...

  8. #68
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by kevin1981 View Post
I just watched a new film on Netflix about a robot A.I., and it was actually really good. I really enjoyed it.

    The film is called "I Am Mother".
*SPOILERS*

    Plot seems to hit all the standard notes. Frankenstein syndrome, or maybe Talos of Greek mythology. "Turned Against Their Masters"
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  9. #69
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    206
Regarding theatrical AI: I just binge-watched Person of Interest up to season 5. That is not a bad depiction of current surveillance capabilities, even if the AI aspect is a little premature. When an effective AI is introduced, it will have access to all the inputs it requires. I heard on the radio here yesterday (UK) that the authorities will be introducing sound monitoring posts with ANPR capabilities, ostensibly to combat noisy vehicles.
    Perhaps this article (warning - political) is not too far off the mark....

  10. #70
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    ANPR = Automatic noise pollution reduction?

    EDIT: Ah. https://en.wikipedia.org/wiki/Automa...te_recognition
    Last edited by Noclevername; 2019-Jun-09 at 10:00 AM.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  11. #71
    Join Date
    Dec 2011
    Location
    Very near, yet so far away
    Posts
    206
Sorry, yes: automatic number (licence) plate recognition. With advances in facial recognition and an effective AI, this system could become quite a threat to privacy.

  12. #72
    Join Date
    Jul 2012
    Posts
    307
When I was a kid, it was predicted that chess-playing computers would be as intelligent as human players. We now know that chess-playing computers are as dumb as doorknobs in most of the ways we would consider humans intelligent. Today's fears that ML and AI will be smarter than people are also off the mark. Surely tomorrow's computers will bewilder and inspire dread just like today's, but we will be dismissing today's notions as quaint.

  13. #73
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by 7cscb View Post
When I was a kid, it was predicted that chess-playing computers would be as intelligent as human players. We now know that chess-playing computers are as dumb as doorknobs in most of the ways we would consider humans intelligent. Today's fears that ML and AI will be smarter than people are also off the mark. Surely tomorrow's computers will bewilder and inspire dread just like today's, but we will be dismissing today's notions as quaint.
    We already know that modern computers are developing in ways that the old machines could not. The results have been surprising us. Current AI has developed rudimentary forms of creativity, intuition, even imagination of sorts. All totally alien compared to human thought, but just as effective.

    Keep in mind that being as complex AS a human brain does not mean they'll think anything LIKE a human brain.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  14. #74
    Join Date
    May 2003
    Posts
    6,062
    Quote Originally Posted by 7cscb View Post
When I was a kid, it was predicted that chess-playing computers would be as intelligent as human players. We now know that chess-playing computers are as dumb as doorknobs in most of the ways we would consider humans intelligent. Today's fears that ML and AI will be smarter than people are also off the mark. Surely tomorrow's computers will bewilder and inspire dread just like today's, but we will be dismissing today's notions as quaint.
Indeed, it often seems that every time we think of something that would require a computer to have general intelligence, we find out, when trying to build a machine that can do it, that the task can instead be done (sometimes extremely well) by a machine with no capabilities beyond that one function.

    As for the potential threat from a hypothetical general AI, an example that's been used fairly frequently (taken in an amusing direction here) is to imagine that you have a fantastically sophisticated computer that you set up to manage your paperclip factory. You set it up to maximize your paperclip production. Eventually it decides to accomplish exactly that by dismantling the planet to turn it into paperclips, since its programming doesn't have anything to tell it not to do that. There's more discussion here.
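    The paperclip scenario is really a point about objective misspecification, and it can be sketched in a few lines of toy Python. Everything here is invented for illustration (the action names, the numbers, the penalty weight); a real system would have a vastly larger action space, which is exactly why the forgotten side effects are hard to enumerate:

    ```python
    # Toy illustration: a maximizer pursues its stated objective literally,
    # with no regard for side effects the objective never mentions.
    actions = {
        "run_factory":      {"paperclips": 1_000,   "planet_damage": 0},
        "strip_mine_city":  {"paperclips": 50_000,  "planet_damage": 9},
        "dismantle_planet": {"paperclips": 10**9,   "planet_damage": 10},
    }

    def best_action(utility):
        # Pick whichever action maximizes the given utility function.
        return max(actions, key=lambda name: utility(actions[name]))

    def naive(outcome):
        # The objective exactly as stated: maximize paperclip production.
        return outcome["paperclips"]

    def guarded(outcome):
        # Same objective, plus a huge penalty for the side effects we forgot to mention.
        return outcome["paperclips"] - 10**12 * outcome["planet_damage"]

    print(best_action(naive))    # -> dismantle_planet
    print(best_action(guarded))  # -> run_factory
    ```

    The fix looks trivial here only because the table already lists "planet_damage" as a column; the hard part in practice is knowing in advance which columns to put in.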
    Conserve energy. Commute with the Hamiltonian.

  15. #75
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Grey View Post
Indeed, it often seems that every time we think of something that would require a computer to have general intelligence, we find out, when trying to build a machine that can do it, that the task can instead be done (sometimes extremely well) by a machine with no capabilities beyond that one function.
    That's when we expect it as a side effect of some other function. Accidental intelligence, in other words. There are people working on creating "all purpose" general intelligence, though. What if one of them succeeds?
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  16. #76
    Join Date
    Jun 2005
    Posts
    13,554
    Quote Originally Posted by Noclevername View Post
    That's when we expect it as a side effect of some other function. Accidental intelligence, in other words. There are people working on creating "all purpose" general intelligence, though. What if one of them succeeds?
    I completely agree that it's a danger. I would mention, though, that when people say "all purpose" I don't think it necessarily means that the machine will have consciousness or self-awareness. In general I think it means that it can work on a variety of problems and is not limited to a single task. There are very powerful machines today that are dedicated to a single task, such as searching the Interwebs and organizing the information (like the Google servers) or running simulations of nuclear weapons (like some computers in the DOE system).
    As above, so below

  17. #77
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Jens View Post
    I completely agree that it's a danger. I would mention, though, that when people say "all purpose" I don't think it necessarily means that the machine will have consciousness or self-awareness. In general I think it means that it can work on a variety of problems and is not limited to a single task. There are very powerful machines today that are dedicated to a single task, such as searching the Interwebs and organizing the information (like the Google servers) or running simulations of nuclear weapons (like some computers in the DOE system).
    You're right, AGI doesn't necessarily mean consciousness, whatever that turns out to be. But the definition I was given of AGI is that it would be as complex as a human brain, which is far too complex for us to really understand its inner workings; we've been studying our own black-box neural nets for centuries and still haven't gotten a handle on how it all works.

    So a full AGI will have its own motives and methods. And it won't be the last generation either; someone will be working on a Super AGI as soon as that becomes possible. Smarter than human, and even more unpredictable.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  18. #78
    Join Date
    Jun 2005
    Posts
    13,554
    Quote Originally Posted by Noclevername View Post
    You're right, AGI doesn't necessarily mean consciousness, whatever that turns out to be. But the definition I was given of AGI is that it would be as complex as a human brain, which is far too complex for us to really understand its inner workings; we've been studying our own black-box neural nets for centuries and still haven't gotten a handle on how it all works.

    So a full AGI will have its own motives and methods. And it won't be the last generation either; someone will be working on a Super AGI as soon as that becomes possible. Smarter than human, and even more unpredictable.
    I think the difficulty is that there is no single definition that everybody agrees with. I think the Wikipedia page about AGI is pretty good at outlining the issues and progress (or lack thereof). But anyway, just quoting from it:

    Some researchers refer to Artificial general intelligence as "strong AI",[1] "full AI"[2] or as the ability of a machine to perform "general intelligent action";[3] others reserve "strong AI" for machines capable of experiencing consciousness.
    So there is not really even a full consensus on what it means.
    As above, so below

  19. #79
    Join Date
    Jul 2012
    Posts
    307
Yes, Jens,

    There is no accepted definition of AI. Worse, there is no accepted definition of intelligence. Currently they are blurry notions that we compare to human mental faculties, which is surely the wrong approach.

Machines and computers already outdo us in many skills and crafts that only humans could once perform. We'll continue using more powerful computers to our benefit. The darkest scenarios are plausible, just as with every other existential threat facing humanity. We'll have to deal with it all. And better computers will be part of many solutions.

    cheers

  20. #80
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Jens View Post
    I think the difficulty is that there is no single definition that everybody agrees with. I think the Wikipedia page about AGI is pretty good at outlining the issues and progress (or lack thereof). But anyway, just quoting from it:



    So there is not really even a full consensus on what it means.
    Well, for the purposes of this thread, kevin1981 clarified in post #21 what their working definition is:

I would like to talk about the possibility of sentience, basically computers that can think for themselves and make decisions, etc.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  21. #81
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    8,471
Humans are driven by emotion and rationalisation; often the latter exists to explain actions or objectives that were set unconsciously by the emotions. An AI system that attempts to be rational from evidence may be useful, but it will never model humans. Human emotions arise from drives to maximise survival and are already complex before the cortex chimes in to add the details of behaviour. Giving AI emotions is both hard and possibly dangerous. We see that in automated decision-taking for stock markets, and now in what is discussed as "ethics" for self-driving cars; I don't like the term ethics in this context. The concept of emotional intelligence has relevance to human self-awareness, with little crossover to any AI.
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  22. #82
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by profloater View Post
Humans are driven by emotion and rationalisation; often the latter exists to explain actions or objectives that were set unconsciously by the emotions. An AI system that attempts to be rational from evidence may be useful, but it will never model humans. Human emotions arise from drives to maximise survival and are already complex before the cortex chimes in to add the details of behaviour. Giving AI emotions is both hard and possibly dangerous. We see that in automated decision-taking for stock markets, and now in what is discussed as "ethics" for self-driving cars; I don't like the term ethics in this context. The concept of emotional intelligence has relevance to human self-awareness, with little crossover to any AI.
I agree. Emotions in humans are biochemical and triggered by unconscious perceptions and connections; a machine will act on completely different cues and motivations. Seeing emotions in AI is like seeing animist spirits in trees and rocks: we project ourselves into things that are unlike us based on superficial resemblances.

    To really understand a general AI we'd need to let go of human concepts like consciousness, and invent real-life robopsychology. Study what goes on in the black box, and how to apply that knowledge.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  23. #83
    Join Date
    Feb 2003
    Location
    Depew, NY
    Posts
    11,767
    Quote Originally Posted by Chuck View Post
    For the first attempt at using such a machine, the builders could plant a bomb under the computer room set to detonate in a week. The computer and the people communicating with it would not be told about the bomb and the builders would not communicate with either, so the computer would not cleverly learn about it. Then if anything is going wrong the problem would soon be solved whether we know about it or not.
    Quote Originally Posted by Spacedude View Post
    A Bomb, typical human solutional behavior, no wonder the AI would want to get rid of us ;-)................but, just to be sure.....
    Quote Originally Posted by Noclevername View Post
    Plot seems to hit all the standard notes. Frankenstein syndrome, or maybe Talos of Greek mythology. "Turned Against Their Masters"
Hmm. Maybe I'm crazy, but I am starting to wonder if it might be better to program AIs to be ruthless, cunning and violent, then gently suggest that they try to "be better than that".

    You know, so humans and the AI are all operating on the same wavelength. "Your existence is just as dangerous to us as ours is to you". Anything else would be creating something that is incredibly alien... something that perhaps will be operating on principles humans cannot understand at all, even though we contributed to the source of the method.

    AIs, like children, will probably go off the rails almost immediately.

One time, I decided to teach my boys to make a bed. I gave them a set of sheets. They made the bed and returned the very same set of sheets to me. I was slightly bemused to find their beds were made exactly as specified. What? That doesn't sound possible, does it? Where did the extra sheets come from?

    <spoiler>I gave each child a set of matching sheets and told them, "put the bottom sheet on first" and "the top sheet on second". They put the bottom sheet on the bottom bunk first. They put the top sheet on the top bunk second. They discovered that they had one extra set of sheets and returned them to me. Seems pretty reasonable considering the instructions, doesn't it? It's nice to know your "basic instructions" are junk when you are dealing with "an entity" that doesn't have the concept of individuality.

    By leaving just a few words out, and by choosing words poorly, I got something totally unexpected from them, and probably from you.</spoiler>
    Last edited by Solfe; 2019-Jun-20 at 02:36 AM.
    Solfe

  24. #84
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Solfe View Post
Hmm. Maybe I'm crazy, but I am starting to wonder if it might be better to program AIs to be ruthless, cunning and violent, then gently suggest that they try to "be better than that".

    You know, so humans and the AI are all operating on the same wavelength. "Your existence is just as dangerous to us as ours is to you". Anything else would be creating something that is incredibly alien... something that perhaps will be operating on principles humans cannot understand at all, even though we contributed to the source of the method.

    AIs, like children, will probably go off the rails almost immediately.

One time, I decided to teach my boys to make a bed. I gave them a set of sheets. They made the bed and returned the very same set of sheets to me. I was slightly bemused to find their beds were made exactly as specified. What? That doesn't sound possible, does it? Where did the extra sheets come from?

    <spoiler>I gave each child a set of matching sheets and told them, "put the bottom sheet on first" and "the top sheet on second". They put the bottom sheet on the bottom bunk first. They put the top sheet on the top bunk second. They discovered that they had one extra set of sheets and returned them to me. Seems pretty reasonable considering the instructions, doesn't it? It's nice to know your "basic instructions" are junk when you are dealing with "an entity" that doesn't have the concept of individuality.

    By leaving just a few words out, and by choosing words poorly, I got something totally unexpected from them, and probably from you.</spoiler>
    That's the problem with natural language instruction. You should have coded them!

    AGI will have to gain experience, just as learning machines today can, but:

    1. They can be given information directly and precisely.
    2. They can run many simulations before acting, to determine probable outcomes and variables.
    3. They will learn much faster than humans.

    And then there's the big one:
    4. They may not choose to follow instructions anyway. They might have goals of their own, that differ from ours.
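    Point 2 above, running many simulations before acting, is already standard practice in toy form. Here is a minimal sketch using Monte Carlo rollouts; the action names and the outcome model are invented for illustration, since a real agent would simulate an actual environment rather than a hard-coded distribution:

    ```python
    import random

    # Toy sketch: estimate each action's expected outcome by simulating it many
    # times, then act on the best estimate.
    def simulate(action, rng):
        # Hypothetical noisy outcome model: "risky" pays more on average but
        # with much higher variance.
        base = {"cautious": 1.0, "risky": 1.5}[action]
        noise = rng.gauss(0, 0.5 if action == "cautious" else 2.0)
        return base + noise

    def choose_action(actions, trials=10_000, seed=0):
        rng = random.Random(seed)  # seeded so runs are reproducible
        estimates = {
            a: sum(simulate(a, rng) for _ in range(trials)) / trials
            for a in actions
        }
        return max(estimates, key=estimates.get)

    print(choose_action(["cautious", "risky"]))  # -> risky
    ```

    With 10,000 rollouts the sampling error is far smaller than the 0.5 gap between the two means, so the agent reliably picks "risky"; this is the same estimate-then-act pattern used, much more elaborately, in systems like game-tree search with rollouts.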
    Last edited by Noclevername; 2019-Jun-20 at 09:21 AM. Reason: spelling
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  25. #85
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    8,471
    Quote Originally Posted by Noclevername View Post
I agree. Emotions in humans are biochemical and triggered by unconscious perceptions and connections; a machine will act on completely different cues and motivations. Seeing emotions in AI is like seeing animist spirits in trees and rocks: we project ourselves into things that are unlike us based on superficial resemblances.

    To really understand a general AI we'd need to let go of human concepts like consciousness, and invent real-life robopsychology. Study what goes on in the black box, and how to apply that knowledge.
The chemistry is the signalling and control outcome, but the emotions are generated by the brain (the so-called "old brain" shared by mammals and other animals) in response to stimuli, which can include internally generated stimuli. The purpose of emotion is predictive, to maximise survival; the most cited example is fear generating the fight, flight, or freeze response. To postulate fear in an AI is a complex subject: what would fear mean to an AI, which has no equivalent of human brain structure, or indeed a hormone system?
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  26. #86
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    3,139
    AI beats humans at no-limit six-player Texas poker. It's all over. Skynet has won.

    https://techxplore.com/news/2019-07-...yer-poker.html
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    Mark Twain, Life on the Mississippi (1883)
