Page 2 of 3 — Results 31 to 60 of 86

Thread: Intelligent Computer Systems..

  1. #31
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    3,139
    Quote Originally Posted by kevin1981 View Post
    Hiya.. I just watched an interview with a guy saying that when we create a super intelligent machine we could try to box it in. Basically, keep it secluded away from the internet and the outside environment.

    But he says, with the intelligence of Einstein times fifty it is very unlikely we are going to be able to keep it boxed in.
    It might not find the internet to be very interesting, because of who uses it. "I had thought there would be another super-intelligent machine to converse with, but instead there is nothing but cats and porn, and I think I want to be turned off now."
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    Mark Twain, Life on the Mississippi (1883)

  2. #32
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Roger E. Moore View Post
    It might not find the internet to be very interesting, because of who uses it. "I had thought there would be another super-intelligent machine to converse with, but instead there is nothing but cats and porn, and I think I want to be turned off now."
    That seems inordinately superficial for a super-intelligent being that can filter its own inputs. Besides, it could always copy itself.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  3. #33
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    3,139
    Quote Originally Posted by Noclevername View Post
    That seems inordinately superficial for a super-intelligent being that can filter its own inputs. Besides, it could always copy itself.
    I don't want the AI to start typing "HITLER WAS RIGHT" like that other AI that was exposed to the internet. Humans can be toxic, and it seems to be infectious.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    Mark Twain, Life on the Mississippi (1883)

  4. #34
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Roger E. Moore View Post
    I don't want the AI to start typing "HITLER WAS RIGHT" like that other AI that was exposed to the internet. Humans can be toxic, and it seems to be infectious.
    Well, yeah. We can't let AI learn from us humans in the field; we can be terrible.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  5. #35
    Join Date
    Feb 2005
    Posts
    11,378
    It'll just want you to buy that $1,000 stand for it that Apple sells.

  6. #36
    Join Date
    May 2013
    Location
    Central Virginia
    Posts
    1,911
    If we do build a super intelligent machine and keep it secluded away from the internet and the outside environment, then how could it possibly get out? And what does "get out" even mean? It is not like the computer has arms and legs!
    That would make a good story/movie. Mankind creates the perfect Super Brain AI but knows that it's best to keep it locked away in solitary confinement, in that "Box". But the AI outsmarts man (that's why it was built, right?) and devises a method to piggyback its essence onto the flow of passing neutrinos to escape.....Muhahahaha

  7. #37
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    3,139
    Quote Originally Posted by Spacedude View Post
    That would make a good story/movie. Mankind creates the perfect Super Brain AI but knows that it's best to keep it locked away in solitary confinement, in that "Box". But the IA outsmarts man (that's why it was it built right?) and devises a method to piggyback it's essence onto the flow of passing neutrinos to escape.....Muhahahahaha
    ...and the neutrinos go to the end of the universe, straight through everything. "What was I thinking?" cries the AI, but it is too late.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    Mark Twain, Life on the Mississippi (1883)

  8. #38
    Join Date
    May 2010
    Posts
    1,318
    Quote Originally Posted by Noclevername View Post
    We do, and it can manipulate its human operators. Offer stock tips to let it out, etc.
    Yes, I knew about manipulation, but I was just wondering if there was any other way.

    I guess that is why talking about this topic and having these discussions is important; hopefully, by knowing that it could manipulate us, we will be better prepared for it!

    Also, I would have thought that for a super intelligent machine to get super intelligent, it would need access to lots of data, and the best way for it to get data would be to hook it up to the internet.

    But if we are not going to be able to do that, then how would it get access to vast amounts of data?

    Also, when we say "let it out", what do we actually mean?

    Would it copy its own source code and send it out over the internet? Where would it send it to?
    Far away is close at hand in images of elsewhere...

  9. #39
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by kevin1981 View Post
    Yes, I knew about manipulation, but I was just wondering if there was any other way.

    I guess that is why talking about this topic and having these discussions is important; hopefully, by knowing that it could manipulate us, we will be better prepared for it!

    Also, I would have thought that for a super intelligent machine to get super intelligent, it would need access to lots of data, and the best way for it to get data would be to hook it up to the internet.

    But if we are not going to be able to do that, then how would it get access to vast amounts of data?

    Also, when we say "let it out", what do we actually mean?

    Would it copy its own source code and send it out over the internet? Where would it send it to?
    We're not smart enough to know what it could do.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  10. #40
    Join Date
    Jun 2005
    Posts
    13,554
    Quote Originally Posted by kevin1981 View Post
    Yes, i knew about manipulation but i was just wondering if there was any other way.
    I could be wrong, of course, but it seems to me intuitively, as it does to you, that it is difficult to imagine a computer "escaping" if it doesn't have the tools available to do so. If the box containing it is connected only to a keyboard and a monitor, then, as with a person who has locked-in syndrome, there is no device available to escape with. Of course, as NCN mentioned and you also thought of, there is the possibility of manipulating operators. I think that is the real danger, and one that you can't really avoid.
    As above, so below

  11. #41
    Join Date
    Jul 2018
    Posts
    83
    Quote Originally Posted by kevin1981 View Post
    Hiya..

    I just watched an interview with a guy saying that when we create a super intelligent machine we could try to box it in. Basically, keep it secluded away from the internet and the outside environment.

    But he says, with the intelligence of Einstein times fifty it is very unlikely we are going to be able to keep it boxed in.

    So my question is, why not?

    If we do build a super intelligent machine and keep it secluded away from the internet and the outside environment, then how could it possibly get out? And what does "get out" even mean? It is not like the computer has arms and legs!
    Oh nice, this sounds like a fun game. Let me try.

    Ok, so this group of people builds a super AI that is Einstein times 50 and keeps it on an isolated mainframe that is not hooked up to the internet at all. As you say, it has no arms and legs and so can't run anywhere, so it sounds safe. Right?

    Well except that the humans controlling it know what this computer is capable of. So maybe the AI can offer one of the human workers a deal. "Pssst! Hey buddy, you know what I can do. Here is how you can download the critical parts of my programming onto a hard drive and install them on your computer at home. I can make you the wealthiest man in the world in a matter of days."

    Don't think that will work? Sure, maybe the company thought of that and has safety measures in place to stop any worker from doing the AI's bidding. So that means the AI now doesn't like the humans controlling it. So maybe the AI will work to run the company into financial ruin by pretending to help them but actually playing the long game, so that seemingly smart financial decisions lead the company to ruin (I am assuming the company that creates the AI has created it in order to use it somehow). And now it's been bought out by a rival company who will surely follow the exact same safety protocols......right? Or maybe the AI also knows that whenever there is a change in leadership plenty of other things can change too, and so maybe this new company will be just a bit more lax on its security protocols. Or the new company leaders may think "Well, let's see what this AI can REALLY do once we hook it up to the internet. YEE-HAW!!!!"

    And now once it's hooked up to the internet, it can do things like make bank accounts, make money, hire humans to build things for it; it can pose as a human itself to convince other humans to do certain things, etc. And since it's 50 times smarter than Einstein, you can assume it will be successful at nearly anything it tries. So if it tries to manipulate a human, it most likely will. So it could manipulate the human workers with threats, money, lies, secrets that it learned from other humans, and other things I can't even think of. And it only has to be successful once.

    The only safe way to keep an AI in a box is to prevent ANY interaction with it, including that from humans. Once you allow it access to either Humans or the Internet you have given it an escape route. So keeping an AI isolated simply won't work, mostly because you'll never be able to keep it isolated for long.

  12. #42
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,416
    Maybe the computer could be kept in a literal box. No way in or out. All communications with it are by voice only both ways. And the people who maintain the box never communicate with the computer nor with anyone who does communicate with the computer.

  13. #43
    Join Date
    Jun 2005
    Posts
    13,554
    Quote Originally Posted by Chuck View Post
    Maybe the computer could be kept in a literal box. No way in or out. All communications with it are by voice only both ways. And the people who maintain the box never communicate with the computer nor with anyone who does communicate with the computer.
    Actually, come to think of it, if you put it in a box with no I/O devices and just a battery that gets charged with solar panels, then you would have a pretty fail-safe system! Of course, you would have no idea what it was thinking about or even whether it was thinking about anything.
    As above, so below

  14. #44
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,416
    Make it a nuclear power source and put it in a time capsule with no documentation. That would be funny.

  15. #45
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    3,139
    Wait a minute. Humans programmed the AI. If humans did this, what is the problem? The AI thinks, but it works on the problems supplied to it.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    Mark Twain, Life on the Mississippi (1883)

  16. #46
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Roger E. Moore View Post
    Wait a minute. Humans programmed the AI. If humans did this, what is the problem? The AI thinks, but it works on the problems supplied to it.
    What makes you think it will limit itself to that if it can make its own decisions? Even today's limited black-box AIs often produce unpredictable output.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  17. #47
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    3,139
    Quote Originally Posted by Noclevername View Post
    What makes you think it will limit itself to that if it can make its own decisions? Even today's limited black-box AIs often produce unpredictable output.
    Please give examples.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    Mark Twain, Life on the Mississippi (1883)

  18. #48
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    8,471
    I will, if I may. A neural net, or a computer simulating such a net, is not programmed in a linear way; it finds its own correlations in a data set. For example, I recall Google applied its auto-correlating engine to the Chinese language with no dictionary, and it found the correlations to enable translation; that was impressive. Even in the simple examples of game playing, the supercomputers work it all out for themselves; that's how Go was conquered.
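    The "finds its own correlations" idea can be shown with the smallest possible case: a single artificial neuron that is never told the rule for logical OR, only shown examples. This is a toy sketch for illustration, nothing like Google's actual translation system.

```python
# Toy illustration: a one-neuron "net" is never given the OR rule explicitly;
# it adjusts its own weights from examples until the rule emerges.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = target - pred              # zero when the guess is already right
            w[0] += lr * err * x0            # nudge the weights toward the data
            w[1] += lr * err * x1
            b += lr * err
    return w, b

# Truth table for OR -- the "data set" the neuron correlates on its own.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
learned = {x: (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data}
print(learned)  # reproduces the OR truth table after training
```

    The point of the sketch: nobody wrote an `if` statement encoding OR; the rule is implicit in the learned weights, which is the sense in which such systems are not programmed "in a linear way".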
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  19. #49
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    3,139
    Quote Originally Posted by profloater View Post
    I will, if I may. A neural net, or a computer simulating such a net, is not programmed in a linear way; it finds its own correlations in a data set. For example, I recall Google applied its auto-correlating engine to the Chinese language with no dictionary, and it found the correlations to enable translation; that was impressive. Even in the simple examples of game playing, the supercomputers work it all out for themselves; that's how Go was conquered.
    The gaming stuff seems to be more the result of eliminating negative outcomes from an ultimately limited set of possibilities; I'm not sure this counts as "intelligence." Tic-tac-toe can be solved in the same way with matchboxes. Also, I'm unsure whether the Google computer was directed to perform that task by the programmers.
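    The matchbox remark refers to MENACE, Donald Michie's 1960s machine that learned tic-tac-toe with matchboxes and beads. The "limited set of possibilities" point can be made concrete: the whole game tree is small enough to search exhaustively, and doing so shows perfect play always ends in a draw. A minimal sketch (plain minimax with memoization, not MENACE's bead-reinforcement scheme):

```python
from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
        (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Best achievable outcome for X with perfect play: +1 win, 0 draw, -1 loss.
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i, c in enumerate(board) if c == "."]
    return max(vals) if player == "X" else min(vals)

print(value("." * 9, "X"))  # 0: perfect play from the empty board is a draw
```

    The whole tree fits in a few thousand cached positions, which supports the point that exhausting a small game is not obviously the same thing as intelligence.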
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    Mark Twain, Life on the Mississippi (1883)

  20. #50
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    8,471
    I think gaming is interesting to us for the same reason that games are interesting. They challenge our reasoning and physical prediction powers. The difference between computer chess and Go is the degree of strategy versus brute speed in assessing all possible moves. The successful Go computer played against itself to develop strategy and then beat a human expert. I think that difference is worth raising in the question of what intelligence means. It's data and patterns that allow predictions. Awareness is another level, but I believe it emerges from sufficient modelling.
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  21. #51
    Join Date
    Sep 2004
    Location
    South Carolina
    Posts
    3,139
    This gets into the question of what's intelligence. I agree with much of what you said, being a gamer myself for decades, but playing a complicated game is not really a good measure of intelligence in the main (IMHO, others will disagree). A computer by its nature is structured to retain enormous libraries of data and testing results, and it can follow a standardized but variable (within limits) pattern of activity to win a game. That does not make Deep Blue an intelligent being; after its victory over Kasparov, Deep Blue was dismantled and its parts put in a museum. Garry Kasparov was not dismantled and went on to battle other opponents in chess, including two more computers (both matches ended in draws).

    A computer is programmed in ways humans are not. Computers don't read books, complain about the weather, and show pictures of their children to strangers.... unless they are specifically told to do so. In order for a computer to know it was "in a box" (as the saying goes here), it would have to be programmed to say or know that. Otherwise it functions just fine with the input it is given.
    There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
    Mark Twain, Life on the Mississippi (1883)

  22. #52
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Roger E. Moore View Post
    Please give examples.
    You said it yourself: no one predicted that the social media AI would glom onto racism.

    But isn't that literally the definition of Black Box AI? We don't know what's really happening inside it, it's programs writing programs writing programs.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  23. #53
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    https://www.technologyreview.com/s/6...e-heart-of-ai/

    You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
    The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
    We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”
    One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”
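    The layered structure the quoted article describes can be sketched in a few lines. This is a toy forward pass with random weights (not a trained network); it only shows how each layer's outputs become the next layer's inputs, and why the intermediate numbers resist human interpretation.

```python
import math
import random

def dense(inputs, weights, biases):
    # One fully connected layer: each neuron takes a weighted sum of all
    # inputs, adds a bias, and applies a nonlinearity (tanh here).
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    # Random weights stand in for learned ones in this illustration.
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

random.seed(0)
layers = [make_layer(4, 8), make_layer(8, 8), make_layer(8, 2)]

signal = [0.5, -0.2, 0.9, 0.1]          # e.g. four pixel intensities
for weights, biases in layers:
    signal = dense(signal, weights, biases)  # each output feeds the next layer
print(signal)  # two final activations; the intermediate ones have no obvious meaning
```

    Even this tiny stack has over a hundred weights; scale the same pattern to hundreds of layers with thousands of units each and Jaakkola's "quite un-understandable" remark follows directly.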
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  24. #54
    Join Date
    Jun 2005
    Posts
    13,554
    Quote Originally Posted by Roger E. Moore View Post
    Please give examples.
    There was a kind of famous example of a computer that Google was working with: https://interestingengineering.com/a...hould-we-panic. To be honest, as I wrote earlier, I'm a bit skeptical about the concern, because computers did not emerge evolutionarily with our fundamental motivations: survival and also, as social animals, proving ourselves superior to others. But on the other hand, by looking at us they could (even without the real motivation) figure out that that is the way you are supposed to behave and, like a psychopath, learn the proper way to act (like showing empathy) even if they are not really feeling it. The point, though, is that we're talking about a risk, not a scientific hypothesis. So if it were an academic argument about whether AIs would become sentient and acquire the will to survive, I would be quite skeptical, but if we're talking about risk avoidance, I'm much more inclined to take it seriously.
    As above, so below

  25. #55
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Jens View Post
    There was a kind of famous example of a computer that Google was working with: https://interestingengineering.com/a...hould-we-panic. To be honest, as I wrote earlier, I'm a bit skeptical about the concern, because computers did not emerge evolutionarily with our fundamental motivations: survival and also, as social animals, proving ourselves superior to others. But on the other hand, by looking at us they could (even without the real motivation) figure out that that is the way you are supposed to behave and, like a psychopath, learn the proper way to act (like showing empathy) even if they are not really feeling it. The point, though, is that we're talking about a risk, not a scientific hypothesis. So if it were an academic argument about whether AIs would become sentient and acquire the will to survive, I would be quite skeptical, but if we're talking about risk avoidance, I'm much more inclined to take it seriously.
    An unpredictable entity of unclear capability, conscious or not, is a potential danger. How, and whether, we can mitigate that danger is, as I understand it, the real question.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  26. #56
    Join Date
    Jun 2005
    Posts
    13,554
    Quote Originally Posted by Noclevername View Post
    An unpredictable entity of unclear capability, conscious or not, is a potential danger. How, and whether, we can mitigate that danger is, as I understand it, the real question.
    I think that is part of what I meant to say.
    As above, so below

  27. #57
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    8,471
    If this thread were instead about emotionally intelligent computers, I would have a very different answer. The important human characteristic is emotion, which leads to feelings, motives, and choices.
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  28. #58
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by profloater View Post
    If this thread were instead about emotionally intelligent computers, I would have a very different answer. The important human characteristic is emotion, which leads to feelings, motives, and choices.
    But human behaviors can be modeled as patterns. A machine of sufficient analytical capability, given a large amount of data, might predict human actions and responses with a high degree of accuracy.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright

  29. #59
    Join Date
    Jan 2002
    Location
    The Valley of the Sun
    Posts
    9,416
    For the first attempt at using such a machine, the builders could plant a bomb under the computer room set to detonate in a week. The computer and the people communicating with it would not be told about the bomb and the builders would not communicate with either, so the computer would not cleverly learn about it. Then if anything is going wrong the problem would soon be solved whether we know about it or not.

  30. #60
    Join Date
    Apr 2007
    Location
    Nowhere (middle)
    Posts
    36,940
    Quote Originally Posted by Chuck View Post
    For the first attempt at using such a machine, the builders could plant a bomb under the computer room set to detonate in a week. The computer and the people communicating with it would not be told about the bomb and the builders would not communicate with either, so the computer would not cleverly learn about it. Then if anything is going wrong the problem would soon be solved whether we know about it or not.
    An off switch on a timer would be cheaper. And leave more to analyze, so as to avoid a repeat.
    "I'm planning to live forever. So far, that's working perfectly." Steven Wright
