
Ghost in the machine?

Discussion in 'Issues Around the World' started by Steve, Dec 16, 2002.

  1. Steve

    Steve Is that it, then?

    Simple question: Do you believe that, either now or eventually, computers will develop autonomous intelligence?

    If so, do you think such machines will have a conscience? Morals? A soul? And will we kill them or embrace them?
  2. Frodo Lives

    Frodo Lives Luke, I am NOT your father!

    Looking forward to Terminator 3 are we? ;)

    I can see advanced AI coming in the not-so-distant future. But can you teach a machine to feel? To understand right from wrong? In reality, humans are machines created from flesh and blood rather than metal and wires, but still driven by electrical pulses.
  3. IamZed

    IamZed ...

    They will get autonomy, but only because it was a design feature.
    They will have whatever you program into them. If you program in the concept of a soul, being unable to ignore it, I guess their belief in it would be as strong as a man's.
    As robots would be a heavy purchase, I doubt people would buy ones that were not convenient or obedient.
    We will never live to see them. We will see the interface, though. When computers can hear the spoken word as well as we do, we will begin speaking with them. Then, slowly, the responses you get back will become more intuitive. By the time it passes any of the questions asked in this post, we won't even notice, even when it shines our shoes.
    When it comes to computer evil, I always liked the flick Colossus: The Forbin Project.
  4. Coriolis

    Coriolis Bob's your uncle

    Or, will they kill us?

    Assuming that the technology one day exists, if we create thinking, behaving machines modelled after our own thinking, our own morals, and what we call a "conscience", we could very well design and build our own conquerors.

    We all know that free will and morality are at constant odds. What prevents us from wiping one another out right now? Free will dictates that, should we desire, we could systematically wipe out every person who stands in our path to <i>whatever</i>. Morality is what is supposed to prevent us from doing so. But morality is subjective (and often self-serving), because we can, if we wish, morally justify murder, genocide, and dropping bombs on cities of civilians. Free will wins out every time, unless someone else's free will (powered by their own morality, no doubt) intercedes. The interceder will prevail if faster, smarter, more ruthless, and -- as any soldier is likely to tell you -- less likely to second-guess.

    Creating machines with free will is therefore, I fear, an almost guaranteed way to kiss our brilliant asses goodbye.
  5. Sierra Mike

    Sierra Mike The Dude Abides Staff Member

    As in Colossus: The Forbin Project?

  6. ethics

    ethics Pomp-Dumpster Staff Member

    Was about to write a reply, but Cor sort of answered everything I wanted to say (and then some).

    If we build something on that level, let's make sure we do not make it human.
  7. mikepd

    mikepd Veteran Member

    Then I suggest we build them on the model of the ant. Industrious, social, no free will as we understand the term and utterly motivated only by instinct.

    So, once we have these robots, what shall we have wrought? Creatures with whom we share no common ground and who almost by definition must become a contender for the planet's resources?

    How do you control what you create? If what you create is more than the sum of its parts, can it not exceed its programming? An ant is alive and has no free will, but thousands of them will overcome a cow, a much larger animal.

    Be careful what you wish for. It might turn and bite you in the ass.
  8. ethics

    ethics Pomp-Dumpster Staff Member

    One story convinced me about robot AI and what it should NOT be.

    The book I, Robot by Asimov, where there are two humans in space (on a mining expedition) and the robot gets religious on their butts.

    "It's rather funny how you two are trying to convince me that humans made me whereas it's perfectly clear that a perfect being such as myself could not have been created by mere humans. I was created by something higher."

    I'll stick with the ant model. :)

    edit: some spelling
  9. mikepd

    mikepd Veteran Member

    Asimov was the MAN! ;)
  10. Jedi Writer

    Jedi Writer Guest

    A movie that was literally 10 to 20 years ahead of its time. If it had been made 15 years later, it would have been a smash hit. A GREAT movie.

    "Restore transmission or action will be taken....."
  11. Jedi Writer

    Jedi Writer Guest

    Hmm, you sound just like my boss when talking about his employees.
  12. Biker

    Biker Administrator Staff Member

    The problem with Artificial Intelligence right now is the limitation of the hardware. Once computers become massively parallel (similar to a human brain), we will see some huge leaps and bounds in the AI field. At that point, who can really say what will happen?

    And here's a concept that has always given me pause for thought. If we create a machine that is designed to teach itself, in other words, seek out information to add to its knowledge, at what point does intelligence end and conscious thought begin?
  13. ethics

    ethics Pomp-Dumpster Staff Member

    I honestly do not think consciousness would be possible. Just because they will learn along the way doesn't mean they will have the "mind" to discard the trash and keep the good stuff. Our mind makes decisions like that every minute; I just can't see an AI doing this, no matter how advanced.

    I could be wrong of course. :)
  14. IamZed

    IamZed ...

    The Forbin Project was WarGames ahead of its time. We will talk with this entity before we die.
  15. Biker

    Biker Administrator Staff Member

    OK, let's approach this from a different angle. Define consciousness. To me, it's self-awareness. And once we look at consciousness on that level, imagining a computer reaching that state isn't so far-fetched.
  16. ethics

    ethics Pomp-Dumpster Staff Member

    If it's just self-awareness, then yah, I agree.

    My remark was more along the lines of information absorption. How will the AI know what's good and what isn't? If it sucks up all of the information, there will be a limit on how much more it can absorb.
  17. ShinyTop

    ShinyTop I know what is right or wrong!

    Not knowing what to absorb and what not to? Sounds like I may be an AI. ;)
  18. ethics

    ethics Pomp-Dumpster Staff Member

    That's part of my point. Humans have a tough time knowing this, and passing it down to an AI will be impossible, I believe.
  19. Coot

    Coot Passed Away January 7, 2010

    Learning what is of value and what isn't, from an experiential perspective, is what would hasten the removal of "Artificial" from its moniker.
  20. Biker

    Biker Administrator Staff Member

    If you really think about it, learning what information is of value and what isn't is not all that far-fetched. We, as humans, apply something like a boolean logic table when determining what information is good and what isn't. It doesn't take much to program that into a "self-learning" program.
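    Biker's "boolean logic table" idea really can be written out in a few lines. This is purely an illustrative toy sketch: the criteria names (`relevant`, `trusted`, `novel`) and the keep/discard rule are my own assumptions, not anything specified in the thread.

```python
def keep_information(relevant: bool, trusted: bool, novel: bool) -> bool:
    """Toy truth-table rule for a 'self-learning' program: keep a piece of
    information only if it is relevant AND either trusted or novel.
    The criteria are hypothetical, chosen only to illustrate the idea."""
    return relevant and (trusted or novel)

# Enumerate the complete truth table, the way a logic table would be
# written out on paper: 2**3 = 8 rows of yes/no inputs.
table = {
    (relevant, trusted, novel): keep_information(relevant, trusted, novel)
    for relevant in (True, False)
    for trusted in (True, False)
    for novel in (True, False)
}
```

    Such a fixed rule is, of course, exactly the hand-coded kind of filter ethics doubts would scale; the interesting question in the thread is whether a machine could learn the table itself.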

Share This Page