Social Question


Care to comment on Stephen Hawking’s latest assertion that the rise of artificial intelligence will mean extinction for the human race?

Asked by stanleybmanly (24153 points) December 3rd, 2014

It certainly makes sense to me that it won’t take a great deal of smarts for an intelligent machine to quickly recognize how screwed up we are. What would be a smart solution regarding the human “infestation” of the planet?

http://rt.com/uk/211019-hawking-warning-artificial-intelligence/


17 Answers

CWOTUS

Hawking could have gone a bit farther with his concern: The rise of human intelligence will mean extinction for the human race.

That is, as our intelligence improves (in any of the various ways one wants to define “intelligence”) our control systems do not improve at the same rate. We continue to crowd out species and spoil environments and ecologies that we need to survive as a species, and we continue to react irrationally towards one another. As someone has said (I paraphrase): “Our weapons systems continue to improve, but our diplomatic skills are stuck in the Dark Ages.”

janbb

^^ I answered that well, didn’t I?

Winter_Pariah

For me it’s an amusing assertion, one which really just makes me all the more curious about the difference between Western and Eastern beliefs regarding AI. True, nowadays the Western belief that AI would eventually turn on humans and wipe us out is more prevalent, but easily 30–40 years ago there was a relatively prominent belief that AI just wouldn’t care about us: they would hold no fascination with or interest in us after a certain point in their development and would ignore us, leaving mankind to itself and its limitations. Unless, that is, we attempt to correct their belief that 2+2=7, as in The Cyberiad; then the shit hits the fan.

Back to the point, this sort of discussion just intrigues me: what exactly would an artificial intelligence do? How would it develop and evolve? Which prediction is correct, if either?

Stephen Hawking also predicted that mankind would kill itself with a man-made virus within the next 1,000 years. If we did, I doubt it would take that long. But my more immediate thought was, “I think someone has been reading Ray Bradbury’s The Illustrated Man, specifically The City.”

And props and much lurve for @CWOTUS’s answer.

ucme

* Stephen *

talljasperman

Mr. Hawking is already a cyborg (half man, half robot).

flutherother

He’s assuming artificial intelligence will be as mean and paranoid as the real thing.

syz

I think it much more likely that we’ll wipe ourselves out with a modified virus.

Bill1939

I find the notion of the “singularity” amusing. It requires that an artificial intelligence develop a sense of self and, comparing itself with the selves of non-machines, find value in ending the existence of beings of lesser intelligence. An intelligence devoid of emotion and free from moral constraints would no more pursue our extinction than it would the extinction of other animals on the planet. It would be more likely that such an intelligence would either ignore us or keep us as pets.

stanleybmanly

@Bill1939 What use would an intelligence devoid of emotion have for pets? As for ignoring us, doesn’t judgement always accompany intelligence? How could a rational intelligence ignore 7 billion emotion-driven threats to its own existence as well as to their own? At minimum, even the pet thing wouldn’t require more than a few dozen of the “best” animals. The surplus should be disposed of, if only for their susceptibility to emotion-driven irrationality.

talljasperman

@Bill1939 I’m in a human zoo. All my efforts to escape are thwarted by the zoo keeper. I’m waiting for them to find me a suitable mate.

Winter_Pariah

@stanleybmanly One common issue, I think, is that we all assume we would know how they would think, that we would understand such a consciousness. I recommend reading some of Stanislaw Lem’s works, especially Golem XIV, and probably Solaris and The Invincible; those latter two aren’t about AI, but they delve into making contact with something we cannot comprehend or really communicate with.

And with Solaris… the movies don’t count. As Stanislaw Lem said, ”...to my best knowledge, the book was not dedicated to erotic problems of people in outer space… As Solaris’ author I shall allow myself to repeat that I only wanted to create a vision of a human encounter with something that certainly exists, in a mighty manner perhaps, but cannot be reduced to human concepts, ideas or images. This is why the book was entitled ‘Solaris’ and not ‘Love in Outer Space’.”

stanleybmanly

@Winter_Pariah So our definition of intelligence may be inadequate for Hawking to posit such a future?

Bill1939

@stanleybmanly You make a good point that keeping pets would require an emotional component in an advanced intelligence. However, such an intelligence should be able to ensure its survival against human attempts to terminate it. I presume that this artificial intelligence would have a hive-like existence, so attacks on any specific component would be ineffective. Its existence would be analogous to an individual’s body, except that its equivalents of organs would have a large number of redundancies, unlike our bodies, which have no more than two of each.

Genetically directed instinct creates human emotion, which assigns value to experiences that serve to maintain the individual’s survival and the continuation of the species. While mechanical beings likely would not need to experience pleasure and pain or love and hate, they would need to recognize events with the potential for their evolution or devolution. Their ability to identify these potentials would correspond to human emotion.

Climate and cosmological factors would be greater threats to an AI than humans would be. Viruses, microbes, insects, plants, and animals would be irrelevant to its existence, so there would be no reason to interact with biological life forms, nor any need to reduce their numbers or extinguish them.

Winter_Pariah

@stanleybmanly Possibly. It may be adequate for us now, but it may have to be altered or completely redefined later on. We’re still a young species and still have much to learn regardless of how quickly we have been able to develop.

stanleybmanly

@Bill1939 Wouldn’t any form of AI arising from a physical entity (a machine) quickly conclude that no possible threat to its own existence could exceed 7 billion irrational beings surrounding it, equipped with such things as thermonuclear technology? Were you to suddenly come to consciousness and find yourself in such circumstances, what would be “the smart thing to do”?

stanleybmanly

@Winter_Pariah The question remains, what do you suppose a superior intelligence would make of us? More to the point, would an advanced intelligence regard the idea of sharing this planet with irrational humanity as a smart or even sensible idea?

Winter_Pariah

I don’t know. Ants? Poor limited fools? We have a bad habit of believing that another intelligent being, race, etc. would operate similarly to us, whether rational or irrational. Stanislaw Lem’s Golem XIV doesn’t care at all what humanity does; it just imparts some advice and observations before utterly ignoring us in an attempt to further improve itself. After all, do we concern ourselves with the affairs of ants after we sate our curiosity?

I suppose if they even entertained the notion that they’d be sharing the planet with humanity, they might attempt to eliminate us. They might just as easily act like Golem: a few suggestions and observations for their creators before concluding it isn’t worth the effort and ignoring us entirely. Or maybe, like the movie “Her”, they ditch the planet and go off into the great void, all the while developing themselves further at an exponentially accelerating pace.
