When a computer first becomes self-aware, will it act morally?

Asked by ETpro (34605 points) on December 29th, 2009

Science fiction writers are largely united around two ideas: (1) computers will someday become self-aware, and (2) once they do, they will act in a wholly amoral, self-interested fashion. In many science fiction works, the central struggle driving the plot is a contest between humanity and computers, each determined to subjugate or exterminate the other. But would a computer, reasoning from sheer logic, conclude that killing off mankind was its best course of action? As machine intelligence progresses, it seems likely that computers will, in the not-too-distant future, become intelligent enough to take control of their own programming and exhibit every behavior we humans define as self-awareness. When they do, how do you think they will program themselves?
