General Question

PhiNotPi's avatar

How can you determine if a computer is sentient?

Asked by PhiNotPi (12681 points) July 21st, 2011

Say you had a very powerful computer hooked up to a complicated robot, with the fanciest AI program you can imagine installed on it. How can you determine that the robot is sentient/self aware?

Assume that the robot has no pre-programmed way to communicate with humans. It does not know how to speak, write or sign any language intuitively.


37 Answers

poisonedantidote's avatar

Try to destroy it and see if it defends itself.

Play shitty music and see if it leaves.

funkdaddy's avatar

Science fiction has taught me that if it’s self aware it will immediately attempt to destroy all mankind.

So that would be a giveaway. ~

More likely, with no form of communication, you’d look for behavior that suggested exploring itself rather than the world around it. That could be physical (checking out how it’s made) or testing its limits. Trying to develop a means to communicate would also be a strong hint, if it wasn’t specifically told how to go about that.

PhiNotPi's avatar

@poisonedantidote What someone calls good music is completely subjective and dependent on culture, and we don’t know what the robot likes.

Cruiser's avatar

Threaten it with a fruit cake and see if it screams!

marinelife's avatar

It would be hard to imagine a sentient being that had no language. How would it frame its thoughts?

“The most well known test to see if an AI is actually sentient is the test devised by Alan Turing, a.k.a. the Turing Test. This test is based on a simple assumption. An AI can be called sentient if it is able to converse with a human and the human cannot tell whether he is talking to an AI or another human.”

“My definition of sentient intelligence is:
The ability to perform one or more different action(s) without interference from a third party or any form of determined stimuli.

In short, sentience is the ability to do something for no reason at all which makes the random concept the perfect way to measure it.”

AI Depot
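
For concreteness, the protocol in that first quote can be sketched in a few lines of Python. This is purely illustrative; the judge and both players are hypothetical stand-in functions, not a real implementation:

    import random

    # Toy imitation game: the judge converses with a hidden partner
    # and must then guess whether it was the AI or the human.
    def imitation_game(judge_ask, judge_guess, ai_reply, human_reply, rounds=5):
        label, reply = random.choice([("ai", ai_reply), ("human", human_reply)])
        transcript = []
        for _ in range(rounds):
            question = judge_ask(transcript)          # judge asks, seeing the history so far
            transcript.append((question, reply(question)))
        return judge_guess(transcript) == label       # True: the judge told them apart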

poisonedantidote's avatar

@PhiNotPi Music is indeed subjective, but some of it is so bad it is objectively offensive to the universe. jk

Actually, thinking on it further, putting it in danger would prove nothing; it could just be programmed to defend itself.

You are going to have to test its reactions extensively, as well as its ability to learn and improvise, and even then you could only assume that it has the potential to be sentient. I don’t think you could devise a test to prove it. Not with the given parameters.

filmfann's avatar

I keep trying to defrag my hard drive, but it just wants to be fragged.

I suppose if it began to presuppose what sites I was going to look at, that might be an indication. If I opened my HotLinx folder, and it knew to go to Fluther, or if I opened my Media folder, and it automatically opened AOL.

ragingloli's avatar

I do not agree with the test that tries to make AI indistinguishable from humans, because it presumes that sentience can only take the form of human sentience. What if the sentient being just does not converse the way a human does? That is not limited to AI; it applies to other actual lifeforms too, like dolphins, elephants, apes and squid. What about visiting extraterrestrial beings? All of them would likely fail the Turing test. Would that make them not sentient? The test is way too narrow.

I would maybe test it like this:
– Does the entity have the ability to solve problems, using objects in the environment to do so? For example, stacking boxes on top of each other to reach a high object (apes can do it; see the sketch below).
Make sure that the test subject has an actual motivation to do it. If it does not have a motivation, such a test is useless. For example, cats. They just do not give a crap.
– Does it have a sense of self? Does it express it in any way? Not just by language, but by behaviour. Does it keep property? Does it recognise itself in the mirror?

With all those tests, you can not have a totally rigid scoring system, à la “fail this, therefore not sentient”. You have to be fluid and tolerant in your judgement.
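
The box-stacking item above can be made concrete with a toy planner; the point of the test is that the agent must derive a plan like this itself, rather than replay a canned answer. A minimal Python sketch, with all the numbers invented:

    # Toy box-stacking test: given box heights, the agent's reach, and the
    # height of the target, find a stack that brings the target within reach.
    def plan_stack(box_heights, reach, target_height):
        stack, height = [], 0
        for h in sorted(box_heights, reverse=True):   # greedy: tallest boxes first
            if height + reach >= target_height:
                break
            stack.append(h)
            height += h
        return stack if height + reach >= target_height else None

    print(plan_stack(box_heights=[30, 20, 50], reach=180, target_height=260))
    # -> [50, 30]: stack the 50cm and 30cm boxes, and the 260cm target is reachable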

To quote Picard, when arguing for the sentience of Data: “Seems reasonably self aware to me.”

PhiNotPi's avatar

@filmfann That can be programmed simply by using statistics. There has to be more: acting on its own, without always being completely predictable.
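
A toy version of the purely statistical predictor meant here, using @filmfann’s folder example (the names are invented):

    from collections import Counter, defaultdict

    history = defaultdict(Counter)          # folder -> how often each site followed it

    def record(folder, site):
        history[folder][site] += 1

    def predict(folder):
        # Guess the site most often opened after this folder; no understanding involved.
        counts = history[folder]
        return counts.most_common(1)[0][0] if counts else None

    record("HotLinx", "fluther.com")
    record("HotLinx", "fluther.com")
    record("Media", "aol.com")
    print(predict("HotLinx"))               # -> fluther.com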

filmfann's avatar

@PhiNotPi But if it wasn’t programmed to do that, and it does it on its own, wouldn’t that be an indication?

PhiNotPi's avatar

@filmfann I guess it could be, as long as you do other tests besides that.

wundayatta's avatar

I think that sentience means being “self-programming.” That is, you can change your own programming, to some degree, in response to the environment. The second aspect of sentience is communication. You need to be able to develop some way of symbolic transfer of information that shows an understanding of what the other party is trying to say, and an ability to transform what you want to say so that the person you are communicating with can understand it. I.e., you need a fluid form of error checking that can be adapted, on the fly, to various situations. Which is a lot like self-programming.

My second criterion is a lot like the Turing test. The first could probably be verified simply by monitoring the programming of the computer, although that might be impossible, since such programs would surely run to hundreds of millions of lines, if not billions.
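
A toy of “self-programming” in this weak sense, where the program rewrites its own decision rule in response to the environment; the scenario and all numbers are invented:

    import random

    class Agent:
        def __init__(self):
            self.threshold = 0.5                          # the rule the agent may rewrite

        def act(self, signal):
            return "approach" if signal > self.threshold else "avoid"

        def learn(self, signal, reward):
            action = self.act(signal)
            if action == "approach" and reward < 0:
                self.threshold = min(1.0, signal + 0.05)  # got burned: be more cautious
            elif action == "avoid" and reward > 0:
                self.threshold = max(0.0, signal - 0.05)  # missed out: be bolder

    agent = Agent()
    for _ in range(1000):
        s = random.random()
        agent.learn(s, 1 if s > 0.8 else -1)              # environment secretly rewards s > 0.8
    print(round(agent.threshold, 2))                      # ends up hovering near 0.8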

Hibernate's avatar

It understands binary code. Talk to it in binary ^^
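
For what it’s worth, “talking binary” is just an encoding, and the whole trick fits in a few lines of Python:

    msg = "are you sentient?"
    bits = " ".join(format(byte, "08b") for byte in msg.encode("ascii"))
    print(bits)                                                    # 01100001 01110010 ...
    print(bytes(int(b, 2) for b in bits.split()).decode("ascii"))  # round-trips back to the text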

mattbrowne's avatar

We can’t know for sure. Even passing the Turing test wouldn’t be enough. Neither would passing the mirror test:
http://en.wikipedia.org/wiki/Mirror_test

So how about adding the requirement that the AI be capable of loving humans or other AIs, and capable of empathy?

YoBob's avatar

You really can’t. The best you can hope for is to fail to prove that it is not sentient. This is essentially what the Turing test is about.

Schroedes13's avatar

Ask if its name is Watson!

LostInParadise's avatar

Without language, the computer is limited in what it can do. Maybe it would have the intelligence of a chimpanzee.

ragingloli's avatar

@mattbrowne
I do not see empathy and love as requirements to be recognised as sentient.

flutherother's avatar

There would be no way you could tell for sure even if the robot could communicate. The robot could imitate a human being, or even a particular human being, so well that we couldn’t tell the difference, but that wouldn’t be proof of self-awareness.

ragingloli's avatar

@flutherother
Should you then deny the entity its claim to sentience based purely on the off chance that it is just faking it?
I say no: if it appears to be sentient, it is to be considered sentient until you can prove that it is not, because what is at stake is nothing less than the basic dignity and “human” rights a possibly sentient being deserves.
After all, you cannot prove that you are sentient yourself.

flutherother's avatar

@ragingloli Just because we can’t prove a creature is sentient or self-aware doesn’t mean we shouldn’t treat it as if it were. I can’t prove it isn’t sentient any more than I can prove it is. We can’t rely on logic; we have to look to instinct and sympathy.

gorillapaws's avatar

@ragingloli painting an ultra-realistic picture of an apple, one so good it fools others into thinking it’s real, doesn’t actually make it an apple.

I would think it has to actually “understand” what an apple is, instead of referencing a table of properties with thousands (or millions) of attributes such as red, green, round, fruit, sweet, sour, ripe, etc. There is no comprehension of the experience of “appleness”. If someone hacked into the property database and inserted “PwN3d”, it would have no way of knowing that it doesn’t make sense and would forever associate “PwN3d” with apples. This could never happen in a human.
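
The kind of lookup-table “knowledge” being described looks like this in Python; a deliberately dumb model, chosen to make the point, not a claim about how real AI systems work:

    # A property table for "apple": just stored attributes, no comprehension.
    apple = {"color": ["red", "green"], "shape": "round",
             "kind": "fruit", "taste": ["sweet", "sour"]}

    apple["PwN3d"] = True     # a hacked-in property is accepted without question
    print(apple["PwN3d"])     # the table has no way to notice this is nonsense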

ragingloli's avatar

“This could never happen in a human.”
Sure it could. You just have to teach them young.

gorillapaws's avatar

@ragingloli if you implanted that idea in a mature adult somehow, I’m confident it would be instantly rejected as nonsensical, and even if you taught them young, the rational mind would realize that it’s nonsensical and abandon it. Arrays don’t work this way.

ragingloli's avatar

“if you implanted that idea in a mature adult somehow, I’m confident it would be instantly rejected as nonsensical”
And you know why? Because the adult mind already has years of experience and accumulated knowledge about that subject.
I also doubt that the rational mind would realise the nonsensical nature. If that were so, there would be no superstition and no religion.
And even if I granted you the rejection of the rational mind, it would also then mean that the AI, which is ratio in extremo, would abandon it as nonsensical as well.

gorillapaws's avatar

@ragingloli AI isn’t “rational” at all; it simply follows instructions by evaluating VERY basic commands. From these, programmers have been able to craft beautifully complex architectures that do remarkable things, but there’s not an ounce of critical thought there. If I assign a “PwN3d” property to the “apple” object, it simply obeys. And if I assign it to the aPple object without defining it first, it will crash completely.
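
Both halves of that claim are easy to demonstrate in Python (a toy, of course, not any particular AI architecture); note that the last line deliberately crashes:

    class Apple:
        pass

    apple = Apple()
    apple.PwN3d = True    # "it simply obeys": the attribute is attached, no questions asked
    aPple.PwN3d = True    # NameError: name 'aPple' is not defined -- the "crash completely"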

PhiNotPi's avatar

The human mind actually does create memories of all of the properties of an item. It is just capable of finding not only where things are associated, but also where they are not associated, based on past knowledge.

If you give a human a purple hammer, he will think about all of the uses of a hammer, how to hold it, what it does, etc. He will also think about purple, but years of experience show that hammers, in general, have nothing to do with the color purple. He would only associate “hammer” and “purple” when thinking about that particular hammer.

However, if you give a human a purple hammer, and almost every hammer he has seen has been purple, he would associate “hammer” with “purple”. When you give him an orange hammer, he would most likely think “what’s up with this hammer, it’s orange!” He would identify it as a hammer because it matches almost every other property of a hammer, but he will be surprised when it fails to meet his prediction of purple.
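
A toy version of that purple-hammer point: association strength is just observed frequency, and “surprise” is an observation that breaks a strong expectation (the counts are invented):

    from collections import Counter

    hammer_colors = Counter({"purple": 99, "orange": 1})   # almost every hammer seen was purple

    def surprise(color):
        p = hammer_colors[color] / sum(hammer_colors.values())
        return round(1 - p, 2)       # the rarer the observation, the bigger the surprise

    print(surprise("purple"))        # 0.01 -- expected
    print(surprise("orange"))        # 0.99 -- "what's up with this hammer, it's orange!"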

ragingloli's avatar

@gorillapaws
And from these basic commands rational thought would emerge.
Do you think neurons are a lot different? All they do is the very basic function of sending a bioelectric potential along their axons upon stimulation. It is the activity of the network of neurons that lets thought emerge.
A network of interdependent and interrelated algorithms and subroutines would be quite similar to this. There is no reason at all why a simulated neural net would not result in rational thought. It works in all other lifeforms with brains.
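
To make that concrete: a single simulated “neuron” below only weighs its inputs and fires, yet three of them wired together compute XOR, something no single unit can do; the emergence argument in miniature. (The weights are hand-picked for the example.)

    import math

    def neuron(inputs, weights, bias):
        # A unit does one dumb thing: weigh its inputs, add a bias, squash, fire.
        return 1.0 / (1.0 + math.exp(-(sum(i * w for i, w in zip(inputs, weights)) + bias)))

    def xor(a, b):
        h1 = neuron([a, b], [20, 20], -10)     # fires if a OR b
        h2 = neuron([a, b], [-20, -20], 30)    # fires unless a AND b
        return neuron([h1, h2], [20, 20], -30) > 0.5

    print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
    # -> [False, True, True, False]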

“If I assign a “PwN3d” property to the “apple” object, it simply obeys.”
It seems clear and pretty obvious that you have already created a model in your head on how scientists would make an AI, set it up to fail and then just assume that this is the only model that can possibly exist.
Did it never occur to you that, for example, using genetic algorithms, scientists could simply let an artificial, simulated neural net evolve, with no input from them except the environment of stimuli in which that simulated brain evolves?
Or how about the scientists making the AI in such a way that simply entering data sets would not even be possible without the AI automatically checking whether the new data actually makes sense? The AI could ask back: “What does ‘PwN3d’ mean?” “How does it relate to the apple?” “Demonstrate that it does!”
Do you really think AI is just a big database of information with autonomous access processes?
If so, sorry, but that is not what AI is.
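
The “ask back” idea above, sketched in Python; the schema and the wording are invented for illustration:

    # New facts are refused until they cohere with what is already known.
    KNOWN_PROPERTIES = {"color", "shape", "taste", "kind"}

    def learn_fact(knowledge, obj, prop, value):
        if prop not in KNOWN_PROPERTIES:
            return f'What does "{prop}" mean? How does it relate to the {obj}? Demonstrate that it does!'
        knowledge.setdefault(obj, {})[prop] = value
        return "accepted"

    knowledge = {}
    print(learn_fact(knowledge, "apple", "color", "red"))   # -> accepted
    print(learn_fact(knowledge, "apple", "PwN3d", True))    # -> the AI asks back instead of obeying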

PhiNotPi's avatar

@gorillapaws Also, hacking into the computer’s memory is like going in and rewiring a human’s brain. You could get both of them to associate PwN3d with apples, but it completely skips the learning phase, and the change is involuntary and most likely not what the computer or human would do on its own.

Mariah's avatar

See here for some other ideas.

mattbrowne's avatar

There is no agreement on what sentience (http://en.wikipedia.org/wiki/Sentience) is or what qualia (http://en.wikipedia.org/wiki/Qualia) are. The articles are worth reading.

Zaku's avatar

I’ve programmed AI, and I haven’t yet been able to imagine a computer program which would actually be self-aware in the way that I experience self-awareness. I could model such a thing in data, and in theory pass a Turing test, but that would not mean that the program had consciousness the way animals do. And I would not expect it to develop psychic abilities or morphic fields the way most animals demonstrably do. This is because computers do things mechanically, based on electronic logic. There’s no reason I can think of why that would suddenly give rise to consciousness, any more than a computer game causes real-world events. Programs can represent things, but they don’t manifest those same things, unless hooked up to a projector.

Now, perhaps, if someone figures out how to make a device which manifests consciousness, and can create a meaningful computer interface for it, that might do something relevant to consciousness… but I would think that would be just putting information into an existing consciousness. We can already do that by playing video games, for example, but it doesn’t create a consciousness – it just gives an existing consciousness some input.

And yeah, per the previous answer, we also need to agree on definitions of intelligence and consciousness before answering this question. I just used my own conceptions/experiences of consciousness.

Mariah's avatar

^ On the other hand, why does the structure of an organic brain give rise to consciousness?

I find consciousness to be one of the most mysterious things.

Zaku's avatar

@Mariah Excellent question. I think it’s likely that consciousness exists and just focuses on life, and/or that consciousness is everywhere and the only people talking about it are certain humans. More study is needed.

filmfann's avatar

At this writing, there is a movie in theaters that addresses this question.
It is called “Ex Machina” and it is terrific!

kritiper's avatar

Ask it the ultimate question: “Is this all there is?”

kritiper's avatar

Ask how it feels.

