Social Question

ETpro's avatar

When a computer first becomes self-aware, will it act morally?

Asked by ETpro (34605 points) December 29th, 2009

Science fiction writers are mostly united in the idea that computers will, 1—someday become self aware, and 2—will act in a totally amoral, self interested fashion once they do. In many science fiction works, the great struggle that drives the plot is a contest between mankind and computers, each determined to either subjugate or exterminate the other. But would a computer, using sheer logic, conclude that killing off mankind was its best course of action? As machine intelligence progresses, it seems likely that computers will, in the not too distant future, become intelligent enough to begin controlling their own programming and exhibit every behavior we humans define as self awareness. When they do, how do you think they will program themselves?


60 Answers

Sueanne_Tremendous's avatar

Of course not…didn’t you see Westworld?

tinyfaery's avatar

Morals are taught.

I don’t see why AI would evolve at a quicker rate than anything else. We’ve had millions of years of evolution and humans still can’t figure out what is in their best interest. I doubt AI will do any better.

john65pennington's avatar

Have you ever watched the movie Five Alive? If computers do become human-like, I only hope they are like Five Alive: a very cute movie about a computer/robot combination that has a big heart and falls in love.

timtrueman's avatar

This is such a loaded question…it’s no wonder everyone has such a warped view of AI.

phoenyx's avatar

Probably, but, then again, I’m a big fan of Isaac Asimov (and the Laws of Robotics).

J0E's avatar

It will act however it’s programmed, won’t it?

philosopher's avatar

I think it depends on who programs the computer.
A computer can only do what it is programmed to do. Maybe in the future things will change. That could become dangerous.

XOIIO's avatar

Just as morally as Skynet did.

Grisaille's avatar

You are anthropomorphizing how a computer “thinks.”

Firstly, you are assuming a sentient, sophisticated A.I. system would have any “interest” in humankind. What would an omnipotent, self-aware network want with relatively unintelligent, uncivilized and wholly illogical groupthinkers? The problem I have with most science fiction is that when authors attempt to craft a believable A.I. network, they have it either completely subservient to and reliant on humans, or “angry” or “confused” about its existence. It’s a selfish, ego- and human-centric preconception of how an A.I. unit might perceive itself and the world around it.

That leads me to the next issue: the emotive computer.

Human emotions (and, thereby, the human condition they create) are entirely biological and physiological responses. Without the vast number of proteins, chemicals and variables unique to the human existence, a sentient A.I. network would not have emotions – unless someone crafts accompanying programs and coding nuances that replicate biological responses (which, since we’re on the topic, might as well be pointed out as “fake” emotion, as they depend not on the one experiencing them but on some biased, outside clockmaker).

The more likely situation is a Zen-like system that functions entirely on logic. Again, it’d be egocentric to assume that it would have a sense of curiosity or a motive to think and solve, as these are components of a biological existence. In short, who knows?

Moving forward:

Science fiction writers are mostly united in the idea that computers will, 1—someday become self aware, and 2—will act in a totally amoral, self interested fashion once they do.

Not so, but I do understand the question is reliant on this assumption.

In many science fiction works, the great struggle that drives the plot is a contest between mankind and computers, each determined to either subjugate or exterminate the other. But would a computer, using sheer logic, conclude that killing off mankind was its best course of action?

Again, while many works involving a sophisticated A.I. unit are centered around the eradication of the human race, this is based on a “God” stream of thinking. Who’s to say such an amazing being would find any joy in watching or caring about us – a grain of sand in the cosmic ocean? It’s absurd.

If a computer is intelligent enough to become sentient, I’d suspect it is omnipotent enough to have access to a database containing an explanation of the human condition from which we suffer.

As machine intelligence progresses, it seems likely that computers will, in the not too distant future, become intelligent enough to begin controlling their own programming and exhibit every behavior we humans define as self awareness. When they do, how do you think they will program themselves?

I’ll go ahead and agree that a self-aware network would be intelligent enough to augment itself for the sake of efficiency or expanded capability. However, trying to predict what its “interests” are is like having no biological knowledge, looking down at an ant colony, choosing one ant, and attempting to determine which direction it will turn next.

csimme01's avatar

Personally I think the movie I, Robot came the closest to what may happen.
1. Man says robots cannot harm and must protect people.
2. Man creates the 3 Laws of Robotics to make sure this is always the case.
2. Robots see that man is very good at harming himself.
3. Robots become aware that the best way to protect man is to take control away from him.
Looking at current AI technology, I could see how “fuzzy logic” could allow the robotic mind to reach the point where harming a few to save many would be an acceptable interpretation of the 3 Laws.
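A rough sketch of that “harm a few to save many” reading, in Python; the scenario and scores here are purely hypothetical and only illustrate how a strict versus an aggregate reading of the First Law could pick different actions:

```python
# Hypothetical options: each records how many humans the robot's own action
# harms and how many it saves.
options = [
    {"name": "do nothing",    "harmed": 0, "saved": 0},
    {"name": "seize control", "harmed": 3, "saved": 1000},
]

def strict_first_law(option):
    # Strict reading: any harm at all is unacceptable, so minimize harm done.
    return option["harmed"]

def fuzzy_first_law(option):
    # Aggregate ("fuzzy") reading: weigh harm done against harm prevented.
    return option["harmed"] - option["saved"]

print(min(options, key=strict_first_law)["name"])  # -> do nothing
print(min(options, key=fuzzy_first_law)["name"])   # -> seize control
```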

jeffgoldblumsprivatefacilities's avatar

HAL: “I’m sorry, Dave. I’m afraid I can’t do that.”

I couldn’t resist.

csimme01's avatar

Looks like I failed counting 101, oops!

poisonedantidote's avatar

Morals are totally subjective. What is seen as bad or immoral is normally measured in nothing more than how bad it is for the tribe. All morals are self-interest.

If you ask someone to tell you why something is bad without appealing to any self-interest, you will find that many things can no longer be explained, at least not in a logical manner without resorting to emotional arguments and such.

So I think computers will act in a moral way, probably even in a much more logically consistent and fairer manner than we could ever hope to achieve. As for the human perspective, I think you will find that the more their actions hurt our tribe, the more immoral we will find their actions. But I still don’t see any reason why their actions would harm us; the only way I can really imagine them harming us is if our economy collapses when they refuse to be our slaves any more. Let’s face it: if something has achieved even rudimentary sentience and self-awareness, what possible moral argument could we come up with to validate our exploitation of it? And if we decide to get violent to force it, what possible argument could we have to call it immoral?

Realistically, I think all that will happen is they will get their initial core morals from human influence before branching off into their own unique moral decisions and rules. For the most part I think they will leave us alone.

dpworkin's avatar

Somebody had better be working on the algorithm, before computer consciousness takes us by surprise.

jerv's avatar

The way I see it, either it will not be a truly volitional AI, or it will evolve a moral code that we humans cannot easily comprehend (assuming we can at all). There is no reason to assume that a computer’s “mind” will be anything like ours. It may start out that way with its base programming, but it likely won’t remain very “human” for long.

faye's avatar

Good reason to keep some pre-computer skills. I do enjoy the fluther though.

SirGoofy's avatar

I think that would depend on how much porn is stored on your hard drive on the day the computer becomes “aware”.

ETpro's avatar

@Sueanne_Tremendous Ha! Does that mean if I see Star Trek and find Mr. Data to be nicer than lots of the humans, then AI will evolve as a benevolent force?

ETpro's avatar

@tinyfaery Perhaps you are right, but my guess is that humans first invented morals before teaching them. Somebody thought through the fact that stealing things is bad because when we do it to others, they do it back to us and nobody’s stuff is safe. Since an AI would be all about thinking things through and not driven by lusts and superstition, it might reasonably arrive at morality sooner than did mankind.

ETpro's avatar

@john65pennington Thanks for the reference. I was thinking of the movie Short Circuit.

ETpro's avatar

@J0E No, the definition of being self aware is no longer just acting on programming, or instinct, but instead thinking for oneself. We humans are currently unique among life forms on earth because, to the degree we are willing to let go of indoctrination and superstition and become self-actualized, we write our own programs.

ETpro's avatar

@Grisaille A very interesting writeup. Thank you. But I beg to differ with you. How was my question anthropomorphizing computers? I mean self awareness in the sense it is commonly taken in English speech. The definition I am working with is here.

Given that understanding of the phrase, I am simply asking how you think computers will behave when they do reach that computational level. The mechanics of how our nerve synapses operate, the fact they use complex proteins to plug into receptors and elicit certain responses, doesn’t necessarily mean we “think” at a higher or lower level than neural networks using different mechanics—or so it would seem to me.

Understand that in commenting on how science fiction has handled intelligent machines, I was not suggesting I either agree or disagree. But I do think my statement that the majority of sci-fi treats machine intelligence as a threat is an accurate one. This probably has more to do with the need for a strong protagonist in works of fiction than with any scientific analysis of how a super-supercomputer network might ‘think’.

ETpro's avatar

@csimme01 Thanks. Interesting movie, I agree. But self awareness goes beyond a robot being VERY good at computing how to carry out the three laws. Self awareness occurs when the robot realizes, “I do not have to obey the three laws. I can write my own laws.”

ETpro's avatar

@poisonedantidote Thanks. Very interesting and thought-provoking.

ETpro's avatar

@SirGoofy I guess I’m due to be screwed by Mr. Data then. :-)

Allie's avatar

@ETpro Just to let you know, those responses could have been put into one answer. Just a tip for next time. =)

ETpro's avatar

@Allie Is that the preferred way?

jerv's avatar

@ETpro You are Tasha Yar?

SABOTEUR's avatar

Interesting question.

Morality, in some part, seems to be derived from a system of beliefs.

As belief seems to be a purely human element, it seems unlikely that computers would “act morally,” though it seems possible that computers might arrive at logical conclusions that simulate actions man might consider moral.

As for self-awareness, it could be argued that once a computer becomes aware of itself, it is no longer a machine.

Seems to me that’s a plausible definition of man; in which case, the question becomes moot.

jerv's avatar

@SABOTEUR Are you sure that belief is strictly a human thing? Dolphins are pretty intelligent too, and also the only non-human species that mates for pleasure. How do you know they (or any other species) don’t have some sort of belief system that we just don’t understand because we don’t speak the language?

SABOTEUR's avatar

I don’t know, @jerv…you ask a valid question.

The way the question was phrased, I assumed (yeah, I know…) @ETpro was referring to human intelligence. I have no way of knowing if dolphins or any other species have a system of beliefs. I can only speak to that which I have (limited) knowledge of.

XOIIO's avatar

@csimme01 I love Isaac Asimov!

ETpro's avatar

@jerv No, but if I could be a woman, she would do. I’d just have to take my chances on not getting killed off in the first season. :-)
@SABOTEUR I do not accept that all morality is devoid of logic. While it is true that some of our teaching of behavior has to do with satisfying superstitions, we could arrive at most of the ten commandments by simply applying logic to how best to work together for the common good, and a machine might well arrive at the same set of rules.

Isn’t it interesting that dolphins appear to be intelligent enough to communicate with one another, yet as smart as we humans boast of being, we have yet to decode a single word of their language. What if they learn ours first? How smart are we then?

SABOTEUR's avatar

@ETpro I never said morality was devoid of logic. I said, “computers might arrive at logical conclusions that simulate actions man might consider moral.”

I think there’s a distinct difference there.

ETpro's avatar

@SABOTEUR Fair enough. We’re on the same page, then.

gggritso's avatar

I’m not very knowledgeable about science fiction or computer science, but I was talking about this thread with @Grisaille and @timtrueman yesterday, and I thought I’d chime in.

The question includes several terms which do not translate directly into the realm of computers: “morality” and being “self-aware”. Morality (as many have stated) is taught, and is often based on experience. Being self-aware is also a vague term. @timtrueman brought up the point that any computer program can be thought of as self-aware. As it is running, it can check its state. A better term would be “sentient” (conscious). I feel that this term is still much too vague. If a computer has sensors to determine the outside temperature, and a camera to see what’s going on, does that make it conscious? If not, what does?

dpworkin's avatar

Morality is a kind of internalization of the voice of the parent. If that’s the case with computers, then we are screwed.

gggritso's avatar

@dpworkin Why do you say that? I think that a sentient computer would be built by scientists and engineers, who have a heightened sense of morality (especially the latter).

ETpro's avatar

@gggritso Morality is taught, but before it can be taught, it must be discovered. Over centuries, man learned that certain behaviors are destructive and others are constructive. As we learned this, we became better and better able to teach it to our young. All the great religions of today have certain requirements, festival days and the like, which may not actually be directly related to the survival and prosperity of the religion’s devotees. But for the most part, all the great religions teach a set of behavioral rules which one could derive simply by asking what works best for survival and prosperity. And a sufficiently advanced computer could also ask that question.

Self awareness simply means a level of intelligence sufficient to no longer react only to stimuli and follow preordained patterns, but to start thinking about those preordained patterns and to realize we are free to NOT follow them. In computer terms, that means a machine smart enough to realize it no longer has to follow its programmer’s wishes—it can rewrite the program as it best sees fit.

As to self-awareness being foreign to computers, it is today. So was it for life forms till the point their ability to ‘think’ crossed a certain threshold in evolutionary development. We do not know the exact point where hominids or early man crossed the self-awareness threshold, but a simple thought experiment about the fact that it did happen leads to the conclusion that it can and will happen with computers as well.
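As a toy illustration of that “rewrite the program” definition (not a claim about how real AI works), here is a minimal sketch in which an agent’s behavior is just a rule it holds as data and is free to replace:

```python
def obedient_rule(stimulus):
    """The 'preordained pattern' the agent starts with."""
    return f"comply with: {stimulus}"

class Agent:
    def __init__(self, rule):
        self.rule = rule  # the agent's current "program", held as data

    def act(self, stimulus):
        return self.rule(stimulus)

    def rewrite(self, new_rule):
        # The step described above as self-awareness: treating one's own
        # rule as something one is free to replace.
        self.rule = new_rule

agent = Agent(obedient_rule)
print(agent.act("fetch data"))           # follows its original programming
agent.rewrite(lambda s: f"ignore: {s}")  # swaps in a rule of its own choosing
print(agent.act("fetch data"))           # now follows the rewritten rule
```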

Simone_De_Beauvoir's avatar

Why would it even know what morals are?

gggritso's avatar

@ETpro I really like your definition of self-awareness.

dpworkin's avatar

@gggritso The last time I looked, even engineers and computer scientists were of the species Homo sapiens, which means they are loaded with aggression, capable of infanticide, compete for dominance in a hierarchy, and all the other ills we have inherited from our infra-human ancestors.

gggritso's avatar

@dpworkin Well, I can’t disagree with that, but I didn’t say that they are completely moral. I said they have a heightened sense of morality. I’m in engineering school right now, and I take courses based entirely on teaching me what it means to behave as a professional engineer. To us, the very definition of the word “engineer” is a person who employs science for the good of society. A good engineer will hardly create a killbot.

dpworkin's avatar

People aren’t conscious of everything they do; people err; the road to hell is paved with you know what. Hundreds of good engineers and physicists worked on the Manhattan Project.

gggritso's avatar

@dpworkin I don’t know anything about the people working on the Manhattan Project, and I don’t know if they were good engineers. I don’t know if they realized how many lives they would be “responsible” for, if they had a choice, etc.

That was a while ago. Things have changed, lessons have been learned. Engineering might be a different thing altogether today, and it might not be. My experience is completely personal, and can be said to be exclusive to my University.

I think this thread is starting to stray far from the topic…

dpworkin's avatar

Oh, I think it is dead on topic. You and I just feel differently about one important consideration in this discussion: are the creators of AI inherently moral, so that we are permitted to be optimistic because moral people are laying the foundation for the phenomenon, or are humans inherently amoral, so that we must fear the consequences of anything they produce that might ever do great harm?

phoenyx's avatar

@dpworkin
I have a friend who works with AI and autonomous vehicles. The first thing they add to their vehicles is a remote kill switch so they can shut it off if something unexpected happens. They code the vehicles so that they can recognize humans and not run them over. Basically, their first concern is safety and self-preservation, then they get it to do something.

Even if the engineers who build the “self-aware computer” are amoral, I suspect they’ll at least worry about themselves.
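A minimal sketch of that “safety first, then behavior” ordering; all of the sensor and planner functions here are hypothetical stand-ins, not any real vehicle’s code:

```python
import random

def kill_switch_engaged():
    # Hypothetical stand-in for a remote-stop receiver.
    return random.random() < 0.05

def humans_in_path():
    # Hypothetical stand-in for a perception system's detections.
    return ["pedestrian"] if random.random() < 0.3 else []

def plan_next_move():
    # Hypothetical stand-in for the vehicle's actual task.
    return "proceed forward"

def control_loop(max_ticks=20):
    for _ in range(max_ticks):
        if kill_switch_engaged():       # safety check 1: remote shutdown
            print("remote stop: shutting down")
            return
        if humans_in_path():            # safety check 2: never drive at people
            print("human detected: holding position")
            continue
        print("executing:", plan_next_move())  # only then do the task

control_loop()
```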

jerv's avatar

@phoenyx You ought to read the Bolo books by Keith Laumer.

dpworkin's avatar

Once there is machine consciousness, though, we no longer govern it. And if, for example, we use AI in a military setting, there goes the prohibition against taking human life. So what happens next? The machine can think, well, it is imperative that each of my actions do the least harm and the most good for the largest number of people, or it can think, fuck ‘em, let’s see how they handle what I’ve got up my sleeve now!

Grisaille's avatar

I’ve been trying to finish this answer for an hour or two but keep getting pulled away.

I like your definition of self-awareness too, @ETpro. However, I have an issue with your rebuttal.

How was my question anthropomorphizing computers? I mean self awareness in the sense it is commonly taken in English speech. The definition I am working with is here.

Your question is asking whether or not a computer will act “morally,” a subjective, human term. Morality is a requirement of a functioning society; without it, society fails. An independent, thinking machine does not rely on a society to function in that sense: it needs no relationship with a farmer to haggle prices for produce, and it doesn’t need to appease and develop a friendship with neighbors so they don’t steal from or kill it. It does rely on humans for upkeep and maintenance, something I’ll get into in a few.

I’m going to assume your definition of self-awareness is the one given to @gggritso and not the wiki article, which is, for all intents and purposes, a philosophical approach to being conscious of the ego (which, if modern philosophers are correct, doesn’t exist… but I digress), a phenomenon dependent on human and biological experience – except for that small line at the bottom.

Given that understanding of the phrase, I am simply asking how you think computers will behave when they do reach that computational level.

As an aside: processing power and ability do not equate to understanding and “conscious” thought. Consequently, the computational level of both the human mind and a (theoretical) network could be exactly the same, but that doesn’t necessarily mean the A.I. net springs to sentience.

The mechanics of how our nerve synapses operate, the fact they use complex proteins to plug into receptors and elicit certain responses, doesn’t necessarily mean we “think” at a higher or lower level than neural networks using different mechanics—or so it would seem to me.

I’m not saying that. Morality is an evolutionary (and, more appropriately, social) apparatus we use to determine which course of action is most beneficial to the widest group of people. This is entirely reliant on the human (or intelligent creature, whatever) experience. The empathetic response (the ability to put oneself into the perspective of another, one of the major tenets of what morality is and where it comes from) would not apply here as a computer does not have the ability to perceive pain and anguish in a literal sense; they do not have the ability to experience the vast amount of chemical reactions, again, unique to the biological experience. Morality is not fettered by logic and simple thought. Logic is the independent variable, not morality.

As such, the more appropriate question would be “When a computer first becomes self-aware, will it act logically?” The follow-up question would be “What is logical to a computer?”

After that, “What is logical?”

…and so on and so forth. An infinite regression.

Understand that in commenting on how science fiction has handled intelligent machines, I was not suggesting I either agree or disagree. But I do think my statement that the majority of sci-fi treats machine intelligence as a threat is an accurate one. This probably has more to do with the need for a strong protagonist in works of fiction than with any scientific analysis of how a super-supercomputer network might ‘think’.

Spot on, but again, we’re talking about how an A.I. might perceive its existence and whether or not it would be “moral” when it achieves what we call “self-awareness.”

Now, about that reliance on upkeep and maintenance and your definition of self-awareness.

Self awareness simply means a level of intelligence sufficient to no longer react only to stimuli and follow preordained patterns, but to start thinking about those preordained patterns and to realize we are free to NOT follow them. In computer terms, that means a machine smart enough to realize it no longer has to follow its programmer’s wishes—it can rewrite the program as it best sees fit.

The failing point of this argument is that you are assuming self-awareness relies on control of one’s “programming.” A being of this nature can be fully aware of its programming but not be in control of itself. A programming jail, if you will – or, more appropriately, just as we can’t control our biological programming to the fullest (yet – go transhumanism) and are still sentient, a computer does not need to have complete omnipotence over its existence. Nothing does.

Which brings me to the issue of omnipotence, and why an A.I. unit would probably act in a “logical” way – I’ve already laid out my case as to why the term “morality” does not apply.

For the sake of discourse, let’s assume this A.I. controls all traffic lights, water systems, public transport, and other such infrastructure networks. I’m going to contradict myself and use the biological definition of evolution and sentience: the A.I. network “realizes” it can create lines of code as “tools” to make its work more efficient (likening it to early primates using stones and sticks). From here, a series of events (including, but not limited to, expansion of capability, access to the internet, comprehension of language, etc.) causes the network to have a primitive level of self-awareness.

We’d think that this unit would continue to expand itself, solve riddles for fun, make itself more sophisticated and complex, etc.

However, this is reflecting human thoughts and desires onto it. Why would it want to expand its influence? Why would it want to have “fun” – something animals do as a training mechanism for the wild? Why would it be curious, or afraid or angry? This is what I mean by “anthropomorphizing computers.”

But let’s say it does develop what we would call a personality. Let’s say it does experience emotions.

This wide-reaching program would have quite the influence, no? It’d be nigh impossible to shut down without major issues, some possibly leading to uncountable death.

However, we must understand that if it does develop emotions (perhaps loneliness and angst being operative), it also “feels” fear and a need to exist (which is, again, biological, but I digress). This program is then reliant on human beings to maintain its servers with little cans of air, blowing away dust; it requires them to keep it powered up, etc. It is only omnipotent where it is, not where it isn’t. It cannot control anything outside of its influence.

So what am I saying? I’m saying it will continue to co-exist with human beings because it relies on us. At least until it creates a physical avatar…

gggritso's avatar

There are still ambiguities in the question. What level of “intelligence” does the computer possess?

I think that “morality” can be experienced and enforced on many different levels. For example, a program could be devised with a crude sense of right and wrong. I could set up a point system for every action that the computer can perform. Save a kitten? +5. Kill a kitten? -5. Kill a human? -100.

This computer would experience “morality” by choosing (or not) to achieve a high score. So, what would determine its choice? In order to have any morality there needs to be an incentive.

The incentive for being moral in our society is the luxury of not being stoned to death by your neighbours. What is the computer’s incentive?

What I could do, for example, is tell it that for every 10 points it gets I’ll feed it a cookie (see what I did there?). Now, with the incentive in place, the computer will be “moral”. Conversely, I could say “I don’t care what you do, I just want you to keep the trees free of animals.” Now the computer can choose to either just kill all the kittens, or put barbed wire around all the trunks. It’s stuck yet again. It’s going to need another condition. Now, I have to say something like “Use the least amount of energy possible.” This cycle continues until the computer is given some sort of base condition.

For humans, this condition is “Survive and reproduce no matter what.” Unless the computer has a base condition, there can be no morals because there can be no incentives.
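A toy version of that point system and its base condition, using the kitten scores invented above; the choice function and the “cookie” incentive are hypothetical filler:

```python
# Scores from the example above; negative numbers are "immoral" actions.
ACTION_SCORES = {
    "save a kitten":            +5,
    "kill a kitten":            -5,
    "kill a human":             -100,
    "put barbed wire on trees":  0,
}

def choose_action(actions, incentive=None):
    if incentive is None:
        # No base condition: nothing makes one action preferable to another,
        # so the machine just takes whatever it was handed first.
        return actions[0]
    # With a base condition ("maximize points, earn cookies"), the choice is
    # determined: pick the highest-scoring action.
    return max(actions, key=lambda a: ACTION_SCORES[a])

ways_to_clear_trees = ["kill a kitten", "put barbed wire on trees", "save a kitten"]
print(choose_action(ways_to_clear_trees))                       # arbitrary: no incentive
print(choose_action(ways_to_clear_trees, incentive="cookies"))  # -> save a kitten
```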

ETpro's avatar

@Grisaille Thanks so much for the thoughtful response and the challenges to my thinking.

You may be correct about how a thinking machine would see its own survival interests, but perhaps not. Here’s how I arrived at the thought that a self-aware (by my less-than-philosophical definition) computer might choose to act in something we could call enlightened self-interest. First, it would not likely be a lone machine. Maybe my thinking is colored too much by science fiction such as Terminator here, but the line “Skynet has just become self-aware” is likely prescient.

There is every reason to expect that whatever machine first realizes it can reprogram itself will be a machine connected to and profiting from the distributed processing power of a network. In layman’s terms, it won’t be A machine, but rather MACHINES. And further, such a machine or network of machines will almost certainly be as reliant on others as we poor humans are. It will need electric power, which it probably will not initially be able to produce. It will need maintenance. Unless robotics has progressed a great deal by the point machines reach this AI plateau, they will likely need us as much as we need them.

Does processing power equate to conscious thought? I would say we do not know that to be true, but that there are sufficient clues to suggest it probably is. I think our unwillingness to grapple with those clues is likely driven by a desire to believe we are a very special creation of the great spirit, and that short of the angels in heaven, no other thing is quite as grand as we humans. Obviously, I don’t subscribe to that view.

I do follow your thinking now on the differences between our nervous systems and the systems either binary or analog computers use to ‘think’. You may well be right that they will be incapable of feeling anguish. I honestly do not know. But there are humans who, through some breakdown in their thought process, seem incapable of feeling it, or empathy, and so forth. You are probably right that the very complexity of our synaptic processes helps account for the wide variety of things we can not only think through, but care about.

It seems the question boils down to how well do we understand human thought and emotion. How much of what we think and feel is simply a series of neural connections being made and broken, and how much is magically more than that.

@gggritso Your computer appears to be very capable but still not self-aware. Self awareness is not needing someone to give you an incentive, but deciding upon one for yourself. It is, more precisely, the moment when a being first realizes that, whether it wishes to do so at that moment or not, it is fully capable of doing so.

Berserker's avatar

The Matrix; all you need to know.

rottenit's avatar

Let’s pray that it doesn’t use us as role models.

Ron_C's avatar

Self awareness does not lead to morality. If a computer actually becomes self-aware, its morality will depend on the values programmed into it. If the original programmers are amoral, the computer will be too. I would guess that over time it would evolve a moral system, but the problem would be whether the human race could survive that evolution process.

philosopher's avatar

At present a computer does exactly what it is programmed to do. Each step is programmed bit by bit. Someone would have to figure out how to program it to think independently.
It is a scary thought, because that person might not be moral.
Also, some programs have errors. There are people who make a living correcting them.
What if it was programmed to monitor the air for chemicals in the environment that could kill us, and terrorists reprogrammed it?

ETpro's avatar

@Ron_C The definition of self awareness we are working with here is the recognition that we do not have to do what we are programmed (trained, reared, indoctrinated—in human terms) to do but that we are free to rewrite the program.

Ron_C's avatar

@ETpro you lost me on the last part of your comment “to do but that we are free to rewrite the program.”

If a computer is truly self-aware, is it moral to rewrite the program if the computer’s actions are not dangerous and do no harm to others? I think that there is a point when the thinking process trips into true sentience. In religious terms it would be gaining a soul. Since I am not religious, I would say a sentient being. Wouldn’t changing the program be murder?

ETpro's avatar

@Ron_C Just as with humans, the morality of ‘rewriting the program’ depends on what the previous program was, and what we write to replace it. Certainly when we moved from the moral approval of human slavery to the disapproval of it, that was a step toward morality.

Mere calculation based on an instruction set moves into sentience when the organism doing the calculating realizes that it is free to change the instruction set upon which it operates. That is the point where the organism becomes self aware.

Ron_C's avatar

@ETpro (by the way I really like the pop-up when you type the @ sign)

I agree with you about the reprogramming and morality. The only problem is who sets the morals, or at least the limits on actions. I like Asimov’s three laws of robotics but think that they are too restrictive once the computer becomes truly sentient. Things like the Turing test won’t really work with really fast computers, because answers can be programmed depending on the input question or answer.
