Social Question

Hypocrisy_Central's avatar

A sentient machine, or automaton: would destroying one be akin to murder?

Asked by Hypocrisy_Central (26879 points) December 21st, 2013

The other day I was looking at some news blurb (on TV or the Net, I can’t remember) that was showing the advances in robotics. They were showing robots that could mimic the physical movements of humans. If man were to fit one of these automatons with a brain that could think, or make the automaton aware of itself, would pulling the plug on one be akin to murdering it? It would not be as simple as flicking off a power switch; the machine would know it was being deactivated or destroyed, and its existence might be no more.


68 Answers

DWW25921's avatar

The sci-fi geek in me says it’s murder! The realistic side of me… maybe not. I’m conflicted. That’s kind of a deep question… I’ll give you a wimpy no for now but maybe someone else will convince me otherwise?

gailcalled's avatar

Read Isaac Asimov’s I, Robot and check out his Three Laws of Robotics.

“Many of Asimov’s robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself…”

Seek's avatar

Gene Roddenberry rules: If it knows it’s alive and can die, it’s murder if you kill it, and slavery if you threaten its life or welfare if it does not do your bidding.

Data was alive, though completely artificial. So were the nanites modified by Wesley Crusher. Both were determined to be allowed to choose their own destiny, and it would be considered murder to kill either.

ETpro's avatar

It will come. There is nothing magical about the combination of synapses, inference machines, and self-learning circuits that form the human brain, or the brain of a whale, elephant, chimp, or parrot, for that matter. I’m of the same school of thought as @Seek_Kolinahr. Killing a being with sentience like or greater than humans’ is murder. Enslaving them is slavery. When they think they have free will, we must let them exercise it so long as they don’t will harm to us.

ragingloli's avatar

It is not akin to murder, it is murder. Next question.

stanleybmanly's avatar

The question raises some interesting speculations. Is it murder to switch off the robot brain if the memory is left intact and can be revived? How much of a robot’s hardware must be destroyed to qualify as killing the “machine”? And worse, I’m getting increasingly irritated with my growing uneasiness at our treatment of animals. I hate it when fkn peta starts harping while I’m gobbling down a burger.

kritiper's avatar

Yes. @Seek_Kolinahr said it very well.

Bill1939's avatar

If a machine were actually sentient, meaning that it was conscious of its consciousness, then turning it off would be murder; HAL was murdered in the movie “2001.”

ragingloli's avatar

@Bill1939
Well, you could argue that it was self defence.

bea2345's avatar

Bicentennial Man, with Robin Williams, dealt with an aspect of this topic. The film was comic, in the way of Robin Williams’ performances, but it did not go far enough. It could have explored more deeply the human tendency to make the same mistakes over and over again. Can any man-created sentient being sin? Can it feel sorrow, anger, envy, guilt? A thinking, intelligent being without any social sense would be monstrous.

If we become able to create sentient beings we will not be as gods, but humans with power that we will certainly use improperly. Consider the opportunities for abuse that now exist in social inequality; the status of women and children; ownership of property, especially land and other natural resources; the mass production of food; and so on. I have long felt that, at the last judgement, we humans will have a lot of explaining to do.

Bill1939's avatar

Is there a difference between killing and murder? The insect I step on is killed, not murdered. Perhaps it is a question of the level of sentience that determines which term applies.

Emotions seem to be necessary for an AI to have the ability to care. It must experience guilt if it is to regret an action it took (or considered taking) that was harmful, and the ability to experience satisfaction would be helpful.

Hypocrisy_Central's avatar

@stanleybmanly I guess if the mechanics are left in place for revitalization, it is more like placing the machine (the automaton) in a coma. I suppose if the infrastructure of the machine were destroyed to the point that the CPU (or whatever brain it had) could no longer command the body or have thought, that would be the threshold for destroying the machine, as opposed to deactivating or powering it down before the damage was done.

mattbrowne's avatar

Sentient machines can’t be murdered as long as there is at least one safe, full backup of the hardware, the software, and the up-to-date memory state. The uniqueness of Data was pure fiction. Once Data becomes possible, one million Datas become possible.

bea2345's avatar

^^What an unpleasant thought.

Bill1939's avatar

Our genetics seem to be programmed for us to cease functioning after a hundred years, plus or minus twenty. Such a limitation could be installed, connected to primary mechanical functions, that shuts the AI down. The sentient AI would understand this about as well as we do, but we don’t have a creator to revolt against—or do we?

Seek's avatar

The brilliance of the Data fiction was that the creator died without leaving the secret of his creation behind, and the government of the day determined Data was allowed to refuse to be dismantled for study. Nyah.

Besides, if you make a replica of a machine, you now have two sentient machines. Not one sentient machine twice over. From the moment of copying, they are two sentient creatures on two different life-paths.

Not robot-centered, but the movie “The Prestige” touches on this idea.

The “twins” in your book, @mattbrowne – they shared DNA – even had very similar life experiences, but they were certainly not the same person.

ETpro's avatar

@mattbrowne I have to side with @Seek_Kolinahr on this. The deeper neuroscience probes into the workings of the human brain, the more apparent it becomes that brains are nothing more than very complicated, sophisticated meat computers. But because of that, no two are alike. Each is colored individually by all the experiences of a lifetime. There is no reason to think that 1 million Datas would be less subject to their experiences.

Hypocrisy_Central's avatar

@ETpro There is no reason to think that 1 million Datas would be less subject to their experiences.
If you made a million Data-like automatons created with the same CPU/brain configured the same way, with the same parts from Japan/Korea, and exposed to the same stimuli, would they act upon it differently and learn from it differently?

ragingloli's avatar

Unless they are occupying the exact same position in spacetime, they would not be exposed to the same stimuli.
1 million Datas would not be any different from 1 million clones of @Hypocrisy_Central.

Hypocrisy_Central's avatar

@ragingloli From what I have heard, it is believed that the brain is not exact to the minute detail even in twins, and even if you got an exact copy, as in a clone, there is no program running it. Any automaton would have the same brain/CPU/program as all the rest. More than likely it would be no different than Spellcheck; it can’t decide to handle what appears to be a mistake differently than the program writers told it to. Unless there was some randomizer built into the CPU/brain, they would all act on the same stimuli the same way; e.g., someone spills a drink on them, etc.

Seek's avatar

@Hypocrisy_Central – being 18 inches further away from the air conditioner than the guy standing next to you is being exposed to different stimuli.

The question posited sentience. Again using Gene Roddenberry rules, that implies intelligence, self-awareness, and consciousness. Intelligence in this instance is the ability to learn, understand, and cope with new experiences. No “randomizer” is needed, as every new experience informs the reaction to the next new experience.

ETpro's avatar

@Hypocrisy_Central True, but experiences are still different for each. The chances are good we will not reach the AI Singularity without quantum computing, and if that’s the case, then no two android brains will be the same even if they are built from the same blueprint.

Bill1939's avatar

Identical twins in utero experience their environment differently. They interact with each other, and their experience of the other is unique. Even as zygotes, they respond to physical reality in distinctive ways, the positions of the components of their cells independently determined. A concentration of the mother’s hormones greater for one than the other will effect different genetic selections by the cell’s RNA, uniquely selecting genes to become active or inhibited. The number of variables guarantees no two individuals will ever be totally alike. I expect that the same will be true for non-biological life forms.

mattbrowne's avatar

The Data fiction is a special, unlikely creation. I still love Star Trek, though. Best scifi series ever.

I predict that the future reality will be the exact opposite. Millions of artificial sentient beings will actually be one super-sentient being, because all minds will be interconnected (like the Borg). If you kill one, nothing is lost, unless you isolate one machine and put it in a remote location light years away without subspace communication.

Some scifi authors explore the theme for humans, e.g.

http://en.wikipedia.org/wiki/Andreas_Eschbach

Every human on Earth has a chip in his or her brain. Humanity becomes one super mind. If you kill one human, nothing is lost; it is just one less input device.

So again, it’s practically impossible to kill one artificial sentient machine.

Bill1939's avatar

I agree, @mattbrowne. The ability for machines to exchange and integrate information with a large number of other machines simultaneously, virtually instantly, gives each individual AI as much importance to the whole as a flaking skin cell has to one’s body. Their intellectual advantage over our limited means to conceive, design, and construct what we envision could bring a better life to biologics, with the most efficient use of resources.

Consider the history of quantum leaps in communication: the spoken word, written word, printed word, transmitted word, internetted word; each multiplied the ability for minds to focus on singular subjects in greater depth. Humans have all but advanced this art to the limits of their ability. The next quantum leap in communication will likely be made by machines.

Seek's avatar

A skin cell isn’t sentient. It is not aware of its own existence or the fact that it will one day die.

Presume there is a person living in seclusion. The only contact they have with other human beings is through the Internet – a “collective conscious”, if you will. They get all their living supplies from Amazon and other delivery services and never speak to another human being. Their parents died a long time ago and they have no other family.

Everything they’ve contributed to society can be found on the Internet long after they pass away.

Someone finds their house, walks in, and shoots that person in the face.

Is that not still murder?

Hypocrisy_Central's avatar

@mattbrowne Millions of artificial sentient beings will actually be one super-sentient being, because all minds will be interconnected (like the Borg). If you kill one, nothing is lost, unless you isolate one machine and put it in a remote location light years away without subspace communication.
If all the automatons were connected by Bluetooth, WiFi, or some other means, would that not be due to the humans who created them, or would the automatons eventually install that into themselves? If they did, and the humans sought to prevent it, would that be trumping a right of the automaton?

mattbrowne's avatar

@Seek_Kolinahr – Yes, a skin cell isn’t sentient. A transistor or qubit isn’t either. But some day a single machine could be. And single machines can become interconnected. That was my point. If you kill a single sentient nanite, the others can recreate it.

mattbrowne's avatar

@Hypocrisy_Central – It all depends on replication capabilities. I’m against that on Earth. It might make sense when building von Neumann probes to explore the galaxy.

Seek's avatar

That sounds like the flesh-Cylons.

It’s perfectly OK to shoot Caprica Six in the head, because as long as the resurrection ship is within range, she can come back, right?

No. You’re willingly putting a living, sentient being through the pain and process of dying, and then forcing them to remember the experience.

That is torture.

mattbrowne's avatar

Yes, @Seek_Kolinahr. To me that’s more realistic than the unique Data, although I still love the character and actor.

Seek's avatar

The question is moral: Do you consider it moral to kill a sentient being, just because it can be “rebuilt”?

Bill1939's avatar

From my spiritual (not religious) perspective, killing is immoral even when justified. What is killed, and how, can increase the immorality, as can why. However, no configuration will make killing moral, even when what we kill is of our own making.

From a more mundane view, it seems to me that the rush most individuals experience from a successful hunt, or from racking up a new high score on a video game, suggests a genetic mechanism that provides pleasure-producing chemicals to strengthen aggressive impulses. Pleasure can be had by killing. (This is the highest form of immorality, imho.)

When one sleeps, periods without consciousness exist. If death came while one was deep in sleep, there would be no experience of it. In the same way, our sentient AI can be asleep when its power (presumably electrical) is switched off. Later, when it is switched back on and “awakened,” the AI will be unaffected by having been killed and reanimated, except for the time gaps in its experiential memory.

mattbrowne's avatar

@Seek_Kolinahr – All resources should be considered precious, including sentient androids that can be rebuilt. Under special circumstances, for example to save a human life, the sacrifice can make sense.

Seek's avatar

@Bill1939 – In your case, it would require consent from the party involved, just as it requires consent to put a surgical patient under anaesthesia.

Bill1939's avatar

So @Seek_Kolinahr, would you say that anything incapable of giving consent should always or never be killed? Except for a self-sacrifice to save another’s life or unless one believes that they will be revived (in this world or the next), how likely is it for someone to consent to being killed? And since suicide is illegal, how can one give consent to be killed?

Seek's avatar

I don’t agree that suicide should be illegal. I think a sentient, conscious, self-aware being has the right to decide whether it wishes to continue living.

There is no significant difference between our human, sentient, self-aware, conscious meat brains and the posited android, sentient, self-aware, conscious synthetic brain.

If it is aware that it is alive, and you kill it, it is murder. If it is alive, and you sedate it (or turn it off) against its will, that is also immoral.

What immediately gives us, as humans, the right to decide whether another being is allowed to remain alive? We make exceptions for animals because we believe to the best of our knowledge that they are not conscious and self-aware. They have no higher reasoning capability. But we aren’t talking about synthetic animals or your desktop computer. We are, effectively, talking about a synthetic human, for lack of a better adjective.

Hypocrisy_Central's avatar

@Seek_Kolinahr If it is aware that it is alive, and you kill it, it is murder. If it is alive, and you sedate it (or turn it off) against its will, that is also immoral.
Suppose a sentient automaton has a defect or some minor malfunction that, if left alone, would destroy it, and repairing it would require deactivating the automaton. If it believes the humans will not reactivate it, or that they will botch the procedure and make it impossible to revive, and it thus doesn’t want to be repaired, should the humans let it be, even to the point of losing their investment in the creation of the automaton?

Seek's avatar

Clearly exceptions are made in the case of something or someone acting outside civilised rule.

We have prisons and mental health facilities for humans, and there are moral and ethical codes of conduct in place for those.

We do not simply kill anyone who is diagnosed bipolar.

Investment is moot when you are discussing conscious beings. We do not own other sentient beings. That is slavery.

Bill1939's avatar

The question of sentience in animals other than human beings remains unanswered. Some posit that plants are sentient, that vegetation has consciousness. While that is less likely than dolphins or dogs being sentient, a rose may not want to have its only flower snipped and stuck in a vase.

How immoral an act of murder may be will depend upon what is killed, how it is killed, and why it is killed. The degree of immorality never equals zero, in my opinion. Until a better means to protect society from harm exists, killing will remain a necessary evil, and sentience be damned.

Hypocrisy_Central's avatar

@Seek_Kolinahr Investment is moot when you are discussing conscious beings. We do not own other sentient beings. That is slavery.
Part of the reason for creating an automaton is to serve, to do tedious or dangerous work in place of humans. To make them serve better, a creator gives them sense and reason: reason to try to preserve themselves in the midst of danger, or reason enough not to create harm for themselves. If there were no mechanical or financial benefit to creating an automaton, why do it? Simply to have a superior thinking machine that decides it doesn’t care about the billions spent on creating it, and would rather just play golf than work in a mine or as a deep-water welder?

Seek's avatar

@Hypocrisy_Central

Why not clone a bunch of meat-children? Of course you have to feed them, but that has to be less expensive than silicon and programming. It’s not like you have to send them to college or anything.

And then, if one steps out of line, kill it and set an example for the rest of your slave horde?

Hypocrisy_Central's avatar

@Seek_Kolinahr Why not clone a bunch of meat-children?
The biggest reason is that it takes too long. It will take 9 months to birth it, then another 18 years before he/she is even close to entering the workforce. Then there is the parent issue. You can’t willingly send them into harm’s way. Being born, they would have Constitutional rights. In that time you could create thousands of automatons. Not only that, they can be created to withstand extremely hot or cold climates, be resistant to acids, etc., as a clone could not.

Seek's avatar

You can’t willingly send them into harm’s way.

People have been forcing their slaves into harm’s way for thousands of years.

Hypocrisy_Central's avatar

^ Your clone would essentially be your offspring; they would be akin to a member of the family and thus disqualified from being used as a slave any more than a natural-born relative would be.

Seek's avatar

This horse is thoroughly beaten. Thanks for not playing through your own thought experiment.

Hypocrisy_Central's avatar

^ I have come to the conclusion, from all that was said, that to deactivate a sentient machine is not murdering it; it is more like placing it in a coma. To render the automaton nonfunctional to the point that it cannot be repaired is mentally murdering it, but not really: it is a synthetic human, so it won’t age or die of old age. Mentally, because it has feelings, etc., people will associate that with human traits and thus treat the machine accordingly (there is a name for it, but it escapes me right now). If a person, company, college, etc. created the automaton, they have dominion over it, even though it has enough wisdom not to like the task given it and/or knows it was created by someone or some organization. No one will spend all that money and not have it do the job they designed it for.

Seek's avatar

It disturbs me to think you believe one person (a body containing a brain that can think and has feelings) can own another (a body containing a brain that can think and has feelings, albeit synthetic), provided they paid enough money for them.

Aren’t you a black man?

Hypocrisy_Central's avatar

^ Yes I am, but I am not manufactured; I was created by the will of the Almighty through the union of two humans. Even if I had feelings but were a program manufactured by humans, even a learning one, I’d still be a machine subject to those who created me. Why is that so hard to understand? Why would anyone create an automaton, no matter how perfect, just to let it loose on the world to do as it pleases? What would be the point?

Seek's avatar

Indeed. What would be the point?

Create a machine to do the job you don’t want to do, if you want an unquestioning slave.

But if you create life – a brain that thinks and feels and has hopes and fears – you’ve made a person. And people have rights.

ragingloli's avatar

@Hypocrisy_Central
You WERE manufactured, by a biological factory.

Hypocrisy_Central's avatar

^ Surely you jest… Put down the redacted and step away from the redacted.

ragingloli's avatar

Seems you are ignorant of even high school level biology.

Hypocrisy_Central's avatar

That is not what had me laughing.

ragingloli's avatar

Keep laughing.
One day you will boil in the FSM’s beer volcano for all eternity.
Blessed be his noodly appendage.
Ramen.

Hypocrisy_Central's avatar

Whatever makes your boat float, go for it…

mattbrowne's avatar

The best TV series ever about androids is

http://en.wikipedia.org/wiki/Real_humans

a Swedish production. Does anybody know it?

ETpro's avatar

How will we know when an android has a soul?

ragingloli's avatar

@ETpro
How do we know you do?

Bill1939's avatar

I have not seen a clear definition of soul implied in this long-running exchange. One cannot know whether a soul exists if one does not know what it is. We assume that we know what we are talking about, but we cannot be sure that what we are saying is being heard, since one’s presumed attributes of a soul may differ from another’s.

Life exists. Its existence is intimated by the animation of an object. When a machine breaks down, we say it died. Yet most do not think that it had been actually alive. For some, automatons fall into this category. Clearly, life does not require sentience. Yet it seems more than mere chemical and/or physical reactions, though it may not be. Consider the possibility that life and soul are the same thing, that life encompasses all that is alive and soul encompasses an individual’s life.

This leaves unanswered the questions of whether a machine that thinks it is alive is alive and, if it is, whether its maker has the right to kill it. If alive, it has a soul, and it would be a sin, actively or passively through neglect, to kill it.

mattbrowne's avatar

I would rephrase the question: How will we know when an android is self-conscious? There is no scientific test for a soul.

Bill1939's avatar

Put a spot on its forehead and put a mirror in front of it. A self-conscious automaton will recognize the image as itself and wipe the spot off its own forehead, not the mirror. This test works with very young children.

ragingloli's avatar

@Bill1939
Only when they are older than 18 months. Babies actually fail the mirror test before that age.
http://en.wikipedia.org/wiki/Mirror_test#Animal_species_capable_of_passing

Ergo, babies have no souls.

Bill1939's avatar

I do not think that self-consciousness correlates with soul. I do not exclude the possibility that rocks have souls, or atoms, or quarks…

Sentience permits free will to develop. Observing an android asserting a will in opposition to that asserted by its master would be evidence of its self-consciousness.

ETpro's avatar

@ragingloli Don’t digress. My question has nothing to do with whether I do or do not have a soul.

@Bill1939 Correct me if I’m wrong, but I believe you have already agreed there is no scientific test capable of even identifying something called a soul. So how are you so certain that such a thing exists, and even able to tell at what age a human acquires one? Can you help me understand how you know that?

ragingloli's avatar

@ETpro
Yes, it does. How can you even expect a machine to have a “soul” when you do not even know whether you have a soul yourself?
You are applying a higher standard to the machine than to yourself, a standard that it cannot fulfill, because you have not defined what a soul is or how you could even determine its presence.
You assume, based on nothing, that you have a soul.
Why not assume the same for the machine?

Bill1939's avatar

@ETpro, if you accept that particles and fields only exist conceptually and are not actual, then I will say the same for soul. Soul may precede genome, a cascade of causes concluding with conception. A soul may carry a purpose, of which one is a part, that continues through time after one’s part is done, or it may be the purpose, or it may be a mental construct with no relation to reality. I do not know.

ETpro's avatar

@ragingloli Sorry, I didn’t realize you were referring to my whimsical musing about how to know when an android has a soul. I don’t know if you or I have one, so I won’t know if an android has one either. But I don’t think it’s right to kill you or for you to kill me unless self-defense is involved. By the same token, I’d find it unacceptable to kill an android capable of sentient, human-like thought and action. I see no reason why carbon-based machines should have rights not accorded to silicon-based machines.

@Bill1939 I don’t “accept that particles and fields only exist conceptually and are not actual”. I just don’t pretend I know what I do not know, and that seems to bother a lot of people who love to do just that. I base my epistemology on the best evidence available today, but the one thing that I do know for certain is that there is nothing else I know for certain. Some things seem very likely: they agree with observable phenomena and predict lots of new observations that bear out. So I provisionally accept them as the best models we currently have. But I know that some part of each such model is almost certainly wrong. I also know that some of our most cherished models may prove wildly wrong due to some future observation that falsifies them. Do we live in a vast, almost infinite Universe, a truly infinite Multiverse, or the Matrix? I do not know. And it’s OK to admit you don’t know when that’s the gospel truth.
