Social Question


Should we preemptively pass laws to give artificially intelligent life forms equal rights?

Asked by poisonedantidote (21675 points) October 18th, 2009

This is something I have thought about for a while now; I am quite sure that it will not be long until we begin to create artificial intelligence. Do you think it would be wise to give them equal rights before they are made, to ensure that they are not forced into a life of slavery and laborious, dangerous tasks? Or are such thoughts a waste of time?

Should we stop discriminating against fellow humans first, or could we fix both problems at once if we put our minds to it?


30 Answers

The_Compassionate_Heretic:

AI doesn't exist and is only science fiction. It would not be a productive use of lawmakers' time to introduce such a bill at this time.

Dog:

Laws are, for the most part, created when there is an issue to resolve. In this case there is no issue as of yet and so the need is merely hypothetical. If lawmakers were to work on the hypothetical they would never have time to address the non-hypothetical.

ragingloli:

No.
Pass laws to give ALL intelligent life forms equal rights.

DarkScribe:

Sure. We can give our computers Sundays off and free virus checks.

markyy:

Why would we spend billions on research and production to create robots if we are not allowed to exploit them? If there should be a law in the first place, it should be to prevent the creation of an AI that is sentient. Not that I'm afraid of sentient AI; I'm afraid humanity is not ready for it yet and should protect itself.

DarkScribe:

@markyy: "If there should be a law in the first place, it should be to prevent the creation of an AI that is sentient."

Until it is sentient it isn’t intelligent – it is just a program.

DarkScribe:

Are we going to assign gender to sentient AI? That will be fun: discrimination laws for computers. (Stop looking up my keyboard definition table, you pervert!)

Blondesjon:

No. The key word in this whole debate should not be intelligence. It should be the word artificial.

wildpotato:

You might be interested in Asimov's books if you haven't read them already. He writes a lot about the topic you bring up (the role of robots in society, generally speaking).

deni:

No… what? I don't get it. Why would we?

rangerr:

Artificial. They aren't real… so why would we need to fix any 'problems' with them in society?
We can't fix our own problems, so why should we help robots first?

filmfann:

Is Artificial Intelligence the crux here, or is it sentience?
I doubt I will live to the point where computers are sentient.

Zaku:

I’m with your last thought. Let’s see if we can provide rights to humans, animals, non-animals, habitats, before getting confused about computer programs. ;-)

Harp:

At what level of intelligence would those rights kick in? We don’t now consider intelligence in non-human beings to confer rights. And we don’t consider sub-normal intelligence in humans to disqualify them from rights.

The whole notion of “rights” is ultimately based on our capacity for empathy. Humans have the ability to generalize their individual desire to be treated with respect and to not be subjected to needless suffering. Because we want it, we assume that others want it as well. Our capacity for empathy then kicks in; to varying degrees, we perceive the happiness of others as our own happiness and their suffering as ours. The whole idea of rights is a formalization of that aspect of the human consciousness.

Our empathy extends easily to beings that are more recognizably like us. When they exhibit signs of happiness or suffering in ways that trigger this self-recognition, we light up with empathy. That’s hard to do with creatures farther removed from our own physical form (like whales or octopods). We tend to empathize strongly with apes because it’s very easy to translate their displays of emotional state in terms of our own. Still, we have done very little to formalize the rights of higher primates.

Some philosophers assert that any information processing system that exceeds some tipping-point capacity will develop a reflexive self-modeling; in other words, build a brain (natural or artificial) with enough processing capacity and a “self” will emerge. That self will then have a capacity for suffering. Many such non-human selves probably already exist in the animal world. The advent of the same phenomenon in a machine would be remarkable, but wouldn’t pose radically different moral questions.

In the end, while we naturally feel empathy, whether or not we go the extra step of formalizing that empathy into rights seems to come down to questions of convenience. If our interests are served by squelching our natural empathy, we deny rights to others, as white Americans did with blacks in the slave days and beyond. If we get to the point where the moral imperative seems to outweigh matters of convenience, then we accord rights.

LostInParadise:

It is premature to pass laws because we don’t know for sure what will happen, but I think it proper to at least think about it.

There is also the whole matter of what kind of laws would be required if the singularity ever occurs and we are able to perpetuate our minds by uploading them into computers. This brings up a whole bunch of ethical issues that I prefer not to think about, because it scares the bejesus out of me.

bea2345:

Bicentennial Man, a film, explores some of the issues of artificial intelligence. It also bypasses some of the implications of having machines with AI (but how much can a film contain?), such as the effects of a slave culture on the slave owner, because that is what Andrew Martin (played by Robin Williams) is: a slave. A vacuum cleaner is one thing; a humanoid robot that holds intelligent conversations is quite another.

gussnarp:

We are far enough away from an actual AI that I don't think we need to make any preemptive laws. There are plenty of people theorizing and pondering the issue, and that's good. But until we actually make a machine that can think independently, has a sense of self and actual emotions, and can communicate such with humans, there is no need for any action. When we can, well, there is a large body of literature on the subject on which to draw in the inevitable debates. Think of it this way: it is hard enough to get a legislative body to act when there is a real and urgent need. It is probably impossible to get them to act in advance when there is nothing more than speculation.

doggywuv:

All conscious beings deserve the right to not suffer unnecessarily, and all people deserve the right to live.

Blondesjon:

I think a lot of folks forget that emotions are a chemical reaction. The chemicals may be triggered by an electrical impulse in the brain, but they are chemicals, nonetheless, meant to work within the specialized system that is the human body. To truly replicate this within a machine would mean making that machine a human.

Why go to all that trouble when we can already make humans by rubbing our pee pees together?

gussnarp:

@Blondesjon You raise a valid point. What level of intelligence do we really want a machine to have? Do we only want robots because we can enslave them? If so, should we act to ensure that they never achieve true intelligence? I imagine that the only reason to do it is to create synthetic humans that are stronger, smarter, and can survive harsh environments. Then it becomes a question of cost. Is it cheaper to reproduce real humans and use protective equipment and tools (including unintelligent robots) than it is to produce intelligent machines? The follow-up question becomes: if we do this, are human beings obsolete?

Also, your definition of pee pee must be different from mine. Rubbing two pee pees together does not make a human; it requires a pee pee and a hoo hoo.

Blondesjon:

@gussnarp . . . Not to get too technical, but a hoo hoo is a pee pee as well, in that both versions do indeed make, among other things, pee pee.

We are still years away from fully understanding the human brain and light years away from producing anything near the computational power it possesses.

If God created us and we, in turn, create something that is more than us, does that make God obsolete?

gussnarp:

@Blondesjon “does that make God obsolete?” I would say yes, except that God is already obsolete.

Yes, it’s easy to get too technical when discussing hoo hoos and pee pees.

Harp:

@Blondesjon That's a very good point. Our concerns for self-preservation and pain avoidance are outgrowths of the natural selection process. We feel fear and pain and pleasure because those feelings have helped our species replicate.

If there were no massive jolt of adrenaline that accompanied the perception of a threat, would that perception of threat actually be a negative subjective experience? If there were no release of the “pain” or “pleasure” neurotransmitters, would there be a capacity to care how things turn out, including caring about one’s own existence? How much of caring is actual thought and how much is the physiological response to the thought? Would a being that could think independently but had no subjective experience of pain or pleasure even be able to formulate any sort of will or purpose or desire?

Maybe then, it wouldn’t matter in the least to such a being what became of it. And if the being doesn’t care, then what’s the point of rights?

On the other hand, maybe that existential concern could actually be hardwired into the operating system of an artificial brain so that it always favored certain outcomes. That could be a form of caring, maybe, but without the physiological component.
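
To make that last idea concrete, here is a minimal sketch in Python of what hardwired, physiology-free "caring" might look like: a toy agent with a fixed, built-in preference function over outcomes. All of the names, weights, and outcome fields below are invented purely for illustration, not a real design.

# A toy illustration of "hardwired caring": the agent has no feelings,
# just a fixed, built-in preference ordering over possible outcomes.

def hardwired_preference(outcome):
    """Score an outcome; higher means 'more favored'. The weights are
    arbitrary stand-ins for whatever the designers chose to hardwire."""
    score = 0.0
    if outcome.get("self_preserved"):
        score += 10.0                        # always favor continued existence
    score -= outcome.get("damage", 0.0)      # always disfavor damage
    return score

def choose(outcomes):
    """Pick the outcome the agent is built to favor; no subjective
    experience is involved, only a comparison of scores."""
    return max(outcomes, key=hardwired_preference)

# The agent "prefers" to keep running, without feeling anything about it.
options = [
    {"name": "shut down",    "self_preserved": False, "damage": 0.0},
    {"name": "keep running", "self_preserved": True,  "damage": 1.0},
]
print(choose(options)["name"])  # -> "keep running"

Whether a fixed scoring rule like this counts as caring at all is, of course, exactly the question being raised here.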

LostInParadise:

It may be, and it is my belief, that it is not possible to create intelligent beings without something very like emotions. There needs to be some sort of complex motivating force for a process as complex and varied as thought. The Mr. Spock model does not work for me. The emotion can be a need for recognition, curiosity, or an appreciation of beauty.

Blondesjon:

@Harp . . . Artificial caring by that definition.

And you bring up a very good point. With most of our “caring” coming from chemical reactions to what goes on around us, aren’t we in possession of an artificial intelligence of our own?

CWOTUS:

We’re more likely to give extra rights and privileges to people with Artificial Stupidity…

josie:

No. Artificial means it is not the real thing. Sort of like an artificial leg or artificial teeth. It is a facsimile. No sense in pretending it is otherwise.
@CWOTUS There is ample political pressure to give extra rights to legitimate stupidity.

TexasDude:

Sentient artificial intelligence, yes.

Standard artificial intelligence, no.

When it comes down to it, though, I typically don't support preemptively passing laws for anything.

Pandora:

We already give rights to AI. I already know some artificial intelligence. Ever worked with someone who acted like they were smart but was only faking it? Or someone who is smart and has had a lot of body work?

Hibernate:

I'd go with stopping discrimination first. But laws should be implemented, because once enough androids/robots surface, they won't like being used only for our needs.
