Social Question

HungryGuy's avatar

Suppose a group of scientists created a virtual world populated with sentient, self-aware "Sim" people?

Asked by HungryGuy (16039points) March 17th, 2010

Say, hypothetically, a computer scientist created a high-resolution virtual reality world inside a high-power computer, and then, knowing what mechanism causes self-awareness, populated his VR world with a small society of fully sentient self-aware sim-people (like an advanced version of “The Sims” computer game, but not like “The Matrix” because that’s real live people wired into a VR system). Does this scientist have a moral duty to treat these sim people with justice and compassion? Or, since he made them, does the scientist have the right to regard these sim people as his “slaves” to torment and experiment on however he chooses? Bear in mind that, being a “computer simulation” the law has no bearing on what software a scientist writes, and said “sim people” are completely at this scientist’s mercy.


52 Answers

DrasticDreamer's avatar

Do parents have the right to torment and enslave their children? ‘Nuff said…

ArtiqueFox's avatar

As @DrasticDreamer mentioned, parents “create” their children and are expected to treat their offspring well. At the same time, go to any virtual pet site and create a pet…you will be expected to take care of it or negative consequences happen.

When any form of “life” is created, the creator is held responsible for it. It’s an unwritten rule we automatically assume. The same would go for your scenario – the scientist would be expected to treat his creation well. Besides, this sounds like a project of magnificent costs. I am sure his peers and the “higher ups” would expect him to handle with care – it would be strange for him to abuse such an expensive investment.

HungryGuy's avatar

Good answers so far :-)

Response moderated
Fyrius's avatar

@DrasticDreamer
I don’t agree with your logic, but I arrive at the same conclusion.

Parents don’t create their children, they only allow their children to grow into existence. To create anything as complex as even a human mind, that’s the sort of thing we might be up to centuries into the future. We hardly even understand half of what goes on in a human brain.

But regardless of their origins, sentient beings have rights.

dpworkin's avatar

Certain computer scientists would finally get laid.

lloydbird's avatar

Surely they can only be virtually “self-aware”.

ragingloli's avatar

Sentience and self-awareness -> Human Rights (or ‘Sentient Species Rights’, since in that case the very term ‘human rights’ would be racist).
The creator without a doubt has the moral, and I would prefer legal, duty to treat these virtual life forms as the law requires humans to be treated.

HungryGuy's avatar

@charmander – Troll…

@lloydbird – If it’s possible for us biological creatures to be self-aware, it must be equally possible for an electromechanical person (i.e. an android) to be self-aware. And if it’s possible to make an android that’s self-aware, it’s equally possible to make a sim person inside a VR system that’s self-aware. We just don’t know what mechanism causes self-awareness at the present time, is all.

@ragingloli – Let me play “devil’s advocate” and ask you this: Assuming that law enforcement was even aware that some scientist wrote some software on his computer, how would such a law be written telling someone that his “software” has rights?

Lightning's avatar

The scientist has the moral right to B TORTUROUS :P

Response moderated
Trillian's avatar

@DrasticDreamer and @Fyrius. Torment and enslave? Maybe not, but my entire purpose in having children was to have someone to do the dishes!~

DrasticDreamer's avatar

@Trillian Bet you’re disappointed that the dishes still never get done then, aren’t you? ;)

HungryGuy's avatar

@Trillian – Unfortunately, unlike children, sim people living in a VR world inside a computer can’t do your dishes for you :-(

ALL – As I mentioned in the description under my question, they’re more like an advanced version of The Sims game than a bunch of robots running around inside your house (which would be a completely different question). The reason I used an example of Sims-type people inside a VR system instead of a physical android is precisely because an argument can easily be made that an android should have “human” rights. But that argument is more tenuous with regard to a VR world completely contained inside a computer.

Fyrius's avatar

@Trillian
@DrasticDreamer
And now they’ve invented automatic dishwashers that only take some water and electricity. That’s a lot of effort you could have saved yourself there. :P

12_func_multi_tool's avatar

Could the scientist create a gov’t and police? Could he create a God? Are our current God or gods good enough for this virtual world? Oh dear, I’ve made it political now.

lloydbird's avatar

No, not a moral duty.
But perhaps a scientific one.

Fyrius's avatar

@12_func_multi_tool
“could he create a God”
For all intents and purposes, I think he’d be one to the virtual people.

HungryGuy's avatar

@12_func_multi_tool – I don’t see why the sim people couldn’t form their own government and police, but that wouldn’t prevent the scientist from playing smiteful god with them. They would still be helpless at the scientist’s mercy.

@Fyrius – Exactly!

12_func_multi_tool's avatar

Yes, I’m just trying to get to a responsible party. I think the system would be the god. No, he couldn’t, in my opinion.

Trillian's avatar

@Fyrius Or Dad! Like Dr. Frankenstein.
If he were to create sentience, he’d be one cool customer. How would you propose that he interact with them? Would he have an oracle? Or speak through a simulated cloud of smoke? Their bodies wouldn’t really be real, so…I guess he could just delete them if they didn’t show proper reverence.
@HungryGuy Smiteful god as in; “I will smite thee.”?

HungryGuy's avatar

@Trillian – There’s any number of ways he could interact with his sim people. He could don a virtual reality suit and appear to them in the sim world as just another sim person and they wouldn’t know the difference (until he started performing magic/miracles all over the place). Or he could appear to them as a giant flying unicorn, or whatever…

Trillian's avatar

@HungryGuy Oh, I forgot all about the virtual reality suit. And I just suggested something like that for a toy I had to invent in my psychology class! Duh.
Flying Unicorn, Pah! Shuuuuuuuuuuunnnnnnnnnnnnnnnnn!

kess's avatar

The creator will have no obligation to his creation except the purpose for which he created it.

Which means he is free to do whatever he likes with his creation once it achieves his purpose.

The purpose for which he created it tells a lot about the creator.

Trillian's avatar

@kess Right. So….go forth and replicate!

XOIIO's avatar

All I know is that if I had to pull the plug, I couldn’t. Even if they aren’t alive as we are, they know what is happening.

Fyrius's avatar

Here’s another question: if for some reason it would be most convenient to delete them all, in such a way that they’ll just painlessly pop out of existence and never know what hit them, would that be objectionable?
And if yes, would it still be if they were written without any survival instincts?

I’d say yes to both, unless they consent to it. They should have a say in their own existence.

mattbrowne's avatar

I see no difference from physical androids who are sentient. Both are based on software written by people. Both entities are capable of passing the Turing test. The problems begin when they develop the capability to evolve on their own and possibly delete the three laws of robotics. At some point we might learn: And they have a plan.

HungryGuy's avatar

@Fyrius – Well yes, he could just hit the power button on the server, and POOF! they’d pop out of existence like they never existed. If the scientist was smart, he would have made regular backups, and he could restore from any previous backup, so the sim people would have no clue that it ever happened.
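The backup-and-restore idea can be sketched in a few lines. This is a purely hypothetical toy (the `SimWorld` class, its fields, and `tick()` are all invented for illustration): snapshot the whole simulation state, let it run, then restore, and nothing after the snapshot ever "happened" from the inside.

```python
import copy

# Hypothetical sketch of "backup and restore" for a sim world.
# SimWorld and all its fields are invented names for this example.
class SimWorld:
    def __init__(self):
        self.tick_count = 0
        self.people = {"alice": {"memories": []}}

    def tick(self):
        # Advance the simulation one step; sims accumulate memories.
        self.tick_count += 1
        for person in self.people.values():
            person["memories"].append(f"tick {self.tick_count}")

world = SimWorld()
world.tick()

# Take a snapshot (in practice this would be serialized to disk).
backup = copy.deepcopy(world)

# The simulation continues; suppose something goes wrong here.
world.tick()
world.tick()

# Restore: from the inside, the later ticks never happened.
world = copy.deepcopy(backup)
print(world.tick_count)  # 1
```

The key point is that the snapshot captures the sims' memories along with everything else, so a restored world contains no trace of the rolled-back events.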

@mattbrowne – But there is an important difference. Sim people would exist inside a VR system, and they would have no clue that their world is a simulation inside a computer (and people outside would neither know nor care that some scientist was experimenting with software in a computer lab). Perhaps sim physicists would discover the scientific method and develop a science to explain the “laws of physics” and ponder why light behaves in an inconsistent manner, etc. But, unless that scientist appeared in the sky one day as a giant flying unicorn (in which case they would know that their world isn’t what it seems), they’d have no reason to suspect that their universe is really a giant virtual reality. And even if they did discern that, they’d be powerless to affect the outside world unless the scientist gave them access to the computer’s OS I/O functions in the VR application software.

But an android (though still a sim person inside the android’s processor core, no different than any one of the VR sim people) does have a physical body, can manipulate the real world, can interact with real people, and can even stand up in court and say, “Cogito ergo sum,” thus changing the fate of the world forever…
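The point about the sims being powerless to reach outside their world can be sketched as a minimal sandbox. In this hypothetical toy (the `WorldAPI` class, its methods, and `sim_agent` are all invented names), sim-side code only ever receives an in-world API object; unless the host deliberately exposes an I/O method, there is simply no handle through which sims can touch the host machine.

```python
# Hypothetical sandbox sketch: sims act only through what the host hands over.
class WorldAPI:
    """The only interface sim-side code receives. All names are invented."""
    def __init__(self):
        self.log = []

    def move(self, who, where):
        # An in-world action, recorded by the host.
        self.log.append((who, "move", where))

    def speak(self, who, words):
        self.log.append((who, "speak", words))

    # Note: no file access, no sockets, no host OS calls exposed here.

def sim_agent(api):
    # Sim-side behaviour can only call methods the host chose to expose.
    api.move("alice", "market")
    api.speak("alice", "Is our sky real?")

api = WorldAPI()
sim_agent(api)
print(api.log)
```

The design choice mirrors the scenario in the question: the scientist controls the boundary, and "giving the sims access to the OS" would mean deliberately adding such a method to the interface.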

Fyrius's avatar

@HungryGuy
“Sim people would exist inside a VR system, and they would have no clue that their world is a simulation inside a computer”
Why not? Why wouldn’t we just tell them?
In order to keep them from knowing what they really are, the creator guy would need to deliberately create a convincing illusionary world for them to live in. That can’t be a trivial task. Why go through that trouble? What for?

Well, of course he could also just give them the curiosity of a cow and give them a simple meadow to live in. But still, why hide it?

HungryGuy's avatar

@Fyrius – “Why not? Why wouldn’t we just tell them?”

Because if WE broke into the scientist’s house to tell them (assuming WE even knew about the project), the scientist would call the police and WE would be arrested for trespassing.

“In order to keep them from knowing what they really are, the creator guy would need to deliberately create a convincing illusionary world for them to live in.”

Right. That’s exactly what I implied in my question.

“That can’t be a trivial task.”

Right again. I’m sure building the first aircraft wasn’t a trivial task for the Wright brothers. So what?

“Why go through that trouble? What for?”

For science, of course—duh! Why go through the trouble of building an airplane that can carry only one person, reach a maximum altitude of 20 feet, and fly a maximum distance of less than a mile? Or what’s the point of inventing a telephone to call someone in the next room? If that’s your attitude, what’s the point of bothering to invent anything that has no immediate practical use?

But mainly, I’m merely using this whole “Sim scenario” to explore the ethical question of whether such sim people have rights—and what obligations, if any, the scientist has to them—not what scientific merit such a project might have. Though I suspect that such a project would have huge potential in the field of robotics and android design…

XOIIO's avatar

I have a question. How do we know that we aren’t talking about ourselves? Maybe we are just a self-aware computer simulation made just like people. Or aliens are doing this to study humans. Either way…

-23:06:34
-Subject: Terminated
-Replacing… Replaced

-Resume System Function.
-End Of Line

HungryGuy's avatar

@XOIIO – Give the man the gold star!!!!

Fyrius's avatar

@HungryGuy
I’m not asking you why you would want to have a colony of artificial intelligences. I’m asking you why you would want to deceive them.
Would it be bad if they knew they were programs being run on a computer? Would it lead to anything so horrible that you would be prepared to invest so much time and effort into creating an illusion for them?

And there we have another interesting question. What would be the ethical implications of giving them false beliefs on purpose? Don’t they deserve to know the truth?

ragingloli's avatar

@Fyrius
because they would be in constant fear of being, uh, terminated. If they knew for sure that it would just take the literal push of a button to end their existence, in an instant, they would do nothing but run around looking skywards and screaming “please don’t kill us”. Not unlike religious people.

mattbrowne's avatar

@HungryGuy – Well, we can’t even prove that you and I are in fact not sim people. When you stare at your monitor right now reading my comment, some smart program and mechanism might actually trigger action potentials in the neurons of your visual cortex. When we build an android and look at him he could also be part of a simulation.

Fyrius's avatar

@ragingloli
I see your point.

But surely they would only panic if their creator wrote them that way.
It would be wrong to assume they’d behave like humans by default. They’re not humans, they’re bits of machine code. With decades of hard work their programmer might succeed in making them approximate human behaviour, but he might as well go a different way altogether and create, say, beings of pure logic without our redundant instincts and innately biased ways of thinking.

For that reason, I don’t think the Turing test would be a proper measure of intelligence. It’s too anthropocentric. It relies on the notion that if you can’t mistake an AI for a human, it’s not intelligent.
There probably already exist intelligent non-human life forms somewhere out there that would fail the Turing test.

HungryGuy's avatar

@Fyrius – Oh, I misunderstood your comment.

“I’m not asking you why you would want to have a colony of artificial intelligences. I’m asking you why you would want to deceive them.”

So that I can ask hypothetical questions about justice and ethics and rights and obligations on Fluther. That’s why :-p

HungryGuy's avatar

@mattbrowne – Exactly. Just as XOIIO also said, I was waiting for someone to turn around and ask how we know that we aren’t sim people in some elaborate VR universe on God’s desktop supercomputer…

HungryGuy's avatar

@ragingloli – That’s a good reason right there for hiding the fact from them—so they don’t live in constant fear of someone hitting the OFF button.

ragingloli's avatar

@Fyrius
“But surely they would only panic if their creator wrote them that way.”
Yes, true, but it is reasonable to assume that the creator(s) would write them that way.
A self preservational instinct/urge/subroutine is, in my humble opinion, essential for the survival of any higher species, including simulated ones.

mattbrowne's avatar

@Fyrius – The Turing test is about artificial intelligence reaching the human version of intelligence. But a superintelligent machine (as portrayed for example by Ray Kurzweil) would certainly be able to simulate human intelligence and pass the Turing test.

Fyrius's avatar

@HungryGuy
Lol, fair enough.

@ragingloli
Well, yes, in a world that has dangers. But again, the writer would have to deliberately add those too.
And he could write a self-preservation subroutine that would only activate when something can be done to avert the danger. A subroutine that generates an unproductive sensation when nothing can be done is useless.

Our biological evolution has developed a psychology that works, but it’s not usually the best way to do it. It would be better to start from scratch than to model an AI on the human mind.

@mattbrowne
I’m sure it would be. But what about a machine that’s not much more intelligent than us, but just about as clever, only shaped in a fundamentally different way? It would have to adopt an entirely different mode of thought than the one it’s used to in order to trick the tester.

mattbrowne's avatar

@Fyrius – Yes, it’s a possible scenario. A sentient race in Andromeda might have launched probes, say 300 million years ago, carrying an artificial intelligence not much more intelligent than us, but just about as clever and shaped in a fundamentally different way. What would happen if they reach Earth? Would they notice Earth if in the neighborhood? Would they be able to recognize our intelligence? Learn our language by simply listening?

Well, as a supporter of the Rare Earth Hypothesis I would assume they’d recognize Earth as being special. Would they be able to receive signals of the entire electromagnetic spectrum? Very likely. There are four elementary forces in Andromeda as well. I’m not so sure about my other questions above.

HungryGuy's avatar

@mattbrowne – The Turing test is obsolete, if it was ever valid in the first place. Machines can, and have, passed the Turing test without being self-aware or sentient. I don’t know exactly how we would confirm that a machine is self-aware. Can you prove to me that you’re self-aware?
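The claim that a machine can hold a passable conversation with no sentience behind it is easy to illustrate with an ELIZA-style responder, which fooled some users as far back as the 1960s through pattern matching alone. The rules below are invented for this toy sketch:

```python
import re

# Toy ELIZA-style responder: superficially human replies via pure
# pattern matching, with no understanding or self-awareness involved.
# These rules are invented for illustration.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def respond(text: str) -> str:
    # Return the template for the first matching rule, else a stock reply.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I am worried about my sims"))
# Why do you say you are worried about my sims?
```

A handful of such rules can sustain a surprisingly convincing exchange, which is exactly why conversational performance is a weak test for self-awareness.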

HungryGuy's avatar

@mattbrowne – I don’t know how “intelligent” an alien probe would judge us as compared to their own standard of intelligence, but they would certainly be aware that we have a technological civilization by the presence of radio waves across a wide spectrum. Upon closer inspection, they would see other telltale signs such as artificial light visible from the surface and various chemical signatures in our atmosphere.

mattbrowne's avatar

@HungryGuy – You might have heard of the

http://en.wikipedia.org/wiki/Loebner_prize

Machines have not yet won the ultimate Loebner prize, and I quote:

“Originally, $2,000 was awarded for the most human-seeming chatterbot in the competition. The prize was $3,000 in 2005 and $2,250 in 2006. In 2008, $3,000 was awarded. In addition, there are two one-time-only prizes that have never been awarded. $25,000 is offered for the first chatterbot that judges cannot distinguish from a real human and that can convince judges that the human is the computer program. $100,000 is the reward for the first chatterbot that judges cannot distinguish from a real human in a Turing test that includes deciphering and understanding text, visual, and auditory input. Once this is achieved, the annual competition will end.”

About the alien probes. There’s one scenario called the

http://en.wikipedia.org/wiki/Zoo_hypothesis

and this would mean that we are the animals unaware of our intelligent observers. It would explain the Fermi paradox.

HungryGuy's avatar

@mattbrowne – Indeed. The Loebner prize is certainly a worthwhile competition in the field of AI (just as the X-Prize was for private space launches), but it’s still just a Turing test on steroids. I’m sure machines will eventually win those prizes, as Moore’s Law advances, without being self-aware. Law enforcement has had software able to recognize individual faces against a “rap sheet” for a number of years, and Google now has an app that recognizes faces to help people categorize photos by the people in them. Bleh.

HungryGuy's avatar

@mattbrowne – Yes, perhaps we are in a zoo—that’s actually an unstated sub-question to this very question that you and one other picked up on :-) We won’t know the answer until we actually meet up with ET (I hope they’re benevolent—because we won’t stand a chance if they’re not).

mattbrowne's avatar

@HungryGuy – Yep. Like Deep Blue. It’s just a matter of time.
