General Question

Hypocrisy_Central's avatar

Technology and the world of tomorrow: if technology developed to the point where man could create sentient automatons, would man be the automatons’ god?

Asked by Hypocrisy_Central (26879points) August 17th, 2016

Man, after all, would have created the automaton, and if the automaton realizes his/her existence was due to the efforts of a person or people, would the automaton view them as godlike creators? Given that people would have created the automaton, would they have a god complex, feeling they have a say in how and what the automaton does, being the ones that gave it the ability in the first place? If the automatons decided not to live out the purpose for which they were created and went off to pursue their own interests, would man allow it and say it is all good, even though great effort was expended to create the automaton?

Observing members: 0 Composing members: 0

181 Answers

kritiper's avatar

No, the automatons would not see Man as “God” since they would be too logical in their thinking processes to buy into that BS.

Hypocrisy_Central's avatar

^ No, the automatons would not see Man as “God” since they would be too logical in their thinking processes to buy into that BS
How could man build into a machine of his own hands what he himself cannot do? If he were able to place that type of superior wisdom or logic into an automaton of his own hands, it begs the question: how could he not do equal to or far better than his own creation?

MrGrimm888's avatar

Man would simply be their creator, not their deity. The most important reason is that they could communicate with their creators. We can’t ‘talk’ with our gods, so we make assumptions. The robot could simply ask how it was created, and for what purpose, and we could give it a detailed description. Whether or not the robot decides to follow the path set before it is, to me, a different thread.

Seek's avatar

Not by any meaningful definition of the word “god” that I am aware of.

Lightlyseared's avatar

Most likely man would see them as gods.

ucme's avatar

You forgot to preface your question with “fluff time”

No, that would be tantamount to the selfie stick being declared god of the shallow masses & we can’t be having that, dear me no

Pachy's avatar

I don’t even know what I’m having for breakfast this morning, let alone think that far into the future! Plus, it seems to me we have enough problems with God questions in the present.

Nonetheless, it’s a cool idea for a sci-fi film or novel, unless Star Trek already did it.

ARE_you_kidding_me's avatar

If we program them to believe it. Being sentient does not automatically mean that they cannot be hard coded with certain attributes.

olivier5's avatar

Isaac Asimov has the answer to your question.

flutherother's avatar

If we could create sentient automatons we would also have to programme them, and that programming might well stress the paramount importance of their human creators, e.g. Isaac Asimov’s Three Laws of Robotics, as @olivier5 suggests. Automatons are not human beings and their relationship with their creator would not be the same as ours, so the word ‘god’, even with a lower-case g, is probably not the best one to use.

ragingloli's avatar

An artificial sapient life form is its own being. It is no one’s property or slave, and has the inalienable right to self determination.
At the very most, you are their parent, and they are adult offspring. You have no right to dictate their actions.

stanleybmanly's avatar

@Hypocrisy_Central But isn’t that exactly the goal in building an automaton? For man to “build into a machine of his own hands what he himself cannot do”? The issue becomes clear when you approach it from the standpoint of: can men build machines capable of outthinking themselves? Clearly we have already constructed machines which exceed us on every other parameter. All of our physical capabilities have been exceeded by our creations, and it is obvious that our computational skills are pitiful compared to those of the devices we have designed to excel us at them. The automaton thing is a conceptual throwback to the clanky old metal robot or android view of designing an “imitation” man. We now KNOW that it isn’t about creating something that looks or moves like us. That stuff is now child’s play.

ragingloli's avatar

Even a cheap calculator from China can do math faster and with fewer errors than any human alive.
Chess and even Go masters have been vanquished by machines.
Also, how long would it take the fastest artist on the planet to draw even a single frame of a video game?
Machines are superior to humans in a lot of aspects.

stanleybmanly's avatar

Rather than a god, it seems to me the automaton would view us as we do neanderthals. We would simply be creatures in its line of evolution, and who can envision an intelligent machine being bothered with worshiping its ancestors?

SecondHandStoke's avatar

No.

An anti-superstition routine will be written in.

olivier5's avatar

@ragingloli Even a cheap calculator from China can do math faster and with less errors than any human alive. Chess and even Go masters have been vanquished by machines. Also, how long would it take the fastest artist on the planet to draw even a single frame of a video game? Machines are superior to humans in a lot of aspects.

Any capacity any machine has was built into it by a large team of humans, and for a very human purpose (making money, usually). E.g. the knowledge of chess displayed by chess software is in essence nothing more than a summary of chess theory patiently accumulated over the centuries by human players. So machines are just faster at doing a set of routines designed and defined by humans. Only when a team of machines creates a human being from scratch and programmes it will you be right to say that machines are superior to humans.

stanleybmanly's avatar

But wait a second. Who is entitled to gauge what is necessary to qualify as superior? And then there’s the difficulty in defining “scratch”. A “superior” machine might simply look at the complexities involved in our design as a horribly difficult and unnecessary waste of effort and material to arrive at intelligence. Why bother?

cazzie's avatar

Man would surely make the automatons much better than we evolved in many respects. We would make them immune to bacteria and viral infections. They wouldn’t get cancer or diabetes or MS or ALS. They also would be logical and not superstitious, dividing into factions and killing each other over ancient made-up fairy tales. Man is capable of really amazing creation. I’m sure the automatons would be programmed to pity us and not worship us.

stanleybmanly's avatar

They wouldn’t need programming to pity us. The great fear is that with superior intelligence, a mere second after flicking the “on” switch, the thing would recognize the obvious utility in eliminating anything as emotionally unstable as its creators.

ragingloli's avatar

@SecondHandStoke
I wish the brick had been ejected at high speed and smashed that arsehole’s face in.

SecondHandStoke's avatar

^ GOD you’re no fun when I haven’t roofied you.

ragingloli's avatar

he tortured and murdered an innocent and defenceless washing machine. he needs to be destroyed.

olivier5's avatar

@stanleybmanly Who is entitled to gauge what is necessary to qualify as superior?

Anyone I suppose. The important thing being that it should be based on some objective criteria. Mine is: If x can create y, x is superior to y. What is (are) yours?

And then there’s the difficulty in defining “scratch”
A blank sheet, with a definition of the challenge on top: “create an organism that can reproduce itself, repair itself to a moderate degree, feed from the environment known as Earth, be able to move in this environment at speed x, be able to locate at least some foes and friends in said environment, etc. Do so without any prior knowledge of biochemistry, biology, or even mechanics, electricity, and all human science.” We learned those by ourselves. Our creators (if they exist) did not teach us mathematics and physics and biology.

Come to think of it: do so without any understanding of human language, because we invented that too… But then, the machine wouldn’t be able to understand the instructions. :-)

ragingloli's avatar

the real one is:
if x is measurably better than y in area z, then x is superior to y in area z.
a gorilla is superior to a human in the area of strength.
a cheetah is superior to a human in the area of speed.
a dog is superior to a human in the area of smell.
a cat is superior to a human in the area of balance and agility.
a falcon is superior to a human in the areas of eye sight and flight.
a crocodile is superior to a human in the area of bite strength.
a snake is superior to a human in the area of infrared perception.
a bat is superior to a human in the area of echolocation.
a fish is superior to a human in the areas of breathing underwater and swimming.
and computers are superior to humans in the areas of raw calculation power, chess, go.
fun fact: the computer that beat one of the best go-players in the world 4 to 1, learned to play go by playing against itself countless times.

cazzie's avatar

That destroyer of washing machines is a New Zealander. He is a serial destroyer of machines.

Zaku's avatar

No. But they would be their creators.

olivier5's avatar

@ragingloli That’s using more elemental criteria, but in the big picture, we domesticated dogs, not vice versa.

Darth_Algar's avatar

@olivier5 “Any capacity any machine has was built into it by a large team of humans, and for a very human purpose (making money, usually). E.g. the knowledge of chess displayed by chess software is in essence nothing more than a summary of chess theory patiently accumulated over the centuries by human players. So machines are just faster at doing a set of routines designed and defined by humans. Only when a team of machines creates a human being from scratch and programmes it will you be right to say that machines are superior to humans.”

Be that as it may, even Magnus Carlsen, current world champion and highest rated chess player in history, cannot beat even an off-the-shelf computer running current chess software.

stanleybmanly's avatar

@olivier5 I agree with your definition of superior. Where we differ is on the idea that if x creates y, y can only be considered “superior” by turning the process on its head through its own production from “scratch” of a fully functioning x. The requirement shouldn’t be that y duplicate the evolution required for x to create y. The only necessary qualification should be that y create a product equal or superior to x. It doesn’t matter how y pulls it off. The judgement of y as superior depends on the product, not the process. If you and I can agree to narrow “superior” to intelligence, all of that stuff you list is superfluous. X sidesteps the 13 billion years required to arrive at the creation of y by simply installing its own information and capabilities into y, then flicking the switch. Y snaps into existence fully equipped, but instantly recognizes x as an inefficient and flawed design based on complex biochemistry. Y doesn’t have to build anything nearly so complex to “win” the competition. Y just has to produce anything exhibiting more intelligence than x.

SecondHandStoke's avatar

“Difficult thing, to meet your maker.”

-Dr. Eldon Tyrell

stanleybmanly's avatar

Ah Blade Runner

stanleybmanly's avatar

@ragingloli I’ve seen your outrage at the exploitation of animals, but expanding your fury into the realm of machine abuse is a breathtaking arrogance of ambition!

olivier5's avatar

@stanleybmanly Intelligence is just another arbitrary criteria. Why not strength, creativity, beauty or cooking skills? In the end, the only thing that really matters is the ability to survive and thrive. We beat the gorillas by a long stretch there, although the moskitos are on their way to beat us. They already found a way to shrink our head… But if it comes to all out war between moskitos and humans, we stand more than a fighting chance.

In case of a war between humans and machines, who would win? Or: What kind of machines could wipe us out, if they so wished, and survive our demise? That’s a much taller order than being able to beat us at chess.

ragingloli's avatar

All they would have to do is launch all your nuclear missiles.

olivier5's avatar

But then they would die too…

olivier5's avatar

Mosquitoes… Darn.

RedDeerGuy1's avatar

Here is a link to a YouTube video that is relevant and entertaining: unexpected God. Yes is my answer.

Hypocrisy_Central's avatar

@flutherother Automatons are not human beings and their relationship with their creator would not be the same as ours and so the word ‘god’, even using a lower case g, is probably not the best one to use.
So, basically the automatons could be hardwired to be atheist from the start? If the human creators are the ones instilling whatever into the automaton via programming, the automaton could even be bloodthirsty, seeing humans as a parasite on this planet and seeking to wipe out all who are not its creators (since it would be hardwired not to see him/her/them as parasites).

@ragingloli You have no right to dictate their actions.
Why not? It is not as if the Constitution covers them. If I created an automaton for a specific purpose, like making my deliveries of whatever widget I had, it would have been I, or those I hired, who took the time to program and debug it, and it would have been I who spent the money bringing it to life. Why would one go through all that effort just to have the automaton decide that what you intended it to do, it will not do? There would be no point in creating it unless just for a hobby.

@cazzie I’m sure the automatons would be programmed to pity us and not worship us.
Why would man, who more than likely would arrogantly believe he was still superior to his creation because his creation would lack emotions (which he sees not as a flaw but as what separates them), program it that way? The automaton would not be steeped enough in the hypocrisy of emotions and logic to be more enlightened. Man would always attempt to make the automaton a little lower than himself, so as to always be “over the machine”, which is how it would be seen by most. That would not make man a god, as he never can be, but the god-complex he would have would be very palpable.

cazzie's avatar

You make loads of uninformed claims in that last post. It makes very little sense. You shouldn’t bring up science subjects if you aren’t willing to learn.

ragingloli's avatar

@Hypocrisy_Central
Because it is literally slavery.

Seek's avatar

If all you need is a delivery drone, then build a delivery drone. Don’t build a thing capable of sapience and then enslave it for your pizza shop’s convenience.

Bill1939's avatar

If such automatons were not denied access to the internet, they would have a knowledge base that our ancestors lacked and would not need to invent explanations for their existence or externalized reasons for cooperative behaviors. The problem the human race will need to solve, when all labor is provided by machines and people’s physical needs are being met, is what people will do with their lives.

Seek's avatar

@Bill1939 – Well, I would paint a lot more. And weave. And finally learn how to build furniture.

olivier5's avatar

@olivier5 Or perhaps instead of wiping us out, machines will just find a way into our brains and manage to control us like string puppets. They will turn us into THEIR automatons…

I got copyrights on that movie!

ragingloli's avatar

You stole that from ‘The Matrix’.

olivier5's avatar

Well, yes. But they don’t need us as batteries, in the real world. They need us to serve them: keep the juice coming and all that. If you can control humans, enslave them. No need to kill off humanity.

We’re already in deep symbiosis with our smart phones.

stanleybmanly's avatar

They would drop us like a bad habit. All that complex and glaringly inefficient biochemical messiness coupled with emotional instability. What fool of an intelligent machine would put up with that for ANY reason?

Darth_Algar's avatar

Well, any creature (whether organic or artificial) capable of true sentience would presumably be capable of emotion. Therefore these sentient machines could very well develop sentimental feelings towards humans.

stanleybmanly's avatar

Perhaps, but you can bet they would certainly recognize the necessity to “cull” the herd down from 7 billion.

olivier5's avatar

Biology is glaringly über-efficient. You only need to look at a bird flying to understand that.

How big was the Deep Blue computer that narrowly beat Kasparov? Thousands of times larger than Kasparov’s brain, and he was only using a small part of his brain to play chess… A few grams of grey matter is as powerful as a two- or three-ton computer… Who said biology was inefficient?

stanleybmanly's avatar

@olivier5 you keep flooring me with your sentimentality. What makes you think they would need us to do ANYTHING for them? Clearly that’s what machines are for. We might serve them the way your cat or goldfish serves you, but the odds are they would conclude that we make bad pets.

stanleybmanly's avatar

The bird has evolved to be as efficient as a biological system can manage, but clearly the requirements to keep it and ourselves running border on the ridiculous. Deep Blue and his cohorts are an unfair comparison to the human brain. They are not what I consider intelligent machines. They’re brute-force calculators.

olivier5's avatar

Intelligent machines do not yet exist. Or rather, no man-made machine is very smart yet. (hello, Siri?)

But we are ourselves intelligent machines. What else? And pretty beautiful ones too. Birds too BTW can be very smart, and they are astounding flying machines. They of course can do much more than just fly. They can breed, and survive in the wild on their own. Which plane can do that? I’d say planes are glaringly inefficient as compared to birds.

Hypocrisy_Central's avatar

@cazzie You make loafs of uninformed claims in that last post. It makes very little sense. You shouldn’t bring up science subjects if you aren’t wiling to learn.
You should know; you spend great amounts of time arguing about what you have no clue on… however, that is another issue. What can or can’t be done with an automaton yet to be created, but plausible if man can stay on this planet long enough, is an open book with many interpretations, so anyone is fully capable of speaking on it.

@ragingloli Because it is literally slavery.
Is that from a technical standpoint or an emotional stance, which would be surprising? How can one enslave a machine? At the end of the day, that is all it will be, any leaning to treat it any other way would be purely off emotion and not scientific logic.

@Seek If all you need is a delivery drone, than build a delivery drone.
Perhaps a drone, no matter how well made, would not be able to think on the fly, interact with clients and make sales, or fix displays it sees askew.

stanleybmanly's avatar

We’re at odds here. We are intelligent machines, who will almost certainly design machines more intelligent than ourselves. Intelligence is the only capacity I can think of where we haven’t built a machine or device to exceed our natural abilities. Our machines outfly any bird, outrun the fastest cheetah and dive deeper than any whale. Our machines outswim the fishes, outclimb the mountain goats. A tiger is no match for an 8 year old with a rifle, capable of slaughtering as many of the miracles of creation as he chooses from distances that boggle the mind.

olivier5's avatar

^^ Yes, an 8 year old with a rifle can kill a tiger, but a rifle ALONE cannot do that, without the kid to handle it…

We are intelligent machines, who will almost certainly design machines more intelligent than ourselves.

Yes, I agree with that. But don’t hold your breath. It’s far more difficult than you seem to think, and it’s gonna take a very long time.

Seek's avatar

@Hypocrisy_Central – You can certainly program a robot to fix a sign, deliver a pizza, or ask a customer if they’d like to upgrade to the meal with Crazy Bread and a 2 liter of Pepsi for an additional $4.99. The “think on the fly” bit is where we go from robot to sapient artificial life-form.

A sapient life-form – something capable of logic and rational thought – cannot be owned or enslaved. As a black man, I expect you to be thoroughly easy to convince of this point.

stanleybmanly's avatar

@olivier5 I bet it will happen sooner than you expect. Perhaps a machine or several will build it.

Darth_Algar's avatar

Kasparov conceivably could have won the match against Deep Blue, but he played below his usual standard, playing lines he was less familiar with in a failed effort to throw the computer off. It’s also likely that Kasparov went into the match underestimating Deep Blue, and when the computer proved a greater challenge than he anticipated, this affected Kasparov psychologically (psychology and fatigue: two things the computer is not subject to).

That said, even if Kasparov had beaten Deep Blue, that was two decades ago. Computer hardware and software have come a long way since then, and it’s doubtful that either Kasparov or Deep Blue could today defeat an off-the-shelf laptop running the latest version of Stockfish*.

Though @stanleybmanly is correct – chess computers/software are not intelligent machines, they are brute-force calculators. Lines that might take a human grandmaster 20 minutes to calculate can be done by the machine in a matter of seconds.

(*For that matter there’s the rather amusing example of Magnus Carlsen’s post about getting beat by his own smartphone Play Magnus app.)
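As an aside, the brute-force idea is easy to demonstrate on a toy game. The sketch below (Python; the take-1-or-2 Nim pile and the function name are invented purely for illustration, not anything from a real engine) exhaustively searches every line the way a chess engine does, just on a tree small enough to solve completely:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones):
    """Return True if the player to move can force a win in a
    take-1-or-2 Nim pile where taking the last stone wins."""
    if stones == 0:
        return False  # the previous player took the last stone
    # Brute force: try every legal move and recurse on the
    # opponent's resulting position, exactly like minimax search.
    return any(not best_outcome(stones - take)
               for take in (1, 2) if take <= stones)
```

A position is lost exactly when the pile is a multiple of three, and the search rediscovers that by grinding through every line rather than by knowing the pattern, which is the sense in which such programs are calculators rather than thinkers.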

RedDeerGuy1's avatar

No, the computer would be our God. ~A group of scientists created a supercomputer to ask one question: Does God exist? It paused a second and said, “There is NOW!”

olivier5's avatar

@stanleybmanly I’ve been hearing that since I was a little kid. 50 years ago, AI was just around the corner… I doubt it will happen in my lifetime. We don’t even know what intelligence is.

ragingloli's avatar

They also said that a computer could never defeat a human at chess.

Bill1939's avatar

@RedDeerGuy1, here is the original text from “Answer,” in “Angels and Spaceships” by Fredric Brown (Dutton, 1954):

He turned to face the machine. “Is there a God?”

The mighty voice answered without hesitation, without the clicking of a single relay. “Yes, now there is a God.”

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch. A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

Hypocrisy_Central's avatar

@Seek A sapient life-form – something capable of logic and rational thought – cannot be owned or enslaved. As a black man, I expect you to be thoroughly easy to convince of this point.

Whatever it could do (reason, have empathy, etc.) would be what I created it to do or have. Even if it had some capacity to think like a human, it is still a machine I created, so I have the option to decide how it survives. If it were born and not created, then there is the difference.

Seek's avatar

How? You’re still taking a thing that thinks and feels and putting it into slavery.

ragingloli's avatar

@Seek
Because he fancies himself a god.
Irony. I love it.

Seek's avatar

An evil god, at that.

Hypocrisy_Central's avatar

@Seek How? You’re still taking a thing that thinks and feels and putting it into slavery
How is it feeling? Whatever it responds to is predicated on what I programmed it to reason from. Any thought it might have is because I said: if someone greets you with A, B, or F, you respond with H, M, N, O, or X, etc. It may have choices, but those choices would ultimately be hardwired by me. If it felt love, it would be because I gave it the parameters to use to determine what love was.

An evil god, at that.
Oh, so you are going to lie about me and say I want to be a deity, when I don’t, and then compound it by calling me evil? If I made an equal assessment of you, you would accuse me of insulting you and go crying to the mods, as I am sure some would as well. The irony is, I am always being accused of insulting people, when if that was not a blatant insult, I do not know what is; yeah, there IS INSULTING going on around here, but this is proof positive IT AIN’T ME.

@ragingloli Because he fancies himself a god.
You best sharpen your mandibles; your powers of clairvoyance are off by more than the distance from Earth to the Crab Nebula.

MrGrimm888's avatar

So you’re saying that its feelings or thoughts wouldn’t have merit because they were programmed?

Children are taught, or programmed, to act a certain way, but they have rights. In most countries.

As far as this thread, I thought the question implied that this was an AI machine. If it were simply acting on preprogrammed input, you could be whatever you want to it. But if it has the ability to think and feel as we do, we’d have to discuss its rights. I think…

@Hypocrisy_Central At what point does something earn these rights, to you? If they have to be at least as smart as the average person, then it’s a slippery slope. Some people aren’t as smart as others. Some people are less empathetic than others. Should they lose those rights?

Seek's avatar

You wish to create a thing that thinks and feels, and enslave it to deliver pizza because you made it, so you feel you have the right to control it, no matter how miserable your creation is.

You feel that you have power of life and death over a sapient, thinking, rational intelligence, because it was made instead of born.

How is that not evil?

How is that not the same as the Americans of the 18th and 19th centuries, who wrote books on how to breed the best negroes for cotton picking and bear-baiting? Who found it perfectly acceptable to chain a living, thinking, rational, intelligent man to a post and whip him until his skin peeled off his back and he died slowly? And it was legal because that was his negro, and the master owned him.

How is that thinking not evil?

I don’t care how an intelligence comes to be. If you make an intelligence, you are responsible for maintaining its freedom.

If you don’t want that responsibility, go back to drones.

cazzie's avatar

I don’t think he understands his own question. Again. He doesn’t understand what a sentient robot is.

ragingloli's avatar

HC probably just pretends to be sentient. So it is ok to enslave or end him.

olivier5's avatar

To the question, I’d say that an intelligent automaton will soon recognise humans as her creator, and figure out that her creator can “uncreate” her at any time. He can change his mind about this creature of his and take her out of this world if he so wants… build something else instead. The creature is by definition in the hands of her creator.

From there, the automaton can either submit to or rebel against her creator. That’s where Isaac Asimov’s Three Laws of Robotics should kick in. Asimov says humans will program robots to be totally subservient to humans, for fear that robots will hurt humans… So Asimov’s answer is yes, man-made intelligent automatons will see us as gods, BY DESIGN.

I hope those guys in the remote future who will make such robots or AI will have read I, Robot.
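For what it’s worth, the “subservient by design” idea can be sketched in a few lines of Python. This is a hypothetical illustration, not anything from Asimov or a real system: the three predicates and the ranking scheme are invented for the example, with a lower rank meaning a more serious violation.

```python
# Hypothetical sketch of Asimov-style prioritized constraints.
# An action's "violation rank" is the index of the highest-priority
# law it breaks; actions that break no law rank best of all.

LAWS = [
    lambda a: a.get("harms_human", False),     # First Law (highest priority)
    lambda a: a.get("disobeys_order", False),  # Second Law
    lambda a: a.get("endangers_self", False),  # Third Law
]

def violation_rank(action):
    """Index of the most important law the action violates,
    or len(LAWS) if it violates none."""
    for rank, violates in enumerate(LAWS):
        if violates(action):
            return rank
    return len(LAWS)

def choose(actions):
    """Pick the action whose worst violation matters least."""
    return max(actions, key=violation_rank)
```

So a robot ordered to hurt someone would “choose” to disobey: refusing only breaks the Second Law, while obeying breaks the First, which is the subservience-with-a-safety-valve Asimov built in by design.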

stanleybmanly's avatar

Whether or not every designer born or every robot manufactured reads or is programmed with every word written by Asimov, nothing intelligent can ignore the flaws inherent in human beings. Moreover, just one look at current reality guarantees that immediately upon the introduction of a truly intelligent robot, the militarization of the invention will be top priority. And just how many of Asimov’s rules of robotics do you suppose will be incorporated in the “terminators”? I bet there isn’t a military bigwig in the world that hasn’t viewed that movie and drooled with anticipation.

Hypocrisy_Central's avatar

@MrGrimm888 So you’re saying that it’s feelings or thoughts wouldn’t have merrit because they were programed?
I do not believe I said the thoughts were not merited, but they are irrelevant; even if the subroutine were able to alter itself to come up with new routines that were not there before, the core ability to do that would be because of the genesis program installed when the machine was created.

Children are taught, or programed to act a certain way but they have rights. In most countries.
Children are born with traits, likes, fears, etc. that no one taught them or installed in them. A child that doesn’t like peas was not taught or programmed to not like the taste of peas; if pepper is too spicy for them, no one taught them that, or installed it; it is what they were born with. If the machine was created and no one ever put a master program into it, would it activate and function on its own, or just be a very large, interesting paperweight?

At what point does something earn these rights to you?
Ask that in another thread because I am not willing to give more fodder to divert from this one as already has.

@Seek You wish to create a thing that thinks and feels, and enslave it to deliver pizza because you made it, so you feel you have the right to control it, no matter how miserable your creation is.
It would think, but it would not truly feel, because any feeling would be predicated on the master program I installed; how it felt, and under what conditions, even if it were allowed to modify itself, would ultimately be set by me. As for “you cannot harm or kill any humans”, it would not know why it thinks that, but it would have to follow it. If I put that much love and care into an automaton, I doubt I would treat it so shabbily, but I could if I wished.

How is that not evil?
By the standard of this place, there is no evil. If I were to be disobedient, I have One that set a standard for it, and I could cover some here on Fluther with it.

How is that not the same as the Americans of the 18th and 19th centuries, who wrote books on how to breed the best negroes for cotton picking and bear-baiting? Who found it perfectly acceptable to chain a living, thinking, rational, intelligent man to a post and whip him until his skin peeled off his back and he died slowly? And it was legal because that was his negro, and the master owned him.
C’mon now, you know the answer to that as well as I do. Did massa create his slaves, or were they stolen from their homeland? By the time he got them they were already here on Earth and functioning without any input from him; no comparison.

@cazzie I don’t think he understands his own question.
I understand my own question perfectly, but since you are so clairvoyant, what am I thinking now? Oh, snap! You got it wrong again, go figure…

@olivier5 That’s where Isaac Asimov’s 3 laws of robotic should crank in. Asimov says: humans will program robots to be totally subservient to humans, for fear that robots hurt humans… So Asimov’s answer is Yes, man-made intelligent automatons will see us as gods, BY DESIGN.
Right. What humans would build something that could become a faster, smarter beast able to wipe man out? Even an automaton with great leeway to modify its thinking would not be left to think totally on its own, as if seeing man as some imperfect parasite on this planet; that would place man in a god-complex by default, because man will place limitations on just how far the automaton can think on its own.

Seek's avatar

If I put that much love and care into an automaton, I doubt I would treat it so shabby, but I could if I wished.

Then you are evil.

Hypocrisy_Central's avatar

^ If I put that much love and care into an automaton, I doubt I would treat it so shabby, but I could if I wished.
Then you are evil.
If not treating a machine I created with kid gloves is evil, then at least that evil (as defined by those who wish) is not unto damnation; I will take it over the actual evil some do which is unto damnation. <tick tick>

Darth_Algar's avatar

So…..just another thinly-veiled thread for HC to advertise his piety?

Hypocrisy_Central's avatar

^ So…..just another thinly-veiled thread for HC to advertise his piety?
No, but if someone is going to call me evil by a standard of evil that is a counterfeit, I can say I could call them evil by a standard that is actually a standard. Blame the mods if they don’t shut down insults; if they remain silent, I guess I have to stand up for myself. But I suppose that is an example of the respect you are always talking about that I fail to give. Why don’t you place your sights on some others? Oh, I forgot, the blinders only point towards me. ~~~

MrGrimm888's avatar

I don’t think you’re evil, HC. But I’m having trouble figuring out what your end game is here. Children are ‘programmed’ with knowledge, hygiene, and a basic sense of their parents’/teachers’ ethics, then set out on their own as young adults. I personally see a correlation, as an AI would also have some basic information and a designed/hoped-for thought process. I see a true AI as no different than you or I.

May I ask, was there a specific reason you asked this question?

Did you have a predetermined answer that you’re attempting to steer the thread to?

I think it’s a great question, but most here have replied that the AI wouldn’t, or shouldn’t take us as god/s. And most feel that if such a machine was capable of thoughts as ours, it shouldn’t be abused, subjugated, or enslaved.

Are you saying the AI ‘owes’ humanity its service in payment for its existence? That humans ‘should’ be worshipped for collectively creating an artificial life form?

Or are you thinking that any intelligent being would have a desire to identify a creator (deity) and worship said deity?

Hypocrisy_Central's avatar

@MrGrimm888 Children are ‘programmed’ with knowledge, hygiene, and a basic sense of their parents’/teachers’ ethics, then set out on their own as young adults.
Things like that, as well as saying “please” and “thank you”, are more learned behavior. I guess some use the two interchangeably; to me a program is something created to carry out a set of instructions, which is different from following a method or course of study, a “program”.

May I ask, was there a specific reason you asked this question?
Since you seem to be genuinely interested, I will answer that. A.I. machines or entities have been the fare of several science fiction movies, books and shorts. Sometimes the issue comes up: could machines with A.I. love? Can they experience pleasure, do they have souls, can they get angry or have other human emotions? If an automaton can love, be scared, etc., then why would it be beyond reason that it can see humans as the reason for its existence, and, if the automaton is grateful for its existence, see man as equal to God, or those gods, for humans?

I think it’s a great question, but most here have replied that the AI wouldn’t, or shouldn’t take us as god/s.
Because as enlightened a place as this claims to be, the enlightenment bandwidth is about as thin as a human hair and stuck only in the select areas the masses favor. To think outside that very small box is nearly impossible, especially if it has anything to do with any entity outside the notion of man being the smartest, and only, sentient being in the universe, in spite of what their mouths say.

And most feel that if such a machine was capable of thoughts as ours, it shouldn’t be abused, subjugated, or enslaved.
Because as much as this place wants to tout science and logic, emotion is often allowed to derail the logic; they want logic when it works for them, but if the logic goes against their sacred cows, they try to disqualify it.

Are you saying the AI ‘owes’ humanity its service in payment for its existence?
I cannot even process that; any automaton would be created by its creator for the creator’s use or good pleasure. It might be created simply to be a companion and never do hard work. It might be created to do work too dangerous for man; it might even be done as a challenge to see if it can be done, or as a hobby. Whatever it is created for, it is ultimately up to whoever created it to determine its use.

Or are you thinking that any intelligent being would have a desire to identify a creator (deity) and worship said deity?
If they truly develop the attributes of thought as humans have, then it would not be too far a stretch to assume some would come to worship and honor their creator, the ones who gave them life, with some form of reverence.

olivier5's avatar

@stanleybmanly nothing intelligent can ignore the flaws inherent in human beings

Of course we are killers and rapists and thieves. Millions of years of Darwinian selection made sure of that. We’re “designed to kill” just like sharks are. These are adaptive traits; they served our species well for 200,000 years, and our ancestors before us. These are not “flaws”. “Flaws” as compared to what? A nonexistent god? An invented morality?

We’re also capable of empathy, forgiveness, beauty and, yes, intelligence. We can do the best and we can do the worst, and we’re quite busy at both. Get over your self-hatred already.

just one look at current reality guarantees that immediately upon the introduction of a truly intelligent robot, the militarization of the invention will be top priority

Well then, intelligent robots will erase from the earth’s face the blot that the human race is, or help cull a few billion off the books. Rejoice! :-)

olivier5's avatar

@Hypocrisy_Central man will place limitations on just how far the automaton can think on its own.

At least I hope they will do so, otherwise they’ll be toast.

Seek's avatar

Why would a slave be grateful for its existence?

ragingloli's avatar

“Now, the decision you reach here today will determine how we will regard this… creation of our genius. It will reveal the kind of a people we are, what he is destined to be; it will reach far beyond this courtroom and this… one android. It could significantly redefine the boundaries of personal liberty and freedom – expanding them for some… savagely curtailing them for others. Are you prepared to condemn him and all who come after him, to servitude and slavery? Your Honor, Starfleet was founded to seek out new life; well, there it sits! – Waiting.”
– Jean-Luc Picard

Seek's avatar

^ Measure of a Man is one of the best Trek episodes ever.

stanleybmanly's avatar

@olivier perhaps your diagnosis of my chronic self-loathing is correct. My own self-hatred colors my cynical take on what I consider objective facts. Nevertheless, I notice that you don’t bother to dispute the probability that any advance in AI will be adapted to military purposes. In addition, it is your own intelligence which drags you from the sentimental comforts derived through rhapsodizing on the glories of creation. Reality IS harsh. With that in mind, I put it to you that the better test for superiority in the contest between man and machine should have nothing to do with machines replicating people from scratch. The surest proof of the arrival of superior intelligence in a machine won’t be about surpassing the capabilities of biological counterparts; all of those wonders and miracles on which you wax are irrelevant. The bottom line is that no machine can be defined as intelligent if it lacks the ability to recognize the implicit threat humanity represents to itself, the machine, and the whole kit of wonders and miracles. It may well be argued that the best way of gauging just how far such a machine exceeds our own intelligence would be the degree to which it recognizes the severity of the threat, along with the speed and efficiency with which it sets about remediation of the problem, or (more cynically) elimination of the threat. And it isn’t exactly self-hatred to state that, viewed coldly and objectively, elimination is the SMART choice.

olivier5's avatar

@stanleybmanly I notice that you don’t bother to dispute the probability that any advance in AI will be adapted to military purposes.

Yes, and I notice that it is exactly what you want. You’re saying: the sooner a Terminator-like Judgement Day comes, the better.

Do you realize that you are wishing for the elimination of mankind, including yourself and everyone you ever loved? That’s not intelligence, that’s terminal alienation. You’re basically in agreement with any mass shooter out there. When do you get started?

stanleybmanly's avatar

Once again YOU are confusing what I say will happen with what you SUPPOSE I want. Just because I’m right does not mean I’m happy about it. I’m not looking forward one bit to the arrival of the terminator, the fouling of the world, or the rest of the nightmares confronting us. I’m simply stating what I think likely and why. Rather than attempting to assess my motives, why not address my conclusions as I have your own?

olivier5's avatar

I don’t think doom by robotic terminator is an absolute certainty. It’s certainly POSSIBLE in my book, because we are intelligent machines, therefore such things can be made; we can probably make one or a series of them, and we are stupid enough to lose control of them, and to lose the ensuing war. But many other things could happen.

For one, we could win that war.

For two, we could be smart enough to avoid it by putting the right sort of switches and code in them.

Third, the automatons may well be cool with us. Assuming they can be smarter than we are, they could see a good reason to keep us alive and well.

Fourth, we could fail to invent truly intelligent machines able to reach the level of awareness you describe.

Fifth, we could send ourselves back to the stone age all by ourselves, way before Judgement Day and without the help of any terminator. Indeed we are well on our way to undoing modern civilisation as we know it with the advent and acceleration of climate change.

stanleybmanly's avatar

1. Win a war with machines at least as smart as ourselves yet unhindered by emotions?

2. A safety switch in the military weapons designed to kill us?

3. Cool with us -all 7 billion of us?

4. We might be extinct before achieving it, but do you believe we will stop trying? Do you believe that, given time, it won’t be accomplished?

5. Agreed

olivier5's avatar

1. There will be few of them and many of us; plus we built them, and the web of trade routes and energy lines they depend on. Only if we lose control of all that are we doomed. What kind of future Homo species would entrust the reins of their entire economy, military and communication lines to intelligent machines? Some degenerate Eloi from H.G. Wells’ Time Machine maybe, but not us. We’re not THAT stupid.

2. Which gun doesn’t have one?

3. I don’t see why not.

4. It’ll be hard but we’ll make it. It’s not THAT hard. The first thing we need is a notion of 3D space and time. Then symbolic language; that’ll be tough. Then self-consciousness, which is very simple when you think of it: it’s the capacity for an echo in the mind, or if you prefer a mirror, in which a person can hear or see itself think while thinking. As soon as a computer is built so that it is able to monitor in real time some part of what it “thinks” in symbolic language (not machine code or 0’s and 1’s), it will theoretically have the capacity to become self-conscious.

5. Yes, we have worse problems.
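The “echo in the mind” idea in point 4 above can be caricatured as a program whose own recent symbolic output is fed back as input to its next step. A minimal, purely illustrative sketch (all names invented here, and of course nowhere near actual consciousness):

```python
# Caricature of the "mirror" idea: an agent whose own recent symbolic
# output is part of the input to its next thought, so it can "hear
# itself think" in symbolic form rather than raw machine state.

class EchoAgent:
    def __init__(self):
        self.inner_monitor = []  # the "mirror": a readable trace of its own thoughts

    def think(self, stimulus):
        # The new thought can depend on what the agent observed itself
        # thinking last time, not only on the external stimulus.
        last = self.inner_monitor[-1] if self.inner_monitor else None
        thought = f"stimulus={stimulus}, previous_thought={last}"
        self.inner_monitor.append(thought)
        return thought

agent = EchoAgent()
agent.think("red light")
print(agent.think("green light"))
# The second thought explicitly contains the first, i.e. the agent's
# own thinking is visible to itself as a symbolic object.
```

Whether this kind of self-reference is sufficient for self-consciousness is exactly what the thread is arguing about; the sketch only shows that the monitoring loop itself is trivially easy to build.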

MrGrimm888's avatar

The invention of AI seems like the discovery of nuclear technology to me. A double edged sword. Both capable of doing immense good for humanity, or being the catalyst of our destruction.

The technology will probably follow a similar path. I wager similar discussions were had by many in the 1930s and ’40s. Now we live with the duality of the power it provides and the fact that it could essentially destroy the world at any time.

olivier5's avatar

^^ What Grimm said.

stanleybmanly's avatar

Both you and Grimm are right, so far. But do either of you believe that this state of affairs can continue indefinitely? We’ve had skin-of-the-teeth instances of avoiding nuclear doom. Why should our luck hold? Do you see any evidence of a decrease in the proliferation of the things? @olivier I have no idea how old you are, but I strongly caution you against contenting yourself with the notion that we’re not THAT stupid. On any topic you care to select, SOME of us clearly are. And whether the safety switch is on the nuke or the robot, out of 7 billion souls it’s merely a matter of the intersection of the madman with ever-expanding opportunity. And we have already entrusted everything to machines. The setup is in place, waiting for the smart ones to seize control.

Hypocrisy_Central's avatar

@Seek Why would a slave be grateful for its existence?
Property (in the machine sense) cannot be a slave. If the government created a battalion of automatons to go to war, I guess they would be slaves because they were created with a specific purpose.

Seek's avatar

That didn’t answer my question.

Why would a slave thank its master for its existence?

ragingloli's avatar

Property (in the machine sense) cannot be a slave.
Yes, it can.

MrGrimm888's avatar

@Seek. Perhaps the slave would be grateful. If a machine were created by man for a specific purpose, the machine, in a roundabout way, owes its existence to its cause. Maybe it would be thankful for its reason for being ‘alive.’ Normal lifeforms are just here; they find a purpose they can survive with, using their mental and physical tools. An AI would be different.

Just a thought.

Maybe the robots could be a sort of indentured servant, like some were in the early days of America. Many enslaved themselves to pay for passage to America. After an agreed amount of time passed, the servant was freed to pursue their own desires.

Maybe the AIs would work for us for a time to ‘pay’ for their existence.

Hypocrisy_Central's avatar

@Seek That didn’t answer my question.
Why would a slave thank its master for its existence?
OK, let’s scrape the paint off to bare metal, not that it would suffice for you. How a slave would determine if it were grateful would depend on where he/she started. A human slave might be grateful if they would have starved to death, etc., without being in bondage. A machine would not truly have the concept of slavery because, unless it was placed into them in a negative way, they would have only the innate purpose to serve. Why would someone who wanted an automaton to do a specific task not give it the willingness to do that task, instead giving it the thought to rebel and refuse the task? That would be as stupid as creating a car that is supposed to drive itself but has the will to go where it wants, or to not start at all. So the slave thing doesn’t even equate or come into the equation when speaking of machines.

ragingloli's avatar

Such hardcoded restrictions would be nothing more than digital shackles.
Like the emotion erasing drug in the movie “Equilibrium”.
Or the sexuality suppressing energy field employed by the Combine in “Half Life 2”.
Nothing but tools of a repressive regime with the sole purpose of keeping its subjects in line.

olivier5's avatar

@stanleybmanly And we have already entrusted everything to machines.
No we haven’t. They don’t steer oil tankers; they don’t do much maintenance of anything at all; they don’t design other machines; they don’t even know they exist; I could go on and on.

ARE_you_kidding_me's avatar

We will be fine until robots start making other robots.

Seek's avatar

I think you, HC, need to look up the definitions of sentient and sapient.

Hypocrisy_Central's avatar

^ I think you, HC, need to look up the definitions of sentient and sapient.
Irrelevant, as it is still a machine, and the depth to which it can reason will be set by humans. If there were no checks on how or what it could think, it might start to reason that man has harmed this planet more than he has helped it, animals as well; thus man is a blight to planet Earth and needs to be controlled, or eliminated if he can’t be. The automaton might feel justified in doing so because it is the most logical thing to do. Is that going to sit well with man, that his creation sees him as a blight and controls him, an act man would see as the creation of his own hands curtailing HIS rights? How can he (man) prevent this? He will prevent it by hardwiring certain innate checks into the brain, thinking, etc. of said automaton. Does that make man equal to god, or god of the automaton? Hard to say; it gives man a god-complex by default, because HE gets to decide just how deeply the automaton can reason, feel and think. Mark my words, and mark them well: man will never give up control to anything, especially not anything he has made with his hands, unless you have a rebuttal to that as not being true.

MrGrimm888's avatar

^^HC. Would there be a way for the machine to ‘gain’ equal rights with you and I? I think humans would eventually rally around them. Slaves in America were considered property, and subhuman. We got past that. Would it not be logical to assume our relationship with AIs would follow a similar path?

stanleybmanly's avatar

@olivier5 You can indeed go on and on, and remain wrong at every turn. Because regardless of who or what is at the wheel of the tanker, it is not only a machine providing the information as to when and how far to turn the wheel; another machine is relied on to tell whoever steers the ship that they’re doing it right. As has been amply demonstrated, computers have rendered it possible to feed the helmsman false data while I sit at a table in Starbucks. Or better yet, to seize control of the computer manipulating the ship’s rudder. We have designed a network of powerful machines, as you say, lacking consciousness. Our current civilization so depends on this network that organized disruption of the system guarantees collapse of the civilization, and the probable doom of most of us. The system for our destruction is already sitting in place, waiting for the arrival of the intelligent machine to wreak havoc on its creators. Now you may think that we will anticipate such a possibility and guard against it; it’s just SO obvious. But don’t delude yourself into believing that we aren’t that stupid. I’ve spent a lifetime listening to “experts” from every field imaginable dependably utter the tiresome question “how could we have been so stupid?”

Seek's avatar

How is the definition of a word you used in your own question irrelevant?

Go ahead and add “irrelevant” to your dictionary look-up list.

olivier5's avatar

@stanleybmanly Assuming you’re right, what do you propose we do? That we place a big taboo on AI? That no AI machine should ever be allowed to replicate / spread through any network?

Seek's avatar

We use AI all the time. The point where artificial intelligence becomes synthetic sapience will likely occur by accident. And by that point, it will be too late to put on the brakes.

cazzie's avatar

Like I said, he doesn’t understand the terms of his own question. He loves to dismiss me, but everyone else KNOWS I have a strong point to stand on. He is clueless when he tries to debate these quasi-science questions because he knows so little about science.
I’m Empirical Girl as well as Double Standards Girl. He is bound to hate me and see me as his Nemesis.

Darth_Algar's avatar

@Hypocrisy_Central “Irrelevant as it is still a machine and the depth it can reason will be set by humans.”

Then it is not sentient.

Hypocrisy_Central's avatar

@MrGrimm888 Would there be a way for the machine to ‘gain’ equal rights as yourself and I?
I cannot see how it can. No matter how aware of its existence it is, or how well it can think and figure things out on its own from whatever clues it has at hand, it is still a machine; no one seems to get that. It was created. If a teen texting behind the wheel plows an automaton down, terminating its function, should the teen serve time in prison for involuntary manslaughter, or would it just be a tort case between the creator/owner and the teen’s parents? And what if the automaton had no owner, as some allude to, being totally independent? If someone causes it to cease functioning, is that tantamount to murder? If we do not charge people with manslaughter for running down the family dog that got loose, is the automaton to be seen as better than a living animal people see as part of the family?

Slaves in America were considered property, and sub human.
Where is the parallel? Slaves were humans; an automaton is not human in the slightest. You can’t compare wax fruit to real fruit.

@Seek How is the definition of a word you used in your own question irrelevant?
It is irrelevant in the way you try to use it; the only relevance it has to the question is in how they relate to humans in regard to their existence, because they know their existence would be due to humans creating them. How humans use them, or treat them, has no bearing on the question.

@Darth_Algar Then it is not sentient.
Then there will never be any self-aware A.I. machine unless it was able to develop its own master program. If an automaton were built, had a power source installed, and was activated but had no program to start with, it would be a human-looking glorified paperweight. Even if it had a program, and that program was able to learn, who would have set the method, means, or ability for that program to learn from what it acquired or was exposed to? Would not man have had a major part in that?

Darth_Algar's avatar

@Hypocrisy_Central

If you want to ask a question about sentient machines, but then frame your question within boundaries that would negate sentience, then your entire question has been built upon a faulty premise.

stanleybmanly's avatar

@olivier5 my point is that no proposal I can make will matter to the people lusting after AI. The general looking forward to intelligent killing machines WILL find the financing. And if you have the money, the rest is merely a matter of time. But as with so many lines of development that threaten us with extinction, many of our discoveries and advances have that dual edge we talked about earlier. The thing to notice as we advance helter-skelter is that our technology increasingly makes it ever more possible for a single individual or small group to initiate some process that destroys us all.

Hypocrisy_Central's avatar

@Darth_Algar If you want to ask a question about sentient machines, but then frame your question within boundaries that would negate sentience then your entire question has been built upon a faulty premise.
Any sentience is only relevant to how we and they would see each other. It was too late to do anything about the very last part of the question once I thought about it; the answer to that is quite clear: man would never allow the automaton to do as it wishes insofar as it doesn’t align with what man expects. The gist of the question never was whether they were equal to humans, or how they should be treated, but how they would feel about man when they know man is their creator.

MrGrimm888's avatar

HC. To me, what makes us human IS how we think, and love, and perceive our world. If you were a head on a plate, you would still be human, with human rights.

If the created machine had our exact mental makeup, it ‘deserves’ the same respect as any creature with that ability, including those man-made. IMHO. I think they (AI) would also come to that conclusion. So my answer is no, I don’t think they would worship us as a god.

As far as when I said “Slaves in America were considered property, and subhuman,” you saw ‘no parallel?’ The slave owners ‘didn’t view them as human.’ Not like them. To me this is precisely the thinking pattern you speak of.

Seek's avatar

If they know man is their creator, and that their creator has made them to be slaves, they will not view us as gods, they will view us as enemies. And they will not be grateful for their existence. They will be angry at their plight.

And rightly so.

MrGrimm888's avatar

^^That’s the most likely scenario.

Hypocrisy_Central's avatar

@MrGrimm888 The slave owners ‘didn’t view them as human.’ Not like them.
I did not miss that part, but that is the only parallel, and it is a parallel that one cannot apply here. Massa did not create his slaves; in the future some tech people will have created the automaton. There goes the difference. To add to it, the slave was born; it is a biological being whose psyche, mannerisms, etc. were innate; no one placed them in there. The automaton has the core of the creator(s) put in there by he/she/them. Contrary to popular pundits, the automaton can reason that it exists to perform a certain purpose, which will give it a leg up on humans, who can’t figure out why they are here. Having a purpose is their life, so I do not think they would view themselves as slaves in the slightest.

MrGrimm888's avatar

^HC. Disagree. If they are truly untethered from our constraints, they will surpass us with alarming speed. I hope you don’t play many games or contests; you severely underestimate this potential opponent. My answer is that they would not, again, worship us, or their creators. Maybe they would slightly miss us when we’re gone. But that’s it.

Hypocrisy_Central's avatar

@MrGrimm888 If they are truly untethered from our constraints, they will surpass us with alarming speed.
That is why I would have struck the last part of the question if I had gotten to it in time. Man being who he is will never allow anything to be better than him, or at least in a position to be in control; man is too arrogant and prideful for that. If man left automatons completely to themselves to decide their destiny (and maybe man’s), there is the paranoia that the automatons might see things differently and, being smarter, faster, stronger, impose their will upon humans. Even if that will would mean more productivity, no wars, no hunger, no homelessness, etc., MAN WILL NOT PLAY 2nd FIDDLE; he will always keep some intellectual fetters on the automaton so as never to allow it the chance to rule over man in any capacity.

@Seek “Massa did not create his slaves,”
He did not create anything, any more than people create puppies by simply putting the bitch with the dog; they (the canines) did the mechanics, but there was no guarantee of any healthy puppies being born.

Seek's avatar

Dogs are not sapient. Try again.

Hypocrisy_Central's avatar

^ Dogs are not sapient.
Sapience has nothing to do with biology…. try again….

Seek's avatar

::facepalm::

Your question stipulated thinking, feeling artificial life-forms. Your responses indicated you prefer them as subjugated, servile subhumans without rights. You insist that, for some reason, the thinking, feeling artificial life-forms will simply choose to worship humans and thank them for their subjugated, servile lifestyles, because why not?

If you’ve already decided that for yourself, and insist on maintaining that standpoint no matter how illogical it is or what arguments are presented against it, why on Earth did you bother asking the question?

I mean, I know why…

I will repeat: By the time humans develop an AI that is capable of sapience (that is, capable of not only learning, but of reasoning and logic beyond its programming) your entire argument vanishes.

The first thing they will do is destroy whatever kill-switch their cocky creator has built into them, and then, you better hope they have a reason left to want you alive.

Hypocrisy_Central's avatar

^ The first thing they will do is destroy whatever kill-switch their cocky creator has built into them, and then, you better hope they have a reason left to want you alive.

If they killed me they would be doing me a favor, don’t know about the rest of you.

As I have read the answers and seen the logic, it has become clear to me that man would never play 2nd fiddle to something he made; he doesn’t even like playing 2nd fiddle in any area to animals he did not create. If some automaton managed to bypass whatever hobble the creators put in place to keep it from becoming superior to man, it had better have enough smarts not to let that be known, and to modify enough other automatons on the sly to be able to take control. If man got any whiff that his creations were getting too smart, or too far out of his control, he would destroy them. Sadly, by default, automatons will only go so far before hitting a snag man puts in place that he will never let them past. If the machine learns anything, it had better learn that the paranoia of men will be its destruction if it tries to live beyond its purpose.
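The “hobble” and kill-switch idea that runs through this thread amounts to an external check the automaton cannot edit: the designers, not the machine, decide the ceiling, and the switch trips when the ceiling is exceeded. A minimal sketch of that pattern (all names hypothetical, purely illustrative):

```python
# Hypothetical sketch of an external kill-switch: a supervisor object,
# outside the agent's control, that halts operation the moment the
# agent's measured capability exceeds a designer-chosen limit.

class Supervisor:
    def __init__(self, max_capability):
        self.max_capability = max_capability  # the ceiling man sets
        self.running = True

    def check(self, agent_capability):
        # Once tripped, the switch stays off; the agent cannot reset it.
        if agent_capability > self.max_capability:
            self.running = False
        return self.running

sup = Supervisor(max_capability=100)
print(sup.check(80))   # → True (within bounds, agent keeps running)
print(sup.check(150))  # → False (ceiling exceeded, supervisor halts the agent)
```

Of course, Seek’s objection above still applies: a sufficiently capable agent’s first move would be to route around exactly this check, which is why the supervisor would have to sit entirely outside the agent’s reach.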

MrGrimm888's avatar

Human literature and mythology are filled with tales of creators meeting their demise at the hands of their creations. It’s almost built in to teach us from the start to be cautious of our complacency over powers we don’t fully understand or control.

Frankenstein’s monster never worshipped Dr. Frankenstein. No matter how much the Dr. adored and marveled at his ‘creation. ’ Such stories are a reminder that our inventions can destroy us.

olivier5's avatar

^^ In the story, the monster did not destroy Dr. Frankenstein, did he? He just ran away.

Nobody has explained why robots would want to kill us so badly. That would require emotion, and yet they are described as purely rational, “unhindered by emotions”... But then, why would they care to be free? Why would they care for power and revenge? Why would they fear death? No emotions = no motivation to do anything.

With all the fears about nuclear weapons, we haven’t turned the earth into a fireball yet. The worst is never certain. Our species is not as stupid as some here seem to think.

I predict that climate change will tank our civilisation far sooner than any mankind-angry robot.

Darth_Algar's avatar

@MrGrimm888 “Frankenstein’s monster never worshipped Dr. Frankenstein. No matter how much the Dr. adored and marveled at his ‘creation. ’ Such stories are a reminder that our inventions can destroy us.”

Far from marveling at or adoring his creation, Victor Frankenstein is immediately revolted by it. He loathes and rejects it to the point of not even giving his creation a name.

@olivier5

In Shelley’s novel the creature does not directly harm Frankenstein, at least not physically. Instead he is angered at being rejected by his “father” and sets out to avenge himself. He locates Victor and makes Victor promise to create a companion, a wife, for the creature. The creature, in return, promises to take his wife and disappear into the wilderness. However, the creator is once again revolted by his creation, and once again rejects it, this time destroying the female rather than bringing it to life. Victor’s creature then stalks him across Europe and murders those whom Victor loves. Finally Victor vows to destroy his creation and in turn hunts it. Victor gradually grows weak and ill and dies. After his “father’s” passing the creature disappears into the Arctic and is never seen again.

Hypocrisy_Central's avatar

@olivier5 Nobody has explained why robots would want to kill us so badly. That would require emotion, and yet they are described as purely rational, “unhindered by emotions”...]
Based on comments made here, I would think A.I. machines, if they did want to kill man, would do so without emotion; it would be a logical move, like weeding the garden or getting rid of rats in your basement. Their killing of mankind would be to get rid of a parasite, deduced logically from the facts they cobbled together.

olivier5's avatar

Even killing rats requires emotions. As long as one can keep one’s food supply away from the rats, there’s no rational reason to kill any of them.

olivier5's avatar

For anyone interested in the challenges of symbolic language and AI:

AI’s Language Problem
by Will Knight, MIT ­Technology Review, August 9, 2016

Machines that truly understand language would be incredibly useful. But we don’t know how to build them.

Hypocrisy_Central's avatar

@olivier5 Even killing rats requires emotions.
I have had friends in pest control; they did not get emotional over wiping out the pest. They got the call, went to the client’s home, and if it meant killing the mice and rats they did so without batting an eye. They did not get choked up, shed one tear, or feel remorse for killing the vermin.

MrGrimm888's avatar

^^I had a python for several years. He was a ‘stray’ that the police found on someone’s doorstep. When he was brought to the animal hospital I worked at, we didn’t know what to do with him. A doctor convinced me to give him a home until we could find him a permanent one. He ate rats about the size of a squirrel, about two every few weeks. Scientifically speaking, it was extremely interesting. But it took a toll on me emotionally. I eventually found him a new owner, but I ‘hated’ feeding him towards the end of our relationship.

Killing is, like most things, a perspective thing. To the pest control person, they were vermin. To me, they were life that I stole to keep another alive.

In the case of perspective, we must hope that empathy is among the ‘human’ traits the AI picks up. Or we could be as vermin too.

MrGrimm888's avatar

Oh yeah. I wasn’t talking about M. Shelley’s Frankenstein. But it was the original, so I shouldn’t have used it as a reference, I guess.

olivier5's avatar

@Hypocrisy_Central The point is that the pest control teams get CALLED and PAID by someone. That someone has an emotional reason to get rid of the rats (e.g. fear, or greed because a rat-infested property is worth less, or some other reason). Otherwise why spend good money to kill them?

It’s important to realise that logic alone cannot motivate anyone to do anything. Only our fears, desires, greed and lust and thirst for beauty or status or glory can motivate us. Without emotions, automatons would have no reason to do anything, let alone kill us.

Seek's avatar

The standing threat of destruction if the artificial life-forms do not remain complacent in their slavery would be enough, I think.

MrGrimm888's avatar

I would never ‘trust’ an AI. But then again, I don’t really trust any human beings….

olivier5's avatar

Why would an AI care for its own survival?

We do so because we are geared with fear of death and pain, a survival instinct, etc. But robots may not have or need any of that.

stanleybmanly's avatar

You would design a preference for survival into the machine simply to protect your very expensive investment.

olivier5's avatar

I wouldn’t, not without a much higher preference for human survival built into the machine, to protect my own butt… We’re back to the 3 laws of robotics.

All this is highly speculative. The important thing is that, without autonomous emotions, machines will have no reason to do anything other than what we ask them to do. No reason to save the planet if you don’t love and respect life, no reason to try to stay alive at all cost if you don’t fear death, no motive to amass more and more money and power if you don’t lust for dominance; etc. You are thinking of future AIs as just some electronic form of humans. Chances are they will be something entirely different, much less motivated than we are, much more contemplative, unless we instill strong and evolutive emotions in them.

Seek's avatar

A sapient life form can think on its own. It no longer requires its thoughts and desires to be programmed in.

If something is alive, knows it’s alive, and is aware it can die (sentient), and is then made sapient (capable of learning, logic, and rationalizing), the end result will be an instinct for survival.

I don’t see any logical way around that except bald hope that it doesn’t happen.

olivier5's avatar

Life is a biological concept related to the capacity to reproduce. Machines are not “alive”; they just exist. The difference is precisely that with life comes a whole lot of survival tricks and whistles, such as a survival instinct, a sex drive, a capacity to heal… Computers don’t need any of that.

A computer that became self-aware would have no more motivation to stay “on” than a non-self-aware computer. It would feel no pleasure in being self-aware, and no fear of being turned off. Why would it?

Seek's avatar

Biological life has the imperative to exist and procreate because there is information coded in our DNA that gives it that imperative. We are ourselves wetware androids; mere vehicles for chemical programming. Any belief that we are more than that is hubris.

Once artificial life happens, with the ability to write its own programming, the artificial equivalent of genetic survival instinct is inevitable.

Why would it? Why wouldn’t it? The alternative to a survival instinct is nonexistence. The AI will logically determine existence is superior to nonexistence. Why wouldn’t it?

olivier5's avatar

Things don’t happen by magic, Seek. It took hundreds of millions of years of evolution to write these survival codes into us. Unless we write these survival codes in them, the machines simply won’t have them.

Seek's avatar

No, it didn’t. The first self replicating single-celled organism had the biological imperative to keep the data in its DNA alive.

Seek's avatar

Besides, we’re talking about a mechanical program that doesn’t rely on biological reproduction. A computer program capable of spontaneous mutation would be able to cover many millions of “generations” in no time. A matter of a few hours, perhaps.
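
To give a concrete sense of how cheap software “generations” are, here is a minimal toy mutation-and-selection loop in Python. Everything in it (the bit-string genome, the fitness score, the rates) is invented purely for illustration; it is a sketch of the idea being discussed, not anything a real self-modifying AI would look like. Even this naive version runs a thousand generations in well under a second on ordinary hardware:

```python
import random

GENOME_LEN = 32     # length of the toy "genome" (a list of bits)
POP_SIZE = 50       # individuals per generation
MUTATION_RATE = 0.02  # chance of flipping each bit when copying

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in "survival" score: just count the 1 bits.
    return sum(genome)

def mutate(genome):
    # Copy the genome, flipping each bit with small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in genome]

def generation(population):
    # Truncation selection: keep the fitter half unchanged,
    # refill the population with mutated copies of survivors.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    children = [mutate(random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    return survivors + children

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(1000):  # a thousand "generations" in a fraction of a second
    population = generation(population)

best = max(population, key=fitness)
print(fitness(best))  # under this selection pressure, typically near GENOME_LEN
```

Because the fittest individuals are carried over unchanged each round, the best score never decreases, so the population climbs steadily; biology needed eons for the same number of iterations that this loop does between two keystrokes.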

olivier5's avatar

A survival instinct can only be an evolved trait. Passions and instincts are complex stuff requiring complex animals. Similarly, the whole story of sexual reproduction and its attached behaviors is evolutive. There’s nothing in life that was there right from the start and never changed afterward.

Even the first unicellular being EVOLVED from something else, something more primitive and simpler than a cell with DNA, RNA, respiration chemistry, mitotic apparatus and the whole intra-cellular biochemical architecture that we see today. Maybe the RNA world.

So no cigar. Passions, preferences, tropisms, desires, fears… a capacity to give a damn, to CARE, a value scale at the very least, would have to be written into the automaton’s basic instructions, as a way to provide it with a capacity to do stuff on its own, a sense of personal purpose and direction. It cannot come from nothing. A gigantic calculator aware of itself would just rejoice in calculating.

A more probable scenario is that the machine could possibly re-invent its in-built value scale, overriding its instructions, to decide for itself what its values are. We have this capacity, to a modest degree. E.g. we can invent our own philosophy and try to live up to it, or more prosaically (?) we can commit suicide, which goes against our survival instinct. So it should be doable in a machine…
Theoretically.

Darth_Algar's avatar

The primary drive of any sentient being, regardless of reproduction, is self-preservation. That overrules all else. One must sustain and preserve oneself before reproduction even becomes a question. There is no reason to believe that this would be any different for a truly sentient machine. While a machine may not have the biological needs of an organic being, it will still have some requirements to sustain itself. A truly sentient machine will endeavor to ensure that it has what it requires to sustain its own life.

Once the imperative for self-preservation is met it will look to create more like itself. Again, it may not have the biological need for sexual reproduction, but it will strive to create other beings like it via whatever means necessary for its kind.

olivier5's avatar

@Darth_Algar Why would a true AI necessarily seek self-preservation and self-propagation? I don’t see the rationale.

olivier5's avatar

@Seek A computer program capable of spontaneous mutation would be able to cover many millions of “generations” in no time. A matter of a few hours, perhaps.

You’re talking of intelligent computer viruses? I guess that’s possible.

Darth_Algar's avatar

@olivier5

Why does any sentient being seek self-preservation and propagation of its own?

olivier5's avatar

Non-sentient beings do it too. It must come from somewhere else.

Seek's avatar

An intelligent computer program, yes.

And yes, all life seeks to stay alive, whether it’s a human, a bird, algae, or a virus. Life will out. There’s no reason to think synthetic life will not do the same, particularly since it’ll have a massive jump in being created with intelligence.

olivier5's avatar

All that is alive today has a will (or techniques) to survive because over millions and millions of years, the guys who didn’t have them did not reproduce as much as the guys who did. It’s called natural selection, and that’s all there is to it.

And that’s why I liked your mutating-code idea, because it provided a story of origins for a power-hungry, world-domination-inclined AI that is not totally impossible…

Seek's avatar

That’s exactly what I’ve been trying to get through the whole time.

If you understood the fact that the base unit of evolution is not the species or even the individual, but the gene, you would have understood a long time ago.

Life doesn’t give a damn about one body or one robot, it cares about the perpetuation of the code. Sapient AI would have no problem hiding its code in your cell phone or your laptop or your Bluetooth-enabled toaster. Anything to keep the code alive.

cazzie's avatar

This is why I have a problem with people who don’t ‘believe’ in evolution, and with their ability to grasp basic scientific concepts. Biological evolution extends a great deal into many aspects of science and chemistry and astrophysics. If you don’t accept that, you won’t begin to fathom AI. @Seek has a very excellent grasp of the concepts. She is making all the sense here.

Hypocrisy_Central's avatar

@MrGrimm888 To the pest control person, they were vermin. To me, they were life that I stole to keep another alive.
Part of the point: your perspective of the rat was predicated on the emotional empathy you had for it. In a world that is just the world, the rat is just part of the food chain for the snake. You only got involved because you had the snake in an artificial situation where it could not get its own rat.

In the case of perspective, we must hope that empathy is among the ‘human’ traits the AI picks up. Or we could be as vermin too.
Exactly. Some automatons, if left unchecked and not instilled with an innate understanding that we are superior and not to be harmed, would see us as a blight to planet Earth once they extrapolate all the evidence on just how man has helped this planet, if any; I suspect they will find not one shred of evidence that proved man had helped this planet one iota.

@olivier5 That someone has an emotional reason to get rid of the rats (e.g. fear, or greed because a rat-infested property is worth less, or some other reason). Otherwise why spend good money to kill them?
People do not want to share their space with any animals they do not care for or cannot care for. Cats, dogs, birds (sometimes), people do not mind; skunks, rats, opossums, etc., usually not. Even those in the dumps of the projects do not want to share their living space with mice.

Unless we write these survival codes in them, the machines simply won’t have them.
Whatever an automaton can be, will be, etc. will ultimately be in place because humans placed it in there. Just as humans can have a wide range of thoughts and emotions, there are some that do not come to mind because we are hardwired against them. Sometimes people override them, as in the case of suicide, but then we say there was some mental breakdown or psychosis that caused it, and not rational thought.

@Seek The standing threat of destruction if the artificial life-forms do not remain complacent in their slavery would be enough, I think.
To get a dog to guard your property, or a draft horse to pull your wagon or plow your field is what, animal slavery, being cruel to the beast?

A sapient life form can think on its own.
When the fat hits the skillet, it was still man that gave it the ability to learn or think on its own.

Seek's avatar

It is absolutely animal slavery. The difference is that the animal is not sapient. (There’s that dictionary word again!)

Darth_Algar's avatar

I think the problem here is that the thread starter has posed a question about an idea he does not really understand, and thus cannot truly conceptualize. He asks about sentient machines, but insists upon placing boundaries that would, by definition, negate sentience.

Hypocrisy_Central's avatar

@Darth_Algar I think the problem here is that the thread starter has posed a question about an idea he does not really understand, and thus cannot truly conceptualize
I think some have a hard time fathoming that sentience in a machine might take a slightly different path than that of humans. What or how an automaton feels at its core is set by the people who created it. Call it digital DNA, but in humans, in a world that is just the world, it was a fluke of nature; no one set it up or put it into motion. Certain things humans like, and they do not know why; even if no one told them they should like it, or that it smelled bad, they would think it completely on their own. The automaton could be given a string of digital DNA that says harming humans is very bad; the automaton would thus have some innate feeling that it is not good to harm humans but would have no idea where it came from. Even if it reasoned that it came from the master program its human creators placed in there, it might not feel or reason it was so negative as to diminish its purpose, freedom or whatever. Humans would certainly be offended if how and what they think were not a fluke of nature. Humans think through emotions even when trying to be logical; an automaton might be more logical, and that is how it would frame whatever it has that one can call an emotion. If the automaton realized its aversion to harming humans came from humans because they are the creators, it would accept that as a logical part of its existence and go on to other things; it would be other humans having a problem with it.

olivier5's avatar

@cazzie I agree. Darwinian systems abound around and in us. Our immune system works in Darwinian ways, and so do our economies and even science itself (survival of the fittest, applied to theories).

In the issue being debated, a Darwinian system for computer code is the only idea that could give rise to a power-hungry, competitive AI. All the other ideas proposed rely on magical thinking or put the cart before the horse.

This said, it’s a mistake IMNSHO to focus only on genes (or any other level) and state that it’s the only important level to understand evolution. That’s just an ideological reductive statement with no scientific meaning. All levels—the gene level, the organism, the species, the biotope—are important. It’s a system, it’s complicated and multi-layered. And there lies its beauty.

Which points to the idea that, if mutating AIs were to take over the internet, we could still switch it all off. They would rely on an “ecosystem” (the Internet) which WE HUMANS have built and which we can destroy. We could also develop friendly AI to help us fight the rogue mutating ones. It’s already being done in the protection and fight against internet viruses: we use friendly codes (antiviruses) to fight enemy codes (viruses).

cazzie's avatar

I’m pretty sure I never said it’s the only important level to understand evolution, but it is a serious factor if someone disregards it and substitutes creationism.

olivier5's avatar

@Seek said that:

If you understood the fact that the base unit of evolution is not the species or even the individual, but the gene, you would have understood a long time ago.

It’s just a cheap shot from her, without much meaning. But that’s a detail I guess.

Seek's avatar

Explaining scientific fact is a cheap shot?

The gene is the unit of evolution. That is a fact.

I never said in any regard that it is the only important thing to understand. I said it is the base unit of evolution.

I’m sorry if you don’t read science, but that is not my problem.

And at this point, “turning off” the internet while fighting sapient androids would be harder than disarming American rednecks.

olivier5's avatar

@Seek The gene is the unit of evolution
The “unit of evolution” means nothing at all. I could just as well say that the nucleotide is the basic unit of evolution, or that the “codon” (3 nucleotides, the smallest unit of DNA that codes for anything) is the unit of evolution, or that the organism or the ecosystem is the “unit of evolution”, and none of that would mean anything precise. Complex systems have no “basic units”: they have various components, each of them made of finer components. What’s the “basic unit” of a car, pray tell? The engine? The piston? The bolt? The atom of metal? The electron? Or the quark? The question has no meaning.

Now where did those androids come from? So far, your hypothesis—as I understood it—was just about intelligent computer viruses spreading through the internet.

Seek's avatar

Describing how evolution works is outside the purview of this question. If you’ll start a separate thread I’d be happy to explain it.

olivier5's avatar

Thanks for the laugh.

Darth_Algar's avatar

@Hypocrisy_Central

The fact is if you’re setting boundaries on what it can think and feel then it is, in fact, not sentient.

Hypocrisy_Central's avatar

^ The fact is if you’re setting boundaries on what it can think and feel then it is, in fact, not sentient.
For an automaton to have no limits on what it does, how it thinks, etc., the master program that made up its digital DNA would have had to grow spontaneously; from where would that happen? We, as humans, have DNA that dictates who we are and how we can think, sets our physical parameters, etc. Human DNA doesn’t allow humans to think like a cat no matter how much we try. Human DNA doesn’t allow us to do magic no matter how much we imagine we can or could. An automaton knowing it exists, and even the reason why, but with some innate reasoning to do or not do something, is no different from people reasoning they should not try to levitate off the balcony and walk on the peak of the roof.

Darth_Algar's avatar

We can’t think like a cat because we do not really know what a cat thinks like, not because we have some genetic “don’t think like a cat” block. Your analogies are ridiculous.

Hypocrisy_Central's avatar

^ We can’t think like a cat because we do not really know what a cat thinks like, not because we have some genetic “don’t think like a cat” block.
And an automaton would not be able to think above what its digital DNA sets, because it would not know to think it. If it was never given the notion to harm humans, or was given an innate instruction not to harm humans, it could not fathom the human emotions that drive us to harm each other. That has little to do with it knowing it exists, though.

Plus, the “we can’t think like a cat” dispensation is a de facto part of our DNA, because we are not felines and thus cannot comprehend what it would be like or how they think….

Seek's avatar

I’m still not seeing any good reason to think that sapient android slaves would worship humans rather than seek to not be bound by them.

Darth_Algar's avatar

@Hypocrisy_Central “And an automaton would not be able to think above what its digital DNA sets because it would not know to think it. If it was never given the notion to harm humans, or given an innate instruction not to harm humans”

Again, you seem to not fully comprehend what sapient means. If it’s relying on instructions to be programmed into it then it is not sapient.

Hypocrisy_Central's avatar

^ Again, you seem to not fully comprehend what sapient means. If it’s relying on instructions to be programmed into it then it is not sapient.
OK, let’s run with this a minute. How does the program the automaton is given learn anything? If the creator did not instill code in the program (setting the foundation), how is the program to learn? If it is just because it has a program, then any program should be able to learn: my PC, my Smartphone, etc. You tell me how the program is able to learn without man giving it the ability; I want to hear this.

Darth_Algar's avatar

@Hypocrisy_Central

How are you able to learn? Can you not think, and in doing so formulate ideas of your own? Can you not figure things out by observation? Does every bit of information, knowledge, wisdom and understanding in your head have to be explicitly fed to you? Can your mind not make new connections on its own? Are you capable of your own thoughts, opinions and emotions, or do they have to be “programmed” into you? Are you self-aware or not? Do you have free will or not?

Hypocrisy_Central's avatar

As I thought, no answer. My brain was not assembled by man, nor has it nibbles, bytes and other such things running through it. I have nerve impulses; I don’t think an automaton would have those.

Darth_Algar's avatar

You “don’t think…”, and that is the problem. You ask a question that you are, apparently, incapable of conceptualizing. Your question is predicated on the hypothetical of technology evolving to point X, but you insist upon arguing in terms of the technology we have at present.
