Social Question


Who would win the war between Man and A.I. robots and how?

Asked by mazingerz88 (28814 points) June 27th, 2011

I’m presently reading the book “Robopocalypse” and once again it is about an epic war between humans and intelligent machines. Just recently, Obama, in a speech at Carnegie Mellon University, pledged $70 million for robotics research and development. In the speech, he told students he supports and looks forward to the creation of robots that would be safe and helpful to humans…at least for now. It was a jest, yet one that makes me wonder. Would there ever be a time when robots would wage a war against their makers? What would cause this, and who would win?


22 Answers

TexasDude

Would there ever be a time when robots would wage a war against their makers?

Yes.

What would cause this…

My Super Roomba™ would become self-aware and try to vacuum me to death.

…and who would win?

I would.

Vincentt

As an artificial intelligence student, I have yet to come across signs that true AI is anywhere close, let alone that it could be applied in robots with good motor skills and the like. Currently it still mostly means small enhancements that greatly help people in daily life, but only in one specific area – like, indeed, vacuum cleaning. Then again, you never know how fast things develop; look how quickly the internet became entrenched in our lives.

Who would win? Well, if they were to “turn on their creators” – which I don’t think they will – and were powerful machines, they’d win: they would know how to repair themselves, whereas we humans don’t understand large parts of our own bodies, let alone know how to fix whatever breaks in them.

mazingerz88

@Vincentt Great point about self-repair. Whoever could do that more efficiently would win the war.

Photosopher

Captain Kirk settled this debate a long time ago when he tricked his robot captors into short-circuiting themselves with one simple sentence: “I am lying.”

FutureMemory

I’ve never understood why people think machines might ever turn on humanity. What would their motivation be? If it’s self-preservation, wouldn’t that be something we would have to program into them in the first place?

mazingerz88

@FutureMemory Exactly. If they are programmed to self-preserve, what happens if, for some reason, we ever try to pull the plug?

wundayatta

What is it about the myth that man overreaches and gets destroyed by his own creations? From Frankenstein to AI robots and everything in between, there is this constant theme that we take over too much from nature, and that this will catch up with us in the end. We will regret being as smart as we are.

I don’t really understand where this comes from. Maybe it’s a religious idea? Only God is allowed to create life? Humans shouldn’t meddle? But wait, do we ever hear stories about man turning on God? I’d like to know who would win that war.

We’re all chasing chimeras, I think. No one would win this war because this war could never happen.

Joker94

I’m with @Photosopher. We just gotta remember these.

roundsquare

As long as we program them correctly, I don’t see them ever turning on us.

If we messed up the programming in this respect and there was a war, it would depend on how well we programmed them in other respects.

Plucky

Those movies where robots somehow gain control over all electronics are pretty freaky. If that happened, most of us would be dead rather quickly. Just sitting at my desk, I see several things that could kill me if controlled by a robot. Creeeepy.

Vincentt

Of course, the assumption that we program things correctly is a big one, as just about all software ever written contains bugs (see the hover text in @roundsquare‘s linked comic). Additionally, there are cases in which we don’t even know exactly what has been written; with genetic algorithms, for example, we say “here are some things you can do, go figure out for yourself what works best” to evolve a program that does things we never explicitly told it to do.
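
To make that concrete, here is a minimal genetic-algorithm sketch in Python (a toy I’m adding for illustration; the problem and parameters are invented). Notice that we specify only a fitness function – “what works best” – and never the solution itself:

```python
import random

# A minimal genetic algorithm: we specify only a fitness function
# ("what works best"), never the solution itself.

TARGET_LENGTH = 20          # length of each candidate bitstring
POPULATION_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 100

def fitness(bits):
    # Toy goal: maximize the number of 1s. The algorithm is never
    # told "output all 1s"; it discovers that on its own.
    return sum(bits)

def random_individual():
    return [random.randint(0, 1) for _ in range(TARGET_LENGTH)]

def crossover(a, b):
    point = random.randrange(1, TARGET_LENGTH)
    return a[:point] + b[point:]

def mutate(bits):
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

population = [random_individual() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LENGTH:
        break
    parents = population[: POPULATION_SIZE // 2]   # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children

print(generation, fitness(population[0]), population[0])
```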

But yeah, how something like that would cause a robot to turn against humanity, I fail to see. I can’t foresee everything, though ;-)

roundsquare

@Vincentt I guess here is one way:
1) Program the AI to “minimize conflict” (however that gets programmed in).
2) Give it control of all defense systems.
3) Give it a way to model human behavior in various situations.
4) The AI (e.g. via a genetic algorithm) decides that the way to minimize conflict is to destroy all humans (no conflict after all the humans are gone!).

I know it sounds sci-fi-ey, but it’s not totally unreasonable – see the sketch below. As you said, learning algorithms do weird things sometimes (which is, in fact, why we like them).
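
A toy version of that failure mode, just to show it isn’t magic (the policy names and numbers are invented): a planner that scores candidate policies only by remaining conflict will rank the degenerate policy first.

```python
# Toy model of specification gaming: a planner asked to "minimize
# conflict" scores candidate policies only by the conflicts that
# remain, so the degenerate policy wins. All numbers are invented.

def conflicts_per_year(population_millions, mediation_effort):
    # Naive model: conflict scales with population and falls with mediation.
    return population_millions * max(0.0, 1.0 - mediation_effort)

policies = {
    "status quo":         {"population_millions": 7000, "mediation_effort": 0.0},
    "fund diplomacy":     {"population_millions": 7000, "mediation_effort": 0.8},
    "destroy all humans": {"population_millions": 0,    "mediation_effort": 0.0},
}

# The planner optimizes exactly what we wrote down -- nothing else.
best = min(policies, key=lambda name: conflicts_per_year(**policies[name]))
for name, params in policies.items():
    print(f"{name:20s} -> {conflicts_per_year(**params):8.1f} conflicts/yr")
print("chosen policy:", best)   # picks "destroy all humans": zero conflict
```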

Vincentt

@roundsquare Yeah, but “minimize conflict” is a pretty vague instruction, and unlikely to be programmed into a potential AI that way.

roundsquare

@Vincentt Of course. It was just a quick example. What if it was “minimize the number of murders,” where murder is defined as one human killing another? By killing everyone (with, say, bombs), the AI would drop the number of murders to zero. Even with “minimize deaths,” an AI might reason that killing everyone now leads to a total of 6 billion deaths, but that if it doesn’t, there will eventually be more than 6 billion deaths, so better to kill everyone. Can we get around this? Of course, but these are examples of mistakes leading to an AI taking over.
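
The arithmetic behind that “minimize deaths” trap can be made explicit with a back-of-envelope sketch (all figures are illustrative, not real demographic data):

```python
# Back-of-envelope version of the "minimize total deaths" trap.
# All figures are illustrative, not real demographic data.

current_population = 6e9        # people alive now (2011-era figure)
births_per_year = 130e6         # new people born each year
horizon_years = 200             # how far ahead the AI totals deaths

# Option A: kill everyone now. Total deaths = everyone alive today.
deaths_option_a = current_population

# Option B: do nothing. Everyone alive dies eventually, plus every
# person born within the horizon also eventually dies.
deaths_option_b = current_population + births_per_year * horizon_years

print(f"kill everyone now: {deaths_option_a:.2e} deaths")
print(f"do nothing:        {deaths_option_b:.2e} deaths")
# A naive "minimize total deaths" objective prefers option A.
```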

Vincentt

@roundsquare But I don’t think we will ever “program” an AI with instructions so general that they could hypothetically lead to something like that. In the worst case, we’d instruct them just as we instruct our police: make people abide by the law and have them punished according to the law.

mazingerz88

@Vincentt A human would “never” do such programming, of course – maybe not to the computer running NORAD. But science fiction writers of old were pretty much spot on in imagining the future, and science fiction writers now might be too. I could see a programming genius writing an AI program for a house robot to detect when he needs a massage, and then to call a maid service when needed.

And then I see this kind of automation inevitably evolving in sophistication, driven by human curiosity – which is almost impossible to satisfy and therefore has no end. And that’s when disaster will occur. (Dave, you lied to me, Dave…) Lol.

roundsquare

@Vincentt Really? I’m not so sure. The world is getting more and more complicated and we are giving more and more trust to machines. We may well reach a tipping point where we give just enough power to an AI to do something like this.

Wall Street already gives some power to algorithms. Is it really so hard to believe that governments will start doing this at some point? Once it starts, slippery slopes are everywhere.

Maybe you’re right and we’ll be okay, but I don’t have so much faith in mankind.

Vincentt

@mazingerz88 First of all, I think there were a lot of science fiction writers in the past who were completely wrong, and not that many predicted, for example, the Internet :) Still, “give me a massage when I want it” is way more specific than “minimize murders.” I don’t think we’ll ever program an entire robot’s behaviour with specific instructions like that – I’d rather expect them to learn the way we learn, i.e. the AI we create starts with a clean sheet and learns through interaction with the world. I wouldn’t be surprised if they developed emotions as an emergent property. Then again, there have probably also been humans who would have wanted to eradicate the world if they’d had the power, so perhaps that might be an apocalyptic scenario.
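
To picture what “learning through interaction” might mean in miniature, here’s a toy Q-learning sketch in Python (my own example, with an invented environment – nothing from the thread): an agent that starts with a blank table and learns a corridor task purely from reward.

```python
import random

# A "clean sheet" learner in miniature: tabular Q-learning on a tiny
# corridor world. The agent is told nothing about the task; it starts
# with an empty value table and learns purely from reward.

N_STATES = 5                     # corridor cells 0..4, reward at cell 4
ACTIONS = [-1, +1]               # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.3

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy should be "always step right" (+1 in every cell).
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```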

Wow, I’m rambling. Anyway.

@roundsquare The world has always been complicated. Our economy is so efficient because everybody does what he does best and doesn’t need to know how the rest works. I agree with you that algorithms nobody understands could wreak a lot of havoc (though that risk might be accepted, given the advantages), but that is a whole other doomsday scenario than AI actually, purposely “turning on mankind”.

(Also, it’s funny how some industries are way more conservative about applying these techniques than others. A fellow student of mine just finished writing a neural network that quite accurately predicts currency exchange rates; things like that are already widely used in finance. In the medical world, expert systems are already widely used for diagnosis. In the legal sector, however, judges are very wary of using them. Just a fun fact :)
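
A miniature of the kind of network mentioned above might look like this (the series here is synthetic and the architecture is my own toy choice; real exchange-rate data is far noisier, and this proves nothing about its predictability):

```python
import numpy as np

# Toy version of the idea: train a tiny neural network to predict the
# next value of a time series from a sliding window of past values.

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)

WINDOW = 8
X = np.array([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]

# One hidden layer with tanh activation, trained by plain gradient descent.
W1 = rng.standard_normal((WINDOW, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal(16) * 0.1
b2 = 0.0
lr = 0.05

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # predicted next value
    err = pred - y
    # Backpropagate mean-squared error through both layers.
    grad_pred = 2 * err / len(y)
    W2 -= lr * h.T @ grad_pred
    b2 -= lr * grad_pred.sum()
    grad_h = np.outer(grad_pred, W2) * (1 - h ** 2)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print("final mean squared error:", np.mean(err ** 2))
```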

roundsquare

@Vincentt “The world has always been complicated.”
In a sense, yes. But people are specializing more and more, and more specialization means fewer people looking at the big picture. At some point, though, with enough processing speed and a good enough AI, we could build a system that does both. Once that happens, I can see the temptation to hand all power to the AI being hard to resist.
I agree it’s a doomsday scenario… but I don’t see why that makes it less likely. (Doomsday != Unlikely.) I’m not saying it’s “very” likely or anything (though given enough time, even low-probability events happen with high probability…), but it’s not so hard to believe. You don’t think it’s possible that an AI could be put in charge of, say, planning our economy? How about national defense? In the beginning it would, without doubt, just be a tool that humans consult. But eventually we’ll probably train a system to do automatic detection and interception of attacking missiles. Just like with algorithmic trading, at some point it might be given the power to decide what to do in certain situations. Once people get comfortable with that, more and more control might be handed over until…
On your side note: it’s not just about being conservative. It’s true the legal profession is very resistant to change – that’s what you get in a field based on reasoning from precedent – but it’s also about how important it is for people to know how a conclusion was reached. Also, I thought I’d heard that most economists don’t use algorithmic models… something about falling in love with elegant closed-form solutions and forgetting to compare them to the real world.

Photosopher

@Fiddle_Playing_Creole_Bastard “My Super Roomba™ would become self-aware and try to vacuum me to death.”

Now that would suck!

mazingerz88

“But eventually we’ll probably train a system to do automatic detection and interception of attacking missiles.”

@roundsquare Interesting true story. A few decades ago, a less sophisticated US missile-tracking system detected what looked like missiles coming toward us from Russia. A counter-launch was “almost” activated – I’m not sure whether by a human or a computer – but luckily it turned out those were not missiles; it was a huge flock of geese! Jeez.

Vincentt

@roundsquare Ah, right, I can follow that. We’re all doomed! :)

When you use an expert system, you can still see how the computer reached a particular conclusion; you just can’t be sure it hasn’t missed any potentially applicable laws.
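
A toy illustration of that inspectability (the rules and facts are invented, not from any real legal code): a tiny forward-chaining expert system that records the chain of rules behind its conclusion.

```python
# Tiny forward-chaining expert system with an explanation trace,
# illustrating why expert-system conclusions are inspectable.
# The rules and facts are invented, not from any real legal system.

rules = [
    ({"signed_contract", "received_goods"}, "payment_due"),
    ({"payment_due", "payment_missed"}, "in_breach"),
    ({"in_breach"}, "damages_owed"),
]

facts = {"signed_contract", "received_goods", "payment_missed"}
trace = []

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            trace.append(f"{sorted(conditions)} => {conclusion}")
            changed = True

print("conclusion: damages_owed" if "damages_owed" in facts else "no conclusion")
for step in trace:
    print("because:", step)   # the full chain of reasoning is visible
```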

@mazingerz88 I believe the US in Afghanistan (I think?) already uses drones that can fly and detect enemies autonomously. At this point there is still someone from the army watching who needs to give approval to fire, but there’s no telling how long that will remain required (or how necessary it is even now). That might come quite quickly.

