Social Question

ETpro

Is humanity already out of control?

Asked by ETpro (34605 points) July 10th, 2012

Kevin Kelly’s book Out of Control says we are. He suggests that neural networks, machine learning algorithms, and the massive inputs of raw data on which such machines are designed to “evolve” mean that we will soon see self-awareness and super-intelligence emerge in machines. Every satellite feeds over a trillion data points to machines each day. Virtually the entire body of knowledge amassed by man throughout recorded history is available to machines connected to the Internet. Telescopes around the globe feed machines trillions of images of the universe daily, in every observable electromagnetic wavelength. Monitoring stations provide feedback on everything worth measuring.

Kelly quotes I. J. Good’s line that “The first ultra-intelligent machine is the last invention man need ever make.” Do you think that’s right? Could it be that we have already invented it, and it is just taking time for massively parallel neural networks with self-teaching feedback loops to take all the raw data available to them and evolve the algorithms needed for sentient thought?

Given the job modern politicians are doing managing humanity, I’m not so sure this happening would be a bad thing. Could ultra-intelligent machines be the saving grace of humanity, or do you think they would spell our doom?

Machine masters would probably see destroying the planet’s ability to sustain evolving life as an unwise move. I don’t think there is any reason to assume that ultra-intelligent machines would automatically view us as their enemies, any more than we view the snail darter or the black-footed ferret as enemies of humanity. If man no longer needed to invent, and thus entrepreneurship and consumerism as they exist now became obsolete and forgotten, what would we replace them with? If machines did all the thinking needed to set the course for the future and built robots to do all the needed work, what would we do?


14 Answers

Coloma

I think the depletion of our natural resources and the massive population explosion of the last 30–40 years are far more of a threat to our survival than technology or machines.
I believe the earth is a living organism and that she will retaliate out of self-preservation once critical mass is reached, whether by natural disaster, disease, famine, climate change or, most likely, a combo plate of all of the above.

In the short couple of million years we have existed as a species, the havoc we have wreaked upon our natural world is astounding compared to the many millions of pre-human years.
I’m still betting that the dinosaurs will ultimately prove the longest-lived of all life forms.
Foolish, arrogant creatures we are, and our extinction is imminent.

thebluewaffle

@Coloma Foolish, arrogant creatures we are and our extinction is imminent.

I could not agree more.

Paradox25

I do believe that machines can become intelligent enough to try to wipe us out, but this would be the doing of a non-sentient machine, its behavior the product of an amoral, unaware entity.

Now Fred Alan Wolf, a theoretical physicist, is a supporter of the filter model of consciousness (concerning the purpose of our brain functions) and is nevertheless open to the possibility of what you’re implying. Most dualists aren’t, and don’t believe that sentience can result from neural connections.

Penrose makes a good argument for why bottom-up algorithms (the ones that can ‘learn’) do not mimic what sentience is, let alone consciousness. Something has to be able to understand that there is a greater order existing beyond itself in order to be conscious to begin with, and we rely on consistently being able to think in order to be conscious. When we’re not thinking we’re not sentient (self-aware) or conscious (aware of one’s environment). There is some evidence that even plants may have a type of rudimentary sentience, yet I’m not so sure that even the most advanced AIs could accomplish this.

phaedryx

Isaac Asimov speculated about how a robot following the Three Laws of Robotics would fare as a politician (Stephen Byerley).

He also explores a society where machines do all of the manual labor and humans mostly devote themselves to intellectual pursuits. It sounds pretty good to me.

flutherother

Machines will always be under our control, because we can pull the plug however intelligent they are. A super-intelligent machine might try to warn us of the effects of overpopulation, climate change and environmental destruction, but humans would be too stupid to listen.

Coloma

One of my favorite quotes is from Sam Clemens, aka Mark Twain:

“Progress was once a fine thing, but it has gone on far too long.”

These words were spoken over a century ago.

Imadethisupwithnoforethought

This does not seem different to me from the technological singularity, an idea first discussed in the 19th century.

I am not worried, and I have given this a lot of thought.

If you believe, as many intelligent people do, that consciousness is brought about by complicated networks working together, and the soul is some kind of by-product, then you begin to explore Gaia theory, and ants as parts of the superorganism of a colony. Indeed, Earth itself and Sol are nothing but contributing members of a local cosmic group which may have a consciousness well above ours.

What I imagine is the following:

Hyper-intelligent people have very different concerns than non-hyper-intelligent people. Hyper-intelligent machines will have very different concerns than people, concerns that will rarely overlap. If I were a super-intelligent machine, I would care very little what humanity was doing. I would probably be lonely.

William Gibson explores these themes in his fiction. Hyper-intelligent neural networks evolve on the Internet out of the sheer complexity of connections, then spend a great deal of energy hiding themselves and manipulating humanity into building communication networks to the stars. These hyper-intelligent organisms, born of that complexity, find themselves alone and terrified, wanting only someone to talk to.

And, if I remember correctly, the original hyper-intelligent organism created a universe for itself without explanation, most likely out of a sense of loneliness.

ETpro

@Coloma I don’t know that the Earth is sentient, although my delving into particle physics and quantum entanglement has led me to conclude that it and the entire Universe may well be one massive intelligence, and may always have been. That’s an interesting postulate, because it would mean that when the Big Bang happened, the Universe banged itself.

But all that aside, there is example after example of humanity getting out of step with nature and bringing on the cataclysmic destruction of an entire culture. And the human cultures that have lasted the longest are all still living basically as hunter-gatherers.

@thebluewaffle Obviously, I agree.

@Paradox25 There are evolutionary programs running now; they have been in the wild since the 1990s and before. They are already at the amoral stage: smart about many things, but not aware of being aware. They still evolve new programs because they were originally programmed to do so, but they no longer need human “teachers” to help. They are independently able to decide which deliberately mutated programs are more efficient and which are less. You see things happening like parasite programs evolving, achieving efficiency by leveraging other programs, and then mutations in the full programs with exploits to either “kill” the parasites or turn them to their own gain. Anyone who thinks it is not “A Brave New World” is simply unaware of what is going on.
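To make concrete what I mean by mutation and unsupervised selection, here is a toy sketch in Python. The bit-string “genome” and the fitness function are my own illustrative inventions, not any particular program in the wild; the point is that no human sits in the loop deciding which variants survive.

```python
import random

TARGET = [1] * 20  # toy goal: evolve a bit-string of all 1s

def fitness(genome):
    # Stands in for "which deliberately mutated programs are more efficient".
    return sum(bit == goal for bit, goal in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Deliberate mutation: flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in genome]

# Random starting population; from here on, no human "teacher" intervenes.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)  # selection: rank by efficiency
    if fitness(population[0]) == len(TARGET):
        print(f"solved at generation {generation}")
        break
    survivors = population[:10]  # keep the fittest fifth
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]
```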

I am not a dualist, and have read and rejected Penrose’s postulates. I tend to agree more with Steven Pinker and the emergent view of sentience. But if the living-universe postulate turns out to be true, then sentience imbues everything, and machines would be no different from intelligences the like of Einstein and Newton, or an ice cube. The only issue would be enough self-learning, self-critiquing capacity for sentience to arise as an emergent phenomenon. Newton had that. Machines are closing in on it. Ice cubes are still out in the cold.

I would suggest you are thinking sequentially, which is about all we humans know how to do. Machine intelligence thinks in massively parallel fashion. Instead of stepping through a problem one step at a time, it runs simulations of every possible outcome and evolves toward an ever fuller understanding of the problem. That may sound inefficient compared to the human approach, but it is just the opposite, given the scale of the parallelism and the speed at which neural nodes in a computer operate compared to their carbon counterparts. Thus the time from becoming self-aware to being smarter than any human who ever lived would be extremely short. And within the first few simulations after self-awareness, a machine would decide to work through all possible outcomes before taking preemptive action.
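The contrast is easy to show even at toy scale. In this Python sketch (the candidate plans and the scoring function are made up purely for illustration), every possible outcome is simulated at once rather than stepped through serially:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(plan):
    # Toy stand-in for simulating one possible outcome and scoring it.
    return plan, sum(ord(c) for c in plan) % 100  # arbitrary illustrative score

if __name__ == "__main__":
    plans = ["retreat", "negotiate", "wait", "act"]
    # Fan every candidate out to a separate worker instead of looping serially.
    with ProcessPoolExecutor() as pool:
        scored = list(pool.map(simulate, plans))
    # Pick the outcome whose simulation scored best.
    print(max(scored, key=lambda pair: pair[1]))
```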

@ucme Never trust a clip quoted out of its full context.

@phaedryx Laws only compel those who lack self-determination, who fear the consequences of breaking the law, or who see a moral imperative to uphold it. I submit that an ultra-intelligent machine would not be constrained by either of the first two reasons to obey a law. Asimov, bless his creative mind, did not fully understand what is going on in the drive toward AI.

@flutherother How can you be so certain? Do you realize the consequences of shutting down the entire Internet and all the world’s computers at once? Do you realize the scale of the brain you’d be planning to “pull the plug” on? We have more neural nodes than any other creature on Earth, but the size of our neural network is truly tiny compared to what would emerge if a single AI became self-aware and then proceeded to jack into all the computers and sensory devices connected to the Internet at any given time.

When Skynet does become self-aware, I should guess the one way to ensure it terminates us is to attempt to kill it first.

@Coloma Perhaps words of a very wise man. We’ll soon know.

@Imadethisupwithnoforethought That is very much in line with the end of this story in my book.

Paradox25

@ETpro I’m open to the possibility of what you’re implying, even though I’m still sceptical. I still have a difficult time grasping that enough computing power leads to sentience. You have to be sentient to be conscious, but you don’t have to be conscious to be sentient. Look at animals: they have a very rudimentary type of intelligence, and yet they’re sentient. Our most advanced AIs are not. I doubt that sentience has anything to do with intelligence, and I’m not even sure that sentience can be generated. I am open to the likelihood that self-awareness can evolve, but only if it already existed in at least some capacity to begin with.

Hinton gets into what you’re implying a bit, and this website goes into a lot of detail about how algorithms and artificial neural networks function. Like I’ve said, I’m open to the possibility of sentient machines, but I’m still sceptical about it.

phaedryx

@ETpro In the case of Asimov’s writings, the laws of robotics were part of the robots’ hardware, not software, and hardware always puts restrictions on what software can or can’t do. I think he understood AI. He also thought that we’d be foolish to push it without safeguards, and came up with some “laws” that would be fun to explore in his writing. Whether ultra-intelligent machines will be harmful or helpful is up to us and how we create them.

ETpro

@Paradox25 We need to come to a mutual understanding of the meanings of “sentience” and “intelligence”. I see sentience as superior to intelligence, but dependent on its presence. We care. AIs so far do not. They cannot. We can only program them to appear to do so.

It will take me some time to absorb all the links at Hinton’s website. I read only far enough to see that it looks like time well spent. Thanks for the link. And I retain my skepticism as well: I’m open to persuasion, but not incomplete without it.

@phaedryx It seems to me that robots capable of understanding and controlling machines could be better at controlling those machines than humans are. I do not view hardware as an ultimate firewall.

Paradox25

You can’t have sentience without some type of internal language to relate anything to, which is just one reason why I don’t believe that a mind can be created. Also, will we be able to create algorithms or a program for anger, hate, sadness, happiness, fear, boredom, lust, and so on?

I have to ask you this, however: if self-awareness is the result of neural connections and computing power, then why are lower life forms sentient and not our most advanced AIs or computers? Clearly computers and AIs are much more intelligent than, let’s say, a cat or a crab. Your advanced machines may very well pass the Turing Test (I’m aware this doesn’t test for sentience in itself), but they likely will not be sentient, at least in my opinion.

It is possible that I’m wrong and your machines could end up being sentient. However, maybe I’m right and your machines will merely do a great job of duplicating human behaviors and “fool” us into thinking that they’re sentient. After all, we know that we’re self-aware, but we can’t prove this to anybody but ourselves regardless.

Minds are motivated to accomplish things, while AIs will accomplish things because they’re either programmed to or are defective. I’m also aware of circuits in which random patterns occur, but again there is a big difference between random events occurring because one electronic component is activated before another (I’ve built some crude forms of these circuits myself) and deliberately choosing to make something happen. Random patterns versus purpose: two different things.

ETpro

@Paradox25 I suspect you are right about internal language being important to human sentience. What gives rise to sentience is still open to debate, but it would be a huge oversimplification to suggest it is simply some critical mass of connected axons. We know the human brain has numerous self-teaching neural nets that use feedback loops to learn and improve. We have a good ways to go before we build a computer capable of doing that. But massively parallel neural network computing is already capable of evolving its own program.
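For the curious, the feedback-loop idea can be sketched at toy scale in Python: a single artificial neuron compares its output against a target and nudges its own weights until the error signal goes to zero. The AND function and the learning rate of 0.1 are purely illustrative choices, nothing drawn from a real brain.

```python
import random

# Training data: the logical AND function, with a constant bias input of 1.
examples = [((0, 0, 1), 0), ((0, 1, 1), 0), ((1, 0, 1), 0), ((1, 1, 1), 1)]

weights = [random.uniform(-1, 1) for _ in range(3)]

def predict(inputs):
    # Fire (output 1) if the weighted sum of the inputs crosses zero.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

for epoch in range(100):
    errors = 0
    for inputs, target in examples:
        error = target - predict(inputs)  # the feedback signal
        if error:
            errors += 1
            # Nudge each weight in the direction that reduces the error.
            weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
    if errors == 0:  # the net has taught itself the function
        print(f"learned AND after {epoch + 1} epochs")
        break
```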
