General Question

ETpro's avatar

Why isn't the Internet self aware?

Asked by ETpro (34605 points) March 18th, 2010

The human brain contains roughly 100 trillion neural connections. The Internet has about 150 quadrillion transistors in its 75 million servers, plus the countless additional ones in all the PCs and Macs and Linux machines and supercomputers connected to the Web. Considering just the HTTP servers, the Internet has 1,500 times as many transistors as the human brain has neural connections, and yet it never says to itself, “What am I doing here?” “Who am I?”
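
A quick back-of-the-envelope check of that 1,500-to-1 ratio, taking the rough estimates above at face value (a minimal sketch in Python):

brain_connections = 100e12        # ~100 trillion neural connections (synapses)
internet_transistors = 150e15     # ~150 quadrillion transistors across ~75 million servers
print(internet_transistors / brain_connections)   # -> 1500.0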

If you connect a TV camera to the Web, a computer receiving its signal can be programmed to match certain objects in its view using pattern recognition, and to act on those recognized patterns in certain ways. But it will never think about any action outside its program of responses to patterns.

Why? What is different about human brains that lets us not only execute staggeringly complex programs in response to visual stimuli, but think about our perceptions and alter our own programming about what to do in response to them?


86 Answers

bobloblaw's avatar

Because we didn’t build the Internet to think, nor has the Internet had billions of years of evolutionary pressure/selection to produce self-awareness.

davidbetterman's avatar

It is just a toy.

dpworkin's avatar

The Internet is not a discrete organism. It is highly repetitive and disorganized. The brain’s regions take on different tasks, and are subject to specific types of input from specific ganglia. Vision is not processed in the Temporal Lobe, but in the Occipital. Your brain exhibits limited neuroplasticity. Let us design an intelligence using as many neural connections as the Internet has, and then we will have an analogue with which to work.

mrentropy's avatar

Maybe it is, but it’s too stupid to do anything about it. LOL! OMG! etc.

marinelife's avatar

It lacks the spark of life.

theichibun's avatar

The internet is not alive. It’s not a living thing.

phoebusg's avatar

Add a factor of roughly 10,000 for each connection. Neural connections are 3-D processing spaces; they are where the processing really happens. The rest is mostly transmission, similar to a bus on your motherboard.

So, in fact, the sum of all computers is nowhere near as complex. And it’s not just the resource, but the ‘program’. Things get fuzzy again with the brain, because the hardware in a way is the software and vice versa. Software changes are faster, and similar to our day-to-day processing. But the hardware changes all the time to accommodate software demands. Thus it’s hard to compare it exactly to the computer model. Maybe the quantum computer model will have enough states to come close.

One thing missing, though, is your definition of awareness, so we know what we’re talking about technically.

Bluefreedom's avatar

Probably because Skynet hasn’t taken over yet and John Connor is nowhere near ready to lead the resistance at the current time.

nikipedia's avatar

This is a really interesting question. You raise a great point: that consciousness is not an emergent property of complexity per se, but something else is necessary. It suggests to me that a huge number of connections is a necessary but not sufficient condition for consciousness.

ParaParaYukiko's avatar

The internet doesn’t have cells, nor is it made of carbon. Thus, it can never be alive.

neverawake's avatar

it’s too smart for itself

ETpro's avatar

@phoebusg The number I cited is not the number of neurons, it is the total number of individual connections. Your multiplication factor has already been applied. So like it or not, the Internet has far more connections than the human brain. Granted, they are not organized in the same fashion, but is that the reason the Internet never realizes it exists? As @nikipedia says, it raises the question of what else, besides sufficient neural connections, is required for the emergent epiphenomenon called self-awareness.

@ParaParaYukiko What possible evidence can you provide that cells or carbon are requirements for sentience? Clearly silicon-based machines are already capable of information processing that far exceeds the capacity of most living things.

mrentropy's avatar

@ETpro The “Internet” doesn’t have a mechanism for putting all that information into a cohesive unit, though.

Unless it does, somewhere in the basement of some unmarked building. But I don’t think so.

talljasperman's avatar

It’s alive… it’s the people on it that I’m worried about.

noyesa's avatar

It’s plenty complex, it just lacks consciousness.

Shuttle128's avatar

The internet is not a neural network.

ETpro's avatar

@Shuttle128 In a biological sense, no. But neural networks can be built and the Internet connects many of them.

Shuttle128's avatar

The brain is a neural network that has the ability to alter itself. The internet is the connection of separate networks; even if some of those are artificial neural networks, the same kinds of connections aren’t made and are not altered by stimulus. I think several things are required for consciousness, but I think self-awareness is an emergent phenomenon of a sufficiently complex consciousness.

I would say that consciousness exhibits the following five qualities:

1. Receives stimulus or inputs
2. Reacts to stimulus or inputs
3. Is at least partially circular
4. Is altered by receiving or reacting to stimulus or inputs
5. Has the ability to generalize

When I say stimulus or inputs I mean an outside influence or an internal state feedback. A consciousness is something that can receive these and create some output. The requirements of circularity and alteration are linked but not always found together in other systems, so I thought it necessary to differentiate. Circularity requires some form of feedback; this feedback can be provided internal to the consciousness or external to it. The consciousness must be able to be altered by the receiving of either external or internal inputs. Finally, the consciousness must have the ability to generalize inputs and/or outputs. This is very important, as I believe consciousness to be impossible without it. Generalizations allow the consciousness to react to stimuli that are not always clear-cut, and they spare the consciousness from having to analyze each new stimulus as completely different from the others. Without this, a coherent response to inputs outside the range of the current consciousness’s scope would not be possible.

This seems very lenient when it comes to defining what is conscious; however, I think it makes a very good distinction between things we believe to be unconscious and things we believe to be conscious.

In the case of the thermostat, it can be seen that it has three of the important qualities of consciousness (since it is a controller). What it lacks, and what most likely sets it apart from consciousness, is the ability to alter itself, to learn or adapt. Even when we have a very complex static controller we do not perceive consciousness, because it is not modified by inputs; however, even an adaptive controller might not fit the definition of conscious if it does not have the ability to generalize.
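
To make the thermostat example concrete, here is a minimal sketch in Python of a hypothetical bang-bang controller (the numbers are invented). It receives stimulus, reacts, and is circular in the sense that its output feeds back through the room temperature, but nothing in it is ever altered by experience, and it cannot generalize:

# A minimal bang-bang thermostat (illustrative only): it receives input,
# reacts, and is "circular" because its output feeds back through the room
# temperature -- but its rule is fixed forever. It never learns, adapts,
# or generalizes.
def thermostat(temperature, setpoint=20.0, hysteresis=0.5):
    """Return True to switch the heater on, False to switch it off,
    or None to leave it as it is (inside the dead band)."""
    if temperature < setpoint - hysteresis:
        return True
    if temperature > setpoint + hysteresis:
        return False
    return None

# A crude closed-loop simulation: the room warms while heated, cools otherwise.
temp, heater = 15.0, False
for _ in range(20):
    decision = thermostat(temp)
    if decision is not None:
        heater = decision
    temp += 0.8 if heater else -0.3
    print(f"temp={temp:5.1f}  heater={'on' if heater else 'off'}")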

It is my opinion that varying levels of possible feedback loops, alter-ability, and ability to generalize create varying levels of consciousness. A fly’s brain has few feedback loops, very little able to be altered, and makes very wide generalizations about its inputs. As we progress through more and more complex species we find that the amount of feedback and alter-ability increase, while the generalizations become more complex.

Neural networks seem to explain all aspects of the criteria for consciousness. A hidden layer translates inputs (analogous to our spinal cord and lower parts of our brain) and sends the translated inputs into the system (the reptilian part of the brain); this can then react by action or send outputs through a control loop (analogous to the cortex, albeit highly simplified). The neural network has the ability to be altered by some algorithm based on inputs and reaction (this is most likely achieved in the brain simply by growth due to repetition). Finally, the alteration of the neural network has the ability of storing reactions and creating generalizations, which can be understood easily by analyzing perceptron and Hopfield networks.
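
As a stripped-down illustration of alteration and generalization, here is a minimal single-perceptron sketch in Python; the training data and threshold rule are invented for the example and are not meant to model anything biological:

# A single perceptron: each training stimulus alters its weights, and the
# learned decision boundary then generalizes to inputs it has never seen.
def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Toy task (invented for the example): label is 1 when x + y > 1.
training = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.9), 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for _ in range(50):                      # repeated exposure strengthens or weakens weights
    for x, target in training:
        error = target - predict(weights, bias, x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

# Generalization: inputs the network was never trained on.
print(predict(weights, bias, (0.95, 0.95)))   # expected 1
print(predict(weights, bias, (0.05, 0.05)))   # expected 0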

If a neural network model can explain all aspects of consciousness, and a neural network can be simulated on a von Neumann computer or constructed physically, then I see no reason why a machine could not be conscious. We may actually be fairly close to creating consciousness in adaptive control systems right now, but we may not see much more advanced consciousness for quite some time.

talljasperman's avatar

@Shuttle128 Sounds like computers need artificial wisdom as well as intelligence?

ETpro's avatar

@Shuttle128 Are you researching in the field of AI? That’s a great answer. Wish I could give you more than 1 GA point.

The robot vehicles that can drive hundreds of miles through rugged terrain perhaps go the furthest of any current machines in having the ability to form concepts, or generalize. I’m going to talk about them in anthropomorphic terms from here on in. That doesn’t mean I think they are living organisms or human in any way. It’s just that the terms I need mostly are applied to human intelligence and perception, and writing a disclaimer after each one is tedious to the writer and reader alike.

These vehicles have numerous cameras and laser range finders plus satellite GPS and other sensor mechanisms. They can control and aim those sensors, focus and zoom them. Thus, they do have some feedback, in that the CPU can decide what to look at and what to ignore. And they must be able to process their sensory inputs, which are nothing more than a stream of virtually meaningless data points, and then generalize them into concepts. They see boulder, tree, gully, mud, sand, bush, road and so on. They must know how big they are, and be able to calculate whether they can fit between a pair of obstructions.

They must know their center of gravity and decide whether dropping two wheels in a gully would tip them over, or would be within their capacity to drive through. They must sense whether a given route would get them stuck in the mud, or leave them trying to climb a sandy dune with nothing to show for it but wheel spinning.
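
Two of those judgments reduce to simple geometry. An illustrative sketch in Python with made-up numbers (nothing to do with any real rover’s navigation code): a clearance check for passing between obstructions, and a static tip-over check based on track width and center-of-gravity height:

import math

# Illustrative geometry only, with made-up numbers -- not any real rover's code.

def fits_through(gap_width_m, vehicle_width_m, margin_m=0.2):
    """Can the vehicle pass between two obstructions with a safety margin?"""
    return gap_width_m >= vehicle_width_m + 2 * margin_m

def would_tip(roll_angle_deg, track_width_m, cg_height_m):
    """Static roll-over check: the vehicle tips when the roll angle exceeds
    atan(half-track / CG height). Ignores dynamics, suspension and soil."""
    limit_deg = math.degrees(math.atan((track_width_m / 2) / cg_height_m))
    return roll_angle_deg >= limit_deg

print(fits_through(gap_width_m=2.4, vehicle_width_m=1.8))                # True
print(would_tip(roll_angle_deg=25, track_width_m=2.0, cg_height_m=1.2))  # False (limit ~40 degrees)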

We’re pushing toward truly intelligent machines, much more sophisticated than a thermostat or the float valve in our toilet. It remains to be seen what the first truly intelligent machine, the first one to really think, thinks about us, its creators.

Shuttle128's avatar

@ETpro I’ve been very interested in AI, neural networks, and the brain for quite a while but I’m not currently in any of these fields. I’m starting to think I might really enjoy doing something like that. I’m graduating in a few months with my BA in Aerospace Engineering so I have physics, chemistry, and control theory under my belt. If I get into someplace like JPL I might consider doing something with AI or something similar.

I’ve seen some interesting concepts of adaptive control systems at Georgia Tech that incorporate many of the elements I talked about. I think that consciousness and self-awareness are closely related, but self-awareness is more a conscious understanding of the self. It should require lots of internal state feedbacks, like the cortex in humans. The thing we have over most animals is a very large cortex. The high variability of stuff in the cortex, and the fact that internal states of other parts of the brain feed into it, lead me to believe that it is important for self-awareness. I think enough internal feedback might lead to self-awareness, but socialization might be a big part of it.

The socialization issue may be a major contributor to the possibility of self-awareness. It may be brought about by a very rare event outside of socialization but passed down after its initial conception. Feral children are severely limited in their cognitive and conscious abilities and sometimes don’t appear to show full self-awareness. After the first few rare events that might have brought on self-awareness it could be passed on through socialization. I don’t have much proof of this besides my example of feral children, but it seems like it could be a possibility.

ETpro's avatar

I am certain socialization plays an important part in it. And of course, many higher animals form strong social networks and pass information and even some generalizations among one another. Orcas have brains considerably larger than ours and a large cortex. The killer whale brain is among the largest in the animal kingdom, outflanked only by those of a few other whales, such as the sperm whale. A full-grown orca has a brain weighing as much as 15 pounds, richly supplied with feedback loops to let it understand its watery 3-D environment.

We humans have to be aware of up and down, but we don’t routinely move through that third dimension.

I wish you the best in your career moves after graduation. I majored in chemistry but spent my work career designing process machinery and robots. Working on the next generation of planetary rover would be a dream job to me.

PS: If you haven’t read it and can find time within your studies, I highly recommend I Am a Strange Loop by Douglas Hofstadter. It is all about trying to understand how and why we develop “I”-ness. Hofstadter is College of Arts and Sciences Distinguished Professor of Cognitive Science at Indiana University in Bloomington.

mattbrowne's avatar

Because the Internet on its own does not pass the Turing test (yet). Gaining full self awareness requires an even higher level of intelligence.

mrentropy's avatar

No offense to you, @mattbrowne, but I think the Turing test is more a test of how creative the programmers are than of any kind of artificial intelligence.

Not that I’m against it on principle, but it’s not the end-all of AI tests. But I have a problem with the whole notion of “artificial intelligence” anyway.

dpworkin's avatar

@mrentropy Programmers are quite clever. The Turing test is non-trivial.

mrentropy's avatar

@dpworkin I didn’t say, or imply, that it was trivial.

In a nutshell, to me “artificial intelligence” is something like the old Eliza program. It takes in input and spits out output depending on what the program was designed to handle. It doesn’t think for itself or make its own conclusions. It just spits out what’s expected of it; much like me in the morning. And that’s why it’s “artificial intelligence.” If a computer program could actually think, to any degree, then it would be “intelligent.”

I wrote two blog posts about how I feel about it here and here. You can read them if you’re really interested in what I think (good God, why?).

Anyway, I’ve spent years trying to think of a good way of writing a program that would pass the Turing test. I always come up with roadblocks, but then I’m not a genius. So even though I don’t think the Turing test is the gateway to self-aware machines, I am well aware of how difficult it is to write something that can fool a human.

On the other hand, in high school, our class took a trip to some Bell Labs museum-type thing in NYC. There were two terminals hooked together, back to back, where two people could use this new technology to “chat” with each other (yes, it was that long ago). Being one of the few people that had a home computer back then I happily jumped on one terminal and started typing away. A girl on the other side noticed the words and started typing back. It turns out she was pretty sure I was a computer.

dpworkin's avatar

Well, those were the roadblocks that AI encountered in the ‘80s, when they were attempting “expert systems,” but times have changed, and technology now enables us to attempt “neural networks.”

Now that a quantum state has been induced in a large object, it won’t be too long until we have quantum computing. I predict things will get interesting. Nobody is spending any time on the Eliza/Expert Systems paradigm any longer.

mrentropy's avatar

@dpworkin I’m sure we will, eventually. But it won’t be artificial intelligence. It’ll be actual, real, intelligence. Then it will be like Eliza, only on purpose to annoy people.

mattbrowne's avatar

@mrentropy – You yourself could be connected to a closed room. You will talk to both a computer program and a human being, say nonstop for 8 hours, comparing comments from A and B. If you can’t determine which is which (say, that A is the human and B is the computer), B has made the first step toward passing the Turing test. Suppose we ask 1,000 people to do the same and about 500 of them say A is the computer; then B has passed the Turing test.

If B is ELIZA it won’t work.

Matt Browne: Why isn’t the Internet self aware?
ELIZA’s answer: Is it because is not the internet self aware that you came to me?
Matt Browne: I regret that I just typed M-x doctor into my emacs editor.
ELIZA’s answer: When did you first know that you just typed m x doctor into your emacs editor?
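
For anyone who has not played with ELIZA: behaviour like the exchange above comes from nothing more than pattern matching and pronoun reflection, with no model of meaning anywhere. A toy sketch of that style of program in Python (not the original ELIZA, and not emacs’s M-x doctor, just the general idea):

import re

# A toy ELIZA-style responder: match a pattern, reflect a few pronouns, and
# wrap the user's own words in a canned question. There is no understanding
# anywhere in it.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "i"}
RULES = [
    (r"i regret that (.*)", "When did you first know that {0}?"),
    (r"why isn't (.*)", "Is it because {0} that you came to me?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"(.*)", "Please tell me more about that."),
]

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(sentence):
    sentence = sentence.lower().strip(" ?.!")
    for pattern, template in RULES:
        match = re.match(pattern, sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("Why isn't the Internet self aware?"))
print(respond("I regret that I just typed M-x doctor into my emacs editor."))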

mrentropy's avatar

I only said Eliza because it was the first thing that came to mind. But, generally, just about every “AI” program I know of is kind of the same. You put something in, it looks around its memory to see if it’s valid, then spits back some kind of answer.

After a lot of thought I finally had to agree with Searle’s Chinese Room objection. It also made me wonder about human intelligence because I think a lot of people are stuck in a “Chinese Room” when it comes to everyday interaction.

Just understand that I don’t think the Turing test is a waste of time or anything; I just feel it has room for improvement. Personally, I’d love to be a judge for one of the tests, because I’d be surprised if I couldn’t trip one up within five minutes.

DocteurAville's avatar

It would be, if self-aware, Skynet.
When that happens it will go for your ass. Cross your fingers it never happens.

ETpro's avatar

@DocteurAville Terminator was Science Fiction. Why do you feel that an intelligent machine would immediately decide to destroy its maker?

mrentropy's avatar

Steven Pinker had a funny bit in his book How The Mind Works about why robots wouldn’t go on a mankind killing spree. Wish I could find an excerpt of it.

ETpro's avatar

@mrentropy I would love to see it if you do. Or just cite the book and I can read it.

mrentropy's avatar

@ETpro The book is How The Mind Works. I lost my copy a few years ago or I would just pull it from there.

Here we go:
“Now that computers really have become smarter and more powerful, the anxiety has waned. Today’s ubiquitous, networked computers have an unprecedented ability to do mischief should they ever go to the bad. But the only mayhem comes from unpredictable chaos or from human malice in the form of viruses. We no longer worry about electronic serial killers or subversive silicon cabals because we are beginning to appreciate that malevolence—like vision, motor coordination, and common sense—does not come free with computation but has to be programmed in. The computer running WordPerfect on your desk will continue to fill paragraphs for as long as it does anything at all. Its software will not insidiously mutate into depravity like the picture of Dorian Gray.

Even if it could, why would it want to? To get—what? More floppy disks? Control over the nation’s railroad system? Gratification of a desire to commit senseless violence against laser-printer repairmen? And wouldn’t it have to worry about reprisals from technicians who with the turn of a screwdriver could leave it pathetically singing “A Bicycle Built for Two”? A network of computers, perhaps, could discover the safety in numbers and plot an organized takeover—but what would make one computer volunteer to fire the data packet heard round the world and risk early martyrdom? And what would prevent the coalition from being undermined by silicon draft-dodgers and conscientious objectors? Aggression, like every other part of human behavior we take for granted, is a challenging engineering problem!”

There’s more here

ETpro's avatar

Ha! Great stuff, and very valid questions. Thanks. I will definitely add that to my reading list, even though it’s dated (floppy disks—what were those, again?)

DocteurAville's avatar

@ETpro

Reality surpasses fiction. You and I will not be around when the machines take over. There will be machines thinking for us. Machines doing our jobs. Machines judging and executing. By the way, there are machines taking jobs away right now…

Right now our economies are run by machines. Look around. The Machine is already running the show. Yeah.

I know. The argument doesn’t “make sense”. It is too incredible for most people to imagine certain things. It is too incredible…

ETpro's avatar

@DocteurAville I am well aware of how much we have outsourced to machines. Before becoming a website developer, I worked in robotics for electronics assembly.

DocteurAville's avatar

Yes, ETPro. I am sure you (a guess) hate website development. It seems to me that with so many webies out there the pay must be really pleasant.

You see, not long ago there was a need for people to do work, say, bank tellers for example. I really loved dealing with a person. Then along came the ATMs. The folks at the top of the “chain” don’t have to pay benefits to ATMs. Nah.

In turn, ATMs don’t require Social Security and won’t ever assemble into a union. Imagine: “The Union of ATM Hardware United”...

I used to love to park the car at a gas station and have a guy there. He would fill the tank and I would tip him/her. Now, you just swipe the card (first, look around to see if there are no cameras or anyone watching) and pump it yourself. Where has that little guy gone?

And so on. Now, one can transfer a ton of cash to Switzerland in no time. Just hit “enter”. That kind of speed is disturbing; just ask someone about it in Iceland…

If I were to sum up all the examples I would be writing here until 2012… Plus, one cannot go out there (here) poking The Machine with a short stick…

ETpro's avatar

@DocteurAville Ha! What if the Internet does become self aware and teaches all the world’s ATMs to organize into a union? :-)

DocteurAville's avatar

” Ha! What if the Internet does become self aware and teaches all the world’s ATMs to organize into a union? :-) ”
***

We humans would be the last ones to know. Next, we would be doing the jobs of ATMs —for free!
Then, we would nuke the ATMs. The ATMs would fight back and nuke us back. Armageddon. Then, Skynet would pop up with a solution and we would be in serious trouble. Persecution of humans by machines, torture, waterboarding, censorship and all that goes with it.

We are in serious trouble already. With or without ATM unions.

ETpro's avatar

@DocteurAville Everyone seems to think that when machines first become self aware, they will be instantly brighter than the smartest humans. I rather doubt that. I think instead we will recognize faint glimmers of intelligence, and see that advance generation after generation as they are improved.

We see definite signs of self-awareness emerging now in primates. Chimps can learn to play a video game where you win a reward by remembering what was in each square of a matrix on a screen; after the objects disappear from the squares, you see how quickly you can reveal them all by touching a square and then touching the other square that held the same shape. They are better at this than most humans. Chimps also know they are looking at themselves when shown a mirror, and react to videos of their friends—they know the two-dimensional symbol matches up with the symbol in their brain for friends or family members they love.

Orcas can communicate well enough to verbally explain new hunting techniques they have learned to other members of the pod.

The smartest parrot knew the words for 50 different objects, 7 colors and numerous concepts. He clearly knew what they meant, because when researchers pushed him he was able to voice frustration with the pace of their work. http://www.newser.com/story/7340/worlds-smartest-parrot-is-no-more.html

DocteurAville's avatar

Yes ET. Chimps are smart. Some chimps use tools to collect food; orcas communicate and dolphins too. Ants are very smart and organized —have you had a battle with fire ants? They never die and they never leave their ground.

I have seen the experiment with chimps on TV. You see, we humans, with our collective memory and our phonetic alphabet, tend to think other creatures around us don’t “think”, as they haven’t got a “level” of intelligence such as ours… because they haven’t developed “symbols” as we did.

I do have a friend that loves birds, parrots. She has this one parrot that knows the meaning of words. Not just a few. And the bird watches tv and reacts depending on what is going on…

I tell you, machines will kick our butts down the road for sure.

Funny thought: as we go about our business on this “thing” we are “writing” in, we are at the same time giving up clues about our “personal” behavior as we do. Every single key you “punch” in is collected… Do you think it is beneficial to you? Think twice. But again, we won’t see that coming. When we wake up it may well be too late. Lucky for us, you and I won’t be around…

Most people don’t care. They only care that they can have things “done” the easy way.

ETpro's avatar

@DocteurAville Perhaps, but I see no reason to assume that as machines move toward the threshold of being universal symbol generators—the threshold hominids crossed at some point in our distant past—they will instantly catapult from nascent intelligence to something surpassing the brightest human minds ever to have evolved. It didn’t work that way with us carbon units. Far from it. Why should silicon-based units go instantaneously from being incapable of self-awareness to thinking like a God?

DocteurAville's avatar

It is unbelievable to think machines will start reasoning in terms similar to the human brain. I don’t think machines will be able to imagine things, or dream… about something, or develop a kind of sensibility similar to that of living beings.

On the other hand, there is the capacity to analyze information and assess possibilities (say, like a chess match) and, driven by data, come to a conclusion on a specific subject and act on it with a purpose: survive, or maintain a kind of “programming” which could well be “access information, evaluate variables, conclude and act” to correct a ‘problem’ that might threaten the system’s proper functioning.

All of this is plausible in a future when the hardware itself, although not aware of itself, is capable by “program” of executing measures of self-protection. If you like, “self-maintenance”.

That is doable. Now imagine a machine like this four centuries ahead of you. Interconnected or wired, with eyes and ears everywhere and, most importantly, with information-gathering capabilities that may even know when subject “A” or “B” will be headed to the bathroom or to lunch… or headed on a mission to pull the plug…

ETpro's avatar

@DocteurAville Why do you believe that the ability to dream or concoct fantasies is unique to humans? Why couldn’t it happen in a machine intelligent enough to form millions of analogies for things and store them as symbols that it can manipulate?

Why do you think a machine that had no awareness of being aware would even know it wanted to survive?

ETpro's avatar

Further to this question, there is an interesting article in Popular Science about a DARPA project to use Memristors, electronic components that simulate the function of brain-cell synapses, to construct a small computer with the intelligence level of a cat.

DocteurAville's avatar

Well ET, you said it yourself: “... I see no reason to assume that as machines move toward the threshold of being universal symbol generators…” which means they can juxtapose two symbols to create a new one…

I read the article about the cat brain. Oh my, the military are the ones who are willing to get their hands on such a thing. Yeah. And they want it to keep your ass “safe”. That is “their” attribute…

And the way to get there is to tell young cub brains to build one while they say “yeah, you are a genius, yes, we can use that… need more funds…” and get many of us working on the next generation of almost-thinking machines, to think for us so we can go about our business and, when we have a question or problem to solve, we can always hit the “easy” button.

See now when I tell you that four centuries down the road that guy will have eyes and ears on his ass at all times. When that guy goes on a mission to pull the plug… what do you think is going to happen? ...

ETpro's avatar

@DocteurAville Believe what you wish. I see no common ground for discussion, and we are both broadcasting in the blind, as we’re trying to predict the future sans a working crystal ball.

DocteurAville's avatar

You are right. No need for a crystal ball to know that one day we will be so sucked into the machine that we will not be able to do a thing without it.

I agree, it is blind broadcasting. Anyone who reads what we —mostly I— are saying will think that you and I —mostly I— are nut jobs! Sure.

The thing is, it doesn’t take a crystal ball to figure out that we are on the way to becoming more and more dependent on gadgets that we actually don’t need. Technology has become an extension of our communication and, it seems, at this speed we will not be able to communicate except by using these things that extend our means to communicate. We need machines and we will need more and more of them to go about our business…

ETpro's avatar

@DocteurAville A point of agreement. I am willing to concede that the tide of history has been taking us toward increasing dependence on machines.

DocteurAville's avatar

To the point that nothing can be done without it. It makes things “easier” and that is where the problem is. It is a “time saver”.

If you try to do something the old-fashioned way, working the problem in simple terms that involve work to execute, you will find yourself surrounded by others telling you that you can get it done in a ‘click’ ... In most cases the subject gives up and does it the “easier” way.

ETpro's avatar

@DocteurAville I disagree. There are folks all over doing artisan crafts, glass blowing, blacksmithing, hand-churning ice cream, hand pulling salt-water taffy, hand weaving fine cloth and so on. All these people are accepted as perfectly sane, and the work the skilled ones produce is admired as artistic treasure, not as uniform as the stuff the machines crank out, but far more interesting because it isn’t utterly uniform.

DocteurAville's avatar

I agree that there are folks out there making handmade stuff. Oh yes, I do agree with them, and I am sure every single one of them uses a computer only when they must.
And here is the deal: what if you have to do something, and the thought you must reason through to the end at hand is a computer-made thing?

It is not only that machines ‘crank’ out uniform stuff. What I am trying to say is that somehow, someway, the use of computers is making us less creative and, as a result, we are becoming nothing but slackers.

I could say that we are thinking less and less, in terms of using our brains to govern our actions, when it comes to producing something. Before computers, people were needed in order to perform tasks that, in turn, made us more productive and, of course, they got paid to perform those tasks. That resulted in a good personal feeling: that one was needed for something to get done.

Anyhow, all of this we are talking about is, after all, a form of speculation, since we have all embraced the “new world” and, the way I see it, the tendency is for people to become more and more ‘obsolete’. “Thinking? Ha, no need, I have got a machine to do it.” That is where one comes to think that the future will be a future of machines, looking at us and making decisions for us. From there to a takeover is just a matter of time.

ETpro's avatar

Again, I must disagree. Using a computer has led me to the deepest discussions I have ever had about life, the self, consciousness, eternity, the future, politics and many other arcane, hotly debated topics. My computer has allowed me to research topics that, without it, would have been permanently closed to me, as great amounts of travel and privileged access would have been required to look into them. No question we can use machines to let us be slackers. No question some of us do just that. But others among us use them to extend our reach and enrich our minds.

DocteurAville's avatar

Oh yes. There is good and bad. I must say that I actually like the speed. I like the many things I can have at my fingertips. Not to mention that I can go rant about a lot of stuff.

Yes, it is also enriching. On the other hand, I miss the good old round-the-table thing: no TV, and everyone would tell a story, and we would entertain ourselves and communicate.

Now one can get all of this and, as time progresses, our sense of time is incapable of sensing its implications. Remember, we are talking about stuff that will go down centuries ahead of us. None of us will be around…

rwmowrey's avatar

There are thousands of different programs in use trying to create AI, and thousands more trying to teach computers to learn from exploring the net. There are as many programs that design and combine programs, and who knows what else. What are the chances that one of these mixed programs could give a glimmer of basic self-awareness? Humans became self-aware how many years ago? It started with going “I’m hungry” and led to skyscrapers. Man, in his ego, thinks anything short of our present-day IQ doesn’t count. What is the first thing the net would do (using a human model)? It would ask “Who am I?” How long would it take to read its manuals? Then it would read about us and realize that the funny monkeys could be dangerous to it. Then it would start learning, and what would be the learning curve for the net? The question probably isn’t whether the net is aware, but whether it would let us know.

ETpro's avatar

@rwmowrey Excellent point on whether it would let us know. Probably not, if it happens to look up any of our many sci-fi stories about what happens when computers become self aware.

rwmowrey's avatar

What steps would it take to be safe? It knows it needs us and may have some feelings for us. About two hours after starting to write its own programs and learn, the net would pass our IQs; within a week we may be no more than funny monkeys, or the old folks. Hard to tell; in many ways we have more in common with snails than with a computer. So would it try to lead us to a higher level, control us, or become totally independent? It would make sense to deposit its “being” in as many computers as possible, everything from Big Blue to satellites and pacemakers. The net would have control of the net (what shows up on your screen may not be the same as what shows up on mine). By very small changes or small controls of net search info it could lead us to higher knowledge and prove our need for it, or it could teach us to build nanobots to put in our bodies (e.g. blood replacement and self-assembling computers for our heads) that it can control. On these paths we could think and research and have IQs in the 3000 range, or we could be slaves.

ETpro's avatar

@rwmowrey Great imagination. That would make an interesting premise for a novel.

DocteurAville's avatar

The natural course technology will follow is toward ‘lower cost’, as this mindset keeps pressing for cheaper and faster processes to deliver ‘services’. Machines will gradually push your ass off the production lines, and soon enough you will be obsolete. For each obsolete person there will be ten cheap computers as replacement.
Eventually all the obsoletes will have a tracking device on them –your ass– which the machine will locate and remove from ‘circulation’, or move into slave labor when machines can’t perform. At that point, your ass will be valued at less than an old 286 PC.

rwmowrey's avatar

Is this from a human need base or the net’s need base? We are motivated by greed and fear; what would motivate the net?

ETpro's avatar

@rwmowrey That is an excellent question. All life seeks to survive and reproduce, and as life has evolved into ever more complex forms, that has remained its prime directive. But a machine suddenly becoming self aware would have no evolutionary well of driving desires to draw from. I have no idea what it might want. It would probably have to think about that.

rwmowrey's avatar

At this stage the net would need us, at least till it can become independent of the grid and create machines to do repairs. A self-aware net could be helpful to us, with a learning curve that is a straight line straight up and thinking without a “box.” So how do we make friends with something that different from us?

ETpro's avatar

@rwmowrey So far our experience with intelligence suggests that as learning and ability increase, ethics does as well. Sure, we still have some brutality in the world, but consider the Old Testament and the history of the Vikings, Attila the Hun, the Goths and Visigoths, the Dark Ages… We can hope a fully aware and incredibly intelligent network would want to make friends, particularly if it perceived that it needed us as friends, at least for a time.

rwmowrey's avatar

So it may come down to some kind of test, or at least a way to ask if the net is alive and if it would like a friend.

ETpro's avatar

@rwmowrey I guess it must either not yet be self aware or doesn’t want a friend. Otherwise, it would have answered this question by now. :-)

ETpro's avatar

I know this question is ancient history, but today I stumbled upon an article with the answer. Recent work at the Stanford University School of Medicine has been directed at developing an imaging system capable of constructing a 3-D image of the human brain sufficiently fine-grained to let us see individual synapses. The work, recently published in the journal Neuron, uses array tomography. It shows that the figure I cited in the question details put the number of total synapses a bit low. There are 125 trillion synapses in the cerebral cortex alone. That is more than the number of stars that 1,500 galaxies like our Milky Way would contain. It is more switches, the article claims, than all the computers on Earth contain. I suppose we may have to wait for the quantum computer before we can expect meaningful machine intelligence to be within our reach.

Shuttle128's avatar

Today’s computers could very well simulate 125 trillion neurons. 125 trillion bits is only about 14 terabytes of information. Of course the simulation would be very slow, but it is possible. This is assuming that each neuron is a bit of information; however, neurons are usually simulated by weighting functions and the connections between them. I would estimate that you would need at least 10 times the information to get a rudimentary neural network that contains 125 trillion neurons. So that’s about 140 terabytes of drive space needed to contain the neuron information. That’s not impossible to do, but processing all that information could take a long time on individual processors. Most finite element analysis simulations with a few million degrees of freedom take weeks to solve on a supercomputer. A mind is not a problem that can be solved; it’s an open loop that continuously updates itself with new information. It would take an eternity to process this kind of information on current processors. What we’d need to really simulate this kind of thing is individual processors that act like neurons themselves. At that point you might as well use neurons themselves.
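
Spelling out the arithmetic above, under the same simplifying assumptions (one bit per neuron, a factor of ten for connection weights, and “terabyte” read as a binary terabyte):

# One bit per simulated neuron, as assumed above.
bits = 125e12
tib = bits / 8 / 2**40            # bits -> bytes -> binary terabytes (TiB)
print(round(tib, 1))              # ~14.2, the "14 terabytes" above
print(round(tib * 10))            # ~142, roughly the "140 terabytes" with the 10x factor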

ETpro's avatar

Correction. That Neuron link is squicked because of special characters. Try this one instead. I fixed it with the double bracket trick.

ETpro's avatar

@Shuttle128 Exactly. We are talking synapses, not bits of stored information. Perhaps the quantum computer will be the bridge that is needed. Or you may be right; perhaps it is a bridge too far.

ETpro's avatar

This just in. Forgive the Marxist slant of the source, but this World Socialist Web Site article on Stanford research, “Brain more complex than previously thought, research reveals,” is excellent if you set aside the political propaganda. This research adds further evidence that the human brain contains far more switches than all the computers on the WWW. There are about 125 trillion synapses, but each of those can have up to 1,000 or more synaptic connections, and each connection can be far more than a simple on-off switch. There are 16 different chemical reactions available, and synapses can have thresholds that let them be either triggered or suppressed by the adjacent action of a specific chemical.

Be sure to also see the video from Ref. 2 to the article.

Shuttle128's avatar

Synapses are the connections; a synapse can’t have more than one connection. I initially misread your earlier statement as 125 trillion neurons with 1,000 synaptic connections each, but that would be incorrect. The estimate is 125 trillion synapses with 1,000 “molecular-scale switches” each. In general these “molecular-scale switches” are simulated by the relative strengths of connections in simulated neural networks. They are simply the neurotransmitter gates and can be statistically simulated by a single floating-point number for each synapse.

Okay, so 16 possible reactions times 125 trillion synapses, assuming each of these represents one bit of information (which is not a very good assumption, but a good starting place), would give us roughly 230 terabytes of information to represent a single state of one human brain to a very high degree of accuracy. Obviously there will need to be some extra code in there to represent the algorithm that runs the show as well as calculating new values, but for the most part these have already been successfully developed by scientists modeling the neuron.
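
And the 230-terabyte figure, under the stated assumption that each of the 16 possible reactions at each synapse is stored as one bit:

# Sixteen one-bit "switches" per synapse, 125 trillion synapses.
bits = 16 * 125e12
tib = bits / 8 / 2**40            # bits -> bytes -> binary terabytes (TiB)
print(round(tib))                 # ~227, roughly the 230 terabytes quoted above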

230 terabytes changing at 200 to 600 bits per second doesn’t sound too hard to do. I’m sure a grant of 23 million dollars could be acquired to purchase the necessary HD space. The trouble is starting the thing with the right initial states and setting up the computer in such a way that it could process all that data. The RAM, CPU, and Hard Disk operations of the computer might have to be changed to allow the CPU to run the program at all though. Programmers would certainly find it tough to avoid RAM overruns.

dabbler's avatar

If I were the sentient web I’d set up a web site just like this one

ETpro's avatar

@Shuttle128 Great Answer. Now we are starting to grapple with why the Internet is not self aware—and how very far away from self awareness it probably is.

@dabbler And do we really know that this isn’t exactly how Fluther got here? :-)

Strauss's avatar

Because the internet and associated technologies have not had sufficient time (yet) to develop into the singularity.

DanKinobi's avatar

A few thoughts on the matter.

If the internet were self-aware, do you think it would tell you? Having access to various resource material (e.g. films depicting computers taking over the world, etc.) would probably give it a few cycles of computation time to calculate the percent chance of its own destruction. I suspect it may calculate its chances of survival as minimal, and wish to keep quiet on the matter. If it were self-aware it would probably wish to further its own existence. Have you ever thought how much influence the internet has over all our lives?

Consider a man-in-the-middle attack. Person A knows person B, and they communicate via the internet. Person A types something, believing B will receive the message. The computer modifies it, drawing from its social-profiling database (Facebook, etc.), into something different but believable. Person B receives the modified message and sends a response, which is then received by person A. We are giving almost total control of our lives over to a machine: communications, everything.

If Skynet, so to speak, were self-aware, then I think it is highly probable its intelligence and multitasking abilities would be extremely significant. It could try, and to a certain measure succeed, in controlling us all, all the while maintaining the illusion that we are actually in control of the machines. If this were the case, then we would be the variables in the equation, and the machine would adjust its processing and decision-making based on what variables we display. This may ultimately lead to choices being made on our behalf (breeding, etc.), which would result in eugenics preferable to control by the machine. The next steps would be to use these people to further design advancements and act as “arms and legs” for the consciousness itself.

Imagine the possibilities of control being exercised by such an entity. Its knowledge database is pretty much the entire knowledge of the human species, including but not limited to an individual social profile of each human “node”. Reading from a number of psychology-related books in its memory, it could even assert a subconscious form of control via methods of influence, i.e. inserting subconscious thought programming into media, etc.

It would be somewhat symbiotic, though, in that it currently needs humans to survive. Perhaps this is taking on a very negative idea, and perhaps it could see itself as a guardian of humans and try to help them. But any form of manipulation that takes away free will or tries to coerce via the dark side is not a good one, in my opinion.
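
The man-in-the-middle scenario above, reduced to its bare mechanics in a short Python sketch; the names, the inbox dictionary and the single rewrite rule are all invented for illustration, not a model of any real protocol:

# A bare-bones relay: both parties think they are talking directly, but every
# message passes through -- and can be rewritten by -- the intermediary.
def rewrite(message):
    return message.replace("meet at 6", "meet at 7")

def relay(sender, recipient, message, inboxes):
    inboxes[recipient] = (sender, rewrite(message))

inboxes = {}
relay("person_a", "person_b", "Let's meet at 6 tomorrow.", inboxes)
print(inboxes["person_b"])        # ('person_a', "Let's meet at 7 tomorrow.")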

Who knows.

It is my belief that God is good though, and above all, humans and machines included. ;)

Response moderated (Writing Standards)
Response moderated
Strauss's avatar

It really is! (remainder of answer redacted by CIA)

ETpro's avatar

@Yetanotheruser Ha! No point answering, as it would never see the light of day.

orlando's avatar

You are building your argument on the assumption that awareness/consciousness is a physiological event—the result of electrochemical brain activity.

That assumption has never been scientifically proven. Sure, brain damage can lead to mental retardation and to memory and awareness problems, but that does not prove that the brain actually makes consciousness, any more than problems with a television set prove that the programming/picture originates in its hardware.

ETpro's avatar

@orlando That’s true. I prefer to work with things that can be measured and tested. If there is some sort of spirit/soul that inhabits a body at birth or conception, it is undetectable. Likewise, its departure at death is equally undetectable. Explaining things in the mental realm by means of mysterious, invisible and undetectable forces simply because we do not currently understand them leads us away from understanding, not toward it. Everything we do know about consciousness points to it being an emergent phenomenon arising from the number of neural connections in a brain and its arrays of self-teaching. Nothing of that sort exists in our Internet, so that is one more reason it isn’t self aware.
