General Question


Would the invention of an intelligent machine benefit or harm the progress of humanity?

Asked by Ibrooker (60 points) September 24th, 2008

Of course you need to define progress. I prefer the Turing Test as a measure of intelligence, although other suggestions are encouraged. Please explain your response.


18 Answers

sundayBastard:

It would totally depend on the inventor’s intention. I mean, humans harm the progress of humanity just fine on their own.

Harp:

If you’re setting the bar at the standard of the Turing test, then the machine would only need to be as intelligent as a human to qualify. We’re currently producing hundreds of millions of such intelligences each year, to no revolutionary benefit to our species. I’m not sure how the addition of an electronic equivalent would be transformative.

A key question would be whether, in creating such an intelligence, we would have created a sentient being worthy of the same considerations we afford ourselves. Some thinkers argue that any processing system endowed with the power to model its environment with sufficient complexity will necessarily recursively model itself; it would, in other words, possess a “self.” What’s the magic ingredient that confers ethical rights on an entity? Mind (what is that)? Human DNA? Is this something that we get to just arbitrarily decide? Why? Those are questions we would have to answer.

Would we then have the right to be its masters, to make it work for our benefit? Would we ever have the right to place it in danger, on a perilous mission, for instance? Would we ever be able to ethically terminate it, or even fail to maintain it? These are murky questions that we’ve had little experience in navigating. We haven’t even done a great job with the ethical questions with which we’ve wrestled for millennia.

To directly answer the question, then, I can’t see much in the way of practical benefit. The experience might help us understand ourselves better, but I think there are already ways of doing that.

Jreemy:

I agree with sunday. Intention is key. If we were able to create an intelligent machine that also had some level of compassion, whether it be programmed or not, and could learn, I believe that it would help. These machines could eventually be used to help researchers with finding solutions to problems. Given enough time, what were once machines, mere tools to be used by humans, would become a somewhat sentient species to stand next to man and help with the improvement of mankind.

sundayBastard:

The machine will look at our history and see our future and then give the answer to all of our problems…...Compute….compute…..compute…...cha ching….....THE EGO

drhat77:

The benefit an intelligent machine could confer that “churning out hundreds of millions of us” couldn’t: if we teach one machine something, that machine can “upload” everything it learned to millions of other machines, thus reducing the education period for an essential skill to zero. This would be the biggest benefit intelligent machines would confer.

Ibrooker:

So consider the fact that machines are built on hardware that can be updated at an insane rate. The point is that they could drive their own “evolution.” So, if the first generation began with the equivalent of a human consciousness, how long would it be until they began to expand upon it (indeed, granted only human-level consciousness, would it be able to)? Consciousness isn’t simply a threshold that one passes. Between species, I argue, there are different levels of consciousness, and perhaps even between individuals. If a dolphin is, say, 60% “conscious” (arbitrary), I am 99.8% conscious, and my friend is 99.9% conscious, it is easily conceivable that something could develop and harness a consciousness that is far more profound than anything a human being has reached.

Now, this is hard to imagine because, “What in the world would that be like?” This is sort of what the original question was driving at. Now consider us low-conscious individuals trying to imagine the mind of a high-conscious being. It would be comparable to a dolphin trying to anticipate a human’s behavior (or you can imagine your own comparison). Aside from the simple processing advantages that these machines might have (obvious near-instant computations, etc.), what sort of extra-sensory perception might they develop? Emotional characteristics? Problem-solving capabilities?

Now, carrying on with the scenario and entering a dangerous zone of abstraction, the generations would progress in a sort of exponential fashion, approaching something called the singularity, where nothing limits the growth except the physical boundaries of the universe. Would it ultimately be in the interest of the “Singular” minds to help humans? Would the machines lock themselves into some sort of zen-like state of perpetual euphoria for eternity while we simply stared at them and wondered why they didn’t seem to be working? Essentially, considering this “singular” machine as an extension of the universe, it becomes almost godly. How do gods treat us?

More eerily, if these machines are invented, but we progress down this same path – how do we treat ourselves?

Harp:

Hmm… You seem to have landed us squarely in the domain of Buddhism, albeit via a peculiar route. Was that your intent?

Harp:

Then I’ll refrain from pursuing that direction.

Ibrooker:

I’m curious…

Harp:

Briefly then, the “singular mind” you’re talking about is what Buddhists call “One Mind” or “samadhi”: unlimited consciousness that’s coextensive with the universe. Buddhists would strongly disagree, though, that samadhi is a state that’s attainable by cognitive powers, as in your scenario. They see One Mind as underlying all consciousness, as being common to all sentient beings in equal measure. To them, what appear to be the many individual consciousnesses are all manifestations of One Mind, and not essentially separate from it.

Buddhism sees cognition as being both the cause and the effect of the illusion of separate consciousnesses, so cognition can never transcend that illusion. Realization of One Mind is possible, though, by direct experience of reality, without processing that experience through thought. They would agree with you that One Mind is unimaginable, precisely because imagination is a form of cognition. But that doesn’t mean that One Mind can’t be experienced; many Buddhists (and non-Buddhists), beginning with the Buddha himself, have had first-hand experience with this. There’s plenty of anecdotal evidence of what kind of person that experience produces.

Jreemy:

We are the Borg. Lower your shields and surrender your weapons. We will add your collective distinctiveness to our own. Resistance is futile.

sundayBastard:

@Harp That’s what I think, and I’m far from a Buddhist.

sundayBastard:

or am I?.............

Harp:

To a Buddhist, nobody’s far from Buddhist.

mattw:

I think this is a good attempt at answering this question… http://www.hologramthoughts.com/2006/11/22/buddhist-robots/

stranger_in_a_strange_land:

We’ll have to dust off Asimov’s “Laws of Robotics.” The corporate world would like it: work 24/7, no breaks, no unions, switch them off when you don’t need them.
