General Question

squirbel's avatar

How did consciousness arise from the brain?

Asked by squirbel (4297points) April 7th, 2011

Looking at the concept from an evolutionary standpoint:

- How did consciousness arise from brain matter?
- Why did consciousness arise from brain matter?
- When did consciousness arise from brain matter?
- Do animals, which also have brain matter, have consciousness? Why or why not?

Thanks for your input.


19 Answers

nikipedia's avatar

Define consciousness please.

josie's avatar

Isn’t that sort of like asking “How did color and stereoscopic vision arise from the eyes”?
Consciousness is a function of the brain. The fancier the brain, the greater the range of consciousness.
The level of consciousness is established by the amount of information you can actually process, by the level of discrimination, and by the variations in response.
You cannot know the extent of other critters’ consciousness other than by how they react to stimuli, and which stimuli they react to.

cockswain's avatar

I think it is a phenomenon related to brain size and shape that occurred naturally, like any other result of a species’ evolution. I don’t think it’s exclusively a human phenomenon; any sentient creature is conscious/aware to varying degrees relative to humans. As @nikipedia points out, though, it’s really difficult to discuss without definitions.

Coloma's avatar

Maybe consciousness does not arise from the brain at all.
Maybe it is INSTILLED into the brain by its source.

I see a difference between what could be called biological consciousness and intrinsic consciousness.

cockswain's avatar

@Coloma Are you suggesting that, given adequate brain capacity, it will be able to “detect” consciousness as an external force in the universe?

Qingu's avatar

We actually have a pretty good outline of the answer!

Bear with me. Let’s start with the most basic nervous system. Something like a starfish, or even a jellyfish. What is the point of this nervous system? Why would it evolve?

Basically, the nervous system is a way for an organism to process sensory inputs and output a fitting behavior. All organisms do this, even single cells. But a cell may only be able to measure a few kinds of inputs, like “hot here, cold here… move towards cold.” Plants grow towards light, and they don’t have nervous systems. So the concept isn’t magical or anything; it’s everywhere already.

The advantage of a nervous system over something like a plant is: (1) it’s faster, can more directly guide responses to stimuli, and (2) it can internalize more diverse kinds of stimuli. Even basic animals like starfish can recognize stimuli like heat, light, and touch.

And this gets into a central idea about nervous systems and brains that is so easy to overlook. The brain categorizes information about the outside world. The evolutionary purpose of the brain is to take incoming stimuli—light, heat, touch, sound—and chop it up into categories. For primitive animals, those categories are going to be very simple. “Bright/dark. Hot/cold. Loud/quiet. Hard/soft.”

But then, it’s not hard to see how a brain that can subdivide more categories offers better evolutionary advantages. If a starfish can recognize not just hard/soft, but brittle/smooth/fleshy, it’s going to be better at finding tasty mussels to eat. If a jellyfish can recognize not just “bright/dark,” but shapes of reflected light—with certain shapes hardwired to “flee” or “eat” behavioral responses—it can survive better.

So here we have these brains that are pressured, by evolution, to come up with more and more precise ways to divide up and categorize information about the outside world, with each category wired to a behavioral response. And it’s not just the outside world either. Stimuli can include goings-on inside the animal body. Out of energy? The brain categorizes this as the sensation of hunger. Hunger makes your body try to find food. Out of water? Thirst. Body, find water.
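As a toy illustration of the wiring described above (my sketch, not from the thread), a nervous system can be thought of as chopping raw readings into categories, each hardwired to a response. The category names, thresholds, and response table here are all invented for illustration:

```python
def categorize(temperature, energy):
    """Chop raw external/internal readings into coarse categories."""
    categories = set()
    categories.add("hot" if temperature > 30 else "cold")
    if energy < 0.2:            # low energy reserves register as "hunger"
        categories.add("hungry")
    return categories

# Each category is hardwired to a behavioral response.
RESPONSES = {
    "hot": "move away",
    "cold": "stay",
    "hungry": "seek food",
}

def respond(temperature, energy):
    """Map the current category set to its wired behaviors."""
    return sorted(RESPONSES[c] for c in categorize(temperature, energy))

print(respond(35, 0.1))  # hot and hungry: ['move away', 'seek food']
print(respond(10, 0.5))  # cold, well-fed: ['stay']
```

Finer-grained categories (brittle/smooth/fleshy instead of just hard/soft) would simply mean more entries in the table, and a selective advantage for the brain that can maintain them.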

When we get to vertebrates (and maybe even before then—obviously some invertebrates like octopuses are conscious), brains have evolved the ability to categorize all sorts of stimuli: shapes and colors of light reflections, a whole gradient of heat, varied sound patterns (or, relatedly, the lateral line in fish), hunger, thirst, pain, the desire to reproduce. If there is a pattern out there, or in your body, that it is evolutionarily advantageous for your body to respond to, the brain is under evolutionary pressure to come up with some way to categorize it.

So that’s where sensation comes from. But consciousness is a little trickier. Where does the sensation of “I” come from? Of some sort of internal narration, a train of thought?

The answer is that consciousness is the brain’s way of categorizing its own activity. It’s a feedback loop, like when you point a video camera at the TV it’s hooked up to. Your sense of identity is your brain’s way of categorizing, and behaviorally responding to, fluctuations in its own activity. It is a great example of an emergent phenomenon.
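A crude way to sketch that feedback loop in code (purely illustrative; the labels, thresholds, and update rule are made up): the system's input at each step includes its own label for its previous activity, so the same external input can produce different behavior depending on what the system "thinks" it was doing.

```python
def categorize_activity(activity):
    """The system labels its own internal state, as the brain labels stimuli."""
    return "busy" if activity > 5 else "idle"

def run(inputs):
    """Each step depends on the world AND on the system's label for itself --
    the feedback loop: output feeds back in as a categorized input."""
    activity, history = 0, []
    for external in inputs:
        label = categorize_activity(activity)                # observe itself...
        activity = external + (2 if label == "busy" else 0)  # ...and feed it back
        history.append((external, label, activity))
    return history

for step in run([1, 6, 6, 1]):
    print(step)
# The external input 1 appears twice but yields different activity,
# because the self-label differs the second time around.
```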

At least, that is what Douglas Hofstadter argues in his book, I Am a Strange Loop. But it totally convinced me, and if you’re interested in this question, I’d highly recommend it.

Coloma's avatar

@cockswain

Something like that. Brains regulate body functions and experiences.
Egos regulate whether or not someone can create enough internal silence/space to feel the truth of consciousness in its totality.

Not just through the filters of the brain but on a deep and intrinsic level beyond form.

LostInParadise's avatar

The nature of consciousness is an as-yet-unsolved problem. There have been some interesting discoveries in neuroscience. Some of the popular literature on the subject, some of it written by neuroscientists, with titles like “Consciousness Explained”, may give the impression that they have figured it all out. They have not, and IMHO are not likely to do so any time soon. If and when the explanation comes, I predict it will start a controversy of sufficient magnitude to make the disagreements over evolution look minor.

Qingu's avatar

@LostInParadise, I think you’re exaggerating the mysteriousness of consciousness.

A lot of the rhetoric about consciousness, at least around nonmagical, material explanations, seems to assume that the only way consciousness can be explained is by reducing it to some observable brain function: as if there were a part of the brain, or a pattern on an MRI, that we could point to and say “there’s consciousness.”

This isn’t how emergent phenomena work. Reductionism is a strawman of emergence.

For example, storms are an emergent phenomenon. They are large, complex systems that “emerge” from many, many simple interactions on a smaller scale (i.e. water molecules in temperature gradients). We know the physics of how these water molecules work, how and why they arrange themselves into clouds, and the forces that drive the clouds to form large-scale storms, and we can even predict the “behavior” of storms to some extent. But we can’t really forecast storms beyond a week, because they’re chaotic. We cannot map out every single interaction that leads to a storm ten years from now exactly; that’s the nature of a complex system.
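The sensitivity that kills long-range forecasts can be demonstrated with a standard toy chaotic system, the logistic map (a stand-in for storm dynamics here, not a weather model): two trajectories started a millionth apart end up bearing no resemblance to each other.

```python
def diverge(x, y, r=4.0, steps=50):
    """Run two logistic-map trajectories side by side; return |x_n - y_n|
    at every step. At r=4 the map is fully chaotic."""
    gaps = []
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gaps.append(abs(x - y))
    return gaps

gaps = diverge(0.3, 0.3 + 1e-6)  # initial conditions differ by one millionth
print(gaps[0], max(gaps))        # tiny at first; the gap then blows up
```

The rules are perfectly known and deterministic, yet exact long-term prediction is hopeless: the initial-condition error roughly doubles each step until it is as large as the system itself.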

Consciousness is the same way. We know a shitload about how nerves work (like water droplets in a storm), down to the chemical/physical level. We know how these nerves arrange themselves into large-scale brains, and to some extent how their activity corresponds to large-scale sensations. We have a good sense of how patterns of nerves evolved from simpler organisms.

But we will never find consciousness on the “level” of nerves, just as we will never find macro-scale storm phenomena on the level of water droplets or quantum physics. It doesn’t exist at that level; it emerges from that level. But that doesn’t mean it’s an inexplicable mystery.

Vortico's avatar

@Qingu Thank you. You’ve suggested a few answers to questions I ask myself daily. Are you saying that consciousness is a sort of introspection of the sensory/reaction process, instead of the human’s ability to react emerging from our consciousness? That actually makes more sense to me because it gives an explanation for the source of consciousness.

What is really interesting is that consciousness of a man-made machine may actually be achieved within the next half-century. (source)

LostInParadise's avatar

@Qingu , Being emergent is not a catchall explanation for things. We know a fair amount about the way storms arise, but the butterfly effect limits our ability to predict them more than a week in advance.

As of now, we cannot even theoretically design a machine capable of consciousness. Nobody has built anything that can pass the Turing test. I am sure that consciousness is an emergent phenomenon, but it is not something that happens simply by throwing together a sufficient number of neurons. The brain has been shown to have very localized and specialized components. It is still to be determined how they come together to create consciousness.

Qingu's avatar

@Vortico, it’s tricky to explain well. Sort of. But the “level” is also important. As an analogy: everyone learns about chemical bonds. You can draw a molecule by putting the atoms together and drawing sticks for their bonds. You can classify these bonds as ionic, metallic, etc.

But in reality, bonds don’t exist—at least not at a fundamental level. In quantum mechanics, which describes how atoms and molecules work fundamentally, there is no such thing as a “bond.” It’s just electrons occupying and rearranging certain energy states within probabilistic clouds around the nuclei. The bond is a convenient illusion we use to describe how atoms and molecules work at a macroscopic (compared to QM) scale.

But chemical bonds are not an “illusion” in a pejorative sense. They don’t exist on a fundamental level, but they describe patterns that certainly “exist” in the sense that we can quantify them and use them to predict things. The patterns of chemical bonds emerge from the underlying quantum-mechanical patterns… and the chemical-bond patterns seem to occupy another “level” of reality with its own rules and descriptions, one that doesn’t exist at the lower level of QM.

What Hofstadter suggests is that the feedback loop between the brain’s activity and its categorization of its own activity also gives rise to another “level” of patterns—which we call, and intuitively experience as, consciousness.

Qingu's avatar

@LostInParadise, but by that logic, storms remain mysterious because we cannot design a machine capable of producing storms (or predicting them long-term).

I don’t think understanding a phenomenon is contingent on being able to design a machine that replicates the phenomenon.

And in any case, we have designed machines that are just as advanced as insect nervous systems.

LostInParadise's avatar

We can simulate storms.
Insects do not have consciousness.
To demonstrate understanding of the way something works, we should be able to at least come up with a theoretical design, even if the actual realization of the design is not possible.

nicobanks's avatar

@Qingu Isn’t the basic principle behind science that unless we’re able to repeat something, there’s something incorrect or at least incomplete about our understanding of it?

Qingu's avatar

@LostInParadise, we do have a theoretical design for how consciousness works. We don’t have all the pieces, certainly, but we definitely have some of the big ideas worked out. Maybe a better comparison than storms in general is the Great Red Spot in particular. We know the basic theoretical outlines of why this storm exists; it’s an anticyclone that “feeds” off of other storms that pass by, absorbing their energy. But there are still mysteries: why is it red? How deep does it go? How long has it been in existence?

@nicobanks that’s a basic principle, but it definitely doesn’t apply to many areas of science exactly. Take black hole radiation, for example. We’re not going to replicate that in a lab. It’s based on multiple observations, tied together in a theoretical framework.

We also can’t fully replicate how protein folding works in a lab, but we understand it quite well (and we’re getting there).

LostInParadise's avatar

Some of the “small” pieces we do not know are how memories are stored and retrieved, how reasoning is done and how values are assigned to possible decisions. I do not wish to belittle the work that has been done, but sometimes neuroscientists get carried away with what they have accomplished.
