Social Question

krrazypassions

Can we use Artificial Intelligence to discover laws of nature from scratch?

Asked by krrazypassions (1355 points) June 19th, 2011

In simpler words: can we use Artificial Intelligence to do science? Such a system would take only observations as input, process those facts to derive results, and generate the laws.

Why would we use AI for this? Because we often reach a dead end trying to explain weird or newly discovered phenomena with the existing laws. At such times we need to think outside the box, but we cannot, because of our existing beliefs. We tend to be narrow-minded due to our knowledge and beliefs. Free of such prejudice, an AI system could think outside the box!

By using ever-increasing computing power, such a system could consider all the possibilities rather than working with only a few, thereby discovering new laws.


17 Answers

Imadethisupwithnoforethought

I hope I am answering your question in some manner with the following:

I absolutely think we could.

However, ‘We tend to be narrow-minded due to our knowledge and beliefs’.

I believe we would reject the results, or consider the AI to be faulty, if the insights provided were too divergent from our knowledge and beliefs.

marinelife

I am not sure that AI could be programmed by humans to “think outside the box”. Our limitations would be programmed into the system.

zenvelo

I think AI would not be able to understand something completely new, nor have a way of investigating it other than to rule out what it is not. It would not have a sense of wonder to be able to define what it is.

Imagine if AI were the first “intelligence” to discover a platypus.

roundsquare

Absolutely we could. AI can write code, so it could in theory try every possible program until it found one that explains the data.

Realistically, we would have to limit the search space much more, and that would end up limiting the AI’s out-of-the-box thinking to some degree, but it could definitely come up with some out-of-the-box stuff.
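To make that concrete, here is a toy sketch of the “try every program” idea in Python. Everything in it is invented for illustration (the expression grammar, the observations, the depth limit), and a real system would need a far smarter search than brute force:

```python
import itertools

# Observations: (x, y) pairs secretly generated by y = x**2 + 1.
data = [(0, 1), (1, 2), (2, 5), (3, 10)]

leaves = ["x", "1", "2"]  # terminals of the toy grammar
ops = ["+", "-", "*"]     # binary operators

def candidates(depth):
    """Yield every expression string up to the given nesting depth."""
    if depth == 0:
        yield from leaves
        return
    yield from candidates(depth - 1)
    for op in ops:
        for a, b in itertools.product(list(candidates(depth - 1)), repeat=2):
            yield f"({a} {op} {b})"

def explains(expr, observations):
    """True if the expression reproduces every observation exactly."""
    try:
        return all(eval(expr, {"x": x}) == y for x, y in observations)
    except Exception:
        return False

# "Try every program until one fits the data."
law = next(e for e in candidates(2) if explains(e, data))
print(law)  # -> (1 + (x * x)), i.e. y = x**2 + 1
```

Even this toy space holds thousands of expressions at depth 2, which is exactly why the search space has to be limited in practice.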

@marinelife Yes, our limitations would be built in to some degree. But there are ways to leave enough space that it would come up with answers that humans would need out-of-the-box thinking to reach. (See the code-search sketch above.)

@zenvelo The platypus wouldn’t be a problem for a properly made AI. It would see it, try to correlate it with all known data, see no correlation, and make a new category.
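In code terms, the platypus case is just novelty detection. A toy sketch, with the animals, feature vectors, and threshold all made up:

```python
import math

# Hypothetical feature vectors: (has_fur, lays_eggs, has_bill)
known = {
    "beaver": (1.0, 0.0, 0.0),
    "duck":   (0.0, 1.0, 1.0),
}
THRESHOLD = 0.5  # how far an observation may sit from a known kind

def classify(name, features):
    """Match against known categories; open a new one if nothing is close."""
    nearest, dist = min(
        ((k, math.dist(features, v)) for k, v in known.items()),
        key=lambda pair: pair[1],
    )
    if dist <= THRESHOLD:
        return nearest
    known[name] = features  # no correlation with known data: new category
    return f"new category: {name}"

print(classify("platypus", (1.0, 1.0, 1.0)))  # -> new category: platypus
```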

By the way, if we did do this, the rules we get might be completely different. As humans we are biased by the fact that our minds are still basically the same as those of early humans whose main thoughts were water, food, and sex. A good AI might not have that bias and could come up with very strange rules indeed.

flutherother

Up to a point. I don’t think AI would be good at imaginative thinking, but it could check theoretical possibilities and eliminate, through trial and error, the ideas that don’t work. But then what if it comes up with advanced theories that are acceptable to its intelligence but that human beings can’t understand? What sort of validity would those ideas have?

Qingu

We already do use AI to discover laws of nature. Modern physics research would be absolutely impossible without computer algorithms.

As for an AI actually “figuring out” a whole theory the way a human puts the pieces together: sure, why not? Our brains are just computers, after all, running a suite of AI software. Though obviously AI isn’t quite there yet.

Ron_C

I think that you have the wrong idea of artificial intelligence. The state of the art is that it thinks “inside the box”. Sure, there are heuristic algorithms that adapt to the situation, but I doubt that artificial intelligence is ready to come up with new ideas and approaches to problems; that takes creativity, and creativity can’t be taught, only encouraged.

An artificial intelligence may have the ability to be taught new things, but I doubt the field is in a position to expect creative thinking.

An ideal situation for artificial intelligence would be to sift through data and categorize them in a way that would be meaningful to human researchers. The state of the art is far from artificial intelligence creating great music, art, or intellectual leaps.
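That sifting role is already within reach. A minimal k-means-style pass, for instance, can group unlabeled measurements into categories for a human to inspect; the numbers and the choice of three groups here are invented:

```python
# Minimal k-means: group unlabeled measurements into k categories that
# a human researcher could then inspect and name.
data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8, 9.9, 10.2]
k = 3
centers = [data[0], data[3], data[6]]  # simple deterministic start

for _ in range(10):  # a few refinement passes suffice here
    clusters = [[] for _ in range(k)]
    for x in data:
        nearest = min(range(k), key=lambda i: abs(x - centers[i]))
        clusters[nearest].append(x)
    # Move each center to the mean of its cluster (keep it if empty).
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print(sorted(round(c, 2) for c in centers))  # -> [1.03, 5.03, 10.05]
```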

Soubresaut

Why my first answer was too short… this is too long.

(—Can we? No. No way. Fun question, but not possible.
No, of course it could, when we get it to that point. It’s just a matter of time. And then we’d have thinking that’s free from societal prejudice, that wouldn’t be reliant on the discoveries, and so the inevitable errors, of previous generations. That without tiring, without break, would examine and observe, seeing all that we gave it the ability to see, and compiling it into unity.
But what if we don’t show it all there is? What if, just as we’re limited in perception, we limit it? That just because it sees more of us, doesn’t mean it sees all.
And what if it has no concept of a greater picture, and because of that, excludes various smallness from the different problems; things that seem irrelevant at the time, but that matter in the next problem, and irrelevancies in the next problem that matter in the first.
(Matter.)
It’s not a human. It has no way of eliminating; it must look at everything. That’s the beauty. Humans, we’re blinded by society, an IV drip we’re never free from, that creates our reality, and so we deny ‘impossible’ when it stares us in the eye. The only problem, truly, is if we as a society decide to fully trust or fully mistrust the AI.
Yes. If we trust it regardless because it will produce the ultimate truth, we’re no less blind. If we disregard the insanity it produces before stopping to see how sane the universe actually is, we’ll have gotten nowhere but more rooted to the fallacy that only humans can understand.
This is reminiscent of something… what…
Logic isn’t the only answer. Computers can only think in cold, calculating logic. The more we find out about the universe, the less logical it is. Space irrelevancy? Time irrelevancy? Logic is all about linearity, clear definites.
Perhaps, without the preconceptions imposed, logic will look different. It’s only as good as it’s been molded, and logic is very malleable.
What if the computer gets it wrong.
What if the computer gets it correct, but no one wants that reality. Chaos would ensue.
When is the truth ever bad? Better a bizarre reality than a comfortable lie. It’s just the prejudice against computers being able to conceptualize. If consciousness is able to be human-made, then it’s not special. That’s the objection.
No one’s saying the computer will be conscious. Just calculating. In math. Math works.
Okay. Yes. Math. Perfect. What about octonions? What about dimensions outside our conceivability?
AI doesn’t conceive. It won’t have that limit.
Stop stop this is reminding me of something.
We give it the rules. It’s only as good as the rules we give it. It doesn’t know they’re rules; it doesn’t know they may be wrong. To it, that’s reality, however much it isn’t cognizant of it.
42.
What? What? What? What?
The Hitchhiker’s Guide. The book. That in it, another civilization tried to answer the question of life, the universe, and everything. They distilled it into one problem, and fed that problem to the most complex, most intelligent, computer they could create.
That computer was dumb.
It was playing a joke on them.
They built it wrong. We wouldn’t.
It isn’t one question.
That it told them, come back in some long amount of time, and it would have the answer. 42. But now they needed the question. So they had to build a computer within a world, so the computer could observe. Or something. I read this a long time ago.
Yes, yes, and the answer was wrong. For the question.
Like 1984—2+2=5
No like the computer got it wrong.
Did it?
The numbers—6*8 is 48. It missed the 8th 6. Or the 6th 8.
In octonions, you know, you can’t just flip the numbers.
Whatever
Focus. There’s a question way up there. About AI. Let’s answer it as correctly as we know. Maybe it wasn’t wrong. The computer had them build a new computer inside a world… it had them build a world. Maybe it was so smart, it wanted them to come up with their own answer.
That’s insane.
Maybe not… maybe there’s something there.
Yeah. Maybe the universe is insane. Maybe it’s sanity that’s got it wrong.
Maybe Earth was the point. Maybe we were the point. But everyone was so focused on the answer, they missed it.
Exactly. This is why we need AI. It wouldn’t’ve missed that. All would’ve been collected as data.
The mice tried to cut open that guy’s brain… what was his name again? ...to get the answer.
Don’t remember. Bad at names. The brain wasn’t the answer. The life was. So much life, so intricately diverse. Beautiful.
And they blew it up. Five minutes before the answer.
Maybe that’s the point. Abolish ignorance before it abolishes truth.
Now you’re all full of it.
Maybe there is no point, then, if it can be so easily destroyed.
Maybe we shouldn’t wait.
Maybe we shouldn’t assume.
Maybe we shouldn’t depend.
Maybe we should check the paperwork to make sure no intergalactic highway crap or whatever is coming towards us.
…no need to be snarky.
Yeah. No need.
Maybe we need the computer to guide us, then.
Or maybe, it was just wrong.
Bengal tigers.
What?
Sorry. I’m off thinking about books now. That was a good one. Infinite.
…what’re we going to answer?
42. Let’s say 42.
Okay.
Yeah.
Yes, just 42. No one’s going to want to read all of this.
Maybe that’s what the computer thought.
Oh you’re sure all funny.)

gasman

One aspect of cognitive neuroscience is modeling behavior through AI programming designed to mimic some particular high-level cognitive behavior, such as composing music or playing word games. Success can be defined by the Turing test (the computer seems human to other humans) within some narrow range of interactive activity.

A good read is Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought by AI researcher Douglas Hofstadter, describing 1990s work on modeling various features of the mind. Usually there’s a good deal of recursion or self-reference built into AI software—Hofstadter’s “strange loops.”

I’d venture to guess that genomic bio-informatics—DNA cataloging & matching algorithms & such—probably has a high-level layer of AI. Linguistic work (voice recognition, language translation) certainly represents AI. No doubt AI is creeping into many fields, but never as advanced and, um, human as the sci-fi scenarios we wish for.

Beyond neuroscience you’re proposing to harness AI to do other kinds of science, which is intriguing. Science cycles between deductive reasoning (for testing hypotheses by examining evidence) and inductive reasoning (for proposing new hypotheses). The latter step involves creative thought, once considered the exclusive domain of human minds.

Now computers generate passable music and poetry and perform other acts of artificial creativity. It’s certainly plausible that a future “expert system” could propose scientific hypotheses and perhaps even suggest experimental design for testing them. (Approval of funds will still come from humans!)

Meanwhile there are ongoing developments (mini-breakthroughs, in some cases) in quantum computing, photonics, molecular memories, etc—some of which no doubt foreshadow the real future of computing.

It’s suggested that the next few decades could bring a technological “singularity” to civilization: once computers become intelligent enough to design better computers, the emergence of some kind of artificial super-intelligence surpassing humans is inevitable. This might be good or bad, according to Ray Kurzweil and other singularitarians, who also have their critics… At any rate, if that occurs then, I suppose, science could indeed proceed using our machine friends to usher in an unprecedented age of scientific discovery, along with remarkable advances in robotics. Then humanity comes to a mysterious and sudden end…

hiphiphopflipflapflop

The real problem is that it is all contingent on nature rather than logical necessity. A telltale moment in physics was the discovery of the muon, which occasioned a famous outburst from I. I. Rabi: “Who ordered that?” The first generation of quantum physicists hoped to create a theory of photons, electrons, and protons that would sew everything up in a nice tidy package. But nature ended up being much more complicated. We don’t lack for brains right now; we lack the experimental data that will take us much beyond the Standard Model.

hiphiphopflipflapflop

The Key to Science

If your guesses lead to consequences for which there is no experimental check, then you haven’t really discovered a new physical law yet.

hiphiphopflipflapflop

My understanding is that, as much as string theory has been the bandwagon since the ’80s, they are still not even able to compute any consequences! They appear to have lost themselves among the 10^500 or so different ways their extra spacetime dimensions could compactify into our spacetime. This leaves them in a rather sorry position, as I’m sure Feynman would let them know, bluntly, if he were still alive today.

Qingu

I wouldn’t discredit string theory. Right now string theory is basically in the realm of math. But so was relativity for a long time before it was experimentally verified. Ideas in string theory also inform how we think about, for example, black holes (see the holographic principle).

Like other mathematical concepts, string theory is more a tool for physicists to use in their science, rather than a science in its own right.

mattbrowne

Not before 2040. And passing the Turing test does not necessarily mean being a creative genius like Einstein. As we all know, we can’t solve problems by using the same kind of thinking we used when we created them. Future AI will have to evolve intellectually.

roundsquare

@mattbrowne I wouldn’t necessarily correlate the Turing Test with the kind of AI being talked about here. In general, AI has two separate tracks (though they do overlap):
Track 1: Getting computers to be “smart” i.e. to solve problems.
Track 2: Getting computers to be more “human” i.e. to act like we do.
The Turing Test is about Track 2, but doing science falls under Track 1. If we get computers to do science under Track 1, it would probably look a lot different from how we do it. E.g. in biology, it would probably not recreate “Kingdom, Phylum, etc…” but have a different way to classify things (or maybe not even have discrete categories at all).

I, for one, would not be surprised if we could build an AI now that could take the trajectories of various projectiles and come up with something equivalent to Newtonian mechanics.
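Something close to this has actually been done: Schmidt and Lipson (Science, 2009) distilled equations of motion from raw experimental data. Here is a toy flavor of the idea with invented trajectory samples: take heights at fixed time steps and compute second differences; a constant value is the signature of a uniform-acceleration law:

```python
# Invented projectile data: y(t) = 20*t - 4.9*t**2, sampled at dt = 0.1 s.
dt = 0.1
times = [i * dt for i in range(8)]
heights = [20 * t - 4.9 * t ** 2 for t in times]

# First differences approximate velocity; second differences, acceleration.
velocity = [(b - a) / dt for a, b in zip(heights, heights[1:])]
accel = [(b - a) / dt for a, b in zip(velocity, velocity[1:])]

# A constant second difference suggests the law d2y/dt2 = -g:
# uniform acceleration, the core of Newtonian projectile motion.
print(accel)  # every entry is -9.8, up to floating-point error
```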

mattbrowne

@roundsquare – But an AI computer that’s truly creative can also master human language. The more serious issue is this: an AI can’t easily start from scratch. We have to feed it the existing knowledge accumulated over centuries. Einstein would not have been able to come up with E = mc^2 without first understanding what E or m is. So an AI able to come up with a ToE, for example, doesn’t start empty and explore the world around us from scratch, observing apples falling from trees. The best way to do the initial upload of all that knowledge is natural language, so it makes sense for the AI to pass the Turing test first.
