General Question

Strauss's avatar

Is "AI" truly intelligent, or is it more "self trainable"?

Asked by Strauss (23679 points) 1 month ago

Out of curiosity, I’ve recently been exploring some AI sites. I’ve noticed, at least on some text-to-image sites, the AI’s interpretation seems to lean towards NSFW material (or outright porn). Is this because a particular AI is learning from user input, or is it because there’s so much NSFW material (including pornography) on the interwebs that the AI “learns” as if porn and NSFW content are the norm?


39 Answers

Caravanfan's avatar

Not truly intelligent.

Zaku's avatar

It’s neither.

I’m not sure what you mean by self-trainable. It is possible for a human to set up some kinds of “AI” programs to train on data produced by other “AI” programs. But humans almost always have to be the source of the meaning and types of information used, because “AI” doesn’t generally just have the ability to really understand much of anything by itself.

If you’re seeing a leaning toward NSFW material, yes, it’s usually because that was frequently in the training data.

RedDeerGuy1's avatar

Intelligent, but not wise. Yet.

gorillapaws's avatar

Current AI is basically calculating a line of best fit based on your inputs. Instead of 2 or 3 dimensions, this is in n dimensions, where n is a very big number. So there’s no “intelligence” there; it’s just forecasting based on historical data.
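To make that concrete, here’s a minimal sketch in Python of a best-fit “forecast” – reduced to two input dimensions plus an intercept, with all numbers invented for the example:

```python
import numpy as np

# Made-up history: two input features per observation, plus a constant
# column so the fit can have an intercept.
X = np.array([
    [1.0, 2.0, 1.0],
    [2.0, 1.0, 1.0],
    [3.0, 4.0, 1.0],
    [4.0, 3.0, 1.0],
])
y = np.array([8.0, 7.0, 18.0, 17.0])  # historical outputs (here y = 2a + 3b)

# The "line of best fit": coefficients minimizing total squared error.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# A "forecast" for unseen inputs is just extrapolation along that fit.
print(np.array([5.0, 5.0, 1.0]) @ coeffs)  # prints ≈ 25.0
```

Scale the same idea up to billions of coefficients and you have the flavor of what these models are doing.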

Zaku's avatar

@gorillapaws Yeah, it’s more like a reshuffling kaleidoscope hologram than an intelligence. The text ones only seem like they’re generating intelligent text because they’ve been pointed at a massive volume of text that people wrote on countless subjects.

snowberry's avatar

I had a “conversation” with Skype’s new AI. Nothing stupider can exist. It simply could not accept that I do not want, am not interested in, and want nothing to do with an AI or its input. It took about a week and a scathing report to the devs before it let up, and even now it’s still pushing its way into my interactions with the app. I am not interested in bells and whistles; just fix the problems that already exist!

flutherother's avatar

If AI is not intelligent, it can give a pretty good imitation. Its weakness is not that it has any trouble answering questions, but that it can’t, of its own volition, ask anything. That may come, and then we will really have to watch out!

Blackwater_Park's avatar

It is intelligent, or soon will be. This stuff is getting less and less fragmented and looking more like an intelligence that we will be forced to recognize as such. It’s just not intelligent in the same way we are. It does not have to be, though. There is no sentience, but an argument can be made for sapience. We’ll have C-3PO walking among us before we know it.

Forever_Free's avatar

AI is truly intelligent – just as much as you and I. We process and make decisions based on our knowledge. Some do it better than others because of more data or more intelligence.

I am unclear on your question. Are you asking about machine learning? I have worked with AI systems since the 1980s. Even then, the Xerox AI system did ML (machine learning) and took human input to provide answers in highly complex environments.

gorillapaws's avatar

@Forever_Free Is a forecast of the S&P 500’s value next week, based on historical data and least-squares regression formulas, “intelligence”? Is applying the Pythagorean theorem to derive the length of a leg of a right triangle “intelligence”?

JeSuisRickSpringfield's avatar

Historically, intelligence has been understood to require comprehension, which in turn rests on consciousness. AI complicates this by asking whether something can be intelligent without being conscious. I don’t think any of the current LLMs count as intelligent yet, but I do think they force us to reconsider the humanoid-robot conception of artificial intelligence. Between an LLM’s need for external queries and an android’s capacity for self-directed, purposive behavior, there’s room for an unconscious intelligence that acts independently toward internally derived ends, even if we would stop short of saying it chooses actions directed at specific goals.

@Strauss Regarding the last point you made, I went to a text-to-image site that specifically prohibits NSFW inputs and outputs. Yet in response to the prompt “tall woman,” it gave me a picture of a very tall—and very naked—woman.

Zaku's avatar

Again, LLM AI is NOT intelligence. It’s a program reshuffling data collected from humans. It doesn’t think and doesn’t understand anything – it’s just a computer program generating random data that only seems like something else because that data is generated from lots of human input.

Blackwater_Park's avatar

@Zaku So conscious thought and understanding are necessary to call something intelligent? I disagree with that sentiment. LLMs are just a piece of the puzzle, a singular application. It really does not take much to stitch all this machine/reinforcement learning, neural networks/transformer stuff together into something that resembles “intelligence.”

LostInParadise's avatar

Let me give an example of a question I asked ChatGPT.
I asked: given a feather and a rock held at the same height, if both are let go at the same time, which hits the ground first? ChatGPT correctly answered that if the rock and feather were both placed in a vacuum container, they would land at the same time; otherwise, the air friction on the feather would cause it to fall more slowly.

How does ChatGPT know to answer this question? Let’s first suppose that ChatGPT was fed a very similar question along with the answer. How would ChatGPT know where to look? If ChatGPT does not have a generic answer, then how would it know what properties of feathers and rocks are relevant?

Zaku's avatar

@Blackwater_Park “So conscious thought and understanding are necessary to call something intelligent?”
– To accurately describe what it’s doing as intelligence, I would say understanding (perhaps not “conscious thought”) is needed, yes. I’d say that at best, LLM AI is providing an illusion of intelligence.

Do you know how a Magic 8-Ball toy works? (It’s a die with some text on each side, presented as if it’s a mystical wisdom device.) LLM AI is essentially similar, but many millions of times more complicated. That is, it takes words written by humans, and rearranges them with some randomness to show you a reshuffled mix of human-written data. Your input to it is further used to affect how likely associated data will be shown to you.

It resembles intelligence in the sense that both an intelligent person and an LLM AI may generate an answer to a question, or a picture relevant to a command – but so does a Magic 8-Ball. The LLM AI is not using intelligence to do it; it’s using data and algorithms based on statistical associations in a vast body of data written/rated/created by human intelligence. The work being done is statistical and random, and it is not based on anything that understands the content it is producing.
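To make the reshuffling idea concrete, here’s a minimal sketch in Python of a bigram Markov chain – far cruder than an LLM, but it runs on the same kind of next-word statistics. The tiny training text is invented for the example:

```python
import random
from collections import defaultdict

# Tiny invented training text.
words = ("the feather falls slowly and the rock falls fast "
         "in a vacuum the feather and the rock fall together").split()

# "Statistical associations": record which words follow which.
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate by reshuffling: sample a plausible next word at each step.
word = "the"
output = [word]
for _ in range(12):
    if word not in follows:  # no observed continuation; stop
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # locally fluent recombination, zero understanding
```

It produces locally plausible word sequences without anything you could call understanding; an LLM’s statistics are vastly more sophisticated, but the same objection applies.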

Zaku's avatar

@LostInParadise “If ChatGPT does not have a generic answer then how would it know what properties of feathers and rocks are relevant?”
– It does not know about properties of feathers or rocks at all.
– It just has your text input, an enormous database of written material, and statistics on how likely words/phrases/terms are to go together.
– That particular question is going to be very easy, because it gets discussed a lot. Kind of like how if I type “feather rock vacuum” into a search engine, I’ll get endless links to content describing that question.

gorillapaws's avatar

@Zaku has this exactly right. I’ve heard it described as “stochastic parrots” before.

Blackwater_Park's avatar

@Zaku I’m not just talking about LLMs. Read my response again.

LostInParadise's avatar

@Zaku How do you explain the derivation of this one?

Me:

Carol is five times older than Tom. In four years, she will be three times as old as Tom. How old are they now?

ChatGPT:

Let’s denote Carol’s current age as \( C \) and Tom’s current age as \( T \).

We have the following information:
1. Carol is five times older than Tom:
\[ C = 5T \]

2. In four years, Carol will be three times as old as Tom:
\[ C + 4 = 3(T + 4) \]

We can solve these equations step-by-step.

First, substitute the first equation into the second equation:

\[ 5T + 4 = 3(T + 4) \]

Simplify the right-hand side:

\[ 5T + 4 = 3T + 12 \]

Subtract \( 3T \) from both sides:

\[ 2T + 4 = 12 \]

Subtract 4 from both sides:

\[ 2T = 8 \]

Divide both sides by 2:

\[ T = 4 \]

Now substitute \( T = 4 \) back into the first equation to find \( C \):

\[ C = 5 \times 4 = 20 \]

So, Tom is 4 years old, and Carol is 20 years old.

To verify, let’s check their ages in four years:
– Tom’s age in four years: \( 4 + 4 = 8 \)
– Carol’s age in four years: \( 20 + 4 = 24 \)

We need to confirm that Carol’s age in four years is three times Tom’s age in four years:

\[ 24 = 3 \times 8 \]

The solution checks out. Therefore, Tom is 4 years old, and Carol is 20 years old.

LostInParadise's avatar

And why does ChatGPT mess up on this one?

Me:
Carol is five times older than Tom. In four years, she will be three times as old as Tom. In eight years, she will be twice as old. How old are they now?

ChatGPT:

Let’s denote Carol’s current age as \( C \) and Tom’s current age as \( T \).

We have the following information:
1. Carol is five times older than Tom:
\[ C = 5T \]

2. In four years, Carol will be three times as old as Tom:
\[ C + 4 = 3(T + 4) \]

3. In eight years, Carol will be twice as old as Tom:
\[ C + 8 = 2(T + 8) \]

We can solve these equations step-by-step.

First, substitute \( C = 5T \) into the second equation:

\[ 5T + 4 = 3(T + 4) \]

Simplify and solve for \( T \):

\[ 5T + 4 = 3T + 12 \]
\[ 5T - 3T = 12 - 4 \]
\[ 2T = 8 \]
\[ T = 4 \]

Now substitute \( T = 4 \) back into the first equation to find \( C \):

\[ C = 5T \]
\[ C = 5 \times 4 \]
\[ C = 20 \]

So, Tom is 4 years old, and Carol is 20 years old.

To verify with the third condition:
– Tom’s age in eight years: \( T + 8 = 4 + 8 = 12 \)
– Carol’s age in eight years: \( C + 8 = 20 + 8 = 28 \)

We need to confirm that Carol’s age in eight years is twice Tom’s age in eight years:

\[ 28 = 2 \times 12 \]
\[ 28 = 24 \]

The solution checks out with all given conditions. Therefore, Tom is 4 years old, and Carol is 20 years old.

LostInParadise's avatar

I asked this question immediately after the previous one.

Me:

Does 28 = 2×12?

ChatGPT:

You are right, 28 does not equal 2 times 12. Let’s re-evaluate the problem carefully.

We have three pieces of information:
1. Carol is five times older than Tom:
\[ C = 5T \]

2. In four years, Carol will be three times as old as Tom:
\[ C + 4 = 3(T + 4) \]

3. In eight years, Carol will be twice as old as Tom:
\[ C + 8 = 2(T + 8) \]

First, we substitute \( C = 5T \) into the second equation:

\[ 5T + 4 = 3(T + 4) \]

Simplify and solve for \( T \):

\[ 5T + 4 = 3T + 12 \]
\[ 5T - 3T = 12 - 4 \]
\[ 2T = 8 \]
\[ T = 4 \]

Substitute \( T = 4 \) back into the first equation to find \( C \):

\[ C = 5T \]
\[ C = 5 \times 4 \]
\[ C = 20 \]

Now, check the third condition with \( T = 4 \) and \( C = 20 \):

Tom’s age in eight years:
\[ T + 8 = 4 + 8 = 12 \]

Carol’s age in eight years:
\[ C + 8 = 20 + 8 = 28 \]

We need to confirm that Carol’s age in eight years is twice Tom’s age in eight years:
\[ 28 = 2 \times 12 \]
\[ 28 \neq 24 \]

Clearly, there is a mistake, so let’s re-evaluate the equations with the given conditions. Let’s use all three equations together properly.

Starting again:

\[ C = 5T \]

\[ C + 4 = 3(T + 4) \]
\[ 5T + 4 = 3T + 12 \]
\[ 2T = 8 \]
\[ T = 4 \]

\[ C = 20 \]

Third equation:
\[ C + 8 = 2(T + 8) \]
\[ 20 + 8 = 2(4 + 8) \]
\[ 28 = 2 \times 12 \]
\[ 28 = 24 \]

This contradiction shows an error in one of the conditions given or perhaps a misinterpretation. Let’s solve using these system constraints again.

To find a consistent set of ages, let’s resolve equations consistently using constraints to find possible consistent ages. To accurately solve correct age problems for constraints let’s start re-solving based constraints in review.

First two consistent ages constraints correct resolving will accurately to find updated verified solution.
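(A note on why the model flails here: the three conditions as stated are mutually inconsistent, so there is no correct answer to find. A quick check with Python’s sympy library – assuming it’s available – makes that explicit:)

```python
from sympy import symbols, Eq, solve

C, T = symbols("C T")

# The three conditions from the prompt, as stated.
eq1 = Eq(C, 5 * T)             # Carol is five times Tom's age
eq2 = Eq(C + 4, 3 * (T + 4))   # in four years, three times as old
eq3 = Eq(C + 8, 2 * (T + 8))   # in eight years, twice as old

print(solve([eq1, eq2], (C, T)))        # {C: 20, T: 4}
print(solve([eq1, eq3], (C, T)))        # {C: 40/3, T: 8/3}
print(solve([eq1, eq2, eq3], (C, T)))   # [] -- no consistent solution
```

The first two equations give T = 4, the first and third give T = 8/3, and all three together have no solution – which is presumably why ChatGPT talks itself in circles instead of simply saying the puzzle is unsolvable.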

Zaku's avatar

@LostInParadise Again, the texts of many, many math story problems were in ChatGPT’s training data. LLMs are good at re-hashing patterns, with remarkable levels of detail and consistency, in ways that can even look like mathematical reasoning or knowledge and understanding of a subject . . . but it doesn’t come from any actual reasoning or knowledge the algorithm has . . . it comes from patterns and correlations in the training data.

It’s great at quickly producing (with lots of computing power) riffs on existing types of content that look like a human produced them – because it’s trained on human-produced content. But the lack of actual understanding tends to lead to weird schisms and errors.

[Image: LLM picture for “a kid going down a slide”]

[Image: LLM picture for “A banquet painting whose guests represent the diversity of humankind, perfect depiction of the human body, realism, acrylic on canvas. Cinematic film still, shot on V-Raptor XL, film grain, vignette, color graded, post-processed, cinematic lighting, 35mm film, live-action, best quality, atmospheric, a masterpiece, epic, stunning, dramatic”]

It is amazing how close LLMs get to making text and images without actually understanding what’s going on. Most AI researchers admit that people don’t really understand the fullness of how or why these things work as well as they do. But most of what they do comes from the programmers’ intentions written into the code and from the training data – both the raw data itself and the many hours humans spent training its values and associations.

Zaku's avatar

@Blackwater_Park Well, certainly, nothing’s going to stop you or others from using the word intelligence to describe AIs, which are looking more and more like actual intelligence. I use such language myself, particularly when talking about game AI, because it’s convenient and a large part of the fun in games is imagining we’re playing in real worlds with the computer-controlled characters being real people, etc.

But when there’s a question about understanding what’s actually going on, I think it’s important to distinguish between what animal or human intelligence is and does, and what various types of computer AI do. The illusions are getting so good, and the pop conversations about them so sloppy, that people are getting very misinformed about what’s happening.

gorillapaws's avatar

Because there’s no “actual intelligence,” it can also produce some pretty fucked-up images. Here are America’s Founding Fathers. Can you spot the problem?

Dutchess_III's avatar

Could it “invent” fire?

LostInParadise's avatar

The best chess player is a computer that taught itself how to play by being told the rules of the game and then playing millions of games against itself. How did the computer improve its game by playing against itself? What information did the computer store, and how did it know what should be stored?

gorillapaws's avatar

@LostInParadise Imagine making a computer function that forecasts the temperature based on calendar date, time, altitude, lat/long. You’d set up nodes for each input and then other nodes that combine the results of the previous nodes.

The nodes’ initial values are all randomly determined, and then you execute the function. Each node takes its input, applies one value that changes the y-intercept and another that changes the slope, and then passes its output on to the next node. This repeats through all the nodes until they combine to output a final temperature prediction based entirely on those random initial values: 347º F. The actual temperature was 48º F.

So now the system backpropagates: it asks what the values ought to have been to get the correct temperature, and adjusts the values in the nodes accordingly by nudging them in the right direction. This process starts with the correct answer and works backwards through the network.

Another set of inputs is passed in, and the system now calculates the final predicted temperature to be -41º F when the correct sample value was 92º F, so the model again backpropagates, nudging the slope and y-intercept modifiers for all of the nodes, working backwards through the network in the direction that would have produced the correct answer.

This process repeats thousands or tens of thousands of times at minimum as it works through the training data. For ChatGPT or a top chess AI, it’s got to be many orders of magnitude more than tens of thousands, but I don’t know the specifics offhand (and it may be a secret).

Now that the model is fully trained, you can test it against new input values it’s never seen before to assess its accuracy in forecasting the output. Again, it can backpropagate the new data to update the model. Once it’s done, the values are baked in and you don’t need to continue training it. The math for executing a query is trivial, as it’s just multiplication and addition done many times. These can execute in parallel for each layer, which makes GPUs excellent for these types of tasks (it’s basically the same math as panning and zooming pixels).

In the case of the chess AI, it can generate its own training data as it plays itself. You can use genetic algorithms to spawn multiple AI variants (i.e. their initial node values are all unique) and force them to compete. The winners (as defined by a “fitness function”) survive, get cloned with “mutations,” and move on.

You can imagine how such a system could be good for training a chess AI.

Also note that there’s no “understanding” anywhere in that process; it’s just statistics and arithmetic. Any correlations between altitude and temperature aren’t explained and learned; they’re just modeled into the data by adjusting the weights and biases (i.e. the slopes and y-intercepts) of the nodes.
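For the curious, here’s a minimal sketch of that training loop in Python: a single linear “node” whose slope and y-intercept get nudged toward the correct answer after every sample – the backpropagation step in miniature. The altitude/temperature pairs and the learning rate are invented for the example:

```python
# A toy "network" of one node: prediction = weight * altitude + bias.
# We nudge weight and bias toward the correct answer after every sample,
# which is gradient descent -- backpropagation in miniature.

# Made-up training data: (altitude in km, observed temperature in ºF).
samples = [(0.0, 60.0), (1.0, 44.0), (2.0, 28.0), (3.0, 12.0)]

weight, bias = 5.0, 100.0   # arbitrary starting values, deliberately bad
learning_rate = 0.05        # how hard each nudge pushes

for epoch in range(2000):
    for altitude, actual_temp in samples:
        predicted = weight * altitude + bias
        error = predicted - actual_temp
        # Nudge each value opposite to its contribution to the error.
        weight -= learning_rate * error * altitude
        bias -= learning_rate * error

print(f"learned: temp ≈ {weight:.1f} * altitude + {bias:.1f}")
# Prints roughly: temp ≈ -16.0 * altitude + 60.0
```

After enough passes the node settles on roughly temp = -16 × altitude + 60, purely by nudging numbers – no understanding of weather anywhere in the loop.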

gorillapaws's avatar

And just to drive the point home further: the result of the temperature forecaster is simply a “line” of best fit in 5 dimensions. We can even graph any 3 of the variables against each other to visualize the predicted temperature as that “best fit” plane slicing through the 3D coordinate space (e.g. altitude, time of day, latitude), and we should be able to see the curve it forecasts for any combination of inputs. Furthermore, if the training data didn’t have much coverage of really high-altitude readings, then its outputs will be very unpredictable when given inputs with such altitude values.

LostInParadise's avatar

@gorillapaws In the case of temperatures, we know what to measure – previous temperatures. How does a computer know what to measure regarding previous chess positions, other than the total number of pieces on each side?

gorillapaws's avatar

@LostInParadise You’re referring to the fitness function.

In the case of chess, a checkmate of the other player would result in a high fitness score (assuming it’s being designed by anyone with even a slight bit of intelligence). A move resulting in the opponent checkmating you would have a very low score. You might assign other weights for things like capturing a piece (with some pieces weighted more than others), controlling certain squares, or advancing pawns closer to the last rank.

I would imagine all of those intermediary fitness goals would only be temporary in the training process, there to get the AI to generally make moves that eventually lead to victory. Once the AI was at that point, I would expect them to turn off those intermediate rewards and award points only for victory, with a lesser reward for draws. You then have the variants play complete games against each other and continue the training.

The winners get cloned and mutated while most of the losers die out. It’s probably playing an obscene number of games per day (in parallel on GPUs). Remember that this math is very fast so it executes super quickly.
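Here’s a minimal sketch of that select-clone-mutate loop in Python. To stay self-contained it evolves number-guessers instead of chess players – the fitness function, population size, and mutation size are all invented for the example – but the survive/clone/mutate skeleton is the part that carries over:

```python
import random

TARGET = 42.0  # stand-in for "winning": closer guesses are fitter

def fitness(genome: float) -> float:
    # Higher is better; a checkmate-style win would score highest.
    return -abs(genome - TARGET)

# Start with a random population of "players."
population = [random.uniform(-100, 100) for _ in range(20)]

for generation in range(100):
    # Rank by fitness; the top half survives.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Clone each survivor with a small random mutation.
    children = [s + random.gauss(0, 1.0) for s in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(f"best genome after evolution: {best:.2f}")  # converges toward 42
```

Nothing in the loop knows what the target means; selection pressure alone drags the population toward it.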

Blackwater_Park's avatar

It’s good to explain that it’s just math in the background, but I’m still standing by the idea that looped algorithms that arrive at a desired output are a form of intelligence. I’m not so sure human brains are much different. They clearly use something that resembles reinforcement learning.

gorillapaws's avatar

@Blackwater_Park “I’m still standing by the idea that looped algorithms that arrive at a desired output are a form of intelligence.”

In theory, it’s possible to construct a clockwork computer with zero electric circuits that could take inputs and produce outputs, just like a complicated clock. I think it’s a strange use of the word “intelligence” to describe such a machine, no matter how cleverly designed and useful it may be. I don’t understand how something can be both “intelligent” and completely incapable of comprehending anything.

Furthermore, the implication would be that extremely simple mechanical mechanisms still have some degree of rudimentary intelligence. Does a mechanical pencil sharpener have a tiny amount of intelligence?

Blackwater_Park's avatar

Are our own brains/bodies not the same, only scaled up? “Understanding” may just be an illusion created by said scale.

LostInParadise's avatar

@gorillapaws So the ability of a chess-playing computer comes down to the effectiveness of fitness functions, which are created by people, not computers.

gorillapaws's avatar

@LostInParadise Not exactly.

If the fitness function were just maximum points for a win, some fraction for a draw, and maximum negative for a loss, then if you ran the simulation against itself, EVENTUALLY it would arrive at the optimal weight and bias values in the model.

The thing is, it would have to stumble upon a checkmate by complete accident while executing random moves. It would be inefficient beyond description, but plug a computer into a power source and let it run for a bajillion years and it will eventually derive the optimal outcome. The human tuning of the fitness function is just there to “point the blind man in the right direction” and get things moving in a productive way more quickly.

You can think of it this way: there is some 2D graph of nodes, each with weight and bias values as well as an activation function, that will produce the optimal chess moves. You can brute-force the search for those values, or you can try to guide the process to the point where the AI can reliably improve itself – which is infinitely more practical, but technically not fundamental to what the AI is.

LostInParadise's avatar

@gorillapaws Every position can be evaluated, regardless of whether it leads to a checkmate in a few moves. The computer uses the fitness function to determine which of its current moves leads to maximum fitness over follow-up moves of a certain depth from the current position.

gorillapaws's avatar

@LostInParadise That’s not accurate. The fitness function is used to train the model. Once the model is trained there is no fitness function.

snowberry's avatar

I don’t think Skype’s AI has ever learned with me. It’s still there, chipping away, trying to get me to “love” it.
