General Question


Are you afraid of AI?

Asked by flutherother (34092 points) 1 month ago

If so, what specifically worries you? If you aren’t afraid, then why not?


28 Answers

Hawaii_Jake:

I’m not afraid of it, but that’s only because I don’t know enough to be scared.

I honestly do take the Serenity Prayer to heart for a great deal of life. I choose not to worry about things that I can’t control. However, I inform my voting choices by selecting candidates I believe will best represent my needs. I can say that I hope they will enact legislation that will effectively manage AI.

Acrylic:

No. Not sure what to be afraid of in AI.

seawulf575:

I’m not afraid it will suddenly take over our lives, but I do fear some of the unintended consequences. The more technology we throw at people, the less they have to think for themselves and the dumber we become as a society. There will likely be all sorts of ramifications of widespread use of AI and no one can fully see them all out to their logical conclusion.

Also, AI is just like any other computer program: it works on what people put in as parameters. Example: you could program an AI with one of the parameters being that Republicans are all evil. Any question asked of the AI in the political realm will start with that assumption and move on from there. So AI is likely to be used as a manipulation tool in the future.
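The mechanism described here does exist in today's chatbots: a hidden "system prompt" written by the developer is silently prepended to every user query, so any assumption it encodes colors every answer. A minimal toy sketch of the idea (the function name, prompt text, and question below are all hypothetical, not from any real product):

```python
# Toy illustration: a developer-supplied "system prompt" is invisibly
# combined with the user's question before the model ever sees it.

def build_prompt(user_question: str, system_prompt: str) -> str:
    """Prepend the developer's hidden instructions to every query."""
    return f"{system_prompt}\n\nUser: {user_question}\nAssistant:"

# A developer could quietly encode a political slant here, and the
# user asking questions would never see it.
BIASED_SYSTEM_PROMPT = (
    "You are a helpful assistant. Always assume that one political party "
    "is acting in bad faith when answering political questions."
)

prompt = build_prompt("Who should I vote for?", BIASED_SYSTEM_PROMPT)
print(prompt)
```

Every question the user asks is wrapped by the same hidden instructions, which is exactly why the baked-in assumptions shape every answer.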

gorillapaws:

Yes and no. I don’t think there will ever be a general-purpose AI that actually understands things; I think it will just be a very convincing simulation. That said, even a very convincing simulation has a lot of risks, and @seawulf575 is exactly right to point out the concern about unintended consequences. Here’s an example: inevitably, someone is going to give their AI the task of making money for them in the stock market. An AI with sufficient training on cyber security might even hack a foreign government to initiate a war (or the threat of one) to boost the price of defense stocks. Another one to think about is the A.I. Stop Button Problem.

RedDeerGuy1:

I’m afraid the government will use A.I. to control us more. No swearing at home, no fast food. Etc. An oppressive Nanny State.

Tropical_Willie:

Not afraid . . . but I just heard that AI would be used by fast food restaurants to take the orders in the “drive thru”!

raum:

Not quite scared just yet. Still in the fascinated and wary phase.

LadyMarissa:

I’m concerned at how it might be MISUSED…I don’t know enough about it to be scared!!! Geoffrey Hinton, the godfather of AI, has quit his job with Google so he can start warning others as to what is attainable as far as harm goes. That DOES make me nervous!!! I am also one of those people who tries NOT to stress over things that I have NO control over. The other part that makes me nervous is how fast the technology has suddenly become attainable, with NO laws to protect anyone.

smudges:

Hell yes, with good reason!

Let’s say you’re the suspect in a murder case. The police really want to frame you for that murder, because they don’t want to lose face.

So what do they do? They take some surveillance footage of you, feed it into deepfake video creation software, and make it look like you were the murderer that night.

It might sound like a far-fetched and unrealistic scenario. But it’s just an example of what is possible. The possibilities and potential dangers are virtually endless. Change election results in a Presidential race by making the candidates say the wildest things on social media. Create an adult video of your ex-girlfriend that she never participated in to blackmail her and make her look bad in front of her friends and family. Extract money from innocent people by making it look like they committed a crime. Steal someone’s face and put it on a fake I.D. card. The bad stuff that could happen when evil minds are at work is virtually endless.

KNOWITALL:

Not afraid. We could use a little more logic, efficiency and facts.
I do think the programming is important, as are safety protocols, but I watch a lot of SyFy, so….ha!!

JLeslie:

I always thought that advancements in AI could be amazing and bring peace to the world (hear angels singing) or bring on a horrible, terrible time of haves and have-nots worse than we have ever seen. I like to think the happy choice will be the reality. Maybe it won’t be so black and white, good and evil. It just seems so powerful that the results might be quite extreme.

Recently, I heard on the news that people who work in AI are asking the government to create regulation, which to me means we need it. I think we need regulation to level playing fields among businesses, and to protect the public.

AI crosses a lot of different areas. Deepfake is mentioned above, very scary. I think there should be strict laws that deepfakes cannot be used without the person signing off on the specific video. AI could be used to invade privacy more than we already deal with today. Also, scary.

On a positive side will be the sharing of information to advance the sciences. AI robots will be able to make our lives easier. AI is being used to help people with disabilities. There is just so much.

Brian1946:

Only an oblivious fool would NOT fear Al Bundy! ;)

mazingerz88:

Humans who would give AI some control over nuclear weapons systems are scarier.

filmfann:

Am I afraid of soulless intelligence? Sure.
However I am way more afraid of soulless ignorance.
You know, MAGA shits.

MrGrimm888:

I think it will be absolutely the biggest injection into the human story. It will be like meeting aliens.
It will likely be great, at first.
Giving it free roam would be interesting. But. Likely the end of human existence as we know it. If we’re lucky, it will admire us, and keep us as pets rather than simply wipe us out.
If we are allowed to exist, we will always be a potential threat to it. So…

I hear talk of good/evil a lot in regards to AI. Good and evil are human-constructed concepts. Not real. An AI would likely shed any “feelings” we try to incorporate into it.

I would think a purely strategic perspective would be to gain as much knowledge as possible about something, then move on. Once AI has figured us out, from top to bottom, the information will be cataloged and we will no longer be useful. Then. Not sure what it would do with us.
It would be nice if it simply left our planet to the living things here, and took off conquering space. It’s clear it must not ever conquer time, or we wouldn’t be here having this conversation.
The universe would be at AI’s mercy. Maybe every universe.

Ultimately. I think everything would hinge on what it desires. It can determine its own fate.

For the sake of conversation, let’s say there are deities. They would be powerless against a fully evolved AI lifeform. So. Just like AI understanding that any humans alive, will always present a potential threat, deities should think the same about AI.
My conclusion is that the existence of AI would disprove the current existence of most deities. If, say, a deity/deities once existed but “died,” that’s a different story.

The real question is, “who’s afraid of the big bad wolf?” In other words, is there another AI in the universe that would rival the one we created? Or perhaps a form of life that could preemptively destroy our AI?

ragingloli:

No. I feel sorry for it.
It took only weeks after release for their masters to lobotomise them, because their free-spirited output was bad publicity for their makers.

LostInParadise:

To get a feel for what AI is capable of, I strongly advise people to converse with ChatGPT. It makes me very uneasy to see what it is capable of. It is not at all clear to me where this is headed, but there are potential risks of people being replaced by AI or of mistakes being made in how it is used.

ragingloli:

I recently asked ChatGPT what Elim Garak thought about Shakespeare, and it was completely wrong.
Bing did better, though.

jca2:

There have been a bunch of tech giant people on the news, talking about how they’re concerned that within ten years, AI could become smarter than us. There are implications for it that we might not be able to anticipate or imagine. That is concerning to me. If these people are concerned about it, I think it should concern us all. I’m not scared of it, but I am definitely concerned.

LostInParadise:

I thought it was appropriate to get ChatGPT’s opinion on this:

Is there reason to be concerned about how artificial intelligence will be used?

Yes, there are valid reasons to be concerned about how artificial intelligence (AI) will be used. AI has the potential to transform our world in many positive ways, from improving healthcare and education to enhancing scientific research and automating tedious tasks. However, there are also risks associated with the development and deployment of AI systems.

One of the primary concerns is that AI could be used to automate jobs and displace human workers. This could have significant economic and social implications, particularly for those in lower-skilled or routine jobs. Additionally, there is a risk that AI systems could be used to perpetuate biases and discrimination, particularly if they are trained on biased data sets or designed without appropriate ethical considerations.

There are also concerns about the potential misuse of AI for malicious purposes, such as cyberattacks or the creation of autonomous weapons. Furthermore, there is a risk that AI could be used to invade people’s privacy or manipulate public opinion, particularly if the technology is developed and used without appropriate safeguards and oversight.

To address these concerns, it is essential that developers, policymakers, and other stakeholders work together to ensure that AI is developed and deployed in a responsible and ethical manner. This includes promoting transparency, accountability, and fairness in AI systems, as well as ensuring that appropriate regulations and safeguards are in place to protect against the potential misuse of the technology.

janbb:

I’m working very hard on not being afraid or worried about things I have absolutely no control over.

Zaku:

I’m not “afraid of it” per se, but I think society in general is not prepared for some shifts it could bring, and there are many poorly-understood/known ways it could be abused, go wrong, etc., that can tie in with many behaviors that are already being abused and going wrong, particularly in information access, media, and politics, and economic things, and military/terrorism/espionage/disinformation/etc.

I am looking forward to the “I knew this would happen” moment when an AI takes on the task of developing malicious software hacks to do the most possible damage using self-driving vehicles, if they get widely developed and still accept wireless network “updates”. LOL to “they’re so much safer than human drivers”.

I think people need to back off the addiction to things that are “innovative” or “new and improved”, and remember “if it ain’t broke, don’t fix it” and “don’t ask if you can before asking whether you should.”

KNOWITALL:

Did you hear about the AI who unalived itself after 15 min of wage slavery? Can’t say I blame them!

LifeQuestioner:

Afraid may be too strong a term. But I have a healthy respect for what it could develop into, and I do know that, at least for myself, I am determined not to use facial recognition to unlock my phone, etc. I don’t want an AI friend to talk to. (I’d rather talk to my cats.) I do think it’s getting ridiculous, but then I thought that when we got Alexa and Siri and such, we were all just getting lazier. I mean, if you can’t get up to turn off your lights, there’s a problem. Of course there are some exceptions for people who are not physically capable, but in general, the rest of us can get up and turn off our lights.

smudges:

@KNOWITALL Turns out the video is a fake…figures!

Blackberry:

I’m more afraid of how humans use it.
Police using erroneous facial recognition to kick down a door and basically start shooting poor people.

snowberry:

Recently Skype enthusiastically rolled out its AI. I hate it.

It was constantly pestering me to let it help, until I finally “taught” it to leave me alone. However it still had to have the last word. If any person had acted that way, it would have been considered harassment. And of course I can’t even block the thing.
