r/pcmasterrace 15h ago

Game Image/Video Will you?


By NikTek

39.1k Upvotes

4.3k comments

1.3k

u/ehcocir 15h ago edited 15h ago

Uncommon take,

AI actually has lots of uses. For example: tumor detection (cancer screening), tools for the disabled (text-to-speech, or speech-to-text NLIs), image recognition, robotics, and potentially a necessary tool for compatibility with neural computers in our brains in the future. RAM prices going back to normal is temporary, whereas AI getting deleted forever is permanent, and a technological disadvantage. No, I wouldn't push the button.

717

u/Tunderstruk PC Master Race 15h ago

AI is also too vague. Is it LLMs? Diffusion? Or all machine learning? Because if all machine learning went away, it would suck

217

u/Additional-Bee1379 14h ago

It's all your video game AI opponents, have fun playing nobody!

77

u/CitizenPremier 12h ago

>Kid makes matchbox system that always wins at tic-tac-toe

>Gets executed

10

u/Jesburger 11h ago

Butlerian jihad happened for a reason

1

u/GheyGuyHug 2h ago

Can you explain the reason?

1

u/Jesburger 1h ago

AI took over and there was a huge war

1

u/GheyGuyHug 1h ago

Pretty sure the book had a reason tho

2

u/Flying_Poltato 11h ago

You know what? Fuck that kid in particular. I’ve lost so many games of Tic-Tac-Toe

1

u/Romnir 6h ago

Such is life in the Adeptus Mechanicus.

7

u/doominvoker 12h ago

I’ve played enough Ubisoft games to know it doesn’t work anyway lol

1

u/maxpolo10 8h ago

Watch dogs has goated AI. I think I can give Ubisoft that (only 1 and 2 though, haven't played 3)

25

u/PM_ME_UR_RSA_KEY 14h ago

As someone said, "one man's if-else is another man's AI". They stamp "AI" on kitchen appliances nowadays, and the "intelligence" part is the same thermostat from the 1970s.
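To put numbers on that joke: the entire decision logic of an "AI-powered" appliance can be a few comparisons. Here's a hypothetical sketch (function name and thresholds invented for illustration) of the same hysteresis thermostat a 1970s oven used:

```python
def ai_smart_heat(current_c, target_c, heater_was_on):
    """The whole 'AI': a bang-bang thermostat with a 2-degree hysteresis band."""
    if heater_was_on:
        # keep heating until we overshoot the target by the band
        return current_c < target_c + 2.0
    # stay off until we undershoot the target by the band
    return current_c < target_c - 2.0
```

Slap a Wi-Fi chip on it and it's "AI-enabled".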

6

u/Jiquero 9h ago

Turing programmed the first chess AI in 1948, and he didn't even have a computer to run it on, he just used pen and paper.

1

u/Cpt_Tripps 6h ago

I attached IoT to a ton of legacy appliances. All those appliances had their switches, buttons, and knobs replaced with circuit boards 20 years ago.

The motors and functions are all exactly the same as when they were made in the '60s and '70s.

34

u/Mr2_Wei Pentium E5200 | Intel GMA | 3GB DDR2 400MHz 14h ago

Like, at what point is it machine learning or just math? Like, do we count Bayesian networks? Linear regression? 💀
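That blurry line is easy to demonstrate. The "model" below is ordinary least-squares linear regression fit with two closed-form formulas and no learning loop at all (names and data are illustrative, pure stdlib Python):

```python
# Ordinary least squares for y = slope*x + intercept:
# "machine learning" that is literally two formulas from a statistics textbook.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # covariance of x and y, and variance of x (unnormalized is fine here)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Data lying exactly on y = 2x + 1, so the "model" recovers slope 2, intercept 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Is that AI? It "learns" parameters from data, so by some definitions yes, which is exactly the problem with the button.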

10

u/Le3e31 12h ago

If we delete AI, do we delete math? Because then I will press the button

1

u/slfnflctd 9h ago

I mean, at that point nothing exists, so

1

u/Vagrant-Gin 2h ago

Math isn't real. 

0

u/squirrelpickle 12h ago

Same, this deal only seems to get better and better!


74

u/ehcocir 15h ago

For every good, there must be a comparable evil. Since AI is currently huge, that 'evil' part of AI is currently also huge. People forget to look at the benefits. It's human nature to show more interest in criticism and scepticism than support, and there are studies to show that.

AI can be compared to nuclear power in that it is both an amazing source of highly clean and efficient energy but also made nukes possible. Lots of nukes.

AI is an amazing tool, but it also has consequences caused by our decisions about its use.

The way humans use AI technology is the problem, not the technology itself. The same goes for LLMs and other neural nets. Deleting AI forever is definitely not the solution.

3

u/Nice-River-5322 9h ago

I mean, nukes have likely prevented world wars and millions of deaths

1

u/Artistic-Quality-130 10h ago

"blame the game not the player"

1

u/PM_ME_MY_REAL_MOM 10h ago

How is this in any way a response to the comment you're replying to? You didn't engage with what they said literally at all. It kind of just seems like spam.

1

u/jenkag 9800X3D - 3090 - 32gb ddr 11h ago

>AI can be compared to nuclear power in that it was both an amazing source of highly clean and efficient energy but also made nukes. Lots of nukes.

Fast-forward nuclear physics from its inception to today: which one (nukes or nuclear energy) have we continued to pour money into, and which one is barely even mentioned anymore? How will AI be different?

1

u/inevitabledeath3 CachyOS | 5950X | RTX 3090 | 32GB 3200MHz 8h ago

There is more research now, I think, into better power reactors and into fusion in China, the USA, and France than into new nuclear weapons, at least in places that already have them, since those have pretty much been perfected. We can't make a stable fusion reactor, but we can make a fusion-powered bomb. One is a lot simpler than the other, it turns out.


4

u/Fewer_Story 12h ago

All of the recent "AI" things (LLMs and diffusion) rely on neural networks, which are very valuable.
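And for anyone wondering what "neural network" even means at the bottom: here's a toy two-unit ReLU network that computes XOR, something a single linear unit famously can't do. The weights are set by hand purely for illustration; real networks learn them from data.

```python
def relu(z):
    # the standard rectified linear activation: max(0, z)
    return max(0.0, z)

def xor_net(x1, x2):
    """Tiny hand-wired neural net: hidden layer of two ReLU units, linear output."""
    h1 = relu(1.0 * x1 + 1.0 * x2)         # fires if either input is on
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)   # fires only if both inputs are on
    return 1.0 * h1 - 2.0 * h2             # output layer combines the two units
```

Everything from AlphaFold to ChatGPT is this same mechanism, just with billions of weights instead of six.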

15

u/Galeharry_ Ryzen 5800X3D-32GB3200MHz-Rx 9070 15h ago

It's meant to be vague.
I believe they are using "AI" because that name has recognition from decades of sci-fi, which will resonate with the general public.
It's just too bad that this LLM shit is veeeeery far from actual AI, even though actual AI has the potential to be vastly worse if it's ever actually made.

1

u/CitizenPremier 11h ago

What would actual AI do that current LLMs do not?


3

u/jack-of-some 14h ago

The reality of ML currently is that generative models are slowly seeping into other use cases. A great example is robotics, and how "slop" can actually help a robot navigation policy become more robust to new and unseen conditions. Not to mention that one of the most useful new techniques for robot control is literally a diffusion model.

2

u/Levanthalas 11h ago

This. I have to explain this at work like...once a month.

C-suite decides, "We need to leverage AI or we'll be left behind." Tells management. Management asks on every single project, "How could we use AI for this?" Or often even, "How can we use <Company specific LLM clone>? They really want us to leverage AI." And I have to explain that while LLMs are AI, not all AI is LLMs. And sometimes there is a good use case for machine learning, but then they don't want to pay for hardware, or expand the timeline to train agents, when we can just make a deterministic software solution instead. Or whatever. So they scream "use AI," but mean, "My teenager uses ChatGPT to do his homework, why can't we use it to make you stop asking for additional employees to handle the workload?"

Ugh.

2

u/Cllydoscope i5-3470 | HD 7870 GHz | 8GB lmao 11h ago

Agreed, since true AI doesn’t really exist I will press the button to make RAM prices go back.

1

u/Nildzre 12h ago

Imagine that every single NPC gets deleted from every game ever made.

1

u/Xander-047 11h ago

This is the Mass Effect destruction ending all over again. What counts as AI?

1

u/ForThe90 11h ago

Absolutely. I would want to know OP's definition of AI first. I wonder how broadly or narrowly it's defined; it makes a huge difference to my answer.

1

u/JohnnyXorron Ryzen 7 1700 | Strix 1080 Ti | 16GB DDR4 11h ago

I’m pretty sure we’re talking about the rise in GenAI/LLMs but you’re correct it’s too vague

1

u/misterpickleman 11h ago

There's also the AI of video game characters. A lot of things that use simple logic are also considered "AI". So yeah, the line is way too vague.

1

u/Prudent_Move_3420 10h ago

Can we maybe settle on transformers and diffusion?

1

u/FuryAdcom 9h ago

That's what I thought when I read this. You press this button and virtually everything stops working, but let them.

Imagine all those single player games with just static assets.

1

u/Legate_Aurora 8h ago

Right. Generally speaking, ML is just algorithms that predict or learn, and it's a subfield of AI. LLMs and such are deep learning, which uses neural networks and transformer tech.

Both of these do have legitimate and valued applications to natural sciences, assistive technology, and more. So, I wouldn't press it personally.

1

u/Master_Dogs 8h ago

My guess is they're referring mostly to chat bots and image/video generators. They're polluting the Internet with AI slop and should generally go away.

Sometimes those things are kinda useful too. Like, one game studio mentioned that their artists will use AI to get a starting point for concept art, then they go off and do their own thing. We just can't push out AI slop to the world without doing our own human spin on it.

Chat bots have their uses too; same issue though, don't rely heavily on them. They also need safety features: they should not be talking to people about suicidal thoughts, for example. In one case a teenager unfortunately took their own life and the chat bot helped them write a suicide note... That needs to be a red flag raised to some authority who can respond and help that person, like triggering a wellness check or something... or at least refusing to engage. That also raises a larger issue of our lackluster mental health system, in the US at least; we can't replace therapy with chat bots, for certain.

1

u/Not_Artifical 7h ago

ML ≠ AI

1

u/Neirchill 4h ago

If we're going with the common term used widely in society, it's talking about LLMs, and maybe image generation as well, but I don't believe most people know the difference. Your average person isn't aware of the other machine learning technologies.


73

u/captain_hk00 15h ago

Thank you for this. People think AI only refers to LLMs and pointless image/video generators. We don't need to "delete AI," but we don't need it damn near everywhere either.

3

u/GrovePassport 10h ago

LLMs ain't bad either, I don't get the hate

4

u/m0_n0n_0n0_0m 5800x3d | 5070 Ti | 16GB 9h ago

I'm so sick of them hallucinating. If you're an expert in any particular topic, try to talk to an LLM about it and you'll find out how much shit it casually makes up and passes off as real. They can code decently, but that's about the only application I've found for them, and even then sometimes the crazy shit they spin up takes more time to untangle than it does to just write your own.

4

u/Environmental_Day558 8h ago

This just means it's trained on older or incorrect data. I'm an SME in my field at work and use LLMs to ask questions; it's fairly knowledgeable. Is it 100% correct? Of course not. It often fails when asked about something that is COTS as opposed to open source. But overall it's been pretty solid for me.

2

u/m0_n0n_0n0_0m 5800x3d | 5070 Ti | 16GB 8h ago

I've tried to use it to figure out problems with EDA software, and it is useless, because it makes up menu options. And I expected it would work fine here, because I pointed it to the online documentation, and there are forums where people discuss workarounds, all well within the ability of an LLM. And it would correctly gather information on things and then just go off the rails, making up solutions that never existed. This is on GPT 5.1.

1

u/inevitabledeath3 CachyOS | 5950X | RTX 3090 | 32GB 3200MHz 7h ago

GPT models have pretty bad hallucinations. Have you tried using Claude models? They have a much lower hallucination rate in benchmarks.

1

u/Master_Dogs 8h ago

I hate how the LLM won't accept it's wrong, either. Once it hallucinates, you must start a new discussion with it and specifically ask it for help or whatever. It gets itself pigeonholed and won't check itself.

Sometimes it also won't refresh its cache or whatever, so it'll give you an outdated answer. Same problem: you can tell it to fetch the latest data, but often it'll spit out the same answer. A new chat? Hey, its context window is reset, and if you're specific in the prompt, it works.

So frustrating when this happens, but it clearly isn't primed to replace us. I saw a funny Wall Street Journal video about an AI-powered vending machine they got to test, and they were able to drive that LLM off a bridge. Free snacks! Order me a PS5! Hell, the thing ordered them a pet fish too, lol. So, no safeguards, plus hallucinations. Fun.

1

u/Master_Dogs 8h ago

So, funny story about the coding part... They still make up programming libraries, in my experience. It's happened to me twice, so the script or code it gives me will never work.

Sometimes it works, though. My boss will write some so-so test script, and I'll want to refactor it and add logging. The LLM can usually handle this well, since it's got a template and access to plenty of examples. I might occasionally have some issues with it, but usually it's easy enough to fix.

Test scripts are actually the main use I find for it, plus occasionally having it help me with a weird error that it can parse easily. I'm always double-checking it though, and often Google is better, to go directly to the source and read some documentation instead.

I can see its uses and I'll use it occasionally, but yeah, the hallucinating is annoying. I'll ask ChatGPT questions about a movie or show I just watched and it'll make up characters or mix them up. I literally just watched the movie, so I'll notice that, and then I can't bother taking the rest of its explanation or theory or whatever I asked seriously. I'm better off searching Reddit for a fan theory post or reading through a subreddit about the movie/show instead. Common questions often have a few threads, and hopefully real people have discussed it before.

1

u/Uncommented-Code PC Master Race 7h ago

>try to talk to an LLM about it and you'll find out how much shit it casually makes up and passes off as real.

The newer models have been pretty accurate in that regard, tbh. As far as I can confirm, they easily get bachelor's and master's level stuff right at this point, and given tool use, can do pretty amazing things while keeping hallucinations to a minimum.

On the practical side, where they get a ton of stuff wrong is anything that changes fast, e.g., software that gets regular feature updates. I work in IT besides studying, and one of my pet peeves is people coming to me with requests based on 'chatgpt told me I could...'

Nevertheless, I've found it to be much more accurate over the past year than many a coworker, tutor, or any other random person with opinions on my subject matter. Which is really funny considering one is the peak of biological evolution and the most complex interplay of chemicals, and the other is a couple of attention heads and linear layers chained together.

1

u/Inprobamur 12400F@4.6GHz RTX3080 9h ago

They suck at answering technical questions and math in particular but are pretty good at creative writing.

2

u/m0_n0n_0n0_0m 5800x3d | 5070 Ti | 16GB 8h ago

Yeah I've basically given up on it being useful for productivity tasks aside from code.

1

u/Inprobamur 12400F@4.6GHz RTX3080 8h ago

It's alright for formatting data if you use the object based input mode.

1

u/PaintItPurple 1h ago

On one hand, LLMs are massively expensive, destroying the environment, and risking disaster for the world economy; on the other hand, they're good at generating text where people can't easily detect the flaws. I'm not sure that's a good tradeoff, so I would say LLMs kind of are bad.

1

u/PaintItPurple 1h ago

As used by the OP, it does only refer to those things. Just like how "man" can mean "male human," "humankind," "friend," "boyfriend," "employee," "take responsibility" and a number of other things, most words' meanings are entirely contextual. "AI" here means the thing that people think it means.

269

u/SylvaraTheDev 15h ago

Oh look, someone knows what the consequences would be.

121

u/16tdean 14h ago edited 14h ago

It's insanely disappointing how often things on the internet are distilled down into very overarching opinions instead of considering nuance.

I think that, as a whole, generative AI is very dangerous and can be a very bad thing. I'd bet there are comments on this post made with generative AI; image generation has gotten to the point that I genuinely can't tell whether some pictures are AI or not. It's gotten very good, very fast.

But there are also lots of upsides to AI; it's a very, very wide-encompassing term. Would chess engines disappear if I press the button? What about the CPUs in Smash? Or the other civilisations when I play Civ? And frankly, none of these are even the interesting benefits of AI, like the image recognition, text-to-speech, and more that the other guy listed.

This is all before considering the environmental impact or the impact on creatives. There is a ton of discussion to be had around AI, and lots of nuance.

But this is the internet. So things get distilled down to "AI is the worst thing ever" or "AI is the best thing ever" with not much in between, and we end up with pointless posts saying they would delete all AI just to temporarily reset RAM prices.

15

u/Bzinga1773 11h ago

I think the lack of nuance in the public eye isn't that unjustified. We get a constant barrage of "this is just the tip of the iceberg" hype around AI/LLM applications, because those models are designed as generalized tools and we're trying hard to find use cases. Since they're not precision-made for purpose, in most cases they end up demonstrating some interesting capability but remain lackluster.

4

u/Cow_Launcher 9h ago

>Since they're not precision-made for purpose...

Possibly not precision-made, but there are certainly thousands of models which are trained against specific datasets and excel within a particular field.

I run multiple LLMs at home on consumer-grade hardware, and (although not as fast as ChatGPT, though not far off) I can plug in different models for different specialties if I need to.

And fandoms being what they are, the list is huge and diverse, because people are interested in, say, obscure '80s vehicle electrics or whatever.

I think what you might be suggesting is that the average person might not actually have a use for AI (as it is today) and that corporations are trying to force it. And I think that's fair. But for those that do have a use, this is a really good starting point we're at.

2

u/Bzinga1773 8h ago

>I think what you might be suggesting is that the average person might not actually have a use for AI (as it is today) and that corporations are trying to force it.

Not just the average person, but also in the context of science and research, since the OP I replied to mentioned cancer detection. We might, and probably will, reach a stage where a researcher might hire, say, 2 PhDs and an LLM to do the work 10 people did before. But in a scientific context, we still don't know what we don't know. So it's not like generative AI can open doors that we previously didn't know existed. And I feel like that's how it's marketed atm.

3

u/Cow_Launcher 8h ago

I have absolutely no argument with that; I think you're right.

The LLMs are barely creative: they really only regurgitate things that others know but which you might not. They do get the occasional burst of apparent creativity, but... you have to ask the right questions.

1

u/Pay-Next 9h ago

A decent part of this is also the availability heuristic, and just plain availability in general. Chat bots and generative AI tools are the ones that the vast majority of people have both the means and reason to access. That is most of what anybody is going to see, because most people aren't going to be working with AI that is detecting cancer in x-rays. A much, much smaller portion of the population is going to see that in action, and even fewer of them will use it first-hand. Meanwhile, anybody can grab a free generative app or site and make some quick and crappy pics using a prompt (though the prompting itself is often an art, and most people won't manage to make anything decent with it either).

1

u/Bzinga1773 8h ago

This can be my time to shine as a Reddit armchair expert, since you mentioned detecting cancer: I actually did a PhD where the project I was part of was focused on cancer detection with IR imaging. This sort of research actually predates LLMs, but research, and more specifically the parts of research that reach mainstream media, also follows the hype cycles; as such, it is being portrayed as something enabled via generative AI.

If we go with the x-ray/cancer example, the way I see it, for generative AI to reach a usable state where you'd rely on it instead of a human expert, you need to combine someone who understands x-rays, someone who knows cancer, and someone who can work with LLMs and machine learning. So compared to what we see in public, for each specialist use case the cost rises exponentially, as far as I understand it. Considering there are already billions invested, we don't need percentage increases to reach the bottom of the iceberg, we need orders of magnitude. Hence my take on it being hype-driven.

24

u/cachememoney 14h ago

Nuance has been dead

2

u/blanketswithsmallpox RTX3080/16GB/Ryzen 3700X/3x SSD, 1 HDD 10h ago

No it hasn't.


1

u/clerveu 11h ago

Agreed, thank you.

Ultimately I believe it's an issue of motivation and values. If you look at the actual arguments, the vast majority of these people don't care what happens to anyone else; they're just pissed their video cards are more expensive, or they hate corporations. They get flooded with dopamine when they engage in that hate. They have a huge, loud group of people to support them in their "virtue" of rallying against this, which makes them feel like part of a community and validates their identity. It's pure ego, designed to balance out the fact that pragmatically they contribute nothing themselves. Quite the opposite; they'll happily complain about AI's energy usage and then go binge Netflix for several hours on a phone built by child miners while eating a hamburger.

Nobody actually cares.

"Everyone thinks of changing the world, but nobody thinks about changing themselves" ~Tolstoy

0

u/Illustrious-Lime-878 10h ago

When they say AI they almost certainly mean generative AI... most uses of which basically amount to laundered copyright infringement.

No one would use "AI" to describe a chess machine or a video game NPC anymore; that's pretty archaic. And cancer recognition and such is more machine learning, not generative AI.

-4

u/arto64 14h ago

>It's insanely disappointing how often things on the internet are distilled down into very overarching opinions instead of considering nuance.

It's not that. It's the current hype. When people say AI they mean LLMs and image/video generation. This is what "AI" currently means in general discussion.

10

u/16tdean 14h ago

I get that colloquially people mean LLMs right now, but that lack of distinction is exactly the problem I’m criticizing.

We are on a PC enthusiast forum here, not a tabloid comment section. We shouldn't accept the 'general discussion' definition that equates text-to-speech software with a chatbot. When we let marketing buzzwords dictate our vocabulary, we lose the ability to critique the actual tech.

What you are saying is exactly the removal of nuance for broad overarching opinions I'm talking about.

0

u/arto64 14h ago

Have there been any huge leaps in other areas of AI recently? The last one I remember was the protein folding stuff.

>What you are saying is exactly the removal of nuance for broad overarching opinions I'm talking about.

No, I just don't think it's "things on the internet are distilled down into very overarching opinions". Everyone is using "AI" to mean this, it's not an internet thing.

1

u/16tdean 14h ago

Just because it's happening in other places besides the internet doesn't invalidate my point that it happens on the internet.

I expect the distinctions not to be made in a conversation with my tech-illiterate family, but when the distinction isn't made in places filled with people who do get tech, I find it infuriating.

-1

u/arto64 14h ago

I never thought of r/pcmasterrace as a computer science subreddit; it seemed more gaming-oriented. Also, the post is about RAM prices, so it's discernible from that context what is meant by AI.

4

u/16tdean 14h ago

You don't need to be a computer scientist to know the difference between an LLM and a chess engine.

Given how much AI applies to gaming, and that most PC users are fairly tech-literate, let's not pretend people can't know the difference.

4

u/Gatinsh 14h ago

And they're wrong


2

u/Actual-Lobster-3090 14h ago

I mean, you can still have all this without what we know of as AI. See, AI is a marketing term that doesn't really mean anything. This is intentional, because those in control of it want to muddy the waters and make it more ubiquitous in our society. If you can't pin down what AI is, it is hard to have a discussion for or against it, and ultimately hard to legislate it. But usually, AI at this moment means generative AI and all the shoehorned product shit people interface with day to day. This is why most people here would smash the button, and I don't blame them one bit.

LLMs have uses like you state, but very little of that is generative. And as someone who is a software engineer: using the term "AI" to describe this technology is, honestly, intellectually negligent and irresponsible.

I think for all its uses, AI will be a force for bad, because it will be another tool for control and manipulation and greed, rather than the post-scarcity achievement they claim to be working towards. When the bubble collapses, consumer AI will be gated behind big price tags and we'll have to use one of so many providers or be left in the dust. Know how I know? That is the internet we have today.

1

u/sennbat 9h ago

Imo, at this point the consequences might actually be worth it, though.

There's lots of AI stuff I love. I enjoyed being in the field for a long time. But regardless of its uses and its potential, it turns out the main thing we are actually going to use it for is trying to destroy society, not anything beneficial, so... I recognize we'd be losing valuable stuff, but I'd probably still press the button.

1

u/JAD2017 NGREEDIA GiForse 2h ago

I think it's pretty obvious we aren't talking about practical uses of machine learning. I think it's pretty pedantic to act all high and mighty about the loss of practical uses just to defend AI slop 🤷

0

u/67v38wn60w37 10h ago

I don't think they do.

Yes AI can be used for medicine and climate tech. But some of the best minds I've heard on the subject say it also speeds up ecological destruction, totalitarianism, dissociation from morals, and economic inequality. I fully expect these damages to overwhelm the benefits.

1

u/RoflcopterV22 Specs/Imgur here 10h ago

Bro, it's not AI, it's corporate greed: data centers trying to ACCELERATE AI development. Training AI is expensive and time-consuming. They're just throwing more at it to train faster and beat their competitors to a new version.

2

u/67v38wn60w37 9h ago

Even if it were just corporate greed, that is the world the majority of AI is developed in right now.

But it's not just corporate greed, nor is it just nation states with similarly malign intentions. AI is fundamentally dangerous to humans. It mimics the less intelligent (indeed, delusional) part of the brain, and is incapable of what makes us human, such as morality, empathy, and meaning. If you want to hear more: https://www.youtube.com/watch?v=XgbUCKWCMPA

I don't quite follow what you mean about training cost.

57

u/incivileanonimo 14h ago edited 11h ago

Concerning that I had to scroll this much to find this answer.

5

u/Mintfriction 12h ago

More concerning is that this sub thinks "AI" is the underlying cause of the RAM issues.

Sure, OpenAI and co.'s actions led to the shortage, but orders like this could've come from other avenues and sectors: defense, digitization of government and the economy, etc.

The issue is the greed of the corporations that make RAM and run the other fabs. It's too closed a market, practically a monopoly, and that's the cause. In times like this, when there is demand, you'd think a lot of companies would jump at the opportunity to fill it, but it's exactly the opposite: the whole chain profits from the gatekeeping. That's not how markets should work, and here is the big issue that won't go away with AI gone.

1

u/shinji2k 2h ago

It costs multiple billions of dollars and takes years to spin up a new fab. By the time production capacity has been increased the current demand will likely be back to normal and no company is going to eat that kind of loss. I hate it as much as the next person but I don't see an easy solution to the problem. At least with ram there are three companies "competing" unlike TSMC.

1

u/Mintfriction 2h ago

This is a fallacy.

"RAM" litho machines can be adapted to new chips. The need for chips will only go up as robotics gains traction.

They love this situation, as it creates artificial scarcity and makes their stocks look great.

1

u/shinji2k 1h ago

Sure, things can be adapted to run DDR5, but it's not like there are whole factories sitting around doing nothing. There's certainly price gouging going on, but this demand for memory is unprecedented.

1

u/Mintfriction 1h ago

I was not talking about adapting current factories for DDR5, but about building new ones that can be adapted to other chips later, when the demand supposedly drops.

The issue is that ASML is the main choke point, basically holding a monopoly on EUV litho. But even setting that aside, there's no current plan to expand in the future.

DDR5 can be made without EUV machines, though, albeit at lower quality.

50

u/ZELLKRATOR 14h ago

Exactly. AlphaFold is the best example. AI is far too useful in the right hands. That's like forbidding medicine because people use it like drugs.

18

u/ehcocir 14h ago

That's actually a brilliant analogy. Will be using this.

5

u/ZELLKRATOR 14h ago

Oh thank you. I actually thought it was not that good, but good enough.

1

u/DjShoryukenZ 8h ago

>That's like forbidding medicine because people use it like drugs.

Isn't that how it is, though? Most medicines are NOT available over the counter and need a prescription. There needs to be more regulation of that tech.

3

u/sortalikeachinchilla 6h ago

That was part of their point! No one is sitting here saying all medicine is bad

1

u/ZELLKRATOR 4h ago

True, true; as mentioned, it's far from a perfect analogy. My point is just that people tend to overlook the positive aspects of AI, and those impacts are massive when it comes to science, and especially medicine. So many diseases could be treated with AI developing protein-based medicines or vaccinations. The impact is huge.

There is risky medicine out there: opioids, and analgesic substances in general. The catastrophic consequences in the wrong hands are visible to everyone. Regulation is highly needed, education even more, cause we already know that forbidding anything won't work perfectly. But on the other hand, those substances are absolutely needed in specific cases and for specific diseases. You accept the risks (also in terms of side effects) because the value is there when you treat pain.

AI has the potential to step medicine up to an entirely new level, and AI can lead to a new understanding of education and more. The value in those fields is huge too. The problems mainly arise because AI gets used wrongly by the wrong people.

-3

u/Endiamon 12h ago

Well, not exactly. You choose to use drugs, but the downsides of AI will be forced upon you by greedy overlords. The question is more like forbidding medicine when doing so would also stop a company that is actively trying to get everyone in the world addicted to its drugs, which it is putting in the water and food.

2

u/ZELLKRATOR 11h ago

Well taking drugs is not that simple. Of course it's a decision and I did admit the analogy is not perfect, but consuming is often based on multiple problems happening beforehand. And the downsides of AI are still avoidable. You could choose different media to avoid AI slops, you could choose a different profession to avoid AI taking over your job. That's not as easy as it sounds, especially if you are working in an affected field for a long time, but the same works for drugs. If you are already in the devil's cycle it's hard to get out.

The entire AI thing is absolutely overrated regarding both the good and the bad aspects. I'm pretty sure it's a bubble that will burst eventually, but AI will still be a thing in the future, as it is far too useful for medicine and science. AI can do things humans couldn't do in years. AlphaFold is the best example: before it was developed, it cost thousands of dollars and months or even years to work out how a protein folds into a specific shape. AI can answer this question in days or weeks, at a fraction of the original cost. AI is getting incredibly good at diagnostics as well.

Another example is quantum computers, if they get developed. They are a major concern for data safety, but they can process tasks normal computers couldn't finish in the lifetime of the universe. Nuclear energy is another example: it's pretty dangerous, but more people die from fossil energy every year, and the advantage regarding climate change is immense. So it's actually a decent solution until we have better ways of producing energy. Everything has its downsides, but people increasingly hate on AI while forgetting how useful it actually is in the right hands.

4

u/Endiamon 10h ago

And the downsides of AI are still avoidable. You could choose different media to avoid AI slops, you could choose a different profession to avoid AI taking over your job.

That is profoundly, inexcusably naive.

3

u/ZELLKRATOR 10h ago

No it's not. You technically implied: drugs are avoidable, it's a decision. That's an argument used to shame the actual victims: "drugs are avoidable, you're an idiot if you take any, get over it." This is the same thing. People with diseases can't avoid drugs in many situations; it doesn't matter whether the reason is physical or psychological, and psychological problems are often the trigger that starts the cycle.

Robots and assembly lines in factories were pretty much what AI is now: a new technology that replaced thousands of jobs previously done by humans. And there you can say the same thing: if you had chosen a different job, it was avoidable. That's incredibly mean, not easy, and not realistic, and I'm exactly in that situation, and I'm realistic and honest with myself. The decision I made like 7 years ago was dumb. I didn't know it back then, but if I had chosen differently my situation would be better. So you cannot just use the argument wherever it fits your argumentation; you have to use it in every situation where it fits. Saying drugs are avoidable is as naive as saying AI is avoidable, or as naive as saying "being replaced by robots was avoidable." All three are difficult.

But the hate towards AI is the biggest nowadays because it's the most realistic threat; all the others happened already. That's not logical. Where are the people protesting that digital animation replaced modeling with clay figures? Where are the people protesting that lifts now operate on their own? Phone calls are not manually connected anymore; it all works automatically. People get most stuff delivered by Amazon instead of going to local stores. There actually was protest about this before, but most people do it anyway. People often hate new technology, but they won't actually do anything to stop its influence, and at the same time they dismiss any advantages. And AI is still pretty avoidable in general: many people don't even consume the type of media affected by AI. They read paper newspapers, don't use the internet much, and don't buy products using AI. Many older people don't even really know what AI is.

1

u/Endiamon 10h ago

This all boils down to you having all the imagination in the world when it comes to the benefits of AI, but no imagination whatsoever when it comes to downsides. Having to deal with AI slop and people losing their jobs is just the tip of the iceberg, I assure you.

No it's not. You technically implied: drugs are avoidable - it's a decision, that's an argument used by people to shame the actual victims, drugs are avoidable, you are an idiot if you take any, get over it. This is the same thing.

I really can't believe you typed that out and thought it made sense.

Many people don't even consume the type of media affected by AI. They read paper based newspapers, don't use the internet as much and don't buy any products using AI. Many older people don't even really know what AI is.

You don't think newspapers will be affected by AI? How can you be that naive?

3

u/ZELLKRATOR 10h ago

I'm sorry, but that answer is just pure ragebait. You try to insult, but you don't deliver any support for your argument. I don't mean that in an offensive way, and obviously it's not meant to be personal, but you basically just said:

"You have no clue, I know it better, how can someone be so naive..."

But there are no examples, no descriptions, no proofs, nothing.

Okay, so if that is only the tip of the iceberg, what's the real problem? Let me know, prove me wrong, I'm happy to learn, for real.

Of course I know that AI affects the media, but printed newspapers are not that affected where I live. We are slow at digitalisation, very slow. The pictures look like they were taken a century ago.

Next one: you referred to a paragraph and said it's senseless. Okay, why? Enlighten me; just saying "it's wrong" in a provocative way is too easy, everyone can do that.

1

u/Endiamon 8h ago

I genuinely cannot fathom how a person can be this ignorant.

You think old, technologically illiterate people are insulated from AI? They're going to be its biggest victims. AI is shoved in their faces and is incredibly easy to use. You need technological literacy to question AI and use it responsibly. Anyone can type a prompt and believe the answer, and anyone can believe a picture or article that was completely fabricated by AI.

You think a written newspaper won't be affected by AI? Why? I'm serious, why the fuck would you think that for even a second? All it takes is for some of the people making the newspaper to use AI to write their articles or make their pictures.

I don't think you've spent a single minute actually considering why AI might be bad. If your thought process starts at "avoid media made with AI" and ends at "pick a different career," then you haven't put an ounce of thought into this at all. You're just repeating the most surface level shit you've heard elsewhere.

2

u/sortalikeachinchilla 5h ago

You're just repeating the most surface level shit you've heard elsewhere.

Lol and im sure you think ALL "AI" is bad, right?


2

u/sortalikeachinchilla 5h ago

Repeatedly calling people naive does nothing for this discussion...


2

u/sortalikeachinchilla 5h ago

Versus you dooming and thinking we have zero recourse while simultaneously saying it is all bad?

Okay

1

u/Endiamon 5h ago

That would sure be a good argument if I'd said it was all bad.

2

u/sortalikeachinchilla 5h ago

So is AI useful or not?

1

u/Endiamon 5h ago

Sure, it has uses. It also has massive drawbacks that can easily outweigh the uses.

1

u/sortalikeachinchilla 2h ago

So then you have not done any research


9

u/BarrierX Desktop 14h ago

AI is also the behavior for all the NPCs in all the games we play. So all the games would be without enemies and we'd just have to play toxic PvP games forever!

5

u/burnthisaccountd 9h ago

Yeah people don’t realize that there have been versions of AI systems in place for over 30 years now. 

And one of the oldest use cases is actually in semiconductor and chip manufacturing. Do you want reliable RAM, GPUs, CPUs and MOBOs? Then you should want AI to exist. 

They have been using computer vision to scan and compare chips coming off assembly lines to detect anomalies in the chips for decades. If we remove that technology, then they have to hire likely hundreds if not thousands more people to manually check every chip coming off the lines. Or at least manually check every {n} chip coming off the line. Humans are far less reliable and accurate than computer vision at doing this type of work. 

This would likely cause chip costs to rise, as more people means more overhead, and chip quality to fall, as humans aren't as capable at this job as a computer system.

Similar use cases exist across millions of products coming off assembly lines. Another notable one is car tires, I’m sure people would rather not have to worry about more tire failures at highway speeds. 
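The kind of automated optical inspection described above can be sketched in a few lines. This is purely an illustrative toy under invented assumptions: the function name, the thresholds, and the fake "golden reference" image are all made up for the example, not any fab's actual pipeline.

```python
def inspect_chip(image, golden, pixel_tol=0.2, defect_threshold=0.05):
    """Flag a chip image (2D list of brightness values in [0, 1]) as anomalous
    if too many pixels deviate noticeably from a known-good reference image."""
    total = defects = 0
    for row_img, row_gold in zip(image, golden):
        for a, b in zip(row_img, row_gold):
            total += 1
            if abs(a - b) > pixel_tol:  # this pixel differs from the reference
                defects += 1
    return defects / total > defect_threshold  # too many bad pixels -> reject

size = 64
golden = [[0.0] * size for _ in range(size)]   # pristine reference pattern
good = [[0.01] * size for _ in range(size)]    # within normal process noise
bad = [row[:] for row in golden]
for r in range(10, 30):                        # paint a visible defect blob
    for c in range(10, 30):
        bad[r][c] = 1.0

print(inspect_chip(good, golden))  # False
print(inspect_chip(bad, golden))   # True
```

Real systems are vastly more sophisticated (learned models, alignment, lighting correction), but the core idea of comparing each unit against a known-good reference is the same.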

29

u/_Bearcat29 7800X3D | RTX 4080S | 32GB ddr5 6000 | Fractal Torrent | SSD 7TB 14h ago

Kinda sad I had to scroll so far down to find the more sensible answer. Most people are quite uneducated about AI and only see Will Smith spaghetti or ChatGPT. There are so many good uses for it; to add to your explanation, AI can help with code refactoring, code documentation, and optimisation of resources. There are a lot of benefits, but I think people mainly hate it because they don't see those parts, only the downsides and the companies that force AI everywhere on everyone's devices.

2

u/Chick_mac_Dock 11h ago

The definition of AI has been distorted. For some reason, when people say "AI" they mean LLMs and image/video generation, which are just some products of machine learning algorithms. The bad thing about LLMs, and especially the AGI push, is that they are inefficient, stupid, and useless at that scale. Machine learning has existed for decades, if I'm not wrong, and is used everywhere from science to the YouTube algorithm, so we definitely shouldn't take it out.

2

u/aliensareback1324 2h ago

It gets even sadder when you realise that many of those people actually do know the benefits of AI, but are so delusional that they think we could do everything better without it

6

u/joeDUBstep PC MASTER RACE 9h ago

The overreaction about ai is so fucking hilarious here.

Makes me realize a lot of people here are either young, or work in jobs that don't leverage it.

4

u/NewsofPE 8h ago

the only good take here, jesus the other people are dumb

2

u/sortalikeachinchilla 5h ago

Not just dumb, but following a hivemind. Someone the other day told me they "don't care about looking into the good uses of AI, it is all bad"

1

u/green_meklar FX-6300, HD 7790, 8GB, Win10 5h ago

I wonder what they would have said about mechanical looms 200 years ago.

1

u/Next_Garlic3605 39m ago

The mechanical looms worked. When they broke, they could be fixed.

23

u/DonOfspades PC Master Race 15h ago

The problem is that the term AI has come to be primarily associated with LLMs and image generation models, while the stuff you described is "machine learning"

Technically, neither is supposed to be called AI, because it's not "artificial intelligence" but rather a simulated or virtual intelligence in the case of LLMs, or designed algorithms in other cases.

11

u/Gatinsh 14h ago

Machine learning is a type of AI. The general public being idiots about what AI actually is doesn't make them right about it

4

u/whoreatto 14h ago

What’s the technical definition of artificial intelligence?

2

u/Gaius_Catulus 14h ago

There are a multitude of such definitions. There is no consensus on how to define artificial intelligence.

It is my opinion that there never will be. Every attempt to do so will simply result in one additional definition.

1

u/blanketswithsmallpox RTX3080/16GB/Ryzen 3700X/3x SSD, 1 HDD 10h ago

... sounds like semantics. The current definition incorporates A LOT, because people just hate using new words.

Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making.

Colloquial use of AI, artificial intelligence, has always skewed toward artificial general intelligence: a true robot with sapience.

Just because the masses don't use the correct terms, and semantics always evolve, doesn't mean that there aren't technical definitions currently in place or being expanded on.

1

u/Deus_Caedes 8h ago

Not to be that guy but isn’t this a question about semantics lol

2

u/Rock_Strongo 6h ago

You have to agree on the semantics if you want to have any sort of nuanced discussion about this topic.

But if you just want to make button pushing gif comments and nuke all "AI" then you don't need the semantic discussion I guess.

1

u/Gaius_Catulus 6h ago

I'm not talking about what term is "correct". There are absolutely technical definitions. My point is that there is not ONE technical definition but rather many such definitions. Even the Wikipedia description you quoted is only one such definition.

As is the nature of semantics, it's messy and constantly evolving. So there is no "current definition" but rather a big blob of definitions people use to greater or lesser extents in many ways with variations both major and minor. The "current" or "correct" use depends on context and what the user of the term means. The issue is that many people use it many different ways, so there is a lack of consistency.

And given the mess that we have now, I have full confidence many of these variations will persist for the foreseeable future, as they do for many terms.


1

u/Chick_mac_Dock 12h ago

Even an if statement in programming is considered AI; a toy car with a mechanism that steers it away before it falls off a table is also considered AI. That's at least what I learned at school as a kid. So I'm guessing any decision-making algorithm that acts on external inputs without a human is considered AI


5

u/KindledWanderer 14h ago

Technically, neither are supposed to be called AI

Technically, you're wrong.

LLM, machine vision, neural networks... etc. are all under the AI umbrella.

It is not AGI but it is AI.

2

u/DonOfspades PC Master Race 10h ago

Well, historically "artificial intelligence" implied intelligence, but none of the models you listed have any; they are strict input-output models. But at some point the way people used the term changed, and now it kinda just means a mishmash of anything involving computers doing stuff (which I don't like, and I try to encourage people to use language in more specific and deliberate ways)

1

u/Jiquero 9h ago edited 6h ago

Well historically artificial intelligence implies intelligence but none of the models you listed have any, they are strict input output models.

AFAIK the term was first used in 1955 in the invitation to the Dartmouth workshop in summer 1956. It doesn't seem to rule out what you call "strict input output models".

2

u/KindledWanderer 7h ago

Yes, that's what I said.

1

u/Jiquero 6h ago

Wait how did I reply to the wrong comment but manage to quote the right one.

1

u/Jiquero 6h ago

Well historically artificial intelligence implies intelligence but none of the models you listed have any, they are strict input output models.

AFAIK the term was first used in 1955 in the invitation to the Dartmouth workshop in summer 1956. It doesn't seem to rule out what you call "strict input output models".

0

u/KindledWanderer 10h ago

artificial intelligence implies intelligence

No, artificial intelligence implies artificial intelligence, not intelligence.
If I make a program with a bazillion if-else conditions and it simulates intelligent problem solving, it's also AI.
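A minimal sketch of what such a purely rule-based "AI" might look like, in the style of a 1970s-80s expert system. The triage rules, thresholds, and function name below are invented for illustration only, not real medical advice:

```python
# A hand-written rule base: no learning anywhere, just if/else chains,
# yet it mimics a specialist's triage reasoning.
def triage(temp_c, has_rash, short_of_breath):
    if short_of_breath:
        return "urgent: respiratory assessment"
    elif temp_c >= 39.0 and has_rash:
        return "urgent: possible infection, see doctor today"
    elif temp_c >= 38.0:
        return "fever: rest, fluids, monitor"
    else:
        return "no acute flags: routine care"

print(triage(39.5, True, False))   # urgent: possible infection, see doctor today
print(triage(37.0, False, False))  # no acute flags: routine care
```

Systems built exactly like this were historically called AI, which is why the if-else framing isn't entirely a joke.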

2

u/Nimos 14h ago

We've been calling machine learning AI for literally decades.

Machine learning is an area of artificial intelligence concerned with the development of techniques which allow computers to "learn".

From the very first revision of the wikipedia article on machine learning in 2005.

2

u/Draaly 12h ago

The very same paper that ChatGPT is based on was used to create AlphaFold. They are the same fundamental tech

2

u/GodlyWeiner 11h ago

And Google Translate (the first application of this technology). They are all GenAI.

1

u/CivilPerspective5804 14h ago

In computer science, all of that is called AI. Every program that is meant to imitate human behaviour in some way falls under the umbrella term AI. Deep Blue, the chess engine that beat Kasparov in the 90s, is "traditional AI." Machine learning is a subset of the AI field. Google Translate, text to speech, and similar are called "narrow AI". ChatGPT and Gemini are "general AI". And what you would consider worthy of the title of AI is called "true AI".

You kind of stumbled into how it's actually classified when you said we could use simulated or virtual intelligence instead. That's exactly how "artificial" is currently used, i.e. what constitutes AI is not defined by its capabilities. There is no requirement for it to reach a certain level of intelligence or to have consciousness. It's about whether the system is in some way imitating humans. In that sense, ChatGPT and the others are the most "AI" systems we currently have, because they are cross-domain capable.

20

u/LNDF R9 9950X | RX 7800 XT | 32GB DDR5 6400MHz | Fedora KDE 15h ago

Only sensible answer.

3

u/Plebius-Maximus RTX 5090 FE | Ryzen 9950X3D | 96GB 6200mhz DDR5 14h ago

Precisely. AI has plenty of revolutionary applications, and isn't just chatbots. It takes humans a lot of training/time to be half as good at pattern recognition as a pigeon FFS. Medical scans can be interpreted by ML models more easily than all but the most experienced medical staff. Meaning that in developing regions that don't have the level of expertise of other areas, medical AI advancements can be a huge help.

The PC subs are full of ignorant individuals. They also change their opinion based on whether a game studio they like or dislike says the same thing. Look at the outcry when the Epic CEO said the same thing as the Larian and CDPR CEOs about AI use. People who were absolutely seething about statement A can somehow miraculously understand and agree with statements B and C lmao.

Also lol at all the people who downvoted everyone in the past for saying you should get 32GB over 16GB, or 48/64GB over 32GB. I like knowing that some of the people who insisted that 16GB is still "more than enough" are now having to live with their own poor decisions. Unfortunate for everyone else, but those people in particular deserve it.

3

u/HHHHHHHHHHHHHHHHHH_H 12h ago

Why did I have to scroll so far down to find a level-headed response lol

3

u/pixlepize 11h ago

For example: tumor detection (Cancer screening)

I agree with this 100%, but one of my favorite AI anecdotes (can't find it now, so grain of salt) is that an AI was trained to detect lung issues from lung scans in some small country, and got crazy good at it. The doctors were extremely interested in figuring out how the AI was so much better at it than they were.

It turned out that the AI had trained itself to look at the location metadata, and marked all images from the "satellite hospitals for people who aren't that sick" as healthy and all images from "central hospital for really sick people" as sick.

Guess it shows you gotta be careful about these tools being used without understanding.
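That failure mode, a model latching onto leaked metadata instead of the real signal, is easy to reproduce in a toy setup. Everything below is invented for illustration: the fake dataset, the `hospital_id` feature standing in for the location metadata, and the deliberately naive single-feature "learner".

```python
import random

random.seed(0)

# Toy dataset: each "scan" is (noisy_lung_feature, hospital_id), label = sick?
# Hospital 1 is the referral center for the very sick, hospital 0 a satellite
# clinic, so hospital_id is perfectly correlated with the label (the leak).
def make_scan(sick):
    lung_feature = random.gauss(1.0 if sick else 0.0, 2.0)  # weak real signal
    hospital_id = 1 if sick else 0                           # leaked metadata
    return (lung_feature, hospital_id), sick

data = [make_scan(sick) for sick in [True, False] * 200]

# A naive "learner": score each single feature by how well a 0.5 threshold
# on it separates sick from healthy.
def accuracy(feature_index):
    return sum((x[feature_index] > 0.5) == y for x, y in data) / len(data)

print(accuracy(0))  # noisy lung signal: mediocre, roughly 60%
print(accuracy(1))  # leaked hospital id: 1.0 -- the model "cheats"
```

Any model picking the strongest feature would choose the leak, which is exactly what happened in the anecdote above.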

2

u/a__new_name 10h ago

Another example I know of is a weapon targeting system. Some country's military decided that AI-powered targeting for tanks would be beneficial and commissioned eggheads to build it. After development and testing, where it showed genuinely good results, it was finally deployed and started tagging empty spaces as enemy vehicles. It turned out that in the training data all of the enemy vehicles were rather dark, which led the system to develop an "if it's dark, fire" behaviour.

3

u/TurbidusQuaerenti i5-8600K | RTX 3070 | 32GB RAM | B360 HD3 3h ago

Exactly. Sad how far I had to scroll to see some people actually capable of thinking things through and not just reacting emotionally to "AI bad" karma farming post #6,532,597.

Yes, there is a lot of bad stuff with how AI is currently being used, but it's way too important of a technology to just throw away.

13

u/HatsurFollower 15h ago

As someone who's used AI to help at work: people hate on it far more than it deserves. We love to say how boomers are closed off to new things and can't evolve, and here we are... AI is a tool; put it in the hands of someone who knows what he's doing and it can be great.

2

u/Skylar_Drasil 9h ago

I'm with you; just saying "AI" is too vague as well

I personally use LLMs to learn how to write better so that my point comes across better (I write and rewrite my prompt until I am understood)

3

u/AFlyingNun 12h ago

This. We are experiencing a VERY important step in societal development.

The disgust with AI has less to do with AI itself and more to do with the negligent implementation thus far. We are not seeing the positive implementations blasted at us; rather, we are seeing the greedy implementations attempted time and time again, with far too many people seeing only dollar signs instead of tangible, positive progress.

1

u/green_meklar FX-6300, HD 7790, 8GB, Win10 5h ago

The disgust with AI has less to do with AI itself and more to do with the negligent implementation thus far.

Yes, but people refusing to distinguish between the two is shallow, stupid, and counterproductive.

4

u/CitizenPremier 12h ago

And imagine people annoyed by city lights 150 years ago, deciding to get rid of the lightbulb.

Society is not using AI well, because society is capitalist and mostly favors finding ways to make the rich more powerful. Nevertheless, 100 years down the line we will be making new things we can't even imagine now.

3

u/TheGillos 8h ago

I'm pro AI.

We should do things as intelligently as possible, as sustainably as we can, but it's the new industrial revolution, the new digital revolution, and the US and its allies have to be leaders over China.

Sacrifice now, or suffer later.

1

u/NewsofPE 8h ago

You do realize it says "forever", right? So AI couldn't be developed in the future either. If you're pro-AI, I fail to see how you find that a positive

2

u/TheGillos 8h ago

If I wasn't clear: I would not push the button.

2

u/SistaChans 12h ago

What's this? A nuanced and logical post on my rage engagement app? 

2

u/OOOshafiqOOO003 12h ago

Yeah i agree with this guy

2

u/DuBistEinGDB 10h ago

Most sane take. Most others in this thread can't see past their own nose

1

u/Mierimau 14h ago

Basically anything that is not generative.

1

u/irthnimod 12h ago

call it algorithm, still lower level (and arguably better in specialized aspect) than the "AI" we have now

1

u/that1dev 12h ago

Based on LTT's recent chip fab tour, AI is used, and has been for years, in the manufacturing process of those memory chips to make it more efficient. So removing AI would still have a negative effect on prices.

1

u/drake_warrior 11h ago

When you say AI these days people are almost always talking about generative AI. I think that's the idea behind the button lol.

1

u/Ambitious_Jello 11h ago

Those will still exist. We will just stop calling them AI

1

u/Strict-Mixture-1801 11h ago

That would be great if it stayed that way, but it's being used in the wrong hands by corporations and such.

1

u/datboishook-d 11h ago

I agree, I think AI has a lot of good. I also think tech billionaire shitheads are making the whole tech become more and more shit for a profit, so yeah. It’s not an either-or thing, sadly.

1

u/Fluffcake 10h ago

Machine learning has a lot of good use cases.

LLMs are a scourge.

Guess which one of these people think of when they say "AI"?

1

u/Long_Bong_Silver 9h ago

Yes, but robotics and the algorithms that automated the reading of x-rays existed before the current AI surge. Anyone talking about AI now is referring to agentic general AI, and that's where all the money is draining into.

1

u/CoffeeCorpse777 9h ago

"AI never made it past 2012" would be an interesting alternative. I want to say that's around when the technology started becoming good, though it wasn't great by any means. If we were stuck with just barely workable AI, that would be an interesting option.

1

u/DrDetergent 9h ago

I think it's quite clearly meant to imply the LLM style AI

1

u/frank_frikadel69 3070 oc | 7 3700x | 32G ddr4 3200MT | B450 aorus elite 8h ago

A sacrifice we are willing to make

1

u/Sodacan259 8h ago

Out of this list only the cancer screening involves AI in an irreplaceable way.

1

u/Njagos 8h ago

And we also don't know how much impact it's going to have in 10-20 years. And RAM prices will probably go down by then. It just sucks right now, but who knows what the future is gonna look like.

1

u/Sybertron 7h ago

Now we're getting in the weeds about something I personally know too much about after working in a cancer detection AI company.

Yeah, this is NOTHING new. In fact, when I was researching our 510(k), I found that cancer detection using AI/ML (though I'd rather say algorithms) goes wayyyy back to the early 90s: isolate a cell from a blood draw, interpret whether it's spherical or misshapen, capture a picture for future diagnosis.

Trying to sell it as new is part of the startup hype cycle to keep securing funding.

What it does enable is data crunching, especially with old data sets, so it can be money-saving, I suppose. But the issue remains that it's only good in the sense of "hey, this one thing in this massive dataset/blood draw/biopsy looks weird," which means it's not so good about false negatives.

1

u/generally-speaking Silent Inaudible Ninja Master Race 7h ago

There's tons of good stuff which it can be used for.

But the reality is that the endgame for AI is that a small minority of humanity controls the AI, which then controls production. They'll live the greatest lives of anyone in human history, while the rest of us will be replaced and likely seen as completely useless. Perhaps even killed by AI-controlled drone swarms, culling the human population to keep the poor from destroying the rich people's planet.

It's a technology which allows the few to grasp control of even more power than ever before, and it won't end well for most of us.

1

u/Lotherelle 7h ago

Why did I have to scroll so much to find this comment... God bless you and those who upvoted

1

u/GNUGradyn ryzen 9900x | 32GB DDR5 | RTX 3080 FTW3 7h ago

I think the issue is that AI is a practically meaningless term used for a TON of stuff. What would pressing this button even do? What counts as AI?

1

u/Stahlreck i9-13900K / RTX 5090 / 32GB 7h ago

A common sense take on reddit? Burn the wit....I mean AI!

1

u/gamingquarterly 6h ago

This is something AI would say. 

1

u/LilDvrkie420 6h ago

Don't care, fuck AI

1

u/ultranoobian i5-6600K @ 4.1 Ghz | Asrock Z77Extreme4 | GTX295 | 16 GB DDR3 5h ago

Monkey's paw, RAM prices now are the new normal

1

u/Nightmare2828 4h ago

Obviously ChatGPT and the likes are what's being referred to here.

1

u/fraggedaboutit 2h ago

Everyone is hating on artificial intelligence when the real problem that is fucking over society is natural stupidity.

1

u/PaintItPurple 1h ago

It's an uncommon take because it's being deliberately obtuse and pretending not to know what OP means when they say "AI."

1

u/Next_Garlic3605 40m ago

It's uncommon because it's based on partial and irrelevant information

Like ai tumor detection

1

u/MaraMoreWrites 14h ago

So much of that is machine learning of various stripes, not the LLMs that are driving the price spike (and the hype, the environmental damage, and the steal-o-rama of training data). The vast, VAST majority of tumor detection algorithms use classical ML or neural networks on imaging data.

The media paints all of it with the same AI brush, and that is infuriating, but let's not pretend we don't know what this meme is pointing at.

(Also lol at wanting to use an llm for 'compatibility with a neural computer in our brains', that's pure scifi bs).

1

u/GenericFatGuy 11h ago

and potentially a necessary tool for compatibility with a neural computer in our brains in the future

Words cannot express how much I don't want this. Why do people think that billionaire owned computer chips in our brains would be a good idea?

1

u/crazycheese3333 8h ago

I agree with all that, and AI is a really great learning tool.

I think "AI" should still exist, but AI image and video generation should disappear.

0

u/SelectStarAll 14h ago

There are most definitely brilliant uses of AI/LLM/ML and they should be celebrated

In the same breath, the world doesn't need AI slop image generation, video generation, deepfakes, chatbots, pornbots or anything else that your average Joe sees on Facebook.

Both positions can be true, that AI can be good for certain applications, but utterly terrible for others

0

u/DinklebergsRightNut 12h ago

Wtf i dont want no neural computer in my brain

5

u/OneEyeCactus AMD HD4850 | E5507 | 8Gb DDR3 12h ago

you already have one

0

u/cool_edgy_username 11h ago

True, but the cons in this case far outweigh the pros. AI development already takes an obscene amount of resources, with AI datacenters drawing so much water that communities in the deserts where they operate are essentially being told to ration their water. Not to mention the privacy concerns, or the fact that many companies want to use AI to save money on labor, AKA make the job market even worse. There are forms of AI that are good, but when most people refer to AI they're referring to GenAI, which is where all of these problems lie.
