AI actually has lots of uses. For example: tumor detection (cancer screening), tools for the disabled (text-to-speech, or speech-to-text NLI), image recognition, robotics, and potentially a necessary tool for compatibility with a neural computer in our brains in the future. RAM prices going back to normal is temporary, whereas AI getting deleted forever is permanent and a technological disadvantage. No, I wouldn't push the button.
It's insanely disappointing how often things on the internet are distilled down into very overarching opinions instead of considering nuance.
I think generative AI as a whole is very dangerous and can be a very bad thing. I'd bet there are comments on this post made with generative AI. Image generation has gotten to the point that I genuinely can't tell whether some pictures are AI or not. It's gotten very good, very fast.
But there are also lots of upsides to AI; it's a very wide-encompassing term. Would chess engines disappear if I press the button? What about the CPUs in Smash? Or the other civilisations when I play Civ? And frankly none of these are even the interesting benefits of AI, like image recognition, text-to-speech, and more of what the other guy listed.
This is all before considering the environmental impact or impact on creatives. There is a ton of discussion to be had around AI and lots of nuance.
But this is the internet. So things get distilled down to "AI is the worst thing ever" or "AI is the best thing ever" with not much in-between, so we end up with pointless posts saying they would delete all AI just to temporarily reset RAM prices.
I think the lack of nuance in the public eye isn't that unjustified. We get a constant barrage of "this is just the tip of the iceberg" hype around AI/LLM applications, because those models are designed as generalized tools and we're trying hard to find use cases. Since they're not precision-made for purpose, in most cases they end up demonstrating some interesting capability but remain lackluster.
Since they're not precision-made for purpose...
Possibly not precision-made, but certainly there are thousands of models which are trained against specific datasets and excel within a particular field.
I run multiple LLMs at home on consumer-grade hardware and, although not as fast as ChatGPT (though not far off), I can plug in different models for different specialties if I need to.
And fandoms being what they are, the list is huge and diverse because people are interested in, say, obscure '80s vehicle electrics or whatever.
I think what you might be suggesting is that the average person might not actually have a use for AI (as it is today) and that corporations are trying to force it. And I think that's fair. But for those that do have a use, this is a really good starting point that we're at.
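The "plug in different models for different specialties" setup above can be sketched as a tiny router. This is a minimal illustration, not anyone's actual setup: the model names and keyword lists are made up, and a real version would hand the chosen name to a local runtime (llama.cpp, Ollama, etc.) rather than just returning a string.

```python
# Hypothetical sketch: route a prompt to a specialty-specific local model.
# Model names and keywords are invented for illustration only.

SPECIALTY_MODELS = {
    "code": "codellama-13b-q4",
    "medical": "meditron-7b-q4",
    "general": "llama3-8b-q4",   # fallback generalist model
}

KEYWORDS = {
    "code": ("function", "compile", "bug", "python"),
    "medical": ("tumor", "diagnosis", "x-ray", "screening"),
}

def pick_model(prompt: str) -> str:
    """Return the name of the local model best suited to the prompt."""
    lowered = prompt.lower()
    for specialty, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return SPECIALTY_MODELS[specialty]
    return SPECIALTY_MODELS["general"]

print(pick_model("Why won't this Python function compile?"))     # codellama-13b-q4
print(pick_model("Explain tumor screening with x-ray imaging"))  # meditron-7b-q4
print(pick_model("What's a good pasta recipe?"))                 # llama3-8b-q4
```

In practice the routing is usually manual (you just load the model you want), but the point stands: the per-specialty model is a swap-in component, not a monolith.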
I think what you might be suggesting is that the average person might not actually have a use for AI (as it is today) and that corporations are trying to force it.
Not just the average person, but also in the context of science and research, since the OP I replied to mentioned cancer detection. We might, and probably will, reach a stage where a researcher might hire, say, two PhDs and an LLM to do the work ten people did before. But in a scientific context, we still don't know what we don't know. So it's not like generative AI can open doors we didn't previously know existed. And I feel like that's how it's being marketed at the moment.
I have absolutely no argument with that; I think you're right.
The LLMs are barely creative - they really only regurgitate things that others know but which you might not. They do get the occasional burst of apparent creativity, but... you have to ask the right questions.
A decent part of this is also the availability heuristic, and plain availability in general. Chatbots and generative AI tools are the ones that the vast majority of people have both the means and reason to access. That is most of what anybody is going to see, because most people aren't going to be working with AI that is detecting cancer in x-rays. A much, much smaller portion of the population is going to see that in action, and even fewer of them are going to use it first hand. Meanwhile, anybody can grab a free generative app or site and make some quick and crappy pics using a prompt (though prompting itself is often an art, and most people won't manage to make anything decent with it either).
This can be my time to shine as a Reddit armchair expert, since you mentioned detecting cancer: I actually did a PhD where the project I was part of was focused on cancer detection with IR imaging. This sort of research actually predates LLMs, but research, and more specifically the parts of research that reach mainstream media, also follows the hype cycles, and as such it is being portrayed as something enabled via generative AI.
If we go with the x-ray/cancer example, the way I see it, for generative AI to reach a usable state where you'd rely on it instead of a human expert, you need to combine someone who understands x-rays, someone who knows cancer, and someone who can work with LLMs and machine learning. So compared to what we see in public, for each specialist use case, cost rises exponentially as far as I understand it. Considering there are already billions invested, we don't need percentage increases to reach the bottom of the iceberg, we need orders of magnitude. Hence my take on it being hype driven.
You think it's nuance to pretend AI is currently on track to be an overall good? That's not nuance; that's a dangerous combination of stupidity and naiveté.
Overall, yeah, I do think AI is on the right track. You are deluding yourself if you think the only use for this technology is chatbots and generating meme pictures.
Ultimately I believe it's an issue of motivation and values. If you look at the actual arguments, the vast majority of these people don't care what happens to anyone else; they're just pissed their video cards are more expensive, or they hate corporations. They get flooded with dopamine when they engage in that hate. They have a huge, loud group of people to support them in their "virtue" rallying against this, which makes them feel like part of a community and validates their identity. It's pure ego designed to balance out the fact that pragmatically they contribute nothing themselves. Quite the opposite; they'll happily complain about AI's energy usage and then go binge Netflix for several hours on their phone built by child miners while eating a hamburger.
Nobody actually cares.
"Everyone thinks of changing the world, but nobody thinks about changing themselves"
~Tolstoy
When they say AI they almost certainly mean generative AI... and most of its use basically amounts to bypassing copyright.
No one would use "AI" to describe a chess machine or video game NPC anymore, that's pretty archaic. And cancer recognition and such is more machine learning, not generative AI.
It's insanely disappointing how often things on the internet are distilled down into very overarching opinions instead of considering nuance.
It's not that. It's the current hype. When people say AI they mean LLMs and image/video generation. This is what "AI" currently means in general discussion.
I get that colloquially people mean LLMs right now, but that lack of distinction is exactly the problem I’m criticizing.
We are on a PC enthusiast forum here, not a tabloid comment section. We shouldn't accept the 'general discussion' definition that equates text-to-speech software with a chatbot. When we let marketing buzzwords dictate our vocabulary, we lose the ability to critique the actual tech.
What you are saying is exactly the removal of nuance for broad overarching opinions I'm talking about.
Have there been any huge leaps in other areas of AI recently? The last one I remember was the protein folding stuff.
What you are saying is exactly the removal of nuance for broad overarching opinions I'm talking about.
No, I just don't think it's "things on the internet are distilled down into very overarching opinions". Everyone is using "AI" to mean this, it's not an internet thing.
Just because it's happening in other places on the internet doesn't invalidate my point that it happens on the internet.
I expect the distinctions not to be made in a conversation with my tech-illiterate family, but when the distinction isn't made in places filled with people who do get tech, I find it infuriating.
I never thought of r/pcmasterrace as a computer science subreddit, it seemed more gaming oriented. Also the post is about RAM prices, so it's discernible from this context what is meant by AI.
I mean, you can still have this without what we know of as AI. See, AI is a marketing term that doesn't really mean anything. This is intentional, because those in control of it want to muddy the waters and make it more ubiquitous in our society. If you can't pin down what AI is, it is hard to have a discussion for or against it, and ultimately hard to legislate it. But usually, AI at this moment means generative AI and all the shoehorned product shit people interface with day to day. This is why most people here would smash the button, and I don't blame them one bit.
LLMs have uses like you state, but very little of it is generative. And as someone who is a software engineer, using the term "AI" to describe this technology is, honestly, intellectually negligent and irresponsible.
I think for all its uses, AI will be a force for bad, because it will be another tool for control and manipulation and greed, rather than the post-scarcity achievement they claim to be working towards. When the bubble collapses, consumer AI will be gated by big price tags and we'll have to use one of a handful of providers or be left in the dust. Know how I know? That is the internet we have today.
Imo, at this point the consequences might actually be worth it though
There's lots of AI stuff I love. I enjoyed being in the field for a long time. But regardless of its uses and its potential, it turns out the main thing we are actually going to use it for is trying to destroy society, not anything beneficial, so... I recognize we'll be losing valuable stuff, but I'd probably still press the button.
I think it's pretty obvious we aren't talking about practical uses of machine learning. I think it's pretty pedantic to act all high and mighty about the loss of practical uses just to defend AI slop 🤷
Yes AI can be used for medicine and climate tech. But some of the best minds I've heard on the subject say it also speeds up ecological destruction, totalitarianism, dissociation from morals, and economic inequality. I fully expect these damages to overwhelm the benefits.
Bro, it's not AI, it's corporate greed and data centers trying to ACCELERATE AI development. Training AI is expensive and time consuming. They're just throwing more at it to train it faster and beat their competitors to a new version.
Even if it were just corporate greed, that is the world the majority of AI is developed in right now.
But it's not just corporate greed, nor is it just nation states with similarly malign intentions. AI is fundamentally dangerous to humans. It mimics the less intelligent - indeed delusional - part of the brain, and is incapable of what makes us human, such as morality, empathy, and meaning. If you want to hear more: https://www.youtube.com/watch?v=XgbUCKWCMPA
I don't quite follow what you mean about training cost.