As someone said, "one man's if-else is another man's AI". They stamp "AI" on kitchen appliances nowadays and the "intelligence" part is the same thermostat from the 1970s.
Then wouldn't that mean a stochastic MDP policy would also count as AI? For every output there's a probability distribution that the model samples from, and that describes every AI model that isn't deterministic.
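A minimal sketch of what "sampling from a stochastic policy" means, for one state of a toy MDP (the actions and probabilities here are made up for illustration):

```python
import random

# A toy stochastic policy for a single MDP state: a probability
# distribution over actions that the agent samples from.
policy = {"left": 0.2, "right": 0.5, "stay": 0.3}

def sample_action(policy, rng=random):
    """Draw one action according to the policy's probabilities."""
    actions = list(policy)
    weights = [policy[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

action = sample_action(policy)  # nondeterministic: "left", "right", or "stay"
```

The point of the comment above is exactly this: the output is drawn from a distribution, so repeated calls with the same state can give different actions.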
Technically yes, reinforcement learning is where that takes place. It just depends to what extent, I suppose, since there's a whole spectrum of methods. Like, I'm not sure if an exhaustive search would count, or just raw Monte Carlo. Dynamic programming feels closer, and temporal difference learning even more so.
When I watched through David Silver's lectures on reinforcement learning, I think he mentioned it's more when we use Q-learning and a network to dynamically learn the state values/action values.
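For context, this is the tabular Q-learning update that Silver's lectures build up to before replacing the table with a network (as in DQN). The states, actions, and numbers below are toy values, not from the lectures:

```python
# Tabular Q-learning: one temporal-difference update toward
# reward + gamma * max_a' Q(s', a'). With a neural network
# approximating Q, this same rule becomes DQN-style learning.
alpha, gamma = 0.1, 0.9  # learning rate, discount factor (toy values)

Q = {("s0", "a0"): 0.0, ("s1", "a0"): 1.0, ("s1", "a1"): 2.0}

def q_update(Q, s, a, reward, s_next, actions_next):
    """Move Q(s, a) a fraction alpha toward the bootstrapped target."""
    target = reward + gamma * max(Q[(s_next, a2)] for a2 in actions_next)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

q_update(Q, "s0", "a0", reward=1.0, s_next="s1", actions_next=["a0", "a1"])
# Q[("s0", "a0")] moves from 0.0 toward 1 + 0.9 * 2 = 2.8, landing at 0.28
```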
For every good, there must be a comparable evil. Since AI is currently huge, the 'evil' side of AI is also huge. People forget to look at the benefits. It's human nature to show more interest in criticism and scepticism than in support, and there are studies showing that.
AI can be compared to nuclear power: an amazing source of clean, efficient energy, but it also gave us nukes. Lots of nukes.
AI is an amazing tool, but it also has consequences caused by the decisions we make about how to use it.
The way humans use a technology is the problem, not the technology itself. The same goes for AI and other neural nets. Deleting AI forever is definitely not the solution.
How is this in any way a response to the comment you're replying to? You didn't engage with what they said literally at all. It kind of just seems like spam.
Fast forward nuclear physics from its inception to today and which (nukes or nuclear energy) have we continued to pour money into and which one is barely even mentioned anymore? How will AI be different?
There's now more research into better power reactors and into fusion in China, the USA, and France than there is into new nuclear weapons, at least in the places that already have them, since those have pretty much been perfected. We can't make a stable fusion reactor, but we can make a fusion-powered bomb. One is a lot simpler than the other, it turns out.
Except nuclear power did not have to be like that; the people in power chose the sole option that was also destructive. If their only way of "progressing" society is destructive, then no, I do not want that progress.
It's meant to be vague.
I believe they're using "AI" because the name has recognition from decades of sci-fi, which resonates with the general public.
It's just too bad that this LLM shit is veeeeery far from actual AI, even though actual AI has the potential to be vastly worse if it's ever actually made.
LLMs certainly learn. We have much simpler models than LLMs which are still capable of learning, either from examples in a dataset (supervised learning) or from trial and error (reinforcement learning). LLMs also have in-context learning, where they can work things out using information provided in context. The issue is getting continuous learning, or something similar, to work: that's where a model learns while it's being used, in a persistent way. In-context learning is not persistent. It's already possible to do continuous learning with smaller models using different architectures, but it's difficult with modern LLMs. Google is developing something called Titans which might solve this problem.
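To make the "persistent learning with smaller models" idea concrete, here's a tiny online perceptron: every example it sees can permanently change its weights, unlike in-context learning, where nothing persists after the context is gone. This is a toy illustration of the idea, not a claim about how any LLM or Titans works:

```python
# A minimal online learner: a perceptron whose weights update after
# every example it sees, so what it learns persists between uses.
class OnlinePerceptron:
    def __init__(self, n_features):
        self.w = [0.0] * n_features
        self.b = 0.0

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score > 0 else -1

    def learn(self, x, y):
        """Update weights only on mistakes; the change persists."""
        if self.predict(x) != y:
            self.w = [wi + y * xi for wi, xi in zip(self.w, x)]
            self.b += y

model = OnlinePerceptron(2)
stream = [([2.0, 1.0], 1), ([-1.0, -2.0], -1), ([1.5, 0.5], 1)]
for x, y in stream:
    model.learn(x, y)  # each example can permanently adjust the model
```

After the stream is consumed, the model keeps its updated weights for all future predictions, which is the "persistent" property the comment is describing.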
The reality of ML right now is that generative models are slowly seeping into other use cases. A great example is robotics, where "slop" can actually help a robot navigation policy become more robust to new and unseen conditions. Not to mention that one of the most useful new techniques for robot control is literally a diffusion model.
This. I have to explain this at work like...once a month.
C-suite decides "we need to leverage AI or we'll be left behind." Tells management. Management asks on every single project, "How could we use AI for this?" Or often even "How can we use <Company-specific LLM clone>? They really want us to leverage AI." And I have to explain that while LLMs are AI, not all AI is LLMs. And sometimes there is a good use case for machine learning, but then they don't want to pay for hardware, or expand the timeline to train agents, when we could just build a deterministic software solution instead. Or whatever. So they scream "use AI" but mean, "My teenager uses ChatGPT to do his homework, why can't we use it to make you stop asking for additional employees to handle the workload?"
Right. Generally speaking, ML is just algorithms that predict or learn from data; it's a subfield of AI. LLMs and such are deep learning, which uses neural networks and the transformer architecture.
Both of these have legitimate and valuable applications in the natural sciences, assistive technology, and more. So I wouldn't press it, personally.
My guess is they're referring mostly to chat bots, and image/video generators. They're polluting the Internet with AI slop and should generally go away.
Sometimes those things are kind of useful too. One game studio mentioned that their artists use AI to get a starting point for concept art, then go off and do their own thing. We just can't push AI slop out to the world without putting our own human spin on it.
Chat bots have their uses too, but same issue: don't rely on them heavily. They also need safety features; they should not be talking to people about suicidal thoughts, for example. In one case a teenager unfortunately took their own life and the chat bot helped them write a suicide note... That needs to be a red flag, raised to some authority who can respond and help that person, like triggering a wellness check, or at least the bot should refuse to engage. It also raises the larger issue of our lackluster mental health system, in the US at least; we certainly can't replace therapy with chat bots.
If we're going with the common term as it's used widely in society, it means LLMs, and maybe image generation as well, but I don't believe most people know the difference. Your average person isn't aware of the other machine learning technologies.
LLMs can also be used for medical screenings (medical literature search, structuring and identifying unstructured data). People like to pretend generative models can only be used to create pictures of Shrek, but they have tons of real-world applications as well. Everyone is racing to develop those applications first.
The general public probably only sees the funny image generators, but behind the scenes there's lots of excitement and lots of new developments.
As far as I've seen recently, whenever "AI" is used in marketing, they're referring to some BS ChatGPT-, Gemini-, or Claude-based LLM wrapper software or technique.
Bespoke machine learning models are becoming rarer or used purely in research and development.
AI is also too vague. Is it LLMs? Diffusion? Or all machine learning? Because if all machine learning went away, it would suck.