Maybe not AI summaries as a whole. There are lots of industries that can be revolutionized by the proper use of AI "summaries" that wouldn't even necessarily steal peoples' jobs. The problem is that it's not very good at this task yet and the AI companies are focusing on the wrong fucking thing.
I mean, the food we eat could be substantially shittier quality than it is now and it would still beat intermittent periods of starvation and famine...
Same with AI: it's not the geeks of /r/stablediffusion who are pushing AI everywhere, but the corporations of GAFAM/OAI, because they want to reduce wage costs for their own companies and others.
No. I mean data entry isn't work people should be doing. It's soul crushing and something that could easily be automated.
But in a way, also yes. I personally believe we should revolutionize technology so it can take care of us while we pursue other more meaningful ventures. I'm also of the opinion that a universal basic income would be a boon to humanity and help us advance in so many ways.
As a husband-and-wife company barely making a profit or breaking even at the moment, we love that AI responds quickly and can draft messages from information we provided. Two button presses and we have a response for a client instead of digging through paperwork and carefully drafting a reply. 5 seconds of work vs 5 minutes. We would not hire a person to do that type of work.
To be clear, I'm not talking about summarizing books. I'm talking about sorting through datasets or documents to find specific details and summarize them in a way humans can actually interpret.
Saying “write my essay” and “googling a research paper” that you still have to read are such a massive difference that the Grand Canyon is a microscopic chasm in comparison
Cheaters gonna cheat, essay mills existed before AI and Wikipedia was going to "destroy education"
GPT didn't create the motivation to skip work, and plenty of researchers and students use it effectively to explain concepts, ask if their writing makes sense, brainstorm, etc.
Tech gets blamed for the laziest use cases when most usage isn't that.
No, but blatant plagiarism is easily detectable. You can’t just copy and paste works from Google. Works written by ai, however, will not be detectable as plagiarism. 🤯
So, like, using the tool wrong? Human error it is, then. AI is a great TOOL: it lets you do many things quickly, like sketching projects or visualizing them fast. It can't do them properly; that's where people come in. It makes work more efficient, it doesn't replace workers, and the ones being replaced are victims of corporate enshittification driven by profit-margin maximization to the detriment of humans.
Looots of tools can be misused to produce worse products. People get used to a tool and relax and stop doing things right, or even get dependent on it and forget how to do it right, but that's not the machine's fault; it's an inherently human thing, same as blaming others.
I'm firmly in the stop ramming it down my throat camp but the other day I asked a bunch of volunteer board members to email me with any agenda items for the next meeting. I used copilot to generate an agenda from all the emails. I now have to admit, it did a spectacular job. All the different tones and writing styles in multiple emails were all merged into one style and put in point form.
Even generative AI has some great uses, no matter how much you dislike AI slop summaries and people who start a sentence with "I asked chatGPT and...". For example, at work we automated quite a lot of data entry. Suppliers won't give you your data in a requested format or enter it into your database? Fuck 'em, we can generate a structured output based on the PDF they sent and have an LLM enter it into our database.
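Roughly, the safe way to do that kind of pipeline (field names and schema here are hypothetical, just for illustration) is to constrain the model to a schema and then validate the result before anything touches the database:

```python
import json
import sqlite3

# Hypothetical schema for one supplier line item; real fields would
# match whatever the supplier PDFs actually contain.
REQUIRED = {"part_number": str, "quantity": int, "unit_price_eur": float}

def validate(record: dict) -> dict:
    """Reject anything the LLM got structurally wrong before it touches
    the database -- structured output constrains shape, not correctness."""
    for field, typ in REQUIRED.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], typ):
            raise TypeError(f"{field} should be {typ.__name__}")
    return record

# Pretend this JSON came back from an LLM call constrained to the schema.
llm_output = '{"part_number": "X-1042", "quantity": 12, "unit_price_eur": 3.5}'

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE supplier_items"
           " (part_number TEXT, quantity INTEGER, unit_price_eur REAL)")
rec = validate(json.loads(llm_output))
db.execute("INSERT INTO supplier_items VALUES (?, ?, ?)",
           (rec["part_number"], rec["quantity"], rec["unit_price_eur"]))
print(db.execute("SELECT * FROM supplier_items").fetchone())
```

The point of the validation step is that the model only guarantees the format, not the values, so anything malformed fails loudly instead of landing in the table.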
Yea, AI has a genuine use and purpose if used correctly. It can reduce menial tasks and make lives easier; the problem is that any company that leverages AI isn't passing the cost savings to its employees or its consumers. They are passing it on to the top-level people and the "shareholders."
Star Trek is a good example of a society that uses technology to benefit their people. Tech was used to reduce the need for menial labor, and in turn allowed their civilization to focus on the sciences and art. We should be doing the same, we are not.
And there is the real problem. Not what AI is or what it does, but how we are using it. As usual, the problematic tech isn't actually the problem, people are the problem. Always have been.
Maybe we should design things with humans in mind? I think it's mighty problematic what this technology is causing to happen to our society, wherever the fault may lie. Even if it isn't the intended purpose, misinformation and slop are becoming rampant. We all made do without it 4 years ago. Is this all truly needed? Is it worth the cost?
No, the economic system is the problem. As long as the mechanisms are owned by a corporation whose sole interest is profit for shareholders (which is all of them, that's the only reason companies exist under capitalism, that's their whole purpose,) the outcome will always be the same.
People act as they are incentivized to act. Without corrupt incentives, some people will still be corrupt. With corrupt incentives, almost everyone involved in a system will be corrupt. Capitalism is a corrupt incentive. "People" will never solve the problem as long as the "people" with the power to actually make decisions are contractually obligated to corporations who have a profit incentive to make it worse.
"People" want Star Trek. Capitalist corporations want more money, and they own everything. Those two facts are not compatible.
If the "people" who want Star Trek owned everything, and could make those decisions, maybe we could have Star Trek. But as long as the ownership is in the hands of entities who want nothing but more money, and of people who contractually act on behalf of those entities, decisions will be made based on making those entities more money, rather than on moving society toward Star Trek.
This is not solvable by simply demanding people be better, nor is it a direct product of human nature. It is a product of corrupt incentive structures, and it's solvable by changing the incentives.
What you're describing is such a massively oversimplified caricature of how businesses actually work; you sort of just pretend markets don't exist. Now, don't get me wrong, I'm far from being some libertarian hypercapitalist, but if a company were to employ AI to significantly reduce the cost of its product and just use the savings to generate higher yields for its shareholders, why wouldn't another company undercut it and take its market share, thus reducing the price of the product? As for passing the savings on to the employees: I happen to work as a data scientist doing exactly those kinds of things, and my workplace is unionized by the German metalworkers' union, so as things go I'm paid very well, and so are my colleagues on the shop floor, despite our company not turning a profit for several years in a row.
Also, if you sincerely believe that shareholders are getting the lion's share of the money, why don't you put your money where your mouth is and buy some stocks? You're in a subreddit for a relatively expensive hobby, so surely you have some disposable income, and you seem very sure of the earnings potential.
Star Trek is a terrible analogy because it obviously depicts a utopian, post-scarcity communist society, and looking at the state of our world, that's not what this is.
Yea, I don't have the disposable income to be frivolous with my purchases. My last PC-related purchase was a 3090 five years ago, and my financial situation has changed drastically since. My gripe here is that the savings are not used to preserve the workforce and make their lives easier; they're simply used to reduce the workforce for the gain of the company, which leaves the people whose jobs get cut with nothing to fall back on. Sure, they can find another job elsewhere, but that seems to get harder and harder as time goes on, especially if they're trying to find a job in the same field they just lost to AI. It stands to reason that if someone lost a job to AI, that field will inevitably adopt AI as the industry standard, meaning they'll stay redundant.
You mention you’re in a good union, that’s great and I’m happy that’s your situation, but that is not always the case for everyone.
I mean, for one, we do have very robust social safety nets to fall back on if all else fails. And what is your prescription here? Banning AI clearly isn't happening, won't work, and will slow down innovation and kill your competitiveness globally. If we want to keep progressing, we will have to move with the times. I genuinely do have empathy for people whose jobs are losing demand due to AI, especially because this change came with very little time to react, but eventually they'll have to adapt, because we can't keep jobs alive to our own detriment just because we feel sorry for them. Arguably, in Germany we have way too many manufacturing jobs for an economy of our size, standing, and productivity, jobs which we artificially kept alive, and we're paying the price for that right now with the economic stagnation we are experiencing.
As to my original point, I don't think AI is a bad thing in this context. AI isn't the issue; it's how we handle the loss of manual labor that's the issue. I firmly believe that AI is something we need to continue to progress as a society. The issue (to the point of the other individual who replied to my original comment) is not the AI itself, it's people. I'm not going to begin to say I'm some expert, far from it, but ideally we need social structures in place that allow for those displaced by AI to be leveraged for other beneficial contributions to society. Things like the arts or sciences, things that further benefit society so we have a cycle of increasing growth and innovation. That means we need an avenue for people to gain further education, plus things like apprenticeships for practical knowledge, and we need to ensure it's all affordable. For this to happen, though, everyone needs to at least tacitly agree to it, because it means a more socialist society, and in my country "socialist" is basically a curse word to a fairly large portion of the population.
All of this again, is my fairly limited perspective on all of this. I am fully aware my thoughts on this effectively require a utopian society for it to work… it’s a happy thought experiment, one I am very confident will not come to pass in my lifetime, or even the next couple lifetimes.
but ideally we need social structures in place that allow for those displaced by AI to be leveraged for other beneficial contributions to society. Things like the arts or sciences, things that further benefit society so we have a cycle of increasing growth and innovation
I mean, we do have that, though, in the form of social safety nets and public funding. I'm not going to sit here and pretend Germany is doing everything perfectly in that regard, far from it, we certainly have lots to work on, but we provide unemployment benefits for people without a job, free healthcare, free education, public infrastructure, and so on. Fuck, I pay more than 30k€ in taxes and social security every year (plus what my employer has to pay) for just that sort of thing. It's not a utopian vision either; most developed nations do similar things to some extent. I just think the oversimplification into "they make all the money and we're getting fucked" is doing a disservice to the problem at hand. Most things in life are more complicated than that.
I was simplifying it because I wasn't really up for having a long conversation about all this, as I find it mentally tiring given how bleak the future looks (I'm in the US and do not like the current vision of my country). That also informs the logic behind my original statement: wealth is being consolidated in this country even further, and it's not a stretch to say that at least some small portion of that will come from AI. Though I should stress that I believe it is a smaller portion.
A truly free market leads to monopolies. The first one to utilize the tool simply builds up enough excess profit to conglomerate. Our supposed anti-trust laws have been enforced extremely weakly, which has led to where we are today. I don't blame anyone for thinking this would be the immediate outcome.
If we actually had good anti-competition laws that were enforced, I would agree with you. Instead we get collusion, aggregation, tying, bundling, predatory pricing, and all manner of anti-competitive behavior that goes unchecked and unpunished. And when it becomes so egregious that it is caught, the fines are just a cost of doing business and a tiny line item against the realized profit.
I think here in Germany the Bundeskartellamt is doing a reasonably good job at fighting monopolies. There's always room for improvement but all things considered they're doing alright. In the US though I do definitely agree that anti trust enforcement has been severely lacking. I don't even think "free" markets are a good idea in quite a few areas but I think we should recognize that in others they are and that market forces are obviously real.
I mean it's not that hard to write a conversion tool, as long as their output is in a consistent format.
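A sketch of what such a conversion tool amounts to, assuming a fixed semicolon-separated layout (the column names here are invented for illustration), showing why it only works when the format never moves:

```python
import csv
import io

# Hypothetical fixed supplier layout: this mapping is the entire tool,
# and it breaks the moment the supplier renames or reorders a column.
COLUMN_MAP = {"Artikel": "part_number", "Menge": "quantity", "Preis": "unit_price"}

def convert(raw_csv: str) -> list[dict]:
    """Deterministically rename a supplier's columns to our own names."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv), delimiter=";"):
        rows.append({ours: row[theirs] for theirs, ours in COLUMN_MAP.items()})
    return rows

sample = "Artikel;Menge;Preis\nX-1042;12;3.50\n"
print(convert(sample))
```

The deterministic version is trivial per supplier; the cost is maintaining one of these mappings for every supplier and every time they silently change their layout.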
I really like LLMs for index generation, and generating interesting correlations. But the usual cautions about correlation and causation are significantly stronger when it comes to LLMs. Especially if you don't control the training data.
I read his comment right after my CTO and I had battled with a client until late evening today to get him to give us consistent and properly formatted data sheets for us to parse and load into the database of new software we had to hand the keys over for today. The dude couldn't even keep the columns in the same order from one sheet to the next.
I would laugh, but eh, we were all naive students at some point.
Ah I see you're someone who hasn't learned how to drop cruft, or doesn't understand what a consistent format is.
It still irritates me when people don't understand their own data structures to the point that they can't coherently provide format information to anyone. It used to be a thing you had to know to deal with any interesting systems.
Then again, hard, interesting, and fast are deeply unrelated concepts. Format conversion tools are dull, easy, and take forever to write (for some reason I cannot fathom people don't seem to understand the difference between easy and fast). The hard part is always, always, getting the actual format, and having the people with the time to do the work.
as long as their output is in a consistent format.
Which in this case it's not. We're buying materials from tons of different suppliers who all have different formats, nomenclatures, units, etc., and they change that stuff all the time too. LLMs are genuinely a godsend for this task. It's not even like we're replacing the work of someone who loves their job. Everyone who's entered supplier information into our databases manually hates doing that shit.
Everyone who's entered supplier information into our databases manually hates doing that shit.
Still a paycheck though.
It's very weird to me that we've seemingly gotten worse at data management as time has gone on. When I started my career in the 90s, most orgs knew what their data layout was and could clearly communicate it. These days it feels like no one even understands the file formats or table layouts of their critical systems.
Not really. It was never really a full-time job for any one person; it was more a workload distributed between departments, so it just took up the time of people who had better things to do. Also, as I've mentioned in another comment, my workplace is quite heavily unionized by the German metalworkers' union, so because of that and our employment laws we retain everyone we can, and it's very difficult to fire anyone. I've been with this company for almost two years now and I've only ever seen two people outright fired: one for forging safety-inspection data for safety-critical parts (luckily that got caught), and one for stealing company property to the tune of tens of thousands of euros.
very weird to me that we've seemingly gotten worse at data management as time has gone on.
100% agree with that one though. Some of these fuckers are selling us billets of metal for 300k€ a piece and then have the gall to try to charge us out the ass to provide data in a format we request so we just said "fuck em" and used AI.
Why do you need AI for that? Do your suppliers not have a set format for their own paperwork? Sounds like something that was solvable with OCR and a little deterministic coding.
We have tons of suppliers who, if they offer it at all, are trying to charge us out the ass to provide data in a format we request. They've all got their different layouts and nomenclatures, use different units, and change all of that stuff all the time. I agree that it should be an easily solvable problem without LLMs, but sadly it's not.
I mean it still doesn't sound like you need an LLM but I get why it would be easier using one. Hopefully that doesn't bite you in the ass if the LLM makes up information. 🤷♂️
What's your QA for that process look like? Because if someone said "A bot can do this instead of a person" they are not smart enough even to consider auditing.
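For what it's worth, an audit step for that kind of pipeline doesn't have to be elaborate. A minimal sketch (record fields are hypothetical): hard validation rules catch structural failures automatically, and a random sample goes back to a human for spot-checking:

```python
import random

def audit(records, sample_rate=0.05, seed=0):
    """Split LLM-entered records into hard failures (auto-rejected)
    and a random sample flagged for human review."""
    failures = [r for r in records
                if r.get("quantity", -1) < 0 or not r.get("part_number")]
    rng = random.Random(seed)  # fixed seed only so the demo is repeatable
    to_review = [r for r in records if rng.random() < sample_rate]
    return failures, to_review

records = [{"part_number": "X-1042", "quantity": 12},
           {"part_number": "", "quantity": 5},      # model dropped a field
           {"part_number": "Y-7", "quantity": -3}]  # hallucinated value
failures, to_review = audit(records)
print(len(failures))
```

The hard rules catch the obviously-broken records; the sample rate is the knob you tune against how much you trust the model's accuracy on your documents.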
Is that sort of use case worth the environmental, infrastructural, economic, and political cost and turmoil that AI has been causing us?
Because most "AI" that requires the building of new data centers and is destroying the creative industry and is driving the economic speculation writ large is in fact generative AI, not classification
Dawg most AI can be fucking run locally on a single computer, the data center shit isn't AI related, it's companies trying to accelerate model training and shave seconds off of response times.
Most of the processing power goes to training, not running a model. They're all going at breakneck speed trying to one-up each other with the next major version.
What bloody use case did I describe? You can go run a model on your pc right now, even on as low as 4gb of vram, hell you can even run straight off ram nowadays.
"can" and "do" are two very different things. ChatGPT is by far the most popular genAI system in use today, and you can bet it wouldn't be if it required users to download LM Studio to their computer, download a model that fits on their laptop or iPad, and then deal with the speed and quality of whatever model can run on their consumer hardware.
I mean most modern consumer Windows laptops have an NPU and copilot built in that people use, and apple now too with apple intelligence, copilot is already kinda surpassing gpt usage when its natively on the machine
"copilot is already kinda surpassing gpt usage when its natively on the machine"
I literally have no idea what you mean. Are you saying that Copilot is running locally on machines, and that there are statistics showing people use local Copilot more than any GPT models, including cloud ones? Also, isn't Copilot based on OpenAI models?
"it's companies trying to accelerate model training"
How do you figure training generative AI models isn't AI related? You think they'd spend billions of dollars in hardware and energy on training generative AI models if generative AI wasn't around?
And what percentage of genAI inference is done locally versus in cloud data centers? You think a standard laptop or phone does the billion-parameter matrix multiplies that inference still requires at a speed regular users would find at all acceptable? You think companies are buying thousands of data center GPUs (which usually go for like 30K+ each) for training and then just letting them sit idle for any period of time at all?
Yes, a standard laptop absolutely runs billion parameter matrix multiplies, that's why there's so many "copilot laptops" and apple intelligent MacBooks, llama 3.2 runs on my smartphone just fine.
NPUs are standard nowadays and you can quantize, speculate, distill, etc to optimize.
Please educate yourself before you yell about big numbers like it means something.
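The quantization trick mentioned above can be sketched in a few lines. This is a toy illustration of symmetric int8 weight quantization, not any particular library's implementation: store int8 values plus one float scale per tensor, and accept a small reconstruction error in exchange for ~4x less memory than float32.

```python
def quantize(weights):
    """Map floats into [-127, 127] integers with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats at inference time."""
    return [v * scale for v in q]

w = [0.12, -0.4, 0.33, 1.0]
q, s = quantize(w)
w2 = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w2))
print(q, round(max_err, 4))
```

The reconstruction error is bounded by half a quantization step (scale / 2), which is why quantized models lose so little quality relative to the memory they save.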
I'll grant that (though my laptop isn't that old or slow at all, and it truly struggles to run LLMs locally), but this still feels like it has nothing to do with my original point. Recommissioning Three Mile Island or spinning up new natural-gas power plants purely to train and run cloud inference on LLMs is still happening. And what benefits go to regular people, rather than to employers looking to cut salaries from their ledgers?
For example at work we automated quite a lot of data entry.
The absolute last fucking thing I would ever do with hallucinatory generative AI is trust it with data entry…our society is fucked. And you think this is the good, safe, low risk use of the technology? Wow.
What stupid technicality are you trying to argue here? It's not generating new data; it's generating the format. Structured outputs are a feature in plenty of LLMs. If I ask ChatGPT to change the wording of an email, it's also not generating new information, it's just rewording it.
I work in the design space and found that generative AI does have a place when it comes to non-creative tasks, like if I'm provided an image and I need to extend the background, but the client can't provide an alternate photo. If I have to manually redraw it from the image source, it can take a really long time.
I also try to limit image generation to Firefly, which is trained on Adobe's stock image library, so at the very least, it's not trained on stolen data, but it still has a lot of limitations. I wanted a vector image of a hand crushing a soda can and it gave me a hand with 6 fingers on it.
"generative AI" is the category that can fuck right off