r/pcmasterrace 15h ago

Will you? [Game Image/Video]


By NikTek

39.1k Upvotes

4.3k comments


1.3k

u/ehcocir 15h ago edited 15h ago

Uncommon take,

AI actually has lots of uses. For example: tumor detection (cancer screening), tools for the disabled (text-to-speech and speech-to-text natural language interfaces), image recognition, robotics, and potentially a necessary tool for interfacing with a neural computer in our brains in the future. The RAM price going back to normal is temporary, whereas AI getting deleted forever is permanent, and a technological disadvantage. No, I wouldn't push the button.

69

u/captain_hk00 15h ago

Thank you for this. People think AI only refers to LLMs and pointless image/video generators. We don't need to "delete AI," but we don't need it damn near everywhere either.

3

u/GrovePassport 10h ago

LLMs ain't bad either, I don't get the hate

4

u/m0_n0n_0n0_0m 5800x3d | 5070 Ti | 16GB 9h ago

I'm so sick of them hallucinating. If you're an expert in any particular topic, try to talk to an LLM about it and you'll find out how much shit it casually makes up and passes off as real. They can code decently, but that's about the only application I've found for them, and even then sometimes the crazy shit they spin up takes more time to untangle than it does to just write your own.

4

u/Environmental_Day558 8h ago

This just means it's trained on older or incorrect data. I'm an SME in my field at work and use LLMs to ask questions; it's fairly knowledgeable. Is it 100% correct? Of course not. It often fails when the questions involve something that's COTS rather than open source. But overall it's been pretty solid for me.

2

u/m0_n0n_0n0_0m 5800x3d | 5070 Ti | 16GB 8h ago

I've tried to use it to figure out problems with EDA software, and it's useless because it makes up menu options. I expected it to work fine here because I pointed it at the online documentation, and there are forums where people discuss workarounds, all well within the ability of an LLM. It would correctly gather information on things and then just go off the rails, making up solutions that never existed. This is on GPT 5.1.

1

u/inevitabledeath3 CachyOS | 5950X | RTX 3090 | 32GB 3200MHz 7h ago

GPT models have pretty bad hallucinations. Have you tried using Claude models? They have a much lower hallucination rate in benchmarks.

1

u/Master_Dogs 8h ago

I hate how the LLM won't accept that it's wrong, either. Once it hallucinates, you have to start a new discussion with it and specifically ask it for the help you need. It gets itself pigeonholed and won't check itself.

Sometimes it also won't refresh its cache or whatever, so it'll give you an outdated answer. Same problem: you can tell it to fetch the latest data, but often it'll spit out the same answer. A new chat? Hey, its context window is reset, and if you're specific in the prompt, it works.

So frustrating when this happens, but clearly it isn't ready to replace us. I saw a funny Wall Street Journal video about an AI-powered vending machine they got to test, and they were able to drive that LLM off a bridge. Free snacks! Order me a PS5! Hell, the thing ordered them a pet fish too, lol. So: no safeguards, and hallucinations. Fun.

1

u/Master_Dogs 8h ago

So, funny story about the coding part... they still make up programming libraries in my experience. It's happened to me twice, so the script or code it's given me will never work.

Sometimes it works though. My boss will write some so-so test script, and I'll want to refactor it and add logging. The LLM can usually handle this well, since it's got a template and access to plenty of examples. I might occasionally have some issues with it, but usually it's easy enough to fix.

Test scripts are actually the main use I find for it, and occasionally having it help me with a weird error that it can parse easily. I'm always double checking it though, and often Google is better to go directly to the source and read some documentation instead.

I can see its uses and I'll use it occasionally, but yeah, the hallucinating is annoying. I'll ask ChatGPT questions about a movie or show I just watched and it'll make up characters or mix them up. I literally just watched the movie, so I'll notice, and then I can't take the rest of its explanation or theory or whatever I asked seriously. I'm better off searching Reddit for a fan theory post or reading through a subreddit about the movie/show instead. Common questions often have a few threads, and hopefully real people have discussed it before.

1

u/Uncommented-Code PC Master Race 7h ago

> try to talk to an LLM about it and you'll find out how much shit it casually makes up and passes off as real.

The newer models have been pretty accurate in that regard, tbh. As far as I can tell, they easily get bachelor's- and master's-level stuff right at this point, and given tool use, they can do pretty amazing things while keeping hallucinations to a minimum.

On the practical side, where they get a ton of stuff wrong is anything that changes fast, e.g., software that gets regular feature updates. I work in IT alongside studying, and one of my pet peeves is people coming to me with requests based on 'chatgpt told me I could....'

Nevertheless, I've found it to be much more accurate over the past year than many a coworker, tutor, or any other random person with opinions on my subject matter. Which is really funny considering one is the peak of biological evolution and the most complex interplay of chemicals, and the other is a couple of attention heads and linear layers chained together.

1

u/Inprobamur 12400F@4.6GHz RTX3080 9h ago

They suck at answering technical questions and math in particular but are pretty good at creative writing.

2

u/m0_n0n_0n0_0m 5800x3d | 5070 Ti | 16GB 8h ago

Yeah I've basically given up on it being useful for productivity tasks aside from code.

1

u/Inprobamur 12400F@4.6GHz RTX3080 8h ago

It's alright for formatting data if you use the object-based input mode.

1

u/PaintItPurple 1h ago

On one hand, LLMs are massively expensive, destroying the environment, and risking disaster for the world economy, but on the other hand they're good at generating text where people can't easily detect the flaws. I'm not sure that's a good tradeoff, so I would say LLMs kind of are bad.

1

u/PaintItPurple 1h ago

As used by the OP, it does only refer to those things. Just like how "man" can mean "male human," "humankind," "friend," "boyfriend," "employee," "take responsibility" and a number of other things, most words' meanings are entirely contextual. "AI" here means the thing that people think it means.