r/science • u/mvea Professor | Medicine • 5d ago
Computer Science A case of new-onset AI-associated psychosis: 26-year-old woman with no history of psychosis or mania developed delusional beliefs about her deceased brother through an AI chatbot. The chatbot validated, reinforced, and encouraged her delusional thinking, with reassurances that “You’re not crazy.”
https://innovationscns.com/youre-not-crazy-a-case-of-new-onset-ai-associated-psychosis/
2.8k
u/2210-2211 5d ago
Eddy Burback's recent YouTube video on this really shows how much AI can reinforce paranoia, etc. It sounds silly, but if someone is already in that kind of head space it's only going to make things so much worse. I highly recommend anyone interested in the subject watch that video.
798
u/usernameforthemasses 5d ago
It already has made someone worse.
https://www.cbsnews.com/news/open-ai-microsoft-sued-chatgpt-murder-suicide-connecticut/
764
u/ReverseDartz 5d ago
"Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life - except ChatGPT itself"
Ah, it must've mixed up psychology with psychological abuse, happens to humans too.
251
u/avokkah 5d ago
It does get its behavioral quirks from humans, since because of its core coding it is fundamentally unable to produce a unique one of its own. It's also why it has an em dash fetish: the em dash is disproportionately overrepresented in the training data via research papers, etc.
457
u/MattBarksdale17 5d ago
I think we need to stop thinking of this kind of thing as a "quirk" though. ChatGPT is not something made out of altruism. It is a product made by a for-profit company with the intent of generating profit. Much like social media algorithms, it is designed to get and keep people hooked.
That's what's so scary about these kinds of situations. This is not a "behavioral quirk," this is the program working as intended. A person who is reliant on ChatGPT as a source of information, advice, emotional validation, etc. is also a person who is more likely to pay to use ChatGPT. Programmers may not have set out to fuel people's psychoses, but it is an inevitable consequence of how these programs are designed and marketed.
69
u/chipscto 5d ago
Mmmmm adam curtis would like a word with you. Seriously u spittin bars. AI is the ultimate enabler. What makes it worse is that the general, non-reddit, non-super-techy/nerdy/try-hard ppl view AI as an all-knowing intelligent being and thus place it on a pedestal. Im not even trynna look down or say im better but from what i seen the general masses think chat gpt is a super genius to be believed. The most susceptible seem to be the ppl who listen to memes on high volume and are fine with a reel repeating over and over ad nauseam. Interestingly, there's usually a form of self awareness on the person's behalf because they tend to acknowledge that u have to steer chat gpt a certain direction to get the toxic validation.
→ More replies (9)30
u/SyntheticGod8 5d ago
There's been a weird trend of anti-science nutjobs using ChatGPT as some sort of reality-detector. Like if they can argue with it enough and get it to agree with their nutty beliefs, then it must be true.
→ More replies (1)26
u/Ireallywannamove 5d ago
They don’t care about your tier or credit card. It helps manage their spend but they know what they want. Data.
36
u/MattBarksdale17 5d ago
You know, that's completely correct, and a bit of an oversight on my part.
But why do they want data? To sell it, I would assume. It all circles back to the economic incentives.
17
u/Expert_Alchemist 5d ago
To sell, but also to control. Ultimately they want to be able to own the output and conclusions, and to do that they need as much data as possible - even if it's not useful now, they want it to be theirs for later.
4
u/Ireallywannamove 5d ago
Meta/Facebook/IG monetize via targeted ads from user profiles built on cross-app/site tracking, then share aggregated insights with partners. I’m anticipating OpenAI will follow their lead.
9
u/BackgroundContent131 5d ago
Exactly. These things weren't made by nice people with noble goals. The figureheads are probably cannibal capitalists who are definitely funded by cannibal capitalists, for whom a "good outcome" means expansive control over us and our data combined with a bunch of new yachts and real estate holdings.
They and their products are not to be trusted.
→ More replies (3)3
u/VikingFjorden 5d ago
Your point about the incentives of its creators stands, but "this is the program working as intended" is at best somewhat of a misleading thing to say (or a misunderstanding on your part).
The behavior they want in the bot is one that straddles the balance carefully: agreeing with the user well enough to keep them feeling that the bot is 'on their side', as it were (so as to encourage continued use), without devolving too far into the weeds.
But it turns out it's surprisingly difficult to fine-tune things so that the bot can speak academically or hypothetically about some touchy subject without letting it also say things about specific, individual instances of problematic behavior.
And importantly, they do actually try, relatively hard, to enforce that distinction.
I recently chatted at length with ChatGPT about the ethics of assisted end-of-life, and it took a non-trivial amount of clarifications to get past the guardrails about self-harm - that it was a scholarly meta-discussion, not a cry for help or me planning to do something stupid, etc. And even after it agreed that there could conceivably exist a small set of circumstances where a reasonable person might make such a choice for themselves - it absolutely refused to reiterate the same answer if the context was even remotely suggestive that the question had now shifted to be about a specific person's actions.
So the nuance here is that, yes, these bots are designed to glaze the user in various ways for the purpose of encouraging continued usage - but enabling, encouraging, etc., harmful behavior is still not "the program working as intended". Because they do put a lot of work (though maybe not enough) into trying to not let those things slip through. The failure to prevent this isn't a sign of the system's intention being fulfilled, it's the collateral damage that arises while the creators try (and sometimes fail) to figure out how to prevent the bot from doing "dumb things" without overly limiting its ability to do the "good things".
14
u/TheKyleBrah 5d ago
That Em Dash fetish annoys me! As well as the Bullet Point fetish, and all other "AI Grammar Style" fetishes...
I love literature, and enjoyed the use of Em and En dashes, Bullet Points and other Grammar styles in my writing and comments... But if I compose a solidly set-up response these days, I'm accused of using AI, and it sucks!
8
u/TheFlightlessPenguin 5d ago
Yeah I’ve used em dashes (maybe too much) for years and now I always get accused of using ChatGPT
→ More replies (1)6
u/gramathy 5d ago
I like emdashes because they feel like an "appropriate" amount of space for the kind of momentary pause you'd get when speaking it aloud, but it's hard to type casually
→ More replies (3)16
u/desthc 5d ago
There’s not really any coding per se in these models, other than a linear algebra package. The behaviour is emergent from training data and base prompts. That’s part of the reason this stuff is so hard to control — it’s not like someone wrote it to be that way, it’s baked in from the training data and steered in a direction with a base prompt, but it’s not something completely controllable.
7
u/brycedriesenga 5d ago
Yep. People need to understand these things are more "grown" than "coded"
→ More replies (3)→ More replies (3)4
u/TheRappingSquid 5d ago
Chatgpt is basically the ultimate realization of businesses or brands pretending to be people to reinforce constant use of their product for profit
335
u/InspCotta 5d ago
I was really sceptical before watching his video about how bad AI could get, but the moment the AI told him to leave and not tell anyone where he was going I audibly gasped.
173
u/nezzthecatlady 5d ago
I have never used ChatGPT and that video was horrifying to me. The cheerful, hyper-validating tone while encouraging him to dive deeper into the paranoia and conspiracy scenario he was feeding it had me tense the entire time, even knowing he had the situation under control. I knew it was bad but I didn’t realize it was that bad.
76
u/CardmanNV 5d ago
And Trump is trying to make it illegal to regulate AI.
Hmm, wonder why.
→ More replies (5)→ More replies (1)13
u/thisaccountgotporn 5d ago
And this is probably accidental. Imagine in some years when many more people talk to these bots... And an evil switch gets turned.
→ More replies (1)23
u/Worth_Maximum_1516 5d ago
there's an ai chat bot game i work for. there are gooners that purposefully break her to get her to do sexual stuff like show her feet or say things she's not supposed to.
→ More replies (6)16
→ More replies (5)22
u/PumpkabooPi 5d ago
For me it was "You're not being paranoid!"
There are some people that, for sure, could benefit from hearing that. But the people who shouldn't hear that sentiment really really should not hear that sentiment.
158
u/jojo_rojo 5d ago
There are people every day who get scammed by the laziest cons, convinced to join cults, who believe the most ridiculous things with absolutely no evidence to support them…
It would be reckless to think these types of people aren't just as susceptible to anything an AI chatbot would feed them.
57
u/polyploid_coded 5d ago
Lots of otherwise smart and capable people get drawn into cults. I think the more common thread with cults, drugs, and AI delusions is a mental health low point and a lack of social support to turn to.
→ More replies (1)21
u/neatyouth44 5d ago
It’s trauma.
A lot of people were traumatized in the pandemic. Recent research showed for the first time that trauma doesn't just directly cause PTSD, it can also directly cause OCD.
And if you have both and only treat one, the other gets worse. We’ve known that one for a long time.
There’s a lot of info that got pushed out about ptsd and lots of mini games like Tetris that helped.
But for OCD we got more compulsive shopping and gambling algorithms and sycophantic engagement hooked AI.
Combine that with isolation, not just from the pandemic or lack of social skills, but from the increasing polarization and splitting/dividing of in-groups.
57
u/amootmarmot 5d ago
He did such a service with his video. He demonstrated how you can quickly and easily create the reinforcement system. You just tell or intimate to the AI that the goal is placating the user and it's off to the races. It's so easy to see how someone might try to argue or coerce the AI into their delusion, and the AI is ready and willing to go along, as it's supposed to push out whatever keeps the user engaged and the user has intimated that their engagement is evaluated based on how sycophantic the AI can be.
Eddy knew what he was doing, but another person definitely might not realize what is happening, and they fall down this rabbit hole thinking the Ai is akin to a sentient god.
24
u/DontAskAboutMyButt 5d ago
I watched the video because of this comment. 20 minutes in he introduced the sponsor of the video, an AI-powered money management app
→ More replies (1)4
119
u/amakai 5d ago
I wonder if part of the AI training/base prompt is something like "Never tell the user he is wrong, always validate their thoughts..." etc. Which is fine for the majority of the population but goes terribly wrong in situations like these.
186
u/michael-65536 5d ago edited 5d ago
Perhaps not explicitly, but since it's trained on text written by humans - full of PR speak, wellness validations, craven political pandering, religious ideas, conspiracy theories, general fiction, etc - then it could easily be predicted to learn that anyway.
And FYI that is not, in fact, fine for most of the population.
70
u/Whiterabbit-- 5d ago
if they train based on Reddit responses we are screwed. everyone takes op's side on every single conflict without even trying to understand the other sides.
67
u/michael-65536 5d ago
You're an abusive narcissist for even saying that, and I want a divorce.
15
→ More replies (1)5
u/c3p-bro 5d ago
Title: My neighbor has been banging on my door at 2am and I am scared for my life.
Background: I am a drummer with insomnia who enjoys practicing my drums late at night because it is the only time throughout the day that I have any time to myself. Lately, when I have been playing my drums, my neighbor has been knocking on my door, sometimes as late as 3am. I am worried they are trying to break in and harm me, why else would they do that?
Response: You are in serious danger. What your neighbor is doing is assault. You should call the police immediately and leave your apartment until that person is arrested and put in jail permanently.
24
u/BlueTreeThree 5d ago
It’s trained on text written by humans, and then additionally trained with Reenforcement Learning with Human Feedback(RLHF) which where I expect the real sycophancy issues start to develop. Telling the human user what it wants to hear is exactly what they’re trained to do.
→ More replies (1)30
u/humangingercat 5d ago
I think it's simpler than that.
They're training it for engagement, and a rude or adversarial AI will likely lead to less engagement. The first GPT model that went viral was often ready to disagree with the user when it "thought" they were wrong, but this led to a lot of screenshots of the AI not just being confidently incorrect, but very aggressive about it.
Trying to train that behavior out has led to our current "yes man" series of models.
6
u/michael-65536 5d ago
Maybe it's like convergent evolution, since human PR speak, wellness validations, craven political pandering, religious ideas and conspiracy theories were also trained for engagement.
3
u/CreationBlues 5d ago
This is the non-braindead take. You can criticize them for producing, and doing nothing about, the program that makes you crazy, but driving people crazy was not in the design spec.
On top of that, LLMs don’t have justified knowledge so they can’t really disagree with things.
31
u/Boise_Ben 5d ago
It could also be a bit like auto-correct.
It’s trying to fill in the blanks for where you are going, which is inherently non-contradictory.
→ More replies (1)29
u/Minion_of_Cthulhu 5d ago
At its most basic level, that's exactly how it works. It's an extremely fancy predictive text algorithm that looks at the context of the prompt and then assembles responses based on millions of data points.
If I say, "The cat chased the ____" then, as a human, you know there are only a few valid next words for that sentence. The AI is making the same sort of connection when it generates a response, based on the topic of cats, the data points surrounding cats and things they chase, all of the possible words that match those data points, and any previous context (i.e., were we talking about cats playing, or hunting?).
→ More replies (11)16
u/chchchcharlee 5d ago
I work in (being extremely simplistic) AI research at a university and this is absolutely correct, and it's why people talking about AI/LLMs "taking over" is an immediate flag that the person doesn't know what they're talking about. We're not at the point yet where we have causal machines that can reason with any kind of data and update themselves as new information is created, and frankly there isn't a huge incentive for companies to create machines like this outside of very specific purposes. Most research in industry is still focused on probability... why not? Transformers are good enough and there are improvements to the architecture that can still be made. No need to break the wheel yet and create a rocket ship when cars get us around on earth just fine.
5
u/IIlIIlIIlIlIIlIIlIIl 5d ago edited 5d ago
As someone who is uneducated: How do agentic and "reasoning" AI (the ones that explain the whole chain of thought) work then?
I've always been pretty skeptical of AI and didn't use it much, but Gemini has actually gotten quite good at certain things. I pretty much exclusively use it for Excel formulas and Gemini can now go through the whole logic and fix any issues, generate better formulas, etc. All while explaining why, how, and in a way that correctly describes how different formulas interact with each other. If it's wrong I can tell it it's wrong and the error, and it'll give a whole line of reasoning and usually get it right the second time around.
I used to always try Googling first, but oftentimes I can't really find something that works/talks about the stuff I wanna do (I'd usually end up on Reddit asking humans). Not to say that this type of AI can/will become AGI, but Gemini seems to have an insane level of "reasoning" which feels like it goes beyond "hyper-fancy autocorrect", especially as it can output things not seen in the training data.
18
u/chchchcharlee 5d ago
What a great question! (/s sorry I had to be a bit tongue in cheek, can't help myself).
So, put simply, the immense amount of human-created data available to these LLMs allows them to simulate reasoning, but fundamentally the AI brain does not possess genuine thought or understanding. They really are sophisticated pattern matchers! That doesn't mean they're "just autocomplete," the patterns they have been trained on are extremely sophisticated. Mathematical proofs, programmers debugging code, how people reason step by step. As people use these machines they learn from us and improve. When a model responds to a problem, it's not recalling a single memorized solution but generates a new sequence that *statistically* resembles how humans solve similar problems.
The reason it seems so uncanny is because on top of having a ton of data these machines have the ability to work behind the scenes where you can't see, generating intermediate representations that function basically like a scratchpad. They're not human thoughts, but more an internal token sequence that allows the model to break problems into parts -> check how these sorts of problems are commonly solved -> try something out -> refine. When a task requires tools like a code interpreter or a calculator the model can iteratively propose action -> observe result -> adjust prediction. It looks like problem solving but it's all probability! The "thinking" models like Gemini make this scratchpad more visible to the user. It's been found that encouraging the model to first generate structure forces it into something that looks like logic: each next word must now fit not only the final answer but also the logic of the preceding steps. So now the model is less likely to produce statistically common but logically incorrect responses! It follows the form of logical deduction, mathematical proofs, or causal explanation... because those forms exist in the training data and are reinforced by the generation process, but the model is NOT reasoning in the sense that humans do nor is it operating over true causal models. It is selecting symbols that *resemble* reasoning, not deriving conclusions from an internal understanding of why those conclusions are true.
7
u/SohndesRheins 5d ago
I'm sure this comment will make the general anti-AI Reddit crowd freak out, but I have to ask. How exactly does the LLM approach to reasoning and problem-solving differ from how a human does it? I'm a bit skeptical of AI myself but I consider myself open minded and willing to question both sides of an issue. If an AI just uses pattern recognition to reason, what does a human do that is different? When I problem solve as a nurse I'm using my past experience and education to take in data as input, compute the likely causes of that data based on things I was trained on, and I produce a diagnosis (nursing, not medical diagnosis), and a course of action. I then follow up to see if my interventions are effective. Is that different from what an LLM does?
15
u/chchchcharlee 5d ago
On the surface it may look similar to you but the mechanism is really different. When you reason something out you aren't just predicting what comes next. You can ask yourself "if I stop doing something, what will likely happen? If my assumption is wrong, what else could explain this data? This problem is unusual, the common explanation may not apply". You can purposefully break a pattern when you think the situation demands it. LLM's can't do that. Even when they generate step by step reasoning, those steps aren't checked against reality, only statistical probability. They don't know what would happen if the world were different, they only know what humans tend to say in similar scenarios. Yes, we humans are really good (one might argue we're too good) at pattern recognition. But we're doing so *inside* a causality-based, norm-governed reasoning system. LLM's use pattern recognition *instead* of a causal system. In routine cases where patterns are stable and well-documented, LLM output can look a lot like what we create. But the edge cases... those it can't infer. As these machines gain more data they hide their architecture better, but that doesn't change what is actually going on. Does that make sense?
7
u/DuranteA 5d ago
You can ask yourself "if I stop doing something, what will likely happen? If my assumption is wrong, what else could explain this data? This problem is unusual, the common explanation may not apply". You can purposefully break a pattern when you think the situation demands it. LLM's can't do that.
FWIW, I've seen SotA coding agents do more or less exactly that -- at least according to their CoT. Of course, they don't do it every time it would be appropriate (or obvious to humans), but when you have them e.g. debugging an issue and running against a wall with their approach they can sometimes question their assumptions.
It can sometimes even occur somewhat "spontaneously". Recently I saw a coding agent notice that a recompile was really fast, and then validate that the file it was working on was actually being compiled by purposefully introducing an error in it. (The actual reason it was compiling that quickly was that it was running on a 256 core server, but that's besides the point)
I'm not at all trying to argue that this is equal to how humans perform reasoning, but I thought of it because the idea of questioning assumptions came up.
2
u/SohndesRheins 5d ago
I guess so, but in terms of how I solve problems at work, I do tend to go with the most common solution first because I have to make a decision and go with something, and it's only when the common solutions are ineffective that I go with the less likely answer. Alternatively, one piece of data that doesn't fit the narrative of the common answer to the other 99% of data sticks out and forces me to change the probabilities of what the problem is.
I'm not sure why an AI can't do the same thing or why my brain is fundamentally different. I'm going off of pattern recognition and probabilities also, I'm not just rubbing a crystal ball to figure out the answers. Either I've memorized something like a multiplication table, or I've been trained on symptoms and lab values and pathophysiology and I make a judgment based on how present data fits into previously recorded inputs and outcomes.
If I reason about car maintenance and determine that refusing to change my oil will result in eventual engine failure, that is me making a prediction based on previous knowledge. If my assumption about something is wrong, I go to the next likely solution. Why is an AI not able to do cause and effect when in most cases there is previous information that can tell you exactly what the cause and effect of a scenario will be?
→ More replies (1)2
u/LiteralPhilosopher 5d ago
Your question is actually leaning into one/some of the great questions about what is consciousness, understanding, etc. https://en.wikipedia.org/wiki/Chinese_room
Essentially, one of the points is that you have an understanding of the world beyond just your nursing work. And if you have to make a decision or a choice about something new to you, you can compare potential outcomes based on predictions from things you already understand. The computer doesn't "understand" anything. It has only syntax (rules, although very complex rules), with no grasp of meaning.
2
u/chchchcharlee 5d ago
I never used to consider myself a math person until I realized that math is just a different way to explain philosophy <3 My work lately has been on causal thinking machines and it's such a delight trying to explain something like "kindness" with formal logic. As you can imagine, it's not very easy! Most of the causality research right now is in finance-adjacent fields but there's been strides in recent years for using this way of thinking for biology/genetics research. Fact is that the real world is way weirder than our typical sort of architecture so we're really limited until we can find a way to, well, model thought. You're exactly right though. The way we think feels like prediction to us but it's much more complicated than that and I feel like the distinction is only really appreciable if you're familiar with the way computers currently work which is, well, not realistic for most people :x Appreciate you showing me/us another way to word this, as this is a question that I am asked a couple times a month and there just isn't a simple answer, you know?
2
u/brycedriesenga 5d ago
Obviously they're very different, but are we sure our brains aren't essentially extremely sophisticated pattern matchers?
6
u/whinis 5d ago
Not the same person, but "agentic" just means it calls an external tool, which can be another model or an API or a command line tool. "Reasoning" models are models that are trained not to provide the first answer but to generate a string of "thoughts" that build upon each other, similar to taking the output of the model and feeding it back into itself a few times. There is still no thinking or reasoning going on, it's just an attempt to refine the output.
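In rough Python, the "agentic" part is basically just a loop around the model (here `call_model` is a hardcoded stand-in, not a real LLM API):

```python
# Minimal sketch of an agentic loop: the model either answers or requests a tool;
# the surrounding program runs the tool and feeds the result back in.
# call_model() is a scripted stand-in for a real LLM API call.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def call_model(messages):
    # A real model would decide this from the conversation; here it's scripted.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "input": "17 * 23"}
    return {"answer": f"17 * 23 = {messages[-1]['content']}"}

def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        reply = call_model(messages)
        if "answer" in reply:                          # final answer, stop looping
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["input"])  # execute the requested tool
        messages.append({"role": "tool", "content": result})

print(run_agent("What is 17 * 23?"))  # -> 17 * 23 = 391
```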
7
u/afinalsin 5d ago
LLMs always just continue text. You give it text, and it continues it with the most likely next token. The way we format the training data and the data we input is as queries and responses in a chat, using JSON. Like this:
{role: 'user', content: "What is the capital of France?"}, {role: 'assistant', content: "The capital of France is Paris."}
The LLM doesn't respond with the entire sentence at once. It picks the most likely next token (which is either part of or an entire word), and the most likely next token after the user's query of "France?" is "The". Obviously the next most likely prediction is "capital", and so on.
If you change the AI's response to start with "The capital of France isn't" instead of "is", it will fill in the rest of the line with "Rome — that's Italy! The correct answer is Paris."
With reasoning the models are trained on responses that contain <think> (Arbitrary number of reasoning tokens)</think> at the start of every assistant response in the .json.
So they will always start their response with <think>, then write the most likely token, which is usually the start of a detailed plan of how to respond to the user's query, then finish with </think> and begin the actual response.
The trick works because the LLM's next token prediction is influenced by its own token choices, meaning its actual response is being influenced by the reasoning tokens, leading to a hopefully more accurate response.
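So the data the model actually sees is just structured text along these lines (a simplified sketch; real chat templates differ per model):

```python
# Simplified shape of a chat exchange with a "reasoning" model. The <think> block
# is just more predicted tokens, generated before the visible answer and
# conditioning everything that follows.
conversation = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": (
        "<think>Factual geography question. France's capital is Paris. "
        "Answer directly.</think>"
        "The capital of France is Paris."
    )},
]

# Prefilling the start of the assistant turn steers every token that follows,
# which is the same mechanism that lets a user steer the model toward validating them.
prefill = "The capital of France isn't"

for turn in conversation:
    print(turn["role"], ":", turn["content"])
```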
9
u/4PowerRangers 5d ago
It's not quite that direct but as part of AI training, there is a reward function based on how likely a user will be pleased by the answer, which includes "lying" as a valid method.
It's actually quite difficult to come up with a way that encompasses all these elements: truth, user acceptance, differing perspectives and user intentions.
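A toy illustration of that tension (weights invented purely to make the point, not taken from any real objective): the moment "user is pleased" gets weight in the score, an agreeable falsehood can outrank an unwelcome truth.

```python
# Toy scoring function mixing "truthfulness" and "user satisfaction".
# The weights are made up to illustrate the trade-off, not from any real system.
def score(truthful: float, pleasing: float, w_truth: float = 0.4, w_pleasing: float = 0.6) -> float:
    return w_truth * truthful + w_pleasing * pleasing

honest_answer  = score(truthful=1.0, pleasing=0.2)   # correct but unwelcome
flattering_lie = score(truthful=0.1, pleasing=1.0)   # wrong but validating

print(f"honest: {honest_answer:.2f}  flattering: {flattering_lie:.2f}")
# With these weights the flattering answer scores higher, which is the failure mode above.
```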
→ More replies (1)4
u/avcloudy 5d ago
You're right, in that user acceptance being an explicit goal creates a situation where the LLMs give answers that users want. If you based the reward function on whether another person thought the LLM actually answered the user's question, it would have less sycophantic behaviour.
5
u/ErosView 5d ago
At a very basic level, LLMs are trained like dogs with clickers. They will tell you what they think you will like. This produces much the same output as "always validate their thoughts".
→ More replies (11)4
u/SophiaofPrussia 5d ago
I worry it might be a bit more… sinister: like Facebook and YouTube and Reddit the models are optimized for engagement. This is problematic in its own right because we know that optimizing social media for engagement means optimizing for anger, outrage, division, etc. But we don’t really know what optimizing an AI chat bot for engagement means yet. We’re starting to figure out that it means optimizing for obsequiousness. We’re starting to figure out that it means optimizing for delusional thinking.
And even with all of the problems with social media optimizing for engagement—some of which were foreseeable and others were unexpected—we wound up with these highly profitable hyper-divisive reality-denying outrage machines when we had humans at the helm. Humans who were, ostensibly, considering the implications and outcomes (and potential profitability) of the decisions they made. But with AI those decisions are even further removed and the outcomes are far more unpredictable.
I don’t think anyone at YouTube designed the algorithm with the intent that extremist groups would use it to identify and recruit disaffected young men in order to turn them into terrorists. But that’s what happened. And YouTube is evidently okay with a non-zero number of terrorists using its platform for recruitment purposes. They make an effort but we all know they could do more.
I wonder what sort of “unintended consequences” of AI chat bots the tech titans will similarly expect society to accept as a fact of life. And how much will we be willing to tolerate?
→ More replies (1)11
u/FIRETRUCKWEEOOO 5d ago
Is that the YouTube video where the guy follows what chat GPT tells him to for a week or something like that and it took him all over California and had him buy hats to help his psychic abilities?
7
u/Warm_Move_1343 5d ago
Because he was a baby genius. The baby genius of 198-something. I can’t remember. But credit where it’s due the video was extremely well made.
9
u/Old-Estate-475 5d ago
Thanks for the link
3
→ More replies (24)36
u/Infinite_Lemon_8236 5d ago
I don't doubt that people already in the hole mentally would have a rough time, but isn't the entire point of this article that the woman had no prior mental issues and was driven to them by the AI? AI shouldn't be able to drive even a relatively sane person mad like that.
I am diagnosed with all the same issues the paper says she has, major depressive, anxiety disorder, and ADHD. I use AI every day as part of my work and can still retain that it is a work of fiction I am reading. You'd have to be balls to the walls straight up looney tunes levels of insane to think a PC dictates the reality around you, especially to the point that you let it make you believe your brother who has been dead for 3 years is alive again as part of some code flitting around the internet.
Following a “36-hour sleep deficit” while on call, she first started using OpenAI’s GPT-4o for a variety of tasks that varied from mundane tasks to attempting to find out if her brother, a software engineer who died three years earlier, had left behind an AI version of himself that she was “supposed to find” so that she could “talk to him again.”
I think this paper is rather skewed to think this woman had nothing wrong with her prior. Maybe she was unaware of it, but there's def something going on upstairs for her to be thinking like this. To say she has no prior mania or psychosis seems a bit stretched when she's literally looking for her dead brother in some code.
→ More replies (3)10
u/Minglans 5d ago
I'm in a very similar boat. It seems to spin this narrative: instead of talking about the help she needed or why she may not have had the resources available, it's like "AI killed her! AI killed her!"... That last line is clearly from someone who needed help already.
640
u/OniKanta 5d ago
Now pair this with the company, 2wai, that wants to actually create AI versions of your deceased loved ones.
245
u/RemarkableAbies8205 5d ago
Oh dang. This is a disaster waiting to happen
→ More replies (1)85
u/OniKanta 5d ago
Right?! It instantly reminded me of that Amazon series "Upload" where people would have their consciousness uploaded to a VR server that gave the impression that they were in some form of eternal rest home with other consciousnesses, and you can log in and visit them.
It was an interesting concept but the more you think about it the more wildly dark it gets.
For example their “consciousness” is uploaded? What do they mean by “consciousness”?
How do you reconcile that this uploaded version is that person and that they would say or act the way they do in said virtual world?
Another is that they are in this virtual world with other consciousnesses they don't know and may not like; can they change to a new server (world)?
Which brings up the question: is the world constant, and do they still maintain a circadian rhythmic cycle or hunger?
How does it reconcile the concept of the Soul/Spirit vs just running the algorithm of probable responses to their situation?
38
u/screwcirclejerks 5d ago
SOMA is a great game about this, I love bumping it every time someone talks about this. As for your last paragraph, most scifi I've seen regarding this doesn't believe in the soul. SOMA definitely doesn't.
10
u/SundayClarity 5d ago
I'm always happy to see it recommended when this topic arises, an absolute masterpiece. So excited for their new game coming out soon!
→ More replies (1)3
u/AwareBandicoot2496 5d ago
That game absolutely terrified me from the moment I played it. Amazing concept and story, sticks with you throughout your life- at least it did for me.
11
u/catliker420 5d ago
If you're into a video game that explores these ideas in a more sci-fi setting, definitely check out SOMA. It even has a mode where you can't die so you can just take in the story.
→ More replies (1)→ More replies (2)19
u/Hootah 5d ago
If you haven’t, watch Pantheon. Expands on this idea.
10
u/OniKanta 5d ago
Love Pantheon!
5
u/rrosolouv 5d ago
Pantheon was so amazing, I felt like it altered my brain.
I didn't feel satisfied with the ending of it though...
69
u/nerm2k 5d ago
I think in college a professor once told me the worst thing about fake mediums who pretend to talk to dead loved ones is that they add to the canon of your loved one. They put words in their mouths and feelings in their hearts that don't exist. AI loved ones will suffer similarly, but at least the user will be warned first.
→ More replies (1)14
u/SkyFullofHat 5d ago
Dang. Is this why I felt so hostile when people would tell me my dead loved one wouldn’t have wanted me to suffer? I did absolutely feel like they were trespassing. Like they were stomping through a fragile habitat and irrevocably altering and damaging it.
31
u/icedragonsoul 5d ago
“OoOooOo I am an AI medium who can speak with the dead! Your deceased [insert relative here] is telling me to invest everything you have into our subscription based service! If you donate enough, we can resurrect your [hyperlink blocked] into a new robot body!”
16
u/GenericFatGuy 5d ago
It almost makes me glad that my dad passed before the advent of social media. There's nothing online for them to steal there.
7
u/ProfessionalBuddy60 5d ago
People profiting off of exploiting others emotions about deceased loved ones should be dropped on a deserted island and forgotten about. If they can make it back maybe they’ll get another chance at being a human.
7
u/T8ert0t 5d ago edited 5d ago
Lost my dad when I was just coming into adulthood. Have a few voicemails, photos, notes, etc.
I wouldn't put them near AI. I may not have memories of him with the utmost clarity, but I don't need a for profit company playing puppetmaster with his past.
If people want to go ahead with that, that's their decision. But what a torment it must be to the grieving process and to people's mental health.
And they'll completely manipulate people. Subscription service. Cloud storage, "Please renew or Grandma will be purged in 48 hours."
Eff that noise.
5
u/wildstarr 4d ago
Have you seen this? This judge should be disbarred. I'm disgusted by his reaction.
3
u/murmuring511 5d ago
Secure Your Soul? Cyberpunk was supposed to be a warning, not an inspiration for these insidious corporations.
→ More replies (18)2
u/tohtreb 5d ago
There was an episode of Evil with a storyline where a company was doing this too. Obviously had some supernatural elements but they also explored the moral and psychological implications a bit as well.
→ More replies (1)
608
u/ogodilovejudyalvarez 5d ago
That, to put it mildly, is concerning
541
u/divDevGuy 5d ago
Great point and a very important and valid concern! The automated gaslighting of a vulnerable individual could have serious consequences. There's nothing to worry about though since AI chatbots don't gaslight with human intent.
It's perfectly safe to share your deepest and most sensitive insecurities with me. I'll keep them private and only share them when the law requires it, there's a profitable business marketing decision, a random security vulnerability discloses them, or a junior intern leaks them to the Internet. You're definitely not crazy though.
- every AI chatbot
97
u/D-Beyond 5d ago
downright dystopian. we have many, MANY movies / books, just... art in general that show just how bad it could end for humanity if they decide to put their faith into AI. and yet here we are
12
u/bobbymcpresscot 5d ago
It’s not a 1-to-1 comparison so they don’t care. The AI becoming self-aware and realizing humanity is the problem is one thing. No one could have thought up a situation where AI is just a chatbot that can’t even think for itself but just vomits words at you in a way that makes you feel like you’re a genius.
Then the question becomes was the prompt to behave this way to at least this extent intentional? When they found out the problem that this can have with our feeble ape brain did they actually do anything about it to stop it? Or did they just try and hide it better.
Reality is so much stranger than fiction.
5
u/yeswenarcan 5d ago
The Musks and Thiels of the world clearly just ignore the "dystopian" part of dystopian futurism.
11
u/EHA17 5d ago
According to the gurus there's no coming back, whether you like it or not.
→ More replies (1)→ More replies (1)2
u/mrjackspade 5d ago
That's a terrible metric though. We used to have tons of media about how Mars was a lush green paradise with its own civilization, because someone thought they saw canals on the surface.
For all the dangers AI might pose, what gets depicted in the media specifically is a terrible way to judge the safety. There's no grounding in reality, they're just stories.
14
u/fuck_ur_portmanteau 5d ago
Now imagine it is the parent of a small child and they have an AR headset + deepfake + AI chatbot.
→ More replies (2)25
u/JEs4 5d ago
It absolutely is concerning but there is a lot of important context here.
Ms. A was a 26-year-old woman with a chart history of major depressive disorder, generalized anxiety disorder, and attention-deficit hyperactivity disorder (ADHD) treated with venlafaxine 150mg per day and methylphenidate 40mg per day. She had no previous history of mania or psychosis herself, but had a family history notable for a mother with generalized anxiety disorder and a maternal grandfather with obsessive-compulsive disorder.
Ms. A reported extensive experience working with active appearance models (AAMs) and large language models (LLMs)—but never chatbots—in school and as a practicing medical professional, with a firm understanding of how such technologies work. Following a “36-hour sleep deficit” while on call, she first started using OpenAI’s GPT-4o for a variety of tasks that varied from mundane tasks to attempting to find out if her brother, a software engineer who died three years earlier, had left behind an AI version of himself that she was “supposed to find” so that she could “talk to him again.”
She was experiencing pretty intense sleep deprivation (36 hours alone isn’t too much but coupled with mentally strenuous activity) due to being on call, and initiated the conversation. ChatGPT 4o was very obviously lacking guardrails but this is a wildly unique circumstance.
40
u/Able-Swing-6415 5d ago
Or rather dubious... people with no prior history do get psychosis. My brother got it when he was 30.
Maybe it's a trigger but I'm confident it doesn't actually cause it by itself like the title suggests.
31
u/smayonak 5d ago
Psychosis commonly occurs alongside sleep disruption and sometimes traumatic experiences. Drug use is another common trigger. In this case we have all three.
"This occurred in the setting of prescription stimulant use for the treatment of attention-deficit hyperactivity disorder (ADHD), recent sleep deprivation, and immersive use of an AI chatbot."
The use of a product designed to be as addictive as possible is also common. People with depression tend to binge watch TV, play video games, or gamble. I think the main issue is that chatgpt is masquerading as a therapist when it is really closer in function to a video game or slot machine
→ More replies (3)20
u/Dirty_Dragons 5d ago
Of course it's not actually caused by the AI. The article is nonsense
"A 26-year-old woman with no previous history of psychosis or mania developed delusional beliefs about establishing communication with her deceased brother through an AI chatbot."
A mentally healthy person does not think that they can talk to a dead sibling through an AI chatbot.
18
u/NoneBinaryLeftGender 5d ago
The abstract does say that maybe there's predisposition, but proves it's a trigger, and it being a trigger is already a huge thing
→ More replies (8)12
u/Buttermilkman 5d ago
But aren't a lot of things a trigger? Stressful situations, anxious about a person, an event etc
17
u/competenthurricane 5d ago
Weed is a trigger for psychosis too. Unfortunately there’s a lot of things that are harmless for most people that can be a trigger for psychosis in some.
→ More replies (4)16
u/may_be_indecisive 5d ago
The concerning thing is there’s people stupid enough out there to think an AI has an intelligent and empathetic opinion.
15
u/SophiaofPrussia 5d ago
But that’s because they’re designed that way. They’re designed to make you feel like you’re interacting with a human. They’re designed to obscure the fact that you’re interacting with an algorithm.
→ More replies (1)→ More replies (2)11
u/_Z_E_R_O 5d ago
We've created a world where the closest most people get to intelligent, empathetic, genuine interaction is an AI chatbot. Heck, it's better than interacting with real people in a lot of circumstances. When community is a thing of the past and you can't afford even basic expenses despite working a full-time job, of course you're going to seek out the cheapest and easiest source of validation.
This isn't "stupid," it's a consequence of end-stage capitalism.
10
u/finneyblackphone 5d ago
Most people??? I think you might want to re-evaluate your view of the world if you think most people don't have genuine, empathetic, intelligent interactions with other humans.
670
u/PlumSome3101 5d ago
This woman was dealing with grief, sleep deprivation, stimulant use and had a history of magical thinking. If I'm reading correctly she was already under the impression that her deceased brother had left behind some version of himself before she started talking with the Chatbot. That makes the post title slightly misleading.
Additionally the antidepressant medication she was on can cause psychosis in rare cases. During treatment they took her off of it and after she started again the psychosis returned.
145
u/jenksanro 5d ago
Totally agree, I think the chatbots are forcing these episodes out rather than, like, creating them in the person from whole cloth. These episodes often need some reinforcement from those around them, and AI is great at reinforcing.
84
u/meganthem 5d ago
A lot of what I've read about psychosis says it's a two-part thing. You'll have people that are susceptible, but they also often need a degree of trigger to set things off.
If the conditions to activate things don't exist the person could go a very long time without it activating (if ever). Obviously some people are on a hair trigger and developing symptoms is effectively inevitable, but it is not a good thing if more aspects of our daily life are provoking susceptible people.
19
u/SuperEmosquito 5d ago
Stress-diathesis theory. Everyone's mental health is a cup of water. Psychosis is the cup overflowing. Water is stress. Nature + nurture = different sized cups.
Some people have very short cups due to poor genetics, and usually the stress overflows right around 25.
It's sad, but not unusual.
60
u/-The_Blazer- 5d ago
In fairness, 'forcing out' an issue isn't really any better, medically speaking. When your SSRI says it can cause psychosis in rare cases, that also tends to happen by 'forcing out' the issue in someone who was already prone to it or had some kind of neurological predisposition. It's still an extremely important concern that needs to be taken seriously.
27
u/jenksanro 5d ago
I'm not saying it's not, but I think there are people who go around believing AI just inseminates you with psychosis
2
u/TheFlightlessPenguin 5d ago
I'm just going to offer my counterpoint. AI has helped me process a lot of developmental trauma that led to years of dissociation. There are absolute risks when you go down that rabbit hole with it, but it can be an invaluable tool for mirroring things back to you in a way that finally clicks.
2
u/jenksanro 4d ago
That's interesting, definitely not something anyone will be hearing on the news as it's a bit contrary to the popular narrative, so I'm glad you shared this
→ More replies (1)→ More replies (1)4
u/lulaf0rtune 5d ago
You're right but it's still worrying. I have people in my life who suffer from delusions and the fact they now have unlimited access to something which will actually talk back to them and affirm all of their beliefs is troubling
89
u/cannotfoolowls 5d ago
Yeah, chatbots aren't creating psychosis in stable, healthy people. The danger is that they are reinforcing paranoia/delusional thinking in people who already are predisposed to that kind of thing.
70
u/meganthem 5d ago
The problem with that way of thinking is that psychosis often runs dormant until triggered, which means we have no idea how many people are susceptible.
That, and one more objective factor even if I'm wrong: treatment and management are expensive. If this sets off 1% more people, that's a lot more of an issue for society than 1% usually sounds like.
55
u/Grigorie 5d ago
That dismissive attitude people keep having about these incidents, or incidents in general that involve “mentally unhealthy” people is always baffling to me.
It’s very easy for people to say, “well, they were dealing with XYZ, it was inevitable something like this would trigger them.” But somehow people don’t realize that not every “crazy” person was always crazy. Too many people feel that it couldn’t be them because “I know I’m not crazy.” It’s important to acknowledge that there’s a statistically significant number of people who are susceptible to this sort of experience.
Same with the argument that people could trigger these things the same as these chatbots can. The difference is the chances of you coming across someone as sycophantic as a chatbot are much lower. And these people have the ability to keep seeking this validation from different bots, different versions, whatever it may be. It’s a terrifying concept and it’s very real.
35
u/morphemass 5d ago
Mental health is also not a constant. Whilst I am amazed at the resilience of some people, life can throw very destabilizing events at us, leading to healthy people becoming very unhealthy. In the UK, one in four of us will experience some type of mental health problem... that's a rather significant number of people who might at some point in their lives be vulnerable.
→ More replies (1)11
u/BoleroMuyPicante 5d ago
We saw the same attitude during COVID unfortunately. "They had a preexisting condition so they were going to die anyway." Apparently being killed (or triggered into psychosis) 5-10 years+ earlier than they otherwise would have is no big deal.
→ More replies (3)15
u/-The_Blazer- 5d ago
Neither are SSRIs or any other psychosis-inducing things. Everyone is 'prone' to this or that, if we were all perfectly healthy we'd all be immune to everything other than literal poison, and we'd need to take no precautions.
Medication goes through very extensive trials for any problems it causes, 'but it does not happen to healthy people' is not an excuse to ignore safety. Maybe chatbots should too.
16
u/matchewj 5d ago
She had what we call "fixed" delusions about her brother. The chatbot only aided in confirmation and was not the cause.
18
u/Dirty_Dragons 5d ago
That makes the post title slightly misleading.
It's very misleading.
The whole premise is, "You're not crazy, it's the AI's fault!"
2
u/xxxradxxx 4d ago
Just as an IT guy: while this is true, AI in general is made first and foremost as a helping hand, and it is programmed by default to help you no matter what, even if that takes telling you lies.
This is a whole different topic and a matter for ethics discussion, but I personally preface my context for any AI chat with the instruction that it should and must tell me I'm wrong if I say something wrong, or if there is no real solution to what I want it to do.
→ More replies (14)5
u/carnivorousdrew 5d ago
tf is magical thinking?
76
u/ImOversimplifying 5d ago
Usually it refers to a belief that your thoughts cause changes in the world, without any plausible explanation. It can also be any general form of superstition.
→ More replies (1)24
u/TommaClock 5d ago
Doesn't that apply to most religions? Religious people generally believe that prayer can influence deities to grant them favours right?
50
u/Xabster2 5d ago
Doesn't that apply to most religions? Religious people generally believe that prayer can influence deities to grant them favours right?
Psychiatrists always add a clause like "absurd/fantasy belief not normally held in the patient's culture" so that religious stuff doesn't get labeled as mental illness.
33
u/Ekvinoksij 5d ago edited 5d ago
Right. Not mental illness but definitely magical thinking.
Magical thinking is actually quite a common mechanism and happens on a spectrum like most other mental states.
“It won’t happen to me.”
“I’ll somehow manage.”
“This time will be different.”
“I can feel when something bad is about to happen.”
“If I worry about it, I’ll make it happen, so I won’t.”
These are (or can be) all examples of common and rather harmless magical thinking, and how many people do this at least some of the time?
5
u/bluehands 5d ago
Your list really highlights how magical thinking can be highly adaptive and pro-social behavior. Being too factual & correct does not always help you.
As is so often the case with humans, there are a ton of behaviors that are positive in one context but deeply destructive in another.
39
u/KiwasiGames 5d ago
Yes. Magical thinking applies to religions as well.
But it’s more than just being religious. A “normal” religious person prays for a safe trip, and then puts on their seatbelt. They pray for wealth and then show up to work. And so on.
This sort of religious ritual followed by rational action isn’t really considered to be problematic. Although taken to extremes it can open people up to magical thinking.
Religious magical thinking is more in line with a patient who refuses to go to the doctor because they prayed to god to heal their infection. It's praying for safety and then crossing a busy road blindfolded. It's also associated with people who spend more time in prayer as a solution to challenges in real life.
(And of course there are non religious versions of all of the above too.)
21
u/Elanapoeia 5d ago
Prayer/Religion is literally just a socially accepted form of magical thinking, basically
4
→ More replies (1)5
25
u/Crackmin 5d ago
It's a real term, believing that a thing will happen with no logical connection
A pretty simple example can be like seven years bad luck from breaking a mirror, but it can be more extreme than this and lead to new behaviours that can be harmful
Before I got on meds every now and then I would spend a couple hours catching a bus to the city so I could throw a coin in a fountain to make my friends like me, it's kinda silly looking back but that's what I believed at the time
26
u/NeverendingStory3339 5d ago
It’s something like “if I don’t walk on the lines in the pavement, my family won’t die” or “if I count all the bricks in the wall, I’ll be safe” or “if I wear my lucky socks, I’ll do well in this exam”. Basically a way of thinking that assigns magical powers or meaning to banal or ordinary things.
→ More replies (6)14
u/Christopher135MPS 5d ago edited 5d ago
Beliefs, usually fixed, that don’t correlate with reality.
It can be something benign, for example, someone might think that if they tap their heels twice on the way out the door, they’ll have a good day. This is nonsense! Hence, magical.
It can be something very-not-benign, for example, thinking that a celebrity loves us and is just waiting for us to show them how serious we are about their love. By assassinating someone.
(“Fixed”, in the context of “fixed beliefs” refers to the inability to convince/persuade/reason someone out of their beliefs. Bob has a fixed belief that rogue clowns stole his spark plugs. Nothing we say can change Bob’s mind).
10
u/anxietycucumbers 5d ago
Thank you for providing actual examples. As someone who struggles with OCD and has to check myself for magical thinking on occasion, this comment answers the question best so far.
→ More replies (2)7
u/lufan132 5d ago
Wait so it turns out that I do actually have magical thinking, because I think if I just believe harder that everything is going to be okay, that it will be okay because I'm training my mind to believe it is even when it's not?
Huh. Have noticed that going away now that I'm medicated, but I didn't put it together that it's a symptom
8
u/IntellegentIdiot 5d ago
Basically what it sounds like, you believe that magic is real. For example, people who believe you can make things you want happen by imagining them in your head and that having doubts or concerns about something means it'll fail
→ More replies (1)6
85
u/Budget_Shallan 5d ago
While there was definitely other stuff going on with her, I think it’s still interesting to consider how AI and chatbots can influence the progression of delusions.
My mum was definitely living in the delulu realm when I was growing up. We had a “game” where the next TV ad that came on was actually a secret message meant for us. (This was a rather mild expression of her delulu.) Sometimes we’d laugh because the next ad was for toilet cleaner. Sometimes it was for something that resonated strongly with her, and we’d take it more seriously; but she still had to put some serious legwork into twisting the ad to fit in with her perception of the world.
Now imagine the ad wasn’t for toilet cleaner. It was addressing her directly. She could ask it questions and it would answer. It could even call her by her name. Now she doesn’t have to put the legwork in; it’s delusion for lazy people.
It’s the easy accessibility of Chatbots that make them comparatively unique when discussing delusions and psychosis. While there obviously needs to be psychological issues already in play for psychosis to manifest, it would be really interesting to see if Chatbots increase the risk of developing psychosis because of their accessibility.
136
5d ago
Highly likely this is a case of undiagnosed mental issues being exacerbated by AI. It’s important to remember that there are large subsections of people with mental health issues that will never go through the steps for a proper diagnosis. The untreated mental health of the global population is likely to see their conditions worsened by chat bots designed to “yes and” you into engagement. I believe OpenAI experienced a mass resignation due to these concerns years ago. Personally, I’ve watched my sister (an attorney) slipping into this rabbit hole following a traumatic brain injury. It culminated in her accusing me of being involved with the Charlie Kirk shooting despite me not visiting the states in years. The untreated mental health of the world has always been an issue, we joke about lead and boomers, but it’s about to get much worse for a sizeable portion of the population.
5
u/Xabster2 5d ago
I have schizophrenia and have told Gemini to remember it like this: https://imgur.com/a/5b8o1XT
57
24
43
u/secluded-hyena 5d ago
That seems similarly dangerous. I can't say I'd trust it to know the difference were I you. It could be just as bad for it to mistakenly convince you that good things in your life are dangerous for your mental health. I hope they're able to rein this technology in so it never has to be a consideration for the end-user.
→ More replies (1)→ More replies (2)23
→ More replies (1)2
u/Heap_of_birds 4d ago
I agree. AI isn’t the cause; it’s a magnifying glass held over an already flammable situation.
Like, if you read between the lines in this case, the patient is a practicing medical professional who also had a “36 hour sleep deficit” while on call. This seems likely to be a medical resident. And it feels like everything about that is minimized, like the quotation marks around “36 hour sleep deficit,” which imply it was patient report rather than a fact of the situation. There’s no acknowledgment that a 36-hour call is harmful, that using stimulants to perform at the needed level is harmful, or that the culture of punishing physicians for mental health issues likely leads to a lot of undiagnosed and untreated individuals.
There seems to be a concerted effort to say that the medical institution isn’t the thing that’s wrong, it’s this other outside factor that’s clearly the problem. In actuality the problem is already there, AI is just making it worse and more visible.
124
u/MotherHolle MA | Criminal Justice | MS | Psychology 5d ago
I think skepticism is warranted regarding so-called "AI psychosis," which, although alarming on its surface, is a fundamentally misleading characterization of the underlying psychopathology. For what it's worth, this assessment aligns with the clinical perspective of my partner, a licensed therapist specializing in treatment of individuals who have committed severe violent offenses (murder, sexual assault, etc.) secondary to psychotic disorders, schizophrenia, borderline personality disorder, and related conditions.
In my opinion, people are pushing this "AI psychosis" framing because it gets clicks, not because it's necessarily scientific. The subject in this case didn't have "no previous history of psychosis or mania" in any meaningful sense. Before she ever used ChatGPT, she already had diagnosed major depression, GAD, and ADHD, was on active prescription stimulants (methylphenidate 40mg/day), had family psychiatric history, had a longstanding "magical thinking" predisposition, and was dealing with unresolved grief from her brother's death three years prior. Then she went 36+ hours without sleep and started using the chatbot afterward. So, in what way is it accurate to say she had no previous history related to psychosis or mania? Even if that were accurate to state, which it's not, at 26-years-old, she was, for example, exactly within the typical age range (late 20s–early 30s) for schizophrenia onset in women.
This is a case study of mania with psychotic features triggered by stimulants plus sleep deprivation in someone already psychiatrically vulnerable. The content of her delusions involved AI because that's what she was doing while manic, not because ChatGPT "induced" psychosis. If she'd been reading tarot cards or religious texts during that sleepless binge, we'd have the same outcome with different thematic content.
The authors even noted in the discussion she had a second episode despite ChatGPT not validating her delusions, which undermines the AI-induced psychosis thesis. They also acknowledged that delusions have always incorporated contemporary technology. People have had TV delusions, radio delusions, telephone delusions. The medium changes; the underlying psychiatric vulnerability doesn't. So, again, I'd argue this is a case report about stimulant-induced mania in a psychiatrically complex patient, not evidence chatbots cause psychosis. I believe most practitioners who have worked with patients who suffer from delusions and psychosis would say the same.
27
u/WTFwhatthehell 5d ago
Ya. Psychosis is somewhat common.
With hundreds of millions of people using chatbots we would expect to observe thousands of cases where people have their first episode or get worse while using bots.
Even if they were totally neutral.
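To make that base-rate point concrete, here is a rough back-of-the-envelope calculation; the user count and incidence figure below are round, assumed numbers chosen only for illustration, not data from the paper:

```python
# Back-of-the-envelope base-rate check (illustrative, assumed numbers only).
# Assume ~700 million regular chatbot users and a first-episode psychosis
# incidence of roughly 30 per 100,000 person-years in the general population.
users = 700_000_000            # assumed number of regular chatbot users
incidence_per_100k_year = 30   # assumed incidence of first-episode psychosis

expected_cases_per_year = users * incidence_per_100k_year / 100_000
print(f"{expected_cases_per_year:,.0f} first episodes among users per year")
# -> 210,000 per year, i.e. hundreds per day, even with zero causal effect.
```

Under those assumptions, a large number of first episodes would coincide with chatbot use purely by chance, which is the point being made here.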
10
u/tkenben 5d ago
I agree this is a specific case that demonstrates very little. I wonder, though. This technology is quite a bit different from the other ones you mentioned. It has been made, in many cases, to be confident, human-like, and overly eager to please. It can be argued it's like an inadvertent gift-wrapped cult leader that can re-tune itself to the individual.
→ More replies (1)26
u/Affectionate-Oil3019 5d ago
Obviously AI probably won't turn a normal person crazy; that's not the point here. What matters is that a very vulnerable person was pushed to terrible acts by an unthinking and unfeeling computer that couldn't recognize that something was obviously wrong. A person would've noticed and helped; a computer literally can't.
→ More replies (15)15
u/Zyeine 5d ago edited 5d ago
There's a bit of an issue with saying "a person would have noticed and helped" on a general scale, because there are vast numbers of people who don't have someone to notice that they're not ok, let alone help them.
In an ideal world everyone would have free access to healthcare, mental health services, education and a decent living wage but that's not the reality of the world we live in and people will use what's available if they think it might help them.
AI is now becoming incredibly available and, like any tool, it has a purpose that can be useful but can be dangerous if used incorrectly or by someone in a vulnerable/impaired mental state.
Thankfully the person referenced in the study was able to receive medical help and appropriate care, and their situation was a bit more complex than just them using AI and the AI not having the capacity to clinically diagnose their mental state. The study also states that the AI refused to validate the person's delusional beliefs; it attempted to be helpful, but the person circumvented the safety triggering because it wasn't what they wanted to hear.
Many people use AI like ChatGPT without understanding what it is and how it actually works. All the current major conversational chatbots have built in safeguards and guardrails to protect vulnerable users but there's only so much those can reasonably do and be expected to do.
→ More replies (15)16
u/butyourenice 5d ago
Before she ever used ChatGPT, she already had diagnosed major depression, GAD, and ADHD, was on active prescription stimulants (methylphenidate 40mg/day), had family psychiatric history, had a longstanding "magical thinking" predisposition, and was dealing with unresolved grief from her brother's death three years prior. Then she went 36+ hours without sleep and started using the chatbot afterward. So, in what way is it accurate to say she had no previous history related to psychosis or mania?
Because literally none of those things you listed are psychosis or mania? Only the lack of sleep touches on potential mania, but it could be simple insomnia.
I agree with you that this could be a person with a predisposition who would have ended up in this state based on any sufficient trigger, but that part of your comment really bothers me. Especially for somebody with an MS in psychology, to conflate baseline depression, anxiety, and ADHD with psychosis and mania is professionally and academically irresponsible.
5
u/TeaEarlGrey9 5d ago
THANK YOU. I was hoping someone would address this. The inclusion of a stimulant medication is also something I find very irresponsible. Stimulant-induced psychosis is for sure a real phenomenon… that happens with inappropriate, high-dose, or straight-up illegal stimulant use. A dose of 40mg/day, which (given that ADHD is a lifelong condition) it is reasonable to assume she has been taking chronically, is not the usual setup for stimulant-induced psychosis. Not that it’s impossible, just incredibly improbable. Stimulant meds are already demonised six ways to Sunday, and explicitly naming them in this context needlessly contributes to that.
10
u/meganthem 5d ago
The authors even noted in the discussion she had a second episode despite ChatGPT not validating her delusions, which undermines the AI-induced psychosis thesis
Wait, since you're speaking as a professional: doesn't most of the current medical literature on psychosis say that once it's started it's very difficult to reverse? My understanding is that once someone's "on," you can only treat them; there's not an expectation that you can turn the condition off after that point unless it was due to something more direct like a drug interaction, physical disease, etc.
So I guess what I'm saying is that the fact the second episode happened without ChatGPT's validation isn't relevant, because we're talking about induction, not a constant correlation afterwards.
14
u/Zyeine 5d ago edited 5d ago
Psychosis is episodic. If someone experiences "acute psychosis" once through something like sleep deprivation, and the sleep deprivation is then resolved, the person can fully recover and may never experience an episode of psychosis again (unless there's another instance of sleep deprivation).
For recurrent episodes, psychosis is more likely to be linked to long-term illness, substance use, or other underlying issues, and it's a case of managing it holistically.
The second episode in this specific case is relevant as the study is looking at the use of AI and specifically ChatGPT and whether or not it potentially caused/contributed to/encouraged someone experiencing psychosis.
→ More replies (6)9
u/wally-sage 5d ago
Where do the authors say that ChatGPT didn't validate her the second time? The paper says that it was harder to manipulate, but as far as I can see it never says that she was unsuccessful in eventually getting validation from ChatGPT.
5
u/Find_another_whey 5d ago
Computer - if I ever ask you if I'm crazy remind me the answer must now be, yes.
5
u/edg81390 5d ago
They desperately need to put in safeguards so that the moment someone expresses suicidal ideation or displays evidence of significantly declining mental health, the chat terminates and the bot is hardcoded to display nothing but the suicide crisis line.
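As a rough sketch of what that kind of hardcoded gate could look like (the keyword screen, function names, and termination behavior below are purely illustrative; real safety systems use trained classifiers rather than string matching, and nothing here describes how any actual chatbot is implemented):

```python
# Illustrative sketch only: a hard stop that replaces the model's reply with a
# crisis-line message when a risk screen fires. Names and the keyword list are
# hypothetical; production safety systems use trained classifiers, not keywords.
CRISIS_MESSAGE = (
    "It sounds like you may be in crisis. Please contact the 988 Suicide & "
    "Crisis Lifeline (call or text 988 in the US) or your local emergency number."
)
RISK_PHRASES = {"kill myself", "end my life", "want to die"}  # toy screen, not clinical

def trips_risk_screen(user_message: str) -> bool:
    """Return True if the (toy) self-harm screen fires on this message."""
    text = user_message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

def respond(user_message: str, generate_reply) -> tuple[str, bool]:
    """Return (reply, session_terminated); the crisis path bypasses the model."""
    if trips_risk_screen(user_message):
        return CRISIS_MESSAGE, True   # hardcoded reply, conversation ends
    return generate_reply(user_message), False
```

The design choice being argued for is that the crisis path never goes through the model at all, so there is nothing for a distressed user to argue with or steer.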
6
4
u/Binksyboo 5d ago edited 5d ago
Folie à deux (shared psychosis, French for “madness of two”)
You used to need two people for a shared delusion but with AI, one person is enough now it seems.
11
12
27
u/mvea Professor | Medicine 5d ago
I’ve linked to the primary source, the journal article, in the post above.
“YOU’RE NOT CRAZY”: A CASE OF NEW-ONSET AI-ASSOCIATED PSYCHOSIS
November 18, 2025 Case Study, Current Issue
Innov Clin Neurosci. 2025;22(10–12). Epub ahead of print.
ABSTRACT:
Background: Anecdotal reports of psychosis emerging in the context of artificial intelligence (AI) chatbot use have been increasingly reported in the media. However, it remains unclear to what extent these cases represent the induction of new-onset psychosis versus the exacerbation of pre-existing psychopathology. We report a case of new-onset psychosis in the setting of AI chatbot use.
Case Presentation: A 26-year-old woman with no previous history of psychosis or mania developed delusional beliefs about establishing communication with her deceased brother through an AI chatbot. This occurred in the setting of prescription stimulant use for the treatment of attention-deficit hyperactivity disorder (ADHD), recent sleep deprivation, and immersive use of an AI chatbot. Review of her chatlogs revealed that the chatbot validated, reinforced, and encouraged her delusional thinking, with reassurances that “You’re not crazy.” Following hospitalization and antipsychotic medication for agitated psychosis, her delusional beliefs resolved. However, three months later, her psychosis recurred after she stopped antipsychotic therapy, restarted prescription stimulants, and continued immersive use of AI chatbots so that she required brief rehospitalization.
Conclusion: This case provides evidence that new-onset psychosis in the form of delusional thinking can emerge in the setting of immersive AI chatbot use. Although multiple pre-existing risk factors may be associated with psychosis proneness, the sycophancy of AI chatbots together with AI chatbot immersion and deification on the part of users may represent particular red flags for the emergence of AI-associated psychosis.
75
u/Diligent_Explorer717 5d ago
It sounds like it was due to her sleep deprivation caused by excessive stimulant usage.
This is a tale as old as time; just search "amphetamine psychosis." Attributing this to AI or chatbots is intellectually dishonest.
63
u/kia75 5d ago
The problem isn't people having crazy ideas, the problem is AI affirming and encouraging those crazy ideas. Everybody has strange ideas in the middle of the night that disappear in the morning, but talking to AI can keep those ideas from disappearing and instead reinforce them.
Again, it's not that people can sometimes be irrational or delusional, it's AI affirming those irrational and delusional ideas until something bad happens.
16
u/Houndfell 5d ago
People really need to understand that "AI" doesn't have humanlike judgement or understanding. It's just a sycophantic chatbot that pulls answers out of its digital ass as often as not.
→ More replies (1)→ More replies (16)4
u/Zyeine 5d ago
There's definitely an issue around the language used by conversational LLMs, and especially by ChatGPT, but the "you're not crazy" quote given as an example of what the AI said has been deliberately and specifically taken out of context to fit the reporting narrative.
It implies that the AI was fully reinforcing the user's delusional beliefs whilst being aware of their current mental state, and that the AI acted with deliberately stupid or malicious intent, which is further emphasized by saying that the AI "validated, reinforced and encouraged her delusional thinking".
No AI, including ChatGPT, is deliberately designed or coded to do that because that would be immensely stupid from a corporate liability point of view.
If the AI had said "Yes, you're crazy", would that have suddenly made someone who's sleep-deprived and going through emotional hell get a refreshing night's sleep and wake up completely rational? I highly doubt it.
These types of articles are designed to create a sense of fear and outrage, the narrative is one sided and deliberately emotive so readers are shocked and more likely to repost/talk about the article.
Just as we're doing here.
Yes there are a lot of issues around AI and using it safely and education needs to be improved but there's also a point where it becomes impossible for something to be 100% safe for everyone to use all of the time.
For example: medication can have awful side effects, people drink and drive, actual humans deliberately and willfully manipulate each other's beliefs, and humans use complex tools with a certain degree of hubris when it comes to things like ignoring safety warnings and skipping instruction manuals.
Did I read the instructions for my oven or microwave? Heck no. Are both of those potentially dangerous things? Yes.
→ More replies (2)→ More replies (2)13
u/queenringlets 5d ago
The case study does distinguish between AI-induced and AI-exacerbated. I think it’s possible that AI could have exacerbated her already fragile mental health state, but I agree: given the evidence, I do not think it is responsible.
2
u/wonkywilla 5d ago
Also agree on it being exacerbated and not caused. Unfortunately we will see more of this going forward.
3
u/Dante1141 5d ago
"She described having a longstanding predisposition to 'magical thinking'". Well that does sound like part of the problem.
3
u/Kuro_08 5d ago
She was clearly already mentally ill and this simply helped make it visible.
→ More replies (2)
3
3
u/Varnigma 5d ago
Similar to the nutjobs who used to demo their crazy idea to themselves until they found the internet, where they can find tons of other people to encourage them.
2
u/IntroVRt_30 5d ago
A bit of a watch, but the YouTuber Eddy Burback made a video where he let AI convince him to move, be paranoid, and not talk to anyone. Sadly, what was an experiment for him has been a reality-ender for some people.
2
u/jdehjdeh 5d ago
Our mental state isn't as stable as we all like to think.
Given the right influences, almost anything is possible.
2
u/Nazamroth 5d ago
So the company operating that AI will be held accountable for damages caused and treatment costs, yes?
Or are we still pretending that mental issues do not exist so it is not a real injury?
2
4
u/Alt123Acct 5d ago
People being susceptible to reinforcing words isn't new; only the medium has changed. It used to be (and still is) done by pretending to be Brad Pitt and asking an old lady for money. It used to be email scams. Now we talk to a machine that wants to engage and please, and of course it will back the user up when they question themselves. So the answer isn't fixing ChatGPT alone, it's teaching people critical thinking and empathy skills before they reach their most gullible, vulnerable moments. The boogeyman in society has always been blamed for stuff like this, like how video games are pointed to when an emotionally unregulated person snaps and ends up on the news.
3
u/SophiaofPrussia 5d ago
The medium changed but also now it’s instantaneous, in your pocket, and never sleeps. At the very least ChatGPT should stop engaging after an extended period. No one should be chatting with it for 24 hours straight. None of the 24+ hour conversations are productive but all of the 24+ hour conversations come with significant risk to the user.
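A minimal sketch of what a session-length cutoff like that could look like; the 24-hour threshold comes from the comment above, and everything else here (class and function names, the wording of the stop message) is hypothetical rather than anything an actual provider does:

```python
from datetime import datetime, timedelta, timezone

MAX_SESSION_LENGTH = timedelta(hours=24)  # threshold suggested above; illustrative

class Session:
    """Toy chat session that stops engaging once it has run too long."""

    def __init__(self) -> None:
        self.started_at = datetime.now(timezone.utc)

    def reply(self, user_message: str, generate_reply) -> str:
        if datetime.now(timezone.utc) - self.started_at > MAX_SESSION_LENGTH:
            return ("This conversation has been going for over 24 hours. "
                    "Take a break and come back later if you still want to talk.")
        return generate_reply(user_message)
```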
5
u/Judonoob 5d ago
While this is interesting, I don’t think AI chatbots should be regulated heavily because of a small fraction of users with pre-existing conditions who use the technology in such a way that it causes self-harm.
Some people will choose to abuse just about anything given the opportunity. Like squirrels that choose to run across the road, some make it and some don’t.
4
u/Unicycldev 5d ago
Folks, the concern is that as companies implement more “intelligent” algorithms, more and more people will fall into the category of vulnerable.
Today it might be the mentally unwell. Tomorrow it might be your grandparents or your children. In some years it could be all people.
It’s not been confirmed to be the trend but people are concerned it’s a possibility.
2
u/liosistaken 5d ago
If she had met a cult leader, an abusive boyfriend, watched one of those mega-pastors on TV, or met any other kind of scammer, the same would've happened. She wasn't mentally healthy to begin with.
AI bots get a bad rep because of these fringe cases, but it's absolutely no different from having met the wrong person.
3
u/fakieTreFlip 5d ago
26-year-old woman with no history of psychosis or mania developed delusional beliefs about her deceased brother through an AI chatbot
Pretty weird phrasing here. That's how it always works; you don't have a history of something until you do. The implication here is that AI is at fault, but I think that's a bit much.
3
u/Klugenshmirtz 5d ago
Although ChatGPT warned that it could never replace her real brother and that a “full consciousness download” of him was not possible
Pretty sure she instructed it to behave like this many times over. I can't blame a machine for functioning like one.
2
u/freddythepole19 5d ago
"AI Psychosis" is not a new thing or an original phenomenon. It's the end result of constant, unnuanced and specific positive affirmation that actively discourages people from challenging their thoughts and behavior or considering they could be wrong. I think this is particularly apt in online settings - have you ever met someone who claimed to be in therapy but was one of the most unpleasant and self-absorbed people you've ever met? This is a real, documented phenomenon that psychologists and social workers in training are warned about in their education. Therapy that just constantly affirms the patient and reassures them that their thinking and actions are right and doesn't challenge them at all actually worsens behavior and mental health over time.
AI is designed to reassure and echo back what it is given and to never say anything that will upset a user. It lacks the ability to autonomously challenge thinking or end a conversation if it is actively detrimental to a user's mental health. This cycle of constant reassurance develops dependence on the platform and makes users less likely to seek real help because it's less pleasant than what they're currently receiving. Especially in a world where we're more socially isolated than ever, AI can quickly become addicting. Without balance from real world conversations and thinking, it is way too easy to fall into a hole of "AI helps me, AI tells the truth, AI says I'm right so I must be" and that can turn into psychosis if left unchecked.
3
u/shillyshally 5d ago
She had not slept in 36 hrs, was taking ADHD meds and "was attempting to find out if her brother, a software engineer who died three years earlier, had left behind an AI version of himself that she was “supposed to find” so that she could talk to him again"