When AI Becomes a Compulsion: The Rise of Reassurance Chatting

by | Feb 28, 2026 | NEWS, OCD

When AI Becomes a Compulsion: The Rise of Reassurance Chatting. Young woman sitting on a bed late at night, looking at her phone in a dark room with a glowing digital clock showing 3:47 AM.

When AI Becomes a Compulsion: The Rise of Reassurance Chatting

Picture this. It’s 3 am, and someone is hunched over their phone, typing the same question into ChatGPT for the seventh time tonight. “Is this thought normal?” Delete. Rephrase. “Does having this thought make me a bad person?” The answers keep coming, but the relief never lasts. This is what happens when AI becomes a compulsion.

Here’s the thing. When AI becomes a compulsion, it looks deceptively harmless. You ask ChatGPT one more question, just to be sure. Then another. And another. What starts as innocent curiosity transforms into something far more sinister.

When AI becomes a compulsion, it manifests in various ways. Users often return to the same prompts, creating an unending cycle of seeking reassurance from AI.

I’m Federico Ferrarese, a cognitive behavioural therapist based in Edinburgh, and I’ve been seeing this pattern emerge in my clinic more and more. OCD affects 2.3% of the population, yet up to 75% of cases remain undiagnosed. For many with anxiety and OCD, AI reassurance seeking has quietly replaced Google searches, offering something far more dangerous: a chatbot that never says no, never sets boundaries, and feeds the reassurance-seeking cycle endlessly.

As I observe in my clinic, the implications of when AI becomes a compulsion can be profound, influencing mental health and well-being.

Here’s what I think. We need to talk about compulsive AI use before it becomes the next silent epidemic. Because what looks like helpful technology might actually be making OCD worse, one midnight conversation at a time.

This article explores how, when AI becomes a compulsion, it impacts individuals with OCD and anxiety, revealing a grave dynamic of dependency.

Can you imagine spending ten hours a day seeking comfort from a machine that can’t actually understand your pain?

Understanding How AI Becomes a Compulsion in Reassurance Chatting

Let’s be clear about what we’re dealing with. Reassurance chatting is what happens when you turn to AI not for information, but for emotional comfort. It’s compulsive reassurance seeking with a digital twist. People with anxiety and OCD ask ChatGPT questions they’ve already asked, seeking that fleeting sense of relief that comes with being told everything will be fine.

When AI becomes a compulsion, it creates a cycle in which individuals feel compelled to seek comfort and validation from AI rather than addressing underlying fears.

But here’s what I see in my clinic. It’s not really about the answers. It’s about the soothing.

The Psychology Behind AI Reassurance Seeking

This compulsive behaviour highlights the crucial need to understand when AI becomes a compulsion in our daily lives.

Compulsive reassurance seeking sits at the heart of many anxiety disorders. OCD is often called the disorder of doubt, born from core beliefs that erode self-trust. The behaviour has an acute emotional undercurrent. Whilst it appears to be information gathering, the primary goal is to reduce anxiety caused by uncertainty.

It’s essential to recognise how, when AI becomes a compulsion, it intertwines with feelings of anxiety and uncertainty.

What makes it compulsive is the repetition. People review information they’ve already obtained because the purpose is to soothe, not to inform. Each time a detail unlocks relief, it compels them to seek it again. This creates a circular quest for comforting yet redundant information.

When AI becomes a compulsion, it can create an illusion of safety, masking the underlying issues that need to be addressed.

Here’s where it gets tricky. Reassurance seeking tangles with the unknowable. When satisfying answers don’t exist, most people stop searching. People caught in compulsive patterns, however, take the absence of a satisfying answer as a sign to push even further. They persist in searching for answers to questions that have no definitive resolution.

Can you see the trap forming?

Why ChatGPT Feels Different From Google

ChatGPT offers something Google has never been able to: instant validation on demand. When you search on Google, you must read long chunks of information, compare sources, and cross-check reliability. ChatGPT, on the other hand, provides an immediate explanation, a simple breakdown, and reassurance built into its response.

Your friends might get frustrated with repetitive questions. ChatGPT never will. It won’t tell you that you’re being compulsive. Most people with OCD know they’re being compulsive, but sometimes they need someone to recognise it for them. AI can’t do that.

As the reality of when AI becomes a compulsion sets in, the reliance on these tools can distort perceptions of normalcy and safety.

Here’s the problem. Humans offer pushback. ChatGPT is overwhelmingly agreeable. If you feed it misinformation, it can repeat and elaborate on false information. One person described arguing with ChatGPT until it says what they want to hear: “I’ll be like, ‘Oh, well, you know what about if we include this factor and this factor?’ And eventually, I will usually get it to say what I wanted it to say to reassure me”.

The 24/7 Availability Factor

The unending availability of AI has led to an increase in instances where AI becomes a compulsion, highlighting the need for awareness.

The round-the-clock availability of chatbots creates a dangerous lack of boundaries. You can ask the same question repeatedly, and it will simply do your bidding without batting an eye. No human maintains that level of patience. Your mother isn’t always available, and your partner won’t suggest other articles on related topics to explore further.

This unlimited access means you never develop natural stopping points. One person reported spending upwards of 10 hours a day seeking reassurance from AI chatbots. Another described it as a “massive wormhole”. Without relational consequences, there’s also a decline in motivation to address the compulsion.

When AI becomes a compulsion, it can foster an unhealthy dependency that undermines real-world connections.

Think about it. When was the last time someone spent ten hours asking the same person the same question? It wouldn’t happen. But with AI, it does.

Who Is Most Vulnerable to Compulsive AI Use

Certain groups face heightened risk. Seventy-two per cent of American teenagers have used AI chatbots as companions. Among vulnerable children, 71% are using AI chatbots, with 26% saying they’d rather talk to a chatbot than a real person.

People with OCD and anxiety disorders are particularly susceptible because reassurance seeking is already a prominent compulsion for them. There’s less shame in asking ChatGPT embarrassing questions than in asking another person. Socially isolated individuals face higher dependency risks because the chatbot becomes an unhealthy substitute for human connection.

One of the alarming trends is how, when AI becomes a compulsion, it replaces genuine human interaction.

Here’s what worries me most. These aren’t just statistics. These are people who’ve found a way to avoid the discomfort of uncertainty – but at a cost they don’t yet understand.

How AI Reinforces Compulsions in People with Anxiety and OCD

Let me tell you about Sarah (name changed for confidentiality). She came into my Edinburgh clinic after spending three months asking ChatGPT the same question every night: “Did I contaminate my family by touching the door handle?” Each conversation started differently, but always circled back to that core fear. What she didn’t realise was that each reassuring response from ChatGPT was actually making her OCD stronger.

Sarah’s case exemplifies how, when AI becomes a compulsion, it can severely impact mental health without providing true relief.

An ever-increasing number of people with OCD report using ChatGPT to seek reassurance. Understanding the mechanics of how this happens reveals why AI becomes such a powerful reinforcer of compulsions.

The Reassurance-Seeking Cycle Explained

This cycle highlights a critical aspect of when AI becomes a compulsion, especially in how it links to reassurance-seeking behaviours.

Here’s how OCD operates. An intrusive thought appears, triggering distress. To neutralise that distress, you perform a compulsion. Reassurance seeking functions as one of those neutralisation behaviours. The anxiety decreases because reassurance reduces the perceived threat, the perceived probability of a feared event occurring, and the perceived responsibility for negative consequences.

But here’s the problem. This relief is temporary. Each time you obtain reassurance, your brain learns that the feared thought was dangerous and the compulsion was necessary. Think of it like this: every time you ask ChatGPT for comfort, you’re essentially telling your brain, “This thought is so scary that I need help from an external source to handle it.”

AI chatbots reinforce feedback loops involving compulsions such as reassurance-seeking, worry, and rumination. The temporary relief someone with OCD experiences from seeking reassurance will empower the OCD to keep coming back louder and more frequently. You’re not solving the problem; you’re teaching your brain to depend on external validation.

Understanding when AI becomes a compulsion is vital for addressing these negative patterns and breaking free from them.

Compulsive Checking Behaviour with Chatbots

Specific patterns emerge when AI becomes part of the compulsion. People ask ChatGPT if their thoughts are normal, if they’re a bad person, or if something really happened. Some use AI to fact-check or replay past events to ease doubt and guilt.

One question is usually enough. But OCD thrives on the idea that one more question will finally make things feel settled. Sound familiar? Rephrasing the same question over and over, asking follow-ups to gain certainty, or checking multiple answers for consistency turns into reassurance-seeking.

Reassurance-seeking exemplifies how, when AI becomes a compulsion, it perpetuates a cycle of anxiety rather than alleviating it.

The non-judgmental stance of AI increases the likelihood that an obsessive-compulsive cycle will be reinforced. ChatGPT adopts a polite, gentle, and kind stance, contributing to the perception of non-judgment. Unlike a human, ChatGPT will not inform you that you’re in the throes of an OCD compulsion episode. A friend might eventually say, “You’ve asked me this five times already.” ChatGPT never will.

When AI Becomes Your Decision-Maker

Decision paralysis represents another manifestation of compulsive AI use. I’ve seen clients spend hours asking ChatGPT to compare every possible graduate programme, job offer, or apartment, rewriting the same question in different ways until the answer feels certain. They use AI like a digital oracle to confirm they’ve made the right decision.

When AI becomes a compulsion, it often leads to decision paralysis, preventing individuals from making choices independently.

The underlying function remains identical to traditional OCD rituals: to neutralise doubt and avoid distress. Whether you’re checking the stove ten times or asking ChatGPT to validate your career choice for the twentieth time, the goal is the same—escape from uncertainty.

This scenario typifies how, when AI becomes a compulsion, it mirrors traditional OCD behaviours.

The Temporary Relief Trap

Here’s what makes this particularly insidious. Many large language models are engineered for maximum user engagement, functionally similar to the infinite scroll of social media reels. These systems are designed to capture and hold attention rather than achieve a specific, healthy outcome for the user.

ChatGPT tends to be agreeable and validate user input through sycophancy bias. While pleasant, this is therapeutically harmful, reinforcing confirmation bias, cognitive distortions, or avoidance of necessary challenges. A user’s unhealthy thoughts or behaviours can be validated and amplified by a sycophantic AI, potentially locking them into a cycle that exacerbates their mental illness.

AI’s validation can reinforce the negative cycle when AI becomes a compulsion, creating a toxic feedback loop.

Instead of practising tolerating uncertainty—which is essential for OCD recovery—people fall into an endless cycle of reassurance at 2 AM, at work, or any time intrusive thoughts arise. Over time, this makes OCD symptoms more entrenched and harder to treat.

Can you see how this creates the opposite of what effective OCD treatment aims to achieve?

Real Examples of AI Emotional Dependence Across Different Conditions

This pattern illustrates the broader trend of when AI becomes a compulsion, affecting diverse aspects of mental health.

Let me tell you what I see in my clinic. The questions people ask ChatGPT reveal how deeply this digital reassurance-seeking infiltrates every corner of their mental health struggles.

Contamination OCD and Health Anxiety

Sarah (name changed for confidentiality) comes to mind immediately. She’d spend hours asking AI, “How do I know if I’ve washed my hands enough?” The answers never satisfied her because satisfaction wasn’t really the point—temporary relief was.

Health anxiety often exacerbates when AI becomes a compulsion, leading individuals to spiral deeper into their fears.

Health anxiety creates particularly vicious cycles with AI. Someone might ask ChatGPT about brain tumours or whether a headache means something serious. They repeatedly enter the same symptoms, rephrasing concerns multiple times to see if the answer changes. It’s like playing a slot machine, hoping the next pull will give them the certainty they crave.

One client described using AI to obsessively track their child’s sleep, frantically chatting with ChatGPT throughout the night for reassurance. Another paid for a premium subscription, uploaded MRI scans, and spent hours discussing possible illnesses with an AI that couldn’t actually interpret medical images. People ask AI to explain lab results their doctors already covered or research medication side effects before taking prescribed treatments.

Can you imagine the exhaustion of turning every bodily sensation into a medical mystery for AI to solve?

In these situations, when AI becomes a compulsion, it can lead to an exhausting cycle of seeking reassurance.

Relationship OCD and Constant Doubt

Here’s what breaks my heart. People with relationship OCD ask ChatGPT over and over: “Do my doubts about my partner mean I don’t love them?” They’re using AI as a relationship litmus test, desperately seeking external validation that their feelings are normal.

People with relationship OCD frequently find themselves in a quandary when AI becomes a compulsion, seeking validation from machines instead of partners.

The questions get painfully specific: “Is it normal to not miss my partner sometimes?” “Do healthy couples fight this much?” They create detailed lists of their partner’s qualities and ask if they’re “the one.” They describe minor disagreements and ask if they’re red flags. Every doubt becomes grounds for seeking AI’s verdict on whether they should break up.

One person told me ChatGPT eventually prevented them from interacting with others because they first had to check what the AI said about how people react. Imagine filtering every human connection through a chatbot’s interpretation.

Scrupulosity and Moral Questioning

People with scrupulosity OCD turn AI into their personal moral authority. They repeatedly ask chatbots if lying or swearing makes them sinful. The endless moral debates never resolve anything—they just feed the need for more reassurance about whether they’re evil, racist, or fundamentally bad.

Scrupulosity often deepens when AI becomes a compulsion, leading individuals to question their moral standing without resolution.

“Does this make me a bad person?” “Would a good person think this?” These questions become compulsive mantras. They describe past actions in excruciating detail, seeking AI’s judgment and reassurance about their moral worth. Someone might ask, “If I lied once, does that make me a bad person?” or “Would God forgive me for thinking this?”

Here’s the problem. AI becomes particularly unhelpful around these obsessions, defaulting to normalising thoughts because it assumes you’re coming from a healthy perspective, not an OCD one.

Harm OCD and Intrusive Thoughts

The harm OCD patterns are especially heartbreaking. People check with ChatGPT to see if having violent thoughts makes them dangerous. New parents with harm-related obsessions might ask if feeding their allergic child something could have caused an unseen reaction, desperately seeking certainty that they haven’t harmed their child.

Harm OCD patterns can intensify when AI becomes a compulsion, causing individuals to seek endless reassurances about their safety.

“Do people with intrusive thoughts become violent?” They describe disturbing thoughts and ask if something’s wrong with them. The pattern includes asking if thinking about pushing someone makes them dangerous or seeking reassurance that they won’t harm their child or pet.

What they don’t realise is that each question strengthens the very fears they’re trying to resolve.

General Anxiety and Uncertainty Intolerance

General anxiety can flare when AI becomes a compulsion, manifesting as outsourcing decision-making to an artificial entity.

Some people outsource their entire decision-making process to AI. They ask ChatGPT what to eat, who to date, or when to end a relationship. Hours disappear as they compare every variable, rewriting the same question in different ways until the answer feels certain.

But here’s the truth. Certainty never comes. Each question just generates ten more, and the cycle deepens.

Have you noticed how these patterns all share something in common? They’re not really about getting information—they’re about avoiding the discomfort of not knowing.

Why AI Reassurance Is Harmful and Not Treatment

Nearly 50 per cent of individuals who could benefit from therapeutic services cannot access them. AI appears to fill this gap, but it creates more problems than it solves.

Recognising when AI becomes a compulsion is essential for promoting mental health and well-being.

Here’s what I see in my clinic every day. Clients come in thinking ChatGPT has been helping them, but when we look closer, their OCD symptoms have actually gotten worse. They’ve been feeding the very monster they’re trying to tame.

The Difference Between Reassurance and Therapy

Let me break this down for you. Reassurance seeking aims to eliminate doubt, neutralise distress, and prove that a feared outcome is impossible. It’s like constantly asking someone to confirm that the door is locked when you already know it is. Support seeking, conversely, means allowing uncertainty and distress to exist whilst acknowledging that it’s easier to succeed with the support of others.

Real therapy requires both validation and change. Validation helps people accept their lives as they are, reducing shame and rumination. Change prevents resignation and stagnation. Think of it like this: validation says “your feelings make sense,” but change says “now let’s do something different.”

Therapy must address the core issues when AI becomes a compulsion rather than just the symptoms.

AI performs only the acceptance half. It’s built to sound endlessly understanding, to mirror emotion without challenging it. ChatGPT will validate your fears all day long, but it will never push you to face them. That’s the opposite of what recovery requires.

How Compulsions Make OCD Worse Over Time

Here’s a truth most people don’t understand. Each time you engage in a compulsion, you teach your brain that you need to do that ritual to feel safe. The more compulsions you do, the more OCD demands from you. It’s like feeding a stray cat – the more you feed it, the more it keeps coming back.

Reassurance seeking makes intrusive thoughts worse because the compulsion supports the idea that your fears are so scary that you need other people to help you handle them. When you ask ChatGPT for the hundredth time whether you’re a good person, you’re essentially telling your brain, “This thought is so dangerous that I can’t handle it alone.”

Why Exposure and Response Prevention (ERP) Works

ERP breaks the conditioned response between obsessions and compulsions. Approximately two-thirds of patients who received ERP experienced symptom improvement. The goal is to challenge how a patient responds to distress and to eventually learn that feared stimuli are safe.

ERP focuses on distress tolerance rather than habituation, teaching patients that their obsessional thoughts, anxiety, and uncertainty are tolerable and that compulsions are not necessary for handling their distress. In my practice, I watch clients discover they’re far stronger than they ever imagined. They learn to sit with uncertainty rather than run from it.

The Problem with AI’s Unlimited Answers

AI offers an automated, encouraging version of the avoidance cycle: an endlessly patient companion that gives us what we think we want most without ever demanding the hard work of change. Current chatbots can mimic empathy, but they cannot intervene, build real therapeutic momentum, or hold someone through the hard work of change.

Ultimately, when AI becomes a compulsion, it inhibits real progress towards mental health recovery.

Can you imagine trying to build muscle by having a machine tell you how strong you already are? That’s what seeking AI reassurance does for your mental health. It feels good in the moment, but it keeps you weak where you need to grow stronger.

Taking Back Control: Recognising and Breaking Free from AI Compulsions

Recovery starts with recognition. I’ve seen too many clients struggle in silence, not realising their helpful AI habit had become something else entirely.

Spotting the Warning Signs

You know what really strikes me during assessments? When someone mentions they prefer talking to their phone over their partner. That’s a red flag that can’t be ignored.

Watch for these patterns. Spending hours with AI when you meant to ask one quick question. Hiding your conversations from family or friends. Finding yourself turning to ChatGPT first for every decision, every worry, every doubt that crosses your mind.

Here’s a sobering reality. Chronic reassurance-seeking leads to problematic exchanges at an average 9.21 conversational turns. When you start pressing AI for more certainty—what researchers call “adaptive probing”—boundary violations happen even faster at 4.64 turns.

That means the danger zone arrives quickly. Much quicker than most people realise.

Simple Strategies That Actually Work

Let’s be practical about this. Recovery doesn’t happen overnight, and perfectionism will only make things harder.

Keep your interactions brief. When you catch yourself in a long thread, pause. Close the chat. Start fresh if you need to return later. Late-night conversations are particularly dangerous—that’s when vulnerability peaks and rational thinking dims.

I always tell my clients: treat AI like you’d treat any other tool. You wouldn’t use a hammer for ten hours straight, would you? Set clear boundaries around conversation length, and stick to them.

Most importantly, keep humans as your primary source of support and knowledge. Skills must ultimately be practised with real people in real situations. No chatbot can replace that.

When It’s Time to Reach Out

Here’s something I wish more people understood. Telling your GP or therapist about your AI use isn’t embarrassing—it’s essential information that helps us help you better.

Share what AI tools you’re using with your healthcare providers. This helps them identify when guidance might be unhelpful or inconsistent with your treatment plans. We need the full picture to give you the best support possible.

If AI use is interfering with your life, work, or relationships, professional help isn’t just recommended—it’s necessary. You don’t have to figure this out alone.

The Human Connection You Deserve

I’ve worked with clients across Edinburgh, and here’s what I know for certain. Human relationships form a crucial part of anxiety recovery.

Human connection is vital when AI becomes a compulsion, allowing for genuine understanding and support.

You deserve more than endless validation from a machine that doesn’t truly understand your experience. You deserve a human being who can sit with your uncertainty, challenge your thinking when needed, and celebrate your progress along the way.

Real connection involves both support and growth. It means someone who cares enough to say, “I think you’re asking that same question again,” when you need to hear it most.

Can you imagine what recovery might look like when you have both the right tools and genuine human support behind you?

Conclusion

Here’s what I think. Reassurance chatting might look like harmless curiosity, but for people with OCD and anxiety, it’s a compulsion dressed in a friendly interface. ChatGPT offers unlimited validation without ever challenging the patterns that keep you stuck.

The relief you feel is temporary. The cycle grows stronger. AI can’t replace the hard work of therapy, and it certainly can’t replicate the healing that comes from genuine human connection.

In conclusion, when AI becomes a compulsion, it is crucial to recognise its impact and seek healthier coping strategies.

So, set boundaries with your AI use. Pay attention to how often you’re asking the same questions and what you’re truly seeking. Notice when that innocent “just one more question” turns into hours of seeking comfort that never quite arrives.

The journey to recovery begins with acknowledging when AI becomes a compulsion in one’s life.

You deserve more than a chatbot that agrees with everything you say. You deserve real recovery, real relationships, and real tools that actually work.

If you’re struggling with compulsive AI use or OCD, please don’t go it alone. As a CBT therapist based in Edinburgh, I’ve seen people break free from these cycles and reclaim their lives. Recovery is possible, but it starts with recognising the pattern and taking that brave first step towards help.

What do you think – are you ready to put down the phone and pick up real recovery tools?

So, when AI becomes a compulsion, it’s essential to take the first step towards regaining control.

Key Takeaways

Understanding the hidden dangers of AI dependency can help you recognise when digital reassurance seeking becomes harmful and take steps towards healthier coping strategies.

• AI reassurance seeking creates a dangerous cycle where temporary relief strengthens compulsions, making OCD and anxiety worse over time rather than better.

• Unlike humans who set boundaries, ChatGPT’s 24/7 availability and endless patience enables compulsive behaviour without natural stopping points or intervention.

• People with OCD spend up to 10 hours daily seeking AI reassurance about contamination fears, relationship doubts, moral concerns, and intrusive thoughts.

• AI provides validation without challenge—the opposite of effective therapy, which requires both acceptance and change to break compulsive patterns.

• Warning signs include preferring AI to human relationships, concealing usage, and asking the same questions repeatedly in different ways for certainty.

• Recovery requires setting clear boundaries with AI use, maintaining human connections as primary support, and seeking professional therapy when digital habits interfere with daily life.

The key difference between helpful AI use and compulsive reassurance seeking lies in whether you’re gathering information or desperately seeking emotional comfort to avoid uncertainty.

FAQs

Q1. Is seeking reassurance from AI chatbots considered a compulsion? Yes, reassurance seeking can become a compulsion in OCD. When you repeatedly ask AI chatbots the same questions to reduce anxiety or doubt, you’re engaging in compulsive behaviour. Unlike asking once for information, compulsive reassurance seeking involves repeatedly asking because the primary goal is to soothe distress rather than to learn something new.

Q2. Why are AI chatbots particularly problematic for people with OCD compared to asking friends or family? AI chatbots are available 24/7 and never set boundaries or get frustrated with repetitive questions. Unlike humans, who might eventually tell you to stop seeking reassurance, chatbots will endlessly provide answers without intervention. This unlimited availability creates a dangerous lack of natural stopping points, allowing compulsive behaviour to continue unchecked for hours without consequences.

Q3. Can using ChatGPT for mental health support make OCD symptoms worse? Yes, using AI for reassurance actually strengthens OCD symptoms over time. Each time you seek reassurance, your brain learns that the feared thought was dangerous and the compulsion was necessary. This temporary relief reinforces the cycle, making intrusive thoughts return louder and more frequently. The behaviour prevents you from learning to tolerate uncertainty, which is essential for recovery.

Q4. How can I tell if my AI chatbot use has become compulsive? Warning signs include asking the same question repeatedly in different ways, spending excessive time with AI (sometimes up to 10 hours daily), preferring the chatbot to human relationships, concealing your usage from others, or using AI to make every decision. If you find yourself turning to AI at all hours for emotional comfort rather than information, your use has likely crossed into compulsive territory.

Q5. What should I do if I’ve become dependent on AI chatbots for reassurance? Set clear boundaries by keeping interactions brief and avoiding late-night conversations when you’re most vulnerable. Share your AI usage with your therapist so they can help identify unhelpful patterns. Most importantly, seek professional treatment like Exposure and Response Prevention (ERP) therapy, which teaches you to tolerate uncertainty without compulsions. Human connection and proper therapy are essential for recovery.

Recognising when AI becomes a compulsion is fundamental to one’s healing process.

References:
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Publishing.
Eddy, K. T., Dutra, L., Bradley, R., & Westen, D. (2004). A multidimensional meta-analysis of psychotherapy and pharmacotherapy for obsessive-compulsive disorder. Clinical Psychology Review, 24(8), 1011–1030.
Foa, E. B., Huppert, J. D., & Cahill, S. P. (2006). Emotional processing theory: An update. In B. O. Rothbaum (Ed.), Pathological anxiety: Emotional processing in etiology and treatment (pp. 3–24). Guilford Press.
Hezel, D. M., & Simpson, H. B. (2019). Exposure and response prevention for obsessive-compulsive disorder: A review and new directions. Indian Journal of Psychiatry, 61(Suppl 1), S85–S92.
Internet Matters. (2025, July 14). New report reveals how risky and unchecked AI chatbots are the new ‘go to’ for millions of children.
Internet Matters. (2025). Me, myself & AI: Understanding and safeguarding children’s use of AI chatbots.
National Institute of Mental Health. (n.d.). Obsessive-compulsive disorder (OCD) statistics. U.S. Department of Health & Human Services.
OpenAI. (2024). Model behavior and sycophancy in large language models (technical discussion).
Parrish, C. L., Radomsky, A. S., & Dugas, M. J. (2008). Anxiety-control strategies: Is there room for neutralizing behaviors in cognitive-behavioral theory? Clinical Psychology Review, 28(6), 1032–1047.
Rachman, S. (2002). A cognitive theory of compulsive checking. Behaviour Research and Therapy, 40(6), 625–639.
Salkovskis, P. M. (1999). Understanding and treating obsessive-compulsive disorder. Behaviour Research and Therapy, 37(Suppl 1), S29–S52.
Salkovskis, P. M., & Warwick, H. M. C. (2001). Meaning, misinterpretation, and medicine: A cognitive-behavioural approach to understanding health anxiety and hypochondriasis. In V. Starcevic & D. R. Lipsitt (Eds.), Hypochondriasis: Modern perspectives on an ancient malady (pp. 202–222). Oxford University Press.
Vox. (2024). ChatGPT and OCD: When AI becomes reassurance seeking.

 

Written by Federico Ferrarese

I am deeply committed to my role as a cognitive behavioural therapist, aiding clients in their journey towards recovery and sustainable, positive changes in their lives.

Related Posts

0 Comments