
Stop Trusting Relationship Text Analysis: Why It Flags Neurodivergence as Abuse


By Del.GG Research Team | February 27, 2026 | 5 min read

You paste the screenshots from last night’s confusing argument. The verdict loads in seconds: "85% likelihood of manipulation."

It feels like vindication. It’s actually a statistical hallucination.

Relationship AI wrappers like Mei or Crushed, often built on OpenAI (GPT-4), promise to decode mixed signals. They scan your chat logs for the "Four Horsemen"—criticism, contempt, defensiveness, and stonewalling—using frameworks derived from The Gottman Institute. But in their rush to identify toxicity, these tools have developed a dangerous habit: pathologizing neurodivergence.

To an algorithm trained on neurotypical dialogue, the low-context directness of an Autistic sender looks like "narcissistic coldness." The erratic response times typical of ADHD executive dysfunction read as intentional "breadcrumbing."

This is Algorithmic Gaslighting. The machine isn't just misreading the room; it is enforcing a communication standard that millions of brains literally cannot meet.

The Literalism Penalty: Why NLP Hates Directness

Here is the reality developers gloss over: if your partner is Autistic, these tools are rigged to flag them as abusive.

Modern relationship analyzers don't "read" your screenshots; they map them mathematically using Natural Language Processing (NLP). The software converts text into Vector Embeddings—multi-dimensional coordinates where words with similar contexts cluster together.
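The clustering problem can be sketched with cosine similarity, the standard way to compare embeddings. The three-dimensional vectors below are hand-picked for illustration (a real model uses hundreds or thousands of learned dimensions), but they show the failure mode: in a space shaped by high-conflict training data, a terse message lands closer to "stonewalling" than to "sensory overload."

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "embeddings" with hand-picked coordinates (an assumption for
# illustration), loosely: [brevity, warmth, conflict-language].
vectors = {
    "I need to be alone right now": [0.9, 0.1, 0.4],
    "stonewalling":                 [0.8, 0.1, 0.6],
    "sensory overload":             [0.3, 0.5, 0.2],
}

msg = vectors["I need to be alone right now"]
for label in ("stonewalling", "sensory overload"):
    print(label, round(cosine(msg, vectors[label]), 3))
```

With these coordinates the message scores far higher against "stonewalling" than "sensory overload," so a nearest-neighbor classifier never even considers the benign reading.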

The problem? The training data. These models are often fine-tuned on high-conflict datasets like Reddit’s r/relationships, where brevity usually signals anger.

In this skewed vector space, a direct, unadorned text ("I need to be alone right now") doesn't map to "sensory overload." It maps to "stonewalling" or "hostility." This creates a "Literalism Penalty." Sentiment Analysis algorithms demand emotional fluff—emojis, softeners, performative politeness—to score a message as neutral. Without them, the AI assumes the sender is angry.
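A minimal lexicon scorer makes the Literalism Penalty concrete. The word lists and threshold below are invented for the sketch (real sentiment systems use much larger lexicons or trained models), but the mechanic is the same: softeners add points, and a message containing none of them defaults to "cold."

```python
# Hypothetical mini-lexicon; real tools use thousands of weighted entries.
SOFTENERS = {"please", "sorry", "maybe", "just", "little", "🙂", "haha"}
HARSH = {"never", "always", "whatever"}

def naive_tone_score(text: str) -> int:
    # +1 for each softening token, -1 for each harsh one.
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    return sum(t in SOFTENERS for t in tokens) - sum(t in HARSH for t in tokens)

def flag(text: str) -> str:
    # A factual message with no emotional "fluff" scores 0 and is
    # flagged, even though its content is perfectly neutral.
    return "neutral" if naive_tone_score(text) >= 1 else "flagged: cold/hostile"

print(flag("I need to be alone right now"))
print(flag("Sorry, I just need a little alone time 🙂"))
```

Both messages ask for the same thing; only the second survives, because it pays the performative-politeness tax.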

48% of Americans ages 18-29 report experiencing digital abuse, fueling the demand for automated detection (Pew Research Center, 2020).

We are handing our most intimate vulnerabilities to a calculator that thinks "K." is an act of violence.

ADHD, Executive Dysfunction, and False "Breadcrumbing"

The bias gets worse when we look at frequency analysis. Apps look for patterns of Love Bombing followed by withdrawal to identify narcissism, heavily citing experts like Dr. Ramani Durvasula in their marketing. But code lacks clinical nuance.

For someone with ADHD, communication often runs hot and cold due to executive dysfunction, not malice. They might hyper-focus on a conversation (looking like love bombing) and then vanish for two days because they got overwhelmed, or because the chat simply fell out of sight and out of mind (looking like withdrawal).


The algorithm sees this inconsistency and flags it as Breadcrumbing—a manipulative tactic to keep a victim hooked. It identifies the behavior perfectly but hallucinates the intent.
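A timing heuristic shows why intent is unrecoverable from frequency data alone. The thresholds below are assumptions for the sketch (not any app's published logic): a burst of rapid replies followed by a long silence trips the flag, and the gap sequence is identical whether the cause is hyperfocus or strategy.

```python
from statistics import median

def looks_like_breadcrumbing(gaps_hours: list[float]) -> bool:
    # Naive pattern: rapid-fire replies (tiny median gap) followed by
    # a long disappearance (huge max gap). Thresholds are arbitrary
    # illustration values. The function sees only timestamps.
    return max(gaps_hours) >= 24 and median(gaps_hours) <= 1

adhd_partner = [0.1, 0.2, 0.1, 0.3, 52.0]  # hyperfocus, then overwhelm
manipulator  = [0.1, 0.2, 0.1, 0.3, 52.0]  # deliberate hot-and-cold

print(looks_like_breadcrumbing(adhd_partner))   # flagged
print(looks_like_breadcrumbing(manipulator))    # flagged, same signal
```

Both inputs are the same list, so any classifier built on them must return the same verdict: the behavior is detectable, the motive is not.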

Dr. John Gottman defines "turning away" from a bid for connection as a predictor of divorce. The AI sees an unanswered text as "turning away." A human therapist might see it as "time blindness." The app doesn't ask follow-up questions; it just assigns a toxicity score.

The Danger of False Positives

This isn't just about hurt feelings. It’s about safety. By flooding users with False Positives, we risk desensitizing them to actual danger.

If the AI cries wolf on every neurodivergent text, users might stop listening when real Coercive Control appears. Conversely, an anxious partner might be convinced by the "85% manipulation" score to leave a healthy relationship, gaslit by a bot into believing they are being abused.

The "Neuro-Inclusion" Prompt Fix

If you must use AI to analyze texts, stop pasting blind screenshots. You need to prompt the model to account for the "Double Empathy Problem."

Try this prompt: "Analyze this text exchange for conflict patterns. Flag potential 'Four Horsemen' indicators, but explicitly evaluate if the 'cold' or 'inconsistent' tone could be attributed to neurodivergent communication styles (autistic literalism or ADHD executive dysfunction) rather than narcissistic intent."
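If you are scripting this against an LLM API, the safeguard belongs in the prompt template itself so it cannot be skipped per query. The helper below is a hypothetical sketch (the function name and template wiring are ours); it only builds the prompt string and leaves the actual model call to whatever client you use.

```python
# Hypothetical template wrapper; adapt to your LLM client of choice.
NEURO_INCLUSIVE_TEMPLATE = (
    "Analyze this text exchange for conflict patterns. "
    "Flag potential 'Four Horsemen' indicators, but explicitly evaluate "
    "if the 'cold' or 'inconsistent' tone could be attributed to "
    "neurodivergent communication styles (autistic literalism or ADHD "
    "executive dysfunction) rather than narcissistic intent.\n\n"
    "Exchange:\n{exchange}"
)

def build_prompt(exchange: str) -> str:
    # Baking the caveat into the template means every analysis request
    # carries it, instead of relying on the user to remember it.
    return NEURO_INCLUSIVE_TEMPLATE.format(exchange=exchange.strip())

prompt = build_prompt("Them: I need to be alone right now.")
print(prompt)
```
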


We rely on these tools because digital dating is terrifying. With Norton reporting that 1 in 2 adults admit to stalking partners online, the fear is real. But an algorithm that can't distinguish between a neurological difference and a personality disorder isn't a shield. It's a funhouse mirror.

If you genuinely fear you are in a toxic situation, skip the app store. Contact the One Love Foundation. They use humans, not vectors, to help you find the truth.
