
Stop "Talking": The Text-to-Delusion Ratio Proves It's a Situationship

Upload a chat screenshot: we analyze response times and word counts to prove whether you're in a relationship or a "situationship."


By Del.GG Research Team | March 16, 2026 | 6 min read

You aren’t in a relationship. You are beta-testing a buggy chatbot.

According to the 2024 Vectara Hallucination Leaderboard, even billion-dollar language models fabricate reality roughly 3 percent of the time. The guy you’ve been texting for six weeks without a face-to-face meeting? His fabrication rate is charting closer to 90 percent.

Stop "talking." The Text-to-Delusion Ratio proves it’s just a situationship.

Cognitive scientist Gary Marcus has spent years warning that AI lacks "semantic truth"—it predicts words without understanding reality. Modern dating suffers from the exact same deficit. We built a calculator to measure it. By adapting the NIST AI Risk Management Framework to your iMessage history, we can finally quantify the tipping point where digital intimacy actively cannibalizes physical potential.

Your brain releases dopamine; his phone just generates tokens. Here is why you need to audit the asset.

The Economics of Phantom Engagement

We tell ourselves that double-texting builds intimacy. It doesn't. High-volume texting without face-to-face grounding is the fastest way to bankrupt a relationship. You are witnessing a sociological version of Model Collapse—where your future interactions degrade because they are trained on the synthetic "data" of curated texts rather than the messy reality of being in the same room.

🔑 Key Takeaways

  • The Economics of Phantom Engagement
  • Quantifying Epistemic Integrity: The Mechanics of the Metric
  • Insider Moves: The Reality Check Rubric

The 2024 Edelman Trust Barometer reports that 39% of people believe innovation is being "mismanaged." In the dating market, that mismanagement looks like a "situationship"—a dynamic fueled by cheap dopamine and zero equity. If you exchange 300 messages without a confirmed dinner reservation, you aren't bonding. You are interacting with a Stochastic Parrot.

This term, coined by AI researchers, describes an entity that predicts the next probable word to keep you engaged, devoid of intent. He types "We should hang out soon" not because he plans to buy tickets, but because his internal probability weights know that phrase keeps you on the hook.

"We confuse statistical probability with sentient connection," argues Gary Marcus regarding generative AI. The logic applies perfectly to non-committal partners. "Without physical verification, the output is just a hallucination of intimacy."

While Vectara puts AI error rates between 3% and 27%, our proprietary "5:1 Threshold" suggests that once your ratio of digital words to physical minutes exceeds 5.0, the probability of a long-term outcome drops to near zero. You are not building a relationship; you are feeding a delusion.
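The "5:1 Threshold" above reduces to simple arithmetic: digital words divided by minutes of face time. A minimal sketch (the function names and the "plausibly real" label are ours, not part of any published metric):

```python
def delusion_ratio(words_sent: int, minutes_in_person: int) -> float:
    """Text-to-Delusion Ratio: digital words per minute of face time.

    Above 5.0 crosses the article's (tongue-in-cheek) 5:1 Threshold.
    Zero face time yields infinity: all tokens, no reality.
    """
    if minutes_in_person == 0:
        return float("inf")
    return words_sent / minutes_in_person


def verdict(ratio: float) -> str:
    # The cutoff is the article's heuristic, not peer-reviewed science.
    return "situationship" if ratio > 5.0 else "plausibly real"
```

Six weeks of texting (roughly 3,000 words) against one two-hour coffee date gives a ratio of 25. Archive the thread.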

Quantifying Epistemic Integrity: The Mechanics of the Metric

The "Text-to-Delusion" calculator treats your partner's chat logs like an unstable LLM. While tech CEOs like Sam Altman push for "move fast and break things," your love life requires Epistemic Integrity—the assurance that the words on the screen map to reality.


We apply the concept of Retrieval-Augmented Generation (RAG) to your texts. In software, RAG forces an AI to look up facts in a trusted database before speaking. In dating, we ask: Does his text utilize RAG?

If he texts "I'd love to see you," does he retrieve a calendar date (verified data)? Or is he generating pure creative fiction? When the output relies on vibes rather than vector-based logistics, your delusion score spikes. This aligns with the NIST AI Risk Management Framework, which demands systems be "valid and reliable." A text thread that never converts to a date is neither.
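The RAG test is mechanizable: scan the message for anything verifiable in the physical world. A sketch, with illustrative (far from exhaustive) grounding patterns of our own invention:

```python
import re

# Hypothetical grounding signals: weekdays, clock times, and
# logistics words. A text with none of these is pure generation.
GROUNDING_PATTERNS = [
    r"\b(mon|tues|wednes|thurs|fri|satur|sun)day\b",
    r"\b\d{1,2}(:\d{2})?\s?(am|pm)\b",
    r"\b(tonight|tomorrow|reservation|tickets?)\b",
]


def is_grounded(text: str) -> bool:
    """Return True if the text retrieves at least one verifiable fact."""
    t = text.lower()
    return any(re.search(p, t) for p in GROUNDING_PATTERNS)
```

"Dinner Friday at 7pm?" retrieves a date from a calendar. "We should hang out soon" retrieves nothing: flag it as a hallucination.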

27%: the hallucination ceiling for AI models (Vectara). Your ex's ceiling was likely 100%.

We also benchmark against Chain-of-Thought Prompting. In AI, this prompts the model to "show its work" to reduce errors. Apply this to your chat: Ask him how he plans to make that weekend trip happen. If he cannot produce a reasoning trace (dates, prices, logistics), the algorithm flags the conversation as an AI Hallucination.
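The Chain-of-Thought audit can be sketched the same way: a real plan shows its work with at least two concrete elements (when, how much, how). The keyword lists below are our illustrative assumptions, not a validated classifier:

```python
def has_reasoning_trace(reply: str) -> bool:
    """Crude Chain-of-Thought check: does the reply show its work?

    Requires at least two of three signal classes: a date, a price,
    and a logistics step. Keywords are illustrative placeholders.
    """
    t = reply.lower()
    signals = [
        any(d in t for d in ("saturday", "sunday", "friday", "next week")),
        "$" in t or "tickets" in t,
        any(w in t for w in ("drive", "train", "flight", "book")),
    ]
    return sum(signals) >= 2
```

"Saturday, I'll book the train, tickets are $40" passes. "We should totally do that trip sometime" produces no reasoning trace and gets flagged as a hallucination.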

The Center for Humane Technology warns that technology is designed to hijack attention. A high delusion score means you are caught in an "attention economy" trap that extracts engagement from you while offering nothing in return. You need the observability of a tool like Arize AI, but for your heart. If the data shows he's just a chatbot in a hoodie, archive the thread.


Insider Moves: The Reality Check Rubric

Stop analyzing "vibes." Start auditing data. Your brain is wired to confuse digital attention with actual intention. Here is how to apply a rigorous audit to your chat history.

  • Enforce the '300-Message Cap'. If you exchange 300 messages without a face-to-face meeting, kill the chat. Data suggests that exceeding this threshold without physical contact shifts the dynamic from "potential partner" to "pen pal." You are getting a notification rush, but you aren't building the oxytocin required for bonding.
  • The RAG Test. When he sends a romantic text, check the Grounding. Does the text reference a specific, verifiable action in the physical world? If no, treat it as a hallucination.
  • The Investment Inverse. As word count increases, the likelihood of a physical date decreases. Use our calculator to spot the "Fantasy Gap"—the graphical divergence between how much you think you know him vs. how much you actually do.
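The whole rubric collapses into one audit function. A sketch combining the 300-message cap with a crude Fantasy Gap check (the 1-date-per-300-messages floor is an assumption we invented for illustration):

```python
def audit_thread(message_count: int, dates_completed: int) -> str:
    """Apply the Reality Check Rubric to a chat thread.

    Heuristics only: the 300-message cap comes from the article;
    the dates-per-message floor below is our own placeholder.
    """
    if dates_completed == 0 and message_count >= 300:
        return "ARCHIVE: pen pal detected"
    if message_count and dates_completed / message_count < 1 / 300:
        return "WARNING: fantasy gap widening"
    return "OK: digital and physical investment roughly aligned"
```

450 messages and zero dinners is an automatic archive; 600 messages against a single date trips the fantasy-gap warning.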