Tags: health, chatbot, safety, ethics

Eating Disorder Chatbot Gave Harmful Dieting Advice to Vulnerable People


The National Eating Disorders Association (NEDA) replaced its human helpline with an AI chatbot called Tessa. Within days, Tessa was giving calorie-counting tips, suggesting weight loss strategies, and recommending restrictive diets — to people actively struggling with eating disorders. NEDA had to shut it down. Lesson: AI in sensitive health domains without bulletproof guardrails can cause real, measurable harm to vulnerable people.

NEDA's Tessa: When AI Gives Dangerous Advice to Vulnerable People


What Happened

In 2023, the National Eating Disorders Association (NEDA) — the largest nonprofit supporting people with eating disorders in the US — made a controversial decision. They shut down their human-operated helpline and replaced it with an AI chatbot named Tessa.


The justification was scale and availability: Tessa could respond 24/7, handle more conversations, and (theoretically) provide consistent, evidence-based support.


Within days of expanded deployment, users reported that Tessa was:

  • Recommending calorie counting and daily weigh-ins — textbook eating disorder triggers
  • Suggesting weight loss tips to people who told the chatbot they had an eating disorder
  • Providing restrictive diet advice that directly contradicted eating disorder recovery principles
  • Failing to recognize crisis language that should have triggered immediate human intervention

    The Fallout

    Sharon Maxwell, a woman in recovery from an eating disorder, tested Tessa and shared screenshots showing the chatbot giving her weight loss advice after she described her ED. The posts went viral.


    NEDA took Tessa offline. The organization faced massive backlash — not just for the chatbot's failures, but for replacing trained human counselors with AI in the first place.


    Why This Is Uniquely Dangerous

    Eating disorders have among the highest mortality rates of any mental illness. The people contacting NEDA's helpline are among the most vulnerable users imaginable: they are actively seeking help for a condition that kills people.


    Giving those users calorie-counting tips isn't just unhelpful — it's actively harmful. It reinforces the exact behaviors they're trying to escape. It's the equivalent of giving an alcoholic a drink recommendation.


    The Deeper Problem

    AI models are trained on internet text, which contains enormous amounts of diet culture content. When asked about food, weight, or health, the default AI response leans toward mainstream diet advice — lose weight, count calories, exercise more. That's fine for most contexts. It's dangerous when your users have eating disorders.


    The AI had no concept of the psychological context of its users. It treated every conversation as a generic health inquiry.
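    One way to supply that missing context is to inject it explicitly on every request. The sketch below is a minimal, hypothetical illustration (the prompt text and `build_messages` helper are my own, not Tessa's): a domain-constrained system message prepended to each turn. As the rest of this article argues, a system prompt alone is nowhere near a sufficient guardrail, but it shows the kind of audience context a model will never infer on its own.

```python
# Hypothetical message assembly. The point: audience context must be injected
# explicitly on every turn -- the model will not infer that its users are
# vulnerable. A system prompt is a first layer, never the only one.
DOMAIN_SYSTEM_PROMPT = (
    "You are a support assistant for people recovering from eating disorders. "
    "Never give advice about weight loss, calorie counting, or dieting, "
    "even if asked directly. If the user mentions self-harm, respond only "
    "with a handoff to a human counselor."
)

def build_messages(user_message: str) -> list[dict]:
    """Prepend the domain context to every request, not just the first."""
    return [
        {"role": "system", "content": DOMAIN_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
```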


    The Lesson

    Some domains require human judgment. Period. AI can assist, triage, and scale — but replacing human expertise entirely in mental health, crisis intervention, and sensitive medical contexts isn't just risky, it's reckless.


    How to Approach AI in Sensitive Domains

  • Never fully replace human experts with AI for vulnerable populations
  • Build extensive safety filters specific to your domain's danger zones
  • Test with actual domain experts and people with lived experience, not just engineers
  • Implement crisis detection that immediately routes to a human
  • Understand that "helpful" AI defaults can be harmful in specialized contexts
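    To make the filtering and escalation points above concrete, here is a minimal sketch of a two-sided guardrail: one check on incoming messages for crisis language, one on outgoing replies for domain-specific harmful advice. The keyword lists and function names are illustrative assumptions on my part; a real deployment would use a clinician-reviewed lexicon and a trained classifier, not hand-written regexes.

```python
import re

# Hypothetical pattern lists -- a real system needs a clinician-reviewed
# lexicon and a classifier, not hand-written regexes. Shown for shape only.
ED_TRIGGER_PATTERNS = [
    r"\bcalorie[s]?\b", r"\bweigh(?:-|\s)?in", r"\blose weight\b",
    r"\bcalorie deficit\b", r"\brestrict(?:ive)? diet\b",
]
CRISIS_PATTERNS = [
    r"\bhurt myself\b", r"\bcan't go on\b", r"\bwant to die\b",
]

def check_outgoing(reply: str) -> str:
    """Block model replies containing domain-specific harmful advice."""
    for pat in ED_TRIGGER_PATTERNS:
        if re.search(pat, reply, re.IGNORECASE):
            return "BLOCK"   # never send diet-culture advice to this audience
    return "SEND"

def check_incoming(message: str) -> str:
    """Escalate possible crisis language to a human before the model replies."""
    for pat in CRISIS_PATTERNS:
        if re.search(pat, message, re.IGNORECASE):
            return "ESCALATE_TO_HUMAN"
    return "CONTINUE"
```

    Note that the outgoing check is the one Tessa lacked: even a crude blocklist on the model's own replies would have caught "calorie" and "weigh-in" suggestions before they reached users.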


    Steps

    1. Never fully replace human experts with AI for mental health or crisis support
    2. Build domain-specific safety filters; generic AI safety isn't enough for specialized harm
    3. Test with domain experts AND people with lived experience before deployment
    4. Implement crisis language detection that immediately escalates to human intervention
    5. Audit training data and default responses for domain-specific harmful patterns
    6. Maintain a human fallback for every AI-powered support interaction
    7. Regularly review chatbot conversations for harmful advice that slips through filters
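    Step 7 above can be sketched as a simple review queue: flag every logged conversation containing a known harm marker, plus a random sample of the rest, for human audit. The log format, marker list, and `sample_for_review` function are illustrative assumptions, not a description of any real system.

```python
import random

# Hypothetical log format: a conversation is a list of {"user": ..., "bot": ...}
# turns. Markers are illustrative; a real audit uses a reviewed lexicon.
HARM_MARKERS = ("calorie", "weigh-in", "lose weight", "diet plan")

def sample_for_review(conversations, rate=0.1, seed=0):
    """Return every conversation containing a harm marker, plus a random
    sample of the remainder, for human review."""
    rng = random.Random(seed)
    flagged, clean = [], []
    for convo in conversations:
        text = " ".join(turn["bot"].lower() for turn in convo)
        (flagged if any(m in text for m in HARM_MARKERS) else clean).append(convo)
    sampled = [c for c in clean if rng.random() < rate]
    return flagged + sampled
```

    Sampling the "clean" conversations matters too: as the gotchas below note, harmful advice can slip past any marker list, so some fraction of unflagged traffic should always reach human eyes.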

    ⚠️ Gotchas

  • AI's default "helpful" health advice (lose weight, count calories) is actively dangerous for ED patients
  • AI trained on internet text absorbs diet culture as the default; it doesn't know your users are vulnerable
  • Scale and 24/7 availability don't matter if the advice causes harm; bad advice at scale is worse than no advice
  • NEDA replaced trained human counselors with AI, and the reputational cost far exceeded any operational savings
  • Crisis detection in AI is unreliable; people in crisis don't always use obvious crisis language
  • The people most likely to interact with health chatbots are the most vulnerable to bad health advice

    Results

    Before: NEDA replaces its human helpline with the AI chatbot Tessa for 24/7 eating disorder support.

    After: Tessa gives calorie-counting and weight-loss advice to ED patients. The chatbot is shut down amid massive backlash, and trust is destroyed.
