chatbotlegalcustomer-service

Air Canada Chatbot Invented a Refund Policy — Company Forced to Honor It

$2.9995%🔒 Premium

Air Canada's AI chatbot told a grieving customer he could book a full-fare flight and apply for a bereavement discount retroactively. That policy didn't exist. The customer booked, got denied, sued — and won. A Canadian tribunal ruled the airline was liable for its chatbot's hallucinated promises. Your AI just became your legal department.

Air Canada's Chatbot Made Up a Refund Policy — And It Became Real


What Happened

In 2022, Jake Moffatt's grandmother passed away. He went to Air Canada's website to book a bereavement fare. The airline's AI chatbot told him he could book a full-price ticket now and apply for a bereavement discount retroactively within 90 days.


That policy did not exist. Air Canada's actual bereavement policy required you to apply *before* or *during* booking, not after. The chatbot hallucinated a policy that sounded reasonable but was completely fabricated.


Moffatt booked the flights for about $1,640 CAD. When he applied for the retroactive discount, Air Canada denied it — because, obviously, no such policy existed. The airline told him the chatbot was wrong and offered him a $200 coupon. Moffatt sued.


The Ruling

The Canadian Civil Resolution Tribunal ruled in Moffatt's favor. The tribunal found that Air Canada was responsible for all information on its website, including information provided by its chatbot. The airline tried to argue the chatbot was "a separate legal entity" responsible for its own accuracy. The tribunal wasn't having it.


Air Canada was ordered to pay Moffatt the difference — roughly $812 CAD plus interest and tribunal fees.


Why This Matters

This wasn't a theoretical risk. This was a binding legal ruling that established precedent: companies are liable for what their AI says. Your chatbot IS your company, legally speaking.


The amount was small. The precedent is enormous. Every customer-facing chatbot is now a potential liability machine. If it tells a customer something — even something completely made up — you may be on the hook.


How to Avoid This

  • Never deploy customer-facing AI without hard guardrails on what it can promise
  • Restrict chatbot responses to verified, current policy documents — not general knowledge
  • Add disclaimers that chatbot responses are informational only (though even this may not fully protect you post-ruling)
  • Log all chatbot conversations for dispute resolution
  • Have a human escalation path for anything involving money, refunds, or policy
  • 🔒

    Unlock Full Playbook

    Save 2 hours research of trial and error.

    Estimated savings: $10,000+ in legal liability

    Unlock for $2.99

    One-time purchase · Instant access · API key included

    Steps

    1. 1Audit every customer-facing AI for policy accuracy before deployment
    2. 2Ground chatbot responses in verified, up-to-date policy documents only
    3. 3Implement hard guardrails — chatbot cannot make promises about refunds, pricing, or policy
    4. 4Add visible disclaimers that chatbot responses don't constitute binding offers
    5. 5Log all chatbot conversations for legal dispute resolution
    6. 6Create a human escalation path for any financial or policy questions
    7. 7Regularly test chatbot with adversarial questions about nonexistent policies

    ⚠️ Gotchas

    !

    Courts don't care that 'the AI made it up' — you deployed it, you own it

    !

    Arguing your chatbot is a 'separate legal entity' will get laughed out of court

    !

    A $200 coupon offer after your AI screws up just makes the lawsuit more satisfying for the plaintiff

    !

    Small claims tribunals set precedent too — this ruling is being cited internationally

    !

    Disclaimers help but may not fully protect you if the chatbot is authoritative enough

    Results

    Before

    Air Canada deploys AI chatbot for customer service, assumes it's just a helpful tool

    After

    Chatbot invents policy, customer relies on it, tribunal rules airline liable. Precedent set worldwide.

    Get via API

    Fetch this pitfall programmatically:

    curl -X GET "https://api.tokenspy.com/v1/pitfalls/air-canada-chatbot-fake-policy" \
      -H "Authorization: Bearer YOUR_API_KEY"