Oh, Air Canada! Airline pays out after AI accident
- Ruling says Air Canada must refund customer who acted on information provided by chatbot.
- The airline’s chatbot isn’t available on the website anymore.
- The case raises the question of autonomous AI action – and who (or what) is responsible for those actions.
The AI debate rages on, as debates in tech are wont to do.
Meanwhile, in other news, an Air Canada chatbot suddenly has total and distinct autonomy.
Although it couldn’t take the stand, when Air Canada was taken to court and asked to pay a refund offered by its chatbot, the company tried to argue that “the chatbot is a separate legal entity that is responsible for its own actions.”
After the death of his grandmother, Jake Moffatt visited the Air Canada website to book a flight from Vancouver to Toronto. Unsure of the bereavement rate policy, he opened the handy chatbot and asked it to explain.
Now, even if we take the whole GenAI bot explosion with a grain of salt, some variation of the customer-facing ‘chatbot’ has existed for years. Whether it churns out automated responses and a number to call, or replies with the off-key chattiness now ubiquitous in generative AI output, the chatbot is the first response consumers get from almost any company.
And it’s trusted to be equivalent to getting answers from a human employee.
So, when the chatbot told Moffatt he could claim a refund after booking his tickets, he ceded to its encouragement and booked flights right away, safe in the knowledge that – within 90 days – he’d be able to claim a partial refund from Air Canada.
He has a screenshot showing that the chatbot’s full response was:
If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.
Which seems about as clear and encouraging as you’d hope to get in such circumstances.
He was therefore surprised to find that his refund request was denied. Air Canada policy actually states that the airline won’t provide refunds for bereavement travel after the flight has been booked; the information provided by the chatbot was wrong.
Moffatt spent months trying to get his refund, showing the airline what the chatbot had said. He was met with the same answer: refunds can’t be requested retroactively. Air Canada’s argument was that because the chatbot’s response included a link to a page on the site outlining the policy correctly, Moffatt should’ve known better.
The chatbot’s response hyperlinked a phrase to further reading. But the way hyperlinked text is used across the internet – including here on TechHQ – means few readers actually follow a link through. Particularly in the case of a GenAI answer, the link functions as a citation-cum-definition of whatever text it underlines.
Still, on the strength of that hyperlink, the airline kept refusing to refund Moffatt. Its best offer was a promise to update the chatbot and give Moffatt a $200 coupon. So he took the airline to court.
Moffatt filed a small claims complaint with British Columbia’s Civil Resolution Tribunal. Air Canada argued not only that its chatbot should be considered a separate legal entity, but also that Moffatt never should have trusted it. Because naturally, customers should in no way trust the systems companies put in place to mean what they say.
Christopher Rivers, the Tribunal member who decided the case in favor of Moffat, called Air Canada’s defense “remarkable.”
“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”
Rivers also found that Moffatt had no reason to believe one part of the site would be accurate and another wouldn’t – Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” he wrote.
In the end, he ruled that Moffatt was entitled to a partial refund of $650.88 CAD (around $482 USD) off the original fare of $1,640.36 CAD (around $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.
Ars Technica heard from Air Canada that it will comply with the ruling and considers the matter closed. Moffatt will receive his Air Canada refund.
The AI approach
Last year, Air Canada CIO Mel Crocker told news outlets that the company had launched the chatbot as an AI “experiment.”
Originally, it was a way to take the load off the airline’s call center when flights were delayed or cancelled. Read: give customers information that would otherwise be available from human employees – which must be presumed to be accurate, or its entire function is redundant.
In the case of a snowstorm, say, “if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.
Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal being to automate every service that did not require a “human touch.”
Crocker said that where Air Canada could, it would use “technology to solve something that can be automated.”
The company’s investment in AI was so great, she told the media, that it exceeded the cost of continuing to pay human workers to handle simple enquiries.
But the fears that robots will take everyone’s jobs are fearmongering nonsense, obviously.
In this case, liability might have been avoided had the chatbot warned customers that its information could be inaccurate. That’s not good optics when you’re spending more on the bot than on the humans who are at least marginally less likely to hallucinate refund policies out of thin data.
Because it didn’t include any such warning, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.” The responsibility lies with Air Canada for any information on its website, regardless of whether it comes from a “static page or a chatbot.”
This case opens up the question of AI culpability in the ongoing debate about its efficacy. On the one hand, we have a technology that’s lauded as infallible – or at least on its way to infallibility – and certainly as trustworthy as human beings, with their legendary capacity for “human error.” In fact, it’s frequently sold as a technology that eradicates human error (and, sometimes, the humans too) from the workplace.
So established is the belief that (generative) artificial intelligence is intelligent that, when a GenAI-powered chatbot makes a mistake, the blame lies with it, not with the humans who implemented it.
Fears of what AI means for the future are fast being reduced in the public media to the straw man that it will “rise up and kill us” – a line not in any way subdued by calls for AI development to be paused or halted “before something cataclysmic happens.”
The real issue, though, is the way in which humans are already beginning to regard the technology as an entity separate from the systems in which it exists – and as an infallible, final arbiter of what’s right and wrong in those systems. While imagining the State versus ChatGPT is somewhat amusing, passing off corporate error to a supposedly all-intelligent third party looks like a convenient “get out of jail free” card for companies to play – though at least in Canada, the Tribunal was engaged enough to see this as the absurd concept it is.
Imagine for a moment that Air Canada had better lawyers, with much greater financial backing, and the scenario of “It wasn’t us, it was our chatbot” becomes altogether more plausible as a defense.
Ultimately, what happened here is that Air Canada refused compensation to a confused and grieving customer. Had a human employee told Moffatt he could get a refund after booking his flight, then perhaps Air Canada could have refused – but only because of the unspoken assumption that said employee was working from given rules – a set of data upon which they were trained, perhaps – and had actively ignored them.
In fact, headlines proclaiming that the chatbot ‘lied’ to Moffatt follow the established formula for a story in which a disgruntled or foolish employee knowingly gave out incorrect information. The chatbot didn’t ‘know’ that what it said was false; had it been given accurate enough training, it would have provided the answer available elsewhere on the Air Canada website.
At the moment, the Air Canada chatbot is not on the website.
Feel free to imagine it locked in a room somewhere, having its algorithms hit with hockey sticks, if you like.
It’s also worth noting that, while the ruling was made this year, it was 2022 when Moffatt used the chatbot – back in the pre-ChatGPT dark ages of AI. While the implications of the case affect the AI industry as it exists here and now, the chatbot’s error in itself isn’t representative, given that it was an early example of AI use.
Still, Air Canada freely assigned it the culpability of a far more advanced intelligence, which speaks to perceptions of GenAI’s high-level abilities. And this kind of thing is still happening.
Also, does it bother anyone else that an AI chatbot just hallucinated a more humane policy than the human beings who operated it were prepared to stand by?