
Out-Law News

Air Canada chatbot case highlights AI liability risks

Air Canada has been held liable for a negligent misrepresentation made to a customer by one of its chatbots in a case that one expert said highlights broader risks businesses must consider when adopting AI tools.

Meghan Higgins of Pinsent Masons, who advises businesses in complex technology disputes and investigations, was commenting after the Civil Resolution Tribunal of British Columbia in Canada upheld a complaint made by an Air Canada customer who was disadvantaged by their reliance on information provided by a chatbot on Air Canada’s website.

After his grandmother passed away, Jake Moffatt visited Air Canada’s website to find and book a flight. He sought information about Air Canada’s bereavement fares, which are discounted air fare rates many airlines provide to support people who must travel in the event of the death of a family member or close friend.

While using the website, Moffatt interacted with a customer support chatbot that provided information to him in response to his prompts. Moffatt asked the chatbot about bereavement fares, and the chatbot responded that passengers who needed to travel immediately or had already travelled could submit their ticket for a reduced bereavement rate within 90 days of the ticket’s issue. The message included a hyperlink to a separate page on the Air Canada website with additional information about Air Canada’s bereavement policy. That page stated that the airline’s bereavement policy did not apply to requests for bereavement consideration after travel had been completed.

Moffatt booked the flights and, after travelling, retrospectively applied for a refund at the reduced bereavement fare. Air Canada denied the request. Moffatt challenged that decision, saying he was owed the refund because he had relied on the information provided to him by the chatbot on Air Canada’s website. Air Canada admitted that the information provided by the chatbot was “misleading”, but it contested Moffatt’s right to a refund, highlighting that he had been provided with the correct information via the link the chatbot shared in its message.

The Civil Resolution Tribunal considered whether Air Canada was liable for negligent misrepresentation, which arises under Canadian law when a seller does not exercise reasonable care to ensure its representations are accurate and not misleading. Moffatt was required to show that Air Canada owed him a duty of care, that its representation was untrue, inaccurate or misleading, that Air Canada made the representation negligently, that he reasonably relied on it, and that that reliance resulted in damages. The tribunal held that Moffatt met those requirements.

The Civil Resolution Tribunal noted that Air Canada had argued that it could not be held liable for information provided by one of its agents, servants or representatives, including a chatbot, but had not explained the basis for that suggestion. The Civil Resolution Tribunal rejected as a “remarkable submission” Air Canada’s suggestion that the chatbot was a separate legal entity that was responsible for its own actions. It concluded that it should be obvious to Air Canada that it was responsible for all of the information on its website, regardless of whether it appeared on a static page or was provided by a chatbot. It also rejected the suggestion that Moffatt should have verified the information provided by the chatbot against other information on Air Canada’s website.

Higgins said that while the judgment is confined in its effect to the case that arose, it does highlight a wider risk to businesses amid the rapid adoption of AI technologies to increase productivity and reduce costs.

“The case provides an interesting example of how we can expect novel arguments as the courts apply traditional legal concepts to disputes involving generative AI,” Higgins said. “Air Canada argued that a chatbot was analogous to an agent, servant or representative, drawing on concepts of vicarious liability available in an employment context rather than arguments about the safety or performance of a product. The tribunal didn’t accept those arguments in this decision, which suggests that where technologies such as chatbots are made available to consumers, courts are likely to look to the business deploying that technology to accept liability when something goes wrong. The tribunal clearly considered that Air Canada was better placed than a customer to ensure that a chatbot on its site provided accurate information.”

Lucia Dorian, also of Pinsent Masons, said: “AI-related disputes are starting to emerge in increasing numbers. As the courts get to grips with issues of liability, at least initially we expect them to allocate the risk associated with new AI technologies to the companies using them, particularly as against consumers. The position is likely to be more complicated where AI tools are developed by one company and used along the supply chain by another.”

Laura Gallagher of Pinsent Masons added: “We know that generative AI technologies can ‘hallucinate’ and can produce outputs that are incorrect, and companies deploying these tools should consider how to mitigate those risks. In some circumstances, companies deploying AI tools developed by and sourced from third parties will have scope to recover from those third parties the cost of legal claims arising from inaccurate or misleading information those tools provide.”
