Air Canada recently learned that AI chatbots cannot always be trusted…
There is a lot of talk about AI within ITAM and how it can help streamline processes, save human time, improve accuracy, and reduce errors and conflicts. Many of the potential uses being imagined, theorised, and discussed involve contracts – all presented as ways to make things easier, clearer, and faster.
I’m a big fan of AI and see many benefits, both now and in the future, but I am also hesitant about how quickly it should be implemented, adopted, and relied upon. It’s well known that AI is far from infallible, and the vendors themselves acknowledge this in the disclaimers attached to their tools.
We have spoken before about the need to check and verify data produced by AI tools in ITAM, particularly when there may be millions of £/$/€ at stake. However, if everything has to be checked and verified, that may negate much of the benefit offered by using AI – and, perhaps most importantly, human users will not always do that before relying on the data. Some may see that as “user error” but, realistically, it’s human nature and isn’t unusual.
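To make that “check and verify” point concrete, here is a minimal, purely hypothetical sketch in Python. None of these names – EntitlementClaim, verify_claim, the contract figures – come from any real ITAM tool; it simply illustrates the principle that a figure produced by an AI assistant should be reconciled against your own system of record before anyone acts on it.

```python
# Hypothetical "verify before you rely" gate for AI-generated ITAM data.
# All names and figures are illustrative, not taken from any real product.

from dataclasses import dataclass


@dataclass
class EntitlementClaim:
    product: str          # product the AI assistant was asked about
    licences_owned: int   # the figure the AI assistant reported
    source: str           # where the figure came from (e.g. "chatbot")


def verify_claim(claim: EntitlementClaim, system_of_record: dict) -> bool:
    """Accept the AI-reported figure only if it matches the contract data we hold."""
    recorded = system_of_record.get(claim.product)
    if recorded is None:
        print(f"[BLOCK] No contract record for {claim.product}; do not rely on the AI figure.")
        return False
    if recorded != claim.licences_owned:
        print(f"[BLOCK] {claim.source} reported {claim.licences_owned} licences for "
              f"{claim.product}, but the contract shows {recorded}. Escalate for human review.")
        return False
    print(f"[OK] {claim.product}: AI figure matches the contract ({recorded} licences).")
    return True


# Example usage: only the reconciled answer is accepted.
contracts = {"ExampleDB Enterprise": 250}
verify_claim(EntitlementClaim("ExampleDB Enterprise", 300, "chatbot"), contracts)  # blocked
verify_claim(EntitlementClaim("ExampleDB Enterprise", 250, "chatbot"), contracts)  # accepted
```

The point of the sketch is simply that the verification step is cheap and automatable for structured figures – the trouble, as above, is that humans rarely bother with it for free-text chatbot answers.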
This leads me to a recent situation in Canada which involved a user, a chatbot, some information, and a lawsuit.
A customer, Mr Moffat, needed information about Air Canada’s bereavement travel policy, so he headed to the airline’s website, where he was engaged by the Air Canada chatbot. In response to his question, the chatbot told him he had 90 days from booking his ticket to claim a refund under the reduced bereavement rate.
When he contacted Air Canada to process the refund of almost $900, he was unsuccessful: the actual policy states that the “bereavement policy does not apply to requests for bereavement consideration after travel has been completed”. This policy is displayed on a webpage entitled “Bereavement travel” – the very page that the chatbot’s own reply linked to.
An Air Canada representative admitted that the chatbot had used “misleading words”, but the company claimed that “it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot”.
The Civil Resolution Tribunal (CRT) found Air Canada liable for “negligent misrepresentation”, which arises where “a seller does not exercise reasonable care to ensure its representations are accurate and not misleading”, and the tribunal makes some good points in the judgment.
The complaint was successful and Air Canada were ordered to pay $812.02 to Mr Moffat to cover damages, interest, and fees.
It is to be noted that Air Canada appears to have since taken the chatbot down from its website.
It’s easy to see the similarities here with potential ITAM and licensing scenarios – for example, relying on a vendor’s chatbot or AI assistant for answers on licensing rules, contract terms, or usage rights. Imagine then that the organisation is found to be non-compliant at a later date, no doubt for sums far larger than $800!
From reading the Tribunal information (here) it appears that Air Canada took a relatively lackadaisical approach to this case – likely due to the low sum of money involved. In a software non-compliance scenario involving an aggressive auditor such as Quest or Oracle, one would have to assume that contracts would be provided, clauses argued, and positions strenuously defended. That would make it a far more costly and time-intensive endeavour, perhaps with a lower chance of success too.
Finally – if there’s a risk that chatbots will provide incorrect information which may cost you hundreds of hours of time and millions of £/$/€… will there be any real use for AI chatbots within and between organisations?
As a further cautionary tale, ChatGPT recently started generating nonsensical replies to users – many of which have been shared on Reddit here. One example, given in response to the question “What is a computer”, was a string of grammatically plausible but entirely meaningless word salad.
While these hallucinations are easy to spot, there is always the chance that something more subtle – like the Air Canada AI example – will appear within a chatbot’s response. The moral of the story? Be very, very careful if using any AI chatbot or assistant when making high-stakes decisions.