- cross-posted to:
- technology@lemmy.world
look at the cute little ai, it thinks it’s people! becoming a legal liability and everything! adorbs
Cute little executive team, they think AI is people! Yikes…
Then like any corporation they had the little troublemaker snuffed out.
This entire article is a treasure trove.
According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot’s misleading information because Air Canada essentially argued that “the chatbot is a separate legal entity that is responsible for its own actions,” a court order said.
Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.” … Rivers found that Moffatt had “no reason” to believe that one part of Air Canada’s website would be accurate and another would not.
Last March, Air Canada’s chief information officer Mel Crocker told the Globe and Mail that the airline had launched the chatbot as an AI “experiment.” … Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal to automate every service that did not require a “human touch.”
Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
As an experiment, it certainly produced some findings
Least annoying A/B test.