Retailers Liable for Chatbot Communications: New Accountability Measures
Retailers' integration of generative chatbots raises significant legal questions about accountability for what those systems say. Recent developments indicate that retailers can be held liable under consumer law for misleading information provided by these AI systems.
Generative Chatbots in Retail
Australian retailers such as Woolworths, Kmart, and Bunnings are increasingly adopting advanced generative chatbots. These bots engage customers more dynamically than traditional systems that follow rigid scripts. Despite the advantages, the unpredictable nature of AI-generated communication poses risks for retailers.
Recent Incidents and Legal Implications
- Woolworths’ chatbot, Olive, faced backlash for inappropriate and misleading remarks about its fictional mother.
- Bunnings had to revise its chatbot after it provided illegal repair instructions.
- A Canadian airline was ordered to pay damages due to a chatbot incorrectly offering a bereavement discount.
- In England, a small business encountered a customer threatening legal action over an erroneously high discount suggested by an AI chat.
These examples raise the critical question of whether chatbot communications are considered equivalent to information published on company websites under consumer law. Matthew McMillan from Lander & Rogers emphasizes that retailers cannot absolve themselves of responsibility for misleading information provided by AI.
Liability Under Consumer Law
Retailers are accountable for chatbot communications under the Australian Consumer Law. Misleading statements can carry severe consequences, particularly around refunds and returns, and mishandled customer queries can expose businesses to substantial penalties.
- Specific instances include fines imposed on companies like Valve, Sony, and Mazda for violating consumer guarantees.
- Qantas faced a staggering $100 million fine in 2024 for booking practice violations.
Monitoring and Compliance Strategies
Retailers are urged to assess their AI systems to mitigate risks associated with providing misleading information. Implementing effective monitoring processes is crucial. Retailers are encouraged to develop systems for recourse should a customer be misled.
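One way to make such monitoring concrete is a post-generation review gate: before a draft chatbot reply reaches the customer, it is screened for risky commitments (prices, discounts, refund promises) and escalated to a human if anything is flagged. The sketch below is illustrative only; the pattern list and function names are hypothetical, not a specific retailer's implementation.

```python
import re

# Hypothetical guardrail: flag draft chatbot replies that make risky
# commitments (dollar amounts, discounts, refund or guarantee language)
# so a staff member can review them before they are sent.
RISKY_PATTERNS = [
    r"\$\s?\d",                            # any dollar amount
    r"\b\d{1,3}\s?%\s?(off|discount)\b",   # percentage discounts
    r"\brefund\b",
    r"\bguarantee[ds]?\b",
]

def needs_human_review(reply: str) -> bool:
    """Return True if the draft reply should be escalated to a human agent."""
    return any(re.search(p, reply, re.IGNORECASE) for p in RISKY_PATTERNS)

draft = "Sure! You qualify for a 50% discount on your next order."
if needs_human_review(draft):
    print("Escalating draft reply to human agent")
```

A keyword filter like this is deliberately crude; in practice it would sit alongside logging of flagged conversations so a retailer can demonstrate an active compliance process and offer recourse to customers who were misled.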
Genevieve Elliott, Chief Information Officer at Bunnings, stated that their AI is designed to enhance customer experiences by facilitating product searches and order tracking. However, retailers must remain vigilant to customer feedback and chatbot performance to ensure reliability.
The Role of Disclaimers
While disclaimers indicating the potential for errors in chatbot responses may provide some level of protection, McMillan warns that they are unlikely to absolve liability. The focus is on whether a reasonable consumer could be misled by the chatbot’s assertions.
Current Trends in Chatbot Performance
In recent assessments, many retailers' chatbots were found to decline questions about pricing or returns, instead directing users to human staff. This reflects a cautious approach to managing AI interactions while ensuring compliance with consumer rights.
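That deflection behaviour can be implemented as a simple intent router: classify the incoming question, and if it touches a legally sensitive topic, hand it to a human rather than letting the model answer. The sketch below assumes a pluggable classifier; the topic names and the toy keyword classifier are hypothetical stand-ins for a real intent model.

```python
# Illustrative intent router: questions on legally sensitive topics are
# deflected to human staff instead of being answered by the bot.
SENSITIVE_TOPICS = {"pricing", "returns", "refunds", "warranty"}

def route(question: str, classify) -> str:
    """Deflect sensitive questions; otherwise let the bot answer."""
    topic = classify(question)  # classify() is any intent classifier
    if topic in SENSITIVE_TOPICS:
        return "For pricing and returns, please speak with one of our team members."
    return "BOT_ANSWER"  # placeholder: generate an answer normally

# Toy keyword classifier standing in for a real intent model
def toy_classify(question: str) -> str:
    q = question.lower()
    if "price" in q or "cost" in q:
        return "pricing"
    if "return" in q or "refund" in q:
        return "returns"
    return "general"

print(route("Can I return this drill?", toy_classify))
```

Routing sensitive topics to humans trades some customer convenience for a much smaller surface area of statements the retailer could later be held to.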
Retailers adopting generative chatbots in customer service must navigate the complexities of accountability. As the use of AI in retail continues to grow, understanding and adhering to consumer laws will be vital for maintaining customer trust and mitigating legal risk.