David Eby Urges Ottawa to Mandate AI User Reporting to Police

British Columbia Premier David Eby is calling for national regulations concerning artificial intelligence (AI) user reporting. This follows the tragic incident in Tumbler Ridge, where a shooter, who had previously been banned from OpenAI’s ChatGPT, killed five children and an educator. Eby emphasized that AI companies should be obligated to report concerning user behavior to law enforcement rather than making independent judgments.

Calls for Federal Action on AI User Reporting

During a news conference in Victoria, Premier Eby demanded a clear reporting threshold for AI service providers in Canada. He stated, “The federal government needs a reporting threshold for all artificial intelligence companies that deliver services in Canada.” This would ensure that vital information is communicated to authorities whenever there is a potential threat, prioritizing public safety for families and children.

Amid growing scrutiny, OpenAI employees had reportedly sought to alert authorities about the shooter’s disturbing conversations with ChatGPT, but their concerns were not acted upon. OpenAI eventually banned the shooter from its platform, yet notified law enforcement only after the tragedy had occurred.

Investigation and Accountability

Premier Eby expressed the need for transparency. He called for measures that would require OpenAI to meet with the families of the victims to explain their inaction. “I want them to look in the eyes of these families and tell them why they made the call they did,” Eby remarked.

Evan Solomon, the federal Minister of Artificial Intelligence, met with OpenAI officials in Ottawa to discuss safety protocols. In a statement after the meeting, he stressed the pressing need for timely escalation of credible threats, arguing that internal reviews alone are not enough to protect Canadians.

Government Response and Regulatory Framework

  • Minister Solomon indicated the necessity of establishing credible warning systems for serious violence.
  • The ongoing investigation by the RCMP prevented detailed discussions about the Tumbler Ridge shooter’s interactions with the AI platform.
  • OpenAI is expected to introduce further safety measures tailored specifically to the Canadian context.
  • An online harms bill is under development to address the implications of AI interactions, particularly concerning vulnerable individuals.

Experts have raised concerns about the lack of regulatory oversight of AI, particularly around chatbot interactions and user safety. Taylor Owen of McGill University criticized the current gaps in AI regulation, stressing the urgency of establishing an online safety regulator to address these challenges effectively.

As discussions evolve around AI user reporting mandates, the Canadian government is urged to create a robust regulatory framework. This would help ensure the safety of individuals engaging with AI platforms and prevent future tragedies.