Firms Urged to Prioritise AI Risks on Business Agendas

Reports that lawyers have cited AI-generated case law and entered client information into public generative systems have alarmed regulators and courts. Insurers have taken notice as law firms complete professional indemnity insurance renewals.

Insurance scrutiny rises at renewal

One firm owner told Filmogaz.com that renewal forms contained numerous detailed questions about AI policies, risk plans, and staff use of the technology, and that many firms struggled to give complete answers.

Underwriters say they are not trying to catch firms out. Marc Rowson, a partner at broker Lockton, said most insurers welcome responsible AI adoption but want clear evidence of controls and verification.

What insurers want to know

  • Accuracy of work produced with AI.
  • Data security measures and privacy protections.
  • Human checks and other precautions against errors.

Paragon senior vice president Arjun Rohilla warned that weak governance would worry PII insurers. Firms should avoid vague answers like “experimenting” when describing AI use.

Regulatory red flags and recent cases

Last year, Mr Justice Ritchie criticised solicitors and barristers who used fake case citations. He described their conduct as appalling professional misbehaviour.

Last month, two immigration solicitors were referred to the Solicitors Regulation Authority. Regulators allege the lawyers used generative AI to produce citations to irrelevant or fabricated cases.

One of those solicitors admitted inputting Home Office emails containing client details into ChatGPT. That admission underlines data protection and confidentiality risks.

Conference findings on governance

At the Law Society’s risk and compliance conference this week, delegates answered polls on AI governance. Some 14% said AI was allowed at their firm but largely unmanaged.

Another session showed nearly half of attendees assign responsibility for AI use to individual fee-earners. Only 24% viewed supervising or managing partners as accountable.

Guidance and firm responsibilities

The SRA plans to publish new guidance on safe AI use in the coming weeks. The guidance will clarify rules for generative tools and reinforce existing duties.

Client confidentiality, privilege, and consent remain non-negotiable. The use of AI does not remove solicitors’ professional responsibilities.

Olivier Roth, SRA policy manager for AI and technology, said generative tools must support, not replace, professional judgement.

Practical steps for firms

Experts say firms must prioritise AI risks and put them on business agendas. Insurers will look for concrete policies and effective controls, not pilot programmes.

Eloise Butterworth, head of risk and compliance at HiveRisk, urged firms to build robust frameworks before scaling AI. She recommended involvement from the compliance officer for legal practice (COLP) and ownership by the risk team.

Butterworth warned that blanket bans can backfire. Bans may push staff to use tools without guardrails, increasing insurer concern.

Checklist for insurers and compliance

  • Create a written AI policy with clear controls.
  • Ensure the compliance officer for legal practice (COLP) reviews AI use.
  • Document human verification steps for AI outputs.
  • Protect client data and limit exposure to public models.
  • Train fee-earners and managers on approved practices.

Insurers remain in a fact-finding phase. Firms that demonstrate governance and risk management will be better placed at renewal.