AI is already in your law firm. That was the headline message from our recent session, Getting AI Right – A Practical Guide, hosted live on 4 June.
This popular compliance webinar brought together a panel of experts to discuss the very real risks and challenges posed by artificial intelligence tools. But more importantly, the panel focused on what firms can do now to stay safe and compliant while taking advantage of the technology.
In case you missed it, here are the key insights from the session.
1. You probably have shadow AI in your firm already
One of the most sobering takeaways from the webinar was that generative AI is already being used by lawyers and support staff – often without authorisation. From drafting documents to summarising advice notes and rewriting emails, tools like ChatGPT have crept into daily practice without most COLPs or IT teams being aware.
“That’s one of the big challenges that a lot of people who work in risk and compliance say, isn’t it? We have policies, but we don’t know what people are doing every minute of every day.”
That means the firm is carrying unknown risks relating to confidentiality, data protection, legal privilege and quality control. The first step in any AI compliance programme is to understand what’s already in use across the business.
2. Deploying AI is a compliance project, not just an IT one
The panel warned against treating AI adoption as a purely operational or IT issue. Unlike switching case management systems or upgrading printers, deploying AI changes the way legal work is done. That means it touches every compliance duty under the SRA Codes.
Any procurement or internal development of AI tools should involve a cross-functional team, including compliance, risk and senior management. The product should be thoroughly risk-assessed before it goes live, and governed centrally.
“Be robust. Be bullish. Ask for free trials.”
3. You still have to supervise the robot
Even where the use of AI is sanctioned, the solicitor is not absolved of responsibility. If an AI tool drafts a contract or advice note, the lawyer signing it off must check the output carefully. Clients are entitled to receive competent and accurate advice, not hallucinated text with a gloss of credibility.
Under the Code of Conduct, the solicitor is responsible for their work product, even if it was generated by an algorithm. Firms need to make sure appropriate supervision, file reviews and training are in place.
4. Be transparent with clients
Clients don’t need to know every tool used behind the scenes, but where AI tools are directly involved in drafting, research or other substantive parts of the service, the consensus was that this should be disclosed.
That could take the form of a client engagement letter clause or an updated privacy notice. In some matters, it might need to be discussed directly with the client, especially where there are sensitivities or cross-border issues.
“We’re going to run into all sorts of problems with the regulatory stuff: client confidentiality, client care, and also the more substantive legal stuff.”
5. Public models carry confidentiality risks
There was particular concern about free-to-use public models, such as OpenAI’s ChatGPT. If client data is pasted into these tools, there is a real risk that the information is used to train the model or stored in a way that breaches confidentiality obligations.
This may be mitigated by using enterprise accounts or self-hosted models. But even then, it’s essential that data handling arrangements are clear, and that staff are trained not to use sensitive data in public-facing tools.
6. AI hallucinations are a live risk
We’ve already seen court cases where solicitors have relied on AI-generated content, including case law citations that turned out to be entirely fictional. These hallucinations are not rare glitches – they are inherent to the way generative AI works.
Firms should treat every AI output as a first draft, not a final product. If a junior solicitor produced the same work, would you sign it off?
“The Code doesn’t distinguish between you and the robot. If it’s wrong, it’s your problem.”
7. AI will not train your juniors for you
One concern raised was that overuse of AI tools might erode training opportunities for junior lawyers. If generative AI is used to create first drafts, answer client queries and conduct research, how do junior team members learn the craft?
The panel urged firms to be deliberate about protecting core training experiences and to limit AI use in certain types of work.
8. This touches your insurance and complaints risk
Professional indemnity insurers are only just beginning to get to grips with AI usage. As it stands, most PII policies do not exclude claims arising from the use of AI tools. But that may change if claims increase.
Firms were advised to notify their brokers about any significant AI deployment and to ensure appropriate governance and supervision structures are in place.
Likewise, if a complaint arises and it turns out the advice was generated or influenced by AI, the firm may have difficulty defending it – especially if there’s no record of review.
9. You need an AI policy (and someone to own it)
The final message was clear: firms must take control. That means having a firm-wide AI policy that covers acceptable use, procurement, training, supervision, output review and transparency.
Ownership of the policy should sit with someone senior, whether that’s the COLP, Head of Risk or an AI committee. Usage should be logged. Outputs should be spot-checked. Vendors should be challenged.
“Don’t experiment without a clear plan.”
AI is not a hypothetical future risk. It’s happening now.
Want to watch the webinar? The recording is available free for 30 days here (use passcode O1t.!FC1).