Out-Law Analysis

In-house legal teams must get fit now for generative AI age

In-house legal teams that do the groundwork now on digitisation, data cleansing and change management can get ahead of their own businesses’ adoption of generative artificial intelligence (AI) tools.

While tools such as ChatGPT have shot to prominence in the past year, business use of these and other generative AI tools has so far been more cautious. Most businesses acknowledge the technology’s potential, but few have managed to embed its everyday use in their operations. That is changing, though, as people explore the technology and better understand how it might be used in a business context.

In time, the use of generative AI in business operations will become essential for maintaining competitiveness, but many in-house legal teams are not yet prepared to help their business embrace the technology – or adopt it themselves.

The legal function in businesses across sectors typically lacks the bandwidth to deal with the confluence of issues their business is turning to them for help on – whether advising on the post-Covid business model, helping the board understand geopolitical influences on trade and supply chains, staying ahead of policy, regulation and investor expectations on ESG, or ensuring the business is responding to growing and evolving cyber risk. This state of polycrisis limits the time, resources and finances available to in-house teams to address new and emerging issues.

However, there are everyday tasks that in-house lawyers perform that could effectively be outsourced to generative AI tools – provided those tools are used safely and in an informed way. This would create greater capacity within those teams, enabling them to better serve the business as strategic advisers.

For example, in-house teams could use generative AI tools to support low-value contract review – an activity that normally takes hours could be reduced to seconds, as the technology could help lawyers understand whether agreements are ready to be signed or whether specific clauses need to be renegotiated and redrafted.
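As a simple illustration, the sketch below shows how a first-pass clause review might be scripted against a general-purpose model. It assumes the OpenAI Python client with an API key in the OPENAI_API_KEY environment variable; the model name, the ‘standard position’ in the prompt and the example clause are illustrative rather than a recommended configuration, and the output is only a starting point for a lawyer to verify.

```python
# A minimal sketch of LLM-assisted review of a low-value contract clause.
# Assumes the OpenAI Python client (openai >= 1.0) and an API key in the
# OPENAI_API_KEY environment variable; model name and prompt wording are
# illustrative only, not a recommendation.
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = (
    "You are assisting an in-house lawyer. Review the clause below against "
    "our standard position: liability capped at 12 months' fees, no indirect "
    "loss, 30-day payment terms. List any deviations and say whether the "
    "clause is acceptable as drafted.\n\nClause:\n{clause}"
)

def review_clause(clause_text: str) -> str:
    """Return a first-pass review of a single clause for a lawyer to verify."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": REVIEW_PROMPT.format(clause=clause_text)}],
        temperature=0,   # keep the output as deterministic as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    clause = "The Supplier's total liability shall be unlimited."
    print(review_clause(clause))  # output is a starting point, not legal advice
```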

Generative AI could also be used to draft basic documents, such as employment settlement agreements, to flag relevant legal precedents, or to generate summaries of long and complex reports. It could be used to answer queries instantaneously – such as whether an activity undertaken within the business complies with an internal policy – without the lawyer having to trawl through the policies at length to find the relevant section.
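As a sketch of the policy-query idea, the snippet below finds the policy section most relevant to a question using simple word overlap, so the relevant text can be put to a generative AI tool – or read directly – instead of trawling the whole policy set. The policy extracts and the scoring approach are illustrative only; a real deployment would use proper search or embedding-based retrieval.

```python
# A minimal sketch of finding the policy section relevant to a query before
# asking a generative AI tool (or a lawyer) to answer it. The policy text and
# scoring approach are illustrative; real deployments would use proper search
# or embeddings rather than word overlap.
import re
from collections import Counter

POLICY_SECTIONS = {
    "Gifts and hospitality": "Employees may accept gifts up to GBP 50 in value...",
    "Expenses": "Travel must be booked through the approved provider...",
    "Data handling": "Personal data must not be stored on unmanaged devices...",
}

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def most_relevant_section(query: str) -> tuple[str, str]:
    """Return the (title, text) of the section sharing the most words with the query."""
    q = tokenize(query)
    return max(
        POLICY_SECTIONS.items(),
        key=lambda item: sum((tokenize(item[1]) & q).values()),
    )

title, text = most_relevant_section("Can I accept a client gift worth 40 pounds?")
print(title)  # expected: "Gifts and hospitality" - the section to cite in the answer
print(text)
```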

However, in-house legal teams must overcome barriers to successfully adopt generative AI tools.

An immediate step they can take is to digitise and cleanse the data they want generative AI tools to base their outputs on. Some of the data that businesses would want to inform such outputs may, for example, exist in paper form only and need to be added to electronic systems – generative AI tools cannot interrogate such data unless it is digitised. In-house teams will want to ensure that such data is available and organised by the time the rest of the business is ready to use generative AI, rather than waiting until then to first approach the issue.
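A minimal sketch of that groundwork might look like the following: OCR a scanned, paper-only agreement and apply light cleansing before storing the text in a structured form. It assumes the pdf2image and pytesseract packages, with the underlying Poppler and Tesseract tools installed; the file paths and cleansing rules are illustrative only.

```python
# A minimal sketch of digitising a scanned, paper-only agreement and doing light
# cleansing so it can later be queried by generative AI tools. Assumes the
# pdf2image and pytesseract packages (with Poppler and Tesseract installed);
# the file paths and cleansing rules are illustrative only.
import json
import re

from pdf2image import convert_from_path
import pytesseract

def ocr_pdf(path: str) -> str:
    """Run OCR over every page of a scanned PDF and return the raw text."""
    pages = convert_from_path(path)
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

def cleanse(text: str) -> str:
    """Basic cleansing: collapse whitespace and strip common OCR artefacts."""
    text = text.replace("\x0c", " ")        # page-break control characters
    text = re.sub(r"[ \t]+", " ", text)     # runs of spaces and tabs
    text = re.sub(r"\n{3,}", "\n\n", text)  # excessive blank lines
    return text.strip()

if __name__ == "__main__":
    raw = ocr_pdf("scanned_agreements/supplier_agreement_2014.pdf")  # illustrative path
    record = {"source": "supplier_agreement_2014.pdf", "text": cleanse(raw)}
    with open("digitised_agreements.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # one cleansed record per line
```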

There are data risks for in-house teams to manage too when exploring the use of generative AI.

Publicly available tools like ChatGPT are open to everyone and may be trained on the data input to them. In this context, legal teams must avoid exposing confidential client information, personal data and valuable intellectual property to systems that may disclose such data to others. Where in-house teams see utility in exposing such data to generative AI tools, they will want to ensure that those systems are closed to others.
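One practical control, sketched below, is to strip obvious personal data from text before it leaves the business for an external tool. The patterns are illustrative and far from exhaustive – a real control would combine proper personal data detection, contractual safeguards and a closed or private deployment.

```python
# A minimal sketch of stripping obvious personal data from text before it is
# sent to an external generative AI service. The patterns are illustrative and
# far from exhaustive; note, for example, that names are not caught at all.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),    # email addresses
    (re.compile(r"\+?\d[\d\s-]{8,}\d"), "[PHONE]"),          # phone-like numbers
    (re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"), "[NI NUMBER]"),  # UK NI number shape
]

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder before the text leaves the business."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Settlement for Jane Doe, jane.doe@example.com, contact +44 7700 900123."
print(redact(prompt))
# Settlement for Jane Doe, [EMAIL], contact [PHONE].
# The name survives these simple patterns - a reminder of their limits.
```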

There is also a risk that generative AI systems that pull data from the internet produce unreliable outputs – we know the extent of inaccurate information that proliferates in the public domain, so human oversight and sense-check processes need to be put in place to ensure output is reliable. There have also been reported examples of so-called ‘hallucinations’, where AI tools generate seemingly authentic information that is entirely made up – in the US, lawyers were sanctioned after citing and relying on precedents generated by AI that did not exist.
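Part of that sense-check could be automated – for example, flagging any case citation in an AI-generated draft that does not appear on a verified internal list, so that a lawyer checks it before relying on it. In the sketch below, the citation pattern, the verified list and the second, invented citation are all illustrative.

```python
# A minimal sketch of a sense-check on AI output: every case citation the tool
# produces is checked against a verified internal list before a lawyer relies
# on it. The citation pattern and the verified list are illustrative only; the
# point is that unverified references are flagged for human review, not trusted.
import re

VERIFIED_CITATIONS = {
    "Donoghue v Stevenson [1932] AC 562",
    "Hadley v Baxendale (1854) 9 Exch 341",
}

CITATION_PATTERN = re.compile(r"[A-Z][\w']+ v [A-Z][\w']+ [\[(]\d{4}[\])][^,.;]*")

def unverified_citations(ai_output: str) -> list[str]:
    """Return citations in the AI output that are not on the verified list."""
    found = [c.strip() for c in CITATION_PATTERN.findall(ai_output)]
    return [c for c in found if c not in VERIFIED_CITATIONS]

# "Smith v Jones [2019] EWCA Civ 999" is deliberately made up for this example.
draft = ("As held in Donoghue v Stevenson [1932] AC 562, and in "
         "Smith v Jones [2019] EWCA Civ 999, the duty is clear.")
for citation in unverified_citations(draft):
    print("NEEDS HUMAN CHECK:", citation)  # flags the possibly invented authority
```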

Another issue for the leaders of in-house legal teams to address in the context of using generative AI is the cultural mindset necessary to exploit the technology.

While there are pockets of people within businesses who are curious and asking questions about generative AI, most are not considering what they can and need to do differently in their day job. In-house legal teams are no different in this regard – a shift in mindset is needed to enable widespread adoption of the technology and deliver the transformational change it can bring.

Team leaders should consider whether there are case studies they can share with colleagues showing how generative AI can safely be used in their work now, and support that with training to upskill the team as necessary – lawyers won’t suddenly need to code, but they will need the right skill set to ask the right questions, get the right answers, and understand both the capabilities and limitations of the systems they are using. Robust governance around quality assurance of the outputs is imperative.

With cultural change so important, consideration should be given to recruiting legal operations specialists to drive this initiative.

It is also vital that the vision for and purpose of using generative AI is clearly communicated to in-house lawyers who may share some of the broader cynicism that persists about how the technology might impact on jobs.

The reality is that generative AI should not take away jobs but rather create new ones and change the nature of the roles people perform. Many in-house lawyers will be enthused about being able to effectively outsource burdensome, low-value administrative tasks to generative AI tools and focus more of their time on higher value, strategic activity – like considering and advising proactively on regulatory change on the horizon.

At this time, some businesses may feel that there are too many risks – too much scope for error in outputs – to adopt generative AI tools on a widespread basis in their operations. However, technological progress is fast and in-house legal teams cannot wait for the risk appetite of their business to change before thinking about generative AI.

Lawyers should accept that change is coming – and, in some respects, is already here. There is significant value in talking about the technology with peers and understanding how others are using it, and in taking the necessary internal steps now so that in-house teams are in a position to act when their business inevitably turns to them for advice on operationalising generative AI tools.
