
Podcast: AI and the legal function

We investigate the impact of the seemingly sudden leap in AI capability on in-house legal teams.

 

The way organisations perform legal functions will change. Hear how from Alastair Morrison, Lucy Shurwood and Orlando Conetta of Pinsent Masons, and University of Miami law professor Michele DeStefano.

Find out why AI has taken such a huge leap in capability, what kinds of tasks it can now perform, and what decisions in-house legal teams will have to make about how they structure and resource their operations.

 
Transcript

Matthew Magee:

Hello and welcome to Brain Food for General Counsel, a podcast that investigates the biggest issues facing your organisation. My name is Matthew Magee, and I am a journalist here at Pinsent Masons. This time we are going to investigate something that has barely left the lips of commercial lawyers in the past six months or so, and that is the impact that artificial intelligence, or AI, is going to have on in-house legal functions.

That artificial intelligence or AI will change the world seems certain. It usually takes decades for the inventors of life-altering tech to warn that genies are perhaps best kept in their bottles, but the people who have helped invent AI are issuing grave warnings even as new versions of the software are published.

A statement released in May 2023 said: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war’. That’s pretty strong stuff from anyone, but from leaders at the companies actually still developing the technology – OpenAI, Google DeepMind and Anthropic – it is nothing short of extraordinary.

So: we’re in for some big changes in, probably, most aspects of our life. But not yet. Our lives are not about to be turned upside down in the next couple of years, but our working lives in the legal field might change pretty quickly.

We’re going to look at what impact rapidly accelerating AI development is going to have on in-house legal teams and the organisations they work for. How will it improve current tasks? What tasks does it make possible that we haven’t even thought of? And what decisions will in-house legal teams have to make about how they structure and resource their operations?

So what has changed? We all used ChatGPT last autumn and got a bit thrilled and a bit scared, that’s what. All of a sudden the decades-long promise of AI – that it would just work, it could just answer questions or do jobs we asked it to in normal language – was seemingly fulfilled.

So what had actually happened?

Well, we all finally got to see the fruits of systems developed by academics and Google researchers in 2017. This was called ‘transformer’ architecture and it fundamentally changed how AI systems worked.

We previously had to train systems so that they had a facsimile of some kind of understanding of the information they were processing. This made them incredibly labour-intensive and every time you wanted to use them with new data you had to retrain the system.

Post-transformer, you could create large language models. Orlando Conetta, who is in charge of product engineering at Pinsent Masons, explains the impact this would have on a typical task, such as reviewing lots of contracts to see if a change of control of a company requires consent.

The advantage is not just that this can be done at huge scale – once you have got the system set up – but that its results are more sophisticated than what was possible before.

Orlando Conetta:

With the legal AI tooling which was already on the market, many of those platforms would come with pre-built models that would identify change of control provisions across contracts, which is great – it's really good – but what that’s not giving you is the differentiation between those change of control clauses that require consent and those that do not. If you wished to do that, you would need to come up with a vast training set on your own, manually tag those two variations of change of control clauses and train a new model from scratch, which you would then have to test.

So, now if we move to today, with large language models, how would that be different? Well, we don't need to train a new model. Instead, what we can do is take a GPT model and come up with a prompt – an instruction along the lines of 'read the following clauses and determine if change of control requires consent'. You can run that prompt in parallel many times across all of your documents and get structured content out, which would then be used by you and your lawyers for further analysis – so, you may not trust it straight away – and you can look at its explanation, look at its text and make a determination there. But the important thing is you have not been required to train a bespoke model from scratch, and so the barriers to entry have been reduced. You are getting more than just a standard classification out of the system; you are getting an explanation as well. You are able to iterate, consequently, far more easily and try different approaches against the data and the task in front of you.

The real power is that it's a general purpose computer that can be put to language tasks. So, much like in a normal computer you'll have an input, an algorithm and an output; what is different is that this is being applied to irregular, natural language rather than the regular language of computer programs. That is one of the ways to look at this: you can take that capability, embed it in systems and apply it to a wide range of use cases. We are now extending computation into the social sciences, and in our case, the legal industry.
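Orlando's description maps fairly directly onto code. The sketch below is purely illustrative – it is not Pinsent Masons' tooling – and it assumes the OpenAI Python SDK, a model name such as "gpt-4o" and a hypothetical change-of-control prompt; a real system would add error handling, cost controls and, as Orlando notes, lawyer review of every explanation before anything is relied on.

```python
# A minimal sketch of the prompt-driven clause review described above. All names here
# (the model, the prompt, the helper functions) are assumptions for illustration only.
import json
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Read the following contract clause and determine whether a change of control "
    "requires consent. Reply with JSON: "
    '{"requires_consent": true/false, "explanation": "..."}\n\nClause:\n'
)

def review_clause(clause_text: str) -> dict:
    """Run the same instruction against one clause and return structured output."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": PROMPT + clause_text}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def review_portfolio(clauses: list[str]) -> list[dict]:
    """Run the prompt in parallel across many documents, as Orlando describes."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(review_clause, clauses))
```

Each result carries both a classification and an explanation, so a lawyer can check the reasoning rather than simply trusting a label.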

Matthew:

AI in legal work is not new; people have been trying to get computers to understand and process legal information for decades. In the US, researchers used machine learning to predict who would get pre-trial detention in a way that, if adopted, could make detention decisions more consistent, while a University of New South Wales study found that several groups of researchers have built machine learning programs that can predict the outcomes of trials.

Law firms, including Pinsent Masons, have worked for years with varying levels of success to use computers to process legal information, finding particular success with reviews of huge contracts or large numbers of contracts.

Lucy Shurwood of Pinsent Masons, who’s been working on using technology to deliver legal services for over a decade, explains what it’s been used for.

Lucy Shurwood:

The biggest benefit in the kind of work that I do has been around extracting information from contracts – making it a lot easier to extract the relevant information from a contract so that it can be reviewed and evaluated. Anything to do with contractual terms, really: the key terms that a client might be interested in, like 'when can I terminate the agreement?'; 'what is the governing law of the agreement?'; 'if I want to transfer the contract, can I do that?'; 'are there any conditions?'. Clients usually want this because they have large volumes of contracts, and reviewing them all using a human would take far too long and cost far too much. So, it's about being able to extract information really efficiently and support decision-making.

There are two big impacts in taking a technology-led approach and using AI for this. The first is that you are able to get to the information that you need much more quickly. I have, for example, reviewed 250 contracts to answer a particular question, and in that case I did it in four minutes. So, you can get to the information a lot faster; that's obviously more efficient for clients and, generally, it's more cost-effective. The other big impact, though, is that in the old days, the way this would have been done is you'd have a huge pile of 200 contracts and a lawyer would faithfully sit there and read them all, and probably fill in a Word document, or a handwritten document, and then someone would type it up. So, for every contract you'd end up with a report – 200 contracts, 200 reports – and a client doesn't want to read 200 reports on what all of their contracts say, so somebody else's job would then be to read all 200 reports and write a summary. That would take days and it was often quite a lot of work. We now generate all of that automatically: all of the information that's extracted from the contracts, combined with, where relevant, the lawyers' input, is stored in a database and – literally, at the push of a button – we can create a summary report for a client. So, again, it's much more cost-effective, it's much faster, but it's also giving the client much more relevant information. Instead of having to trawl through 200 separate reports, they get one nice summary.
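As a rough illustration of the workflow Lucy describes – extracted key terms stored centrally, then a summary generated at the push of a button – here is a minimal sketch using only the Python standard library. The field names and schema are invented for the example and do not describe any real system.

```python
# Toy illustration: store extracted contract terms in one place, then roll them up
# into a single client-facing summary instead of 200 separate reports.
import sqlite3
from collections import Counter

conn = sqlite3.connect("contract_review.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS contracts (
           name TEXT, governing_law TEXT, termination TEXT, assignable TEXT
       )"""
)

def add_review(name, governing_law, termination, assignable):
    """Store the key terms extracted (by software plus lawyer input) for one contract."""
    conn.execute("INSERT INTO contracts VALUES (?, ?, ?, ?)",
                 (name, governing_law, termination, assignable))
    conn.commit()

def summary_report() -> str:
    """Collapse many individual reviews into one short summary."""
    rows = conn.execute("SELECT governing_law, assignable FROM contracts").fetchall()
    laws = Counter(law for law, _ in rows)
    not_assignable = sum(1 for _, answer in rows if answer.lower().startswith("no"))
    lines = [f"Contracts reviewed: {len(rows)}",
             f"Not freely assignable: {not_assignable}"]
    lines += [f"Governed by {law}: {count}" for law, count in laws.most_common()]
    return "\n".join(lines)
```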

Matthew:

So this technology is already having a major impact on in-house legal functions. Law professor Michele DeStefano of the University of Miami says that it is part of a much wider aim of digital transformation that legal functions are pursuing.

Michele DeStefano:

So, I think the biggest trend for in-house legal departments right now is digital transformation, and what in-house legal departments are trying to do is create a way to manage all of the intake, all of the requests from the business. One of the most common complaints I hear – and I interview general counsels all the time, hundreds of them – is the same requests coming from the business, yet no streamlined way to provide the answer. So, they need to figure out a way to use digital tools so that they can free themselves up from some of the grunt work – the answers that are easy and could be automated – so that they can get to the strategic work. We're seeing legal departments now thinking of their departments like a business: with P&Ls, with purpose statements, branding statements, taglines – and their goal is to actually behave and be seen the same as other departments.

And I think the general counsels that are really with it right now are restructuring their departments, hiring legal ops people, looking at alternative legal service providers – which I know is a misnomer – the type of services which can use technology to pull everything together so you have no version wars; you can actually mine your contracts for provisions that could save the company a lot of money. Often there are provisions in there where there might be breaches and nobody's going after the money for those breaches – and, more than that, you can look at the contracts to find secondary opportunities. Maybe there is something in the contract that gave you rights to something else that no one saw.

Matthew:

This is what was possible before autumn 2022. But once ChatGPT caught the public imagination – and especially once GPT-4 went live in March 2023 – expectations and predictions of what generative AI can do have been a heady mix of the gleeful, the fearful and the bold.

Before we get further we’re going to have to get straight on some terms: what do we mean by language models, large language models, and generative AI?

AI systems used to try to understand the meaning of concepts and use logic and rules to process them.

Language models dispensed with meaning altogether and put together material by looking at the words used and predicting what word should come next based on how humans use language.

Large language models do this but on a huge scale – they predict based on training on a huge proportion of human written knowledge – usually massive chunks of the internet that are fed into the machine.

Generative AI learns how data is distributed across groups or classifications and can produce new examples of its own, given a context or prompt.
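To make the 'predict the next word' idea concrete, here is a toy sketch. It only counts word pairs in a tiny sample sentence, whereas real large language models learn from vast corpora with neural networks, but the underlying task – guess the most likely next word from what came before – is the same.

```python
# A toy next-word predictor: count which word follows which in a small sample
# and pick the most common continuation.
from collections import Counter, defaultdict

corpus = "the contract may be terminated if the contract is breached by either party".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the sample text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))       # 'contract'
print(predict_next("contract"))  # 'may' or 'is', whichever the tie-break returns
```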

So what can generative AI do for legal teams in the near future? Michele, then Lucy explain.

Michele:

The types of tools that I think lawyers will be using in-house that have AI embedded in them are the tools that will help them develop a front door to legal, help them manage their contracts and, mostly, collect data across the company that can be used to make predictions about the future – to prevent risk, but also to identify areas where new revenue streams can be created. So, one can imagine an in-house legal department using these various AI tools to be able to see how people in the business are working and making suggestions as to how it should change, to make sure that the right regulations are being followed, but also that the standards and ethics related to the company's ESG goals are being met.

One could also see using the data about customers to predict where and how customers will behave in the future, which is new for in-house. Some in-house legal departments don't focus as much on the customer – they focus on their internal business client – but I think that's changing. The general counsels that are starting to digitally transform are thinking about 'how am I going to create some kind of front door to legal?' Meaning, if you, as a business person, want to find a resource, talk to one of our people or need help, you might come to a platform that will help you through a chat using some type of AI tool, so that you get faster results – sometimes a 'do-it-yourself' answer, and other times it helps you get to the right person faster.

Lucy:

I think what we're seeing from the early experiments with generative AI is that it has taken things that were either very, very difficult or very, very time consuming – or both – and made them much more accessible. So, for example, we have been working on extracting a particular piece of data from contracts for about two years, and it looks as though generative AI will be able to do that with, maybe, a couple of weeks' training. So, it's the difference between investing an awful lot of time and expense in being able to do something versus being able to fine-tune relatively quickly and reliably. It's all very well being able to extract information from a contract, but when you're looking at 'can I do this?', you need more than to know 'yes, there is a termination clause'. You need to know what the termination clause says and how it impacts specifically on the act that you want to undertake. Up to now, that really has, for the most part, been something that a human has to do, and I think where we're going to see AI taking us now is that it will be able to do more of that subjective judgement-making. So, it will recognise if the wording says a particular thing and, if the prompt has been written well, it will be able to give you a conclusion.

Matthew:

But to do any of this at all the system will need access to data – good quality data in a consistent format in a single place. That might sound simple but for many organisations it’s a pipe dream.

Lucy explains.

Lucy:

The things that you need in order to be able to take advantage of many of these tools are: the data in the first place; the data in the right format; and, if possible, the data structured so that it's easier to analyse. A lot of businesses still don't have that – they're still quite contract-heavy, or they don't store documents centrally or in a consistent way – which means that, to be honest, you can have some of the best tech in the world, but if your underlying data is in a very poor state, you're always going to struggle to get useful information out.

Matthew:

Alastair Morrison is in charge of client strategy at Pinsent Masons and thinks about generative AI as something whose main impact will be to expand the capacity of legal functions.

Alastair Morrison:

So, in-house legal teams… we often talk about a permacrisis – the sheer range of issues that are being dealt with means that often in-house teams just don't have the bandwidth to deal with a lot of stuff, and you kind of put things away that you would love to be able to do, but you don't have the time, the resource or the finances to do them. That can be around making sure that you've got a single source of truth, if you like, for all your data and your documentation – you know where all your contracts are, you know exactly what they're all saying and you've got a proper data bank, if you like, of everything you've got going on. I can see applications for that: the kind of projects that you would not otherwise have the time, the money or the resource to be able to do – these kind of 'clean-up' projects that get things sorted out and build a new kind of institutional integrity and resilience.

I often talk about the universe of law expanding at an ever-increasing rate, because it does. The regulatory environment – what industry is not regulated? We see regulation expanding; with that, increased compliance; with that, increased investigations as a result of people falling down on things. So, you see all of these things moving all the time. I think this probably offers the opportunity to get your arms around things that you couldn't otherwise get your arms around, to feel you've got better risk-management processes and protocols and to be better able to do this.

So, you then see generative AI augmenting the legal team and augmenting the capabilities of the legal team – not replacing it, not automating it, but being a very powerful tool to get stuff done that wouldn't otherwise be achievable. What you need to be looking at is working through with clients what the application is, what the particular issue is, what the problem is that you're seeking to solve and whether generative AI has a place in seeking to solve that problem, so it becomes an augmenting tool in terms of helping to develop legal solutions. Those use cases should be co-developed between businesses like ourselves and in-house teams around the issues they're seeking to solve. I don't think one should be going around with a solution trying to find problems for it. I think a far better way of thinking about generative AI is that it is a tool in the toolbox that needs to be deployed depending on what the particular problem is – much more bespoke and service-driven than just developing products and going into the market saying 'here's a fantastic product for the following'.

Matthew:

Michele thinks that it’s a great tool to get started on a project, and that we’re maybe thinking about its capabilities in the wrong way.

Michele:

You think about, for a moment, you have all these associates – or trainees, outside the US. So, they get an assignment to go and draft a summary judgment motion. They don't even know where to start. AI – ChatGPT 4 – is where to start, and the law firm lawyers that don't start there are just silly. Now look, you've got to be careful: it lies, and it lies really convincingly. But what it does is let you take something accurate and say 'change the tone'; 'can you make this more business-like?'; 'can you make this one page?'; 'can you argue a little bit more for X side versus Y side?'. And I think that that is where young people should start out. It will help. So, the risk is they won't check it, but that's an easy thing to train. And of course, you have to cite it – they have to admit that they used it. I think it can be a great tool to take some of the drudgery out of associate-level work.
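As a hypothetical sketch of the iterative drafting loop Michele describes, the snippet below feeds her example instructions to a model one at a time, keeping the full conversation so each revision builds on the last. It assumes the OpenAI Python SDK and a model name such as "gpt-4o"; the output is a starting point only, which still has to be checked and its use disclosed.

```python
# Hypothetical sketch of iterative drafting with a chat model; not a recommendation
# of any particular provider or workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

drafting_steps = [
    "Draft a first outline of a summary judgment motion based on the facts below: ...",
    "Change the tone: make it more business-like.",
    "Can you make this one page?",
    "Argue a little bit more for X side versus Y side.",
]

messages = []
for step in drafting_steps:
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})

draft = messages[-1]["content"]  # the latest revision; a lawyer must still check and cite it
```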

Matthew:

Alastair describes how it could connect to other systems and fundamentally reshape how contracts work and how disputes express themselves, perhaps even eliminating them from view for many organisations.

Alastair:

It's interesting to see the evolution of contracts, because when I first started, many people would say: well, you sign the contract – or maybe you don't sign, in some cases – stick it in the drawer, get on with the job and get it done, and then you pull the contract out at the end if you've got a problem; you only look at it if you've got a problem. So, there was a kind of denial about the contract you were working on, and contracts evolved to being much more of a front-end management tool: if you do not put in the relevant notices, if you do not use the contract almost as a management tool – a programme management tool – throughout the process, you would be doing that to your potential prejudice. So, if you think there's a problem emerging, give me notice so we can manage it, work out the cost and see what's going on.

I think the evolution of this now is that systems start talking to each other: contracts start to talk to programme management tools and start to link in to live events that are going on in a project at the time. Just take a very simple one – an adverse weather condition, which, in particular contracts, might entitle people to take longer to complete because the weather conditions were adverse, people couldn't work on site and you needed longer to complete the project. There are sometimes disputes around that, as to how materially adverse the weather conditions were, those kinds of things. What you could see now is something like generative AI instantly resolving that and, where it's wired into a programme, the change to the programme being made automatically. So, what you might see emerging in the future is far fewer disputes emerging on contract issues; the impact of change and variation can instantly be adjudicated, sorted out, with programme issues and consequences resolved straight away. That's quite interesting as to where that goes – I think that's far further down the line – but you can see how, when these all link and talk to each other, applications that we can't envisage at the moment could well emerge.
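Alastair's adverse-weather example can be sketched in a few lines. The threshold, dates and data structures below are invented purely for illustration – real extension-of-time clauses are far more nuanced – but they show how a contract term, a site log and a programme date could be wired together so the consequence is worked out automatically rather than disputed.

```python
# Toy version of the adverse-weather example: a contract term plus a site weather log
# automatically produce a revised completion date. All figures are invented.
from datetime import date, timedelta

CONTRACT = {
    "adverse_weather_threshold_days": 5,   # lost days the contractor bears before relief
    "completion_date": date(2024, 9, 30),
}

site_weather_log = [  # days on which work was recorded as impossible due to weather
    date(2024, 2, 1), date(2024, 2, 2), date(2024, 2, 15),
    date(2024, 3, 3), date(2024, 3, 4), date(2024, 3, 5), date(2024, 3, 6),
]

def extension_of_time(contract: dict, lost_days: list[date]) -> timedelta:
    """Grant an extension only for lost days beyond the contractual threshold."""
    excess = max(0, len(lost_days) - contract["adverse_weather_threshold_days"])
    return timedelta(days=excess)

new_completion = CONTRACT["completion_date"] + extension_of_time(CONTRACT, site_weather_log)
print(new_completion)  # 2024-10-02: the programme shifts by two days, no dispute required
```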

Matthew:

What it will certainly do is the kind of large scale information processing and risk assessment task that eats up huge human resource but can be done almost instantly by a computer. Alastair, then Michele, explain.

Alastair:

When you look at legal opinions where the facts aren't in dispute but the question is what the legal consequence of something is, use cases around legal opinions definitely exist – and use cases around comparative law. So, when you're looking at legal systems across the world, you need to know and understand what the provisions are in different jurisdictions; that's interesting. Then there are compliance areas, when we think about all the compliance and regulatory complexity that major corporates have to deal with, and the cost and expense that has probably been required in the past to answer and identify those issues.

Michele:

So, one can imagine an in-house legal department using these various AI tools to be able to see how people in the business are working and making suggestions as to how it should change, to make sure that the right regulations are being followed, but also that the standards and ethics related to the company's ESG goals are being met.

Matthew:

And this of course frees up the lawyers to do other things, which may end up being one of its most powerful qualities.

Michele:

In-house needs to start being involved in product development at the early stages. So, consider an in-house legal department of a bank. New products are constantly being developed and they're using AI – for example, a consumer lending tool that uses AI to help it make decisions about who should be able to get this type of loan at this type of rate. We have to be careful, because there are humans involved in creating the technology, so there can be cognitive bias baked in. If somehow that tool is discriminating against single women with children, that's a problem. And what happens in the product development cycle is, generally, people don't think of lawyers as creative – sometimes they think of us as pariahs: you put a creative idea in front of us and we're going to tear it apart – and lawyers also don't think of it as their job to be helping the creative development of new product ideas. I'm a big believer that all that has to change, because if we don't involve the lawyers at the beginning of the process, number one, they're not so sold on the idea, so by the time it gets to them everyone just thinks of them as 'they don't get it, they don't understand how cool this is, they're just the red-flag people and the no people'. And also, sometimes it's too late: a lot of time has been wasted if we develop this whole product and, at the end, legal says no, you can't do that, for X, Y and Z reasons. And then you start to think, OK, that means lawyers – in-house lawyers, especially – need new skillsets and mindsets.

Matthew:

Possibly the most daunting – and exciting – thing about this explosion of interest in AI is that we just don’t know what it might do. Sure, it’s alarming that some of the inventors of it say they don’t know either, but it certainly does open a lot of possibilities up to the imagination, as Orlando outlines.

Orlando Conetta:

The risk is that we, as an industry, see this as a race: a race to create the magical legal large language model that we are just going to be able to apply to all of our tasks. There is – and this is the bigger point, I think – an opportunity risk that we miss the chance to use this large language model, or language computer, to interrogate natural language sources of law – be it contracts, legislation or arguments – in order to understand ourselves better, both in terms of legal theory and legal practice, and take that knowledge and improve the service we provide. Because we still have huge unmet legal needs, both in the commercial space and the public space, and I would be sad if we saw these tools as a silver bullet or a quick fix, rather than an opportunity to do the hard yards and learn more about ourselves.

Matthew:

Thanks for joining us for the latest Brain Food for General Counsel podcast. Remember, you can keep up with hour-by-hour business law news coverage by the Out-Law reporting team at pinsentmasons.com. And don't forget to subscribe to us wherever you get your podcasts and, if you've enjoyed this or past programmes, please do like, review or share them – it really does help us reach people who might be interested in our thoughts. Until next time, goodbye.

Brain Food for General Counsel was produced and presented by Matthew Magee for international professional services firm Pinsent Masons.
