• AI and the legal profession are working together.
• Purpose-built safeguards and guardrails are necessary.
• AI never overrules a lawyer’s legal and ethical responsibilities.
Generative AI is already more or less everywhere – both on display and invisibly pulling data together behind the scenes of our daily lives. But the technology has had a turbulent first year, including leaking private data into its public model, regularly “hallucinating” – inventing answers to questions out of thin data, much of which has no basis in truth – and proving notoriously dependent on the quality of its training data, which can skew its results.
If you trained a large language model only on data that supported flat-earth theory, for instance, your generative AI would reject any answer that depended on heliocentrism for its truth – which, just by way of example, would be a fundamentally stupid thing to do.
All that being so, we sat down with former litigator and now CPO at NetDocuments, Dan Hauck, to explore the use of AI in the legal profession.
In Part 1, we brought up the case of Levidow, Levidow & Oberman, PC, a law firm that was fined $5,000 after one of its lawyers used generative AI to pull together case histories to plead a case, only to discover the AI had hallucinated them and the basis of the case was erroneous.
As we spoke to Dan, though, he reassured us that NetDocuments had negotiated a set of legal exemptions to the way generative AI usually works with the likes of Microsoft, so that, for instance, privileged client data would never be retained or re-used by the large language models at work.
AI and legal exemptions.
Those exemptions, he said, formed the basis of a net that could let the legal profession use generative AI safely – always remembering that no tool or piece of technology absolved a lawyer of their duty of care towards the facts, their clients, or the law itself.
We wanted to dig into that net a little further.
THQ:
Those exemptions sound like a reasonable start in reassuring the public at large that their data will be safe if their legal representatives use AI to compile their case notes.
So you’re certain the technology is as sophisticated as it needs to be for people to trust their potentially life-or-death data in its hands?
DH:
Yeah, from a data security standpoint, we feel very confident about the approach we’ve taken. When we think about the ability to use generative AI in legal practice, there are a couple of different considerations.
One is: what’s the right method through which to engage with the technology? Traditionally, ChatGPT, for instance, works as a very individual, chat-style conversation. One of the challenges we’ve identified in working with customers is that every lawyer might ask a question differently. They might include different information, and that can produce different outcomes in the response.
Many law firms are really looking for high-quality, consistent outcomes. So we essentially took generative AI and added it to a no-code platform we developed, called Pattern Builder. That allows organizations to build applications that use generative AI while controlling what that question or prompt looks like, and then to evaluate the outputs.
Then they can make tweaks to that, so they can really dial in and not depend on each individual lawyer to ask questions exactly the right way.
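The templated-prompt idea Dan describes can be sketched in a few lines. This is a minimal illustration only – the template text, `build_prompt`, and the field names are hypothetical stand-ins, not NetDocuments’ actual Pattern Builder API.

```python
# Hypothetical sketch: a firm-approved prompt template that fixes the
# wording and structure, so individual lawyers only supply the
# case-specific fields rather than writing prompts from scratch.

CONTRACT_REVIEW_TEMPLATE = (
    "You are reviewing a commercial contract for {client_name}.\n"
    "Summarize the indemnification clauses below and flag any term "
    "that shifts liability to the client.\n\n"
    "Contract text:\n{contract_text}"
)

def build_prompt(template: str, **fields: str) -> str:
    """Fill a vetted template so every lawyer sends the same prompt
    structure, varying only the matter-specific details."""
    return template.format(**fields)

prompt = build_prompt(
    CONTRACT_REVIEW_TEMPLATE,
    client_name="Acme Corp",
    contract_text="Vendor shall indemnify Acme Corp against...",
)

# The completed prompt is what gets sent to the language model;
# the lawyer never edits the template's wording itself.
print(prompt)
```

Because the firm evaluates and tweaks the template centrally, every lawyer running the “app” gets the same prompt structure – which is the consistency Dan is describing.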
The legal profession and AI training.
THQ:
Like algorithmic prompt engineers?
DH:
Kind of, yeah. With that kind of consistency, the lawyer using the generative AI in their legal practice can simply run an app and know that it’s something that their firm has evaluated, and that is fit for the use case for which they’re using it.
THQ:
This is a left-field question, we know. If you have multiple law firms using the same system, which includes the app to reduce prompts to the same “right” answer, are they not, over time, going to get very similar or exactly the same results? And doesn’t that lead to a kind of AI-driven atrophy in the legal system? Like algorithms playing eternal rock, paper, scissors?
DH:
One of the things that we found in talking with customers is that they really do have unique areas of expertise, unique areas of work that they have developed over the years.
Really, what they’re looking to do is not just get vanilla answers out of ChatGPT. They’re looking to be able to pull in that prior experience and expertise and use that to inform that next generation of work. And they want to be able to scale that in a unique way.
It’s been fun to work with customers in building their apps, because what they’re essentially doing is taking their processes and their knowledge, which is why clients come to them in the first place, and building generative AI apps around that. We’ve found that to be really effective, aligning with where customers want to go and still allowing them to maintain that differentiation.
How to differentiate AI and judge results in the legal profession.
THQ:
The differentiation is key, then? Because otherwise, if you have two law firms using the same system, with similar histories and client bases, it feels like it would be a homogenizing experience. Does the differentiation come down to just the way they build the apps, or the way they phrase the prompts?
DH:
It can be that, but also it can be the unique nature of the client work. So if you’ve worked with a client on six or seven past engagements, and you know the negotiating positions that they will routinely take, or things they’ll say no to a lot, you can start to use that information.
So that when the client comes back to you for that next engagement, you can more readily apply that past experience and build that in. And that’s information that’s not available anywhere in any kind of public domain. It’s really the law firm that has built that expertise through their relationship with a client.
THQ:
Okay, that begins to make sense in terms of differentiation.
We’ve seen at least one case where a lawyer went to court with briefings more or less built by AI, and they’ve come up against hallucinations in the process, and been fined for them. How rare are those cases? How close are we to being able to say this is a sealed bundle of safeness?
Familiarity breeds…content?
DH:
Yeah, that’s a real concern for people. And that’s one of the reasons why we talked about being able to put guardrails in place, not only about what kinds of apps you can run, but also who has access to those apps and in what contexts.
As firms become more comfortable with the technology, and test it across a variety of use cases, you’re going to see them grow more familiar with what they can do with it – and with what their review mechanism needs to be. Because, again, this is a tool. It shouldn’t replace their work; it should augment what they can do and how fast they can do it.
The other thing I’d highlight is that although generative AI is great at generating text – the use case we see most commonly – it also covers a variety of other things that matter in a legal practice. We’re already seeing that in some of the apps our customers have built: things like classifying documents based on work you’ve done in the past, so they’re better organized and you can use semantic search to find them again.
Those are the other types of capabilities that this technology unlocks, because it’s based on large language models, and those are really good at understanding and interpreting human text. So it doesn’t always have to be drafting that next brief, it can be doing non-billable, time-consuming, and challenging things. And it can do them well.
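The classification Dan mentions – filing a new document based on its similarity to past work – can be illustrated with a toy sketch. Real systems use language-model embeddings; here, plain word counts and cosine similarity stand in for them, and the category examples are invented.

```python
# Toy sketch of classification-by-similarity: file a new document
# under whichever past category its text most resembles.
# Real systems would use language-model embeddings, not word counts.
from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    """Crude stand-in for an embedding: a bag of lowercase words."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Previously filed documents act as labeled examples (hypothetical).
past_matters = {
    "employment": "non-compete clause severance employee termination",
    "real_estate": "lease premises landlord tenant rent escrow",
}

def classify(new_doc: str) -> str:
    """Return the past category most similar to the new document."""
    doc_vec = vectorize(new_doc)
    return max(past_matters,
               key=lambda label: cosine(doc_vec,
                                        vectorize(past_matters[label])))

print(classify("draft lease for the tenant of the downtown premises"))
# real_estate
```

The same similarity machinery, pointed the other way round, is what powers semantic search: instead of assigning a category, you rank all past documents by their similarity to a query.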
THQ:
Cheap shot, this, but are we saying that a generative artificial intelligence can classify and declassify documents just by, as it were, thinking about it?
DH:
Ha. Classifying in this case meaning “correctly filing.”
THQ:
Tragically, we knew that. We did say it was a cheap shot. On a slightly more serious note though, that’s the traditional image of a law firm, isn’t it? Drowning in historical paperwork? And one of the things at which generative AI has proven itself to be excellent is precisely that repetitive electronic “paperwork” function.
DH:
Absolutely, yeah.
Show me (more of) the money!
THQ:
Again, this might sound frivolous, but if the generative AI is doing a lot of the grunt-work in a legal office – summarizing, classifying, pulling previous work forward for re-use, and so on, all things that in an entirely human model would be part of the salaried hours of, say, a junior associate – does the bill come down for the client if the law firm has fewer hours of staff overheads?
And where do we stand on client transparency? Is there anyone demanding that law firms tell their clients when their brief has been at least partially put together by generative AI?
DH:
There are a number of use cases where it’s really not directly involving the client. So whether it’s classifying documents or summarizing content, normally you might have an assistant or a junior associate do that. Generative AI is exceptionally good at that, and it allows someone to get read into a matter much more quickly, so they’re up to speed on what’s going on.
When it comes to client content, ultimately it goes back to the lawyer being responsible for what those outcomes are, and being able to review and evaluate it.
I think as AI starts to get embedded into the workflow, you will see an evolution in terms of how law firms are thinking about pricing matters, and about how they’re engaging with their clients.
What we’re seeing right now is that clients are very interested in how AI can be applied to produce better outcomes, to produce more efficient outcomes.
They’re absolutely looking to their law firms to start engaging with this technology.