Episode 2: What are the biggest challenges associated with using AI in the legal field?

Gianluca Rossi, Ontra’s Director of Machine Learning, discusses the challenges associated with using AI in the legal industry.

Video Transcript

Hi everyone, welcome back to Machine Learning With Luca — where you get real answers to all of your AI questions.

I’m your host, Gianluca Rossi. I lead the Machine Learning team here at Ontra and I’m responsible for Ontra Synapse – the AI built specifically for the private markets.

Let’s get learning!

Today’s question comes from Kerry, who wants to know, “What are the biggest challenges associated with using AI in the legal field?”

Kerry, I love this question because it goes straight to the heart of what makes my job so fun.

There’s no doubt about it. Legal use cases can be tough, but not impossible for AI to crack. And that’s because of three factors:

First, they are based on extremely nuanced language.

Second, they involve specialized, industry-specific processes.

And third, there’s no room for error.

Let’s look at an example and compare a generic AI use case to a legal use case and you’ll see what I mean.

Chances are, you’ve spent some quality time in the recent past with one of these guys…

[Display: Shows a picture of a chatbot on a mobile phone].

These chatbots are personable, efficient, and, in most cases, powered by AI.

Let’s say that you’re interested in purchasing some HR software. You visit a vendor’s website, and when a chatbot asks how it can be of service, you say, “I’d love to grab some time to walk through a demo.”

Then the chatbot presents you with a link to a sales rep’s calendar, you book time, and you’re off to your next task.

AI really shines in this type of scenario for a few reasons. First, it’s based on commonly used language.

The chatbot can translate “Grab some time” into “Schedule a meeting” because it’s trained on massive public datasets in which those expressions appear frequently (a quick sketch of that mapping follows below).

Second, scheduling a meeting is a simple, routine task.

And lastly, in the event that the chatbot makes a mistake — say it sends the user a brochure instead of setting up a demo — how bad can it get?

The user will probably repeat the request, or worst case, a human can follow up.
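
To make that intuition concrete, here’s a deliberately tiny Python sketch of intent matching. Everything in it is invented for illustration: the intents, the example phrases, and the word-overlap scoring. Real chatbots use learned models over huge corpora, but the core idea is the same: phrasings that show up often in the training data are easy to map to an intent, and phrasings that don’t are not.

```python
# Toy intent matcher: map a user's informal phrasing to a known intent by
# comparing word overlap against example phrases "seen in training".
# Purely illustrative -- production systems use learned embeddings,
# but the frequency intuition is the same.

INTENT_EXAMPLES = {  # hypothetical training phrases per intent
    "schedule_meeting": ["grab some time", "book a demo", "schedule a meeting"],
    "send_brochure": ["send me a brochure", "share product info"],
}

def score(user_text: str, phrase: str) -> float:
    """Crude similarity: fraction of the phrase's words found in the user text."""
    user_words = set(user_text.lower().split())
    phrase_words = set(phrase.lower().split())
    return len(user_words & phrase_words) / len(phrase_words)

def classify(user_text: str) -> str:
    best_intent, best_score = "unknown", 0.0
    for intent, phrases in INTENT_EXAMPLES.items():
        for phrase in phrases:
            s = score(user_text, phrase)
            if s > best_score:
                best_intent, best_score = intent, s
    return best_intent

print(classify("I'd love to grab some time to walk through a demo"))
# -> "schedule_meeting": that phrasing overlaps the common examples.
print(classify("Does this agreement include a standstill clause?"))
# -> "unknown": domain terms never appeared in the training phrases.
```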

Contrast that with a typical scenario from private equity — say, negotiating an NDA. Take a look at this text.

[Display: Shows a page of a contract with long legal text].

If you’re a lawyer from the private funds industry, you know immediately that this is a standstill clause.

But an AI model, trained on news sources and Wikipedia pages about the Guardians of the Galaxy trilogy (by the way, I give the latest movie a big thumbs up), will never recognize this term.

It doesn’t appear frequently enough in those sources.

Now, let’s break down the process behind an NDA negotiation:

Multiple rounds of analyzing the contract, comparing it to a playbook, and redlining unacceptable terms.
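
For intuition only, here’s a deliberately naive Python sketch of just the “compare it to a playbook” step. The playbook entries and clause text are invented, and a real review involves legal judgment across multiple negotiation rounds. Notice how a small wording change slips right past simple keyword rules, which is exactly the nuanced-language problem.

```python
# Naive sketch of one playbook-review step: flag clauses containing terms
# the (hypothetical) playbook marks unacceptable, and suggest fallback
# language for a redline. Not a real system -- just an illustration.

PLAYBOOK = {  # hypothetical firm positions: trigger term -> preferred fallback
    "standstill": "Delete standstill obligations entirely.",
    "non-solicit": "Limit non-solicitation to 12 months and named individuals.",
    "perpetual": "Cap the confidentiality term at two years.",
}

def review(clauses: list[str]) -> list[tuple[str, str]]:
    """Return (clause, suggested redline) pairs for clauses that hit the playbook."""
    redlines = []
    for clause in clauses:
        lowered = clause.lower()
        for term, fallback in PLAYBOOK.items():
            if term in lowered:
                redlines.append((clause, fallback))
    return redlines

nda = [
    "Recipient agrees to a standstill period of 18 months.",
    # "in perpetuity" does not contain the keyword "perpetual", so the
    # naive rule misses it -- one reason keyword matching falls short
    # of genuinely nuanced legal language.
    "Confidential Information shall be protected in perpetuity.",
    "Each party shall bear its own costs.",
]

for clause, fallback in review(nda):
    print(f"REDLINE: {clause}\n  Suggested: {fallback}")
```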

Generic models that excel in scheduling sales meetings simply aren’t sophisticated enough to manage this kind of work.

And here is the biggest challenge — a bad contract could cost a firm a deal or even its reputation.

With that in mind, there’s a huge incentive for legal teams to make sure that the AI they’re working with is trained on large amounts of industry-specific data, is built to perform nuanced legal processes, and produces bullet-proof outputs.

The good news is that this kind of technology does exist. It’s exactly why my team and I built Ontra Synapse — AI designed to check all of these boxes.

Kerry, thanks for a great question. If there’s anything you’d like to know about AI, send an email to [email protected], and I’ll be sure to get back to you.

And if we end up reading one of your questions in these episodes, we’ll send you one of our famous Ontra mugs.

Keep learning! I’ll see you next time.
