Listen now: Developing AI models for contract management in financial services

Ontra

December 21, 2023

Each week, Emerj interviews top experts about how AI is disrupting the financial services industry, offering a unique perspective on each show.

Our co-founder and CEO Troy Pospisil joined Emerj Senior Editor Matthew DeMello to discuss model development for legal workflows and what financial leaders need to know about the process to properly leverage new generative AI capabilities in these sectors.

Read the conversation below

Matthew DeMello (0:03)

Welcome, everyone, to the AI in Financial Services Podcast. I’m Matthew DeMello, senior editor here at Emerj Technology Research. Today’s guest on the program is Troy Pospisil, co-founder and CEO of Ontra.

Ontra is an AI-driven tech company that develops contract management and other legal solutions for organizations across financial services with a strong presence in private markets. Troy joins me on today’s show to discuss model development for legal workflows, and what financial leaders need to know about the process to properly leverage new generative AI capabilities in these sectors. Without further ado, here’s our conversation.

Troy, thanks so much for being with us on the program this week.

Troy Pospisil (0:55)

Matt, it’s great to be here. Longtime listener, first-time caller. I appreciate you having me on.

Matthew DeMello (1:00)

So Ontra, you guys are working in legal spaces and in private markets. You have a huge crossover with financial services. I think a lot of the use cases that you guys are really pointing to, especially in contract processing, really get to the heart of where we’re going to see lines blur between legal and financial services, especially looking at things from a data standpoint.

But coming from a legal standpoint, what do you see as the biggest challenges when it comes to legal work for financial services organizations, just given the current state of AI development, particularly those in private markets where Ontra operates?

Troy Pospisil (1:39)

That is a great question. And a pretty good tee-up for me to talk about what Ontra does and what we’re excited about building here at Ontra.

I used to work as a private equity investment professional and dealt with a lot of inefficiencies and problems in my day-to-day life related to contracts. The first of those — which is the first product that we launched, Contract Automation — dealt with the very high volume of reviewing, marking up, negotiating, and then keeping track of the terms in routine contracts, things like NDAs, joinders, reliance letters, and vendor contracts.

Private funds, asset managers, or private equity firms in some cases have thousands of routine contracts per year that they have to deal with, associated with reviewing investment opportunities in the private markets. Historically, that work has been incredibly time-consuming and also not done very well: it’s not done in a consistent fashion, and it’s not done quickly. And a long turnaround time can have a real negative impact on being an effective investor, because it delays moving forward with reviewing investment opportunities.

Then, not tracking that data in a systematic way creates compliance risks and also real risks in how you’re managing your investments, because you’re not keeping track of things like who you can share confidential information with, such as other equity sources or debt sources.

So we launched Ontra about nine and a half years ago, initially to automate high-volume, routine contracting. AI has always been a big part of what we’ve done. We hired our first machine learning engineer in 2016, and we have found that we can enhance and automate nearly every aspect of the workflow by leveraging AI.

I can talk about some of what that is. Initially, when AI tools weren’t as powerful, we were able to build a lot of data extraction models. One of the most important aspects of our end-to-end workflow is tracking what’s been agreed to in the thousands of routine agreements.

Even for something as simple as an NDA, you need to track: When was the agreement executed? When is it no longer in force? Do I have a non-solicit? Who is that non-solicit in place with? Is there a standstill? Are there any carve-outs to that standstill? These are all terms in the agreement that you want to put into a data-like format so that you can report across the many thousands of agreements that may be in force at any given time.
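To make "a data-like format" concrete, here is a minimal sketch of what a structured record for those NDA terms might look like. The schema and field names are illustrative assumptions, not Ontra's actual data model:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class NDATerms:
    """One NDA's key terms in structured form (illustrative schema only)."""
    counterparty: str
    execution_date: date
    expiry_date: Optional[date]        # None if no fixed end date
    has_non_solicit: bool
    non_solicit_party: Optional[str]   # who the non-solicit is in place with
    has_standstill: bool
    standstill_carve_outs: list = field(default_factory=list)

def in_force(agreements: list, as_of: date) -> list:
    """Report across thousands of agreements: which are in force on a date?"""
    return [a for a in agreements
            if a.expiry_date is None or a.expiry_date >= as_of]
```

Once terms are in this shape, reporting across thousands of live agreements becomes a simple query rather than a manual document review.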

Even in 2016, we were building our own machine learning models using a lot of open-source tooling to extract that data in an automated way, because doing it manually would have been incredibly time-consuming and, of course, fraught with risk in terms of accuracy and consistency.

Especially in the early days, we always built in a human-in-the-loop process. One of the perennial challenges with machine learning is getting consistent, near-perfect accuracy. Obviously, as the models get more powerful, their accuracy is getting close to 100%. But when you’re dealing with something like legal, perfection is absolutely critical. So we’ve always been, and still are, huge believers in a human-in-the-loop process, where an expert reviewer, depending on the accuracy of your models, reviews data before it gets published to the database. The model still makes that reviewer far more efficient in terms of the volume of work they can process. Then, as you get close to 100%, the human-in-the-loop auditing process at least ensures that your models are still behaving the way you intended them to.

One of the great things about a human in the loop is that, if data gets amended by the reviewer, you can update your model automatically. You want to build it in a way such that the model learns from the changes that get made.
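A rough sketch of that routing-and-feedback loop, with an invented confidence threshold and in-memory lists standing in for a real review queue and database:

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.98  # invented cutoff; in practice tuned per field and risk

@dataclass
class Extraction:
    field_name: str
    value: str
    confidence: float

published = []     # extractions written straight to the term database
review_queue = []  # extractions routed to an expert reviewer
training_set = []  # reviewer corrections fed into the next training run

def route(extraction: Extraction) -> None:
    """High-confidence extractions publish directly; the rest go to a human."""
    if extraction.confidence >= CONFIDENCE_THRESHOLD:
        published.append(extraction)
    else:
        review_queue.append(extraction)

def record_correction(extraction: Extraction, corrected_value: str) -> None:
    """Each expert correction becomes a labeled example for retraining."""
    training_set.append({"field": extraction.field_name,
                         "predicted": extraction.value,
                         "label": corrected_value})
    published.append(Extraction(extraction.field_name, corrected_value, 1.0))
```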

Matthew DeMello (6:05)

Right, and what I really appreciated about your answer is you took us from the days of only having manual processes and caught us up to today, where we’ve got a bit of a hybrid. We can kind of see where the attention used to be — or correct me if I’m wrong here — on the manual processes, dotting the i’s and crossing the t’s everywhere, and that still wouldn’t be 100% accurate. Making sure you’re doing your due diligence, but with nothing to double-check you.

So your attention is kind of everywhere. Where do we see it now? I know you were touching on this in your last answer: where do we see the attention being spent to close that gap, to 100%? You were mentioning model development. It sounds a lot like what I’m hearing from our friends in healthcare, in terms of more expertise being allocated to giving that feedback and a lot less to the manual labor involved.

It seems as though you don’t have clerks or lower-level folks with their hands in the process as much; it’s involving more and more senior people. I’m wondering if that has an effect on training as well?

Troy Pospisil (7:14)

I think, as the state of the technology advances, there are a handful of things impacting the quality of the models and how you can deploy them. One is that, for simple tasks, which we’ve been focused on for a long time, accuracy is increasing and the amount of training data you need is decreasing. So you can deploy AI for simple tasks much more quickly; you can get more models out at a more rapid pace. That’s because the size of these large language models, and the amount of data they come pre-trained on, is just immense, so you don’t need as much training data as you did historically to create really good models.

Then, as I’m sure you’re aware, the specificity and volume of that training data is what puts it over the edge to be incredibly accurate. In our case, we are creating models for very specific use cases, within a particular industry, on specific document types, and in a lot of cases, on specific clauses within specific document types.

For the contracts that we process within our end market, which is private equity, we have about a million tagged contracts. We can create incredibly accurate models today relatively quickly, because we’ve spent the past seven years building that corpus of data and making sure it was tagged in a way that supports model training.

So, with a million pieces of training data, that’s a million documents, and within each of those documents, there are multiple versions. In each document, we have tagged around 500 different data points. Across all of those documents, that’s millions of tagged data points that we’re using to train our models.
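As an illustration of how expert-confirmed tags like these might become training examples, here is a sketch that serializes clause tags into JSONL prompt/completion pairs. The clauses, field names, and format are assumptions for illustration, not Ontra's pipeline:

```python
import json

# Hypothetical expert-confirmed tags: each pairs a clause from a contract
# version with the structured data point a reviewer verified for it.
tagged_examples = [
    {"clause": "This Agreement shall terminate eighteen (18) months from the Effective Date.",
     "field": "term_length", "label": "18 months from effective date"},
    {"clause": "Recipient shall not solicit employees of the Company for two (2) years.",
     "field": "non_solicit_term", "label": "2 years"},
]

# Serialize to JSONL prompt/completion pairs, a common fine-tuning format.
with open("training_data.jsonl", "w") as f:
    for ex in tagged_examples:
        record = {
            "prompt": f"Extract the '{ex['field']}' term from this clause:\n{ex['clause']}",
            "completion": ex["label"],
        }
        f.write(json.dumps(record) + "\n")
```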

In a world today where incredibly powerful large language models are available out of the box, we pair those with our dataset. We get incredibly accurate models for simple tasks and can deploy them really rapidly.

The other thing that large language models enable you to do is move up the complexity curve in terms of how complex the tasks are that you’re asking AI or ML to assist with. That’s where we are most excited and where we’re spending a lot of our time and energy today: moving up the complexity curve and assisting humans by automating tasks that a few years ago weren’t on our radar in terms of where you could deploy AI.

I talked about our Contract Automation product, where we automate high-volume routine contracts. One of the other big problems that private equity firms deal with, and I would say probably any business deals with, is tracking and managing obligations and restrictions in very long, complex agreements. In private equity, in particular, you have a number of different agreement types that can sometimes run to 1,000 pages or more. You are responsible for adhering to everything those agreements say.

One of the use cases we’ve been really focused on is fund documentation: the set of agreements that govern the relationship between an asset manager and their investors, or limited partners. The way the industry has trended, basically every large investor now asks for a side letter, which contains the bespoke obligations and restrictions they would like. On top of that, the number of LPs in each fund has generally grown, and the size of most private funds and asset managers has also grown. They’re raising funds more quickly, and they’re raising funds across more investment strategies. So the volume of fund documentation has grown exponentially for most asset managers over the last decade. Within each fund, you now have probably 100-plus side letters, and each side letter, we’ve found, now has about three dozen bespoke obligations or restrictions.

Just do that math for each strategy. If you’re raising a new fund every two to three years, you may now have 10 different investment strategies. At 100-plus side letters per fund and roughly three dozen obligations each, you’re talking about tens of thousands, maybe 100,000, obligations and restrictions that you have to manage as an investment firm. And that work has to get done across functions: your finance team may be responsible for some of it, your client team may be responsible for some of it. It becomes an essentially impossible workflow problem, because all of that information is entombed inside PDFs.

Historically, firms have attempted to stay on top of this problem with spreadsheets, or by emailing outside counsel to say: hey, can we invest in a chain of stores that sells lottery tickets? We don’t know. Can you go through literally hundreds of thousands of PDFs to tell us if that’s okay, or if we need to create a special vehicle that carves out one or two of our investors who don’t want to invest in what they would deem gambling?

This is where AI can be incredibly powerful. We have been able to use these more advanced large language models to automate the extraction and tagging of obligations and restrictions: essentially taking long, complex agreements constructed of not just English, but really nuanced legal language that is very specific to this area of the law (fund documentation for private fund asset managers), and using our training data set in partnership with a lot of these big LLMs to create really accurate processes for extracting those obligations and categorizing them, for example, as a key man provision.
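As a rough illustration of the categorization step, here is a sketch that asks a general-purpose LLM to label an extracted side-letter clause. The category list, prompt wording, and model choice are assumptions, not Ontra's actual pipeline:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented category list; a real taxonomy would be far more granular.
CATEGORIES = ["key man provision", "most favored nation", "transfer restriction",
              "reporting obligation", "fee term", "excuse right"]

def categorize_clause(clause_text: str) -> str:
    """Ask an LLM to assign exactly one category to an extracted clause."""
    prompt = (
        "Classify the following side letter clause into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n\n"
        f"Clause:\n{clause_text}\n\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for classification
    )
    return response.choices[0].message.content.strip()
```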

So investors can then use that structured data to build workflows that allow them to proactively stay on top of their obligations and restrictions, and to know what actions they have to take in response to specific operational events within their firms. We did deploy this product in a pre-AI environment; it was very manual, time-consuming, and expensive to onboard and deploy. We’ve now been able to automate 95% or more of the onboarding process, using AI to transform PDF documents into the structured data form we need to deploy the product. That’s been incredibly exciting for us. It basically means these customers can tackle a problem that was almost impossible to manage, given how the volume of work has grown as the industry has evolved over the last decade or so.
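To give a flavor of the workflows such structured data enables, here is a hypothetical sketch in which each obligation carries a trigger event, so a firm can look up everything activated by an operational event. All names, categories, and examples here are invented:

```python
from dataclasses import dataclass

@dataclass
class Obligation:
    fund: str
    investor: str
    category: str       # e.g., "reporting obligation"
    trigger_event: str  # operational event that activates it
    description: str

# A couple of invented examples standing in for tens of thousands of real ones.
obligations = [
    Obligation("Fund III", "LP Alpha", "reporting obligation",
               "key person departure", "Notify this LP within 10 business days."),
    Obligation("Fund III", "LP Beta", "investment restriction",
               "new investment", "No exposure to gambling-related businesses."),
]

def actions_for_event(event: str) -> list:
    """Look up every obligation triggered by a given operational event."""
    return [o for o in obligations if o.trigger_event == event]

for o in actions_for_event("key person departure"):
    print(f"{o.fund} / {o.investor}: {o.description}")
```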

Matthew DeMello (14:27)

Right, and really emphasizing that they can do this with much smaller teams and, of course, with their partners. There’s so much to pull apart in your last answer that it’s probably going to take us the rest of the episode, but that’s the thing about great answers.

I want to go back to something you were saying at the beginning of your answer and then tie that back to the use case you were just talking about. You were mentioning how there are out-of-the-box, bespoke models that will handle a lot of the more manual, easy-to-understand, simpler tasks in these workflows. It sounds like, for tagging the obligations, a small bespoke model is going to handle that. Elsewhere in FinServ, I’ve always heard that, at least as a strategy — and let me know if you guys do things differently; we would always love to hear that — smaller models are not just for smaller tasks; they’re better for client-facing tasks as well.

Usually, how you’ll do model development is you’ll have kind of a core model, a foundational model, just to touch on some jargon I know our financial services friends have heard out there in the news. That larger foundational model is not going to be client-facing, because it’s going to be tapped into all of the data for your entire organization. It’s meant to interface with your C-suite and your data scientists more than anybody else.

As you get out and develop systems for smaller, more client-facing tasks, that’s where it’s going to break off: you’re going to have more bespoke models. Do I have that much correct? And does that tagging use case fall into that category?

Troy Pospisil (15:56)

To me, when you say client-facing, it’s really a synonym for perfection or accuracy. If you have a use case where it is right 90% of the time, or it creates a draft of something in a language form that is 90% of the way there, you’re going to be hesitant to deploy that to a client-facing use case.

Or you’re going to want to make sure that you’re very clear and that you design a process with humans in the loop to bring it over the finish line. Whether that’s a human in a process that you’re managing, or a client user who is using the output and is aware of what it really is and its limitations. Just make sure there’s a UI that encourages the use of that output in the right and responsible way.

If it is a machine learning model where the task is simple enough, you can get to near-100% accuracy, or at least better-than-human accuracy. I think some of these smaller models that are trained on a very specific task can outperform the generic large language models. You can actually deploy those without humans in the loop into client-facing environments.

In the case of documents, a very simple example is: who is the legal entity that signed this? We have so much training data, and it’s such a simple use case, that you can get to essentially 100% accuracy using a simple model. For a calculation like that, you don’t necessarily have to ping the cloud; it can be performed locally, and you get really high accuracy near instantaneously.
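A minimal sketch of that kind of small, locally run extractor, using spaCy's pretrained named-entity recognizer. The signature-block heuristic here is an illustrative stand-in for a model trained on tagged signature pages:

```python
import spacy

# A small pretrained pipeline that runs locally, with no cloud round trip.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_signatory(signature_block: str):
    """Return the first organization named in a signature block, if any."""
    doc = nlp(signature_block)
    for ent in doc.ents:
        if ent.label_ == "ORG":
            return ent.text
    return None

# Hypothetical signature block for illustration.
block = "IN WITNESS WHEREOF:\nAcme Capital Partners LP\nBy: Jane Doe, Managing Director"
print(extract_signatory(block))  # e.g. "Acme Capital Partners LP", model permitting
```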

Matthew DeMello (17:47)

Absolutely. I love how you framed client-facing as a synonym for perfection, for that 100% accuracy, or as close to it as you can get. I’m a musician; we all know “perfect” is a big word, and I don’t know if anybody really understands what it looks like. But we all know how to get there, and we all know how to close that gap. Looking at it from a data perspective, that’s when we start to see the pattern that client-facing really just means this. It’s those insights we look for on the show.

Troy, thanks so much for being with us this week. It’s been a blast.

Troy Pospisil (18:18)

Matt, this has been a lot of fun. Thank you for inviting me. It was great to be here after listening to the show for so long.

Matthew DeMello (18:29)

Before we close out today’s show: I thought Troy did a great job of explaining model development in both the legal space and the financial services space, in terms of how the size of a model relates to whether it will face customers, and how facing customers relates to accuracy when we’re looking at things from a data perspective. I also think a lot of what he talked about crosses over industries.

If you check out our sister podcast, the AI in Business Podcast, not too long ago we had an interview with Erik Duhaime, CEO of Centaur Labs, talking all about expert feedback and model development for healthcare. The reason being that healthcare is just such a regulated space. I think, if only for that reason, it has enormous amounts of crossover with everything Troy was talking about today.

On behalf of Daniel and the entire team here at Emerj, thanks so much for joining us today. We’ll catch you next time on the AI in Financial Services Podcast.
