Episode 4: What does “human in the loop” mean?

Gianluca Rossi, Ontra’s Director of Machine Learning, discusses how people and AI can partner in human-in-the-loop AI systems.

Video Transcript

Hi everyone, welcome back to Machine Learning with Luca, where you get real answers to all of your AI-related questions.

I’m your host, Gianluca Rossi, and I lead the machine learning team here at Ontra. Let’s dive into today’s question.

Today’s question comes from Maxine, who wants to know, “What does ‘human-in-the-loop’ mean, and why does it matter?” I like this one because it gets to the heart of the great partnership that can exist between people and AI.

So, let me answer this question with another question. What’s the difference between a first-year associate and a partner?

In a word, it’s experience. The average partner has probably spent decades practicing law.

During that time, they’ve developed a knowledge base and the judgment to handle just about anything clients can throw at them.

A first-year associate, on the other hand, brings an academic understanding of the law, and that’s about it.

That’s why law firms typically don’t ask associates to fly solo. Don’t get me wrong, first years play an important role at a law firm.

They perform lower-stakes functions like research and contract analysis so that senior members of the team can focus on more complex work.

If you keep the skill set of an associate in mind, you’ll have a pretty good sense of the kinds of tasks AI can perform unsupervised.

For example, AI is great at comparing inbound contracts to a playbook and flagging clauses that require additional attention.

During a negotiation, it can also retrieve previously executed documents, highlighting relevant terms and suggesting language based on established precedent.

But like an associate, AI also requires oversight in certain cases. It’s not ready to independently negotiate a contract.

And once a document is executed, AI can take a first pass at summarizing it, but a lawyer still needs to check the work.

Limitations in today’s generative AI models stem largely from two issues.

First, these models are prone to “hallucinate”: they can invent information and present it as fact.

Second, generative AI can at times give inconsistent answers when asked the same question more than once.

By keeping a human in the loop to review AI outputs for accuracy and consistency, legal teams can fully capitalize on the power of AI without introducing unnecessary risk to critical legal workflows.
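To make that pattern concrete, here’s a minimal sketch of what a human-in-the-loop review gate can look like in code. Everything here is illustrative: the function names (generate_draft, human_review, log_feedback) are hypothetical stand-ins for the general pattern, not Ontra’s actual system.

```python
# Illustrative sketch of a human-in-the-loop review gate.
# All names here are hypothetical; this shows the pattern, not a product.

from dataclasses import dataclass


@dataclass
class ReviewResult:
    approved: bool      # did the reviewer sign off as-is?
    final_text: str     # reviewed (and possibly corrected) text
    feedback: str = ""  # notes on what the AI got wrong


def generate_draft(document: str) -> str:
    """Stand-in for a generative model taking a first pass at a summary."""
    return f"Draft summary: {document[:60]}..."


def human_review(draft: str) -> ReviewResult:
    """Stand-in for a lawyer checking the AI's work.

    In a real workflow this would surface the draft in a review UI;
    here we simulate a correction for demonstration.
    """
    if "indemnification" in draft.lower():
        return ReviewResult(approved=True, final_text=draft)
    return ReviewResult(
        approved=False,
        final_text=draft + " [reviewer note: indemnification clause missing]",
        feedback="Summary omitted the indemnification clause.",
    )


def log_feedback(document: str, draft: str, feedback: str) -> None:
    """Hypothetical hook: store corrections so future outputs improve."""
    print(f"Logged for model feedback: {feedback!r}")


def summarize_with_oversight(document: str) -> str:
    draft = generate_draft(document)  # AI takes the first pass
    result = human_review(draft)      # a human stays in the loop
    if not result.approved:
        log_feedback(document, draft, result.feedback)
    return result.final_text          # only reviewed text moves forward


if __name__ == "__main__":
    print(summarize_with_oversight("Master services agreement between..."))
```

The key design choice is that nothing the model generates reaches the final workflow until a person has signed off, and every correction is captured as feedback.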

And just like an associate who gets better over time, AI can learn from human feedback and generate more accurate and relevant outputs.

So Maxine, thanks for your question. One of our world-famous Ontra mugs is on its way.

And if any of you have AI-related questions, please send an email to [email protected]. Thanks for tuning in and remember, keep learning!
