While artificial intelligence has the potential to improve legal productivity in the private equity industry, generic legal AI applications may bring more risk than reward. Legal AI solutions that purport to help with contract drafting, redlining, and negotiations can cost lawyers more time than they save if lawyers must reject or fix the solution’s bad recommendations. A poorly built tool could slow negotiations and, worse yet, damage the relationship between contracting parties.
Even AI providers that profess to be at the forefront of the industry don’t offer the precision and accuracy that legal work in the private markets demands. Because of this, private fund managers hoping to take advantage of AI need to diligently evaluate vendors and their solutions.
Only by asking the right questions can managers distinguish between generic legal AI and purpose-built AI that can offer real improvements in efficiency and productivity.
This article will help readers ask the right questions of an AI vendor about solutions for private fund managers’ legal workflows.
How using generic legal AI for private fund legal workflows can go wrong
Not all legal AI solutions can appropriately address the legal needs of private fund managers. Generic AI solutions, even those marketed as legal AI, typically lack the fine-tuned models and industry-specific datasets AI needs to benefit the private markets.
To understand the importance of conducting due diligence on AI vendors, it helps to have a clear picture of what could go wrong by implementing a generic AI tool.
When analyzing and interpreting legal documents, generic AI solutions might:
- Misunderstand the meaning of a legal term or contract provision.
- Misunderstand how two or more provisions relate to each other.
- Fail to identify defined terms and apply them consistently within a document.
- Fail to identify and address material terms within a document.
- Fail to correctly interpret lists.
- Fail to correctly parse complex sentence patterns, center-embedded clauses, or conditional logic.
- Fail to identify conflicts among contract provisions.
- Fail to correctly identify and consistently use pronouns throughout a document.
- Mistake one type of clause for a meaningfully different clause because of similar words.
When providing answers and other outputs, conventional AI solutions might recommend:
- Incorrect interpretations of legal terms or contract provisions.
- Poorly drafted or incoherent edits.
- Inconsistent language within a document.
- Irrelevant changes to a contract or its terms.
- Duplicative or conflicting clauses.
- Heavy edits to substantively acceptable provisions simply because they differ from the AI’s training data.
- Deletion or omission of necessary clauses that the model does not recognize.
- Aggressive markups that employ a “replace all” approach, rather than editing text inline.
Avoiding these outcomes depends on choosing a legal AI application built for the legal tasks and documents used in the private markets, and finding that AI depends on asking the right questions.
5 questions to ask an AI vendor
1. Does the vendor use open source or proprietary models?
When performing due diligence on an AI vendor, fund managers should gain a thorough understanding of the vendor’s AI models, including whether the vendor relies on open source or proprietary models.
Open source models are generally available to the public and typically address generic use cases. Proprietary models, on the other hand, contain private source code, are only accessible to authorized parties, and deliver unique capabilities.
The key consideration is understanding how specialized a vendor’s models are in relation to use cases for private markets. For the AI to be useful to a private fund manager, the vendor should be able to demonstrate that its models are fine-tuned to industry-specific legal tasks.
2. How does the vendor address the fundamental limitations associated with generative AI models?
Legal AI vendors typically offer a combination of predictive and generative models. A predictive model makes decisions or predictions about future outcomes by identifying patterns and trends in data. These can deliver consistent, accurate results when trained on high volumes of relevant information.
A generative model can create unique text by mimicking content it has previously analyzed, enabling legal professionals to tackle work that requires generating context-specific text-based responses.
Unfortunately, generative AI outputs aren’t always relevant, accurate, or well written. Fundamental limitations of generative models include fabricated information presented confidently to maintain a coherent narrative (hallucinations) and different answers to the same question across runs (non-deterministic responses).
Another limitation is the caliber of generated text. Writing quality is often subjective, which makes generative AI outputs challenging to evaluate. Additionally, good legal drafting differs from other forms of writing, and tools that generate well-written articles or essays may not be particularly useful for drafting contracts or other legal documents. In many cases, fixing bad generative AI output can take more time than revising contracts the old-fashioned way.
The only way a vendor can control the quality of generative AI outputs is by assigning experienced humans to supervise the model’s recommendations. A human-in-the-loop system means subject matter experts evaluate the AI outputs; correct irrelevant, inaccurate, or poorly written content; and feed their responses back into the model to improve future outcomes.
3. Does the vendor fine-tune its AI to industry-specific contracts?
It’s important to uncover whether the vendor has adjusted its models to work well for industry-specific contracts. Generic AI typically fails the private equity industry because legal language is difficult for AI to understand, contextualize, and generate with precision. Private equity relies on numerous key agreements that are fairly unique to the industry.
AI models meant to tackle contract-related tasks fall short if they lack an understanding of industry-specific legal terms, relevant contract structures, requisite formality, and stylistic conventions. For instance, open source models usually don’t recognize contracts’ specific structures, and vendors might not have adjusted their proprietary models well enough, either.
Vendors might not have trained their models on the many specialized contract types managers use during the fund and investment lifecycle, including limited partnership agreements, investment agreements, NDAs, side letters, and other ancillary documents. A model trained on commercial real estate contracts, which often contain lengthy “whereas” clauses and complex definitions sections, may not be suitable for reviewing side letters, which tend to be simpler and contain in-line definitions.
Consequently, models that aren’t fine-tuned to industry-specific contracts can fail to sufficiently address important use cases, such as document summarization where the AI must correctly categorize and organize its results. Ask the vendor to demonstrate how it has adjusted its AI models to address highly specific and structured private equity contracts.
4. Does the vendor’s training data come from an open or industry-specific source?
The foundation of high-quality AI outputs is a large volume of training data relevant to the intended use cases. The greater the quantity and quality of a model’s training data, the better the final model and its outputs.
Many vendors train their AI on open source data from sources like Wikipedia and the U.S. government, which aren’t relevant, complete, or precise enough for legal use cases. Models trained on generic data will yield inferior outputs compared to models trained on industry-specific data.
Contract-related datasets for training may not be sufficient either. Vendors that train models on a broad library of commercial contracts likely can’t offer high-quality results for the most common contracts within the private equity industry. For example, a model trained on employment or service provider non-disclosure agreements may not understand the process considerations and restrictive covenants often addressed in private equity transactional NDAs.
To ensure the outputs will be appropriate for private markets workflows, verify that vendors have trained their AI on accurate, relevant, and industry-specific data sets, such as common private equity contracts.
5. What are the vendor’s data security protocols?
Ask AI vendors to explain in detail how they safeguard customer data and protect confidentiality. Be sure to understand whether vendors or their customers retain ownership of the data.
It should be clear from the information the vendor provides — such as encryption protocols and audit certifications — that a firm’s data would be private, protected, and anonymous within the AI models.
The bottom line: Test AI applications
Before adopting an AI tool, managers should vet the solution themselves by testing it on their own documents against their own use cases. Keep in mind that flashy demos are often carefully built, tested, and tailored to showcase AI getting it right.
We recommend managers choose a complex document or legal task that is representative of a real workflow for their test run. Some AI solutions may perform well enough on templated documents and simple legal tasks, but fall short on something more challenging. Testing solutions on edge cases can uncover systemic flaws. Managers need to see how AI will perform when their legal teams need help the most.
Continue to explore legal AI
We’ll continue to dive deeper into the topic of legal AI for the private markets. In future posts, we’ll explore how legal and business professionals can be a driving force for digital transformation within a private equity firm, and provide insights into how to use AI to gain a competitive advantage.