LLMs for private equity: the “upload and prompt illusion”

Ontra

May 5, 2026 | 7 min read

Large language models have changed the game, and private equity firms are now assessing whether to rely on frontier LLM platforms from OpenAI, Anthropic, and Google, purchase purpose-built AI solutions, or build tools in-house. Throughout that process, seemingly successful individual experiments are creating the illusion that horizontal LLMs can solve widespread PE pain points — but that may not be true at scale.


From conversations with both customers and prospects, we’re hearing about an experiment so often that we call it the “Upload and Prompt Illusion” — PE professionals upload one or a few documents in a chat, project, or workspace, craft a prompt, and get a seemingly accurate output.

Some examples:

  • A managing director uploads a handful of fund documents, prompts the LLM with a question about the documents, and gets a surprisingly good answer.
  • A compliance team drops ten side letters into an LLM, asks it to extract investor obligations, and walks away impressed.
  • A finance team uploads a credit agreement, requests a summary of key covenants, and gets a usable output.

The trap is labeling these isolated instances a success. Follow-up questions quickly surface limitations. Do any of these scenarios set up a legal, compliance, or finance team for a smoother workflow in the future? Has the firm found a repeatable solution to a manual task or a persistent bottleneck? In all likelihood, no.

A handful of acceptable outcomes from an LLM across disparate use cases doesn’t constitute a reliable long-term solution.

Why using LLMs out of the box falls short for PE

Steep learning curves

The frontier AI developers — OpenAI, Anthropic, and Google — are continuously improving and updating their models and functions. Paid commercial users, such as PE firms, typically have access to several models that serve slightly different purposes. The PE professionals who rely on these commercial LLMs become individually responsible for determining which model and function work best for each question or task they want completed, creating new, incremental work.

Additionally, PE professionals typically have more than one horizontal LLM to experiment with. This introduces additional complexity — not only do users have to decide which model and function work best for each workflow, but they also need to compare results across the platforms.

Structural limitations

PE professionals have to consider horizontal LLMs’ inherent limitations, including:

  • Data limitations: There’s a limit on how many documents or how much data a prompter can upload in a single session. LLMs also offer limited ways to save this information for repetitive, collaborative, or ongoing workflows.
  • Context windows: LLMs have a limited “working memory” that includes the user’s prompt, the previous conversation history, and system instructions. Once a conversation outgrows that window, earlier exchanges stop informing the model’s answers, so information provided hours or days before may no longer contribute to the user’s current request (see the sketch after this list).
  • Privacy: LLMs have various data privacy and security measures, and an individual user may not know if or how a specific model retains or uses the information they provide. Depending on the security and privacy measures in place, it may not be safe to upload sensitive, confidential, or personally identifiable information into the LLM.
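To make the context-window point concrete, the sketch below shows, in simplified Python, how a chat interface might silently drop the oldest messages once a conversation outgrows a fixed token budget. The window size, the characters-per-token heuristic, and the trimming strategy are illustrative assumptions, not any particular platform’s actual behavior.

    # Minimal sketch (hypothetical numbers) of why older chat turns stop
    # informing new answers: once the history exceeds the model's context
    # window, something has to be dropped, and the oldest turns go first.

    CONTEXT_WINDOW_TOKENS = 8_000  # assumed budget; real windows vary by model

    def estimate_tokens(text: str) -> int:
        # Rough heuristic of ~4 characters per token (an assumption, not exact)
        return max(1, len(text) // 4)

    def fit_to_window(system_prompt: str, history: list[str], new_prompt: str) -> list[str]:
        """Return the messages that actually reach the model, dropping the
        oldest history entries until everything fits in the assumed window."""
        budget = (CONTEXT_WINDOW_TOKENS
                  - estimate_tokens(system_prompt)
                  - estimate_tokens(new_prompt))
        kept: list[str] = []
        used = 0
        for message in reversed(history):   # walk from newest to oldest
            cost = estimate_tokens(message)
            if used + cost > budget:
                break                       # older turns silently fall away
            kept.append(message)
            used += cost
        return [system_prompt, *reversed(kept), new_prompt]

In this simplified setup, a side letter uploaded at the start of a long conversation can fall out of scope without any warning to the user, which is exactly the degradation described above.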

Lack of institutional knowledge

An LLM isn’t a comprehensive repository. There’s no persistent source of truth underneath each user’s individual chats, skills, or projects. The LLM lacks access to the firm’s institutional logic and knowledge, such as how obligations are categorized, which precedent documents should guide future negotiations, or how to format investor reports. In reality, much of this lives scattered across a firm’s files, folders, and even in one person’s head.

Users may try to fold institutional logic and context into each prompt, or to build a layer of function-specific institutional knowledge into the LLM. Both options increase the workload for users looking for faster, automated workflows, and neither guarantees reliable, accurate results.

Collaboration & self-service struggles

LLMs’ structural limitations quickly become detrimental to essential collaboration. LLM chats don’t reliably carry over to the next conversation, and they aren’t accessible to anyone else at the firm. The assistant general counsel who uploaded side letters on Tuesday has a useful chat thread. The compliance lead preparing for an audit on Wednesday starts from scratch.

Are there workarounds? In some cases. Horizontal LLMs enable users to create and share projects, workspaces, or skills. However, firms often encounter data limitations with these options. An LLM project can’t hold the total contract and business data required for most fund management workflows.

Verification concerns

There are clear citation and verification issues with horizontal LLM outputs. In the examples above, the output looks right after only a five-minute process. But “looks right” is only acceptable if the prompter can evaluate the output. For PE professionals who know these documents well, that’s possible: they can verify against a legacy solution or the underlying documents themselves. Even with those extra steps, it still feels like they’ve saved time.

Verification is much harder across hundreds of agreements that the prompter hasn’t read closely or at all. For multiple teams across a PE firm, it’s not a reliable way to quickly and accurately answer questions, and it doesn’t remove the bottleneck when a user without citations has to request verification from a more knowledgeable team.

Documentation & audit requirements

One person verifying an AI output is far different from a firm defending its decisions to the SEC. When an LLM tells a finance analyst that a credit facility has no restricted payments covenant, the analyst is trusting a black box. That quickly becomes an operational and fiduciary risk.

The SEC has made clear that investment advisers’ fiduciary duties extend to the tools and processes they rely on. If a firm makes a material decision based on an unverifiable AI output, regulators will inevitably ask whether the firm had a reasonable basis for relying on the AI. “The model said so” won’t be a sufficient answer.

Example: managing DDQ workflows in an LLM

Due diligence questionnaires (DDQs) are a clear example of the limitations of horizontal LLMs for PE workflows:

  • Data limits make it cumbersome and time-consuming to upload precedent materials and new LP questionnaires, while context windows mean answers to new questions may not accurately pull from previously uploaded precedent materials or take into account changes other stakeholders made to prior outputs.
  • IR teams may pull their first-draft answers from an LLM, but without citations, they can’t confirm where those answers came from and whether they’re recent, accurate, and thorough.
  • IR teams struggle to collaborate quickly and efficiently with other stakeholders in the LLM. IR has to leave the LLM to assign reviews and verify AI outputs.

DDQ workflows demonstrate when purpose-built AI solutions are a better choice to solve a PE pain point. An AI-powered DDQ solution addresses the shortcomings of horizontal LLMs by providing a single source of truth for institutional knowledge, precedent materials, and approved answers, along with smarter search results backed by full attribution.

Token use & enterprise costs

In general, PE firms that use an enterprise version of a platform pay a per-seat fee, as well as per-token usage fees. (Pricing details vary significantly across platforms.) A token is a unit of data processed by AI models, and what constitutes a token varies across platforms, languages, and content types.

Every API call consumes input tokens and output tokens; output tokens are typically more costly, given the computing power needed to generate a response. Simply uploading one document, let alone an entire fund’s worth of documents, costs tokens before a single question is asked. Additionally, token costs compound throughout an ongoing chat, because all previous messages are re-sent with each new request and billed again as input tokens.
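As a rough, back-of-the-envelope illustration of that compounding effect, the Python sketch below estimates cumulative token costs for a single ongoing chat. The document size, message lengths, and per-million-token prices are hypothetical assumptions, and the sketch ignores platform features such as prompt caching or history truncation that can reduce real-world costs.

    # Hypothetical estimate of how costs compound when the full chat history
    # (including an uploaded document) is re-sent as input on every turn.
    # All sizes and prices below are assumptions for illustration only.

    DOC_TOKENS = 50_000          # assumed size of one uploaded agreement
    TOKENS_PER_QUESTION = 200    # assumed average user prompt
    TOKENS_PER_ANSWER = 800      # assumed average model response
    PRICE_PER_M_INPUT = 3.00     # assumed $ per million input tokens
    PRICE_PER_M_OUTPUT = 15.00   # assumed $ per million output tokens

    def chat_cost(num_turns: int) -> float:
        """Estimate the cost of a chat that re-sends the document plus all
        prior questions and answers as input on every turn."""
        total_input = 0
        total_output = 0
        history = DOC_TOKENS
        for _ in range(num_turns):
            history += TOKENS_PER_QUESTION
            total_input += history           # the whole history is billed again
            total_output += TOKENS_PER_ANSWER
            history += TOKENS_PER_ANSWER     # the answer joins the history too
        return ((total_input / 1e6) * PRICE_PER_M_INPUT
                + (total_output / 1e6) * PRICE_PER_M_OUTPUT)

    for turns in (1, 10, 50):
        print(f"{turns} turns: ~${chat_cost(turns):,.2f}")

Under these assumed numbers, the single uploaded document dominates the bill because it is re-billed on every turn; multiply that by every employee running their own chats, and usage climbs quickly.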

Token usage is where costs can dramatically rise for PE firms. If firms are paying for each employee to have a seat and every employee is encouraged to use the LLM to its fullest potential, then employees are likely using millions (or billions) of tokens per month.

Conquer LLM limitations with a hybrid tech stack

Ultimately, the real question isn’t whether an LLM can give a PE professional a good answer from a few documents. It’s whether a private equity firm can reliably operationalize workflows and results across hundreds of documents, numerous funds, and multiple teams with the confidence, traceability, and permanence that LPs, lenders, and regulators expect.

That’s a high bar, and meeting it requires more than a prompt.

The build vs. buy debate will rage on. For some use cases, a generic LLM will suffice. For others, building internally makes sense. But the firms that will move fastest aren’t choosing one approach to AI. They’re building hybrid tech stacks that take advantage of every option, including horizontal LLMs, purpose-built solutions, and AI tools built in-house.

Reach out to Ontra to learn how we fit into your tech stack.


Ontra is not a law firm and does not provide any legal services, legal advice, or referral services and, as a result, we do not provide any legal representation to clients, nor do we participate in any legal representation of clients. The contents of this article are for informational purposes only, and are not intended to constitute or be relied upon as legal, tax, accounting, regulatory, or other professional advice, opinion, or recommendation by Ontra or its affiliates. For assistance or guidance regarding the impact or applicability of the topics discussed in this article to your business, please consult your legal or other professional advisers.