Understanding AI models & prompts

Ontra

February 4, 2025 · 4 min read

The difference between training and prompting models is critical to understanding how data remains private and secure when using AI.

Let us walk you through the basics of training AI models vs. using prompts to generate AI suggestions or predictions.

An overview of AI models & training

AI systems use algorithms to analyze data and identify patterns and relationships in that data. Developers can train algorithms by using various machine learning (ML) techniques. In general, developers provide the AI system with large amounts of data and adjust the system’s parameters until it can accurately perform a given task on new data.
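The idea of adjusting a system's parameters until it performs well on new data can be sketched in a few lines. This is a minimal, illustrative example (a one-parameter model fit by gradient descent), not how production AI systems are built:

```python
# Minimal sketch: "training" as parameter adjustment.
# A one-parameter model y = w * x is fit to example data by
# gradient descent, then applied to an input it has never seen.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output)

w = 0.0  # the model's single adjustable parameter
for _ in range(200):  # repeated passes over the training data
    for x, y in data:
        error = w * x - y
        w -= 0.01 * error * x  # nudge w to reduce the error

print(round(w, 2))         # learned parameter, close to 2.0
print(round(w * 10.0, 1))  # prediction on the new input 10.0
```

Real models have billions of parameters rather than one, but the loop is conceptually the same: measure the error on training data, then adjust the parameters to shrink it.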

The success of an AI application depends on many components, but the following two stand out:

  • The models, which are the systems used to learn from data
  • The volume and quality of the data the developers use for training

Models

Models can be either open source or closed source (also known as proprietary). Open-source code is generally available to the public, and depending on the license, parties may be able to access, modify, and distribute the model royalty-free. Proprietary models may contain open-source code but rely on private source code or data to deliver unique capabilities. Only authorized parties can access these models.

Data

During training, models are exposed to large quantities of labeled and unlabeled data to learn how to perform specific tasks. These datasets can also be either open source or proprietary. A model’s accuracy depends on the volume and relevance of the training data used. Modern large language models (LLMs) are trained on vast amounts of data, which gives them a better understanding of human language in all its forms.

Training

To create functional AI models, developers feed training algorithms labeled and unlabeled data. Labeled data is information that’s tagged to guide the algorithms. Unlabeled data is raw data without any labels or tags.

Throughout training, developers will use supervised and unsupervised learning. Typically, during supervised learning, the algorithm is given labeled data — the labels are the types of outputs the AI model is learning to produce. During unsupervised learning, the AI model is given unlabeled data to find the structure or patterns within that data.
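The difference can be sketched on toy one-dimensional data. In this illustrative example (not any particular library's API), the supervised step uses labels to learn a decision rule, while the unsupervised step finds the same grouping from the raw numbers alone:

```python
# Supervised: labeled points -> learn a decision rule from the labels.
labeled = [(1.0, "low"), (1.5, "low"), (8.0, "high"), (9.0, "high")]
centers = {}
for label in ("low", "high"):
    points = [x for x, lab in labeled if lab == label]
    centers[label] = sum(points) / len(points)  # mean value per label

def classify(x):
    # Predict the label whose center is nearest to x.
    return min(centers, key=lambda label: abs(x - centers[label]))

# Unsupervised: the same points without labels -> find structure.
unlabeled = [x for x, _ in labeled]
c1, c2 = min(unlabeled), max(unlabeled)  # two initial cluster guesses
group1 = [x for x in unlabeled if abs(x - c1) <= abs(x - c2)]
group2 = [x for x in unlabeled if abs(x - c1) > abs(x - c2)]

print(classify(2.0))   # predicted label for a new point
print(sorted(group1))  # a cluster found without any labels
```

The supervised model can only predict labels it was shown, whereas the unsupervised grouping discovers the two clusters without ever being told they exist.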

Developers will also use other techniques, such as reinforcement learning, during which they provide feedback to the model after it performs a specific action. Over time, the feedback enables the model to make better decisions.
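A toy two-armed bandit illustrates the feedback loop: the model chooses an action, receives a reward, and updates its estimate of that action's value. The payoff numbers and update rule here are illustrative assumptions, not any specific training method:

```python
import random

# Sketch: reinforcement learning as feedback-driven choice.
# The model repeatedly picks one of two actions; the reward it
# receives after each action steers its later choices.

random.seed(0)
payoff = {"a": 0.2, "b": 0.8}  # average reward per action (hidden)
values = {"a": 0.0, "b": 0.0}  # the model's running estimates

for _ in range(500):
    # Mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    reward = payoff[action]
    # Feedback: move the estimate toward the observed reward.
    values[action] += 0.1 * (reward - values[action])

print(max(values, key=values.get))  # the feedback identifies "b"
```

Over time the estimates converge toward the true payoffs, and the model's decisions improve — exactly the "better decisions over time" behavior described above, in miniature.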

Training is not inherently continuous — it’s an intensive and iterative process with feedback loops.

Understanding AI prompts & inference

Once a model is trained, a user can give it a problem, and it will provide an answer — this process is called inference. The AI model draws a conclusion or makes a prediction based on the inference data the user provides. Inference does not train the model further.

A prompt is the input provided to an LLM to elicit a desired response. It can be as simple as a question or statement or a more structured guide to the model’s output.

The design of a prompt significantly affects the quality, relevance, and accuracy of the model's response.

Prompts typically include instructions, context, or examples to shape the response. For example:

A simple prompt: “Explain machine learning.”

A structured prompt: “You are an expert in AI. Explain machine learning to a beginner in three sentences.”

Prompts play a central role in LLMs because these models generate responses purely based on the context provided in the input. Crafting effective prompts, sometimes referred to as prompt engineering, is a key skill for leveraging LLMs efficiently, especially when working on complex tasks.

One way to improve responses is by using multi-shot prompts, which means providing multiple examples within the prompt to show the model what kind of answer is expected. This helps the model recognize patterns and generate more accurate or consistent responses.
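Assembling a multi-shot prompt is mostly string construction. The task, examples, and helper name below are illustrative, not tied to any particular model or API:

```python
# Sketch of a multi-shot (few-shot) prompt: worked examples inside
# the prompt show the model the expected input -> output pattern.

examples = [
    ("The deal closed ahead of schedule.", "positive"),
    ("The fund missed its target.", "negative"),
]

def build_few_shot_prompt(examples, new_input):
    lines = ["Classify the sentiment of each sentence."]
    for text, label in examples:
        lines.append(f"Sentence: {text}\nSentiment: {label}")
    # The final entry leaves the label blank for the model to fill in.
    lines.append(f"Sentence: {new_input}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "Returns exceeded expectations.")
print(prompt)
```

Because the prompt ends at `Sentiment:`, the model's most natural continuation is a label in the same format as the examples — the pattern does the instructing.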

Another useful technique is chain-of-thought prompting, where the prompt guides the model to break down a problem step by step, instead of jumping to an answer immediately. This approach is especially helpful for reasoning tasks, as it encourages the model to explain its thought process, leading to more reliable and well-structured answers.
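The contrast with a plain prompt is easy to see side by side. The question and wording below are illustrative; no specific model or API is assumed:

```python
# Sketch of chain-of-thought prompting: the prompt explicitly asks
# the model to reason step by step before giving its final answer.

question = "A fund charges a 2% fee on $50M of commitments. What is the fee?"

plain_prompt = question  # invites the model to jump straight to an answer

cot_prompt = (
    f"{question}\n"
    "Think through this step by step:\n"
    "1. Identify the relevant figures.\n"
    "2. Show each calculation.\n"
    "3. State the final answer on its own line."
)

print(cot_prompt)
```

The added instructions cost a few tokens but make the model's intermediate reasoning visible, which both improves reliability on multi-step problems and makes errors easier to spot during review.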

By combining these techniques, users can make LLMs more effective, ensuring they provide responses that are not only correct but also clear and logically structured.

In Ontra’s AI-backed solutions, data is fetched in real time for optimal and relevant results. Third-party LLM providers do NOT retain customer data used during inference. Additionally, AI-generated results within Ontra’s solutions are clearly identifiable for human review, whether by Ontra, a Legal Network member, or a customer.


Ontra’s secure, reliable AI for private markets

Ontra’s AI engine, Synapse, powers our private markets technology platform. Synapse is:

  • Safe, secure, and accurate: Customer data is not retained by third-party LLM providers; AI-generated results are clearly identified for human review; and rigorous controls and testing ensure accuracy.
  • Designed for private markets: Models and prompts are purpose-built for private market contracts and workflows, and the ideal model is chosen for optimal performance for each individual workflow.
  • Customer-specific: Data is fetched in real time for optimal and relevant results, and Ontra can leverage historical context derived from our 1M+ private market contract dataset.
  • Human-in-the-loop: Dedicated professionals review AI-generated results, and Ontra’s global Legal Network of 600+ legal professionals supports Synapse.

Adopting AI policies and implementing AI tools is a tall order for private fund managers. Ontra partners closely with our customers to ensure our tools solve real, daily problems safely and securely. Data privacy and security are ongoing priorities at Ontra, as demonstrated by our SOC 2 Type 2 and ISO 27001:2022 certifications.

To learn more about Ontra’s reliable AI and how to adopt forward-thinking AI policies, reach out to Ontra today.


Ontra is not a law firm and does not provide any legal services, legal advice, or referral services and, as a result, we do not provide any legal representation to clients, nor do we participate in any legal representation of clients. The contents of this article are for informational purposes only, and are not intended to constitute or be relied upon as legal, tax, accounting, regulatory, or other professional advice, opinion, or recommendation by Ontra or its affiliates. For assistance or guidance regarding the impact or applicability of the topics discussed in this article to your business, please consult your legal or other professional advisers.
