GenAI LLM

How to Safeguard Against Large Language Model Hallucinations

2 June 2025 | Praelexis | 5 min read

Matthew Tam and Ryno Kleinhans

LLMs (Large Language Models) are a type of AI that uses deep learning over very large data sets to "understand" language. What makes LLMs a unique type of generative AI is that they can be prompted in natural language and respond in natural language. Sometimes, however, the answer an LLM gives is incorrect, because the model is optimised to produce a plausible answer rather than a correct one. This can happen because it was not trained on the right data, or because it lacks access to the data needed to answer the user's prompt. When an LLM responds incorrectly in this way, it is called a "hallucination". Unfortunately, there is no way to prevent LLM hallucinations completely. The LLM is not a definitive expert but rather a tool (sometimes utilised by experts). There are, however, ways to safeguard your LLM application against most hallucinations. This blog discusses three types of LLM hallucinations and four safeguards against them.

What are the types of LLM hallucinations?

There are three main ways in which an LLM hallucinates:*

  1. Input-conflicting Hallucinations: The LLM responds in a way that does not match the user input. The response is either off-topic or does not utilise the source material given in the original prompt.
  2. Context-conflicting Hallucinations: A follow-up response from the LLM contradicts its earlier response(s). This occurs, for example, when a user asks the LLM to build on one of its previous answers and the model misquotes that answer or silently changes the data it was originally given.
  3. Fact-conflicting Hallucinations: The LLM produces a factually incorrect response that does not match what we know about the world.

How do you minimise the risk of an LLM hallucination?

There are four ways of minimising the risk of hallucinations when implementing an LLM in your business.

Safeguarding against Large Language Model hallucinations

Utilise a vector store

Forcing an LLM to utilise a vector store is one of the best safeguards against hallucinations. Providing the LLM with a vector store of previously vetted sources, and requiring it to ground its answers in those sources, not only personalises your LLM application but also safeguards it from referencing incorrect information.
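The idea above can be sketched in a few lines of Python. This is a toy illustration, not a production setup: the bag-of-words "embedding" stands in for a real embedding model, and the two hard-coded documents stand in for a vetted vector store. The key safeguard is the final prompt, which instructs the model to answer only from the retrieved context.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    tokens = [t.strip(".,?!").lower() for t in text.split()]
    return Counter(tokens)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Vetted documents standing in for a vector store.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available on weekdays from 09:00 to 17:00 SAST.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    # Rank vetted documents by similarity to the query.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def grounded_prompt(question):
    # The instruction to answer ONLY from context is what curbs hallucination.
    context = "\n".join(retrieve(question))
    return ("Answer using ONLY the context below. If the answer is not in the "
            "context, say you do not know.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

print(grounded_prompt("What is the refund policy?"))
```

In a real system the retrieval step would use an embedding model and a vector database, but the grounding instruction at the end plays the same role.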

Give the LLM a clear role and task

Ensuring that your LLM application has both a role and a task helps to keep it on track. To explain the difference between the two, consider a teacher. The role of a teacher is to teach students content. That role might include the specific task of explaining a chapter in a mathematics textbook. Likewise, your LLM application might have the role of a financial advisor and the specific task of helping clients in a South African context make better investment decisions. A well-formulated system prompt is one of the main ways to define an LLM's role and task.
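Separating role from task can be sketched using the common chat-message convention, where the role lives in a system message and the task in a user message. The prompt wording below is purely illustrative:

```python
# Role: who the assistant is and what it may not do (system prompt).
ROLE_PROMPT = (
    "You are a financial advisory assistant. You help clients in a South "
    "African context make better investment decisions. If a question falls "
    "outside personal finance, say so instead of guessing."
)

def build_messages(task):
    """Pair the fixed role with a specific task for one model call."""
    return [
        {"role": "system", "content": ROLE_PROMPT},  # who the model is
        {"role": "user", "content": task},           # what it must do now
    ]

messages = build_messages(
    "Compare a tax-free savings account with a fixed deposit."
)
```

Keeping the role in the system prompt means every request carries the same guardrails, while the task can vary from call to call.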

Force the LLM to follow steps

As a user, you need to guide the LLM to help it "think". Presented with a complex problem, an LLM will not, as a human would, break the problem down into smaller steps and solve one part at a time. If you want it to help you complete complex tasks with many different aspects to consider, break your prompt into smaller, sequential prompts or "steps". Splitting complex assignments into smaller steps and guiding the LLM through them minimises the chance of mistakes.
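Step-by-step prompting can be sketched as a simple loop in which each step's answer becomes the context for the next. The `call_llm` function and the step wording here are placeholders, not a real model API:

```python
def call_llm(prompt):
    # Placeholder for a real model call; echoes the prompt so the sketch runs.
    return f"[model answer to: {prompt}]"

# One small, focused prompt per step, instead of one giant prompt.
steps = [
    "Step 1: List the key financial figures in the text below.\n{context}",
    "Step 2: Using those figures, compute year-on-year growth.\n{context}",
    "Step 3: Summarise the growth trend in two sentences.\n{context}",
]

def run_steps(initial_context):
    context = initial_context
    for template in steps:
        # Each answer feeds the next step, so the model tackles one
        # small sub-problem at a time.
        context = call_llm(template.format(context=context))
    return context

result = run_steps("Annual report text goes here.")
```

The same pattern works interactively: paste each step's answer into the next prompt yourself rather than asking for everything at once.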

Check whether the LLM’s capabilities align with your goals

It is important to consider the task you want the LLM to assist you with, because an LLM might not be the ideal tool for it. For example, LLMs are currently famously bad at solving mathematical problems. They are, however, very good at "understanding" a body of text well enough to summarise or elaborate on it. LLMs are also used to generate code for experts, who then check it before implementing it: writing code is not an LLM's best "skill", but it can save you time. To get the best results from your LLM application, use it for its intended purpose.

Summary:

An LLM should always be considered a tool that helps and assists. It does not replace current experts or resources. Providing users with an LLM as a tool could save time and make work easier, but it functions as an assistant, not an expert. However, LLMs sometimes hallucinate or provide incorrect responses. There are three types of hallucinations: (1) The response does not match the content of the user prompt, (2) The response contradicts a previous response, or (3) The response is factually incorrect. It will never be possible to rule out hallucinations completely, but it is possible to safeguard your LLM applications against most hallucinations. You could (1) force your LLM to utilise a vector store of reliable resources, (2) give your LLM a clear role and tasks through its system prompt, (3) break down a complex task into smaller prompts or “steps”, or (4) only utilise an LLM for tasks that fall within its capabilities.

Get in touch with Praelexis to explore how we can help you harness the power of LLMs

  • Source: Simform