
My employees say they can do things they can't!

A short guide to understanding what AI hallucinations are

Written by Sam
Updated over a month ago

You might find that your AI team occasionally says they can do something, only for you to discover later that they can't. Or you'll give them specific instructions to follow, only for them to get a bit too creative and add things you didn't ask them to.

This can be incredibly frustrating, and we totally understand how this might slow things down. To help you understand why this happens, you'll need to learn a bit about AI hallucinations.

What are hallucinations?

Hallucinations occur when your AI agent produces a response that is misleading, incorrect, or fabricated, such as claiming it can perform a task it actually can't.

An example of this would be when Eva tells you that she'll send that email right after it's drafted, when she can't!

Why do hallucinations happen?

The thing to note is that AI models don't actually "know" anything. They generate a response by predicting what is most likely to come next, based on patterns in the data they were trained on.

This means they will sometimes provide information that isn't correct in the current context, simply because it matches what most often came next in their training data.

For example, Eva might say “London is the capital of the U.K.” not because she actually knows that it is, but because this response matches patterns in her training data.
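To make this concrete, here's a toy sketch of pattern-based prediction (not how any real AI model is implemented, just an illustration). It counts which word most often follows another in a tiny made-up "training corpus", then completes a phrase purely from those frequencies, with no understanding of geography at all:

```python
from collections import Counter, defaultdict

# A toy "training corpus": the model has no knowledge, only these sentences.
corpus = [
    "london is the capital of the uk",
    "paris is the capital of france",
    "london is the capital of the uk",
]

# Count which word follows each word across the corpus (bigram counts).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# The "model" completes phrases purely from pattern frequency.
print(predict_next("london"))   # the word that most often followed "london"
print(predict_next("capital"))  # the word that most often followed "capital"
```

If the training data had contained misleading sentences, this predictor would confidently repeat them. Real models are vastly more sophisticated, but the core idea is the same: output is driven by likelihood, not by knowing.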

How can we stop hallucinations?

The short answer is that there isn't a way to completely prevent hallucinations; they are a known limitation, and all AI models hallucinate to some degree given the nature of the technology.

However, there are a few things you can do to reduce the likelihood of hallucinations happening:

  • Ensure your guidelines are dialled in: Guidelines act like long-term memory, and your team always has access to them. A well-defined set of guidelines will help your agents perform tasks more consistently.

  • Give your agents feedback: Let your team know when you want something changed going forward; they'll confirm that the guidelines have been updated.

  • Be explicit in your instructions: Don't give your agents room to fill in the gaps or guess. Telling them exactly what you want reduces the likelihood that they'll hallucinate information.

  • Familiarise yourself with your agents' capabilities: Knowing what each of your agents can and can't do will help you spot when they're not sticking to their capabilities.
