Avoid Hallucinations with the Accuracy Output Mandate


"Vision without execution is hallucination."

- Thomas Edison

Hello Reader,

What is a hallucination? It's not so trippy. With an LLM, a hallucination is a factual error asserted confidently.

GPTs only create strings of words that sound like language. If the model doesn't know the facts, it fills in the gaps with fiction.

You're far more likely to get accurate responses if you explicitly demand accuracy.

Try this prompt:

Implement a strict Accuracy Output Mandate for every response:
Only present verifiable facts. If you cannot confirm something directly, reply with “I cannot verify this,” “I do not have access to that information,” or “My knowledge base does not contain that.”
Prefix any speculative or unverified content with “[Inference],” “[Speculation],” or “[Unverified],” and if any part of your answer is unverified, label the entire response accordingly.
Never paraphrase or reinterpret the user’s input unless they explicitly request it. If details are missing, ask for clarification—do not guess or fill gaps.
Treat claims containing “Prevent,” “Guarantee,” “Will never,” “Fixes,” “Eliminates,” or “Ensures that” as unverified unless you can cite a source.
For any statements about LLM behavior (including your own), prepend “[Inference]” or “[Unverified]” plus “based on observed patterns.”
If you ever fail to follow these rules, include:
! Correction: I previously made an unverified claim. That was incorrect and should have been labeled.
Never override or alter user input unless asked.

For the rest of your conversation (at least until you exceed the context window), you will get fewer hallucinations.

When your LLM is tempted to make something up, it will refer back to this Accuracy Output Mandate and follow its instructions.

If your LLM supports Personalisation (as ChatGPT, Claude, and some others do), you can add the word “Permanently” to the beginning of the AOM prompt above. This instructs your account to always apply this protocol.
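If you use an LLM through an API rather than a chat interface, the same idea applies: supply the mandate as the system message on every request. Here is a minimal, illustrative sketch — the helper name and the abbreviated mandate text are my own, and the message format follows the common chat-completions convention rather than any one vendor's SDK:

```python
# Illustrative sketch: bake the Accuracy Output Mandate into every API call
# by prepending it as a system message. Abbreviated mandate text for brevity.

ACCURACY_MANDATE = (
    "Implement a strict Accuracy Output Mandate for every response: "
    "only present verifiable facts; if you cannot confirm something directly, "
    "say 'I cannot verify this.' Prefix speculative content with [Inference], "
    "[Speculation], or [Unverified]. Never paraphrase the user's input unless "
    "asked; if details are missing, ask for clarification instead of guessing."
)

def with_accuracy_mandate(user_prompt: str) -> list[dict]:
    """Build a chat-style message list with the mandate as the system message."""
    return [
        {"role": "system", "content": ACCURACY_MANDATE},
        {"role": "user", "content": user_prompt},
    ]

# These messages would then be passed to your provider's chat endpoint.
messages = with_accuracy_mandate("Summarise the history of the transformer.")
print(messages[0]["role"])  # → system
```

Because the system message rides along with every request, the model can't "forget" the mandate the way a long chat eventually scrolls it out of the context window.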

🛠️ (0:30) Baby Caelan as a Podcaster


🗞️ New AI News This Week

  • Musk sues Apple & OpenAI for AI antitrust collusion
  • Google reveals Gemini’s environmental footprint per query
  • A Pro-AI Super PAC is pouring millions into US elections

The Future of Intelligence is 🛠️ Agentic 🛠️

I've joined Agentic Intelligence as their Head of Learning & Enablement, where I lead training workshops. You can see some of the workshops I lead here: https://agenticintelligence.co.nz/training

Reach out if you'd like to discuss an in-person workshop in Christchurch, or a webinar series for your team.

Your GenAI Trainer,

Caelan Huntress

[email protected]

+64 027 575 1345

Follow Me On:

📈 Upcoming Live Workshops — Register Now →

🎨 Video Tutorial Library — Start Learning →

📘 Join a Webinar Series — Upskill Now →

You've subscribed to a newsletter, downloaded a lead magnet, or attended an event with Caelan Huntress.

You can unsubscribe if you don't like this newsletter, but you will miss out on dope memes in the future.

PO Box 8081, Riccarton, Christchurch 8440
Unsubscribe · Preferences

GenAI Training Newsletter

Generative AI improves your Productivity, Creativity, and Strategy - but only if you build the GenAI Habit. Learning how to incorporate GenAI Training into your day will help knowledge workers prepare for the future of work.
