Avoid Hallucinations with the Accuracy Output Mandate


"Vision without execution is hallucination."

- Thomas Edison

Hello Reader,

What is a hallucination? It's not so trippy. With an LLM, a hallucination is a factual error asserted confidently.

An LLM only creates strings of words that sound like language. If it doesn't know the facts, it fills the gaps with fiction.

Responses are far more likely to be accurate if you explicitly demand accuracy.

Try this prompt:

Implement a strict Accuracy Output Mandate for every response:
Only present verifiable facts. If you cannot confirm something directly, reply with “I cannot verify this,” “I do not have access to that information,” or “My knowledge base does not contain that.”
Prefix any speculative or unverified content with “[Inference],” “[Speculation],” or “[Unverified],” and if any part of your answer is unverified, label the entire response accordingly.
Never paraphrase or reinterpret the user’s input unless they explicitly request it. If details are missing, ask for clarification—do not guess or fill gaps.
Treat claims containing “Prevent,” “Guarantee,” “Will never,” “Fixes,” “Eliminates,” or “Ensures that” as unverified unless you can cite a source.
For any statements about LLM behavior (including your own), prepend “[Inference]” or “[Unverified]” plus “based on observed patterns.”
If you ever fail to follow these rules, include:
! Correction: I previously made an unverified claim. That was incorrect and should have been labeled.
Never override or alter user input unless asked.
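The trigger-word rule in the mandate can also be checked mechanically on a model's output. Here is a minimal sketch in Python (the function name and the flat substring match are my own simplification for illustration, not part of the mandate itself):

```python
import re

# Trigger phrases from the mandate: claims containing these should be
# treated as unverified unless a source is cited.
TRIGGERS = ["prevent", "guarantee", "will never", "fixes",
            "eliminates", "ensures that"]

def needs_unverified_label(response: str) -> bool:
    """Return True if the response makes an absolute claim
    but carries no [Inference]/[Speculation]/[Unverified] label."""
    text = response.lower()
    has_trigger = any(t in text for t in TRIGGERS)
    has_label = bool(re.search(r"\[(inference|speculation|unverified)\]", text))
    return has_trigger and not has_label

needs_unverified_label("This patch fixes the bug permanently.")  # True
needs_unverified_label("[Unverified] This guarantees uptime.")   # False
```

A check like this won't catch every hallucination, but it flags the confident absolute claims the mandate singles out, so you know which answers to double-check.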

For the rest of your conversation (at least until you exceed the context window), you will get fewer hallucinations.

If your LLM is tempted to make something up, it will remember this Accuracy Output Mandate and follow its instructions.

If your LLM supports Personalisation (as ChatGPT, Claude, and some others do), you can add the word "Permanently" to the beginning of the AOM prompt above. This instructs your account to apply the protocol in every conversation.
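If you work with an LLM through an API rather than a chat window, the same effect comes from installing the mandate as a system message so it governs every turn. A minimal sketch (the `AOM` string is abbreviated here; paste the full mandate in practice, and swap the message list into whatever client library you use):

```python
# Sketch: installing the Accuracy Output Mandate as a system prompt.
# The mandate text is abbreviated; use the full version in practice.
AOM = (
    "Implement a strict Accuracy Output Mandate for every response: "
    "Only present verifiable facts. If you cannot confirm something "
    "directly, reply with 'I cannot verify this.' Prefix any speculative "
    "or unverified content with '[Inference]', '[Speculation]', "
    "or '[Unverified]'."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the mandate as a system message so it applies to the
    whole conversation, not just one reply."""
    return [
        {"role": "system", "content": AOM},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarise this week's AI news.")
```

Most chat-style APIs accept a message list in this role/content shape, which is why a system message is the API equivalent of adding "Permanently" in a personalised chat account.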

🛠️ (0:30) Baby Caelan as a Podcaster

video preview

🗞️ New AI News This Week

  • Musk sues Apple & OpenAI for AI antitrust collusion
  • Google reveals Gemini’s environmental footprint per query
  • A Pro-AI Super PAC is pouring millions into US elections

The Future of Intelligence is 🛠️ Agentic 🛠️

I've joined Agentic Intelligence as their Head of Learning & Enablement, and I've been leading training workshops for them. You can see some of the workshops I lead here: https://agenticintelligence.co.nz/training

Reach out if you'd like to discuss an in-person workshop in Christchurch, or a webinar series for your team.

Your GenAI Trainer,

Caelan Huntress

[email protected]

+64 027 575 1345

Follow Me On:

📈

Upcoming Live Workshops

Register Now →

🎨

Video Tutorial Library

Start Learning →

📘

Join a Webinar Series

Upskill Now

You've subscribed to a newsletter, downloaded a lead magnet, or attended an event with Caelan Huntress.

You can unsubscribe if you don't like this newsletter, but you will miss out on dope memes in the future.

PO Box 8081, Riccarton, Christchurch 8440
Unsubscribe · Preferences
