Leah Clark
May 15, 2025
May 21, 2025

Inside MagicForm: Stories and Adventures


Large Language Models (LLMs) like GPT-4 have transformed how businesses automate support, onboard users, and interact with customers. But LLM output can still be unpredictable—and at times, confidently wrong. This raises an important operational question:

What kinds of mistakes do LLMs make, and how can we constrain them when they're speaking on behalf of a business?

To address this, we need to understand three overlapping but distinct error modes often attributed to AI-generated content: lying, bullshitting, and hallucinating.

First: Can an LLM Lie?

Technically, no.

Lying presupposes intentional deception: a lie occurs when a speaker knows the truth but deliberately states something false or omits key information to mislead. LLMs don’t “know” anything in the human sense. They don’t possess beliefs or intentions. They generate outputs based on statistical patterns learned from training data, with no internal model of reality behind them.

So when an LLM says something untrue, it’s not lying. But that doesn’t make the output safe or acceptable—especially in a customer support or sales context.

What Is “Bullshitting,” Then?

[Image: Google's AI-generated response]

The philosopher Harry Frankfurt described bullshit as communication made without regard for truth. The bullshitter isn’t trying to lie or to tell the truth; they’re trying to impress, persuade, or fill space without any concern for accuracy, much like your Uncle Jerry, the family’s great storyteller, when he starts explaining the theory of relativity.

Talking with AI could be like a long conversation with Uncle Jerry. If a model isn’t explicitly told to prioritize grounded, source-based responses, it might optimize for plausibility or user satisfaction rather than truth. It might produce generic or “confident-sounding” responses that are, in fact, false or even dangerously misleading.

Example:

Customer: Does your software support single sign-on?
AI: Absolutely! MagicForm.AI seamlessly integrates with major SSO providers like Okta and Azure AD.
Problem: That might not be true. The model is optimizing for fluency and helpfulness, not veracity.

Hallucination: The LLM-Specific Error Mode

“Hallucination” in LLM terminology refers to any instance where AI outputs factually incorrect or fabricated content.

[Image: A Google AI example of a hallucination]

Unlike traditional software bugs, hallucinations occur because LLMs assemble plausible responses from linguistic patterns rather than looking facts up in an authoritative source. They might generate:

  • Nonexistent API endpoints
  • Incorrect citations
  • Fabricated statistics
  • Imagined product features

Hallucinations are particularly dangerous in high-trust environments such as legal or healthcare settings, or even an everyday Google search, where people expect to learn accurate facts about a topic. They can erode credibility and introduce liability.

Setting Up Rails: How to Prevent These Errors in Your Chat Agent

MagicForm.AI is designed to mitigate these risks through a layered architecture that combines grounding, guardrails, and customization. Grounding, in the general sense, means establishing a strong foundation of facts; for LLMs, it refers to ensuring that responses are anchored in real-world knowledge rather than just statistical patterns in the training data. Here’s how to deploy an AI agent safely and prevent misinformation from reaching your customers.

1. Ground All Outputs in a Verified Knowledge Base

The most effective strategy is retrieval-augmented generation (RAG)—the model doesn’t rely purely on pretraining but instead retrieves the most current and relevant content from your company’s documentation and generates responses based on that.

MagicForm.AI uses this approach to ensure answers are anchored in your actual content:

  • Upload help docs, product manuals, FAQs, SOPs
  • The AI retrieves relevant excerpts at runtime
  • The LLM generates responses based on this context

Benefits:

  • Reduces hallucination
  • Keeps answers aligned with current policies
  • Makes updates easy—just update your documents and reprocess them
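To make the retrieve-then-generate flow concrete, here is a minimal sketch of a RAG loop. It uses scikit-learn’s TF-IDF for retrieval and a hypothetical call_llm function standing in for whatever model provider you use; the documents are made-up examples, and this illustrates the pattern rather than MagicForm.AI’s actual pipeline.

```python
# A minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: scikit-learn for TF-IDF retrieval and a hypothetical
# call_llm(prompt) function in place of your model provider's SDK.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Your verified knowledge base: help docs, product manuals, FAQs, SOPs.
documents = [
    "MagicForm supports importing help docs, product manuals, and FAQs.",
    "Answers are generated from excerpts retrieved at runtime.",
    "Update your documents and reprocess them to refresh the knowledge base.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

def answer(question: str) -> str:
    """Generate a response grounded only in the retrieved excerpts."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)  # hypothetical LLM call; swap in your provider's SDK
```

Because the prompt carries the retrieved excerpts with every request, updating the knowledge base immediately changes what the model can say, with no retraining involved.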

2. Define System Instructions That Prioritize Accuracy Over Eloquence

Behind every LLM application is a system prompt that shapes behavior. The default behavior of many models is to be helpful and verbose—but you can constrain this.

Effective system instructions for reducing hallucinations might include:

  • "If you cannot find an answer in the knowledge base, say you don’t know."
  • "Never fabricate product features or make assumptions."
  • "Cite exact sources where possible."
  • "Speak clearly and briefly. Do not guess."

In the “Additional Instructions” field, MagicForm.AI lets you edit the system prompt that governs your chatbot’s responses, so you can tailor its behavior to match your industry, tone, and risk tolerance.
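For readers who wire this up themselves, here is a minimal sketch of how accuracy-first instructions shape a model call, using the OpenAI Python SDK purely for illustration. The model name and prompt wording are examples, not MagicForm.AI’s internals; in MagicForm.AI you would paste comparable instructions into the field above rather than write code.

```python
# A minimal sketch: a system prompt that prioritizes accuracy over eloquence.
# Assumptions: the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in
# the environment. Model name and wording are illustrative examples.

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are a support agent for MagicForm.AI.
- If you cannot find an answer in the provided knowledge base, say you don't know.
- Never fabricate product features or make assumptions.
- Cite the source document where possible.
- Speak clearly and briefly. Do not guess."""

def ask(question: str, context: str) -> str:
    """Answer a customer question, constrained by the system prompt and context."""
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name; use whatever your stack runs
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Knowledge base:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,  # lower temperature favors consistency over creativity
    )
    return response.choices[0].message.content
```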

3. Use Q&A Pairs to Lock in Mission-Critical Responses

To eliminate variability in how key questions are answered (e.g., pricing, compliance, integrations), MagicForm.AI allows you to edit and save accepted responses to specific prompts.

This creates a lightweight form of instructional fine-tuning:
When the AI sees a similar question, it pulls from this trusted answer instead of generating a new one.

Use this for:

  • Legal disclaimers
  • Payment and subscription info
  • Feature availability
  • Competitive comparisons
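A rough sketch of the idea follows, assuming a simple dictionary of approved Q&A pairs and standard-library string similarity. The saved answers here are invented placeholders, and the difflib matcher is a stand-in; a production system would more likely use embedding-based matching.

```python
# Sketch of "trusted Q&A pairs": answer known, high-stakes questions from an
# approved list before ever calling the model. difflib is a simple stand-in
# for semantic matching; the saved answers below are invented examples.

from difflib import SequenceMatcher

TRUSTED_QA = {
    "do you support single sign-on": "Please see our integrations page for current SSO availability.",
    "what is your refund policy": "Refunds follow the terms on our pricing page; a team member can walk you through specifics.",
}

def lookup_trusted_answer(question: str, threshold: float = 0.8) -> str | None:
    """Return a saved answer if the question closely matches a trusted prompt."""
    q = question.lower().strip()
    best_answer, best_score = None, 0.0
    for saved_question, saved_answer in TRUSTED_QA.items():
        score = SequenceMatcher(None, q, saved_question).ratio()
        if score > best_score:
            best_answer, best_score = saved_answer, score
    return best_answer if best_score >= threshold else None

def respond(question: str) -> str:
    trusted = lookup_trusted_answer(question)
    if trusted is not None:
        return trusted          # locked-in answer, no generation involved
    return answer(question)     # otherwise fall back to the grounded RAG flow above
```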

4. Set Fallbacks and Refusals Intentionally

Sometimes the best answer is, “Let me get back to you.”

You can configure MagicForm.AI to:

  • Ask a clarifying question when it detects that context is missing, or
  • Default to a fallback phrase like:

“That’s outside of what I can confirm. Let me flag that for a team member.”

This avoids hallucination and sets a clear handoff to human support, improving the customer experience.
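Here is a sketch of that fallback logic. The retrieval_confidence and notify_team helpers are hypothetical placeholders for your own retrieval scoring and handoff hooks, and the threshold value is illustrative; it would need tuning against your own evaluation data.

```python
# Sketch of intentional fallback behavior: when retrieval confidence is low,
# refuse gracefully and flag the conversation for a human instead of guessing.
# retrieval_confidence() and notify_team() are hypothetical helpers.

FALLBACK = "That's outside of what I can confirm. Let me flag that for a team member."
CONFIDENCE_THRESHOLD = 0.35  # illustrative; tune against your own evaluation set

def safe_respond(question: str) -> str:
    # Hypothetical helper: returns (best match score, retrieved context text).
    score, context = retrieval_confidence(question)
    if score < CONFIDENCE_THRESHOLD:
        notify_team(question)  # hypothetical handoff hook (email, Slack, ticket, ...)
        return FALLBACK
    return answer_with_context(question, context)  # your grounded generation path
```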

From Freeform Generation to Controlled Dialogue

LLMs are powerful, but left unbounded they are unreliable. Businesses need a framework for thinking about AI errors and a strategy for containing them.

MagicForm.AI helps you put these safeguards into action with intuitive controls. When your AI assistant responds, you can be more confident that it’s telling the truth, not just sounding like it is.

Want help auditing your current chatbot’s responses for hallucinations?

Let us know. We’ll help you tighten the rails so your AI always speaks in alignment with your brand—and your actual product.

Your business, automated smarter.

This article was written with the help of AI and further refined by Leah Clark @Nuestra.AI