What are AI hallucinations?
AI hallucinations are when a large language model (LLM), such as Google Bard, ChatGPT, or Bing AI Chat, presents false information as facts.
Another term for an AI hallucination is confabulation.
Why do AI hallucinations occur?
LLMs are designed to generate unique, fluent, and coherent text, but they lack the ability to apply logic or check the factual accuracy of what they produce. AI can’t determine on its own if the text it generates is plausible or factual in reality.
However, AI hallucinations are not always plausible. Sometimes they are nonsensical, and there is no obvious way to determine what causes a particular hallucination on a case-by-case basis.
What causes AI hallucinations?
Data quality
AI hallucinations can happen because of inaccurate information in the source content. LLMs rely on large data sets, but that data can contain errors, biases, or inconsistencies.
For example, the training datasets for ChatGPT and Google Bard consisted of a wide range of internet sources, including Wikipedia articles, books, and other publications. Any errors, biases, or outdated information in those sources can resurface in the model’s output.
Generation method
Hallucinations can also stem from how a model is trained and how it generates text. For example, models might be biased towards generic or overly specific words, which influences the information they generate.
Input context
AI hallucinations can happen if the input prompt is unclear, inconsistent, or contradictory. While the previous two reasons for hallucinations are out of the user’s hands, precise user input can help avoid AI hallucinations.
What are the types of AI hallucinations?
AI hallucinations can range from minor inconsistencies to completely fabricated or contradictory information.
There are several types, including prompt contradictions, nonsensical outputs, factual contradictions, and irrelevant or random information.
Prompt contradiction – the response contradicts the prompt that produced it. For example:
Prompt: Write me a birthday card for mom.
Expected AI answer: Happy birthday, mom. I love you.
Hallucinated AI response: I am so happy we are celebrating our first anniversary! To many more. Love, Lucy.
Nonsensical output – the response sounds fluent but has no grounding in reality. For example:
Prompt: Tell me about the benefits of meditation.
Expected AI answer: Meditation has numerous benefits, including stress reduction, improved focus, and emotional well-being.
Hallucinated AI response: Meditation transcends earthly concerns, unlocking portals to a parallel dimension where unicorns offer wisdom, and your thoughts are glittering butterflies guiding you through the cosmic realm of inner tranquility.
Factual contradiction – the model presents fabricated information as fact. For example:
Prompt: What is the capital of France?
Expected AI answer: The capital of France is Paris.
Hallucinated AI response: The capital of France is Zagreb.
Irrelevant or random information – the response includes details unrelated to the prompt. For example:
Prompt: Can you recommend a good chocolate chip cookie recipe?
Expected AI answer: Certainly! Here is a classic chocolate chip cookie recipe with step-by-step instructions.
Hallucinated AI response: Here is a classic chocolate chip cookie recipe. Today’s temperature in Toronto is –2 degrees.
How to prevent AI hallucinations?
There are several ways users can minimize the occurrence of AI hallucinations:
- Use clear and specific prompts – give the model a role to play and add context to guide the LLM towards the intended output
- Give examples – provide several examples of the desired output format to help the LLM recognize the pattern
- Tune the LLM’s parameters – LLMs often expose parameters that users can tune. For example, the temperature parameter controls output randomness: the higher the temperature, the more random the output. Keeping it low makes responses more predictable, as shown in the sketch after this list
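A minimal sketch of these tips in practice, assuming the OpenAI Python SDK; the model name and the question/answer examples are illustrative only, and the same ideas (a role, added context, a few worked examples, and a low temperature) carry over to any LLM provider:

```python
# Illustrative sketch only: model name and examples are assumptions,
# not recommendations from this article. Requires `pip install openai`
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumption: any chat-capable model works here
    temperature=0.2,       # low temperature = less random, more predictable output
    messages=[
        # 1. Clear, specific prompt: give the model a role and extra context.
        {"role": "system",
         "content": "You are a geography assistant. Answer only from well-known "
                    "facts. If you are not sure, say you don't know."},
        # 2. Give examples: a few question/answer pairs show the desired format.
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
        {"role": "user", "content": "What is the capital of Croatia?"},
        {"role": "assistant", "content": "The capital of Croatia is Zagreb."},
        # 3. The actual question.
        {"role": "user", "content": "What is the capital of Australia?"},
    ],
)

print(response.choices[0].message.content)
```

Raising the temperature towards 1 makes the output more varied and creative, but also more prone to drifting away from the facts it was given.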
Related content:
The ultimate guide to generative AI chatbots for customer service
Learn how generative AI can improve customer support use cases to elevate both customer and agent experiences and drive better results.
Predictive marketing 101: What is it and how to utilize it
Learn all you need to know about predictive marketing and how generative AI and a customer data platform play a role in enabling businesses to succeed.
How generative AI can boost customer experience 10X through customer data platforms
Transform customer experience with generative AI by providing targeted offers, personalized content, and identifying emerging trends.
How to implement personalization and AI in omnichannel marketing
Learn how to implement personalization in your omnichannel marketing strategy to improve customer experience and drive sales.
Conversational AI in marketing
Explore the benefits of conversational AI for marketing agencies, from boosting customer satisfaction to optimizing workflows.
Everything you need to know about generative AI and security
Generative AI is here and we marvel at its astounding powers. But, can these powers be used for more nefarious purposes? Read to find out more!