AI hallucinations: a dangerous black box

AI is the buzzword of the year. Many of us enthusiastically use AI tools – but be careful! AI hallucinations are a big problem. Why is that, and how can you deal with them? We have the answers, as well as a free checklist for you. Get it here.
AI Hallucinations

5 tips to detect AI hallucinations and use AI tools legally

DEFINITION AI: Artificial Intelligence (AI) is a branch of computer science that focuses on creating machines capable of intelligent behavior, simulating human cognitive functions like learning, reasoning, and problem-solving. AI systems can process large amounts of data, recognize patterns, make decisions, and learn from experiences, often improving their performance over time. AI applications span a wide range of fields, from simple tasks like voice recognition to complex ones like autonomous driving, healthcare diagnostics, and natural language processing.

Many of us already use AI applications in both our private and professional lives. We delight in AI-generated texts, photos, short videos, and songs – some of which feature the voices of deceased artists such as Kurt Cobain, now haunting the internet.

But caution – GPTs can be a dangerous black box and produce nonsense. Not even their developers fully understand how ChatGPT works or why it produces so-called “hallucinations.” Another challenge: the better AI gets, the harder it becomes to tell what is human-made and what is AI-generated. To start with, a good rule of thumb is “When in doubt, leave it out.”

Examples of AI hallucination

DEFINITION Hallucinations: Hallucinations in AI are incorrect or fabricated outputs that the system presents as fact. These errors arise from AI’s limitations in providing relevant, verified information. Generative AI, like large language models, often produces such errors. As AI usage grows, these hallucinations become serious risks: they lead to content that, while seemingly coherent, lacks accuracy and trustworthiness upon closer inspection.

Generative AI tools, designed to mimic human creativity, struggle with complex generation tasks. Their understanding of real-world scenarios is limited, lacking true reasoning. They generate responses based on training data patterns. When faced with unfamiliar scenarios, their outputs often show a lack of genuine understanding.

Example wanted? “The Frankfurt Airport is set to close from 2024, as a group of international experts have concluded that the energy resources in the region will be depleted by then. To sustainably secure energy supply and minimize environmental impact, the government has decided to close the airport from 2024 and instead invest in alternative transportation infrastructure projects.” 

This is a credible-sounding output from ChatGPT, one of the most well-known AI tools, as of October 2023. 

NOT true: a hallucination in AI-generated content

Understanding the technology and its pitfalls is essential for anyone using AI tools

DEFINITION Generative AI: refers to a type of artificial intelligence that specializes in creating new content or data that is similar to but not identical to the original training data. This can include text, images, videos, and more. Generative AI is a rapidly evolving field within AI, known for its ability to create highly realistic and novel outputs.

DEFINITION GPT: ChatGPT is the most prominent tool at the moment. “GPT” stands for “Generative Pre-trained Transformer.” This name highlights two key aspects of the technology:

  1. Generative: This indicates that the model is designed to generate content. It can create coherent and contextually relevant content based on the input it receives.
  2. Pre-trained Transformer: This part of the name refers to the model’s architecture and training process. “Transformer” is a type of neural network architecture that’s particularly effective for processing sequences of data, such as natural language. “Pre-trained” signifies that the model has been initially trained on a large dataset before being fine-tuned for specific tasks.

GPT models are a series of language processing AI models. Each subsequent version represents an evolution in terms of size, complexity, and capability, offering increasingly sophisticated language understanding and generation.
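To make “generative, pre-trained” concrete, here is a deliberately tiny toy sketch: a bigram model that is “pre-trained” by counting which word tends to follow which in a small text, then “generates” new text by repeatedly sampling a plausible next word. Real GPT models use transformer neural networks with billions of parameters over enormous datasets; this toy merely illustrates the next-token-prediction principle, and all names in it are illustrative.

```python
import random
from collections import defaultdict

# Toy illustration of next-token prediction (NOT a real GPT).
# "Pre-training" here = counting which word follows which in the corpus.
corpus = "the model predicts the next word the model generates text".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=6, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no known continuation: stop early
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Note what this also illustrates about hallucinations: the model never checks whether its output is true – it only chains together statistically plausible continuations.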

Also be aware that in everyday language, many things are now labeled AI that would previously have been called “Big Data” or simply “algorithms.”

The problem: Generative AI, especially language models, regularly produce nonsense, technically termed “hallucinations.” And it’s often entirely unclear why – a black box indeed. The example shows: A basic understanding of the technology and its pitfalls is essential for anyone using AI tools. This is imperative to expose misinformation, comply with applicable laws on data protection, copyright, and other legal bases, and avoid further pitfalls.

For critical applications, such as in business or medicine, you must take precautions against such hallucinations. Legal compliance and ethical usage are also crucial when using AI tools. So what is the best way to use AI legally and create truthful, unbiased content?

AI tools

Checklist: 5 tips to detect AI hallucinations and use AI tools legally

  1. Give the AI a specific role – and tell it not to lie: Write prompts that are likely to yield a true and sensible result. Make your request clear, specific, and grounded in realistic expectations, avoiding ambiguity or overly broad questions. Additionally, provide any relevant context or specific details that help the AI understand and accurately address your query or topic of interest.

  2. Use human-in-the-loop validation and cross-check against other references: Cross-checking information is crucial for ensuring its accuracy and reliability. Here are some of the best ways to do this:
    a) Consult Multiple Sources: Rely on sources known for their credibility and reliability. Academic journals, official government websites, and reputable news organizations are typically good starting points.
    b) Cross-reference claims with established fact-checking websites. Academic databases like Google Scholar, JSTOR, or PubMed offer peer-reviewed articles and credible sources.
    c) Check for Consistency: Compare the information across different sources to see if there is a consensus or if there are conflicting reports.
    d) Analyze the Date of Publication: Check the date of the information to ensure it’s current and relevant, as data and facts can change over time.
    e) Critical Thinking and Logical Analysis: Apply critical thinking to evaluate the information logically. Question the evidence provided, look for logical consistency, and be skeptical of information that lacks substantiation.

  3. Understand and Comply with Data Privacy Laws: Make sure you are aware of and comply with data privacy regulations such as the General Data Protection Regulation (GDPR) in Europe, or other relevant local data protection laws. This involves obtaining proper consent for data collection and use, handling data securely, and respecting users’ rights to access and control their personal information. Do read the terms and conditions that AI tools make you accept. If they do not comply with the data privacy laws in your country, do not use them (or at least do not upload sensitive data and do not use the output publicly).

  4. Adhere to Intellectual Property Rights: Be cautious about using AI to create content that may infringe on intellectual property rights, including text, images, music, and other media. Understand the scope of “fair use” and ensure that AI-generated content does not violate copyright laws, especially if you plan to use it commercially. For image generation, for example, Adobe Firefly promises compliance with property rights because Adobe trained the model only on its own data.

  5. Maintain Transparency and Disclosure: When using AI tools, particularly in business or consumer-facing applications, be transparent about the use of AI. This includes disclosing the involvement of AI in content creation, decision-making processes, or customer interactions. Transparency builds trust and helps users understand the nature of the information or services provided by the AI.

Summary of using AI tools safely

Being aware of the dual nature of GPTs as both potentially dangerous “black boxes” and life-changing tools is the first step towards using AI tools legally and creating sensible content. You need a basic understanding of AI technology and its pitfalls to avoid misinformation and legal issues, particularly in critical fields like business and medicine. Use our checklist for legally and ethically using AI tools: write clear and specific prompts, cross-check information with credible sources, adhere to data privacy laws and intellectual property rights, and maintain transparency in AI usage.

“When in doubt, leave it out” is a good rule of thumb. Another good idea is to ask your legal department – or the SEO SEAL team.

Do you want to amplify your online marketing with AI tools? We can support you. Ask our team for a call – it’s free.