
Artificial Intelligence Literacy: Basics

A continually updated guide to the appropriate use of AI-generated content in study and research

Overview

What is "hallucinations" in AI? 

  • a result of algorithmic distortions that leads to the generation of false information, manipulated data, and imaginative outputs (Maggiolo, 2023).
  • the system provides an answer that is factually incorrect, irrelevant, or nonsensical because of limitations in its training data and architecture (Metz, 2023).
  • the generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context (Marr, 2023).
Factors Affecting AI Hallucinations (Maggiolo, 2023)

  • Insufficient data or context - not enough data for the model to draw on
  • Excessive generalization - too much generalization, resulting in bizarre and illogical data connections
  • Overfitting - memorization instead of generalization (see the sketch after these lists)

Classifications of AI Hallucinations (Jennings, 2023)

  • Contradictions - inconsistencies and discrepancies in the context and data
  • False facts - fake sources of information, false statements, fabricated content
  • Lack of nuance/context - lack of the requisite knowledge to reconcile meaning, or slight and subtle differences in language or context
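
The overfitting item above describes memorization instead of generalization. The short Python sketch below illustrates that idea with made-up numbers (it is not drawn from the cited sources, and it assumes NumPy is installed): a needlessly flexible model reproduces a handful of noisy training points almost exactly, yet usually predicts new points worse than a simple one.

```python
# Minimal illustration (not from the cited sources; data and degrees are made up):
# a needlessly flexible model "memorizes" a few noisy points instead of learning
# the underlying trend, which is the overfitting failure described above.
import numpy as np

rng = np.random.default_rng(0)

# Eight noisy observations of a simple linear trend y = 2x.
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(scale=0.1, size=x_train.size)

# Fresh, noise-free points from the same trend, used to test generalization.
x_test = np.linspace(0, 1, 50)
y_test = 2 * x_test

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit a polynomial of this degree
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")

# The degree-7 fit passes almost exactly through the training points (tiny train
# error) but typically does worse on the new points than the simple degree-1 fit.
```

The same trade-off underlies the "excessive generalization" and "overfitting" factors above: a model can fail either by connecting data too loosely or by memorizing it too closely.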

Why is it a problem? (Marr, 2023)

  • Erosion of trust - the technology may come to be seen as untrustworthy because it provides misleading data
  • Ethical concerns - potential perpetuation of misinformation
  • Impact on decision making - poor choices with serious consequences
  • Legal implications - exposure of AI developers and users to potential legal liabilities

Red flags to look out for (Maggiolo, 2023)

  • Semantic inconsistencies - stemming from the system's inherent lack of semantic comprehension
  • Unusual responses - answers that seem out of place may indicate potential distortion
  • Lack of coherence - responses that are incoherent or self-contradictory

Ways to Address AI Hallucinations

  • Semantic analysis - examining language and context for inconsistencies.
  • Careful prompting - providing clear, specific prompts; on the developer side, this also includes improving training data.
  • Refining prompts - using multiple prompts or iterative refinement.
  • Being skeptical of AI responses - questioning generative AI responses, asking for explanations and reasoning, and requesting sources and evidence to uphold transparency and credibility.
  • Using different AI models - trying other generative AI tools and addressing biases by checking multiple perspectives (see the sketch after this list).
  • Manual searching - verifying sources of information through lateral searching and double-checking information independently.
  • Humans-in-the-loop - some AI tools use a reinforcement learning from human feedback (RLHF) model, which brings humans into the loop to fine-tune data, context, and responses.

(Codecademy, 2024; Maggiolo, 2023; University of East London, 2024)
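
As a rough sketch of the "using different AI models" and "being skeptical" advice above (not a method prescribed by the cited sources), the Python below puts the same question to several tools and flags pairs of answers with little overlap, a cue to fall back on manual searching. The query_model function, the model names, and the canned answers are hypothetical placeholders, not a real API.

```python
# Rough sketch (assumption: query_model, the model names, and the canned answers
# below are hypothetical placeholders, not a real API). The idea: ask several
# generative AI tools the same question and flag answer pairs with little
# overlap, which signals that manual verification is needed.
import re
from itertools import combinations


def query_model(model_name: str, question: str) -> str:
    """Placeholder: in practice, call each tool through its own interface."""
    canned = {
        "model-a": "The Treaty of Tordesillas was signed in 1494.",
        "model-b": "It was signed in 1494 between Spain and Portugal.",
        "model-c": "The treaty dates to 1519.",  # an invented 'false fact'
    }
    return canned[model_name]


def token_overlap(a: str, b: str) -> float:
    """Crude agreement score: shared words divided by the shorter answer's words."""
    ta = set(re.findall(r"[a-z0-9]+", a.lower()))
    tb = set(re.findall(r"[a-z0-9]+", b.lower()))
    return len(ta & tb) / max(1, min(len(ta), len(tb)))


question = "When was the Treaty of Tordesillas signed?"
answers = {name: query_model(name, question) for name in ("model-a", "model-b", "model-c")}

for (m1, a1), (m2, a2) in combinations(answers.items(), 2):
    score = token_overlap(a1, a2)
    verdict = "agree" if score >= 0.5 else "DISAGREE: verify manually"
    print(f"{m1} vs {m2}: overlap {score:.2f} -> {verdict}")
```

Low overlap does not show which answer is wrong; it only points to where lateral searching and independent source checking are most needed.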

Bibliography

Codecademy Team. (2024). Detecting hallucinations in generative AI. Codecademy. https://www.codecademy.com/article/detecting-hallucinations-in-generative-ai

Jennings, J. (2023, Oct. 10). AI in education: The problem with hallucinations. eSpark. https://www.esparklearning.com/blog/ai-in-education-the-problem-with-hallucinations/

Kingston, M. (2023). AI for education: Lesson 3: Hallucination detective: Digital and print learning packet. AI for education. https://aiforeducation.io

Maggiolo, G. (2024, Oct. 5). Can AI experience hallucinations? How to identify false information generated by neural networks. Pigro. https://blog.pigro.ai/en/can-ai-experience-hallucinations-how-to-identify-false-information-generated-by-neural-networks

Marr, B. (2023, Mar. 22). ChatGPT: What are hallucinations and why are they a problem for AI systems. Bernard Marr & Co. https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/

Metz, C. (2023, Apr. 4). What makes A.I. chatbots go wrong? The curious case of hallucinating software. The New York Times. https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html

Ramponi, M. (2023, Aug. 3). How RLHF preference tuning works (and how things may go wrong). AssemblyAI. https://www.assemblyai.com/blog/how-rlhf-preference-model-tuning-works-and-how-things-may-go-wrong/

University of East London. (2024, Feb. 26). Using AI critically. Artificial intelligence (AI). https://libguides.uel.ac.uk/artificial-intelligence/using-ai-critically
