
Generative Artificial Intelligence

Evaluating Generative AI Tools and Output

Just as you would with any other source of information, you should always assess the quality and reliability of content generated by Generative AI tools. Because these tools lack human judgment, their output can sometimes be:

  • Inaccurate or inconsistent
  • Out of date
  • Biased or misleading
  • Lacking creativity

Strategies for Confirming AI-Generated Information

Below are some strategies you can use when verifying AI-generated information:

  1. Ask for Sources: Most AI tools can be asked to provide sources for their responses. With recent advancements, some models will cite the sources they drew on as they present information to you.
  2. Locate the Sources: If sources are provided, check whether they are real and credible. AI tools have been known to fabricate references or present incorrect information as fact, a form of hallucination. A quick programmatic check for fabricated references is sketched after this list.
  3. Evaluate Source Quality: Once you verify the sources, evaluate their relevance and credibility. Are they appropriate for your research or task?
  4. Cross-Check the Information: Seek out additional sources that confirm the information. If multiple trustworthy sources agree, the information is more likely to be reliable.
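As one illustration of step 2, the sketch below shows how a cited DOI could be checked programmatically. This is a hypothetical example, not part of the guide: it assumes Python with the third-party requests library, and it tests only whether a DOI resolves at doi.org. A resolving DOI shows the reference exists; it says nothing about relevance or quality, and a failed lookup should prompt manual checking rather than be taken as proof of fabrication.

```python
# Hypothetical sketch: check whether DOIs cited by an AI tool actually
# resolve at doi.org. Assumes the third-party `requests` library.
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if https://doi.org/<doi> resolves (status < 400)."""
    try:
        resp = requests.head(
            f"https://doi.org/{doi}",
            allow_redirects=True,  # DOIs redirect to the publisher's page
            timeout=timeout,
        )
        return resp.ok
    except requests.RequestException:
        return False  # network error: inconclusive, check manually

if __name__ == "__main__":
    candidates = [
        "10.1162/99608f92.ad8ebbd4",    # real DOI (Barassi, 2024, cited below)
        "10.0000/fabricated.citation",  # deliberately fake DOI
    ]
    for doi in candidates:
        verdict = "resolves" if doi_resolves(doi) else "does not resolve"
        print(f"{doi}: {verdict}")
```

Even when a DOI resolves, steps 3 and 4 still apply: confirm that the source actually supports the claim the AI attributed to it.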

Hallucinations and Fake News

A major and persistent challenge when using generative AI tools is the occurrence of 'hallucinations': cases where the tool provides information that is entirely fabricated or incorrect, yet presents it with a tone of confidence that can lead the user to believe it is true.

For further exploration of the topic, see the following articles on AI hallucinations:

  • Barassi, V. (2024). Toward a theory of AI errors: Making sense of hallucinations, catastrophic failures, and the fallacy of generative AI. Harvard Data Science Review, Special Issue 5. https://doi.org/10.1162/99608f92.ad8ebbd4
  • Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https://doi.org/10.7759/cureus.35179

Why AI is Incredibly Smart and Shockingly Stupid

Computer scientist Yejin Choi is here to demystify the current state of massive artificial intelligence systems like ChatGPT, highlighting three key problems with cutting-edge large language models (including some funny instances of them failing at basic commonsense reasoning).

Transcript available here.