Title:
Why Large Language Models Hallucinate
Tags:
Source:
Channel:
Publisher:
- IBM
Author:
Published:
- April 20, 2023
Added:
- May 8, 2023
- @ 1:36 pm
Special Notes:
Large language models (LLMs) like ChatGPT can generate authoritative-sounding prose on many topics and domains, but they are also prone to simply "making stuff up" — plausible-sounding nonsense. In this video, Martin Keen explains the different types of LLM hallucinations, why they happen, and ends by recommending steps that you, as an LLM user, can take to minimize their occurrence.
Description:
Attached Images:
Attached Files: