Resources

Category:
Type:
Title:
Why Large Language Models Hallucinate
Publisher:
Author:
Published:
Added:
Special Notes:
Large language models (LLMs) like ChatGPT can generate authoritative-sounding prose on many topics and domains, but they are also prone to simply "making stuff up". Literally, plausible-sounding nonsense! In this video, Martin Keen explains the different types of LLM hallucinations, why they happen, and closes by recommending steps that you, as an LLM user, can take to minimize their occurrence.
Description:
Attached Images:
Attached Files: