Resources

Category:
Type:
Title:
GPT 5 is All About Data
Publisher:
Author:
Published:
Added:
Special Notes:
Description:

Drawing upon 6 academic papers, interview snippets, possible leaks, and my own extensive research, I put together everything we might know about GPT 5: what will determine its IQ, its timeline, and its impact on the job market and beyond.

I start with an insider interview on the naming of GPT models such as GPT 4 and GPT 5, then look into the clearest hint that GPT 4 is inside Bing. Next, I briefly cover reports of a leak about GPT 5 and discuss the scale of GPUs required to train it, touching on the upgrade from A100 to H100 GPUs.
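To put those GPU numbers in perspective, here is a back-of-the-envelope sketch in Python. The 6 × params × tokens FLOPs rule of thumb and the A100/H100 peak throughputs are commonly cited figures; the model size, token count, and utilization are illustrative assumptions, not numbers from the video or any leak.

```python
# Rough estimate of GPU-days needed to train a dense transformer,
# using the common approximation: training FLOPs ~= 6 * params * tokens.

def training_gpu_days(params: float, tokens: float,
                      peak_flops: float, utilization: float) -> float:
    total_flops = 6 * params * tokens
    gpu_seconds = total_flops / (peak_flops * utilization)
    return gpu_seconds / 86_400  # seconds in a day

# Illustrative assumptions: a 300B-parameter model trained on 6T tokens
# at 40% hardware utilization. Peak BF16 throughput: A100 ~312 TFLOPs,
# H100 ~990 TFLOPs.
for name, peak in [("A100", 312e12), ("H100", 990e12)]:
    days = training_gpu_days(300e9, 6e12, peak, 0.40)
    print(f"{name}: ~{days:,.0f} GPU-days (~{days / 10_000:.0f} days on 10,000 GPUs)")
```

On these assumptions, the A100-to-H100 upgrade alone cuts wall-clock training time by roughly a factor of three.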

Then comes the DeepMind paper that changed everything, refocusing LLM research on data rather than parameter count. I go over a LessWrong post about that paper's 'wild implications'. And then the key paper: 'Will We Run Out of Data?'. This encapsulates the key dynamic that will either propel or bottleneck GPT and other LLM improvements.
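The finding of that DeepMind paper (Chinchilla) is usually summarized as: for a fixed compute budget C ≈ 6ND, parameters N and training tokens D should scale together, at roughly 20 tokens per parameter. Here is a minimal sketch of that heuristic; the 20:1 ratio and the cost model are the commonly quoted approximations, not exact values from the paper.

```python
import math

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Solve C ~= 6 * N * D with the Chinchilla heuristic D ~= 20 * N
    for the compute-optimal parameter count N and token count D."""
    n = math.sqrt(compute_flops / (6 * tokens_per_param))
    return n, tokens_per_param * n

# At GPT-3's often-quoted ~3e23 FLOPs budget, the compute-optimal model
# is ~50B parameters trained on ~1T tokens: far more data than the
# ~300B tokens GPT-3 actually saw. Hence the focus on data supply.
n, d = chinchilla_optimal(3e23)
print(f"params ~ {n:.1e}, tokens ~ {d:.1e}")
```

That mismatch, with optimal token counts growing at every jump in compute, is exactly why 'Will We Run Out of Data?' matters.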

Next, I examine a different take: that data may already be a limiting factor, and could explain the behaviour of Bing's Sydney model. This opens into a discussion of the data behind these models and why Big Tech is so unforthcoming about where it originates. Could a new legal war be brewing?

I then cover several ways these models may improve even without new data, such as automatic chain-of-thought prompting, high-quality data extraction, tool training (including Wolfram Alpha), retraining on existing datasets, artificial data generation, and more.
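As an illustration of the first technique on that list, here is a minimal sketch of zero-shot chain-of-thought prompting, the building block that Automatic Chain of Thought (Auto-CoT) uses to generate its own reasoning demonstrations. The `ask_model` function is a hypothetical stand-in for whichever LLM API you use.

```python
COT_TRIGGER = "Let's think step by step."  # Kojima et al.'s zero-shot trigger


def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real LLM API call."""
    raise NotImplementedError


def zero_shot_cot(question: str) -> str:
    # Stage 1: elicit a reasoning chain.
    reasoning = ask_model(f"Q: {question}\nA: {COT_TRIGGER}")
    # Stage 2: extract a final answer conditioned on that chain.
    return ask_model(
        f"Q: {question}\nA: {COT_TRIGGER} {reasoning}\nTherefore, the answer is"
    )
```

Auto-CoT goes one step further: it clusters a pool of questions, runs this zero-shot procedure on a representative of each cluster, and reuses the resulting chains as few-shot demonstrations, removing the need for hand-written examples.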

I take a quick look at Sam Altman's timelines and a host of BIG-bench benchmarks that GPT 5 may impact, such as reading comprehension, critical reasoning, logic, physics, and math. I address Altman's quote about timelines being delayed by alignment and safety work, and finally his comments on AGI and how they pertain to GPT 5.

Attached Images:
Attached Files: