AI Safety Fundamentals: Alignment

Deep Double Descent

June 17, 2024 · BlueDot Impact · Season 13

We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful regularization. While this behavior appears to be fairly universal, we don’t yet fully understand why it happens, and view further study of this phenomenon as an important research direction.


Source:

https://openai.com/research/deep-double-descent


Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.

---

A podcast by BlueDot Impact.

Learn more on the AI Safety Fundamentals website.

Chapter Markers

Model-wise double descent
Sample-wise non-monotonicity
Epoch-wise double descent