BlueDot Narrated
Audio versions of the core readings, blog posts, and papers from BlueDot courses.
An Overview of Catastrophic AI Risks • BlueDot Impact • Season 3 • Episode 3
This article from the Center for AI Safety provides an overview of ways that advanced AI could cause catastrophe. It groups catastrophic risks into four categories: malicious use, AI race, organizational risks, and rogue AIs. The article is a summary of a longer paper, available at the link below.
Original text:
https://www.safe.ai/ai-risk
Authors:
Dan Hendrycks, Thomas Woodside, Mantas Mazeika
A podcast by BlueDot Impact.
Catastrophic AI risks can be grouped under four key categories which are summarized below.
1. Introduction
2. Malicious Use
Bioterrorism
Unleashing AI Agents
Persuasive AIs
Concentration of Power
Suggestions
3. AI Race
Military AI Arms Race
Corporate AI Arms Race
Evolutionary Dynamics
Suggestions
4. Organizational Risks
Accidents Are Hard to Avoid
Organizational Factors Can Mitigate Catastrophe
Suggestions
5. Rogue AIs
Proxy Gaming
Goal Drift
Power-Seeking
Deception
Suggestions
6. Conclusion
Frequently Asked Questions