Consistently Candid
AI safety, philosophy and other things.
Episodes
17 episodes
#17 Fun Theory with Noah Topper
The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe?', 'are we having fun yet?' and 'could we be...
1:25:53
#16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies
John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience. In this episode, we compared our experienc...
52:49
#15 Should we be engaging in civil disobedience to protest AGI development?
StopAI are a non-profit aiming to achieve a permanent ban on the development of AGI through peaceful protest. In this episode, I chatted with three of the founders of StopAI – Remmelt Ellen, Sam Kirchner ...
1:18:20
#14 Buck Shlegeris on AI control
Buck Shlegeris is the CEO of Redwood Research, a non-profit working to reduce risks from powerful AI. We discussed Redwood's research into AI control, why we shouldn't feel confident that witnessin...
50:52
#13 Aaron Bergman and Max Alexander debate the Very Repugnant Conclusion
In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate. They discuss the Very Repugnant Conclusion, which, in the words of Claude, "posits that a world with a vast popu...
1:53:51
#12 Deger Turan on all things forecasting
Deger Turan is the CEO of forecasting platform Metaculus and president of the AI Objectives Institute. In this episode, we discuss how forecast...
54:21
#11 Katja Grace on the AI Impacts survey, the case for slowing down AI & arguments for and against x-risk
Katja Grace is the co-founder of AI Impacts, a non-profit focused on answering key questions about the future trajectory of AI development, which is best known for conducting the...
1:16:34
#10 Nathan Labenz on the current AI state-of-the-art, the Red Team in Public project, reasons for hope on AI x-risk & more
Nathan Labenz is the founder of AI content-generation platform Waymark and host of The Cognitive Revolution Podcast, who now works full-time on tracking and analysing developments in AI. We chatted about where we currently stand with ...
1:54:22
#9 Sneha Revanur on founding Encode Justice, California's SB-1047, and youth advocacy for safe AI development
Sneha Revanur is the founder of Encode Justice, an international, youth-led network campaigning for the responsible development of AI, which was among the sponsors of California's proposed AI bill SB-1047. We chatted about why Sh...
49:50
#8 Nathan Young on forecasting, AI risk & regulation, and how not to lose your mind on Twitter
Nathan Young is a forecaster, software developer and tentative AI optimist. In this episode, we discussed how Nathan approaches forecasting, why his p(doom) is 2-9%, whether we should pause AGI research, and more! Follow Nathan on Twit...
1:28:07
#7 Noah Topper helps me understand Eliezer Yudkowsky
A while back, my self-confessed inability to fully comprehend the writings of Eliezer Yudkowsky elicited the sympathy of the author himself. In an attempt to more complete...
1:28:31
#6 Holly Elmore on pausing AI, protesting, warning shots & more
Holly Elmore is an AI pause advocate and Executive Director of PauseAI US. We chatted about the case for pausing AI, her experience of organising protests against frontier AGI research, the danger of relying on warning shots, the prospect of te...
1:48:01
#5 Joep Meindertsma on founding PauseAI and strategies for communicating AI risk
In this episode, I talked with Joep Meindertsma, founder of PauseAI, about how he discovered AI safety, the emotional experience of internalising existential risks, strategies for communicating AI risk, his assessment of recent AI policy deve...
46:43
#4 Émile P. Torres and I discuss where we agree and disagree on AI safety
Émile P. Torres is a philosopher and historian known for their research on the history and ethical implications of human extinction. They are also an outspoken critic of Effective Altruism, longtermism and the AI safety movement. In this episod...
1:47:21
#3 Darren McKee on explaining AI risk to the public & navigating the AI safety debate
Darren McKee is an author, speaker and policy advisor who has recently penned a beginner-friendly introduction to AI safety titled Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. We chatted abo...
51:40
#2 Akash Wasil on transitioning into AI safety & promising proposals for AI governance
Akash is an AI policy researcher working on ways to reduce global security risks from advanced AI. He has worked at the Center for AI Safety, Center for AI Policy, and Control AI. Before getting involved in AI safety, he was a PhD student study...
1:10:57
#1 Aaron Bergman and Max Alexander argue about moral realism while I smile and nod
In this inaugural episode of Consistently Candid, Aaron Bergman and Max Alexander each try to convince me of their position on moral realism, and I settle the issue once and for all. Featuring occasional interjections from the sat-nav in the Ub...
1:08:17