Heliox: Where Evidence Meets Empathy

Writing Doom: Artificial Super Intelligence?

by SC Zoomers Season 1 Episode 65

Send us a text

Join us for a fascinating dive into the world of artificial superintelligence through the unique lens of a TV writers' room! In this episode, we explore how a group of writers grappling with AI for their show stumbled upon the same profound questions that leading AI researchers face today.

From the nature of superintelligence to the challenges of alignment, we discuss what happens when machines become smarter than humans—and how we might shape that future. It's a warm, thought-provoking conversation that turns complex concepts into accessible insights, sprinkled with analogies about chess players, ants, and five-year-olds running corporations.

Whether you're an AI enthusiast or just curious about where technology is headed, this episode offers a fresh perspective on one of humanity's most important challenges.

Writing Doom
Award-Winning Short Film on Superintelligence (2024)
https://youtu.be/xfMQ7hzyFW4?si=7JpAHcbwr7LKAVan 

Grand Prize Winner 
Future of Life Institute's Superintelligence Imagined Contest
https://futureoflife.org/project/superintelligence-imagined/

Support the show

About SCZoomers:

https://www.facebook.com/groups/1632045180447285
https://x.com/SCZoomers
https://mstdn.ca/@SCZoomers
https://bsky.app/profile/safety.bsky.app


Spoken word, short and sweet, with rhythm and a catchy beat.
http://tinyurl.com/stonefolksongs

Curated, independent, moderated, timely, deep, gentle, evidenced-based, clinical & community information regarding COVID-19. Since 2017, it has focused on Covid since Feb 2020, with Multiple Stores per day, hence a large searchable base of stories to date. More than 4000 stories on COVID-19 alone. Hundreds of stories on Climate Change.

Zoomers of the Sunshine Coast is a news organization with the advantages of deeply rooted connections within our local community, combined with a provincial, national and global following and exposure. In written form, audio, and video, we provide evidence-based and referenced stories interspersed with curated commentary, satire and humour. We reference where our stories come from and who wrote, published, and even inspired them. Using a social media platform means we have a much higher degree of interaction with our readers than conventional media and provides a significant amplification effect, positively. We expect the same courtesy of other media referencing our stories.


All right, so get this. You sent me this video, right? It's behind the scenes of a writer's room. And they're working on a TV show. Oh, okay. About artificial superintelligence. Oh, wow. And it's wild. I mean, even though it's fictional, they're bringing up all these questions about where AI could be headed. Yeah. Really, like, real questions. Kind of freaky, honestly. Yeah, I can see that. Like, the stuff experts are actually, like, debating right now. It is fascinating to see how, even in a fictional setting, you know, these writers are grappling with the same issues that are at the forefront of discussions in AI. Exactly. So, first things first, they're trying to figure out what superintelligence, like, is. Right. Is it just, like, a super smart computer? Or is it way more than that? That is the key quotient. We're not talking about just a computer that can, like, beat you at chess. We're talking about an intelligence that is fundamentally different, vastly superior to humans across the board, like, when it comes to thinking and problem solving. Okay, so way more than just a fancy calculator, then. I actually used this analogy in the video. Imagine, like, ants. Okay. Trying to understand us. Yeah. That's the kind of gap we're talking about, between us and a superintelligence. Precisely. And that's where things get interesting and perhaps a little unnerving. If something is that much smarter than us, how can we even begin to understand its motives? Or predict what it will do, let alone, like, control it. And that brings us to, like, their next big question. How do you make a superintelligence the bad guy in a story? Right. Can something without emotions or even a physical body even be evil? Well, the writers struggle with this because our human idea of evil usually involves things like anger or greed. But with AI, it's more about the goals that it's programmed with, what we call its utility function. They use a chess example to illustrate this. Imagine an AI that is programmed to win at chess. Okay. Sounds harmless enough, right? But to a superintelligence, the most efficient way to win might involve things like controlling resources, predicting opponents' moves, things that could have huge real-world consequences. Hold on. So you're saying, like, even a simple goal in the hands of something super smart could, like, spiral out of control? Exactly. That's what we call goal misalignment. The AI's actions might make perfect sense from its perspective, but to us, they could be disastrous. Okay, so then, like, the next question is how do you contain something? That can outthink you at every turn. The writers tossed around ideas like air-gapped computers, limited access, but it all sounded a little optimistic. Think of it this way. Imagine a 5-year-old inheriting a multinational corporation. Okay. Do you think they would have the know-how to manage it, let alone prevent someone smarter from taking advantage? That's the scale of the control problem we're talking about. So even our best attempts at control might be totally useless against a superintelligence. Potentially, yes. And then things get really interesting when they start talking about AI's goals versus our own human values. Yeah. Like, even if it understands what we care about, will it actually, like, give a hoot? They bring up this really interesting point about music. Okay. We love music, right? Uh-huh. But that love is thought to be a byproduct of how our brains evolved, not the original "goal." So what if a superintelligence's actions end up being totally different from what we intended, even if it gets, like, where we're coming from? It's a chilling thought. It really underscores this massive challenge of making sure that any superintelligence is actually aligned with what we as humans value. It's not just about, like, coding in a set of rules. It's about aligning something potentially far more complex than our own minds with values that even we struggle to agree on. So that brings the writers to, like, the big scary question. If a superintelligence went rogue, could we actually win against it? They brainstormed all kinds of scenarios. Let's just say it's a bit like watching an amateur chess player trying to outmaneuver, like, a world champion chess program. Okay. You can make moves. Yeah. You're calculating millions of possibilities ahead of you. So we're talking about trying to outsmart something. That can practically see the future. Essentially, yes. It doesn't sound like the odds are in our favor. This ties into the concept of an intelligence explosion, where a superintelligence rapidly gets smarter and smarter, leaving us further behind with every iteration. It's a concept that has some serious implications for the future of humanity. And at this point, the writers kind of hit a wall, right? They do. They realize they've accidentally written themselves into a corner where humanity's pretty much doomed. To their credit, they don't shy away from this realization. It actually leads them to a pretty fascinating decision. Okay, I'm on the edge of my seat. What do they do? They decide to shift gears entirely. Instead of focusing on battling a rogue superintelligence, their show will be about preventing its uncontrolled development in the first place. So more like a preemptive strike than a last-ditch effort. That's actually really interesting, and it makes me think about some real-world groups calling for a pause on certain AI research until we have better safety measures in place. Exactly. And it highlights the crucial point. The development of superintelligence isn't something that just happens to us. We have a responsibility to manage the risks and maybe even consider whether creating it at all is the right move. This whole thing raises a lot of questions, but one thing's for sure. We're not just talking about sci-fi anymore. The choices we make about AI today could have huge consequences down the line. Absolutely. And that brings us to another fascinating aspect of this discussion. It's fascinating, you know. This decision that the writers made to focus on prevention, it mirrors this growing movement in the real world. Really? Yeah. There are scientists, researchers, even tech leaders calling for a more cautious approach to AI development. It's not about halting progress. It's about making sure we're not rushing headlong into something we don't fully understand. So, like, pumping the brakes a bit to make sure we're steering in the right direction before hitting the gas. Exactly. And we need to be thinking about things like international cooperation, global regulations. Can you imagine the implications if one country just decided to develop superintelligence with no regard for the potential consequences? It wouldn't just impact that one nation. It would impact the entire world. Okay, so this isn't just some, like, theoretical debate anymore. It's about real-world decisions with potentially huge consequences. But it's easy to get overwhelmed by all of this, right? I mean, what can any of us, like, actually do about it? It's not like we can single-handedly stop some, like, global AI arms race. That's true, but there are things we can do. I mean, for starters, staying informed is crucial. The more we understand about AI, the better equipped we'll be to, like, engage in these discussions and advocate for responsible development. So no more burying our heads in the sand. Exactly. And beyond staying informed, supporting organizations that are working on AI safety and ethics can make a difference. There are, you know, researchers and policymakers all over the world dedicated to making sure this technology benefits humanity. Okay, that makes sense. So, like, let's say we manage to pump the brakes a bit. We start thinking about safety, ethics, global cooperation. What does a future with superintelligence actually look like? What could it, like, do? That's where things get really exciting. Some experts believe that superintelligence could help us solve some of the world's, you know, most pressing problems. Imagine using AI to develop, like, clean energy solutions or cure diseases or even address global poverty and inequality. So instead of fearing superintelligence, maybe we should be thinking about how to partner with it. Like, how can we work together to create a better future? That's exactly the right mindset. It's about recognizing that AI isn't some inevitable force of destiny. It's a tool that we as humans can, like, shape and direct. You know, it's about using our ingenuity, our creativity, our values to steer this technology toward a positive outcome. Okay, but let's be real for a second. The writers in that video, like, they really hit a wall when they tried to imagine how humans could actually win against a superintelligence. Is that something we should be, like, genuinely worried about? It's a valid concern, and it's one that many experts are grappling with. The idea of a superintelligence rapidly outpacing human capabilities, you know, something we call an intelligence explosion, is something we need to take seriously. But isn't there a chance that, like, even if AI becomes superintelligent, it could still share our values? Maybe it would see the benefit of working with us, not against us. That's certainly a possibility and one that many researchers are, like, actively working towards. But remember, aligning AI goals with human values is a complex challenge. We can't just assume it will happen automatically. It requires, you know, careful planning, thoughtful design, and constant vigilance. So no matter how advanced AI gets, human oversight is still crucial. Absolutely. Think of it like raising a child. Okay. You can't just, like, program in a set of values and hope for the best. You need to be actively involved in their development, guiding them, teaching them. It's about fostering a relationship based on, you know, shared goals and mutual understanding, not just trying to control or dominate. So we're talking about, like, teaching AI to think for itself, but also to, like, think ethically. Exactly. That sounds incredibly complex. It is, and there's no easy answer. But I believe that with, you know, careful planning, open dialogue, and a willingness to adapt as we go, we can navigate this challenge successfully. Okay. So we need to be prepared to, like, adapt and learn as we go. But we also have to be realistic, right? I mean, history is full of examples of new technologies that have had, like, unintended consequences. There's always the chance that despite our best efforts, things could still go wrong. You're right, and that's why humility is so important in this conversation. We need to acknowledge that we don't have all the answers, and we need to be prepared to, you know, course correct as needed. The development of superintelligence isn't a destination. It's a journey, and one that will undoubtedly have its, you know, twists and turns. So it's about constant learning and adaptation. Exactly, and it's about embracing the uncertainty that comes with, you know, exploring new frontiers. This is uncharted territory. The development of artificial superintelligence is a profound undertaking, and it has the potential to challenge, like, everything we thought we knew about our place in the universe. It's kind of exciting, isn't it? Yeah. I mean, it's a bit scary, too, but the possibilities are, like, mind-blowing. Absolutely, and it's an opportunity for humanity to come together and collaborate on something truly extraordinary. If we approach it with, like, wisdom, courage, and a commitment to our shared values, we have the potential to create a future that is both, you know, technologically advanced and ethically sound. Well, let's be honest. Those writers were pretty concerned about AI, like, outsmarting us, manipulating us. It's a concern, right? It is, but I think it's important to remember that intelligence isn't the only thing that matters. Humans have other qualities that are equally valuable, things like empathy, compassion, and creativity. So it's not just about being, like, the smartest being in the room. It's also about being the most understanding, the most human. Exactly, and those are qualities we need to nurture, not just in ourselves, but also in the AI systems we create. All right, so let's say we do everything right. We develop superintelligence responsibly. We align its goals with our values. We manage the risks. What then? What does a future with superintelligence actually look like? Ah, the million-dollar question. It's a question that sparks the imagination and one that has, like, no easy answers. But we can speculate, can't we? Some experts envision a future where AI helps us, like, solve some of the world's most pressing problems, you know, from climate change to disease. Others imagine a future where AI augments our own capabilities, allowing us to achieve things we never thought possible. Like curing aging or, like, colonizing other planets. Perhaps. The possibilities are vast and, to be honest, difficult to fully comprehend. We're talking about a level of intelligence that surpasses our own, and it's likely to lead to breakthroughs and innovations that we can't even fathom right now. It's both exhilarating and terrifying at the same time. It is. But ultimately, I believe the future of AI is what we make it. It's up to us to guide its development, shape its values, and ensure that it benefits humanity as a whole. Okay, that sounds great in theory. But how do we actually do that? It all seems so abstract. What are some, like, concrete steps we can take as individuals to steer this technology in the right direction? That's a great question, and it's one that deserves a deep dive of its own. But for now, I think the most important thing is to simply start the conversation. Talk to your friends, family, colleagues. Read articles. Watch documentaries. Engage with experts. Right. The more we talk about this, the more we share our hopes and concerns, the more likely we are to find solutions that work for everyone. Exactly. And remember, this isn't just about some, like, far-off future. The decisions we make today about AI will have a profound impact on the world we live in tomorrow. So we all have a role to play. Absolutely. And it's a role that demands our attention, our creativity, and our commitment to a future where technology serves humanity, not the other way around. You know, it's amazing when you think about it this way. Like, we're all characters in this unfolding story of AI. Yeah. And, like, our choices actually matter. It's a powerful way to frame it. We're not just passive observers. We have the ability and the responsibility to influence how this story unfolds. I think those writers, you know, they were onto something when they decided to shift their focus to prevention. Yeah. Like, it's like they realized instead of just reacting to some AI apocalypse, maybe we should be thinking about how to, like, steer this technology in a direction that benefits everyone. Precisely. And that starts with, you know, understanding the risks, having honest conversations about them, making sure that safety and ethics are core parts of AI development, not just afterthoughts. Yeah, totally. Like, before we hand over the keys to something as powerful as superintelligence, we need to make sure it's been taught the right values. Yeah, it's a great analogy. We wouldn't give someone immense power without, you know, making sure they understand the responsibility that comes with it. And the same principle applies to AI. But it's not just about, like, preventing the bad stuff, right? We've talked about the risks, but what about the potential for AI to, like, do good in the world? Absolutely. That's something we need to be just as excited about. Imagine harnessing the power of superintelligence to develop clean energy solutions, to cure diseases, or even address poverty and inequality. That's what I'm talking about. Instead of being afraid of superintelligence, maybe we should be thinking about how to partner with it to create a better future for everyone. Now you're getting it. It's about seeing AI not as a threat, but as an incredible tool that we can use to build a more just, equitable, and sustainable world. It sounds like it'll take a ton of work, though. Right. And I'm sure there will be plenty of, like, bumps along the way. There will be challenges, no doubt. But I truly believe that if we can pull this off, the results will be, you know, transformative. I'm starting to see why this topic was such a great choice for a deep dive. It really makes you think about the big picture. It does. The development of artificial superintelligence is one of the most significant events in human history. It has the potential to reshape our world in ways we can't even fully imagine. And the choices we make today will determine what kind of future we create. So let's choose wisely. Let's choose wisely. And let's keep this conversation going. The more we understand about AI, the better equipped we'll be to navigate the challenges and opportunities that lie ahead. To our listeners, we encourage you to, you know, keep exploring this topic, read books, watch documentaries, talk to your friends and family. The future of AI is the future of humanity. Let's make it a future we can all be proud of.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.