Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Do You Really Want the Government in Your DMs?
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:
- Commission opens formal proceedings against Meta under the Digital Services Act related to the protection of minors on Facebook and Instagram (European Commission)
- Meta, TikTok, other platforms told to expect EU guidelines soon on child protection, age verification (MLex)
- Got a text that you think is a scam? S’pore’s new centre to fight online harms can help verify it (Straits Times)
- Bipartisan Bill To Repeal Section 230 Defended In Facts-Optional Op-Ed (Techdirt)
- Indian journalists turned to YouTube to dodge Modi’s censorship. Some of their channels are now being blocked (Reuters Institute for the Study of Journalism)
- She was accused of faking an incriminating video of teenage cheerleaders. She was arrested, outcast and condemned. The problem? Nothing was fake after all (The Guardian)
- Commission services sign administrative arrangement with Ofcom to support the enforcement of social media regulations (Pub Affairs Bruxelles)
- Singapore’s proposed online safety laws look like more censorship in disguise (Rest of World)
This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw:So Mike, you'd be forgiven for not remembering Google Buzz, the very short-lived...
Mike Masnick:I remember Google Buzz.
Ben Whitelaw:It was only around for a year or so, so I don't know how, but um, it was the kind of network that turned your email inbox into a social network, right? And although it was only around for a short period of time, I thought I'd bring it back for today's episode of Ctrl-Alt-Speech and, and pose its call to action to you: to share what you're thinking today, please.
Mike Masnick:And this may, may sound odd given what I think we're going to be talking about in the rest of the episode, but what I'm thinking today is that I am optimistic about the world of online speech, uh, for a few reasons. I have had a really interesting week in which I have been to two different events, and I actually missed a third event that I think also would have made me optimistic, all with eyes towards the future of online speech and trust and safety, and they made me optimistic for the first time in a while. I was at, I was at an event that was put on by Fordham Law and the Atlantic Council, called the Cyber 9/12, where they have college students do these contests around cybersecurity, where they have cybersecurity scenarios, but they just did one, their first one ever, that was trust and safety focused. And so I saw all of these students who came with a deep interest in trust and safety, taking on a trust and safety scenario and coming up with thoughtful, clever solutions to very realistic challenges around trust and safety. And so it was fascinating to see students who are interested, who want to work in trust and safety, are interested in the future of online speech, and were thinking creatively about it. And that was kind of cool to see. So I am, I am optimistic. And then I also went to a dinner just last night with a bunch of folks who are working on rethinking the internet and how we build a more decentralized internet, where the power is taken out of the hands of a few giant companies and moved out to the edges of the network, where really interesting and compelling things happen. And so it was a fun and really fascinating conversation of a bunch of people thinking about how do we make the internet better and more like it should be. And so those two things sort of bookended my week, you know, uh, other than recording this...
Ben Whitelaw:of course
Mike Masnick:they both had me thinking like, huh, you know, maybe the future isn't so bad.
Ben Whitelaw:Nice, love it.
Mike Masnick:what, what, what are you thinking?
Ben Whitelaw:Well, I'm wondering how close Ctrl-Alt-Speech is to having its YouTube channel taken down by the Indian government. Because it seems like, um, as we'll find out later in today's episode, that we are not the only ones, um, so...
Mike Masnick:Yes, you're, you're getting good at the foreshadowing part of...
Ben Whitelaw:to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you again with the financial support of the future of online trust and safety fund. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation. Your must read weekly email about content moderation and safety. Trust and safety. And I'm joined by the, uh, traveling event, go tour. That is Mike Masnick, who has seemingly had a very busy week this week.
Mike Masnick:This has been a very intense week, and unfortunately I missed, there was also an event put on by All Tech Is Human about the future of trust and safety that I heard from people was excellent, and I have a, a bit of FOMO that I missed out on it. Uh, I ended up having to go somewhere else instead. Um, but, um, I'm excited that people are thinking about this stuff and, and exploring ways that, you know, the future can be better.
Ben Whitelaw:Definitely. And shout out as well to, uh, there's two events happening today, we're recording on Friday: the TSPA event in Dublin, which is its annual showcase in EMEA, and also the All Things in Moderation virtual conference, which is put on for trust and safety and content moderation practitioners as well. So it's been a big week for events. Um, we couldn't go to all of them, although, although seemingly you tried, uh, but we have a great episode, I think, planned as well today. Nonetheless, you've, you've kept up with your reading, Mike, which is great. We'll start our kind of journey through the week's news with somewhere we kind of started two weeks ago, actually, when you and Alex were helming the podcast, where you talked a bit about the Digital Services Act and an investigation into Meta that had just been unveiled. And it's happened again: two weeks on, we're talking about pretty much the same thing.
Mike Masnick:Though a different investigation, right? Same company, same law and same EU Commission, but a different investigation. And sort of, you know, there's a part of me that wonders, you know, why don't they just put these all together, right? If you're going to investigate Meta, and of course you're going to investigate Meta because that's Meta, that's part of the deal here, like, you know, figure out what you want to investigate them about and put them all together rather than announcing one and then two weeks later announcing another.
Ben Whitelaw:Yeah, like buses, eh?
Mike Masnick:but, but, um, so this is another investigation, and it's covering a few different aspects. You know, the last investigation was around election disinformation and misinformation. This one is more about children online and addictive behavior and things of that nature. I don't think there was too much that was that surprising in the announcement and kind of the things that they're going to be investigated for. These are the kinds of things that are in the media, that everyone is talking about around kids and online safety and things of that nature. But the thing that I, I wanted to talk about in particular, so one aspect of that investigation is around age verification. And this has been sort of an ongoing concern, and we've certainly talked about it in other contexts as well, where there are real questions around the different trade-offs and the quality of different age verification solutions, and how well any of them work, if they work, and, you know, are there privacy concerns associated with them. And a lot of the promise from the people putting together these regulations has been that, you know, we understand that there are trade-offs with these technologies, we're not trying to force you into it. Often people will say, like, we're not going to force you into age verification, we're going to do something a little gentler, around like age estimation or age assurance or things of that nature. And yet here it does appear that the EU is saying that what Meta is doing, which, you know, they have put a lot of resources and a lot of work into, I think they're working with Yoti, which is one of
Ben Whitelaw:Yeah.
Mike Masnick:larger, more well-recognized providers in this space. They're saying that that is not enough, and that is concerning, because then there's a question of, if Meta isn't doing it enough, like, what is enough? And then I was looking at, there was an article in MLex which, you know, got me a little bit more concerned, in that it went a little bit deeper into the age verification thing, saying that the EU Commission was effectively looking to move further and sort of, effectively, force the different social media platforms to use the new EU digital wallet, which was just recently approved. And the description in the MLex article was that the regulator is working hand in hand with national governments to come up with an age verification solution that will leverage the new EU digital wallets, which will soon be available since the formal approval. And they note that the guidelines that they're working on would allow platforms such as Facebook, Instagram, or TikTok to have a safe solution for age verification and to comply with the DSA requirements. So in other words, what they're saying is that to be compliant with the DSA, it may turn out that you need to use the EU's own home-built digital identification.
Ben Whitelaw:Okay. So we've got kind of two parts of this story, right? There's the fact that the EU Commission has opened up this investigation, which we kind of expected; it's not a massive surprise, though perhaps the speed of it, since the last investigation was announced two weeks ago, is a bit surprising. But then the fact that age verification is so prominent, and that you feel like there's a push towards using the digital wallet, is what you feel is the underlying story, right? It's like maybe what's happening in a kind of subverted way here. That's interesting.
Mike Masnick:Yeah. And it just, it's worth watching, right? Because there is the possibility that the EU will do this investigation, they will look at what Meta is doing, Meta will come back and give some transparency to the investigators, and the end result may be one where Meta is okay and they're doing everything fine, or there are small tweaks that they can do to adjust things. And that would be the DSA working the way that everyone hopes the DSA will work, sort of like, okay, we can sort of figure out the best practices and maybe figure out some slight improvements. The fear is that it is really going too far, and there's a feeling in the way this is described in this article that just makes it sound as if, first of all, like, nothing is ever going to be good enough, and the EU Commission is always going to push further, you know, no matter what happens, because something bad is going to happen somewhere, somehow, and they want to be able to sort of blame the companies for failures that might not really be their responsibility, that are just sort of larger societal things. But then this implication that the safe harbor for getting around another investigation is to use the EU's own solution seems sort of worrying, right? There's just a concern, when the government is the one providing the identity aspect here: there are fears around surveillance and privacy and who has access to this information, and do you have to use a government ID? Because that can go wrong, and can go wrong in dangerous ways.
Ben Whitelaw:Yeah. I mean, we don't want to go too much into digital IDs, because we're a fair way away from online speech in a sense, but we should just quickly, I think, think that through. So, like, you know, your fear is that actually the use of a digital ID is worse than somebody uploading their passport or their driver's license, which we, you know, already have concerns around privacy and surveillance about. But you're, you're worried that this might be a step further than that? Is that what you're saying?
Mike Masnick:Yeah. So, I mean, so much of the discussion around things that are happening online today, if you take them far enough and talk about them long enough, eventually gets back to the question of online identity. How is online identity working, and who has access to information and who has control over information? And, um, it's interesting that the EU is developing their own digital identity wallet as a solution, but then it just raises questions about: do you want the government to control your online identity?
Ben Whitelaw:Yeah.
Mike Masnick:And if we're getting to the point where the major internet services are effectively told the identity that works for your service has to be tied to a government-run identity... I mean, people were talking about ways to protect privacy with age verification being some third party, who is not the government and not the company, that will, you know, you can upload your identity documents to them, they will look at them and then just issue a token that says you are old enough or not old enough or whatever. Uh, so that, you know, as a way of protecting privacy. And there are privacy issues with that, in that, you know, then that company or that organization becomes a target. Um, but here we're now talking about something way beyond that, which is like the government itself controlling the identity service. Which, it's easy to say, like, well, it's only going to be for this, we're not going to have access to other things, but like, you can see how the mission creep happens there, where once the government has access to your online identity, then there are surveillance issues and privacy issues and all sorts of concerns that I think should be really concerning. And if the idea is that the only way to officially comply with the DSA is to use the EU's own identity system, that feels like going down a road that we were told we weren't going to go down.
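(A minimal sketch of the third-party token model Mike describes, for readers who want to see it concretely: a verifier inspects identity documents once and issues a signed over/under-age claim, and the platform checks only the signature, never the documents. Everything below is hypothetical and illustrative; the names are invented, no real provider's API is shown, and a production system would use asymmetric signatures rather than the shared HMAC key used here to keep the sketch short.)

```python
# Hypothetical sketch of the third-party age-token idea discussed above.
# The verifier sees the ID documents; the platform sees only a signed claim.
import hashlib
import hmac
import json
import time
from datetime import date

# Stand-in key. A real deployment would sign with a private key and let
# platforms verify with a public key; HMAC keeps this example short.
VERIFIER_KEY = b"demo-key-not-for-real-use"

def issue_age_token(birth_year: int, threshold: int = 18) -> dict:
    """Run by the hypothetical third-party verifier after checking documents."""
    claim = {
        # Coarse check for illustration (ignores month/day of birth).
        "over_threshold": date.today().year - birth_year >= threshold,
        "threshold": threshold,
        "issued_at": int(time.time()),
        # Deliberately no name, birthdate, or document scan: data minimization.
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_accepts(token: dict) -> bool:
    """Run by the platform: verify the signature and the age claim, not the ID."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_threshold"]

token = issue_age_token(birth_year=2001)
print(platform_accepts(token))  # True: an age claim is asserted, identity is not shared
```

(Note how the claim deliberately carries no name or birthdate; that data minimization is the privacy argument for the model, while the verifier holding the document-to-token mapping is exactly why Mike says it becomes a target.)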
Ben Whitelaw:Yeah. I think, I think that's fair. I think of the investigations that the commission has announced so far, and I think this is the fifth or sixth now, this definitely does feel like the most interesting in that respect, with the most kind of riding on it, right? The others are ones that I think are within our understanding, or about issues that have cropped up before: competition in e-commerce apps, or, as you talked about, the kind of mis- and disinformation. These are kind of known issues that the commission is working on, I think, in very front-footed ways. This digital wallet stuff, if this analysis in MLex is on the money, does feel like somewhere that's new, and that is kind of two areas that are colliding, right? Online speech and digital IDs, which have a long and complex and controversial history. And that's in some ways the beauty of online safety as a topic, right? It's that it kind of butts up in these very un-neat ways against topics that make it really interesting for you and I to kind of talk through.
Mike Masnick:Yeah, yeah, absolutely. And, and again, this analysis could be wrong. And maybe it is just a regular, old-fashioned, I can't say old-fashioned about the DSA, but like, you know, a regular old-fashioned investigation that just comes up with, you know, oh, you have to fix this and that and the other thing. But this analysis in MLex really sort of, you know... because I think when I saw the original announcement from the EU Commission, I was like, okay, this is kind of what we were expecting, they're going to check on safety for kids, that's part of the point of the DSA. But then reading this, I was just like, oh gosh, uh, you know, just sort of the mission creep aspect of it is really concerning.
Ben Whitelaw:I mean, I would say also that the point around nothing being enough for the DSA is a really interesting one. And I, I noted that in relation to recommender systems, which the investigation is going to look further into. So the DSA makes a key point of platforms, certainly the, the VLOPs, so those bigger networks with more than 45 million users within the bloc, of making sure that their recommender systems are transparent and that users understand how recommender systems work in the terms of service. So Meta and other platforms have done a hell of a lot of work over the last 12 to 18 months to try and adhere to those stipulations, right? And if you go onto Meta's website, you can essentially look at all of the signals that roll up into how you get certain content in your feed. And there are 15, I was looking before the podcast started, 15 different, um, recommender systems that it uses, and you can essentially kind of reverse-engineer them using the information they've provided, right? It's the most comprehensive kind of library of signals that it's ever put out, I would say. And to your point, that information is still not enough. So it does beg the question of, like, what is it they're looking for? We know that there are going to be risk assessments that they're going to have to complete. There's this idea of kind of systemic risks that it wants platforms to actively avoid in the design of products. It feels like it's going to be a real bind for platforms to adhere to that, which maybe is the point.
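(To make Ben's point concrete: a recommender system in this sense is roughly a scoring function over per-post signals, and publishing the signal list is what lets you reason backwards from a ranking. Here's a toy sketch with invented signal names and weights; it is not Meta's actual system, just the general shape of one.)

```python
# Toy illustration of "signals that roll up" into a feed ranking score.
# Signal names and weights are invented for the example.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    signals: dict  # signal name -> normalized value in [0, 1]

# Invented weights standing in for a platform's published signal descriptions.
WEIGHTS = {
    "author_followed": 2.0,
    "predicted_like": 1.5,
    "predicted_comment": 1.2,
    "recency": 0.8,
}

def score(candidate: Candidate) -> float:
    """Weighted sum of whatever signals the candidate post carries."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in candidate.signals.items())

feed = [
    Candidate("a", {"author_followed": 1.0, "recency": 0.9}),
    Candidate("b", {"predicted_like": 0.7, "predicted_comment": 0.4, "recency": 0.2}),
]
for post in sorted(feed, key=score, reverse=True):
    print(post.post_id, round(score(post), 2))
```

(Transparency documentation typically tells you which signals exist; the weights, which do most of the work, usually stay private, which may be one reason regulators still find the disclosures wanting.)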
Mike Masnick:Yeah. Right. And like, I, I know I've gotten some criticism for being too harsh on the EU on this podcast. So, um, to, to put it in the best possible light, you could say that this is just them making sure that these things are actually happening, right? So that when Meta is putting out this information, is it actually working the way they said it is? And this is the government just going in and doing a check, like other regulated industries, where we're sort of stress-testing: are these things actually working the way you're claiming they're working? Is this good enough? And so it could come out of this process, again, as I said with the age verification piece, that they'll look at it and say, like, yeah, you know, this is good, and, and, you know, everything's good here and now we're satisfied, or maybe there's a few small tweaks. There's just, again, just the worry of, is it ever going to be that way, when it feels like some aspects of this are: we just want to hit these companies over the head over and over again, and sort of prove that we're doing something. And sort of, this is where I'm going to get yelled at by, by Europeans, but like, they've staked out this position of: we are the internet's regulator now, and we're going to come in and, and do our thing. But we'll see.
Ben Whitelaw:Yeah, yeah, definitely a change for US-headquartered platforms, that's for sure. Um, let's, let's move on now. We've spent a whole bunch of time talking through that; I think there's lots of interesting aspects to that story. Thank you for unpacking that. We don't often have stories from Asia, Mike, but I wanted to bring one to our listeners that is essentially a story from January; um, there's been a kind of minor update to it, which I'm using to, to really go back to that story and unpack it. So I'm going to take you back to January and kind of put you in Singapore, which is, uh, often the headquarters of platforms who have an Asian presence, so interesting in that respect. Basically, in January, a Singaporean government ministry created an online trust and safety program and launched a brand-new centre that it called the Centre for Advanced Technologies in Online Safety. It basically made a huge investment in bringing together kind of research partners, companies, experts around trust and safety, and it announced a chunk of money, 20 million at that point, with which it was going to create tools and products that would help combat harm and misinformation. Now, Singapore, as most listeners will know, is kind of partially free when it comes to, I would say, um, if you look at, you know, Freedom House...
Mike Masnick:partial, partially free is, uh, yeah.
Ben Whitelaw:I think that's how Freedom House, the kind of non-profit, categorizes it. So there are restrictions in terms of what people can do in the real world and also online. And so the announcement in January was kind of clearly about how to create a civil, uh, online environment in which it could better control its citizens; that was really what the analysis was at the time. This week, they have invested a further 30 million in that effort, with a bunch of money coming from the National Research Foundation, and that money is going towards basically expanding the efforts to not only create the tools, but also license them and create kind of service-level agreements, where they can, I guess, be scaled and sold. So this is really interesting for me in a couple of different senses, which I'll just unpack. The first is that there is this kind of clear focus on creating a market for online trust and safety technology, right? Governments are trying to stimulate innovation when it comes to tools and technology, because they can see that regulation is going to create that market, and platforms of all sizes are going to have to invest in these tools in order to make their users safer in some, in some capacity. And we've seen research in the UK, and over the last few months a research report by Duco, which you, I think, referred to in a previous podcast, which says that this market is growing. And so it kind of makes sense that Singapore and other governments are looking to stimulate that for their own economic benefit. But the scale of it is also really interesting. The UK government did an element of this over the last three or four years, as it was building out its Online Safety Act. It created what's called the Safety Tech Innovation Network. I actually, um, they commissioned me to do a podcast, I should say, my first effort at doing a podcast before you and I got together. And they invested in, in kind of creating this network and also trying to stimulate tools and technology in the same way; they invested, I think, £400,000 and then £700,000 to basically give startup funding to some technologies. It wasn't quite incubation, but it basically gave some cash to try and stimulate this. 50 million is a hell of a bigger chunk of money to try and stimulate this with. So we're talking here about really scaling the technology piece when it comes to trust and safety, and a really serious effort, I think, to try and get Singaporean startups and technology tools into the hands of platforms, probably across the world. So yeah, I think this is interesting in terms of the fact that the new funding has come so soon after January; I think the scale of it is fascinating. And I think, again, this trust and safety tooling market, which we're seeing grow, is going to be really something we have to keep an eye on, on the podcast.
Mike Masnick:Yeah, it'll be interesting to see what comes out of it. I would say, and this is maybe like the Silicon Valley bias in here, where it's like, 50 million is a drop in the bucket for startups now, where, like, every VC within five miles of where I'm sitting right now has, you know, $5 billion funds. You know, 50 million is nothing, uh, which is ridiculous and silly, but for companies that want to build large, scalable tools, a VC firm that is running off of a 50 million fund, though obviously it can go up, as it just did, is not that much. But, you know, as a way to sort of stimulate growth and investment in, in this space, it is a really interesting thing, and it'll be interesting to see what comes out of it. Where the concerns are goes back a little bit to where I was on the last story, too. I don't want to just be seen as like the anti-government person here, but, but I, I'm, I'm going to raise some concerns around, like, when the government is involved in things around trust and safety, which involve decisions around speech, you always have some concerns about how that is going to play out. Right? And this is like the crux of the whole Murthy case here in the US: was the government too involved in decisions around trust and safety on online platforms? And so when you have the government investing in these kinds of tools, it's going to lead, whether it's true or not, to accusations that whatever comes out of this is on behalf of government interests. And we have that a little bit in the US; for example, there's a venture capital firm called In-Q-Tel, which is, you know, the CIA. And so the CIA invests in startups, and sometimes it's for obvious reasons, where it's like tools that, you know, are going to be for CIA analysts and national intelligence work, or drones and things like that. Uh, and in other cases it just feels like, well, why is the CIA interested in this? But it always leads to accusations among the more conspiratorially bent folks out there of, like, you know, don't trust this company because they have funds from the CIA. That is mostly garbage, right? I mean, In-Q-Tel, while it is connected to the CIA, like, getting funding from In-Q-Tel does not mean that you are now part of the grand industrial espionage complex or whatever. But there are concerns about how these things work. And especially when you're doing things like, you know, the article around this new funding also talks about this particular service, CheckMate, which is sort of like a fact-checking thing, where you can send information in, and if you see something on WhatsApp or whatever and you want to know if it's real or fake, you can send it to the service. And, like, there have been concerns in Singapore in particular in the past, where they put in place a fake news law not that long ago, and there were significant concerns about how that would be used to suppress political opposition. Which in Singapore, you know, being partially free as they are, there have been concerns about the ruling party and the sort of iron grip that they have over, over the government there. And so, if you're doing things like determining, is this real or is this fake news, and you're using a service that has been funded by and approved by the Singaporean government, do you trust that to be accurate?
And so there are some concerns. But I do think that seeing more funding in the space and seeing more experimentation is good, and if that investment leads to better overall services that can be used outside of Singapore and more globally, that could be really interesting. It's just, you know, you always have to be a little cautious around the government funding side of it.
Ben Whitelaw:Yeah, definitely. I mean, look, I, I share the same reservations in terms of those kinds of tooling and technology. I guess it's interesting, the extent to which the government feels like it needs to stimulate this market, right? You know, because government funding is different to VC funding, and, you know, the fact that the government is putting its cash, citizens' cash, into being used in these ways is definitely, I think, interesting and concerning at the same time. Um, so we'll post the link in the show notes to a Rest of World article as well, which last year talked about how the Singaporean government does this thing where it presents a threat to citizens and then casts itself in the role of protector, which I think is kind of what all governments do in some respects. And, you know, it's a really interesting context to this new development of the story. So, yeah, we never really have, I think, good reason to talk about Singapore, definitely, and sometimes further afield, so I wanted to kind of bring that to you. And I'm going to move us swiftly on, because I, I know that we will now go into the kind of quickfire round, but I know this next story is not going to be quickfire. And this is because it's about Section 230. And, you know, I'm not going to pretend like it's going to be a couple of minutes, because it won't be, and we should prepare our listeners for that. Tell us about this new bipartisan bill to, uh, sunset Section 230.
Mike Masnick:Yeah. So, so this bill just came out last weekend, and with it there was like a Wall Street Journal op-ed slash press release from the bipartisan set of House representatives who came up with it, which is Cathy McMorris Rodgers, who is the chair of the House Energy and Commerce Committee, which is the committee that technically sort of has jurisdiction over the internet.
Ben Whitelaw:I actually never knew that. That's really interesting. Do we know why, why does it live there?
Mike Masnick:I think it's the commerce part of the Energy and Commerce. Uh, I don't quite fully understand some of the, like, jurisdictional issues around committees; it's such a weird, arcane science, and there, there are fights over things. So, like, if it's copyright issues, for example, that goes to the Judiciary Committee, and then, like, Energy and Commerce might fight over it, because you'll have issues that are, like, copyright related to the internet, and then who has jurisdiction? And it's getting deep in the weeds, but yes, the Energy and Commerce Committee does. And so last month they had a hearing, an absolutely ridiculous hearing, just about Section 230. And we've had hearings for years about Section 230, and they're often full of nonsense, but usually there's like a mix of one or two witnesses who will speak some amount of sense, and then a bunch of people who will have their, their sort of pet theories, is the way I'll put it, slightly diplomatically. This hearing had three witnesses, all of whom were vehemently anti-Section 230, did not see any redeeming qualities or values in it. And in one case, one of the witnesses just seemed to have no understanding of what Section 230 did at all, which was a very bad thing.
Ben Whitelaw:are you gonna name them?
Mike Masnick:Uh, it's a professor, Allison Stanger. Um, and I wrote a, I wrote a piece about it, this was a few months ago, where I said it literally, like, got terms wrong, like important terms, like things that matter; just did not understand what they were, and described them in ways that were absolutely inaccurate, like totally wrong.
Ben Whitelaw:Okay. Good to clarify. Okay.
Mike Masnick:But the feeling was, and I'd heard this from a few folks in and around D.C., that the hearing was just sort of, like, a way to vent. It was a way to allow representatives who were mad at the internet to have this hearing and blame 230 for everything bad, and nothing was going to come of it. And then, like, a few weeks later, what does come of it is this bill from the chair of the House Energy and Commerce Committee, McMorris Rodgers, and Frank Pallone Jr., who is the ranking member, which is the, the top Democrat. So you have the bipartisan, both parties came up with this bill, which is literally a bill to sunset Section 230. The law is very simple. It just says as of December 31st, uh, at midnight on December 31st, 2025, so, you know, uh, a year and a half from now, Section 230 is no longer in effect.
Ben Whitelaw:So like a stay of execution?
Mike Masnick:Well, it's one way to put it, yeah. And in their announcement about it, they make it entirely clear that, it's funny, because the announcement says this won't do harm, which is ridiculous, because it would do tremendous harm. I've even seen people who are believers in reforming 230 and have all sorts of ideas for reforming 230 who are like, wait, what, no, this is crazy, like, you can't do that, you can't just repeal it; that is just all sorts of problematic. But their argument in their piece was, like, this is effectively putting a gun to the head of the internet and saying, you know, we've been trying to work with you to fix the problems of the internet and you have been stymieing us, so now you have until the end of 2025 to come to us with a plan to fix the internet, or we shoot the internet. I mean, that's, that's really sort of the impression here. So it's, it's sort of a hostage-taking scenario. But there are a bunch of problems with that. Which is, like, if there was a good way to fix the internet, yes, we could have these discussions. The problem is that there's not a law that fixes bad people, and that is really the root of a lot of this. And because of the First Amendment in the US, there is not a law that says we can outlaw bad speech and speech we don't like; that's a speech issue. The other, like, really sort of significant problem is there is a built-in assumption, in the way that Pallone and McMorris Rodgers are presenting this bill, that Section 230 is only protecting the biggest tech companies, Meta and Google and maybe a few others, and it is their problem and they need to fix it. And so it is very much presented as: Meta and Google, come to us with a solution on how to reform Section 230, or we nuke it. And what that leaves out is that Meta and Google are not the primary beneficiaries of Section 230. It is the users of the internet who are major beneficiaries of it, and all of the other companies. Meta and Google have buildings full of lawyers; if they're going to get sued with ridiculous, you know, liability lawsuits, they can handle it. They have the money, they have the lawyers, they're able to do it. It is every other company, uh, the smaller companies, the ones we want to build competition, to build new services, to create all the wonderful new things, the startups that we're seeing; those are the guys who are really the ones who are most protected by Section 230. And this bill and the proposal do not even suggest that there are other stakeholders in this, or that the users of the internet are stakeholders in it. Instead, it's basically saying: Meta and Google, come to us with your reform bill, otherwise we kill the rest of the internet.
Ben Whitelaw:Yeah, it's got a grudge match feel to it, hasn't it? It's kind of like 1v1, winner takes all. I wanted to ask, like, why 18 months? Like, what do we know?
Mike Masnick:I think it's basically just, like, this is how much time you have to come up with a solution. So, you know, they sort of know, even though they claim it wouldn't hurt the internet or wouldn't hurt speech, they're, they're wrong. But they're sort of saying, like, you have 18 months, effectively, to give us a solution for how we should reform Section 230 in a way that you're not going to then use your lobbyists to block. Um, and again, just the idea that it assumes that the stakeholders here are Congress and Google and Meta, and not all of the people who use the internet and not all of the many, many other companies that rely on Section 230, is really problematic. And then the issue is that it feels like this bill might actually have some momentum, just because there is a general anger towards Section 230 in Congress. Next week they're holding a hearing about it. They haven't announced the witnesses yet; I'm assuming that they will be probably pretty much as bad as the hearing a few weeks ago. Um, and normally when bills come out that have no chance, I don't hear much about them, like, people don't raise the alarm. This bill, like, my email inbox is full of activists and civil society and trade groups and everybody being like, hey, wait, this is, this is bad, did you see this? This is, this is a problem. Which usually indicates that they've heard some news from Congress that this bill actually has a chance to move. Um, and so there's some concern. Like, when I saw it, my initial reaction was, this is silly, nobody is ever going to actually do this, because even the people who want to reform 230 recognize that this is a crazy, crazy way to do it. But maybe it is actually going to go somewhere, and therefore you should allow me to rant about it for a few minutes.
Ben Whitelaw:I'm glad we did, I'm glad we did. I mean, so, so that hearing is happening next Wednesday, the 22nd. So if you're...
Mike Masnick:And, in theory... sometimes, I will note, hearings will often move from when they're originally announced to when they actually happen. So it might move, but as of right now it is scheduled for Wednesday morning. So next week we may have something to talk about.
Ben Whitelaw:Okay, awesome. Thanks, Mike. That is really helpful and kind of gives us something to look forward to, I would say. Something to, something to live for next week. Um, great. So, just in terms of the next story on our list this week, I wanted to flag a super interesting story that we both actually read, and it's about YouTube and YouTube's blocking of several independent media channels. So, as listeners will know, the Indian election is happening right now. It's a gargantuan process; it's due to wrap up in two weeks' time, roughly. And a lot of people are using YouTube to get their political commentary and to understand the election and what's happening and who to vote for. So it's a really interesting move by YouTube to essentially follow the Indian government's lead, its ruling to take down these channels. This is something that the Reuters Institute has written about, and also MediaNama, which is a kind of Indian tech policy site, which we'll include in the notes. But basically, a couple of channels, one called Bolta Hindustan, which had 275,000 subscribers, was essentially taken down with no warning, tried to appeal, wasn't told anything. And only when the press went to YouTube and asked for a comment did YouTube say that it was because its policy is to take down channels that make false claims that undermine trust in electoral democratic processes. So no one's really sure why this channel was taken down, but the feeling is that one video or something has led the Indian government to use Rule 15 of its own regulation, the IT Rules that came into force in 2021, to take down this independent media channel. So, we know that Modi and India have a really strict regime when it comes to, to kind of press freedom, but this is YouTube doing its bidding in many respects. And,
Mike Masnick:Yeah.
Ben Whitelaw:uh, we've also seen, in the course of the last few weeks, other channels being demonetized for similar reasons: no explanation, no transparency, no means of appeal. And these aren't small channels, right? This Bolta Hindustan had 7 million views in the 28 days before it was taken down. So this is like a serious media outlet, used by lots of people. And this is, I think, concerning in lots of ways. What did you think about it?
Mike Masnick:Yeah, I mean, there are a few different storylines in here that are all interesting. One is how journalists were resorting to YouTube because they were recognizing that other ways of getting the news out were not as effective, and YouTube was a way to route around some of the censorship that they were seeing within the country, and then the censorship sort of followed them there. Um, that was interesting. I think the use of the IT laws was interesting because, you know, if you go back a decade, India actually had very strong intermediary protections, somewhat similar to Section 230. In fact, there were some legal disputes about a decade ago that effectively established that the IT laws in India were very similar in the way they were set up to Section 230, meaning that, you know, a YouTube would not be liable for the speech that people were posting to it. Now, over the last few years, those laws have changed, and they have sort of directly rewritten them. And one of the concerns that I will often raise about moving away from Section 230 is that it opens up opportunities for government censorship. And here seems like a really clear case of that exact thing happening, where they are using the nature of this law, where there can be liability on the intermediary for hosting speech that is deemed to be misleading or misinformation, to force them to shut down those channels or demonetize those channels and effectively kill them. And it becomes a tool by the government for censorship. And then the, the one final thread to pull on here, which is also super interesting: you know, this year everybody's talking about elections and misinformation and how the companies are going to deal with it. This is where that can backfire as well, where the government is presenting this as, like, this stuff needs to be taken down because it is election misinformation. How is it election misinformation? Well, it's critical of Modi, right? You know, and so as soon as you give a government a tool, and this is, like, becoming the theme of the podcast, you give the government a tool to determine what is legitimate and what is not, it opens up an opportunity for a more authoritarian, illiberal government to use that to remove content. And that appears to be what is happening in this scenario.
Ben Whitelaw:No, it's true. And I mean, one of the points made in the piece is, if there was a false claim of some kind in one video, then take the video down; don't take the whole channel down. You know, this is like a using-a-hammer-to-crack-a-walnut situation, and...
Mike Masnick:But that, that's what happens. I mean, it is not a unique situation.
Ben Whitelaw:Yeah, no, definitely. Okay, so, um, let's move on to our third story, one that you'd read. I actually hadn't read it, so I'm, I'm planning to get around to this, but tell us about this Guardian piece that you read.
Mike Masnick:Yeah. This is really fascinating. It's a long, it's a long read, and slightly terrifying, uh, and it involves a lot of people who you will not like by the end of this piece, for a variety of reasons. But it is about a woman who was arrested for supposedly creating a deepfake video, and you know, there's been lots of discussion around deepfakes, um, a deepfake video of a teenage cheerleader at a local high school that showed the cheerleader supposedly vaping, which would have violated some sort of policy on, like, partying or something. And so that, that was sent around, and the discussion was, you know, was this an attempt to, like, get this cheerleader kicked off the cheerleading squad, where it was like a big deal. This feels like a story that is not of major importance, but, but it is an interesting one, because the police came out, they looked at the video, they declared that it was a deepfake, that this woman had faked the video in order to harass this young woman who is the cheerleader, and therefore she was arrested. It later turned out that she didn't; the video was actually real. Um, but the, the claim by the police that it was a deepfake was wrong, and then they sort of stuck to it. There were all sorts of signs from very early on that there was no way this could have actually been a deepfake. One, like, deepfakes are fairly difficult to do, and this was a few years ago, before even all the tools that are available now existed, and yet they were assuming that she had, like, used an iPhone 8. And this was someone with very little technical experience or knowledge, supposedly creating a faked video using, like, pictures from social media. There were some other accusations around there too, of, like, altered images and things like that, which may have been done by other students. But basically the crux of the story is this sort of general fear of, like, what the technology is going to be able to do around deepfakes and modified images, and we're hearing all of these stories, and law enforcement and the media in some cases totally overreacted to this one case, which was a woman sending a real video around, maybe for questionable purposes, that got claimed to be a deepfake. And we've seen this in some other contexts, where, for all the talk of, you know, deepfakes are going to impact elections, we haven't really seen any indication of that for real yet. That's not to say it won't come, but most often where it's come into play in politics is when someone has been caught doing something bad, and they actually just claim it's a deepfake as an excuse. And it just becomes this sort of excuse. And in this case, not only was that excuse made by the teenager, but the police bought it and then arrested this woman, and she, she had to go to trial. And so I'm sort of fascinated by this seemingly one-off story as an indication of how we tell these stories about technology and what it can do and why we should be afraid of it, because things like deepfakes and manipulated content are going to be such a problem. And when it appears in this sort of small-town story, people just jumped to the conclusion that that must be what happened, and it completely messed up this woman's life. And, like, she had neighbors and people, like, screaming at her as this horrible person.
She might still be a horrible person for other reasons, but it struck me as this kind of really interesting example of: when we just tell the story of technology being used for terrible purposes, people will assume that in all sorts of cases, and that can lead to other problems.
Ben Whitelaw:Yeah, no, that's true. It's also really telling for me, and I'm going to go back and read this, but from what you said, it really hammers home how important cheerleading is in the U.S...
Mike Masnick:Parts of, parts of the US, yes. And that is, like, you know, that is a part of it, uh, where it's, it's just sort of like, being on this cheerleading squad is a hugely important deal, and there are competitions, you know. So, like, part of the story is, you know, was she trying to get, like, rival girls of her daughter kicked off the cheerleading squad? And that may have been true, and whether or not you approve of that sort of activity from a parent, like, it's...
Ben Whitelaw:Like, we think about deepfakes being used in politics, but, like, maybe somebody is deepfaking their child being really good at football in order to get them a scholarship or something. Yeah. Like, it's remarkable how many, you know, scenarios there are where this kind of gap between what is real and what is not will emerge over the next few years. Um, I will go back and read that. Thank you, Mike. That's really helpful. Maybe you can do more of that for me, filtering the best stuff so that I don't have to read long reads that I don't enjoy. Um, last but not least today, just a brief mention: we're kind of journeying back to the EU now for a story that I thought was just kind of funny from the perspective of the UK leaving the EU in 2016. I won't reveal, um, how I voted in the Brexit referendum, Mike, but there's, there's something quite ironic about the story that emerged this week that Ofcom and the EU Commission have signed an administrative agreement to share kind of training and knowledge and best practices about online safety and internet regulation. This is going to make Brexiteers squirm. This is essentially, this is like going back a decade, um, when the island that I live on used to be kind of connected in, in more meaningful ways to the mainland. But yeah, that's, that's a kind of lighthearted point; the way that regulators are working together is also a kind of more serious point here. There's a network of regulators, eight of them, from folks in Australia to New Zealand and the UK as well, who are coming together to kind of learn from each other. It's called the Global Online Safety Regulators Network, and there's essentially this kind of elevated group of regulators who are working together, which, if you think about what kinds of regulation we're seeing and the similarities between them, is kind of an interesting point to, to recognize, right? That if you're trying to regulate the internet, the best way to do that is to go to the people who have already done it and to find out what works and what research they've done, and to learn from them. So there's a question there about whether we're going to see any kind of new types of regulation while those regulators are working together in that way. But I thought, you know, it's just kind of funny that the Brexiteers won't, you know, won't be getting their way when it comes to online safety.
Mike Masnick:It was kind of like, let's regulate the internet as if Brexit never happened.
Ben Whitelaw:Yeah. Yeah.
Mike Masnick:That's sort of my take on it. Um, but yeah, I mean, it, it is interesting. I understand, like, the value of, of communicating and sharing thoughts and best practices. I worry about that, again, sort of cutting off interesting experimentation, and recognizing, like, different approaches might work in different scenarios, if everyone sort of gets locked into: this is the one way to regulate services on the internet. But yeah, it'll be interesting to see how this plays out.
Ben Whitelaw:Definitely, which brings us to the end of our stories this week, Mike. We've, we've whizzed through those. Thank you very much for taking us through some really chunky reads this week. Is there anything you wanted to share before we wrap up?
Mike Masnick:No, I mean, I think that that's basically it. You know, there's obviously a lot of other stories that happened this week; we were, you know, we had a long list of other potential stories. Um, but, um, I think there's, there's a lot going on in the world of online speech and, uh...
Ben Whitelaw:I think it's interesting for listeners to know that we spend basically an hour before we start recording, filtering down maybe 15 or 20 stories sometimes into the six to eight stories that we talk through in each podcast. And we kind of try and pull out the best pieces and ensure we don't speak over each other and say the same thing. But if, you know, if there is interest in seeing the long list. Do let us know, get in touch with us via the, uh, control all speech website, that's CTRLaltspeech. com. And, uh, we'd be happy to share them with you. And, uh, thanks again for listening and, uh, speak to you next week.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L-alt-speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.