Ctrl-Alt-Speech

This One Weird Trick to Save the Open Internet

Mike Masnick & Alex Feerst Season 1 Episode 8

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Alex Feerst, former General Counsel and head of trust & safety at Medium, and co-founder of the Digital Trust & Safety Partnership. Together they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund. 

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Mike Masnick:

In the words of the now, I guess, pretty long gone Google Plus, Alex, what's on your mind?

Alex Feerst:

Well, I mean, it depends which of my circles you're in, like, which version you get. But I have lately been thinking about scooter safety, since I've been wondering how long before San Francisco passes a law stopping everybody from buying non-speed-governed scooters and doing 35 in the bike lane. It's also like the paradigmatic case of tech regulation. I think about it a lot while I'm scootering around San Francisco and hoping that they forget to regulate this one for a while.

Mike Masnick:

Excellent. Excellent.

Alex Feerst:

How about you?

Mike Masnick:

Uh, I am trying to figure out how to run this podcast, because normally Ben's been doing this part of the discussion, and I'm trying not to mess stuff up. So that's how I'm doing.

Alex Feerst:

Fair enough. I remember being at an EFF cyberlaw trivia event in like 2012 or 2011, when the Google team named their team Plus, cause they were super proud of it. They were just like, wait for us to dominate social media, you fools. And we were all very scared.

Mike Masnick:

Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of all the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund. I am Mike Masnick from Techdirt, and you may notice that for the first time on Ctrl-Alt-Speech, there is no British accent leading the way. Ben is traveling for work, and so suddenly I need to learn how to steer this particular ship. Sitting in for Ben, though in some cases sitting in for Ben and in some cases sitting in for me, is Alex Feerst, who you just heard. Welcome, Alex.

Alex Feerst:

Great to be here.

Mike Masnick:

And, for those who don't know, Alex is a lawyer and trust and safety expert who was general counsel and head of trust and safety at Medium for many years, and also co-founded the Digital Trust and Safety Partnership. Whenever I want someone who is misunderstanding stuff to better understand trust and safety, Alex is usually one of the people I try to get them to talk to, because he's one of the best trust and safety explainers that I know.

Alex Feerst:

Thanks, Mike. Yeah. First we find the facts, and then we can distort them at our leisure. Part of my favorite pro bono practice.

Mike Masnick:

Excellent. Excellent. Um, so, just before we get into our big stories of the week, is there anything interesting that you're working on these days, besides preventing regulation of scooters in San Francisco?

Alex Feerst:

Yeah. It's funny, cause I'm working on a thing that I'm tentatively calling Against Safety, about trying to take a more historical view of the baselines of how much safety people expect from the world at different places and times. Because it feels like now we're on a one-way ratchet to safetyville. And I may be one of the people who helped cause this. Without politicizing it, it may be that human beings will spend an infinite amount of resources and time trying to make the world safer for themselves and others, which is in some ways a nice thing, but it has all the trade-off problems with the other things we care about. It started with thinking about whether young people just have a higher expectation of safety, and whether the concept of physical harm has migrated to psychological harm in a big way as younger people have become more mental health aware. And so I was thinking about how much safety we expect the government or private companies to provide us at any given time and place, and whether it's moving in a particular direction.

Mike Masnick:

Yeah.

Alex Feerst:

And, again, the sort of migration from offline safety to online safety. I sometimes wonder, like, are there more narcissists in the world now? When was the high point of narcissism in human society globally? So, yeah, I'm an amateur historian who likes doing this stuff.

Mike Masnick:

That's interesting. I mean, I would definitely be very interested in reading about it, and maybe discussing it on the podcast when you've written it. So,

Alex Feerst:

Thanks. Yeah. I mean, I think the basic idea is that even in trust and safety we often talk in terms of, this was insufficient. And so part of the question is, insufficient relative to what? And if we're doing this in a democratic way, how do we come to an aggregate belief about how much safety is the right amount, you know?

Mike Masnick:

Yeah. Interesting. Very interesting. All right. Well, let us jump into our stories of the week. As always, we're going to try to do two larger, more in-depth stories, and then a somewhat lightning round of faster stories. We'll see how we do on any of that; we're not always great at timekeeping here. But I want to start with this lawsuit that was filed earlier this week, which I've been jokingly referring to as Zuckerman versus Zuckerberg, though I was corrected by Ethan Zuckerman on how he pronounces his name. So: Zuckerman versus Zuckerberg, or Zuck versus Zuck. Ethan Zuckerman is a professor who has done a whole bunch of stuff and has been around; he's one of the early open internet folks. Though he also created the original pop-up ad, and much of his work since then has been penance.

Alex Feerst:

That's one for the tombstone. Like, you have a second pop-up tombstone over your normal one.

Mike Masnick:

Uh, so he has been working on a whole bunch of projects related to a more civic and democratic internet, trying to bring back better spaces in which to communicate. And he has now filed a lawsuit against Meta, represented by the folks at the Knight First Amendment Institute at Columbia University. The basic claim is that he wants to recreate an app that somebody else had created a few years ago, called Unfollow Everything, or Everyone, I'm suddenly blanking. The original version was just an app that automated the ability to do what it says: unfollow absolutely everyone on Facebook and have a blank feed. So you would no longer have a feed when you log into Facebook. You could still go and check out people's accounts, if they allowed non-followers to see those accounts. And when that guy published the original one, he was blocked and banned from Facebook for life. So Ethan wants to reintroduce it, but also include an academic research component, which is that, by checking a box when you sign up, you can also agree to submit back some anonymized data, so that on the academic side they can study what happens if you don't follow anyone on Facebook and what sorts of information you are exposed to. So they are suing Meta for a declaratory judgment that they are not infringing on any of Meta's rights. And the interesting element to the case is that they are using Section 230 to do this, which is, um, surprising.

Alex Feerst:

Yes, this is the unloved 230 C2B.

Mike Masnick:

C2B, right? So when everyone talks about Section 230, they're usually talking about C1, which is the 26 words, the infamous or famous, depending on where you sit, 26 words that say you're not liable for the stuff that someone else has done, effectively.

Alex Feerst:

This is the other stuff that, I guess, Representative Cox wrote while he was sitting on an airplane.

Mike Masnick:

Yes. Yes. Famously. And so C2A comes up in some conversations, because that's the part that says a site can block things that are lewd or, whatever, otherwise objectionable. But C2B basically never comes up, and it basically says there's no liability for tools that restrict access to content. So it was designed to be about filtering, I think. Right?

Alex Feerst:

Right, exactly. And I don't think there was a great distinction between things that were made by users and things that were made by the platform. It's just sort of, tools, ways to make it better. And as happens with statutes, this is like a Chekhovian pistol that's been lying on the mantelpiece all this time, and somebody decides to weaponize it and be like, let's see if it still shoots. I feel like in law school I knew people who wanted to bring the Third Amendment case about quartering soldiers that was going to make their career. And so, yeah, it's nice when you have a law and then you watch it make its way in the world, and it has this lopsided life. Possibly nobody would have predicted that those 26 words would become a 20-year confusing debate about platforms versus publishers. But then you have these other moderation protections that are lying around. And so it's definitely cool, at minimum cool and clever, to style the lawsuit this way, and to see whether this part of Section 230 could be used as, maybe not a sword exactly, but not just a shield either; like a shield that you use to hit people.

Mike Masnick:

A weaponized shield, right? Because 230 is normally a defensive thing, right? Somebody sues you and you say, I'm immune because of 230. And here they're using it, I mean, it is still a shield, right? Because they're trying to say, Meta can't sue us; we just want to find out ahead of time. And we should note that there are some procedural concerns about standing, and whether or not you can really bring a declaratory judgment for this kind of thing.

Alex Feerst:

Yeah. From the day that I was awake in Civ Pro: you can bring a DJ action because somebody sort of puts a cloud of doubt over whether the thing that you're doing is legal, whether it's a breach of a contract or some other thing. And so in order to get clarity on whether your conduct is okay before you start doing it and investing in it, you ask the court: could you tell us; we're going to sue you in advance, because somebody did a thing that scared me about my legal rights. But normally, they have to scare you by threatening you. Here, it's that somebody threatened another guy whose path I am sort of following.

Mike Masnick:

Right.

Alex Feerst:

And so maybe this is also a cloud of danger over me that the court should clear up, and sort of clear the path for other people.

Mike Masnick:

Yeah, but the more interesting thing to me, and I think probably to people who are listeners of this podcast, is what this could mean if it succeeds. And again, there are lots of procedural reasons why it might not. But if it actually succeeds, in theory it would mean that at least a certain class of middleware, middleware being third-party apps that can interact with content on social media or other platforms, could not face legal liability from those platforms. So the platforms couldn't sue them for, like, a CFAA violation, which is the most standard thing that has happened. There have been a bunch of related cases in the past, about apps that were trying to build on Meta or Craigslist or LinkedIn, or that were sometimes scraping data, that faced CFAA lawsuits. And so part of the argument here is: we can't face a CFAA lawsuit for this.

Alex Feerst:

Yeah. And so here, 230 C2B is sort of the advanced phalanx of shields behind which middleware can advance and start doing things in the world for people, such as getting data and using APIs. And it's a little bit of a harkening back to the olden times, of like 2006 through 2012, say, when things tended to be more open, like the Twitter API and Facebook API before things got shut down, nominally for security reasons and quality control reasons. And so this would be an interesting return to the source, to re-enable a path that withered a long time ago. And it's interestingly different from the hiQ stuff and other scraping things.

Mike Masnick:

Right. Because hiQ was just about collecting data from LinkedIn, and whether or not LinkedIn could use the law to block that, whereas this would be empowering users, rather than just pure data collection. Right.

Alex Feerst:

Yeah, so like I said, it's cool and interesting. The app itself, I sort of wonder about; we were talking about this. There's the middleware part, which is potentially good, and people should not fear a CFAA violation for noodling around. And they also ask for a DJ about the terms of service, which is interesting. I've always thought you can change the terms of service pretty easily, but maybe this becomes an advance pronouncement on how this stuff works. The app itself, to me, gets at part of the reality of personalization. I feel like if you go to Facebook and follow nobody, and you could manually unfollow everybody in every group and leave everything, you just get that matrix-with-no-people version. In some ways you get your own reflection, and you get the thoughts in your head, but you're not immersed in a personalized, algorithmically created reality, which is sort of the whole thing that social media creates. And it's not for you to edit it; it's for it to be a fun environment. So in some ways it's like the right to have social media that does not do the thing that its creators really want you to do with it.

Mike Masnick:

So there is a question of, how many people would actually use something like that? Or, what is the likelihood that there is a big marketplace for these kinds of services, right?

Alex Feerst:

Yeah, but it could be a first shot at what some people think could become a flourishing marketplace of middleware that lets people structure their own social media and internet experiences better.

Mike Masnick:

And I've seen a few people who have been somewhat concerned about the potential, they think negative, downsides to this. You raised the fact that, in the earlier era, everyone was more open and had APIs and was willing to encourage this sort of middleware behavior, and then they locked it down, and one of the reasons you mentioned was security. So there is an argument some people are making that this could harm privacy and security. The examples people bring up are the sort of Cambridge Analytica or Clearview AI kind of situations: are you enabling something that then creates a major security or privacy concern?

Alex Feerst:

Yeah. And to me there's an interesting layer of trust and safety work here, which is: aside from the legal requirements of the ToS and whatever, you're sort of taking on the idea that you're entrusted with protecting the folks who are using your site as users from bad actors who are using your site to do bad things. And I've definitely had the experience where those third parties will be like, well, who are you? Who are you to say what's bad or good? We're offering a service. And it's a curious thing, because it's one of these soft layers that sits on top of the legalistic parts of trust and safety: yes, you've sort of self-deputized as the sheriff of whatever platform, to protect people from the people you deem to be bad actors. But you're highly incentivized to deem anybody who messes with your business a bad actor, or anybody who uses the platform in a way that's different from your product vision. And so you get this inevitable problem where, it's not exactly bias, but your judgment about what is really bothering people is certainly not objective, and it's potentially very questionable. I just call this the citizens of Gotham problem: the citizens of Gotham getting caught in the crossfire whenever platforms enable non-platform actors to gain power and control and access. There's always this tug of war between them and the platform, potentially. And then everybody sort of says they're acting in the name of the people, and maybe believes it, but the citizens of Gotham are just taking on all these bullets as you figure out who can scrape.

Mike Masnick:

Right. Right. All right. Well, I do think this is going to be a really interesting case to follow. I think there are some interesting challenges involved with it, but it's a really unique way of using Section 230, and it'll be fascinating to see how the case goes. I do think there's a little bit of potential irony in the idea that people have talked about Section 230, I think misleadingly, as a major gift to Big Tech, and here is a way that Section 230 might be used to knock down some of the walls of Big Tech and potentially empower, I guess, the people of Gotham.

Alex Feerst:

Yeah, yeah. Well, it'll be interesting to see how Meta responds. They may very well file a motion to dismiss that's sort of, there's nothing to see here, is my guess. But we'll see. And just to touch on it briefly: this also sort of contemplates that there's more than one type of interactive computer service under 230.

Mike Masnick:

Yeah.

Alex Feerst:

For a lot of years, when you put the statute in front of somebody who hasn't spent 20 years fighting about, like, the word platform.

Mike Masnick:

Right.

Alex Feerst:

They'll go looking for the word platform in there, and instead there's this thing called an interactive computer service, which, from what we can tell, was what people thought Prodigy was at the time, you know? And it has certainly migrated a whole bunch since. And if your Facebook app or whatever is one type of ICS, you now have these unfollow apps that are a different type of ICS. And so this is maybe the first time that a pluralistic interpretation of interactive computer service types is coming up, where ICSes are sort of being put on a collision course with each other. Or at least, Facebook's lawyers are potentially not used to arguing against interactive computer services being able to do what they want. So it's an interesting flipping of the polarity in the fossil record, to see where this goes. Yeah.

Mike Masnick:

I mean, and I know we should move on, but I'll bring up one other point; there are so many interesting little nuggets in this particular case. There's nothing in this case, or in Section 230, I think, that would stop Meta from using technical means to block the app. There's nothing that says you have to carry it. But there is an interesting component that some people have raised, which is that if it becomes a back-and-forth war of technical means, you know, Facebook tries to block, they come up with a workaround or whatever, in theory the one avenue still open to Meta would be a DMCA 1201 claim for circumvention of technology, because Section 230 has an explicit exemption for IP claims. This is getting deep into the legal weeds, but it's kind of this funny element, I think, that's in there.

Alex Feerst:

This is like an action movie, resurrecting somebody that we thought was dead for a long time.

Mike Masnick:

Yeah, yeah, yeah.

Alex Feerst:

Using the DMCA against me for circumventing Facebook. Um,

Mike Masnick:

It reminds me of the printer and garage door opener cases from a decade ago, arguing over this stuff. So it'll be fun to explore. We'll see how this battle goes, but I'm sure this is not the last we will hear of it. All right, but let us move on. We're going to jump across the Atlantic Ocean to the EU, which has opened a formal DSA investigation into Facebook and Instagram on a variety of different things related to disinformation and the various elections that are happening, the EU election happening very soon, obviously. There are a whole bunch of concerns in here, some of which seem contradictory. They're worried about how well Meta is handling deceptive advertising and disinformation, and about the visibility of political content, because Meta has made it clear in the last few months that they don't want very much political content, because it seems to be a pain for them. A couple of other things in here: the EU is concerned about the removal of CrowdTangle, which is a sort of transparency tool, and then a little bit about the mechanisms to flag illegal content. So, Alex, what is your reaction on seeing this particular bit of news?

Alex Feerst:

Right. Yeah. As often happens, I don't know who to root for or against. I need a big foam hand that says, can I root against everybody? I do think, as these things go, and I know I'm a dyspeptic skeptic of EU stuff, but this feels to me like a grab bag of stuff we're unhappy about, as opposed to a particularly focused line of criticism about a phenomenon. The transparency aspects and the surfacing of political content are related here, but they strike me as a bunch of things they're pissed about, and, let's just open an investigation, because we've been planning to do this since day one. I think we could talk about each of them in isolation. But in a lot of the reporting that I saw, it was just, the EU says, you know, Meta's work is, quote unquote, insufficient

Mike Masnick:

Right.

Alex Feerst:

in this area. And this gets back to what I was teasing earlier, which is: insufficient compared to what? If they show up and say, we spent umpteen billion dollars on this, is the answer, well, you should spend umpteen plus 10 billion on it, it's not enough, keep doing it? Or is it, you're not spending your money wisely; we, the European Commission or whatever, know how you should be spending your billions on this, and it would be better if you did something different? So anyway, yeah, it feels like sending the dish back to the kitchen without really saying exactly what's wrong. It's just, we're not happy with you. I think we'll talk about the isolated claims, but to me, those are the vibes.

Mike Masnick:

Yeah. And, you know, there are some parts that are interesting to me. The argument that I've had about the DSA is that the folks in Europe keep insisting that it is not a law to regulate speech, and yet every time there's an investigation, it seems to be about them regulating speech. And so I have an issue with that.

Alex Feerst:

Yeah. Yeah. And I sort of describe it as, folks don't want to talk in a purely consequence-driven manner about which speech they want to get rid of, or censor, or whatever word you want to use. And so we've gotten into this very lawyerly, indirect proceduralism in the way these laws are drafted, because in the US, at least, you're trying not to trip over your own feet on the First Amendment. And I think even in Europe there's a sense that it is not cool to just do direct, naked speech regulation. So we've got this proceduralist approach to everything, which is what we do because we're lawyers. But when you do it in certain ways, it becomes sort of gnarled and circuitous, you know what I mean? When the law is like: well, publish what you want, but you should publish a transparency report in this font, and you should have a piece of software that measures this other thing, and also there should be a minimum of X number of people who speak Y language. And many of these are perfectly good and understandable threshold mechanisms about the robustness of your trust and safety work; we did a lot of this at DTSP, thinking about things that are modular as opposed to absolute. But I do think this is what happens when you take this highly process-driven approach: when you then do openly consequence-driven stuff, it just looks weird, and potentially takes the veil off all of it.

Mike Masnick:

Yeah, and we can look a little bit more at the different concerns. But the one that definitely stood out to me as, this seems weird, was the visibility of political content one.

Alex Feerst:

Yeah.

Mike Masnick:

Because, you know, the way it's written: Meta has made clear on Instagram and Threads and Facebook that, basically, I think, they're sort of sick of dealing with the controversy. And so the way they've decided to deal with that, especially in election years, is basically: we're just going to demote the value of political content, and often that means news content, in anything that you post on our platform. So those posts show up much less.

Alex Feerst:

Yeah. This is like a pox on all your feeds,

Mike Masnick:

Yes.

Alex Feerst:

just, you know, knock it off. But it's not exactly a scalpel.

Mike Masnick:

Right. And it might be a position that I don't think is a good one for them to take, but it's one where I understand where they're coming from. And it strikes me as a little bit odd that that is something the EU has any legitimate concern over. To say, you need to put more political speech into everyone's feed, seems like a very weird stance that I don't fully understand.

Alex Feerst:

Right. And also there's a two-step there, where it's: you should have more political speech, and it should be accurate.

Mike Masnick:

Yeah. And less misleading.

Alex Feerst:

Well, it's like, we already have plenty of political speech.

Mike Masnick:

Right. Because, hopefully this doesn't surprise anyone, but when you're dealing with political speech, there's often a fair amount of disinformation and propaganda included in that political speech. This investigation seems to be suggesting that they want more political speech, but less of the misleading political speech.

Alex Feerst:

Yeah. Yeah. And I get it, if you come at it from where they're coming from: we don't want our elections to get messed up.

Mike Masnick:

Right.

Alex Feerst:

And for me, everybody I know who works at companies large, small, and in between, nobody wants to be the platform that gets blamed for messing up some election or whatever. And I don't know what the correct amount to invest is, but I would say people invest plenty in election protection. Again, they may not invest it wisely, and it may just be an intrinsically hard thing to do. But the notion that platforms and other tech folks don't care about elections seems hard to believe at this point, because nobody wants to be accused of messing up an election. So I totally get where they're coming from; they're just like, fix this in advance. But I also think this is one of those things where creating ever larger sticks does not create more precise sticks. It just gives you bigger sticks to hit people with. And I'm not sure an infinite amount of clubbing the election protection team, or whatever, at Meta is going to make them work harder to protect European elections. I don't know, maybe it will, but it just seems like we've gotten to a very atavistic place on this.

Mike Masnick:

Right. I mean, the one specific thing that was raised here that is interesting is the issue with CrowdTangle. CrowdTangle, you know, is this tool that Meta had purchased a while ago, and it became very useful for academics and journalists in particular to look at things that were happening on the platform, and often to point out things that perhaps Meta was not doing well. And so the shutting down of the tool has certainly raised eyebrows, not just in Europe; obviously in the US, I know a lot of academics and media folks are pretty upset by it, and people have been asking Meta not to shut it down.

Alex Feerst:

Yeah, it's a bummer. It was a good tool. In fact, when I first showed it to some people, they didn't even believe it existed, because they were just like, how is it possible that they make this available?

Mike Masnick:

Yeah.

Alex Feerst:

And I guess with my platform lawyer hat on for a second: part of the sense of dread, when I think of this from the company side, is, okay, I'm for research. I want people to know stuff. The problem is there are multiple groups of chaos agents with different incentives. For any given researcher, they're going to do some research on the data: great, knock yourself out, let's learn something. But I can't make them say anything or not say anything. It's tempting to cut them off from the data if they say something I don't like. And then some reporter, for some other reason, potentially inadvertently mischaracterizes it, or writes a headline that's super spicy but, from the platform's perspective, misleading or mischaracterizing the paper. And by the time you get to seven generations of reblogging of the spicy headline, it looks nothing like the data that was picked up by the researcher. And then you have a legislator hold up the eighth-generation spicy misinformation headline and use it as the basis of a law. And so that chain of events is what's in my head when I see people being like, oh yeah, we're just going to look at some data. That's not a reason to not let them see data, but it's definitely an incentive to say, no good deed goes unpunished, let's shut off the free data thing, because it's yet a different type of loaded weapon sitting around. Those groups are not coordinating, but all the chaotic incentives, or bad incentives, everybody has, to me at least, often lead to that kind of outcome. And if you're a decent lawyer, you're supposed to be responsible to the company and try to lower the probability that that happens.

Mike Masnick:

Right. Well, as, as

Alex Feerst:

That's the devil's advocate view, but, you know, that's how the world looks to them.

Mike Masnick:

I think that's a valuable perspective. I was going to say, as the eighth-generation blogger in this conversation, I'm going to take the other side.

Alex Feerst:

You're, you're the first generation. Absolutely.

Mike Masnick:

I mean, honestly, I do understand that side of it. But I am also kind of annoyed by CrowdTangle going away, because it was a really useful tool for those of us who try to explain these things accurately and use the data intelligently. But

Alex Feerst:

And it's, like, data; I would say it sounds harmless. It's information; great, have some researchers poke around in it. And it was great to be able to reference some kind of ground truth. There were the days, I remember, after 2016, when you'd look at CrowdTangle and be like, oh, eight of the top trending stories today are conservative stories. So the notion that conservative stuff is being suppressed to the point where nobody's seeing it is provably untrue. That was useful for having some basic conversations.

Mike Masnick:

Yeah, yeah, absolutely. And also, we should note, this is just an investigation at this stage, and it is entirely possible, if not probable, that the investigation could turn up that Meta is actually doing things okay, and that there is no cause for further action. So, you know, we'll see how this turns out, but it is

Alex Feerst:

Is there data on how often investigations end with, all right, good job, everyone, here's a report, everybody go home?

Mike Masnick:

Well, I mean, the DSA is new, so I don't know that any investigations have concluded yet. But yeah,

Alex Feerst:

this is the third or fourth, I think.

Mike Masnick:

I think so. I think there are two TikTok investigations, and there's an AliExpress investigation. So I don't know. It's still early.

Alex Feerst:

Yeah. And I think nobody should be surprised that they were going to use this to mount some investigations; that's part of the whole point of the thing. I think we're all interested to see what those investigations look like, and which things they chose to prioritize. And like I said, it's perfectly sensible that elections would be an important thing for them. And then maybe there is a greater sense of plan or structure as to how these different threads fit together into some sort of coherent, here's what we're trying to show is happening. But we'll learn something.

Mike Masnick:

Yeah, absolutely. All right. Well, that was our second big story, so we'll move on to our attempt at a lightning-ish round. We're going to leave the EU, but just jump across the Channel to the UK, where Ofcom is apparently investigating OnlyFans for potentially violating some element of their duties to protect people under 18 through its age assurance system. So, Alex, do you want to explain what happened here?

Alex Feerst:

Yeah. So it looks like OnlyFans, in trying to ensure that only folks over the appropriate age are using it, was using this third-party company called Yoti, which has a service called FAE, facial age estimation. So you basically turn on your webcam, it looks at your face, and then it determines how hard you've been living, slash, how old you are, and then gives you access or not. And this is part of the sort of third-party age verification regime that's been talked about one way or another for the past couple of years. Yoti is one of the main third-party players that, as I understand it, a bunch of porn sites and other folks use. So they are one of those infrastructure, load-bearing things for a lot of folks' age assurance. What happened here, supposedly, and the Yoti CEO wrote some stuff trying to clarify what happened and explain it was not their fault, is that OnlyFans believed that they had set their filter age to 23, when it was actually set to 20, and therefore it was using an age that didn't make sense. And so they self-reported to Ofcom that they had done something bad, and then it turns out that they actually weren't doing the thing that they had self-reported for. So that's what I was seeing. And Yoti's point was just: whatever, it's not the app's fault; you've got to set the age in the right place. It brings up this interesting question: if you're trying to filter out people below 18, with current accuracy and confidence intervals, where do you set the age to get the optimal balance of false positives and false negatives, and, like, people wearing Ronald Reagan masks, or whatever other weird shit people do?

Mike Masnick:

Right. Yeah, cause the idea is that you wouldn't set the threshold at 18, because obviously, if you get that wrong, you're going to let in a lot of people under 18. So the normal thing is, you put some higher age on there, with the assumption that some people who are over 18 will still get caught, and then maybe there's a secondary way of checking. If you don't pass, if they think you're under the age of 23 or whatever, then there's probably some sort of secondary check, which might involve uploading an ID or something else, which has privacy issues and all sorts of stuff. But the issue here was that everyone seemed to think that OnlyFans had set the thing at 23, when they'd really set it at 20.
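To make the buffer logic concrete, here is a minimal sketch of the flow Mike describes, assuming a hypothetical estimator; estimate_age and verify_id are illustrative stand-ins, not Yoti's actual API.

```python
# A minimal sketch of the buffered-threshold flow described above, assuming a
# hypothetical estimator. estimate_age() and verify_id() are illustrative
# stand-ins, not Yoti's actual API.

CHALLENGE_AGE = 23  # buffer above the legal age of 18, so estimation error fails safe


def estimate_age(selfie: bytes) -> float:
    """Stand-in for a face-based age estimate with a few years of error."""
    return 21.0  # pretend the model returned this


def verify_id(user_id: str) -> bool:
    """Stand-in for the secondary, more privacy-invasive document check."""
    return True  # pretend the uploaded ID checked out


def gate(user_id: str, selfie: bytes) -> str:
    estimated = estimate_age(selfie)
    if estimated >= CHALLENGE_AGE:
        # Far enough above 18 that a few years of model error still clears 18.
        return "allow"
    # Baby-faced adults land here: escalate to an ID check instead of denying.
    return "allow" if verify_id(user_id) else "deny"


print(gate("user-123", b"selfie-bytes"))  # -> "allow", via the ID fallback
```

The whole dispute, in these terms, was over the value of CHALLENGE_AGE: 23 as everyone believed, or 20 as actually configured.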

Alex Feerst:

And so all the baby-faced 22-year-olds were suffering.

Mike Masnick:

Right,

Alex Feerst:

they were the collateral damage here. Um, yeah. And as I said, the best thing about this would be if it causes zoomers to start drinking and smoking more in order to see porn; they can boomerize their lives and start aging the way the rest of us have, for once.

Mike Masnick:

Yeah, you know, it's interesting, because there's lots of talk all over the globe right now about age assurance, age verification, age estimation, and all these things. And it's interesting that we're starting to learn about the challenges in terms of how this is all going to work out. So it'll be interesting to see what comes out of this particular investigation.

Alex Feerst:

Yeah, and, if I may, DTSP put out a report on age verification practices, a pretty carefully written report about each of the different approaches and the trade-offs and how to think about them. And, you know, it was not written by me, so it's very good. But it's in the family of what I would call, everything is bad. There are no good candidates; there are only bad candidates. And they all, as you say, have the privacy problems, the accuracy problems, the effects on people who are clearly old enough but are still now having to submit an ID to do something that's perfectly lawful. Um, so yeah, gosh, I was in Arkansas recently, where one of these laws was passed. So like, these are now,

Mike Masnick:

Yeah.

Alex Feerst:

they're out there in the world. Um,

Mike Masnick:

Yeah. Yeah. There are probably seven or eight states, I think, that already have them, and a bunch of others that have bills. Australia just announced that they're going to do a pilot program, even though last summer they came out with a report that basically said none of these tools actually work very well, so we're not going to push forward with them. But now they're doing a pilot anyway.

Alex Feerst:

Yeah. And, interestingly, I'm sort of ambivalent at best about the moral panic over children getting porn-brained. I saw a tweet recently from somebody who claimed to be a young person, saying, essentially, I wish you all hadn't built this technology that porn-brained us, because it's prevented me from having a healthy relationship in my twenties. So whether that's false consciousness or a generation speaking their truth, it's far, far beyond me to say what the right thing is. But I do wonder, again, about the historical baseline: back in the era where people, I guess, stole porn from a corner store or stole it from their parents or something, I imagine the volume on the internet now must be way, way, way higher.

Mike Masnick:

Yes.

Alex Feerst:

Um, but returning to the sort of Victorian hypothesis of protecting the children, in an airtight way, from seeing adult things: this also strikes me as yet another way that, if you had very, very tight enforcement, it creates an atmosphere that's way more stultifying than what we had before the internet.

Mike Masnick:

Yeah. Well, let us move on to the next story. For this one, we're going to bounce back across the Atlantic, to Canada. The CBC had a story about Canadian banks needing to deal with abusive e-transfers, and this struck me as a really interesting story, demonstrating where trust and safety problems can show up in unexpected places. The underlying story was, effectively: an acrimonious divorce happens, and the former spouses are no longer speaking and have blocked each other on social media and all this stuff. But someone who still knows the bank account of the other will send them a dollar or $5 along with a nasty message, and there's very little that the recipient can do. So there's this harassment angle. And apparently this happens more often than you would expect; according to the article, there are a bunch of examples of people getting harassed by exes or stalkers or whoever, who somehow had access to their bank account information.

Alex Feerst:

I feel like there's a University of Chicago economics paper to be written here on how much people are willing to pay to say, fuck you, to their ex. Because essentially this is a revealed-preference natural experiment

Mike Masnick:

Yeah.

Alex Feerst:

in the price of being able to get the last word.

Mike Masnick:

Yes. Yeah. You blocked me on Facebook; I will pay five dollars to tell you, fuck you, through your bank.

Alex Feerst:

Right. Right. Well, it's basically, functionally, the same as Facebook letting you pay $5 to get yourself unblocked so you can say what you want to say. So yeah, I sort of love these. I mean, obviously it's jarring and harassing and not good. The banks should know, maybe not should know, but any time you make a messaging app, you're inviting yourself into a world of pain, and calling it something innocuous, like a memo or whatever they call it, does not exempt you from the world of pain that creating DMs entails. I do think this is a great example of something that is both a trust and safety problem, slash circumvention of the intended use, and also very core to how users think about the world, or at least the users we deal with, and how tech thinks about the world: because you are inverting, or at least changing, the relationship between the intended function and the incidental payload.

Mike Masnick:

Right.

Alex Feerst:

So the point is to pay somebody, and the incidental thing is a memo; but the real point is now to say something, and the fee is just a nominal, whatever thing. The tracking pixel, I feel like, is a similar example of inverting nominal function to do something interesting. Another one was part of Bob Lee's great insight when he was at Square: using the refund rails for payments, because there was a very low statutory fee on refunds. And so part of the way that Cash App worked, as I understand it, I saw him give a really great talk on it once, was, the first insight was: if you use the normal payment rails, you're going to be paying percentages, as the

Mike Masnick:

Uh huh.

Alex Feerst:

networks require you to, over the bank networks. But if you do things as a refund, even if it's not really a refund, you're basically getting this statutory maximum rate that was in pennies, that had been set, I can't remember by which law, it might've been Dodd-Frank. But essentially, that is a product insight that launched a giant product. And it's not really different from the logic of, let's use the bank to tell off my ex.
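As a back-of-the-envelope illustration of the rails arbitrage Alex describes: the rates below are assumed for illustration, not actual card-network pricing or the real statutory cap.

```python
# Back-of-the-envelope version of the rails arbitrage described above. The
# rates are illustrative assumptions, not actual card-network or statutory
# figures.

def percentage_rail_fee(amount: float) -> float:
    """Card-style pricing: a percentage of the amount plus a fixed fee."""
    return amount * 0.029 + 0.30  # assumed 2.9% + $0.30


def refund_rail_fee(amount: float) -> float:
    """Flat, statutorily capped fee in pennies, regardless of amount."""
    return 0.05  # assumed nickel cap


for amount in (10, 100, 1_000):
    print(f"${amount}: card rails ${percentage_rail_fee(amount):.2f}, "
          f"refund rails ${refund_rail_fee(amount):.2f}")
# The gap widens as the amount grows, which is the whole product insight.
```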

Mike Masnick:

Right. Right. Yeah. I mean, it's interesting how you see this sort of weird emergent behavior, which is: people are going to communicate. The other example that I thought of, which is, again, not the same thing but in the same vein, was: there were schools that were, for fairly obvious reasons, blocking access to social media on school computers, but then giving kids access to Google Docs, because that's where they were doing all of their work. And kids immediately, effectively, recreated social media through Google Docs comments, because they could comment on each other's work, and that was a way of recreating the social media system. And so suddenly there are these weird trust and safety, content moderation questions that come up in places that you don't expect. And what it seems like in this particular story is that the banks were completely taken by surprise, and were just like, oh shit, what do we do? And at best are looking into, like, curse word filters or something.
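For a sense of why a curse-word filter is at best a partial fix, here is a minimal sketch of the kind of memo-field blocklist the banks are reportedly considering; the blocklist and examples are illustrative, not any bank's actual system.

```python
# A minimal sketch of a memo-field curse-word filter of the kind the article
# says banks are looking into. The blocklist and test strings are
# illustrative only.

BLOCKLIST = {"fuck", "bitch"}


def memo_allowed(memo: str) -> bool:
    """Reject a transfer memo if any normalized word is on the blocklist."""
    words = (w.strip(".,!?") for w in memo.lower().split())
    return not any(w in BLOCKLIST for w in words)


assert memo_allowed("rent for May")
assert not memo_allowed("fuck you!")
assert memo_allowed("f u c k y o u")  # trivial evasion the filter misses
```

As with the Google Docs example, the last case shows how easily determined users route around a static list.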

Alex Feerst:

Yeah, it's actually a really rich little topic, because, going back to the OnlyFans dust-up of a couple of years back, the banks are, number one, in this highly regulated space, and they're used to acting like grownups and dealing with people acting like grownups. Second, they're not really about expression; they're about payments. So they're used to anti-fraud, and they're used to social engineering and all sorts of uses of expression for a very specific fraud purpose. But they're not used to expression problems, because it's just not their beat. Their beat is people stealing money. And here, nobody's stealing any money; they're just sort of fucking with you. It's just not the normal pattern. And I feel like this is a little bit reminiscent, in a very different way, of when MasterCard leaned on OnlyFans a while back. That was a case of the payment rails becoming indirectly aware of expression upstream, but this is one where the expression has sort of come home to roost. It's like, well, if you're not going to let us use OnlyFans, we'll just sext at each other through sending payments. And, you know, I guess this is why some people think the perfect super app of the future will let us do all this stuff, because maybe these payments-versus-expression buckets that we currently have in our heads are going to be obsolete soon.

Mike Masnick:

Yeah. Interesting. All right, next story. For this one we head over to the Middle East, I guess; we're bouncing around the globe this time. There was a really good and interesting story in Forbes by Emily Baker-White, who does some really amazing reporting on all things trust and safety, about how TikTok and Meta have not labeled state media covering the war in Israel and Gaza. This, I thought, was a really interesting and thoughtful read, but it raises a bunch of questions and trade-offs around, what the hell is state propaganda, and how do you label it? And, you know, there was a whole dust-up last year when Elon and Twitter labeled NPR and the BBC as state media, and everyone was up in arms. And now we have this situation in the Middle East where, on Meta and TikTok, there are a bunch of news organizations that are at least partially funded by various Arab states and are not labeled as such. And so there are questions about how you handle that.

Alex Feerst:

Totally. And this is one of those things where I think, in the early days of trust and safety, we knew immediately we were out of our depth, because this is the interface of State Department stuff, intelligence agencies, state diplomacy, at least in my sense. If your working test for what is state propaganda is, it should be funded partially or wholly by the state, and the state should have some serious level of control over what gets published, or at least the ability to threaten or manipulate people when they need to, then the other side of the joke that Elon was making is that if you ask some publication, of course they'll say, we're not state-run, we have editorial independence; they sort of, by fiat, will just be like, well, we're independent. And then it falls on you as a platform to be like, well, actually, we have a mole, and, you know. So in order to answer the question with any level of rigor and integrity, you essentially needed intelligence-style resources. Otherwise you were just having to take people at their word. So there's a big conundrum there. It's an interesting one, and I don't know that labeling NPR state media was right or wrong; it was definitely provocative, and it weirdly mirrors some of the recent NPR stuff in the news, right? That, like,

Mike Masnick:

Yeah, yeah.

Alex Feerst:

the editor who was more on the right sort of resigned, or was fired. And, yeah, resigned. It also, I mean, one of the things that you and I talked about another time was that, to me, the more complex an issue is, and this may be one of the most complicated issues, where none of us knows very much, I definitely know how little I know about Middle Eastern politics, but it just seems like virtually everything is potentially some regurgitated form of state propaganda, because the amount of independent reporting, as opposed to motivated reporting, or reporting that is funded by one side or another, seems to be very little. So I think a lot of us are reciting state propaganda, probably without knowing it, all the time.

Mike Masnick:

Right. And in that realm, does adding a state-funded media label actually do anything, if all of the information is, in some way, effectively state-funded?

Alex Feerst:

Right. And I think also because the flow really is downstream from the state. Even if you have a fastidious person who looks at the little badge and is like, okay, good to know, state-funded; when they talk about it, and they repost it and share it and do all this other stuff, that metadata doesn't really make it downstream. I think this is the classic problem: all of these ostensibly useful labeling metadata projects have yet to figure out how to address the fact that most stuff is half-remembered downstream, or shared in a format where it's not going to carry the badge. I feel like some of the badge people would like the badge to be like the blockchain, where it's, thou shalt only ever badge; you cannot repost this

Mike Masnick:

Yes.

Alex Feerst:

without a dunce cap and a badge. But it doesn't seem like it's going to work that way.

Mike Masnick:

Yeah. Yeah. No, it's an interesting question, interesting to think about, an interesting challenge. All right, moving on. We also saw, and this was interesting, that Google released some details about how they're dealing with bad apps. One of the things that we've noticed these days is how important the app stores have become as a sort of trust and safety layer, in terms of blocking bad apps or preventing them from reaching phones. And obviously, it's a part of the TikTok ban bill, which is based on the app stores. And it was interesting to see just how much they're actually blocking. Google said that they prevented 2.28 million policy-violating apps from being published in the store, and they went after a bunch of others that had changed things or misused permissions in the store. It's just interesting to think about how big a gatekeeper they've become. I don't think Apple has released similar numbers for 2023, but I did find their numbers for 2022, and it was similar: they said they had rejected 1.7 million apps in the Apple App Store. So, what's your reaction?

Alex Feerst:

No, no. I mean, I think it's good that there's more general awareness of how the App Store and the Play Store are upstream bottlenecks in a huge way. I feel like with Tumblr, I don't know how verified this is, but I feel like with the Tumblr stuff, it was really about the App Store threatening to kick Tumblr off because of adult content. And so once you have a mental model of all the different upstream dependencies, where you have to please another company that you largely interact with through a help desk, unless, you know,

Mike Masnick:

Right.

Alex Feerst:

unless you're, like, really important. It is an odd way to structure the world. And, you know, as I told you, nowadays I practice in front of the court of Apple more than anything, in terms of trying to persuade somebody at the App Store to do a thing or not do a thing. And it's their forum; they can do what they want. But, you know, the next generation perhaps will be admitted to the Apple bar and have special privileges there. It is that serious. And I have no antitrust view on it, but I definitely think the state of the world is: you could sideload stuff, or you could load stuff off a link, but realistically that is a drop in the bucket. I feel like I've talked to TikTok people who are like, oh yeah, people will just sideload TikTok, it'll be fine.

Mike Masnick:

Yeah. I don't think so.

Alex Feerst:

Yeah. And so I guess this also goes to the power of defaults. And to go back to the article, I think the security teams at the App Store and the Play Store are among the various undersung heroes of trust and safety. They're probably among the more undersung, because the controversies are one layer removed from pure expression.

Mike Masnick:

Right.

Alex Feerst:

The types of disputes and controversies also potentially just don't have the same texture as, I wrote a politically important thing and you took it down, as opposed to, I made an app that's sort of like another app and you took it down. But one last thing: it does get at the classic idea that, in olden times, people said the whole point of the First Amendment is that you need a cushion around the conduct. Because if everybody is afraid that their conduct might be close enough to violating, according to some faceless arbiter, then you get the classic chilling effect of everybody self-policing because of the unknown risk of cutting it too close to the line. That's why you need the proverbial cushion. Here, nobody knows where the cushion is. And that's not to criticize; the app stores do process, and they're doing a great job, and they have a billion of these things to process in the Play Store. But it is a classic place of non-cushioned risk management, where people are going to think twice about all sorts of expression. And with adult content in particular, it's a huge area, and so there are a lot of below-the-surface or upstream decisions that cause things to not reach consumers. Yeah,

Mike Masnick:

Yeah, and it's a pressure point on the political and regulatory side as well, which, you know, we see with the TikTok setup. Um, so let's move on to our final story. We'll move from the Apple bar to the Facebook bar, I guess. Uh, and just very quickly: there was a story this week about the Meta Oversight Board potentially preparing to lay off a bunch of workers. The Oversight Board has really been around for three-plus years at this point, and there are questions about whether this is, you know, the end. Is it a failed project? Did it reach its potential? Whatever.

Alex Feerst:

Yeah, yeah. And, you know, I feel like, again, a lot of us wanted them to succeed, or at least wanted to give them a fair shot. Um, and like I said, I think I know lots of folks who pulled their punches in some of the early criticism, because they wanted to let the thing have a chance to live and show what it could do. And, you know, a lot of folks have worked hard at it, and so I'm sure nobody celebrates layoffs or whatever economic retooling they have to do. But it does sharpen the question of, like, what is this for? To me, again, there are no circuit splits in social media. And I noticed that they were initially criticized for not doing enough cases, and then they took on a whole bunch of cases last year, so they really upticked their productivity. But they still only do, like, 50 cases or something.

Mike Masnick:

Yeah, I don't remember the exact number, but it is still a low number, especially given the size, obviously, of the properties overseen by the board.

Alex Feerst:

And so, as to the idea that we are settling important issues of, quote-unquote, internal Facebook common law or whatever: sure, maybe, but it's unclear whether those cases are chosen because of political importance or because of relevance to the volume of complaints or other things. To me, it's just structurally dissimilar enough to how the US Supreme Court works that I never fully understood exactly what it was going to do when it worked. Um, also, you know, it was started, I think, in 2018, when calling something the Supreme Court made it sound more legitimate. In 2024, saying "we should get some of that legitimacy that the Supreme Court has" seems like, uh, you know, a relic of better times. But yeah.

Mike Masnick:

I always got the sense, too, that the goal of the Oversight Board was not to just handle Meta properties; they wanted to bring other companies on board. And so I think some of this is a recognition that they were unsuccessful in convincing anyone else to use them.

Alex Feerst:

The aspiration of being a sort of private speech arbitration system for lots of different low-level disputes, like, I guess, you know, it was worthy. And I think there were other folks who probably thought about getting into it, like Consumer Reports or, um... But this is definitely a thing I thought about writing on a while ago: there's, like, adjudication in everyday life, which is, how often does a thing happen where a third person has to be like, "you're right"? And I think in pre-internet or pre-social speech land, sure, things would escalate to becoming a real defamation case occasionally, but the reality was, you get into a fight with somebody and you leave the bar, or you don't talk to them anymore, or whatever. And the way that social media brings more disputes over expression to a head, it just seems like the amount of adjudication of these little disputes that we now see ourselves as needing must be, I would speculate, exponentially more than it used to be. There's just more demand for adjudication of disputes, and it's got to come from somewhere; when there's demand, it's like, okay, well, somebody is going to supply it. And I think part of doing things industrially at scale is having an unsentimental, factual view of how expensive an adjudication is. In federal court, an adjudication is, like, many hundreds of thousands of dollars; that's why they can't do that many of them. And so with this one, it's like, what is the per-unit cost of each adjudication that would make this whole thing possible to run? It seems like it potentially has to be really, really low. Even if it was, like, 12 cents per decision, at that volume it would still cost a lot. And so to me, this is where the Facebook Supreme Court gets stuck: trying to have a structure that mimics the courts in some ways, but not addressing this problem, which is that we have some people asking for a private speech arbitration system at a scale that dwarfs all the courts in the world put together.
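To make that back-of-envelope point concrete, here is a minimal sketch. Only the hypothetical 12 cents per decision and the "many hundreds of thousands of dollars" per federal case come from the conversation; the daily decision volume is purely an assumed, illustrative placeholder, not a figure from the episode.

```python
# Back-of-envelope: per-decision adjudication cost at social media scale.
# FEDERAL_COURT_COST and DAILY_DECISIONS are assumed, illustrative numbers;
# only the 12-cent per-decision figure comes from the discussion above.

FEDERAL_COURT_COST = 300_000     # USD per federal adjudication (rough)
PER_DECISION_COST = 0.12         # USD, the hypothetical 12 cents
DAILY_DECISIONS = 10_000_000     # assumed moderation decisions per day

annual_decisions = DAILY_DECISIONS * 365
annual_cost = annual_decisions * PER_DECISION_COST

print(f"Decisions per year: {annual_decisions:,}")
print(f"Cost at $0.12 each: ${annual_cost:,.0f}")
print(f"Federal-court equivalents: {annual_cost / FEDERAL_COURT_COST:,.0f}")
```

Under these assumptions, that's 3.65 billion decisions and roughly $438 million a year: even at a per-decision cost several orders of magnitude below anything a court could manage, the annual bill lands in the hundreds of millions, which is the structural problem being described.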

Mike Masnick:

Right.

Alex Feerst:

Um, and so endowing it was always going to be a challenge.

Mike Masnick:

Yeah. I mean, I still think it's an interesting experiment. It does still exist, and I think it will continue to exist, at least for some time. And certainly some of the decisions were interesting, because they were fairly thoughtful and gave some insight into ways to think about these things. So I think that part of the experiment has been useful to see.

Alex Feerst:

Yeah. And I always felt like, as, you know, a humble country tech lawyer, it was interesting to see the really fancy people of substance show up in trust and safety land. Because for me it was like, whoa, these are really great First Amendment scholars, people with moral authority who are very thoughtful about all the non-tech issues. And then I always felt like, if they were lacking anything, it was somebody to be like, "So here's how blocking works. And here's how muting works. And when it's three dots, it's not two dots, but sometimes there's a popover." And asking somebody who's spent 30 years being a First Amendment scholar to engage seriously with whether the popover is too much speech felt like this weird collision of the very trivial and the very grave and august.

Mike Masnick:

Yeah. I mean, I had done a panel with members of the Oversight Board, like a year and a half ago, almost two years ago at this point. And the question I asked, which got this sort of stunned silence, was: why isn't there anyone on the Oversight Board who has any experience in trust and safety? Which seems like an important point. It would be useful to have people there with some more experience in the actual practicalities.

Alex Feerst:

Yeah. And what did they say? I would guess what they would say is, like, well, they probably never thought of it, or, well, those people are implicated because they're, like, in the system already. They're going to

Mike Masnick:

It wasn't even that. It was just kind of like, it's difficult to find the right people who can go from that to this. That, I think, was more or less the answer I got.

Alex Feerst:

Yeah. And that's probably fair. But I do think it gets at a fundamental thing that really isn't about the legal system, but about, I don't want to call it, like, the stylization and structuring of human experience through social media. If you're not a social media person or an internet person, and you're rather, like, a senior professor who knows a lot of stuff about the world or a particular geopolitical problem or the law, it probably seems, I don't know if I'll say trivial, but, like, when somebody is like, "okay, so here's what happened: first he muted her, and then she went and screenshotted it with, like, a burner account, and then that account muted him back, and then he blocked her, but then he also sent a message." Whenever I try to explain the mechanics of trust and safety tooling to, like, a serious intellectual person like that, I always feel like a teenager explaining a fight at my high school

Mike Masnick:

Right.

Alex Feerst:

and then asking them to, like, write a reasoned opinion. But I mean, courts have to do this too now, so it's absolutely worked its way into life. But I think part of the atmospherics of the Oversight Board was this collision of, like, the very high, in terms of the prestige and the substantive knowledge and seriousness of the people on it, with the bottom-up aspects of how social media disputes happen. Now, many of these were not trivial; they were very serious disputes over important social issues. But I just think that the mechanics of how social media works, and of how trust and safety tooling works, was a very weird channel for this whole project to orient around.

Mike Masnick:

Yeah, yeah, yeah. Makes sense.

Alex Feerst:

But, great, you know, someday we'll have to have, like, AP Muting and Blocking in high schools so that, like, people

Mike Masnick:

Oh my gosh. Yeah, that'd be a fun curriculum to write. All right. Well, uh, we didn't do so well at the lightning round, but we are going to wrap up this particular episode of Ctrl-Alt-Speech. Alex, thank you so much for stepping in and providing your expert analysis and opinions on these stories.

Alex Feerst:

Thank you so much for having me. This is a blast and an honor to get to be here.

Mike Masnick:

Excellent. And, uh, we'll be back next week, and Ben will be back to guide this ship that I have, uh, steered into the shore here. All right. Thanks everyone.

Alex Feerst:

Take care.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
