Ctrl-Alt-Speech

Over to EU, Elon

July 12, 2024 · Season 1, Episode 18
Ben Whitelaw & Domonique Rai-Varming

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Domonique Rai-Varming, Senior Director, Trust & Safety at Trustpilot. Together they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.


Ben Whitelaw:

So Domonique, I'm guessing you remember the Ask Jeeves search engine, right? And I'm guessing you remember what the famous Jeeves used to say, which I pose to you today on this episode of Ctrl-Alt-Speech, which is: have a question? Ask.

Domonique Rai-Varming:

Yes, I am old enough to know and remember Jeeves, Ben, but my question to you is: how many goals are we scoring on Sunday?

Ben Whitelaw:

A fellow England fan,

Domonique Rai-Varming:

Yes.

Ben Whitelaw:

Football's coming home. That's the other question.

Domonique Rai-Varming:

Are we going to get a bank holiday if it comes home?

Ben Whitelaw:

Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm joined this week by a fellow football slash soccer fan, Domonique Rai-Varming, who is a change of co-host on Ctrl-Alt-Speech, but a very welcome one. Domonique, you're very welcome to the podcast. Thanks for joining. How are you?

Domonique Rai-Varming:

Thank you for having me. I'm well, thank you. I'm very well.

Ben Whitelaw:

Very exciting to have you here. So just by way of introduction, Domonique is Senior Director of Trust and Safety at Trustpilot. Domonique reached out when we issued a call for co-hosts and guests a few weeks back, which we were delighted by. She's exactly the kind of person we wanted on the podcast, and she brings a wealth of experience. Domonique, talk us through your role at Trustpilot, what Trustpilot does for those who don't know, and why you decided to put your hand up to be a co-host of Ctrl-Alt-Speech.

Domonique Rai-Varming:

I can take that last question first and put it down to a moment of madness. Here we are.

Ben Whitelaw:

Not regretting it now, are you?

Domonique Rai-Varming:

We'll see. But thank you for having me, to start with. Let me tell you what Trustpilot is and what we do. So Trustpilot is an open and independent reviews platform, which is free to use and open to everybody, the key requirement being that reviews must be based on somebody's genuine experience with a business. And so our proposition is that we're here to help businesses build trust, grow, and improve their services by listening to their customers. So businesses that use our platform get a deeper understanding of what their customers are saying through positive and negative reviews. And consumers can write reviews of any business they want, whether they're on our platform or not. As I said before, reviews have to be based on a genuine experience, and consumers have the benefit of learning from their peers and other consumers about where to buy from, where to have an experience, that sort of thing. Our mission is to be a universal symbol of trust for businesses and consumers. And so we see our platform as a place to increase trust between businesses and consumers.

Ben Whitelaw:

I'm a big fan of Trustpilot, I have to say, and I'm not saying this just because you're here,

Domonique Rai-Varming:

Okay.

Ben Whitelaw:

I go to Trustpilot whenever I'm kind of having a service brought in, when I'm ordering stuff. It's a big bonus if the service has a high Trustpilot score, which is great. And you have a really interesting background that brought you to the trust and safety world, right? Do you want to talk a little bit about that?

Domonique Rai-Varming:

Yeah, definitely. I'd be really glad to. So I'm actually a lawyer by trade. What I find, actually, meeting more and more people within the T&S space, is that there's no one bio to fit the trust and safety mold. So yeah, I'm a litigator by background. I was working in City law firms for 10 years, and then actually had a change of lifestyle in 2019. Post-Brexit, I moved to Copenhagen and stumbled upon our old Chief Trust Officer at Trustpilot, and I started at Trustpilot as a legal counsel, so managing all of Trustpilot's litigation work, which completely suited my background. And actually what I learned pretty quickly was that content was at the core of all of the disputes that we face at Trustpilot, and it continues to be. Fast forward a number of years now, I think almost five, and as you say, Ben, I'm the Senior Director of Trust and Safety. And what does that mean? So our trust and safety flywheel and structure is made up of a number of multidisciplinary teams. We have fraud analysts, fraud investigators, a policy and content creation team, a disputes team, a content moderation team, and a public affairs and corporate comms team. So the way that works is: public affairs are out trying to influence and shape the direction of legislation, which then gets rolled into our product and our policy. So the policy should capture the regulation, and then that's moderated through our content moderation arm. So it's a really neat flywheel, a very intentional and neat flywheel,

Ben Whitelaw:

Yeah, really interesting to hear a bit about how you're structured internally and organized. You mentioned, before we started today's call, this reviews coalition as well, which Mike and I have talked about actually on a previous episode, where Trustpilot and a few other review sites have got together to share learnings. Do you want to talk a little bit about that and just how that came about and what it involves?

Domonique Rai-Varming:

Yeah, definitely. I mean, I think what we've all noticed over the last couple of years is that this landscape that we're all operating in is evolving, and it's evolving really quickly, whether that be through consultations or legislation. There's a lot happening in different geos as well that's not always linked. And so, as platforms, the coalition that we've co-founded along with Amazon, Booking.com, Tripadvisor, Expedia and Glassdoor is the Coalition for Trusted Reviews. We came together through identifying commonalities, and we've been working together for the last 18 months or so now. There are a number of other coalitions and industry groups, but our focus is trusted reviews. And what are we doing? We've come together to try and establish best practices for the industry. And, you know, a difficulty that regulators and enforcement bodies are trying to grapple with is: what is a fake review? There's no one definition of it. And we're, you know, answering different consultations in different ways because the question is different, but actually there's lots of power in us coming together as an industry and working together to set that definition. So we're trying to establish a working definition and best practices; we've done this. We're also trying to work together more in the enforcement space. So, you know, a fake review on Trustpilot is bad not just for Trustpilot, but for the whole reviews industry. And what we weren't doing before was sharing information between platforms. And so that's something else that we're trying to do, because our biggest threat is review brokers, review sellers, and they don't really operate on just one of our platforms. They tend to operate on all of our platforms. And so actually sharing insights is another arm of what we're doing. And the final arm is advocacy. So the plan is that, hopefully in time, we can advocate together with one voice.

Ben Whitelaw:

Brilliant. That's great to hear. And, you know, there are many similar alliances that have sprung up over the last few years where platforms of a similar ilk have got together to share learnings, and it's great to see the reviews group and platforms do the same. We're really lucky to have you here, Domonique. I'm super excited to bring some of your experience to bear on some of the stories that have been coming out this week. We have a really interesting selection, and we're going to kick off with a breaking news story by many accounts, a story that, just as I was hitting send on Everything in Moderation today, was announced by the EU. And that is the fact that the European Commission has charged Elon Musk's X, slash Twitter, platform with a number of breaches of the DSA. So I was prepared, before we jumped on the call, to talk a bit about how the EU has had a busy week, right? It's sought information out from Amazon about some of its recommender systems. It's announced the 25th VLOP, a porn site called XNXX, very catchy name. So it's already had a very busy week. And then all of a sudden it puts out this press release saying that there are several breaches of the DSA after a seven-month investigation, and X has basically become the first platform to fall foul of the DSA, which is a big story, something Mike and I have talked a lot about in previous weeks: when was the EU going to bring proceedings against a platform? There are a few parts of it that we'll go into in a second, and I'll break down a bit of that. But what was your reaction as soon as you heard about this story, Domonique? What came to mind?

Domonique Rai-Varming:

What came to mind? Not surprise, I think, is what I would say. Not surprise.

Ben Whitelaw:

Basically, it always felt like Twitter was going to be the first, right? You know, back in December last year, Thierry Breton had gone after Elon Musk and made this big fanfare about the fact that Twitter hadn't complied with the DSA. And so it made a lot of sense that it was the first platform for the investigation to be announced. And yeah, I'm not surprised either. There's been a kind of long-standing beef between the European Commission and Twitter, as we'll go into. There are a few things that are worth flagging about the EU Commission's announcement today. The breaches are basically in three parts. The first part is that X slash Twitter has been found to breach the DSA's clause on dark patterns, and here the EU has found that the verification of Twitter users is basically misleading, since Twitter changed the way that verification worked and the blue check stopped being something for dignitaries and politicians and became something that anybody could buy. The EU believes that that is a breach of the dark patterns piece of the legislation. It's also fallen foul of two other parts of the DSA: ad transparency, and also data access for researchers, which is something that has been a concern for a long time now and that Twitter has rolled back on in recent years. And we don't have much more information than that. There's a really interesting quote from Thierry Breton, who is very punchy about his commission's new investigation. He's very proud, I think, by the sounds of it, to bring this to bear. And this is one of four investigations that the EU has in process so far. We've still got three outstanding, on AliExpress, Meta and also TikTok, for various different potential breaches. So we're in DSA enforcement now, Domonique. You know, what's your feeling on the DSA? How has Trustpilot viewed the legislation since it became law? Talk to me about your dealings with it.

Domonique Rai-Varming:

Yeah, thanks Ben. I think the starting point for me, and for us as a platform, is: let's think about what this legislation is there to address. That can sometimes get lost in the headlines and the enforcement, but I don't think anybody can argue against the DSA. We all agree that protecting everybody online has to be priority number one. Nobody can argue that. And it's really important to keep that in mind and to keep that focus, rather than focusing on perhaps what's going wrong with the legislation or what might be wrong with it. My thinking is: this is a punchy piece of legislation by the EU. It's their flagship, let's not forget that. They're the first to sort of dabble into trying, well, it's not really dabbling, it's a bit more than dabbling, but they're the first to take decisive action in dealing with this sort of content. You know, the EU have been really ambitious with the DSA, and this sort of outcome on Twitter, on X, sorry, is definitely a key moment for this piece of legislation. Now this has happened, my sense is that we've all just sort of become compliant, or we're working towards becoming compliant, with the DSA. The legislation really needs time to bed in a little bit, and to see where it may or may not bite. I can say that from a position of relative comfort for now, as Trustpilot's not a VLOP yet, and so we're almost a passenger for now. But yeah, it's a really interesting development.

Ben Whitelaw:

Yeah. So talk about the kind of work you've had to do to ensure that Trustpilot complies with the DSA, even at a kind of lower level,

Domonique Rai-Varming:

Mm hmm.

Ben Whitelaw:

obviously it was very much touted from several years ago, really. You knew it was coming. There was a deadline set of February for everyone. Talk us through what you and your team set about doing to make sure that you're compliant.

Domonique Rai-Varming:

Yeah. We've been involved with the DSA for a very long time. As you said, Ben, they've been consulting on it for many years. It's been through many rounds of drafts, and so we were involved in the consultations at an early stage. And from our perspective, it was really important for us to explain and to communicate to the regulator how a one-size-fits-all approach doesn't work. As I said before, we're not a VLOP, and there needs to be nuance to these sorts of legislations, because there are smaller platforms, there are marketplaces which are different to social media. You know, we all have a different role to play, and actually the harms that each platform poses, or could be faced with, are different. There's not a one-size-fits-all approach. And so of course we've known it's been coming for a long time. We've been involved in the consultations, and then once we knew the shape of it, the starting point for us was building the story to the business around what this is about and why it's important. And as I said before, that's something that can get lost sometimes, particularly in a world where there's so much legislation coming down the line; you want to avoid that feeling of creating a compliance culture. And so the storytelling piece has been important for us, right? We are part of this online safety ecosystem. The DSA is there to protect everybody, in particular children and vulnerable users, from egregious, horrible, harmful content. That's really important. It was really important to tell that story and to explain to our colleagues across the business the really important role that they all play, that we all play, in enforcing that.

Ben Whitelaw:

Yeah. You forget in some ways that when something like the DSA comes along, not everybody is so in the weeds of it as maybe, you know, we are. And so this idea of going around the business and talking people through why it's important and what it might mean for them is super interesting. What did that mean in practice? Because I'm sure there are Ctrl-Alt-Speech listeners who will be thinking about compliance with the DSA and other legislation like it. What did that mean in practice? Like, what did you do? Were you knocking on people's doors, setting up meetings, offering cookies as bribes?

Domonique Rai-Varming:

I'd say an element of all of the above, Ben. So of course the storytelling piece was really important to sow the seeds. And then what did that mean? The way we operate is that all of this is run through our trust and safety team. And so we assembled a core team who would be running the project, a core team made up of our policy and regulatory lawyers. It was lawyers who actually ran the implementation, predominantly, once it had got past draft legislation stage. And it was literally a case of going line by line through what would apply and what wouldn't apply, mapping that against our existing product infrastructure. So: what do we do today? Identifying really quickly what may need to change. So big product changes, comms changes for our content moderators; you know, the reasoning for our decision making had to be tweaked to account for the requirements of the DSA. There was obviously a lot more to it than that. And it was a project that spanned about 18 months, actually, and we only recently got our API key, because there were many delays in handing those out. So yeah, it's an ongoing project, and there's an ongoing maintenance associated with it. So it's been a long journey and a continuing journey.
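[For readers wondering what an article-by-article mapping exercise like the one Domonique describes might look like in practice, here is a minimal sketch in Python. The article picks, fields and product areas are hypothetical illustrations, not Trustpilot's actual compliance register, though Articles 16, 17 and 40 are real DSA provisions.]

```python
# Hypothetical sketch of a DSA obligation-mapping register: each article
# is assessed for applicability and mapped to the product changes needed.
dsa_mapping = [
    {"article": "Art. 16 notice and action", "applies": True,
     "touches": ["report flow"], "change": "add structured notice form"},
    {"article": "Art. 17 statement of reasons", "applies": True,
     "touches": ["moderator comms"], "change": "rework decision emails"},
    {"article": "Art. 40 researcher data access", "applies": False,
     "touches": [], "change": None},  # VLOP-only obligation
]

# The work queue is whatever applies and requires a change.
todo = [m for m in dsa_mapping if m["applies"] and m["change"]]
for item in todo:
    print(item["article"], "->", item["change"])
```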

Ben Whitelaw:

Yeah, this kind of compliance piece never really ends. Really interesting to hear about that. I mean, from Twitter's perspective, from X's perspective, and don't worry, I also never use the new title of Twitter, there are potentially massive fines at stake. I think that's what's really interesting, right? The DSA could bring about a 6 percent fine on global turnover. So although Twitter's turnover last year was 20-odd percent down on the year before, by virtue of the actions of its new owner and advertisers' reaction to those, you know, it still made $3.2 billion last year. So 6 percent of that is nearly $200 million. So it's a big fine that's potentially at stake. Do you think these kinds of fines are sufficiently prohibitive to make companies take safety seriously? Do you think it gets them in the right frame of mind to address some of the issues that would have been coming up in this investigation?
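[A back-of-the-envelope check on Ben's figures: the DSA caps fines at 6 percent of a provider's worldwide annual turnover. This sketch takes the roughly $3.2 billion turnover Ben cites, not an official figure.]

```python
# DSA fine ceiling: up to 6% of worldwide annual turnover.
def max_dsa_fine(global_turnover: float, rate: float = 0.06) -> float:
    """Upper bound of a DSA fine for a given worldwide annual turnover."""
    return global_turnover * rate

# ~$3.2bn turnover gives a ceiling of $192m, just under $200m.
print(f"${max_dsa_fine(3.2e9) / 1e6:.0f}m")
```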

Domonique Rai-Varming:

I think actually the level of fines, and the way that these new pieces of legislation have been structured, sort of since GDPR to where we are now... I don't imagine that this way of enforcing or regulating is going to change. I mean, it certainly sets a tone, doesn't it? Is it the right way necessarily? I don't know. I mean, going back to the purpose of the legislation, it really feels to me that the purpose of the legislation and what it's there to do should be at the fore of driving compliance. But I appreciate we live in a capitalist world where, you know, money has lots of power, so I appreciate there's a balance to be struck there.

Ben Whitelaw:

Yeah, indeed. And the next steps, as far as this story is concerned, and it won't be the last investigation that the EU brings, but as far as X and Twitter are concerned: it now has a window of time where it can defend itself against these breaches, and it can propose updates to the product and to its work to try and address some of the concerns that the EU has brought to it. How it does that is going to be really interesting, because, as we know, its trust and safety team has been cut very deeply. We heard from the Australian eSafety Commissioner a few months back that it had been chopped by a third after Elon Musk came in. So the number of people able to do this work is far fewer. It will be really interesting to see how this plays out, and I think everyone will have their popcorn at the ready to see how Musk responds. But thanks very much, Domonique, that's a really interesting take from you. We'll move on now to our next story, which is, again, related to something that Domonique and I care about as two Brits. It's not football, but it is the general election that's taken place recently here in the UK. And Domonique, this is something that you'd seen and been tracking really through the election. You've been keeping an eye on politicians and AI fakes.

Domonique Rai-Varming:

Yes, definitely. So, Ben, this is an article that was in the Guardian. It's about a week or so old now, but the story is this: there were several high-profile British female politicians, Angela Rayner, Penny Mordaunt, Gillian Keegan, to name a few, who'd been targeted by deepfake pornography sites. So websites that were using their images, digitally altering their images to create explicit content, obviously without their consent. Now, this is difficult and challenging and awful in many ways. It's malicious. But it speaks to a broader trend of cyber harassment against public figures, especially women. At the time this article was written, there had been some of these images circulating, and the fear was that perhaps there would be more of this, which could impact the election. I think, luckily, what we've seen now post-election is that this didn't happen en masse as expected. But what it's kept on the agenda, actually, going into this new government, is the need for more governance around deepfakes and harmful content. And we know that this was something that featured quite heavily and significantly in the Labour manifesto. And so what I'm interested in is actually seeing how this comes out in the King's Speech, which I think is coming soon.

Ben Whitelaw:

Yeah. Talk more about the King's Speech for people who don't necessarily know about the kind of weird oddities of the process. It's kind of a big deal, isn't it, in terms of setting the tone for the next five years, right?

Domonique Rai-Varming:

Yes.

Ben Whitelaw:

Yeah.

Domonique Rai-Varming:

I mean,

Ben Whitelaw:

Royal stuff is not my thing either. So, but

Domonique Rai-Varming:

It's not my thing.

Ben Whitelaw:

We know that, we know that. I definitely think you're right about Labour's commitment to online safety. They did make a big deal in their campaign about building on the Online Safety Act, which obviously is the UK's major piece of legislation, which took an awful long time under the previous government to come to pass and was kind of shoehorned into the early part of this year. They've talked about bringing forward provisions as soon as possible; it hasn't really gone into more detail than that. But obviously we know that Ofcom has been doing an awful lot of work behind the scenes to create guidance and guides to, you know, allow people like you, on the platform side, on the intermediary side, to figure out what changes they need to make, as you discussed. So yeah, I think we're probably going to see a further commitment to the Online Safety Act. There are obviously natural concerns from civil society and other organizations about the effect this is going to have on privacy and people's data rights, et cetera. But for now, I think the fact that women weren't as affected as we possibly thought during the election with regards to AI fakes is a good thing. The question I always have, and I'd be interested to get your thoughts on here, is: does it need to be widespread for it to be a problem? You know, the fact that individual MPs and women are being targeted in this way, to what extent can we, should we, really zoom in on these individuals who are causing this harm? It's a challenge, isn't it? You know, individual harm versus widespread harm is something that platforms are having to constantly juggle.

Domonique Rai-Varming:

Absolutely, Ben. Dealing with the individual harms, you know, on the one hand you could see that as a needle in a haystack, but then actually the impact of that harm could be huge. And there's a balance again to be struck between where responsibility should lie and where focus should be. But actually, I think with the technology that we've almost got at our fingertips in many respects, there's an opportunity there to deal with it all, irrespective of the scale. So it's about identifying and finding the right way and the most balanced way forward.

Ben Whitelaw:

Yeah. At Trustpilot, are there ways that you're able to zoom in on individuals who use tools and technology to create the fake reviews? How are you able to discern that, to actually find the perpetrators? Is there a process for that?

Domonique Rai-Varming:

It's a good question, Ben, about how we detect fake reviews, and there are a number of ways in which we do that. Technology of course plays a massive part in that. So for a number of years we've been using machine learning and different tools to identify fake reviews and to look for fake reviews. So we are looking at behavior trends and patterns, and, you know, we have quite high confidence levels in the technology. But we also rely on our people and the people within our community, be that businesses, be that consumers, reviewers; they have the ability to flag reviews to us and to let us know if something looks suspicious or harmful or obscene. So there are mechanisms in place for dealing with these sorts of things through both of those lenses.
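[A minimal sketch of the kind of behavioral scoring Domonique alludes to: combining weak signals with community flags, auto-actioning only high-confidence cases, and escalating the grey zone to humans. The signals, weights and thresholds are invented for illustration and say nothing about Trustpilot's actual system.]

```python
from dataclasses import dataclass

@dataclass
class Review:
    account_age_days: int
    reviews_last_24h: int    # burst activity from one account
    ip_shared_accounts: int  # other accounts seen on the same IP
    text_similarity: float   # 0-1 similarity to known fake templates
    user_flags: int          # community "report this review" count

def suspicion_score(r: Review) -> float:
    """Combine weak behavioral signals into one 0-1 suspicion score."""
    score = 0.0
    if r.account_age_days < 2:
        score += 0.25        # brand-new accounts are higher risk
    if r.reviews_last_24h > 5:
        score += 0.25        # bursts suggest broker activity
    if r.ip_shared_accounts > 3:
        score += 0.2         # many accounts behind one IP
    score += 0.2 * r.text_similarity
    score += min(0.1 * r.user_flags, 0.3)  # community flags add weight
    return min(score, 1.0)

review = Review(account_age_days=1, reviews_last_24h=8,
                ip_shared_accounts=5, text_similarity=0.7, user_flags=2)
s = suspicion_score(review)
# High-confidence cases can be auto-actioned; the grey zone goes to humans.
print("remove" if s > 0.8 else "human review" if s > 0.4 else "publish")
```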

Ben Whitelaw:

Great. This is something that is hopefully an isolated thing, really, that we don't see the pervasiveness of politicians being targeted in this way. But with the US election coming up, I don't know if that's actually realistic. I am sure there'll be more to this, but yeah, thank you for talking through it. And we managed to get through the election segment without declaring whether we were happy or not about the results, so congratulations to us. I'll leave listeners to work out how we voted. Great stuff. So we'll move on now to a series of other stories that Domonique and I have flagged and talked about in advance of today's episode. I will start with a really interesting new research project that has launched this week, called the Data Workers' Inquiry. Now, you might not have heard much about this. I'm really excited by some of what's come out of it so far. Essentially, it's a new community research project run by a number of partners in Germany and elsewhere, including the Distributed AI Research Institute, run by Timnit Gebru. And the project is essentially foregrounding the experiences of data workers, including data labelers and content moderators. So this is a really interesting, long-term research project where they have brought in 15 people who've worked as content moderators and labelers in various countries, and they're using their experience to guide the research. So they're getting them to figure out what it is that is interesting about labor rights and data rights and the nature of this kind of work, and they're basically allowing them to do their own research into a whole number of different inquiries, which is what they term them on their website. And so this is people working in Venezuela, in Syria; it's people who've done data work in Kenya and in Germany as well. A real mixture of people, and through different means they're bringing this experience to bear. So they're producing videos and podcasts. One of the data workers is producing a zine, which is available on the website. And the Data Workers' Inquiry did a virtual event this week, which I listened into, which was really interesting. And they've got a series of webinars over the course of the next six to eight months. I'd really recommend checking it out. I've been covering data work and content moderators who are working on the frontline in countries outside of the big English-speaking Western countries for a long time, and there are a number of really clear examples of where these moderators have not been allowed to unionize, not been given mental health support, and basically not been able to organize and fight for their very important rights. So I'm fascinated by this research project. Hopefully we'll be able to bring some of the folks from the Data Workers' Inquiry onto the podcast at some point in the future. What did you think of this, Domonique? Are you interested in hearing from this group of workers who, unbeknownst to us, are helping refine the models that help keep people safe online all the time?

Domonique Rai-Varming:

Yeah, I mean, Ben, I actually hadn't heard of this story, so thank you for sharing this one. I read it with interest, actually. It was something that I actually hadn't thought about at all. And upon reading it, it made me think of sweatshops. You know, we used to talk about factory sweatshops for making cheap clothes, and it feels like this is essentially what's happening here without us even thinking about it or realizing. Yeah, I mean, there's no doubt that I'd love to know more. And it's obviously really important and crucial to ensure that there are proper ethical principles in place to, A, guide AI development, but, B, also look after all workers, whether they are at OpenAI or whether they are, as this is looking at, the workers doing data input in Kenya, Venezuela and Syria. It seems so important.

Ben Whitelaw:

Yeah, agreed. And, you know, the idea of you being a refugee in a camp in Syria doing data labeling is something that you can't really capture in a traditional research project, a typical research paper, which would reduce, I guess, those experiences down to a few quotes. If you got quoted in a news story, you maybe would get one or two quotes. But what's fascinating here is that these folks are being encouraged to really unpack their experience through a number of really creative means. And I'm excited to dig deeper into it and see what they produce, really, because I'm hoping this gets picked up by the media. I'm hoping this gets picked up by politicians at various levels, who start to understand the kind of trauma that's sometimes involved in sitting on queues for a long time and hitting yes or no on various bits of content, applying policy at scale. These are really difficult things to do. I guess Trustpilot has moderators; you talked a bit about it at the top of the episode. How do you work with that team to ensure that they're applying policy, and that they're looked after? Do you have processes in place for that?

Domonique Rai-Varming:

Yes, we absolutely do. I think I can distinguish the sorts of content that our moderators see from other platforms: we don't host video, we don't host images. And so actually the harmful content that this story speaks more deeply to isn't something that necessarily applies to Trustpilot today. But in terms of how our moderation team work, they sit within our trust and safety org, so they're very close to what's going on. We see them more as trust and safety professionals, right, not content moderators necessarily, which I think is maybe an older term now. You know, they're trust and safety experts. They know what's going on in the legislative space, they know what's coming down the line. We're a tight team. They are involved in helping us frame policy and identifying, okay, if that happens, if the OSA is going to require this of us, we'll need to change these three policies and these five email macros that we send. So they're always in the loop and very involved in all of our processes.

Ben Whitelaw:

That's interesting. So even though they're content moderators in, like you say, this older term, they are working at a more elevated level than perhaps lots of people are used to; they're inputting on T&S strategy in a way. That's a fascinating trend that we're seeing, I think, where the humans who are involved in content moderation are increasingly experts on regulation, on applying policy and enforcing it, and on setting up operational systems to do that, in a way that probably, five, six, seven years ago, they weren't, or there were few of them able to do that. We're seeing a real professionalization, I guess, of the whole industry, particularly for those folks who were doing some of the most difficult work at the bottom.

Domonique Rai-Varming:

I think that's right, Ben. And I think there is, of course, a ticket-turning, ticket-work element to it still, which is really important, because that's the stuff that needs to get done. But it's not a one-and-done approach for us. It's that bigger flywheel piece. And as we're starting to think about using technology more, for example, that reinforces the need for our people, our moderators, our superheroes actually, to be really sharp on policy and to be the subject matter experts to guide what we'll be doing in the future.

Ben Whitelaw:

Yeah. Okay, cool. That's a really interesting research piece that we've covered there. We'll move on now to a similar story in many ways, a case study, actually, that you found, Domonique, about machine learning and human rights. And this is something that the Trust & Safety Foundation has published very recently. You actually, I think, have been in touch with the author, which is a helpful thing in this case. Talk us through that.

Domonique Rai-Varming:

Yeah. So this is a really interesting case study that I came across this week, which was written by Alan Kyle. It's about machine learning and how that can undermine human rights. The focus, and the platform that it's honing in on, is YouTube and how they used AI to moderate content from the Syria crisis, which goes back to 2011. So the backdrop to this was that the civil war broke out in 2011, and part of the suppression at the time was around free speech, and citizens and journalists were being persecuted because of their online activity, right? And on the ground, this online information was really important. It was really important that it be uploaded to internet platforms, and it helped provide really valuable information to human rights investigators. It was essential. So evidence was uploaded to social media platforms, including YouTube, to try and build human rights defense cases or prosecution cases. And what happened was that lots of machine learning tools were used to moderate the content. And let's remember, I think it's important to remember, that this was a number of years ago, right? So perhaps the models weren't as sophisticated as they are today. But given there was so much content going through classifiers, what this paper has found, with the benefit of hindsight, is that the sheer volume of the content, the limited contextual information, which we now know is really important when it comes to prompt engineering, and things like resource constraints... So you've got volume, complicated context, and resource constraints, which made it really difficult for the models to make the right decisions.

Ben Whitelaw:

So I think this is a really interesting case study for a number of reasons. You're right that this is seven years ago now, basically, since YouTube introduced this new classification system and applied it to videos in Syria, and it means that it's a warning, in many senses, of how relying too much on technology can suppress speech. That's really the takeaway from this research and from this case study. It's a reminder that, even though platforms are kind of acting with the best of intentions, and this was a reaction to what was obviously a very difficult event, and involved some political pressure, I think, to address the speech on the platform, actually there are downsides to rolling out classifiers like this without the right testing, without the right kind of red teaming, really. I would hope that it's different now. As you say, I would hope that the models and the sophistication of the work being done on platforms are much better. But it's a warning in a way, isn't it? We are at risk of perhaps doing this again, maybe at a different scale. And we know in conflicts like Gaza and Israel, and elsewhere in Ukraine, this is still a problem: people's posts, people's captions, people's images being hidden, being shadow banned, to use the term that many people use, again through means that are unclear to them, because of the classifiers on the platforms.

Domonique Rai-Varming:

I think it's so interesting, and I agree with you, Ben. It's really important, and it's difficult to strike the right balance, right? I mean, the technology that we've got at our fingertips is so powerful and has such huge capability, which is exciting, and of course can be really, really attractive to platforms, to businesses. Thinking of this case, one of the issues that came up was resource constraints, right? If they didn't use tech, they would have needed, I don't know, thousands of additional people to review all of the content. And so the opportunity that comes with using technology in this way is ripe. But I think that with that use rightly comes responsibility. And I think, for me, what we've learned from this case study, and from some of the other news that's been out this week, is that it's really important that we always have humans in the loop. And I think, for content moderation generally, it feels like we're entering a new era, a new era in content moderation, and our moderators are almost our superheroes on the front line, protecting us all from this sort of content and engaging with context and nuance that would take a model a lot longer to catch up on. And I think what also came out quite strongly from this was the importance of building trust in AI and AI use. You know, we've come a long way this year with AI governance. I mean, it's not all the way there yet, but the importance of transparency: I don't think it was clear in the Syria case whether decisions were being made by classifiers. And we now know best practice: we need to be transparent when AI is making decisions, and we need explainability. You know, we need to bring the general public along with us in this AI use journey, educating the general public around how decisions are being made and whether they're being made by machines or people.
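[A hedged sketch of the human-in-the-loop pattern Domonique argues for: act autonomously only at very high confidence, and route the grey zone, and anything that might be documentation evidence, to human reviewers. The thresholds, signals and labels are illustrative, not any platform's real policy.]

```python
def route_decision(violence_confidence: float,
                   possible_documentation: bool) -> str:
    """Decide what to do with a video flagged by a violence classifier."""
    if possible_documentation:
        # Potential human-rights evidence: never auto-remove.
        return "human_review"
    if violence_confidence >= 0.98:
        return "auto_remove"       # act alone only when very sure
    if violence_confidence >= 0.60:
        return "human_review"      # the grey zone goes to people
    return "keep"

# A conflict-zone upload the model is 90% sure shows graphic violence,
# but which matches signals of documentation (e.g. channel history):
print(route_decision(0.90, possible_documentation=True))  # human_review
```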

Ben Whitelaw:

Yeah. And the other thing, I think, is the access to data for researchers to be able to do this work as well. You know, Alan Kyle could only really do a case study like this, and research like this, because he was able to get access to data. We talked at the top of the episode about the DSA coming for X slash Twitter partly because of a lack of data access for researchers. So even seven years on, it's a really helpful thing to note, and to recognize that the classifiers performed poorly and suppressed speech in a way that hampered human rights investigations. I think that's a massive thing. We need to keep a long lens and a long view on some of these questions, because they will take a bunch of time to research, and for academics to dig into and figure out.

Domonique Rai-Varming:

Absolutely.

Ben Whitelaw:

Yeah, super. Okay, so let's move on to probably our last story today, which is, I think, a really interesting one, because we've got a policy expert on the podcast today. It's a story, again, that's been moving in the background for the last couple of weeks, but it's Etsy's decision to basically strengthen its approach to mature content on the platform. So Etsy has long been a place where you can buy adult toys. I haven't done it myself, I will say that, but I know people will be thinking it. You know, you can buy sex toys. You can buy adult imagery; it doesn't veer into pornography, but you can buy kind of explicit items and images, and the platform has always allowed that. There's been a policy in place for that, and sellers have had to tag the items that they're selling as mature and basically go through an upload process to ensure that users of the platform can filter it out if needs be. That has changed in the last few weeks, and this will now not be possible at all for sellers. So the platform is going to do three things. It's going to limit adult toys and sexual accessories. It's going to prohibit items that depict sex acts and genitalia. And it's going to introduce stricter criteria for listing images that feature mature content. And this hasn't gone down very well at all. So the Guardian wrote a piece having talked to sellers on the platform, who have basically said that this is a kind of massive reversal in Etsy's policy. You know, one of them said that it was a lazy solution to the problems of noncompliance and non-enforcement that Etsy created in the first place. A big shift in policy for Etsy. We don't necessarily know the reasons why. Etsy hasn't performed hugely well; last year it had to make a number of layoffs within the company, I think about 250, if I remember rightly. And so this might be a result of that financial pressure, as you talked about, Domonique. From a policy perspective, as somebody who has had to wrangle with policies in the past and had to make big shifts in policy, what do you think when you see something like this?

Domonique Rai-Varming:

I think, as you mentioned there, Ben, there's probably a lot more going on behind the scenes. And I think with policy and decisions like this, there's often a number of things in play, and striking the right balance at any one time is important. And I think what's not clear to me from this story, as you've already identified, is what's driven this shift. Is it regulation? Is it regulatory pressure? We don't know. Is it internal resourcing pressure to actually deal with the content? We don't know. Or the financial pressure of moderating, of enforcing; it probably takes a while to get through these sorts of reports. You know, it's very manual. It could be that. So there's always a number of considerations, and I think on this one it's not crystal clear to me.

Ben Whitelaw:

Yeah, it's a strange one, isn't it? What they have said is that it will take effect at the end of July, on the 29th, and that it's about what it calls maintaining its position as the destination for truly special creative goods. I mean, you could argue that.

Domonique Rai-Varming:

Pun intended.

Ben Whitelaw:

So yeah, "special creative goods" is a subjective thing and may or may not include sex toys. So it is an interesting thing, and I'll be interested to see if there's any further fallout from sellers on Etsy as to this policy, because it is a bit of a shift. But interesting to note, and yeah, really helpful to get your take on that, as we have done throughout the podcast. Thank you very much for joining us. How have you found it, before we wrap up today?

Domonique Rai-Varming:

Oh, thank you for having me, that's the biggest thing to say. I've really enjoyed it, Ben. This podcast, Ctrl-Alt-Speech, is my go-to every week, as is Everything in Moderation, the newsletter; it's something that I share with my team every week too. So it's a pleasure and a privilege to be able to be part of it. So thank you.

Ben Whitelaw:

Brilliant. Well, thank you for taking the time, for giving your insights, for joining in Mike's stead. Mike will be back next week, so we probably won't be talking about the EU or the UK election or football quite as much, or soccer, I would guess, for our US friends. Yeah, great to speak to you, Domonique. Thank you very much for joining. If you enjoyed today's podcast, rate and review wherever you get your podcasts, and we'll speak to you next week. Take care. Goodbye. Bye bye.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.