Not Your Parents' PR

Navigating AI in Newsrooms

March 19, 2024
Marla, Mads & Erica

Struggling newsrooms + generative AI = magic, right? Not exactly.

Mads chats with Christina Veiga, senior media relations director for the News Literacy Project, about all things AI in journalism, including newsroom AI fails and wins, how to identify credible information, disclosures, the proliferation of deepfakes heading into election season, and more.

Find more on the News Literacy Project at https://newslit.org/ or @NewsLitProject on Instagram.

That's all for now!

Follow Us:
LinkedIn: 212 Communications
Instagram: @notyourparentspr, @MarlaRose__ @MadsCaldwell

Transcript

Speaker 1:

Hi everyone. It's Mads, and I am here with a guest today. I don't usually have guests on this podcast, but it's becoming a bit of a thing, because I'm driven by inspiration and curiosity, and this topic is one my partners and I are endlessly curious about: the use of AI in newsrooms. Christina Veiga is here. You're a senior media relations director with the News Literacy Project, a nonpartisan nonprofit building a national movement to advance news literacy in America, to create better informed, more engaged and more empowered individuals and, ultimately, a stronger democracy. Welcome, Christina.

Speaker 2:

Thank you so much for having me and thanks for paying attention to this issue.

Speaker 1:

Yes, I feel like we could chat about a lot of interesting things, and maybe we will do more in the future, but today we'll narrow the focus to AI in journalism. I've been having some conversations with newsrooms, and many are not there yet with AI. There's one newsroom in one of our hometowns that's playing around with a bot that writes real estate content from publicly available information, edited by humans, so it feels like it's in the beginning stages of being tested. So I just want to start with a grounding question: where is the news industry when it comes to AI?

Speaker 2:

The news industry is all over the place when it comes to the use of AI. There have been a lot of high-profile stories about newsrooms that have used AI to their detriment, by publishing stories without editing that, as a result, had errors in them or used weird language that was clearly not human generated. You also have outlets like the New York Times and others who are suing OpenAI, claiming copyright infringement. And then you have the American Journalism Project, which is a venture philanthropy that supports local news. They cut a deal with OpenAI for $5 million to help local newsrooms experiment with how to use AI to assist in their reporting, just like you were describing in the intro. And then, of course, you have the AP, which had been using AI well before all of this hype to automate sports scores, that sort of thing. So it's an incredibly complicated landscape, and within that landscape, I'm obviously with the News Literacy Project, and our stance is that AI just makes it more important than ever for people to be able to identify credible information.

Speaker 1:

That the humans be involved, OK. And as someone working in media relations daily, you are as well. We're fully aware that newsrooms are shrinking, they're getting younger and greener, and they haven't really found footing in a sustainable model. I could see how generative AI could be a huge tool to bolster the sustainability of newsrooms and feed the community's appetite for content, but what would need to change in order for news media across the country to implement AI safely and effectively?

Speaker 2:

So right now, AI has a real credibility problem. It makes things up, so newsrooms need to be very careful. First and foremost, journalism is dedicated to facts and getting things right, and AI is often confidently wrong, so that would be something that needs to change. We think also that newsrooms need to disclose how they're using AI, if and when they do. A lot of the controversies that I mentioned earlier arose because people were surprised that news organizations were using AI in this way. They hadn't really told anyone, and then it turned out that it kind of blew up in their faces. And so we also think that as more newsrooms start to experiment and implement this technology, they need to be upfront about how they're doing that.

Speaker 1:

Yeah, and I saw a piece that you wrote saying this, and this is just a quote from it: "This new era requires that newsrooms develop new, clear standards for how journalists will and won't use AI for reporting, writing and disseminating the news." And I think you also brought up, related to trust and transparency, that we often consume news and then want to know the source, who wrote this, and so being clear when it was generated by AI.

Speaker 2:

Yeah, part of being news literate is looking for signs of credible journalism: the sources are disclosed, the organization is following ethical policies, there is diligent fact-checking and editing in place, that sort of thing. And so when you're using AI and maybe skipping some of those steps, that does not go a long way toward building trust with your audience, and trust in media right now is at all-time lows. So news organizations need to be doing everything that they can to build that up. A lot of people also don't understand what goes into making credible journalism, and so, whether it's AI or any other part of the journalistic process, we really want newsrooms to be transparent with all of the layers of confirmation and fact-checking and a dedication to fairness and looking out for bias, that sort of thing. Whether or not AI is involved in producing the news, journalists need to explain their work, because we believe that when people understand how journalism gets made credibly, they will trust it more.

Speaker 1:

Yes, yes. You recently wrote an opinion piece for Poynter called "To build trust in the age of AI, journalists need new standards and disclosures." And one thing you said is, as journalism practices inevitably evolve, both aided and disrupted by this new technology, transparency is key. And, as a true communicator, you brought up the idea that once a news organization decides on guiding standards for implementing this technology, they must communicate it to their audience, communicate the why behind that decision. I thought that was so insightful, because the distrust does feel pretty widespread, so consumers could lose trust in a publication if it isn't communicating.

Speaker 2:

Yeah, and there's a great organization called Trusting News that we've partnered with in a number of ways. They work with newsrooms to help build trust with their audiences, and they have all kinds of resources available, specifically around AI. So there are organizations out there to help news organizations think through these challenges.

Speaker 1:

That's great to know. You acknowledged up front a couple of news organizations that are doing it right. Can you share a couple of those examples and what that might look like specifically?

Speaker 2:

Someone that comes to mind is Wired. They were one of the first news organizations that came across my radar that had an entire web page dedicated to explaining how they would or would not use AI in their reporting. Not only did they lay that out, they laid out their reasoning behind it, and they also provided a contact email for people to reach out if they had any questions. Those are all really best practices. People want to understand what decisions you're making and how and why you're making them, and part of being news literate is also knowing that you can reach out to news organizations when they fall short and hold them accountable. And so giving people clear ways to engage with the newsroom is, we think, a sign of credibility and something that more news outlets should do.

Speaker 1:

I'm curious if you have any data on how it's going with a Wired or others who have started to implement the use of generative AI: productivity data, readership data, people clicking on AI articles versus human articles. I'm sure it's all very scattered right now, but I'm curious how it might be measured.

Speaker 2:

That's a good question. I'm not really sure, but AI is changing so rapidly, and it's getting better and better as well. So part of the equation here is being reflective and willing to go back and rethink how and if you're using AI. I would hope that newsrooms and others are collecting that data to inform decisions going forward.

Speaker 1:

That makes sense. Do you have any other considerations specific to PR people listening that you'd like to add? Do you envision a future where we will be pitching bots our stories and they'll be playing editor to select them? Or really any insights?

Speaker 2:

Well, so far, AI has not lived up to the hype when it comes to replacing journalists. I just think about how I've tried to use it in my work. I remember when the hype was first really reaching a fever pitch, and I uploaded a press release to look for grammatical errors in it, and it came back with all of these errors, and I was like, oh my gosh. Then I went back and looked at the release, and actually there were none. It had made them all up. So there are high-profile cases, not just in journalism, of AI messing things up; lawyers have also tried to use it to create briefs that included wholly made-up references to laws, that sort of thing. So really our message to anyone who's using this tool is to know that it has real limitations in its credibility as a source. So if you are a PR person who's relying on ChatGPT or any of these technologies, you really need to have an editing process in place to make sure that you don't end up doing anything embarrassing. The other consideration to keep in mind is copyright. A lot of the lawsuits right now involving news organizations and AI companies revolve around copyright and whether their work was used to feed these chatbots in a way that is not permissible, so that's another thing to keep in mind, especially if you're creating images, that sort of thing.

Speaker 2:

So a lot of the conversation around AI is a lot of hype: it's going to destroy the world, or it's going to save it. I think right now, we just need to be aware and work within the limitations of AI. As for where it will lead, I'm not in the business of making predictions, because I feel like a lot of them will end up being wrong, but it's here. It's here to stay, and so we should learn how to use it responsibly.

Speaker 1:

Yes, agreed. Treat it like your greatest, most coveted intern: edit everything and appreciate it.

Speaker 2:

Yeah, yeah, that's probably a good approach.

Speaker 1:

For our last couple minutes, I just want to switch gears briefly, because I thought this was important and I would love for your wisdom to be shared. You recently did an interview about deepfakes and the election, and for our listeners who don't know, deepfakes are photos, video or audio recordings that seem real but have been manipulated in some way with artificial intelligence. It can be a bit scary, but one of your roles is to help educate the public on these tactics so we can be informed. So what do we need to know about deepfakes heading into the upcoming election season?

Speaker 2:

The first thing to know is that mis- and disinformation swirl around election season, and you just need to be aware. So if you see a claim online that seems sensational or that gets you feeling a strong way, whether that's fear or anger or even hope, that's a good sign to pause and consider the source of this information and its purpose. Especially this election season, where AI is more widely used and more mainstream than ever, there are concerns that that could lead to the propagation of even more mis- and disinformation that's really convincing sounding and looking. But the good news is that it can be easy to spot. Right now there are limitations, visually and otherwise, in some of the AI content that gets produced, so that's one easy way to spot it.

Speaker 2:

But the basic news literacy skills that apply to assessing any piece of content apply to AI. Ask yourself what the source of it is. A lot of these AI images that end up going viral were actually created on AI forums, and then they kind of break loose from there and get shared out of their context. Or they're shared as satire and, again, they break loose from the context and spread as though they're real. So take a beat and look at the source: is it a satire account, for example? And then look for multiple credible sources and what they've said about this content that you're seeing. Just open up a new tab and do a quick search, and oftentimes you'll find that fact-checking organizations have already jumped on it. Or if you don't see any coverage about a sensational thing that you've seen, that's a good sign that it's probably not true.

Speaker 1:

Yes, thank you so much for sharing your wisdom. I just want to finish with a quote from you: "No technology, regardless of how sophisticated it is, can replace the trust journalists build with their audiences." So, Christina, thank you so much for being here, and I appreciate your time and talent. Thanks for your interest. All right, bye-bye.