Pressing Matters

Second Anniversary Episode: AI and Journalism

Big Valley Marketing Season 2 Episode 12


Dave Reddy (00:00):

Hello everyone. We've got a very special episode for you this month as we celebrate the pod's second anniversary. Given that AI is the topic of the decade, not just in tech and journalism but the whole wide world, we asked two of our favorite guests back to discuss where this is all going: Jennifer Strong of the Shift Podcast, who also was a Pulitzer Fellow for AI accountability, and Connie Guglielmo, the former CNET executive editor, who for nearly two years now has been its SVP of AI Edit Strategy. Together they are two of the foremost authorities on AI in the tech and journalism world. We brought them together to discuss AI as a technology, AI and journalism, and my new favorite term, "bot shit," as well as a few of our favorite quotes from this past season's guests for this episode of Pressing Matters from Big Valley Marketing, the podcast that brings you conversations with the top media and influencers in B2B tech. I'm Dave Reddy, head of Big Valley Marketing's Media and Influencers Practice, and I'm your host. Through research and good old-fashioned relationship building, we've identified B2B tech's top 200 media and influencers, including Jennifer and Connie. Here's our special chat about AI with Jennifer and Connie. Enjoy. Jennifer, Connie, welcome back to Pressing Matters. My first two repeat guests for this very special episode about AI in journalism.

Jennifer Strong (01:28):

Awesome. Thank you so much. Happy to be here.

Dave Reddy (01:31):

So a year ago, and certainly two years ago when GPT-4 came out, AI was the solution to everybody's problems. It was going to solve tech, it was going to make journalism great again, you name it. To quote Gartner, we seem to now be in the trough of disillusionment, which was probably predictable. Do you agree that we're in that trough and if we are as AI experts and journalists, what needs to be done to get to the other side of the trough, Connie?

Connie Guglielmo (02:01):

Okay, well, first of all, I don't agree that last year it was a solution to every problem. I think there were certain parts of the industry, your venture capitalists who were investing billions of dollars in AI startups, who certainly had a lot of hubris and a lot of wonderful things to say about this new technology. In journalism, I don't think anybody was saying it was the solution to any problems. But what it was, was a way for businesses to save money, and if you recall, there were all sorts of reports from Goldman Sachs and Forrester and Gartner and McKinsey about the productivity potential for gen AI. And what you saw last year was a lot of people put the cart before the horse and say, okay, wait, we need to cut programs, we need to cut jobs, we need to reposition things, because we've got to invest in AI because ultimately it's going to save us money.

(02:56):

And I think people on the ground who understand how technology works... I am sitting in my son's bedroom down the street from the Apple garage where Steve Jobs started Apple. There's always a lot of excitement about new tech when it comes out. Certainly that was the case with ChatGPT, but there's also a lot of magical thinking about what it can do. Oh, it can fold your laundry and slice bread and do all these great things. And so we were in the magical thinking phase in certain segments of the population, because ChatGPT was very different. It was a proof point for generative AI and how you could have a natural language conversation with an AI. I mean, that is an amazing advancement, but putting that into action? The devil is in the details. So you saw a lot of investment hubris. You saw a lot of people get excited about how much money they could potentially save, and that's where you saw the job cuts and all of this redirection of resources.

(03:54):

But people who were actually on the ground, people like me in a newsroom... My job last year was to evaluate it from a pragmatic perspective and understand truly how you could bring these tools in, not what people's theory or magical thinking was. Well, you could see it wasn't the be-all end-all, and that there is a lot of work that needs to go into implementing it in a way that adds value across a business. So I think what we're seeing is a reset of expectations about those productivity gains. Goldman Sachs did a big report in June saying saving time and money on repetitive tasks is not worth a trillion-dollar investment. What trillion-dollar problem is gen AI going to solve? So they are even cautioning against this overinvestment, this overenthusiasm, in a technology that is profound. There's no question it's a general purpose technology, and in the history of humanity there have been what, like 22: fire, money, publishing, the internet. Gen AI is a general purpose technology that will transform the world. We're just not there yet.

Jennifer Strong (05:01):

Yeah, okay. A hundred percent of everything Connie just said. And also, as a reporter covering this stuff, I would say in the chaos of last year, you just sort of wanted to stop and remind people that no business has cut its way to greatness. And I kept waiting to see someone do something that was truly novel, or took advantage of, aha, I have a tool, I can now go after that thing that I've been waiting to go after for X number of years. At any rate, I also want to back up and just say, I don't think we're in this trough of disillusionment, but I do think a cycle of hype and disillusionment with AI is not new. For decades we had a name for that: it's called an AI winter. That said, we're not in one of those right now. If anything, I would echo again that some portion of the population is a little more pragmatic now, or realistic at least, about what this can and can't do, and I'm kind of cheering that, because if we're going to drive this bus, we've got to be practical and realistic. It's as necessary as it is important, especially in fields where facts and precision are the key to their very existence, like law, medicine, or journalism.

Dave Reddy (06:10):

So the AI winter is not coming?

Jennifer Strong (06:12):

I don't know if it's coming, but I don't think it's here yet.

Dave Reddy (06:16):

This whole discussion is sort of typical of two things that Americans tend to do. To quote Connie, we tend to have magical thinking about things that pop up and we think they're going to solve everything, and we also love instant gratification. But particularly in America, the problem with instant gratification is it takes too darn long. So with that, let's start hearing from some voices who were on Pressing Matters this past year. Here's CNBC's Lori Ann LaRocco on the need for patience with AI. She seems to agree with you folks very closely.

Lori Ann LaRocco (06:51):

AI is definitely being used, but you've got to do it the right way. I mean, I am privy to a lot of great innovative products, and my gosh, it still has a long way to go, a long, long way to go. I don't see it in a matter of short-term years, but maybe over the course of this decade, that's when we'll see AI really get to be strong, be it from better driving patterns to the way warehouses are used with robotics. There's still a lot of fine-tuning out there, a lot of opportunity, but we have to be smart in terms of how we create that opportunity and in what ways we control that opportunity.

Dave Reddy (07:38):

So Jennifer, I'm not sure if Lori Ann sees an AI winter, but she's putting AI a little bit more in sort of the quantum computing category of we won't see much until the end of the decade. What do you think about that?

Jennifer Strong (07:49):

Well, I don't know if she's saying exactly that, and I don't think it's exactly quantum computing, but I generally agree with her quote. I agreed with her in person at a dinner recently, too. I think, as I've said to you before, Dave, I've spent enough time with this tech to know there's plenty that it's still pretty stupid at. That doesn't mean it's always going to be that way, and change could come quicker than we think. But for now, we just don't know, and we don't know a lot about how it works today, so it's kind of hard to guess exactly about tomorrow or further down the road. But yeah, I'll just repeat what she said: we have a long way to go, and we need to be smart about how we create that opportunity and smart about how we decide to control that opportunity, if we do.

Dave Reddy (08:36):

I may have been putting words into Lori Ann's mouth with the quantum computing comparison, but Connie, where do you stand here? Are we looking at the end of the decade or more the middle of the decade?

Connie Guglielmo (08:46):

No, I think we'll see progress incrementally over the next few years, because I think the mistake people make is treating AI as one thing. Generative AI is one flavor: natural language conversations, and it enables a whole bunch of use cases. You have machine learning and other flavors of AI that we've already been living with, all your recommendations from Amazon or Netflix. That's a form of AI that's been in place for a while. So I think what we're going to see is people get more serious about the use cases that they can reasonably accomplish and start to focus on probably smaller projects. Because, as we all know, and if anybody tracks Nvidia stock, you need a lot of capital. You need a lot of compute power, you need server farms, you need electricity. There's a lot of power involved in running these massive AI engines. And sure, technology is always getting better, but going back to what Goldman Sachs said about the trillion-dollar investment that companies are expected to make, or are thinking of making, in five to seven years, at some point you have to be pragmatic about what you want to solve and how you can do that.

(10:08):

Just because you can do something with tech doesn't mean you should do it. And so that pragmatism is what we're seeing. My earlier point was that in the interim, people are using it as an excuse to clean house and redirect focus and say, we're all about AI. Well, you can be all about whatever you want, but until you actually come up with a use case that makes business sense... If you're a business, there has to be an ROI exercise, and that ROI exercise is not just that we can potentially process something faster, or that we can have somebody else write things for us, because if we then have to fact-check every single word because of hallucinations, the cost of doing it is actually higher than you think, not lower. So I think there will be incremental advancements. We're already seeing some of those, and it's not going away, but I don't expect to wake up in the next year or two and go, oh my god, the secret of the universe has been solved. No, I don't.

Jennifer Strong (11:13):

Yes, a hundred percent. This is going to be how it's going to go, Dave. I'll just keep saying, yeah, what Connie said. I mean, it makes me think of the radiology example, how we told people they shouldn't study radiology anymore. We wound up with a worldwide shortage of radiologists, and then we came back and went, oh, actually, AI is fantastic at reading these scans, and we need people working with it. And for now, that's going to be the game. And then there'll be the other use cases that are just going to blend into the background. To Connie's point, the ML and AI things that live in our phones and in our TVs and in our lives and in our cars, we don't really think about them because they work, for now, and they're just there. We're going to have more of that. But we're not going to see a superintelligence, I don't think, anytime soon.

Dave Reddy (12:02):

Your radiology example segues nicely into our next clip, which was from Evan Kirstel, the self-styled B2B tech influencer, who was on the show back in October. He said the winners in any profession will be those who've magnified their skills with AI.

Evan Kirstel (12:21):

Anyone not using it to the degree I'm using it is going to be sadly left behind. I think that's going to come to every profession. Pick your profession: lawyers, journalists. As someone joked, you're not going to be put out of business by AI, you're going to be put out of business by someone using AI, and I think that's sadly going to be the case.

Dave Reddy (12:46):

So again, on one hand there's the need to be very smart about this, to understand where it's going. But Evan makes a good point: if you're not at least playing with it, you might find yourself out of a job.

Connie Guglielmo (13:01):

Yeah, so I agree with that. Look, when I started as a journalist, we had typewriters and notepads and pens. Then I got a computer, then I got a smartphone that did all my audio and video and auto-transcription. Tools evolve. They change. So AI, for everyone in the world, is, as I just mentioned, a general purpose technology akin to fire and money and the internet. That's the scale. It's a tool that you need to learn to use, one that augments or enhances what you're doing. You are still the driver of that tool. Even though I have Otter do the transcriptions of my audio interviews, Otter is not writing my story. It's not going through and prioritizing the quotes and telling me what the theme is, although I think they would like to try. So every profession needs to understand how to use new tools. This is a new tool.

(13:54):

The second part of that quote, about whether you'll be competing with businesses that use AI against ones that don't, let's put that aside for now and just focus on the individual in jobs. If you look at LinkedIn and Dice and all these hiring websites, as I have, having on your resume that you understand how to use some AI tools, or have a certain number of hours playing with them, is now becoming a standard thing that employers might be filtering resumes for when they hire. So yes, everyone should make an investment and spend some time understanding how these things work. And I just looked it up: the last survey that Pew Research did, in February, found that 25% of Americans had used ChatGPT, which means 75% of Americans have not. So we live in a bubble in tech land thinking everyone's using it, blah, blah, blah, but that's not the case.

(14:48):

So is it a tech you should use? Yes. The people who are masters of their area of focus, you're an accountant, you're an engineer, you're a writer, you're a doctor, you're a radiologist, you're a chef, those are the ones who are going to figure out how to use these tools to help them, because they are accustomed to understanding how new tools can help them be better or more efficient. So all of that has to happen. We're not there yet. People are working on it now and trying to imagine how this technology fits: whether they can trust it, whether they can afford it, whether it works seamlessly with their workflow. So all of that has to build. But in the future, when your kids in elementary school or in high school and college graduate, will they need to know how to use AI and have that as a baseline skill set? Yes.

Jennifer Strong (15:42):

Yeah, agreed completely, again, with Connie. I also agreed with Evan, with the exception of where he says that AI paired with people taking your job will sadly be the case. I don't know if it's sadly the case or just simply the case. But on top of that, I do think we're going to need not just skills in using AI, but still some critical thinking skills, and a few other skills that are going to be a little bit harder to get, I think, in the years ahead. So there's focus in two different areas. But for the overachievers who have great critical thinking skills and also have plenty of experience experimenting with the tools that are coming down the pike for their careers or their particular fields, yeah, that's probably going to be a successful way of going about it.

Dave Reddy (16:27):

And of course the courts can't help and the politicians can't help but get themselves involved. There are thousands of bills out there. We don't break news here on Pressing Matters, since this will run about three or four weeks from when we're taping. But at this very moment, a bill has just been placed on the California governor's desk, Governor Newsom's. It's Bill 2602, which would regulate the use of generative AI for performers, not only those on screen in films and TV, but also the use of their voices and body movements in other media. Personally, I would love it if a bot wanted to take over for me here on Pressing Matters, but Gavin Newsom may get in the way of that. So let's talk about that. I mean, fundamentally, and there are many of these different flavors out there, but fundamentally one side says, let us do anything we want, we have to innovate. And then there's folks saying, whoa, whoa, whoa, slow down. Who's right, Jennifer?

Jennifer Strong (17:27):

Who's right? I would just remind us this is how this has always worked. If you cover public policy at all, even for a minute, there will be those who say that any regulation will crush everything, and those who say that without it we're doomed. The answer is probably somewhere in the middle, and it will be easier for industry to deal with this a few times instead of hundreds or thousands of times. So we're going to see how this nets out. Just like we saw with data regulation, though, it will be easier for people to comply with some national or international rules rather than handling it state by state. As for what the governor does, I guess we're just going to have to wait and see, because there are some pretty powerful influences and constituencies on both sides. So I'll be watching.

Dave Reddy (18:14):

Connie, what do you think? What's he going to do?

Connie Guglielmo (18:16):

So I'm looking at SB 1047. That's the one that Scott Wiener, the state senator for San Francisco, put forth. That's the one that just passed the California Assembly this week, and that Newsom can either sign, reject, or allow to pass without his signature. That bill puts safeguards around any high-end AI that costs a hundred million dollars or more to develop, and it also offers whistleblower protections to employees of some of these AI companies who highlight problems. Will he sign it or not? As Jennifer said, and of course I agree with everything she said as well. We are knowledgeable women who know what we're talking about. Amen.

(18:58):

Industry doesn't want California to pass this law because they claim they want it at a national level. They want the federal government to do one regulation that affects everyone. And California says, yeah, well, if we waited for the federal government to do anything, we'd all be dead from AI in the interim, because there are no safeguards in place, and people are building these powerful systems and collecting all sorts of information, and it might be too late. The politicians are politicians, who have a different agenda than the normal person trying to do the right thing. What I will say is that SB 1047, whether it passes or not, highlights the need for us to take a closer look at technology solutions and systems and do some sort of regulation earlier rather than later. In the past, tech has always gotten a pass when it comes to regulation, because, you know what?

(19:49):

They're the innovators, they know what they're doing, they're the smartest people, blah, blah, blah. But as we've seen with the rise of misinformation and fraud and abuses of systems and cryptocurrency, et cetera, you need to have some regulations. Tech people are not the smartest people on the planet. Case in point: Elon Musk. They might be the richest, but they're not the smartest. And so there's a lot at stake with these systems, and it's not just people's livelihoods, like Hollywood actors and creators worrying that they will have their voices recorded once and never be paid again. It's also about equity of access to information, and bias in systems that make decisions like whether you can get a mortgage or whether you can get health insurance. We don't know how all of these AI systems are going to be implemented, and therefore there has to be some smart framework for at least assessing the potential harms and notifying people, and also putting the tech companies on alert that they have responsibility and accountability for gauging and watching for those harms, not waiting for things to explode and saying, whoa, what do you want us to do? We're ready, just tell us. Which has kind of been the way it's worked in the past, whatever.

Jennifer Strong (21:10):

It absolutely has been the way it works, and nobody can see this at home, but my neck is going to get sore from nodding along here. Exactly this. When I grow up, I want to be Connie. She's so organized and to the point.

Dave Reddy (21:24):

I was really hoping for a replay of 1990s CNN Crossfire, where you guys would just yell at each other. That seems to be very popular.

Connie Guglielmo (21:31):

Did you miss the part, Dave, when I said we're both really smart women who know what we're talking about?

Dave Reddy (21:35):

No, I didn't miss that, and I already knew that. But yeah, a little dissent would be good. That's okay, though. Maybe this will make some dissent. Let's shift to journalism, which, I think we would all agree, has been broken for a long time in a variety of ways, but particularly broken as an unintended consequence of technology. And both of you are looking at this in your jobs in different ways. Connie, you're the SVP of AI Edit Strategy at CNET. You've been looking at this for a couple of years now. Jennifer, this obviously applies to you not only as a podcaster and a reporter, it applies to all podcasters, reporters, and even us flacks, but also in your role as a Pulitzer Fellow for AI accountability. We had a similar discussion with every guest this season on Pressing Matters about this debate in the newsroom, and it seems to be a very hearty discussion. And one topic, you mentioned this a little bit earlier, I think, Connie: how much writing should you do versus AI? Here's Alex Konrad of Forbes.

Alex Konrad (22:40):

I'm using AI to help transcribe my interviews and help me pull things out of the interviews. But right now, there is no AI tool that really helps me write at the standard that I hold myself to. And I do believe that if an AI could perfectly replicate my writing process, I am bad at my job.


Dave Reddy (23:20):

So this is a question we actually asked last year on our anniversary episode, when I had on Charlie Cooper, our internal EIC and, like Connie, a former EIC of CNET. Surprisingly, given how curmudgeonly Mr. Cooper is, he actually thinks there might someday be an AI that could write as well as the two fine journalists I have on my program today. Connie, what do you think? Are we ever going to get there? Will AI ever have soul, or is it just good for the things I don't need to do, like shorthand?

Connie Guglielmo (23:52):

So no, an AI will never write as well as Alex or me or Charlie or anyone, because we interview people and go get original content. We come up with angles based on our interactions with the world we engage with. And right now, even the best gen AI model... I'm looking at a hallucination leaderboard, the kind of stuff I look at every day.

(24:22):

ChatGPT-4 has a hallucination rate of 1.5%. And when you're a journalist, every word matters. So 1.5% is 1.5% too much hallucinating. That means you have to fact-check every single word that it's writing and the inferences it's making. So we're not there yet. But I have to go back to the question: why is the assumption that an AI will replace writers? Why can't the assumption be that an AI is a tool that can aid writers? And that's the premise that I took into our newsroom last year when I was asked to look at AI and what were the valid and invalid use cases for these tools. The people who want an AI to write stories are businesspeople who don't want to pay writers to write stories. So they're looking at a cost-cutting exercise, because in any media company the greatest expense is salaries, people.

(25:19):

That's the biggest cost. Everyone knows that. So if you can reduce that cost and at the same time increase productivity and output, that's a winning business formula. But it's not a pragmatic one, because, as I said, we have a problem with hallucination. Show me an AI that's going to go out and interview you, Dave, and ask you all sorts of questions, and spend time with, I don't know, Bill Gates or Jeff Bezos, or, pick your prompt, Dolly Parton, whoever you want to interview. And if you were to say to that famous person, oh, we're going to get an AI to do it, they'd say, you know what, you're not worth our time to talk to. Come on. So again, it comes back to what are the use cases. I have a problem with the fundamental premise that AI is going to replace journalists.

(26:07):

It's not. And anybody who has used an AI tool to write a story, or attempt to write a story, like Alex did, will see that it does not. It writes what some people might think sounds reasonable, sounds good. But when you actually read it, it's not really very good. It's lowest common denominator. And again, probably riddled with errors. So then the question is, how can you use an AI to augment a job, or whatever it is that you're doing? In the case of journalism, I spent five months doing our AI playbook internally, after interviewing everyone internally and talking to experts externally. Well, you can use it for transcriptions. Otter now has a component that auto-transcribes your interviews and gives you a very credible transcript, something that used to take individuals hours to complete. Great. You still have to fact-check every word, but great, that saves time.

(27:02):

You can use it for brainstorming. I wrote a story about this topic; what are some of the popular angles that people want to know about this topic? You can ask an AI that. You can say, here's the headline that I wrote; give me five other examples of ways I could tell that same story in 65 characters, which is the standard character limit for a headline. It doesn't mean that's going to output great ideas, but as a tool that you can play with and experiment with, sure, go ahead and try that. So I'm just going to stop there, because I'm sure Jennifer has a point of view, but again, I think the premise is wrong. And the last thing I'll say is this: the reason I know the premise is wrong is because I interview a lot of executives and CEOs, and they always talk about how AI can bring this productivity and how they can get efficiencies, aka they can fire people because they won't need them to do that work, since an AI will be able to do it now or sometime in the future.

(27:57):

And I always say, oh, okay, so you've seen Iron Man, and Tony Stark has Jarvis. That's amazing, right? And he says, yeah. And I say, well, first of all, we don't have an AI akin to Jarvis, which is a general artificial intelligence. We have ChatGPT, which is autocomplete on steroids. But secondly, Tony Stark is a genius, and he's the one who's driving the AI. So if you think you're going to replace knowledgeable people who know how to vet and curate information with inexperienced people that you can pay cheaply, you're not going to get a good result. And then there's always some banter, and I always say, the single most expensive person in any company is the CEO, so why don't we get rid of you? Why don't we get an AI to do your job and make analyses? And then you always get the, well, I have experience and I have this and I have that. And I'm like, well, why does all that count when it comes to you, but it doesn't count when it comes to other people? End rant.

Jennifer Strong (28:53):

That's

Dave Reddy (28:53):

Fabulous metaphor.

Jennifer Strong (28:56):

Mic drop. Yes. Okay, so all of that, exactly. And no, I don't use AI to write for me, for all the reasons that Connie just listed, and I disagree with the premise wholeheartedly. So thank you for making that point so I don't have to. And I'm a writer. I love writing. So why would I want something to write for me, even if it was capable of it, versus having something to assist me in this job? I would also back up and talk about efficiency and say fact-checking something written by AI is a completely different experience than fact-checking something written by a human. And here's why: when we get things wrong, it's not usually stuff like a pet's name. Think about the book written about Gary Marcus that brought up a pet chicken named Henrietta. He has no pet chicken named Henrietta. He has no pet chicken at all, but that was missed by fact-checkers.

(29:51):

It's now out in the world; it'll be there forever. And what do you do about this thing that's now going to be baked in for time eternal? It's kind of funny when it's a pet chicken, but it's pretty scary when you think about medical documents or journalism or legal things, where we need to be especially precise. But anyway, going back to tools: I do, like every other journalist on the planet, use AI for transcription, with some degree of awareness that I don't know how that audio is being used by models or others in the world. And so I wouldn't use it for something sensitive unless it's maybe on-device transcription, like on a phone. That's probably safe. And I think we really need to have more conversations about how the tools work than most of us do, oftentimes myself included, because even those of us who do know how some of these things work tend to forget some pretty heavy details.

(30:44):

Like, we talk about AI summaries a lot. Most of these tools cannot synthesize or summarize; they are dropping words from generated text. We all know that subtracting words out of a long list of words is not the same thing as synthesizing information. So once again, there are very specific tools or uses that are going to come along as we go down this road. But for the moment, anyone who is highly precise, working in, say, law, journalism, or medicine, we need to be reading our original documents, processing that information, and providing what we do best, which is our synthesis, our expertise, and our knowledge. Hear, hear.

Dave Reddy (31:22):

So Andrew Nusca of Fortune was on in July. He agreed with you and he took the thought one step further talking not just about content, but content strategy.

Andrew Nusca (31:33):

AI is the kind of thing that's going to separate the wheat from the chaff, meaning AI is the kind of capability that if you are engaged in creating commodity content, something that is not distinctive, but you are a distribution platform reverse engineered into a content strategy, AI is going to blow up your spot.

Dave Reddy (31:53):

So not surprisingly, Andrew went on to say, similar to what you folks would say, that he's very confident the core act of journalism will remain untouched by the AI revolution, and that's what folks are missing: this core of journalism or content is more than writing words on a page. Although yes, that is difficult to get right, both in terms of the rhythm of the language, whichever language you're writing in, and the facts, which are rather important, particularly in journalism. That said, I will not name this publication, because we have this from a member of my staff under friendDA. That's a new one for me. But there is a mid-size tech magazine out there that has told us, and I'm not exactly sure how far they mean they're going to go, but they're going all AI. And I don't mean that they're going to cover nothing but AI. I mean that all of their content is going to be AI generated, and this is a legitimate publication that you would recognize. On the flip side, of course, we heard today, and it's escaping me which one, that a non-tech publication went out of business. So we can talk about content strategy, and we should, but there's a problem here. Some people think AI is going to fix it. How do we find this middle ground? Can it do that? Can it make journalism great again, to paraphrase somebody? Jennifer,

Jennifer Strong (33:14):

I think, to go back to what Connie was saying, our content strategy, which is to assist people who are really great and have great passion for what they do to do it faster and potentially better, to me feels like more of a path than some of these other options. When I hear that a tech magazine is going to have all of its content generated, I think, okay, but maybe nobody really reads what they write right now, and so they don't think people will know the difference. And either people will know the difference or they won't, between what I honestly think of as kind of junk that may or may not be correct more than ninety-something percent of the time, and may or may not get to the point, et cetera and so on. But as to what the quote was about, in terms of separating wheat from chaff, I generally agree with it.

(34:01):

I think we're forgetting, though, that generative AI is also pumping the internet full of garbage that is drowning good work out. This is a hard time for small publishers in a bajillion ways, but here's the bajillion-and-first: think about distinctive small publishers, sometimes independent journalists, writing really great stuff and having a hard time getting it surfaced, because the algorithms are downplaying it because they are so small, but the bots are having no problem finding it. So the bots are able to steal the content that an independent newsroom put together and then send it out to the world, regurgitated or just fully stolen, and we have to figure out what we're going to do about that. For the moment, it's not just that distinctive voices will win. It's going to be distinctive voices with a large enough outlet or backing or something that Google and other search engines trust and surface, and then kind of taking it from there.

Connie Guglielmo (34:53):

So yes to everything Jennifer said, as usual, but let's parse a bunch of the stuff that Andrew talked about. He talked about the value of journalists who really know their craft and are going to put together skillful stories with great context that make you smarter after you read them. That has value and will continue to have value. That is correct. But let's go back to an analogy I always think of, and that is bread. In the olden days, you could buy bread from a bakery, and it was probably pretty good and probably very affordable. At some point there came bread factories, and we ended up with, no disrespect to them, but I'm going to disrespect them a little bit, Wonder Bread, which is a commoditized version of bread that is inexpensive and can be mass produced, and there's a distribution network to sell it, and I'm sure the profit margin on it is high.

(35:45):

So when we talk about journalism, we also have to think about the business models of journalism, which is something I saw for nine years as an editor-in-chief, and which, as a former Bloomberg and Forbes reporter who read nothing but financial sheets, I understand: if you can't make money, you can't fund a staff, right? And what is the expectation for making money? The people looking at AI today to write stories are looking at commoditized content that they can put out at scale for very little cost. So it costs me whatever cents you think it's going to cost, and this is the return on it, because I'm going to pump it through the channels and it's going to be geared toward SEO and clickbait. And by the way, there are words for what this AI-generated, mass-produced, commoditized, somewhat meaningless content is.

(36:37):

And if you haven't heard them, I did not coin these terms, but I will tell you it's either called slop, AI slop, or, as Harvard calls it, bot shit. And again, that's their expression, not mine. You can Google these; you don't have to take my word for it. So, slop and bot shit. So then you have to say to that tech publication, and thank God it's not CNET: okay, who is your audience for that content? Is your audience direct marketers who want their ads placed against something that will be distributed through whatever channels, where maybe somebody will click on the headline because there's a click-through rate? Because it's not going to be people who are looking for insightful, useful content, since it could be riddled with errors because it's produced by an AI. So we have to talk about the business of journalism.

(37:30):

The problem is that nobody wants to fund the business of journalism. I can't even tell you how many content ideas for companies I've had as a journalist sitting here in Silicon Valley, and I have access to pretty prominent venture capitalists and people I can go pitch my ideas to. The first thing they always tell me is, wow, you have a lot of credibility, we love your experience, we love your background, and the people you want to have work with you on this all have amazing credibility. The second thing they tell you is, content is king. Content is everything. You need content. Content brings value. Then the third thing they'll tell you is, we don't want to pay for it. It's not valuable to us. Can you sell underwear instead and get an e-commerce click out of it? So you have to have someone who recognizes the value of what they're selling and wants to invest in that.

(38:18):

Because, as I told you, journalism is a people-intensive business; media businesses are people-intensive businesses. Now, that's not to say that you can't use AI in a newsroom for it to be effective. As I've explored for a year, there are newsrooms that are using it successfully, according to an AP study that was released in April, and I encourage you to go look it up. They're using it to take stories that I write or Jennifer writes and do those little bullet-point summaries at the top that Axios has popularized so well. Even with AI authoring those bullet points, a human still needs to vet those three-sentence summaries to make sure they're accurate, but that saves production time, so a reporter doesn't have to spend their time thinking that up and pulling it out, and they can fast-forward. AI technology in newsrooms is also being used to deliver personalized content.

(39:09):

Okay, you clicked on these six stories about, I don't know, microphones; chances are you're interested in audio and microphone tech. Would you also be interested in these stories? Right? And by the way, we can auto-assemble a list of products for you based on your preferences, your budget, et cetera. So journalism as a profession is under fire because it's a capital-intensive business, and people are looking for efficiencies, and they think AI will bring those efficiencies. People who do good work will survive. But do you have a corner bakery making bread that you can buy affordably on your street corner, or not? That answers the question of where the investment is going.

Dave Reddy (39:48):

I'm going to start using bot shit. I like that. And to your point about the algorithms, I don't know if you guys have noticed this, and we'll get into a very good quote from Rich Jaroslovsky about the business of journalism in just a moment. But first, have you noticed that on social media channels, obviously they've been figuring this out, not only do they know what you want to hear about, now they're feeding you content written by a bot? And it kills me. I get all this sports content and music content, and I'm thinking, oh, I'm going to want to read this, because I'm a big music fan and I'm a huge sports fan, and I get about a paragraph into it and it has that sort of, I don't know, bot-shit-chic flavor to it, I'll call it. There's a new one. And I actually feel guilty, like I'm reading a machine. So, given that, it sort of leads into another guest we had on, Rich Jaroslovsky, now with SmartNews, the father, if you will, of digital journalism, having founded wsj.com back in the mid-nineties when we were still spelling Internet with a capital I. He is concerned about platforms versus publishers. Sort of goes to your point earlier, Connie: I want content, I don't want to pay for it.

Rich Jaroslovsky (40:59):

I know the publishers are determined not to repeat the mistakes they made in the nineties, when they basically, in the "information wants to be free" era, allowed their content to be used by others to build businesses that eventually took away their own business models. So the publishers understand what the stakes are here, but the publishers have much less leverage now than they might've had in 1995 or even 2000 or 2005.

Dave Reddy (41:40):

Relatedly, Rich was on in April, not long after the New York Times and several other titles filed lawsuits against OpenAI, which, last I checked, are still grinding through the ever-swift currents of the judicial system. So, Connie, final thought: is it too late for publishers? To Rich's point, have they given up the leverage they had in '95, 2000, 2005? Is journalism now a platform business as opposed to a publisher business?

Connie Guglielmo (42:08):

So it's not a yes-or-no question; there's a spectrum. But I would say the New York Times lawsuit against OpenAI, which was filed in December, and which is also against Microsoft, because Microsoft uses OpenAI's ChatGPT as the basis for Bing, is going through the courts. These existing chatbots out there, large language models like OpenAI's ChatGPT and Google Gemini, have scraped the internet. They've already taken and vacuumed up and Hoovered all of the content that's out there. So that's problematic. The question the New York Times lawsuit raised is: could they do that? OpenAI and Microsoft claim it's fair use. The New York Times says, no, we produced that, we spent money, that's our IP. And so it's going to be interesting to see what the courts say. Of course, as a creator of content, I side with the New York Times, because these large language models, if you don't have journalism to feed them, what are you feeding them?

(43:04):

You're feeding them Joe Bob's blog from who knows where; maybe it was an AI bot that wrote it. Content has to come from someplace. And so yes, publishers have lost a lot of sway, because the argument that you need to pay for content is a difficult one, as every publisher on the planet without a successful subscription model has found. The Wall Street Journal has the most successful subscription model in the business, I think because you can write it off as a business expense. The New York Times has a subscription model, and its subscriber numbers go up and down depending on whether people are interested in the politics and want to subscribe or not. So we have to come up with better business models. But that doesn't invalidate the fact that there was effort put into creating these content sets, and these AI companies think they can just sweep it up.

(43:54):

So there has to be a point where you say, no, somebody created that; somebody has ownership of it and value in it. Just because Google as a platform has created a distribution mechanism that they've now gotten people to come to doesn't absolve them of the responsibility to pay people for their content. Although in California, a law that was going to compensate publishers for their content was replaced by a last-minute deal, and now Google only has to pay $150 million. It was unfortunate that it ended up the way it did and didn't actually push that point: if you create something of value, should you be paid for it? That's the question we're facing in journalism. Sure, but tomorrow it could be anything. If you create something of value, should you be paid for it? Or, because I'm a big AI engine that everyone uses and I've hoovered it up, should I be able to give it away to people for free? But by the way, I will monetize it.

(44:52):

There's money behind all of these. These things are not nonprofits; they're making money. So then the question is, why aren't they paying people? We saw this happen in the music industry. This is why artists were so reluctant when places like iTunes came out: they were concerned they were not going to get paid for individual songs. Well, guess what? Amazon and Apple figured out a way to pay people for individual songs. It was Amazon's one-click patent that Apple licensed that built the backend business model, if you will, the banking system that empowers iTunes. Smart people can solve these problems. They can figure out how to pay you. The question is, should they? And that's what these court cases are about today. So I think that, yes, publishers made a mistake when they put everything up on the internet, but I don't think that means they have now forever lost ownership of the work they produce.

Dave Reddy (45:44):

Jennifer.

Connie Guglielmo (45:45):

Yes,

Jennifer Strong (45:47):

Exactly.

(45:51):

Well, thanks to you both for that. If I knew what to do to fix media, I would be the most successful person in this business. Unfortunately, I'm a journalist, and I'm better at documenting the present than I am at predicting the future. We're going to need to leave a lot of this to Connie. But I will say I think there's also another element to this, which is just the undermining of the value of journalists and journalism. I, at this point, get more of my income from public speaking than I do from journalism. Now, the irony is that my speaking is about my journalism. So if I wasn't doing the journalism, I'd have nothing to talk about and no one would bring me in. But whereas it's hard to convince someone to spend a couple of bucks on some journalism, it seems to be relatively simple to get them to spend a lot more to bring someone to a conference or an event to talk about that thing we published.

(46:45):

So, I mean, to the point that I've been told not to call myself a journalist anymore, that I should call myself an executive producer, and I'm like, well, I do produce my journalism, because it's audio, but really? I had an interesting conversation with Julia Angwin about that recently. So I don't know. But I of course agree that just because publishers made a mistake 30 years ago, they should not be forever unable to grow or to seek correction of that mistake, or to have lost their property forever because someone took it from them. I don't think that's how we deal with victims or mistakes generally in business or life. It would be nice to think that it's not a question of whether we should compensate the artists, creators, and intellectual-property holders. But I do also spend enough time writing about this, including in that documentary that's going out to NPR stations, where we put in a section to consider the fact that you may really like a certain artist's work, but years from now, you're going to like something that was influenced by that person's work without really knowing where it came from. And it's not just the voices and the writers that get lost.

(47:59):

It's small businesses and tiny artists on Etsy. It's a lot of things that are getting lost right now that make up the fabric, and it's a passion, also, by the way, that can get lost. Think about it: what writer, what creator, from music on down, wants to go into an uncertain world where they may or may not ever be able to survive? It's not like it was easy in the first place.

Dave Reddy (48:22):

Yeah, I mean, the very future of creativity may be on trial here, and we will leave it with that. As much as I would love to talk to the two of you for the entire holiday weekend, and we are entering one, we do have to close at some point. Connie, thanks for being on again. You are, as you said, a very smart woman, as is Jennifer. You're both right about that, and if more people agreed with you the way you two agreed with each other, we might be in a better place, both with journalism and just the world. So I'm going to leave on that optimistic note, and thank you both for being on.

Jennifer Strong (48:55):

Thank you so much, Dave. And Connie, thank you. It's been fun.

Dave Reddy (48:58):

I'd like to thank you all for listening today, and once again, a big thank you to our guests, Connie Guglielmo of CNET and Jennifer Strong of the Shift Podcast. Join us next month when we launch our third season with yet another interview with a member of the B2B Tech Top 200. Folks already on the docket this season include Patrick Seitz of Investor's Business Daily, Esther Ajao of TechTarget, and Michael Nunez of VentureBeat, with many more as well. Please plan to tune in. In the meantime, if you've got feedback on today's podcast, or if you'd like to learn more about Big Valley Marketing and how we identified the B2B Tech Top 200, be sure to drop me an email at dreddy@bigvalley.co. That's D-R-E-double-D-Y at BigValley, all one word, dot co, no M. You can also email the whole team at pressingmatters@bigvalley.co. Once again, thanks for listening, and as always, think big.