Tech Travels

EP6: AI Unleashed: Ethics, Creativity and the Future of Digital Authenticity with Ian Harris

March 06, 2024
Steve Woodard


Embark on a transformative journey with our esteemed guest, tech strategist Ian Harris, as we traverse the evolving landscape of artificial intelligence and its profound impact on businesses and the workforce. This episode promises to unveil the ways in which AI is revolutionizing our society, paralleling the monumental shifts seen during the Industrial Revolution. From the reshaping of job markets to the alteration of economic structures, Ian enlightens us with his perspective on aligning technological advancements with genuine business needs and the broader societal consequences.

As we navigate the intertwined paths of ethical AI development and regulation, we confront the critical societal implications of artificial intelligence as both a tool and an agent of change. The conversation delves into the complexities surrounding fair compensation for creators, data ethics, and the necessity of maintaining digital content integrity in an era when it is increasingly difficult to discern human work from AI-generated work. Digital signatures and watermarks emerge as essential tools against the tide of misinformation that threatens to engulf our perception of truth in the digital age.

Concluding our thought-provoking session, we probe the delicate balance of trust and truth in AI, drawing parallels between the principles of journalism and the verifiability of AI-generated content. We ponder the pursuit of artificial general intelligence (AGI) and how the vast reservoir of human knowledge accessible online has propelled AI to its current prowess. As we visualize a future where AI seamlessly blends into business processes and daily life, this episode is an invitation to explore a world where innovation and efficiency are poised to redefine our collective potential.


Follow TechTravels on X and YouTube

YouTube Channel
https://www.youtube.com/@thetechtravels

Tech Travels Twitter
https://twitter.com/thetechtravel

Tech Travels
https://techtravels.buzzsprout.com/


Speaker 1:

Welcome to Tech Travels, hosted by seasoned tech enthusiast and industry expert Steve Woodard. With over 25 years of experience and a track record of collaborating with the brightest minds in technology, Steve is your guide through the ever-evolving world of innovation. Join us as we embark on an insightful journey, exploring the past, present and future of tech under Steve's expert guidance.

Speaker 2:

Welcome back, fellow travelers, to another exciting episode of Tech Travels. Today we're excited to journey into the heart of technology and innovation with one of the most brilliant minds in the industry, Ian Harris. Having shaped product and technology strategies for leading global firms, Ian's insights turn complex concepts into accessible knowledge, and his work is pivotal as we navigate AI, content creation and the business frontier. Ian, it's fantastic to have you here. Could you give us a glimpse into your journey and what tech innovations you find the most thrilling today?

Speaker 3:

Steve, great to be on the program.

Speaker 3:

I really appreciate you having me along.

Speaker 3:

I've spent many years helping companies build platforms, helping them build the technology that delivers their services to their customers. As you can imagine, a lot of that is translating what businesses need into technical tasks so that engineers can build the thing the business requires, and really understanding, from a business perspective, what their requirements are, so that when we actually deliver something it gives them the real benefits they need, rather than what they think they need.

Speaker 3:

And so I think the thing I've gathered over the years is that there's often a disconnect between what technology can bring and what businesses really need, and I think we're at a very interesting crossroads right now, especially with the emergence of these new AI technologies, as we're all trying to work out what these capabilities mean and what they mean for business. What could we use it for? How is it going to affect jobs? What will it change in the economy? I think that if your listeners have enjoyed a few of your previous podcasts, they'll be up to date on what this AI thing is all about, but I think it'd be great to explore today what this means for us in business, and even in our personal lives and our jobs. How is this going to affect us on a day-to-day basis?

Speaker 2:

Yeah, absolutely, and I think the intersection of business needs and technological capabilities is a pressing topic, especially with AI rapidly advancing. There's a lot of debate around its impact on the workforce. Just recently I saw a report from the Stanford Social Innovation Review, for instance, saying that AI could significantly reshape the job market. So let's delve into this and demystify what AI really means for businesses and for individuals in practical terms, and how it's shaping jobs, the economy and our daily lives.

Speaker 3:

Yeah, it's a good question. So let's just take a step back for a moment. When we talk about this AI revolution, I'm seeing a lot of comparisons between this and the Industrial Revolution. So let's just be clear about what we're talking about here. What do we mean by the Industrial Revolution? What do we mean by the AI revolution?

Speaker 3:

If we go back a few hundred years, to roughly the mid-1700s to the mid-1800s, we had a fundamental change in the way we as a society decided how we want to work. The transformation was really from a population largely engaged in agriculture, providing food and small amounts of services to local people. You had farmers providing food for locals, and not much more than handiwork done at home, making clothes, making shoes, making things on a one-by-one basis for people that I expect most of these artisans would have known, the people they were selling their products to. That's where we were. Then came the Industrial Revolution, which was brought about by some fundamental technology changes, and that's where I think the link comes in. We went from making a small amount of something for a small number of people to making a large amount of something for a large number of people, and so we went from a workforce distributed across an entire country to moving a lot of people into cities, because, despite the technology, we needed a lot more people in factories making these things, and then being able to distribute them to more people in more places, a much more global audience rather than just the local people you knew. So that's our Industrial Revolution. The technology changed, but it was a fundamental shift in how we as humans on this planet operated, and it took place over probably 60, 70, 80 years. It took a long time from beginning to end.

Speaker 3:

Compare that to what we're looking at with the AI revolution: it's only been a few months, maybe a year, that we've had this capability. We've been talking about AI for many years, and it's certainly been helping us for some time. It's helping us with translations, it's helping us with search, it's helping us with some fundamental tasks that, broadly, humans are not very good at. It's very difficult to do translations, it's very difficult to do search, it's very difficult to gather lots of information from lots of different places. But now we're at a point where AI is starting to do things that seem a lot more human-like, things that humans actually think they're pretty good at on an average basis, and it can do them at a level that's actually not too bad. And so there are two areas that are probably worth talking about there.

Speaker 3:

The first is large language models and their ability to manipulate and generate text, actual words that make sense in a sentence on particular topics. The second is graphics: being able to create new, novel graphics, almost photo-realistic, from a textual description. And now we're starting to see actual videos being made, stringing those individual frames together and creating a video as well. So we've got this situation now where we're starting to get technology that feels much more like a human doing human stuff. Even a few years ago, if you said that you could type a few words in and get something to generate a poem, or a couple of paragraphs of an essay or a news article, or to summarize text, you'd be hard pressed to find something that could do it, do it well, and fool a human into thinking some other human had actually done it. But now we're at that point.

Speaker 3:

So what does that mean in terms of a revolution? What does that mean in terms of our workplace? Well, if we are someone who generates text for a living, like a content writer, and we now have the ability to type a few words in and get, in a few seconds, a roughly equivalent sort of text that's good enough for many purposes, then yeah, absolutely, that's going to change the way I do my work, because as a content writer I'm directly threatened by something that can do pretty much what I do, or at least something some people think is a broad equivalent. Okay, so that's interesting.

Speaker 3:

As a graphic designer, or if I have a company that builds logos or pictures, or generally puts out ads and needs some images, instead of hiring a photographer or employing a graphic designer, maybe I can just type in a few words and get a bunch of images that are, yeah, actually pretty good, close enough for what I need.

Speaker 3:

And the difference there is that we can now do that at a cost that's a tenth, a hundredth of what it would take to get a human to do the same thing. So I think that's where people are getting the sense that there are some threats to jobs, some sort of revolution happening. Now, as for the equivalent to the Industrial Revolution, which took place over many, many years and fundamentally changed our society, I don't think it's at that level in terms of a fundamental shift in what we do everywhere, and I think there are a few different analogies that are better at describing what we're doing with AI. Steve, do you know the fairy story of the elves and the shoemaker? Do you know that story?

Speaker 2:

No, no, I don't. Please tell me.

Speaker 3:

If you find a kid's book with fairy stories in it, in this particular story the shoemaker and his wife are down to their last piece of leather. They're kind of done, they're pretty poor.

Speaker 3:

They put it out one night and, for reasons unknown, a couple of elves rock up and make a beautiful pair of shoes, perfectly stitched, because the elves are tiny. The shoemaker and his wife wake up the next morning, find this beautiful pair of shoes and put it in the window, and someone walks by and goes, oh my goodness, what a beautiful pair of shoes, and pays a lot of money for it. Then they buy two pieces of leather, leave them out the next night, end up with two pairs of shoes and sell another pair of beautiful shoes, and over a few nights or weeks, it's not really clear in the story, the shoemaker and his wife go from being poor and destitute to doing very well for themselves. And at some point they go, hang on a second, let's work out what is going on every night. So they stay up and watch what happens and realize that these elves have been making these shoes for them.

Speaker 3:

And so they bust in and want to say thank you, so they make some clothes for the elves, who don't generally get clothes made for them. The elves are very thankful and they go off on their way. And so you've got this situation where the shoemaker is using a magical force to produce better goods than they could on their own. If you look at it as a kind of analogy of the Industrial Revolution, we're going from just being able to do things with your hands to using some magical force, some industrial mechanism, to be able to do something more productive, or faster, or make more of them, for sure. But it's actually a better analogy for the AI revolution that we're seeing at the moment, and that is...

Speaker 3:

I think that what AI gives us, and computer programming is a good example, is a kind of force multiplier: whatever energy you put into it, you can get even more out of it than you could before, because something is giving you a boost.

Speaker 3:

And so when AI is used for computer programming now, instead of having to sit there and type every character the programmer needs in order to convey to the computer what it needs to do, or indeed copying a chunk of code off the internet and then customizing it to your requirements, which is a bit more efficient...

Speaker 3:

But then, if you can tell it, I need a piece of programming that does this particular task for me, using this particular language, and here are the things you need to know, it goes and types it up for you.

Speaker 3:

Then I can grab that chunk of code, which is basically completely customized to my requirements, and move on to something else. Now I'm dealing with the whole program at a much higher level, without having to worry about individual variables and lines of code and even the syntax. I don't have to worry about that anymore because it's been taken care of for me, and so it's much more like a force multiplier in that respect. It gives me much more power. I'm still in control of the program, I still have to make sure it works, I still have to pull it all together and make sure it makes sense, and I still have to understand the business requirements. But now I can get so much more done in the same period of time, because I have this magical force behind me doing something that wasn't there before. And so I think that's an interesting comparison with what we're talking about here in terms of the AI revolution.
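To make that workflow concrete, here is a minimal sketch of the kind of AI-assisted programming Ian describes, written in Python against the OpenAI chat completions API. The model name, the prompt, and the helper being requested are illustrative assumptions, not anything from the conversation; the point is simply that the programmer states the requirement at a high level and then reviews what comes back.

```python
# Sketch: asking a language model to draft a small helper function, then reviewing it.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

requirement = (
    "Write a Python function normalize_emails(rows) that takes a list of dicts "
    "with an 'email' key, lowercases and strips each address, and removes duplicates. "
    "Return only the code, no explanation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model could be used here
    messages=[{"role": "user", "content": requirement}],
)

draft = response.choices[0].message.content
print(draft)

# The human stays in control: read the draft, run the tests, and only then commit it.
# The model is the force multiplier; review and integration are still the programmer's job.
```

The design point mirrors what Ian describes: the programmer still owns correctness and the business requirements, while the character-by-character typing is delegated.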

Speaker 2:

Yeah, absolutely, and I think the potential of AI to streamline our own workflows and enhance productivity is really incredible. In my own role, I've found AI to be a very powerful tool to automate complex tasks without needing a large team of developers by my side, and I think it's really about harnessing AI to achieve a greater sense of velocity in our work. Yet as we integrate these capabilities, it keeps coming back to my mind that the ethical considerations continue to come to the forefront: how do we balance efficiency gains with the potential impact on jobs, particularly in creative fields? If you were to have a critical dialogue that intersects the technical with the human aspect of AI, where do you see this conversation heading, especially in terms of ethical use and its impact on the workforce overall?

Speaker 3:

Yeah, that's a good question. As we're barreling down this course of trying to work out what we do with this technology, I think there have been a few interesting articles written about how we handle this as a society. What does it mean for us in terms of the choices that we make with this tool that we've been given, that gives us these magical capabilities? How do we, as a society, make decisions that are fair, that are beneficial? Again, it comes down to how we make decisions about our society. But there have been a couple of good examples where I think it can be used in a great way and we can get many benefits as a society.

Speaker 3:

I don't think anyone really wants to go back to having farms and making clothes on an individual basis. I don't think we're going there. So are we going to step back from AI and all the benefits that it gives us? Well, jobs are going to change, or at least tasks are going to change within a job. You might still be a computer programmer, but how you go about it will change; there's no way you would choose to go about it much more slowly. Any company is going to say, well, if I can use an AI-powered programmer, and by that I mean a human who's got the benefit of AI, and I can get maybe 30, 40, 50 percent more done for the same amount of expenditure, then you'd be crazy not to take advantage of that particular capability. It becomes a little more nuanced, I think, when it comes to art and creativity and what that means, and the recent negotiations in Hollywood with respect to AI and its use in films are a kind of starting point for how we decide to treat that. But there's always going to be a push to find the most efficient way of doing something with the tools you have available. And so if we keep that in mind, if we can fundamentally get more done with less, then what does that free people up to do?

Speaker 3:

That is not what they're doing right now, and I think we have to break apart the two problems there. One is: I love what I'm doing and I want to keep doing it. That's probably going to change, and you're probably not going to have much choice about that. But what possibilities does it open up for new things that I could be doing that were not possible before? I think it'll be exciting to see what artists do with AI, again as a force multiplier. What can I do as an artist that I couldn't do previously, now that I have these technology capabilities that enable me to do new and exciting things that weren't available before?

Speaker 3:

But as we look at the legislation that's coming into place in the EU, for example, in terms of how we deal with data now, in Europe they're very passionate about the privacy of personal data and how it's used, and particularly about copyright. So there are probably some areas on the edge of AI at the moment that we need to, not rein in exactly, but control a bit more carefully in terms of what data is used for teaching AI, and do that in an ethical way. I think it's very important that if you're producing work and it's being used for a business purpose by someone else, you should be compensated for the thing that you're producing. That sounds very fair, that sounds very reasonable. You shouldn't just be able to take stuff and do whatever you like with it without compensating the person, if they hadn't made it available publicly, for example. So I think that's an important part.

Speaker 2:

Yeah, Ian, I think you've raised compelling points around AI as a force multiplier, especially in creative fields, and as AI-generated art becomes more prevalent, distinguishing between human-created and AI-generated work really becomes crucial. So your thoughts on digital signatures are intriguing. Could this be the key to preserving individual identity, and possibly ownership of a person's identity, in the digital realm? How do you see this playing out, particularly with the increased discussions around data ethics and copyright in the age of AI?

Speaker 3:

Yes, it's a very good point, and I think we're seeing a little more emphasis on that now. Companies like Midjourney, and some others to an extent, are actually embedding watermarks into the images so that we can detect whether an image is AI-generated or whether it's been done by a human. And Meta (Facebook), for example, is now implementing a process by which they analyze images as they're uploaded and put a tag on them if they've been generated by an AI source. That's good, because I think we need to be honest as well. If it's generated by AI and it's not something that is actually real, there's a fine line between a nice, pretty picture of an elephant eating a banana, and the next step: some known person doing something they wouldn't want to be seen doing, but there's a picture of it.

Speaker 3:

There's a line there where you really need something to tell you whether the photo of the famous person doing something they wouldn't want photographed is genuine or not.

Speaker 3:

That's very important.

Speaker 3:

If it's a fantasy picture that looks interesting and is pretty, it's probably not so important. But there is a line there, especially because our society, our politics, our decision-making are based on the information that we're given, and so knowing that it's genuine is, I think, the important point.

Speaker 3:

So you're absolutely right, we need a way of being able to determine whether an image is genuine or not, and companies are starting to take that into account now. We're also seeing it from the other side: camera manufacturers are now starting to put watermarks into their photos, the other way around, so that you can tell it's an actual photo that was taken through a lens onto a sensor and recorded onto a disk, and we know it actually came from a camera capturing a real thing that happened out there in the world. So we're seeing it from both sides now, both in terms of being able to analyze AI images and tell that they are AI, and in terms of confirming this was a genuine photo, taken in the real world, of something that was in front of the camera.
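As a rough illustration of the signing idea behind these provenance schemes, here is a minimal Python sketch using the cryptography library. Real systems such as C2PA Content Credentials embed a signed manifest and a certificate chain inside the file itself; this toy version, built on assumptions rather than any vendor's actual format, only shows the core step of signing image bytes with a private key (as a camera or a generator might) and verifying them later with the matching public key.

```python
# Toy sketch of sign-and-verify for image provenance (not an actual C2PA implementation).
# Assumes the `cryptography` package is installed.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key would live inside the camera or the generation service.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"...raw image data..."        # placeholder for the real file contents
signature = private_key.sign(image_bytes)    # would be shipped alongside or inside the image

# Anyone holding the public key can later check that the image has not been altered.
for candidate in (image_bytes, image_bytes + b" edited"):
    try:
        public_key.verify(signature, candidate)
        print("signature valid: image is as originally signed")
    except InvalidSignature:
        print("signature invalid: image was modified or did not come from this source")
```

The same mechanism works in both directions Ian mentions: a camera maker can vouch that a photo came off a real sensor, and a generator can label its output as synthetic, as long as the verifying side knows which public keys to trust.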

Speaker 2:

It's interesting. It's like the more we crave artificially generated things, on the flip side, we're still searching for something real and genuine and actually created by a person.

Speaker 3:

Yeah, and this is the kind of funny thing about humans. We have such a great imagination and we love that kind of fantasy of being able to create anything, but we also crave that connection to reality, and whether this is something I can trust is also very important to us. I think you bring up some very good points about where the subtleties, and the dangers in fact, with respect to AI are going to be as we try to work out what is real. What does it mean? Where are we placing our trust in terms of the images we're seeing, or even the text we're seeing? If I can produce a persuasive text that tells me about a topic I'm interested in and I'm convinced by it, but it came from an AI, is that different from a human-generated text that persuaded me? There are some very interesting subtleties there in terms of how we get information and where we place our trust as well. That's a very good point.

Speaker 2:

Talking about trust, I really have to bring this up: I saw a news article that Google was going to start funding their own kind of private laboratory for their own use.

Speaker 2:

That way they can use this lab to look at, evaluate and apply frameworks and standards around their AI models.

Speaker 2:

And I'm all for regulation, I'm all for frameworks, I'm all for prescriptive guidance, but I almost think there's not going to be enough transparency in terms of what type of framework is really being applied here. Across cloud platforms, across the different entities that can create AI, there are so many different models out there, and everyone's got an ethical AI framework or some sort of AI framework you see in the industry; there are so many of them that you can always pick and choose which one you want to apply. But is there a way for us to look at the industry as a whole and put some sort of broad governance and framework around AI, so there's more transparency, more trust that we can build into it? Not just, hey, well, we give everything to Google, so we have to trust them.

Speaker 3:

Yeah, exactly. Well, it's a good point. It's fine for Google to find out about their own models themselves, but it's not clear exactly what they're exposing about what they found, and the challenge is manifold. For a large language model, if you ask it a question, it will give you an answer, but if you ask it the same question again, because it starts off with a random number, it's going to give you a different answer. So there's no cut-and-dried direct response where you can say, well, it said this and so that's the answer; you kind of have to ask it a lot of times. Add to that complexity the question of what you can ask it. Are we interested in politics, in facts, in a legal argument? There are so many areas you could delve into, so it's going to be really difficult to work out a subtle framework that works in all cases. But I think, as technologists, as a society, one of the important things we need to do is work out what the implications are of the things that we're doing and work back from there. So if I'm now creating text that is going to be influencing people, what are the principles I would normally apply in, say, journalism? I'd want a couple of sources, I'd want to be able to back it up, I'd want to have some sense, at least for myself, that it was true. How can we build that in, to ensure that the things large language models are generating have some sense of believability and trust in them? Or, if it's the case that we can't trust them at all and we just have to treat them like a fiction writer, that's fine too, but we need to decide where the line is in terms of where we place our trust in the results from these particular engines.
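Ian's point about randomness is easy to demonstrate: because generation is stochastic, a single answer tells you very little, so evaluations typically ask the same question many times and look at the distribution of answers. Below is a minimal sketch of that idea in Python, using the OpenAI client as one illustrative backend; the model name, the question and the simple tally are assumptions made for the example, not part of any framework discussed in the episode.

```python
# Sketch: ask a model the same question repeatedly and inspect how consistent the answers are.
from collections import Counter
from openai import OpenAI

client = OpenAI()
question = "In one word, yes or no: is the Great Wall of China visible from the Moon with the naked eye?"

answers = []
for _ in range(10):
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumption: any chat model would do
        temperature=1.0,       # nonzero temperature keeps the sampling stochastic
        messages=[{"role": "user", "content": question}],
    )
    answers.append(response.choices[0].message.content.strip().lower())

print(Counter(answers).most_common())
# A real evaluation would also check the majority answer against a trusted source,
# since agreement across samples is consistency, not truth.
```

That last comment is the crux of the transparency question Steve raises: publishing how often a model agrees with itself is easy, while publishing how often it is right requires an external ground truth.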

Speaker 3:

So I think it's great that they're doing experiments on their own products. That's fantastic, and it's good that they know what they need to do to fix them. That's very important. But I think you're right, the results need to be transparent. What did we find from doing these tests? OK, so we found that 87% of the time it told the truth and 13% of the time it made things up. And the difficulty then, of course, is...

Speaker 3:

So what does that mean? Again, we seem to have different expectations of computer systems than we do of humans. If I have an accounting system, I expect the numbers to be correct, I expect them to be right every time; there's no doubt about my expectations there. But we all know that humans make mistakes, we're influenced, and we have devious, often duplicitous, methods of achieving our aims. We are not entirely trustworthy ourselves, let's be honest. So how do we translate that expectation we have of humans, whom we expect to have opinions and feelings and make mistakes? In some ways we're actually setting the bar higher for these artificial systems. We want them to be true all the time, or at least to know when they might not be true. But we need some methodology, some system, that is similar in some ways to the way we treat humans.

Speaker 3:

So, Steve, I've listened to many of your podcasts. I think you speak with interesting guests, you're obviously well respected in the industry, and so I put you in a position of trust in terms of what you say. If you said something new to me, I'd think, OK, the other stuff I heard from him made perfect sense, so I expect what he says now is true as well. We build trust over time, so we need a similar system, in a transparent way, for these artificial systems to build that trust over time as well.

Speaker 2:

It's interesting. I really want to transition into this segment around AGI, and I want to talk about what some of the technological challenges and breakthroughs are that we really need in order to achieve AGI, and what timeline we foresee for this happening.

Speaker 3:

Yeah. By AGI we mean an all-knowing, all-perfect intelligence that can answer anything and give us everything we need all the time. It's the kind of ultimate AI goal, and the challenge is that the companies building these models have kind of run out of data. These companies came about because humans as a whole have basically taken everything they know and put it on the internet. Everything we know is out there, it's all publicly accessible, and so effectively they were able to slurp it all up, shove it into a very large computer, and get out a database that enabled them to generate text. That's a really simple way of describing what happened with large language models. Now the challenge is that they've slurped up all the text there is. There's not much left, other than private collections of material, for these large language models to learn from.

Speaker 3:

So what do we do next? Well, we can get large language models to generate text, so maybe we can learn from that, and we're starting to see artificial text being used to train AIs. Of course, it's not as good quality, because it's not as good as humans yet, but that's starting to have an impact. I think the pace of change will keep up, though, because now we're seeing companies doing high-level negotiations with organizations that have more data. We've just seen an agreement with Reddit, for example. We're not sure who it's with just yet, but they've agreed to sell the Reddit data to an AI company so that it can slurp up all the material in Reddit, which is generally regarded as quality because it's real humans answering real questions. So that's an example of these AI companies now trying to find every other corner of data they can get access to, and I think what we'll find is that for other companies holding large quantities of data and information and text, it will be hard not to negotiate a good deal with one of these companies to hand over the rest of the data available to humans.

Speaker 3:

And then we need more computing power. As every year goes by, we get more computing resources for less cost, and it's because of that that we're at this point in 2024 where we can run these massive models at all. We still need more computing power to get further. So I think we're at the point now where the best language models we've got are doing a very credible job, but they're not quite there yet. There are some areas where they're better than humans and many areas where they're not. But I think within the next two to three years we'll be at a point where they will be indistinguishable from different styles of humans.

Speaker 2:

Incredible. What are your predictions for the next five years in terms of how people can expect AI to show up in their normal everyday lives? What can they expect to see, and what should you be advising businesses, from a business perspective, on their outlook and how they're going to adopt AI into their platforms?

Speaker 3:

Yeah, I think the big advantage we're going to see from AI is that things that were really expensive and very hard to do before will become a lot easier for a lot more people. That segues nicely into a project I'm working on called Pulse Podcasts, where we are creating podcasts for companies that would not normally be able to afford them. Steve, you know what it's like to make a podcast. You've got to record it, you've got to edit it, you've got to have a script, or you have guests and you've got to spend time recording, and then you've got to get the whole thing together and package it up. There's a lot of work involved in producing a podcast. The process we're going through now enables us to take existing content that companies are creating for their marketing purposes, like newsletters or blogs, create scripts using large language models, and then use some of the best voice-over AI engines to create the actual voices. We're now at a point where we can create a podcast from an existing set of marketing content for about a tenth of the cost it would normally take to make the equivalent podcast. That's a good example of going from something that was just not economically feasible previously to, okay, now this is within a cost framework that works for me and my business. Whereas previously some companies would not be able to afford a content writer to write copy for their ads, or be able to create beautiful images, or afford a top-end photographer to take pictures, now they can do that at a much lower price point. So we're going to see the democratization of content and art and words, and the great thing is that it will be much more accessible to more companies in more different ways. And instead of having to reduce costs by outsourcing call centers to different countries, we'll be able to customize our chat models to understand my business deeply, to make mistakes less often than humans, and to be more responsive.
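To illustrate the shape of that pipeline, and only the shape, here is a toy Python sketch that turns an existing piece of marketing text into a short spoken episode. This is not Pulse Podcasts' actual implementation; the file name, model names, voice and prompt are all assumptions, and any comparable language-model and text-to-speech services could stand in.

```python
# Toy sketch of a content-to-podcast pipeline: an LLM drafts the script, a TTS engine voices it.
from openai import OpenAI

client = OpenAI()

# Existing marketing content, e.g. a newsletter or blog post (file name is an assumption).
with open("newsletter.txt", encoding="utf-8") as f:
    source_text = f.read()

script_response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model
    messages=[{
        "role": "user",
        "content": "Rewrite this newsletter as a conversational two-minute podcast "
                   "monologue with a short intro and outro:\n\n" + source_text,
    }],
)
script = script_response.choices[0].message.content

# Assumption: using the OpenAI text-to-speech endpoint; any voice-over AI engine would do.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=script)
with open("episode.mp3", "wb") as out:
    out.write(speech.content)  # raw audio bytes from the TTS response

print("Draft episode written to episode.mp3")
```

Even in this toy form, the cost structure Ian describes is visible: the expensive human steps, scripting and voicing, collapse into two API calls, while editorial review of the script remains a human job.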

Speaker 3:

And I'm sure you've been on the other side of a chat with some company where you're trying to return something or get some feedback about a flight or some kind of interaction, and you know that the person on the other end is clearly dealing with a dozen different chats at the same time, cycling between them, trying to give answers, and you give them a response and they come back two minutes later. We're going to see a lot more service-level improvements in those interactions with systems. Companies are always looking to reduce costs, they're always looking to make the business more efficient, and AI gives them a chance to take costs out of their business in one area. And then again, as a society, things will change. There'll be new jobs, there'll be new opportunities, and we'll move from spending time and money on labor to spending time and money on making efficient systems, improving performance, and being able to grow the business by improving the quality of services being delivered to customers. So I think that's the important thing to consider.

Speaker 3:

In terms of AI, it does enable us to reduce costs. It will, in many cases, change the way people are doing their jobs, but that frees up money to spend in other areas where we can actually grow a business, improve it, and take new ideas and turn them into reality. So I think, Steve, we're at a very exciting time, as all the companies are looking at how they can use AI and what it means for their business. The key takeaways, I think, are that it's a great force multiplier for things you're already doing, and there are areas you can get into that might previously have been way too expensive for you, but can now be done at a much lower price point, enabling you to conduct business in new and exciting ways.

Speaker 2:

Yeah, totally agree, I couldn't agree more. Wow, Ian, that was absolutely incredible. You're a fountain of knowledge and I could go all day on this. It's been an absolute pleasure having you on Tech Travels today. Your insights and experiences have really shed a lot of light on AI's pivotal role in shaping our future, especially in content creation and business strategy, and I just want to say thank you for taking the time to share your wisdom with us and our listeners. We're all looking forward to seeing how this is going to impact all of us in the next few years. Your insight is greatly appreciated. Thank you so much for coming on the show.

Speaker 3:

Thanks, Steve. It's been a pleasure to be here. Thank you.

Exploring AI Impact on Business
Navigating Ethical AI Development and Regulation
Challenges of Trust in AI
Impact of AI on Content Creation