Preparing for AI: The AI Podcast for Everybody

Office Work: How future-proof are executive roles?

March 19, 2024 | Season 1, Episode 3 | Matt Cartwright & Jimmy Rhodes

Will the AI revolution render your white-collar job obsolete, or can it actually catapult your career to new heights? Join hosts Matt Cartwright and Jimmy Rhodes as they dissect the seismic shifts AI is bringing to the office work landscape. This episode peels back the curtain on the murky decision-making processes of AI, particularly the challenge of misinformation in large language models. As we traverse the globe from EU regulations to the evolving laws in China and the US, we arm you with a critical understanding of AI's integration into professional roles, be it research, copywriting, or editing, and the profound implications it brings to work we once thought was exclusively human.

Chatbots are already outpacing receptionists and algorithms are augmenting the C-suite; displacement of many executive roles is not as far off as you might think. We question the slow churn of AI adoption in industries where you'd expect more rapid progress, like the chatbot interfaces of major financial institutions. Yet not all roles quiver before the march of automation: discover why jobs in social work and, surprisingly, HR management remain steadfast in their human essence. Meanwhile, CEOs candidly admit that their positions are not immune to AI's reach (although no doubt they will find a way to survive!), raising the question: how high up the corporate ladder will the wave of automation climb?

To wrap up, we offer some advice for optimizing AI use in your workplace, from crafting compelling prompts for personalized content to choosing the right AI models—whether you're inclined towards OpenAI's ChatGPT, Google's Gemini, or Anthropic's Claude. We also lay out the blueprint for navigating the AI landscape responsibly within your organization, balancing transparency with the need for customized solutions. 


Speaker 1:

Welcome to Preparing for AI with Matt Cartwright and Jimmy Rhodes, the podcast which investigates the effects of AI on jobs, one industry at a time. We dig deep into barriers to change, the coming backlash, and ideas for solutions and actions that individuals and groups can take. We're making it our mission to help you prepare for the human and social impacts of AI.

Speaker 2:

So welcome back to episode three of Preparing for AI. I'm Matt Cartwright and I'm Jimmy Rhodes, and today we are going to be looking at white collar office work, which is not really an industry as such, more of a general look at administration and office jobs. We just wanted to note before we started that we're not necessarily going to stick rigidly to a particular industry, so we may cross into things which would come under other industries, and later in the podcast we will probably focus in on some of them. But we thought that going a bit broader to begin with, so this episode about office jobs and then the next one looking at robotics and manual work, will give a really good overview. And before we started I thought it was worth just noting that this week we have seen the EU pass the AI Act, which is the world's first comprehensive AI law.

Speaker 2:

There are still steps to take. This is the EU, right, so they obviously legislate more, and before everybody else. But this is a start, and it shows, I guess, that people in power are standing up, taking note and taking a stand on the importance of getting regulations in place. So I thought it might be interesting, with the EU being where they are, just to summarize what I think are probably the other main players in AI. China is believed to be developing its own AI laws, and in July last year the Cyberspace Administration, along with several other authorities, released their Interim Measures for Generative Artificial Intelligence Services, which address potential harms in misinformation, algorithmic discrimination and infringement of personal data and copyright. In the US, in late May 2023, they enhanced their AI governance strategy by releasing a revised National AI R&D Strategic Plan, and this was followed by President Joe Biden's Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, signed on October the 30th, 2023.

Speaker 2:

In the UK, the government continues to push for a light touch approach and to distinguish itself from the EU's position. However, there have been some recent developments. The government recently published its response to its own AI white paper consultation and outlined a future roadmap for the development and enforcement of AI regulation in the UK. Additionally, the Artificial Intelligence (Regulation) Bill was introduced to the UK's House of Lords in November of last year, and this bill aims to regulate AI, proposing the creation of an AI Authority to ensure that regulators take AI into account and are aligned in their approach. So that's where we are in terms of regulation. I guess, in line with other episodes, Jimmy, do you want to give us an update in terms of technology and AI, and where you think we are now in terms of office work?

Speaker 3:

Yeah, I will do. Just something interesting that popped out while you were talking: one of the things that came up was misinformation. We're going to talk a bit about misinformation today and how large language models, because of the way they're trained and because of the way they work, can give out incorrect, inaccurate information or misinformation. So we can discuss that later on in the show. I'm really curious as to how they're going to actually apply that regulation, because it's pretty tricky. These AIs are kind of black boxes that even the designers and developers don't necessarily fully understand, so tuning things like misinformation out of them might be more difficult than we think.

Speaker 2:

Maybe some people are not aware, but even with the current technology of large language models, no one really knows how it works. They know what's been put in there, but no one really understands how it works, which is quite frightening, to be honest.

Speaker 3:

Yeah, exactly. So, for anyone who doesn't know how these models are trained: these artificial intelligence models are neural networks where we've designed the framework or the architecture, but then we just train them on all the information that's out there. In the case of large language models, that's basically all the books that have ever been written, everything that's on the internet, everything that's ever been written on Reddit or even 4chan, and they essentially train themselves. And when you turn off the training and you get to the inference stage, which is when you're actually using something like ChatGPT, that's called inference, then the people who designed the AI don't know what it's going to say. So there's a whole further step of actually looking at what the model is outputting and trying to get it to be more ethical, grading its responses and things like this, which is also pretty time consuming. But on the misinformation point, it turns out that it's really difficult to train some of these behaviors out of these models, and there's something called jailbreaking, which is getting these models to do things that they're not supposed to do, and it continues to this day. Even though GPT-4 has been around for around a year, people can still find backdoors to get it to do things that it's not supposed to and say things that it's not supposed to. So I think there are going to be significant challenges around meeting some of those regulations. So, good start to the pod.

Speaker 3:

So, yeah, stepping back a little bit, I suppose I should actually explain what kind of tools we're talking about, because we're talking about office work. So we're talking about things like back office jobs: administrative assistants, accounting clerks, a lot of white collar roles that involve writing, analysis and presentations, things like research analysts, copywriters, editors. Those are the kinds of roles we're talking about today. It's obviously going to cross over a little bit with some of the jobs we're going to talk about in the future, like journalism and law, but for today's episode we're just focusing on all of those, basically anything that's office work.

Speaker 3:

So the main thing for those roles is, as I say, large language models. What a large language model is, is a model that's been trained on all of the information that's on the internet. ChatGPT, or GPT-4: unless you've been living under a rock for the last year and a half, you've probably heard of it. Those are large language models, and what they do is you can type in a prompt and the model will go and look at all the information that's inside it, so essentially all of the written, recorded information in human history, and it will predict what the next words should be, based on your prompt and based on the knowledge that's embedded inside it. So, for example, you can ask it to write you a story, and you can give it a style of story. You can ask it to write you a children's story about the Grinch, or any kind of story title you can come up with, and it will happily go away and write you a novel story. Obviously, it's going to be based on all of the information that's already inside it, which we'll come on to later with some of the issues that have been raised around copyright.

Speaker 3:

But effectively that's what it can do, and in terms of how it can apply to office work, you could, for example, get it to write your appraisal, or you could get it to write an appraisal of somebody else, and you have to feed it a certain amount of information. It's rubbish in, rubbish out. So you have to be able to craft a decent prompt to get good information out of a large language model. But if you can learn how to prompt one, then you can actually get very good quality output from it. It usually needs a little bit of tweaking and a little bit of refining, but it can save an awful lot of time in these kinds of analytical, back office jobs, what you'd call office jobs.

Speaker 3:

You can get it to analyse huge amounts of data really quickly. So that's something where, for example, you have to do some research: you have to read something on the internet, you have to read something from books. You can put that whole thing into a large language model and get it to spit out the key summary, or you can get it to summarise it with respect to a specific piece of work you're doing. So, for example, if you're writing a new piece of policy guidance, you can enter the parameters that you're interested in for your policy, and you can copy and paste in all of the previous policy, and it will be able to reshape that into whatever kind of policy you're building.

Speaker 2:

But how much can you trust that? I mean, there's something called hallucination, right, which I haven't seen too much of, but I have seen examples where even references have been made up.

Speaker 3:

Yes. So, as with anything on the internet, you have to double check things.

Speaker 2:

This term hallucination, it is literally making things up. So this is not about the quality of data, because you get that: obviously it will always be limited by the data that's there. So if you have a model trained on the open internet, it's limited by what's on the internet. If you have a smaller model, it will be limited by your own data. But this idea that ChatGPT is just making things up, is that something to worry about? Because you're going to get caught out pretty quickly if you're summarizing or researching something and you're being fed back something that isn't true. It's not a case of bad information, it is just not true.

Speaker 3:

Absolutely. The most famous example of that was a lawyer in the US who lost their job because they used ChatGPT to basically write notes for their case, and it made up a reference to a previous court case which had never happened, and he lost his job for that. Obviously there are massive limitations with it, and in that particular case the individual in question was not very smart about the application, but I think that's like any tool that's available. You could refer to Wikipedia, and ninety-nine point something percent of it is accurate, but Wikipedia itself is not making up information.

Speaker 2:

People are putting information in. You know, Wikipedia is open for people to input stuff and then to check it.

Speaker 3:

And ChatGPT is trained on Wikipedia.

Speaker 2:

Yeah, but if it's making stuff up, how is it making stuff up? How is it hallucinating? I mean, where does this come from? Or is this a sign that it already has its own consciousness?

Speaker 3:

So the reason for hallucination, and I'm not actually an expert in this area, but effectively, if you get into how these language models actually work, what they're doing is predicting the next word based on all the previous words that have come before. They call them tokens; they're basically words. It's going to base it off all the information that's inside it, and if you ask it to be factual, it will try to be factual. But the more it gets out of its, for want of a better word, comfort zone in terms of what information is inside it, that's where it might make things up, because it's always going to want to please, in the sense of wanting to provide the next word, and these models don't necessarily like saying "I don't know the answer". Maybe that's a little bit of a human-like trait. They don't want to not spit out the next word, they want to continue the conversation, and so that's when I think they get into creating spurious references.
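For the curious, you can actually watch this next-word prediction happen through the API. Here is a minimal sketch, assuming the OpenAI Python SDK; the model name is a placeholder, and log-probability support varies by model.

```python
# A sketch of next-token prediction: ask the model for one token and the
# top candidates it considered. Assumes the OpenAI Python SDK (openai>=1.0);
# the model name is a placeholder and logprobs support varies by model.
import math

from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "The capital of France is"}],
    max_tokens=1,
    logprobs=True,
    top_logprobs=5,
)

# Each candidate next token with the probability the model assigned to it.
for cand in resp.choices[0].logprobs.content[0].top_logprobs:
    print(f"{cand.token!r}: {math.exp(cand.logprob):.3f}")
```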

Speaker 2:

We should probably say that this term hallucination is not something we're coining ourselves. It's the technical term for when a model is, like we said, just hallucinating, inventing something that's not there.

Speaker 3:

Yeah.

Speaker 3:

So, in terms of the way I use GPT, and I think this is why, if you haven't already had a play around with a large language model, whether you're going to use it for your work or not: if you work in one of these office work type roles, I think these tools are going to start to come into the workplace in the future, if they're not already.

Speaker 3:

Some corporations and enterprises are already applying them and already allow the use of them.

Speaker 3:

So this is where I think it's a good idea to familiarize yourself with and understand hallucination: start to actually use large language models, something like ChatGPT or one of the other models, and understand how it works and what its limitations are. Because if you're going to apply it at work, you need to know where that line is, and be aware of how to fact check it and when it might do things like hallucinate. One thing I would say is, in terms of my personal use, instead of just asking a large language model to freestyle something, which is where it does tend to become more fictional and hallucinate more, give it a load of information. For example, in the example I gave before, if you give it a whole book to read and then get it to summarize it, it will stay on task and it won't usually hallucinate in that situation, because you're giving it a very specific task and very specific inputs.
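To make that grounding tip concrete for anyone who wants to try it programmatically, here is a minimal sketch using OpenAI's Python SDK. The model name and file path are placeholders, not anything mentioned on the show, and any chat-capable model from the providers discussed works the same way.

```python
# A minimal sketch of document-grounded summarization, assuming the OpenAI
# Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
# "report.txt" and the model name are placeholders.
from openai import OpenAI

client = OpenAI()

with open("report.txt", encoding="utf-8") as f:
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works the same way
    messages=[
        {"role": "system",
         "content": "Summarize only the document provided. "
                    "If something is not in the document, say so."},
        {"role": "user",
         "content": f"Document:\n{document}\n\nSummarize the key points."},
    ],
)
print(response.choices[0].message.content)
```

Giving the model a specific document and telling it to stay within it is exactly the "very specific task, very specific inputs" approach described above.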

Speaker 2:

So the parameters are important.

Speaker 3:

Yeah, parameters are really important. Yeah, absolutely.

Speaker 2:

I thought I would just summarize, and this is the question I asked. So this is from ChatGPT: I asked ChatGPT to tell me about the main kinds of white collar office jobs which are being affected, and this was the top of the list. So, customer service and receptionists. Obviously with customer service we know about chatbots, although it amazes me how bad some of the chatbots are, for example at banks. Considering the quality, I think even I, using ChatGPT, could write my own custom GPT better than HSBC's or NatWest's or any of the major banks'. At the moment they don't seem to reflect the advancing technology, but I guess that will change very quickly.

Speaker 2:

Receptionists, I was kind of amazed at this. They're talking about most businesses having already automated their receptionists. I guess what they're talking about here is your initial phone call, rather than having a robot sat at a desk. We're talking about that in a future episode, and I don't think we're at that point yet. But if you're calling a switchboard or you're emailing a general account, then I guess AI is already that kind of first step, even if you still get to a person eventually. Then accounting and bookkeeping, where obviously it can be more efficient and flexible.

Speaker 2:

I guess if you have a more complicated case, you're probably not using automated accounting or bookkeeping, but it works for more simplistic tasks, and then technologies like blockchain can perhaps make it even more secure. A lot of people will probably have experience of their expenses being automated. Then tax reporting: you see a lot of adverts now for services for tax returns, which I guess will become a lot cheaper, and hopefully that gets passed on to people. Then sales and marketing: in sales roles you can also see a lot of the market research and insight work, if not being replaced, at least becoming much less labor intensive.

Speaker 2:

Although, and I will talk a bit later about small language models, remember that if your company has its own insight data, these large language models are running on open data, so they're looking at what's out there on the internet. They won't have access to your organization's specific data. Then insurance underwriting: obviously very easy for AI to analyze within a strict set of formulas and structures, which is what you were talking about. And I found this really interesting; I can't remember where it was quoted, so I can't give you an exact quotation, but CEOs are admitting that 60% to 70% of their work will probably be replaced by AI. But I'm sure they'll be fine, and will make sure that they continue to get well rewarded and take advantage of all of those productivity gains.

Speaker 3:

If anyone will still be employed after all this, it'll be the CEO. Might just be the CEO.

Speaker 2:

It may be just them and a lot of robots. Interestingly, I thought, on the index, so there is an index of most and least replaceable jobs. I think social work was actually number one on the hardest to replace, but on the list of the top 20, HR managers were on there, and I guess initially you think of that as an administrative job; you'd think it would be easy to replace, and obviously a lot of recruitment and HR functions are being automated. But I guess, as they say, you need to have someone to help tell everyone their job's been cut. So you still need a person in there to do it, someone to provide the human touch. There are some jobs that you might be surprised to hear are maybe not the easiest ones to replace.

Speaker 3:

Yeah, I think there'll be at least someone in HR who's second to last to go, before the CEO, because you need someone to fire them.

Speaker 2:

So it's the HR director, then the CEO, exactly. Let's change tack a little bit and talk about what's driving change. You've touched on it a little bit, and I think one of the reasons this episode is hopefully more relatable, and we can give more practical tips, is that it really is things like ChatGPT and large language models that are central to, at the very least, the increase in productivity and competitiveness that we're seeing at the moment. So, for example, things like speech writing and summarizing reports: ChatGPT can't do all of it, but I don't think it's an exaggeration to say it can do two thirds to three quarters of the work. And where you need the expertise is at the beginning, to set the parameters and the information you're running it on, and then to provide the finishing touches at the end.

Speaker 2:

So it is the bit in the middle, and I guess that is the bit that is quite boring, so there's a sort of positive story here. If people are still in employment, this is where you're going to see the more boring parts of the job being removed, and we are seeing that. I can see examples now: if you're writing a speech, where before you'd write the whole speech, now you put the parameters in, it writes the speech and you tweak it at the end. It's not perfect, but it's at least a three quarters saving there.

Speaker 2:

But I guess it's that productivity boost over time that still drives job displacement, because we talked about this, I think, in the first episode: if two to five people can do what ten did before, for most businesses, and there'll be exceptions to this, they're not going to just keep growing and growing; they're going to cut costs and be more productive rather than expanding and expanding. On speech writing as well, just as a sort of AI tip.

Speaker 3:

If you ask an AI to write you a speech and give it the content, it will write you a speech. It won't write it in your style, but it will write a speech. What you can actually do is give it, for example, 10 of your previous speeches and say, this is how I would write a speech, can you write a speech in my style? And then it will actually do it in your style, or a close approximation of it. So you'll get a much better first draft.
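As an illustration of that tip, here is one way the "write in my style" prompt might be assembled in code; a rough sketch, assuming the OpenAI Python SDK, with the speech files and model name as hypothetical placeholders.

```python
# A sketch of few-shot style prompting: paste in previous speeches so the
# model can imitate your style. Assumes the OpenAI Python SDK (openai>=1.0);
# the speech files and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

# Read in a few previous speeches to serve as style examples.
examples = "\n\n---\n\n".join(
    open(f"speech_{i}.txt", encoding="utf-8").read() for i in range(1, 4)
)

prompt = (
    "Here are some speeches I have written previously:\n\n"
    f"{examples}\n\n"
    "Write a new five-minute speech about AI in the workplace, "
    "in my style, for a general business audience."
)

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```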

Speaker 2:

And I think that's something maybe people are not aware of, so maybe you could talk just a little bit about that. We talk about how large language models are trained on everything that's out there, but you don't need to use all of that: you can give them, as you say, a set of data and a set of parameters to work on, right? You can drop in files, you can paste text directly in, and you can ask it to analyse and only work on certain data and ignore the sort of open data out there, right?

Speaker 3:

Exactly. There are some really good websites, actually. So OpenAI, the creators of ChatGPT, on their website, I can't remember what page it is, but they've got some example prompts that you can use, and they go through how you should prompt an AI. The same goes for Anthropic and Claude. They have a page that I was looking at just yesterday which gives you some templates for prompts that you can use, so you can start to understand how to prompt them.

Speaker 3:

I've heard people say before, oh yeah, I tried ChatGPT and it was rubbish, and it's like, well, what did you ask it to do? And they just typed in "write me a speech", or something like that. Of course it's going to be rubbish: it's been trained on the whole internet, and you've just said "write me a speech", or maybe "write me a speech about AI". You need to put a bit of effort in to actually get something decent out of these AIs. So, for example, if you ask it to write your end of year performance evaluation, it doesn't know what you did that year. It's not omniscient, so to speak. Well, not yet.

Speaker 3:

So if you've got a record of all the things you did that year, you can stick it in. You can ask it to do it in a specific format, like the STAR format or something like that, and then it will give you a much, much better output. You can use whatever format you're supposed to use at your workplace. You could paste in previous examples of your appraisals, and then it will actually give you an output that matches the style that you've put in. So prompting is definitely a skill and takes a bit of work. If you're going to have a go with some of these models, don't just chuck in something really generic and then be like, oh, look at it, it's rubbish, because you need to put a bit of legwork in yourself in terms of that prompt. But it will save you an absolute ton of time.

Speaker 2:

And I find as well that telling the GPT what you want it to be helps. So tell it: you are a researcher working for a magazine; you are writing a podcast on preparing for AI; you are a medical consultant interpreting a document. The more you can do to narrow the parameters down, the better the quality, because it's less thinking for the model. And you've got to remember, like we said, that it's got access to everything that existed up to April 2023, plus web search. There's a hell of a lot of information in there, but actually you don't necessarily want it to dip into all of it; you want to make it as focused as possible. So the better job you can do of focusing your questions, the better output you're going to get. Also, the model sometimes gives you half of something and asks you how it's doing, and then you can ask it to continue, so you can adjust it and help it to kind of learn. And one of the things announced this week, I think, with ChatGPT, is that it's going to have a memory, if you want it to, so it can remember the questions and prompts you've asked it and start to understand the things that you're looking for. Not everyone will want to do that, but it's a more customizable way of asking for and creating content.
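For anyone trying this through the API rather than the chat interface, the "tell it what you want it to be" advice maps onto the system message. A minimal sketch, again assuming the OpenAI Python SDK, with the role description and model name as illustrative placeholders.

```python
# A sketch of role prompting: the system message narrows the space the model
# draws on. Assumes the OpenAI Python SDK; the role description and model
# name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a researcher working for a magazine, writing "
                    "concise, carefully sourced briefing notes."},
        {"role": "user",
         "content": "Draft a 300-word briefing on the EU AI Act."},
    ],
)
print(reply.choices[0].message.content)
```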

Speaker 2:

We've talked about GPTs and large language models quite a lot, and you mentioned Claude and OpenAI's ChatGPT. I guess the main three out there are OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. Could you give a bit of an example of the differences between them, and which one should people be using? I mean, is there a best model, or is it a case of just trying them out and seeing which one you like?

Speaker 3:

So, first of all, there's a difference between the tiers. You can access any of these models for free, but there's a free tier and a paid tier. Now, I'm not recommending everyone goes out and starts paying for a model; the first thing to do is probably just try one out. But if you do pay, you obviously get access to the latest models, which have significantly better memory and significantly better features. To give an example of that, if you use ChatGPT and you use the free version, you'll get access to something called GPT-3.5 Turbo, which is very good, but GPT-4 came out around a year ago now, and GPT-4 has a much bigger context window, which means it has a significantly longer memory. GPT-3.5 can remember about 4,000 tokens, roughly 3,000 to 4,000 words, whereas the paid version, GPT-4, can remember 32,000. So it's the difference between being able to have a much longer conversation, or having a much shorter conversation before the model starts to forget what you were talking about once you go past those 4,000 tokens. The other difference is GPT-4 has access to the internet, so it can actually go and find the latest information, whereas 3.5 can't. It can generate images, and it can write code; GPT-3.5 can write code too, but GPT-4 has Code Interpreter, which is significantly better at coding, effectively. And it's similar across the other AIs. So the other big ones that you talked about were Anthropic's Claude and Google's Gemini; they both have a free and a paid model.

Speaker 3:

If you're going to use a free version right now, honestly, I'd probably recommend using something like Gemini 1.5 or Claude. The reason for that is they came out more recently and, while they're not better than GPT-4, they're definitely better than GPT-3.5, and they have much longer context windows. For example, Claude is 200,000 tokens and Gemini is 128,000 tokens, which is a significantly longer memory. Basically, with Claude or with Gemini you'll be able to paste in huge amounts of text, as we were talking about earlier, and it'll be able to keep all of that in its context window, which will help it answer questions and stay on task with respect to the documents that you've provided. If you haven't tried any of these models before, I'd recommend just giving them a go to begin with and having a play around. Get used to chatting with them, create some fiction, do something with some documents, so that you can see how it can write emails, how it can summarize things, how it can help you with the kind of tasks you would be doing at work.
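To put those context window numbers in perspective, here is a back-of-envelope sketch of whether a given document fits, using the rough rule of thumb of about four tokens per three English words; the window sizes are the ones quoted above, the file name is a placeholder, and a real tokenizer such as OpenAI's tiktoken library would give exact counts.

```python
# A rough check of whether a document fits a model's context window.
# Uses the crude approximation of ~4 tokens per 3 English words; a real
# tokenizer (e.g. OpenAI's tiktoken library) gives exact counts.
def estimated_tokens(text: str) -> int:
    return int(len(text.split()) * 4 / 3)

# Window sizes as quoted in the discussion above.
windows = {
    "GPT-3.5": 4_000,
    "GPT-4 (32k)": 32_000,
    "Gemini 1.5": 128_000,
    "Claude": 200_000,
}

document = open("report.txt", encoding="utf-8").read()  # placeholder file
needed = estimated_tokens(document)
for model, window in windows.items():
    verdict = "fits" if needed <= window else "too long"
    print(f"{model}: {verdict} ({needed:,} of {window:,} tokens)")
```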

Speaker 3:

So I don't think it really matters too much which one you use. I mean, there are some nuances between them. For example, Claude's been specifically trained to be more ethical and honest. Some of these large language models will refuse to answer certain questions. If you ask for instructions on how to make a bomb, for example, it's not going to answer that question, because it's been deemed unethical, for obvious reasons, and dangerous. So there are certain questions that they won't answer. Claude is a little bit more skittish in that respect, so it sometimes will say "I can't answer that question" even when it's a fairly benign question.

Speaker 3:

Gemini is also similar to GPT. I think one of the things Gemini tends to do a little bit better is provide rationale for some of its outputs, and it provides links by default. GPT-4 won't necessarily provide you links to internet resources where you can fact check it, whereas Gemini does do that. So there are differences between the models. I would say try all three, give them a go and see which one you prefer. They all have slightly different styles in terms of prompting them, which we won't get into in detail on the podcast. But do you have a personal favorite? I'm always chopping and changing.

Speaker 2:

I'm a little slut. Well, there's nothing wrong with playing the GPT field, Jimmy, but at some point you're going to have to make a commitment. So you don't have a favorite?

Speaker 3:

Yeah, at the moment my favorite is Claude because it's a bit younger.

Speaker 2:

Shall we move on?

Speaker 3:

I think we should.

Speaker 2:

Okay, so shall we take a look at where we are seeing cuts, or where we've seen particular impact on the industry? I had a look, and it's particularly difficult, I think, in these areas, because, as we've talked about previously, we sort of have this stealth effect of job losses. If we see productivity gains, but they're, I'm not saying minimal, but not huge, it's not a case of wiping out a job. If you can make something two thirds more efficient, then this is a really good place to see that kind of stealth loss of jobs, because you don't necessarily see the increase in productivity, you just see less of a need for rehiring people.

Speaker 2:

I found a few references. Darrell Etherington in TechCrunch has had a lot of talks with industry leaders, and they've talked in private about how AI is complicit in what he calls the great and continued culling. So everyone agrees that AI is playing a part in the scope, but not in the timing, and that's where things are being attributed to the economy. We do have some examples: Salesforce recently cut 700 roles, and SAP delivered some massive cuts last year as well, and both of those companies have said they expect headcount to be the same at the end of this year. But here's the thing: these are companies that usually experience massive headcount growth year on year. So even flat hiring would be a step back from business as usual. And it should be noted, these companies are laying off staff while they still make massive investments in AI. So the economic effect, if it's there, is minimal. They're able to spend money to augment, as they like to say, or in reality replace, the work that's being done by people.

Speaker 3:

So you talked about the scope, but not the timing. What did you mean there?

Speaker 2:

We know that AI is going to replace jobs at some point, so I don't think anyone is arguing about the scope, that it's going to happen. But at this point the argument is still: oh no, no, the jobs are not being replaced because of AI. So that's the timing thing. I don't think anybody could say with a straight face that there's going to be no effect, right? It would just be disingenuous. So the scope of the effect, we acknowledge it, whether it's one year, two years, five years, ten years; but at the moment, none of these jobs are apparently being lost because of AI. And we've discussed this before: there are other influences, the timing is not all about AI. But we've just talked about all these tools that are being used, we've given practical examples, and we know that companies are using them now. So that's the timing argument: of the jobs being lost now, or the lack of expansion in the examples I gave, how much of that is AI and how much of it is not?

Speaker 3:

Yeah, so I've done a little bit of research on this, and the general opinion at the moment is that, in terms of large language models anyway, last year was the sort of birth of these large language models. As we said last time, ChatGPT launched with GPT-3.5 at the end of 2022, I think, and then GPT-4 came shortly afterwards, and what everyone's scrambling to do now is work out how you apply those to the workplace. So this year is going to be the year of what's called the AI agent, and the example I'm going to give applies more to software development, because that's considered to be one of the low-hanging fruits, in a sense, for these agents, and it's also quite a skilled job where people are well paid, so it is a bit of a target. But there's been a number of advances in AI agents in the last few months, so things like AutoGen, which is something that Microsoft produced, and, very recently, something called Devin, which is basically all of the components you need to be a software engineer within a single interface, and so you can ask it a question.

Speaker 3:

You know: I want to develop a webpage to do X, Y, Z. And it will just go off and do it, and you don't even need to intervene at any point. Like a human software developer, it will just go off and complete the project from end to end, and come back to you and ask questions as it needs to. Whereas up to now, you ask for some code from GPT, you paste it into a code editor, it gives you some errors, you paste those back in; so it's a very human-with-AI sort of interaction. As I say, that's looking at software engineers, which is an office job, but it's slightly different to what we're talking about today. I think what you're going to see over 2024 is more of these agents starting to come out, where they're tailored for a particular industry or a particular role, right?

Speaker 3:

Yeah, exactly. So you can imagine a front desk receptionist type agent, and if they get to the point where these AIs can answer the phone and talk in real time, which they're also getting close to, and I'll talk about an example of that in a minute, then you can take almost all the different tasks a receptionist does, so sending emails, coordinating things, making calendar invites, answering calls and responding to queries, most of which is office based, and build that into some kind of AI agent. So everyone's got an AI PA.

Speaker 3:

Yeah, that's the first iteration, I think, which you can already kind of do. You can already get yourself an AI PA. It's a bit clunky at the moment, but that's where the development's going to go in 2024. You're going to start to see these sort of drop-in replacement agents that can do specific jobs, or certainly massively augment specific jobs. So as a receptionist, you might still be answering the phone calls and dealing with inquiries, but an AI will deal with everything else. It'll be able to transcribe the conversation you've just had, write the emails, send the calendar invites, do all the coordination, and maybe it'll ask you to confirm: is this what you want to do? A human-in-the-loop type confirmation. But it's going to mean that one receptionist, for example, can do the job of five or ten, which is what we've been talking about.

Speaker 2:

I wanted to give one other example, from Nandoodle on Twitter, currently known as X, who tweeted a story about a Shopify employee who violated their NDA to reveal that Shopify had been firing global customer service staff across the US, Canada and Ireland and replacing them with AI chatbots. It doesn't sound surprising, and Shopify have since announced their assistant, called Sidekick, which will help respond to all of the merchant inquiries, provide information on sales trends and more. That sounds like a kind of agent, actually, which is already being deployed. So there are concrete examples we can already see.

Speaker 3:

Yeah, and just quickly, the thing I was going to talk about: a really good video came out on YouTube a few days ago about something called Groq. That's Groq with a Q, G-R-O-Q, as opposed to Grok with a K, which is X's, Twitter's, large language model.

Speaker 3:

So Groq with a Q is a company. They have their own large language model that you can talk to, but they're using a new kind of chip, a new technology that's just under development, so that inference, which is how large language models respond to queries, can be done basically instantaneously. In the video that I watched, they demonstrated how Groq can actually do a sales call in real time, where someone's talking to it, it's transcribing that, then going through ChatGPT or another large language model, then coming up with something to say back and responding in real time. The reason I mention it is because, up to now, the problem has been that this couldn't be done in real time: the inference, the going through GPT-4 and converting the speech to text and text back to speech, would take like ten seconds, and so you couldn't actually have an automated AI agent doing sales calls.

Speaker 2:

Let's talk about what people can do, because I think this is an episode where, hopefully, we've been a bit more positive than maybe on some of the other episodes, and there's a lot that people can do.

Speaker 2:

I mean, I think if you think specifically about where you want to improve your productivity, or the things that you don't want to do: if you spend a lot of time translating or summarizing policy reports, then maybe look at the kinds of prompts and tools that will best enable you to do that, again with you supplying the finishing touches. If you spend a lot of time producing reports or presentations, how can AI tools shortcut that process or help you keep the content precise? If you need insights and data, you can use ChatGPT, but remember to check the outputs, and remember that these large language models are running off the internet. So if you want them to run off your own data, you need to be giving them access to that data, and I guess, Jimmy, there are concerns about that, if you're giving them access to your own personal or organizational data?

Speaker 3:

Yeah, that's something I was going to mention. We should probably have had a disclaimer around this earlier in the episode. Please make sure the organization you work for is comfortable with what you're using AI for, if you're not using it through your company. Your enterprise or company can actually sign up for ChatGPT Enterprise, which is significantly different to the version you use on the internet, in that if you use the public version of ChatGPT, they have a line in their terms and conditions that says they may use whatever you put into it for training future models. So if you start putting your sensitive company information into it, you could get yourself in trouble very quickly. So, first of all, be open and transparent with your company. If they say you can't use large language models for your work, then don't use them. If they say you can, then obviously that's very encouraging and probably quite progressive. But yeah, just make sure you're open and transparent, and be really careful about what you put in there.

Speaker 2:

And there are plenty of organizations that will create small language models which are specific to your organization. My wife's organization essentially has access to a number of different GPTs that run only on their internal data set, so they can look at all of their historical reports, meeting minutes and so on, using the GPT model to have that interaction and ask those questions, but the only data it's looking at is internal. So if you are running an organization, or you're responsible for implementing these tools, then you can look at purchasing and using tools which focus specifically on your organization's SharePoint data, or, if you have a lot of insight data that you want all your staff to be able to easily put into presentations, you don't have to use the public models. There are plenty of organizations out there now which are bringing these things to business and enterprise.
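The episode doesn't go into how these internal-only assistants are wired up, but one common pattern is retrieval-augmented generation: fetch the most relevant internal document and instruct the model to answer from it alone. A minimal sketch, assuming the OpenAI Python SDK; the file names, question and model names are hypothetical, and a real deployment would use a proper vector store and access controls.

```python
# A minimal retrieval-augmented generation sketch over internal documents:
# embed the chunks, pick the closest one to the question, and answer from
# that context only. File names, question and models are placeholders.
import math

from openai import OpenAI

client = OpenAI()

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Internal chunks: historical reports, meeting minutes, and so on.
paths = ["minutes_2024_q1.txt", "annual_report_2023.txt"]
chunks = [open(p, encoding="utf-8").read() for p in paths]
vectors = [embed(c) for c in chunks]

question = "What did we decide about the hiring freeze?"
q_vec = embed(question)
best = max(range(len(chunks)), key=lambda i: cosine(q_vec, vectors[i]))

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Answer using only the context provided. If the context "
                    "does not contain the answer, say so."},
        {"role": "user",
         "content": f"Context:\n{chunks[best]}\n\nQuestion: {question}"},
    ],
)
print(reply.choices[0].message.content)
```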

Speaker 3:

If your organization allows you to use AI, or if they have their own internal AI, then you can have a go at using that. But even if not, then to familiarize yourself with these tools, which I'd recommend that you do, you can use one of these AIs to help you write a letter to your MP, or any kind of letter you need to write. You could potentially get it to give you a hand with your tax return. You could get it to write you a story. Much like the dot-com boom twenty years ago, this is the AI boom now, and I think these tools are here to stay, and you're probably going to be using them in the workplace at some point soon.

Speaker 2:

Of course, there's training. So MOOCs, or massive open online courses: there are so many of them on AI now, and prompt engineering might be useful now. It might not be in two years' time, but at least you're using it to understand how to prompt and to understand the application of AI, rather than becoming a kind of expert. Personally, I'm looking at getting involved in the governance and alignment side, so I've looked at something called the AI Safety Fundamentals governance course, which is something you need to apply for and be selected on; it's not something that you can just sign up to, but it is free. Is there anything you would recommend? Should people still be learning to code? Do you think it's useful to learn Python, or do you think that is kind of dead now?

Speaker 3:

I think in the short to medium term, for coders, for developers, you're still going to need a human in the loop, so you're going to need people who can check the code that comes out of AIs. I do think that in the longer term it's probably going to disappear to a large extent, and so you're going to be looking at more architect type jobs, system architect type jobs: looking at the world, seeing what applications are needed, and then working with an AI, and the AI will be the software developer, effectively. So I think those are the kinds of jobs you would look to move into as a coder. Just to finish up, I actually asked Claude what people can do and how to prepare and upskill, so I'll just read this out, because it's pretty good actually.

Speaker 3:

For everyone: get hands-on experience using LLM writing assistants and tools. For managers: strategically integrate AI while motivating teams. For employees: develop skills that combine human judgment with AI capabilities. For students: push for curriculums covering AI literacy, ethics and prompt engineering. And for everyone, again: advocate for robust governance as LLMs become ubiquitous. Finally, it says: don't panic, but reskill. Focus on roles demanding human traits like empathy, strategy and creativity. Emphasize the human-in-the-loop collaborative model with AI. So there you have it, advice from an AI to the human race.

Speaker 2:

I guess that seems a good point to finish on today. So thanks everyone for joining us. Next week we will be looking at manual work and robots, so that one should be really fun. So join us then, and for now, goodbye and have a good week.

Welcome to Preparing for AI
A summary of AI laws and regulation
AI advances and office work
Hallucinations
Office jobs already impacted by AI
The drivers of change
ChatGPT, Gemini or Claude?
Examples of job cuts in office roles
How can you get ahead (or at least keep up)?