What’s the BUZZ? — AI in Business

Designing Human-AI Collaboration For Adaptable Processes (Guest: Enrico Santus)

Andreas Welsch Season 2 Episode 23

In this episode, Enrico Santus (Human-AI Collaboration Leader) and Andreas Welsch discuss designing human-AI collaboration for adaptive processes. Enrico shares his experience in balancing the strengths of humans and AI for optimal results in business processes by gaining trust and sharing a long-term vision.

Key topics:
- Design processes for humans and AI working together
- Learn what makes processes adaptive
- Understand key challenges when designing systems for human-AI collaboration
- Implement principles for effective collaboration

Listen to the full episode to hear how you can:
- Leverage the strengths of humans and AI
- Define AI as a far-reaching goal vs. small projects
- Assess complexity, ambiguity, and risk for optimal use of AI
- Share long-term goals for using AI to ensure trust and avoid overreliance

Watch the full episode:
https://youtu.be/R8rxQN60eGM

***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Andreas Welsch:

Today we'll talk about how you can design human-AI collaboration and the processes around it. And who better to talk to about it than someone who's passionate about doing just that: Enrico Santus. Hey Enrico, thank you so much for joining.

Enrico Santus:

Hi, Andreas. Thanks a lot for the invitation. Very excited.

Andreas Welsch:

Yes, same here. Hey, why don't you tell our audience a little bit about yourself, who you are and what you do?

Enrico Santus:

I work as Head of Human Computation in the CTO Office at Bloomberg. I joined the company about a year ago, and in this year I've investigated what human computation actually means, the reason being that it's a discipline that changes over time as technologies change. So that's what I've been doing. In my past, I was a machine learning and natural language processing engineer, in a certain sense. I've worked in pharma, and I've worked a lot in academia. I got a PhD in computational linguistics in Hong Kong. And before that, and this is something quite peculiar about my career, I studied the humanities. So I have a humanities background that then transitioned into engineering, and that's probably a good way to think about humans and AI together.

Andreas Welsch:

That's awesome. Too often, we think about AI as a technology topic and completely neglect the human, keeping them out of the equation and only putting them back in at a later point. It sounds like you're able to consider the human from the very beginning, given your educational background and the work that you're doing. Super excited to have you on.

Enrico Santus:

Yeah. Starting from a humanities background, I've also noticed that in pop culture there is a lot of talk about AI as a threat to humanity. And this is happening in the media too, right? We read every day that AI will substitute people in their jobs, and so on. These topics are very interesting, and they definitely make me think, because we cannot exclude that, right? But on the other hand, there are concrete needs for AI to support us. Maybe I can tell you something from my previous career: when I was working in pharma, one of the thoughts I had was how to make healthcare sustainable. We know that the population is aging, and with the aging of the population we will need support from AI, from machines. It is expected that 50 years from now, one third of the population will be over 65 years old. We are going to have serious issues in the aging scenario. It's crucial that we take advantage of this technology in the right way.

Andreas Welsch:

Should we play a little game to kick things off? What do you say?

Enrico Santus:

Absolutely. Let's try.

Andreas Welsch:

Okay, perfect. So this one is called In Your Own Words. When I hit the buzzer, the wheels will start spinning, and when they stop, you'll see a sentence. I'd like you to complete that sentence with the first thing that comes to mind, and why, in your own words. To make it a little more interesting, you'll only have 60 seconds for your answer. So, are you ready for What's the BUZZ?

Enrico Santus:

Let's try.

Andreas Welsch:

Okay, here we go. If AI were a bird, what would it be? 60 seconds on the clock. Go.

Enrico Santus:

Wow, that's a great one. Let's say a parrot for the moment. A parrot, because language models are basically repeating things that they've learned from our data. They're not creating anything completely new. They are very good at combining things that we have already generated in the past, but not at creating something new. I see somebody is suggesting a parrot there in the chat, too, but then I want the motivation in the chat.

Andreas Welsch:

Alright, well within time. If you're a fan of "Arrr," I'm sure you're a pirate, so the parrot on the shoulder pairs nicely with that, too. Maybe we jump right into our topic from here, then. Look, I think we hear so much about the impact of AI on the workforce and about AI replacing jobs. You alluded to that a bit at the beginning already. But I think there's an alternative perspective: that humans and AI can actually achieve the best outcomes when they're working together. So I'm curious, what's your perspective on that, especially as you're thinking about human-AI collaboration and designing for it?

Enrico Santus:

Look, going back to what I was saying before about the healthcare domain, and being Italian, I was looking at the news a few weeks ago, and what I found out is that in Italy, in a few years, there will be two million employees missing. There will be empty jobs in the market. And again, this is a problem related to the demographic situation of Italy; it's related to the skill sets and so on. So I wonder why we don't use AI exactly to cover these needs. That's the first answer to your question. And the other one is that AI shouldn't be substituting humans. It should enable humans to contribute in the ways they are actually best adapted to. Humans and AI are totally different. Humans are very adaptive. They are very good in ambiguous domains. They are empathetic, and so on. If we think about their roles today, there are so many jobs that are instead not exploiting these skills; they're exploiting the skills that AI could actually be better at. So why don't we enable humans to use their best skills, their intelligence, their capacity to adapt to their environments, while we leave all the other tasks to artificial intelligence? And let me say one more thing. This is not a black-and-white situation, right? There are tasks that can be done together. For example, in a hospital, the human doesn't need to carry the weight of another human who cannot move, a paralyzed patient. But they can speak to that patient and make them feel more comfortable and cared for. So that's a perspective. And this can be applied to every industry, of course.

Andreas Welsch:

I really like the part where you say it's a matter of adapting. And it's not just the human adapting, but also making sure that the systems adapt. So I'm curious: in our preparation, you also talked about adaptive processes. What do you see as key for an adaptive process? What are they to begin with, and how do they work?

Enrico Santus:

Yeah. In a recent book about humans plus AI, Human + Machine, Daugherty and Wilson argue that we are heading towards the next revolution, the next digital transformation, the next business transformation. They call it the third one, after Ford's assembly line and after the personal computers in the eighties. And now there is AI, right? They have identified several principles of human-AI collaboration that will bring us to a new industry where processes are adaptable: very sensitive and flexible to factors that are external to the operation itself. Those factors can be changes in the market, changes in user needs, personalization of the product towards different cohorts of users, and all these kinds of things in our societies are happening much faster than they used to in the past. Being able to adapt very quickly to those changing needs is definitely a winning point. To get to the five principles these authors have identified: first, they said we shouldn't be using AI just for efficiency gains; we should think about how AI can contribute holistically to the business goals. Second, they recommended moving towards experimentation, because there are no best practices for human-AI collaboration yet. The frontier between what can be assigned to humans and what can be assigned to AI is very jagged; it's very hard to define. There are, of course, criteria that can be understood, and I can tell you more later, but a lot of experimentation needs to happen there. Third, they recommended having an AI strategy that is tangible and focused on new business opportunities, too. If we look at them, the companies that are benefiting from AI are mostly tech companies, while all the others have yet to succeed in actually finding business value in AI. In industry, the failure rate of AI projects is about 85%. That means a lot of money and a lot of resources getting wasted without actually creating any value. And this often happens for reasons related to the fourth principle: thinking about data. We need to think about AI in a data-centric way, not only in a model-centric way. Data constitutes six out of the ten main reasons for AI project failure. So investing more in creating an ecosystem where data is collected, stored, and updated in a safe way is definitely crucial for the success of this new Business 4.0. Finally, the fifth principle: cultivating employee skills, really caring about how employees perceive AI and how they can contribute to AI systems. This principle seems a bit abstract, but the authors surveyed 1,700 companies across, I think, 12 domains, and they found that the more of these principles a company adopted, the better its performance, calculated in many ways, including revenue, client retention, and some other criteria. The performance improvement for companies that adopted all the principles was up to seven times, while the traditional way yielded only twice the performance of companies without AI. So we went from two times to seven times.

Andreas Welsch:

That can be really significant indeed. I do have a question for you, especially around adapting your processes. If you've been in business for a few decades, it gets pretty hard to change a process, because there's so much legacy: systems, history, all the complexities that we see in large enterprises and their IT and data infrastructures. How do adaptable processes fit into that construct? Where do you start? Do you start with one line of business, say finance or procurement, and look for one use case you can go after? Or is there a different approach, knowing that enterprises are so complex and not always the fastest when it comes to adapting and changing their processes?

Enrico Santus:

Yeah, that's a great question. And of course, I don't have the perfect recipe for answering it. But I can tell you that one of the principles these two authors mention is thinking holistically. Don't try to dig into one task in a complex operation, because that means substituting, basically automating something that you already know, right? That's an efficiency gain, and efficiency gains are very short-term and small. They're not something significant. So what they recommend is to step back, look at your processes and operations overall, and think about how to re-plan them in a way that makes them more adaptive. Think of the first power machines coming in during the industrial revolution. Companies didn't just put a machine into one very small piece of their operation. It was such a huge change. It took time, of course. But when it happened, it was holistic, organic, and it changed the overall process and business. And the same was true for Ford with the assembly line, and for the personal computers in the offices in the eighties.

Andreas Welsch:

That's an interesting point. Yeah. I wonder if, in earlier times, it was easier to introduce that change because there was probably more human involvement in the process and less dependency on systems, and even fewer dependencies within them.

Enrico Santus:

It's a problem of time, right? We are at the very beginning, so now we are scared. We are trying to understand: does it fit? Let me try an experiment in a small part of my operations, and so on. But at a certain point, there's no way back from AI. There's no way back. It will get everywhere. So at a certain point, we will have to face it. And when we have to face it, we'd better do it organically and holistically.

Andreas Welsch:

I think that makes perfect sense. I wish more leaders adopted that same mindset. Now, we've already talked about a few challenges, and I think that's an important topic, in addition to the way companies are set up, how their operations work, and their ability and willingness to change. What would you say are other key challenges when you're designing systems and processes with human-AI collaboration in mind?

Enrico Santus:

As I was mentioning before, there is the question of which tasks can be assigned to whom. There was a paper recently in the Harvard Business Review about whether to prefer the centaur approach, where you actually split the tasks and say, okay, this task will go to AI, this task will go to the human, or the cyborg approach, which is more interactive. When you take the cyborg approach, you may want to think about whether it is the human giving recommendations to the machine, with the machine then executing on those, or the other way around: the machine gives recommendations and the human takes the final decision. All of these things need to be assessed, and you may also want to assess the level of intervention of the human. The human can be a kind of observer, or they can literally be an operator, the opposite of the observer, really acting on the task. There are three criteria that I think always need to be kept in consideration when putting AI and humans together: complexity, ambiguity, and risk. We know that technology is advancing very quickly, and it's gaining a lot of ground on complexity. It's gaining some ground on ambiguity: language models are much more capable of working in ambiguous scenarios than the old classifiers, let's say. But there is a huge loss of ground on the risk side. That's the reason why, whenever risk is involved, you want much more human engagement and involvement. Another thing you need to take into consideration is that humans are not machines. Humans are moody, they get tired, and so on. So when you design a process where humans are involved, you definitely need to design a process that is somehow stimulating. You cannot ask humans to click a million times, yes or no, yes or no. It won't achieve any good results or outcomes. That's something they discovered when crowdsourcing platforms moved towards gamification. In 2005, Amazon Mechanical Turk came out, along with CrowdFlower and other platforms, to collect data and scale projects. It worked relatively well. But then they looked at the accuracy, the quality of the data, and it was horrible. The reason is that humans are not good at repetitive tasks, right? They get tired. It's only around 2010 that psychological tricks started being used to stimulate and motivate humans to perform better. That's the time of the CAPTCHA, by the way.
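As a minimal illustration of these three criteria, here is a sketch of how a team might encode complexity, ambiguity, and risk into a human-involvement decision. The scores, thresholds, and level names are hypothetical, not from the episode:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    complexity: float  # 0.0 (routine) to 1.0 (very complex)
    ambiguity: float   # 0.0 (clear-cut) to 1.0 (highly ambiguous)
    risk: float        # 0.0 (harmless) to 1.0 (safety- or compliance-critical)

def involvement_level(task: Task) -> str:
    """Map the three criteria to a human-involvement level.

    Risk dominates: high-risk tasks keep a human operator regardless
    of how well the model handles complexity or ambiguity.
    """
    if task.risk >= 0.7:
        return "human operator decides; AI may only suggest"
    if task.ambiguity >= 0.6 or task.complexity >= 0.8:
        return "cyborg: AI drafts, human reviews and decides"
    if task.risk >= 0.3:
        return "centaur: split subtasks between human and AI"
    return "AI executes; human observes with spot checks"

# Hypothetical example scores
for t in (Task("invoice classification", 0.2, 0.1, 0.2),
          Task("clinical triage note", 0.6, 0.7, 0.9)):
    print(f"{t.name} -> {involvement_level(t)}")
```

Making risk the dominant criterion mirrors Enrico's point that technology is gaining ground on complexity and ambiguity, but not on risk.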

Andreas Welsch:

Where do you see the role of trust in this, especially in this human-AI-driven collaboration and design? If I've been running a process as a subject matter expert for the last 5, 10, 15 years, I know how it works. And I know how it works, maybe not only at our company, but even at previous companies I've been at. Now, all of a sudden, this AI thing gives me a recommendation, or I'm asked to use it to create a draft or a summary, and then I need to go back and edit it because it didn't do it quite as well as I thought it would. So maybe my level of trust in this system, in this capability, diminishes the next time I use it, or gets eroded directly if it does something that's completely against my expectations or even my values. Where do you see trust fitting into that design process?

Enrico Santus:

People are skeptical about AI, but they're also very excited about it. Here's what I've noticed: I ran an experiment some time ago to see whether AI could help with the annotation process, one of the most important parts of an AI project. Most AI systems are supervised, so they need annotations in order to perform. In our evaluation, what we did was run a large language model over our data and have it pre-annotate, creating a silver dataset. Then we asked humans to evaluate it. We didn't give them any specific instructions. We just said, look, this was generated by a language model, can you check whether you need to fix it or not? What we noticed, without any suggestion, is that people behaved differently, because people are different. One annotator was overtrusting the model and was basically clicking okay, okay, okay all the time. So the quality was exactly that of the model, 70%, nothing exciting, but the speed was incredible. In that sense, if you need speed over accuracy, that was a perfect scenario. The other annotator mistrusted the model so much that they were correcting every small thing. This annotator achieved over 90% accuracy, but of course the time they invested was much greater. Because of that, what we saw is that annotators also need to be instructed on the level of confidence that the model has and the level of trust they should give to it, right? So it's very variable. It depends very much on the task, and it depends very much on other factors, such as the model being used.
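To make this workflow concrete, here is a minimal sketch of the pre-annotation setup Enrico describes: a model produces a silver dataset, and humans review it, guided by the model's confidence. The function names, labels, and threshold are hypothetical, not Bloomberg's actual pipeline:

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical model call: returns (label, confidence) for a text.
# In practice this would wrap whatever LLM the team actually uses.
ModelFn = Callable[[str], Tuple[str, float]]

def pre_annotate(texts: List[str], model: ModelFn,
                 review_threshold: float = 0.9) -> List[Dict]:
    """Build a 'silver' dataset: the model labels every item, and items
    below the confidence threshold are flagged for human review."""
    silver = []
    for text in texts:
        label, confidence = model(text)
        silver.append({
            "text": text,
            "label": label,
            "confidence": confidence,
            "needs_review": confidence < review_threshold,
        })
    return silver

def human_review(silver: List[Dict],
                 correct: Callable[[Dict], str]) -> List[Dict]:
    """Route only flagged items to a human, so effort goes where the
    model is least confident instead of rubber-stamping everything."""
    for item in silver:
        if item["needs_review"]:
            item["label"] = correct(item)  # human supplies the gold label
            item["needs_review"] = False
    return silver

def stub_model(text: str) -> Tuple[str, float]:
    """Toy stand-in for a real LLM call."""
    return ("positive", 0.95 if "great" in text else 0.6)

# Toy usage: the second item falls below the threshold and gets reviewed
data = pre_annotate(["great product", "meh"], stub_model)
data = human_review(data, correct=lambda item: "negative")
print(data)
```

Surfacing the confidence score in the review interface is one way to give annotators the calibration Enrico mentions, so they neither rubber-stamp everything nor re-do everything.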

Andreas Welsch:

That's a great example, thanks for sharing it. I think it speaks, on one hand, to the speed and the level of quality that AI can provide out of the gate. But also, if we pair it up with humans, what else is possible, knowing that no two humans are alike? To your point, we have different experiences, different states of mind. Maybe we're moody. Maybe we're tired. Maybe we're a little suspicious.

Enrico Santus:

I believe a way to think about all these processes is to actually look at the long-term goal. If humans know the long-term goal, they're much more motivated and much more flexible in fixing these minor methodological issues, because they know the final goal. If you ask a human to work with AI, at the beginning they may be scared, because they may think that AI will take their job. But if you ask a human to work with AI, telling them that your goal is not to substitute them but actually to expand the coverage of your product, let's try to be more efficient here because we want to expand, then the human is much more willing to work on the product. And I believe that's something we all need to do: work with a long-term vision rather than for these efficiency gains.

Andreas Welsch:

That's a great suggestion. I really like that. If others are in a data or AI leadership role, or looking to move into one, and maybe they're not yet thinking much about human-AI collaboration and pairing the two up to use their individual strengths together: what should leaders looking to implement human-AI collaboration have on their radar? What should they know?

Enrico Santus:

On top of what I've already said: understanding the differences between humans and AI. There was a paper at HCOMP, one of the main conferences on human computation, and it found four major differences. The first is in the task definition, the fact that humans and AI have different objectives. AI has a very specific objective, for example sentiment analysis and only sentiment analysis. Humans have very complex objectives, and many at the same time. I go to work to execute my work well, but I also go to work to earn a salary, to go back home and be with my family, and so on. What humans want to achieve at the end of the day is much more complex, and this is a big difference that needs to be taken into consideration. The second is the difference in the type of information we acquire. AI relies only on the data it gets as input, generally linguistic data, or maybe images, or other types of data, but only that type. It's not embedded in the world. We are embedded; we get information of every type, including touch. So our representations of the world are much richer and more complex. The third is the internal processing. AI works in a statistical, repetitive way; AI models work by correlation. For humans, it's much more complex. We have very rich mental models of the world that also draw on the information we get as input. And finally, the output, which concerns the number of available actions and decisions that humans and AI can take. AI can take a limited number of decisions. Think about a classifier: it may have a thousand classes, or even 100,000 classes, but it has to decide within those. Humans have an infinite potential of actions and decisions. These are the things that need to be taken into consideration by leaders when they're planning the use of AI.

Andreas Welsch:

Thank you. Look, we're coming up to the end of the show, and I was wondering if you could summarize the three key takeaways from our session today for our audience?

Enrico Santus:

First of all, human computation is going to be crucial for the future deployment of AI. Second, that deployment needs to happen in a holistic way in order to achieve long-term outcomes. And third, use the criteria and principles we've discussed to see the differences between humans and AI and how they can best be matched together.

Andreas Welsch:

Thank you so much, Enrico. Thank you so much for your time with us today, for sharing your experience with us, and to those in the audience, thank you for learning with us.

Enrico Santus:

Thanks Andreas. Thanks everyone.
