What’s the BUZZ? — AI in Business

Are Your AI Agents Ready For Enterprise Decision-Making? (Guest: Kence Anderson)

June 23, 2024 · Season 3, Episode 14
Andreas Welsch
In this episode, Kence Anderson (Founder & Machine Teaching Expert) and Andreas Welsch discuss how you can determine if your AI agents are ready for enterprise decision-making. Kence shares his learnings on building autonomous AI agents for top Fortune 500 companies and provides valuable advice for listeners looking to get started with agents in their business.

Key topics:
- Understand the prerequisites of complex agents
- Learn when to apply machine teaching for agents
- Differentiate between simple tasks and complex reasoning
- Get tips for getting started with agents

Listen to the full episode to hear how you can:
- Teach high-performance AI systems for increased quality and trust of decision-making
- Build a foundational understanding of data, LLMs, deep reinforcement learning, optimization, and software architecture
- Compose AI agents with technology-independent building blocks
- Create hybrid teams of business and technology experts for maximum relevance of AI products

Watch this episode on YouTube:
https://youtu.be/Ev0sYU3tFS0


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter

Transcript

Andreas Welsch:

Today we'll talk about how you can get your enterprise ready for AI agents, and who better to talk about it than someone who does just that. Hey, Kence. Thanks for joining.

Kence Anderson:

Absolutely. Thanks for having me, Andreas.

Andreas Welsch:

Wonderful. Hey, why don't you tell our audience a little bit about yourself, who you are, and what you do.

Kence Anderson:

Excellent. I'm the CEO of Composabl. Composabl is a platform for creating intelligent, autonomous agents. For the last seven years, I've been working exclusively on what we now call intelligent autonomous agents for industrial use cases, mostly manufacturing and logistics. And over those seven years, at three different companies, I've personally designed over 200 of these intelligent agents, or autonomous AI systems, for companies like Pepsi, Coca-Cola, Bayer, Bosch, BP, Shell, AB InBev, and many others.

Andreas Welsch:

Hey, there's barely any big name that you've left out here. Wonderful to hear that. And I'm sure there are lots of good lessons that you and your team have learned along the way. So, super excited to have you on. Folks, for those of you in the audience, if you're just joining the stream, drop a comment in the chat with where you're joining us from. I'm always curious to see how global our audience is. Now, Kence, with that out of the way, should we play a little game to kick things off?

Kence Anderson:

Yeah, let's do it.

Andreas Welsch:

All right, perfect. This game is called In Your Own Words, and when I hit the buzzer, the wheels will start spinning. When they stop, you'll see a sentence, and I'd like you to answer with the first thing that comes to mind, and why, in your own words, right? And to make it a little more interesting, you'll only have 60 seconds for your answer.

Kence Anderson:

All right.

Andreas Welsch:

For those of you in the audience, please pop your answer in the chat as well. I'm curious to see what you come up with as well. Are you ready for What's the BUZZ?

Kence Anderson:

Let's do it.

Andreas Welsch:

Okay, great. Here we go. If AI were a month, what would it be? 60 seconds on the clock.

Kence Anderson:

Yeah, it would be March. It would be the month of March. First of all, in NCAA basketball it's called March Madness, so there's a lot of activity and some madness going on. But it's also still pretty early in the year.

Andreas Welsch:

Awesome, I love that. I think it's the first time somebody's had a sports analogy. I really liked that. So make sure you pick your bracket, and bet on whoever's going to come out on top.

Kence Anderson:

Exactly. And this is the thing about March Madness. My favorite thing about NCAA college basketball is that there's always some team you've never heard of that shows up to the tournament. And there's always a team that you barely know about that goes really far and does surprisingly well. There are also incumbent teams that are expected to do really well but don't win. It happens every single year.

Andreas Welsch:

Wonderful. Why don't we jump to the questions that we've talked about, because I'm really curious to get your perspective and have this time with you.

Kence Anderson:

Nice.

Andreas Welsch:

Now, look, we've talked about machine learning in the industry for a number of years, right? From 2016 through 2019, even almost to the end of 2022. Now everybody's talking about generative AI. That's the new hype topic and the talk of the town. I know you're pioneering a different approach. You've shared that you're pioneering machine teaching. So I'm curious, what is that all about?

Kence Anderson:

Yeah, thanks for asking. It really comes from the idea that if machines can learn, and that's what machine learning is about, then you should teach them. We're not talking about the exact same processes as in the human brain, but about the ability to see different data, across different scenarios, and change your behavior, which is analogous to learning. If something can learn, you should teach it, in order for it to work well and learn efficiently. We guide human learning by teaching, and teaching is really breaking down a task into separate chunks or skills. You see this in almost every team sport, like basketball, baseball, or soccer. You see it in music. There's always this idea of breaking things down into smaller pieces, and expert teachers do it all the time, and then drawing these bounding boxes around promising areas to practice. Some areas of practice are not promising, and some really are. One of the objections I hear is: well, AI, as if it's all-seeing and all-knowing, you're going to limit it by drawing these boxes around it. But that's not what we do with the human brain. The mighty human brain, the only AGI that exists, the only artificial general intelligence: we draw these bounding boxes all the time. And there's no one on earth who can say they've never been taught anything. Teaching guides learning.
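The skill decomposition Kence describes can be sketched in a few lines of code. This is a purely illustrative toy, not from Composabl or any real system; the skill names, the temperature threshold, and the state variable are all invented. A teacher splits a task into separate skills and draws a "bounding box" that says where each one applies.

```python
# Toy sketch of machine teaching as task decomposition.
# All names and thresholds here are invented for illustration.

def skill_startup(state):
    # Gentle strategy while the process is still cold.
    return {"heater": "ramp_slow"}

def skill_steady_state(state):
    # Maintenance strategy once the process is warmed up.
    return {"heater": "hold"}

def select_skill(state):
    # The "teaching": a bounding box that says where each skill applies.
    return skill_startup if state["temperature"] < 60 else skill_steady_state

state = {"temperature": 45}
action = select_skill(state)(state)
```

Each skill can then be practiced or tuned on its own narrow scenario, which is the point about drawing boxes around promising areas of practice.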

Andreas Welsch:

Wonderful. I love how you phrase that, right? It has to be broader than the limited scope we've been teaching systems, or having them learn, so far. I'm taking a quick look at the chat, and I see folks are joining in from Silicon Valley, other parts of California, and Amsterdam. Wonderful, thank you for being with us. Now, when we look at agents, my simple definition is that an agent is a piece of software that can deal with limited complexity, weigh different options, and identify the best choice. It can interact with its environment and take that into its decision-making. So I'm wondering, if we talk about agents, and we talk about machine teaching: when do businesses actually need to teach their AI systems, and what do they need to teach them?

Kence Anderson:

Beautiful. So first let's formalize our working definition of agents. Agents, like you said, take in information, like sensor information or information about the environment, about the world, and they take action. They make decisions. So, for example, a computer vision model is not an agent, because it perceives something, it has an input and an output, but it doesn't make a decision. It doesn't take action. The second thing we should say is that agents, even under that term, have been researched for a long time. I was talking to someone from Siemens the other day, and they were saying their PhD and postdoc work was about agents back in the 1990s. And what we call control systems in industrial engineering, those are agents. The PID controller, which was invented by the U.S. Navy in 1912 to control the rudder on a ship, is an agent. It uses math to calculate what to do, which is maybe a limited way to make a decision, but in the context of certain decisions, a control system is a very useful agent. It takes in information about what's happening, in the case of the U.S. Navy, the heading of the ship and the heading you want the ship to take, and then it makes decisions about the rudder. The second part of your question is really about what kinds of agents are useful for what kinds of things. I like to separate it by two axes. One is the value of the decision you're making, and the other is the complexity, or the risk. Some would say, and I would agree, that complexity and risk belong on different axes, but I'd say they're proxies for each other. So when you have lower-value, lower-complexity tasks, think repetitive tasks, then things like RPA, robotic process automation, maybe an agent that's just an LLM, or a simple control system, can make that decision repeatedly in a very reliable way.
But once you get into that upper quadrant, where there's a lot more complexity and a lot more risk on a higher-value decision, then you need more human-like decision-making characteristics. And I have six of them that I found and discovered over the course of designing these 200-some-odd agents for major corporations. One is perception. So many times I'll go into a factory and ask: what do you need to do to be an expert operator at controlling this? And they'll say: oh, you have to see this color change. You have to hear how the machine sounds, literally. You have to predict this, classify this, cluster that. So, perception. The second is learning itself: the adaptability and robustness that happen when you learn something by practicing in a lot of different situations. You get good at it across a lot of different situations. That's one of the things that's most challenging about optimization algorithms: as the situation changes, they don't do very well. Then you've got things like strategy, pursuing fundamentally different courses of action in different situations. I've had business people and factory operators say to me: oh, in this scenario, when the machine looks like this, you operate it this way; when it looks like that, you operate it in almost the exact opposite way. That's where strategies come into play, because just trying to treat it all as one problem is not going to work. And then there are things like forward planning, deduction, and language: the ability to communicate with the agent, the ability for the agent to communicate with you in natural language, and even the ability for parts of the agent to communicate with each other in natural language.
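The PID controller Kence mentions is simple enough to sketch. The following is a minimal illustration; the gains, setpoint, and the toy "ship" response model are invented for the example, not taken from any real control system. It shows the agent loop he describes: read a heading, decide a rudder action.

```python
# Minimal PID controller sketch. The gains and the toy plant model
# below are invented for illustration only.

class PIDController:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        # Decide an action from the current error, its accumulation,
        # and its rate of change.
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Steer a toy "ship" from heading 70 toward a setpoint of 90 degrees.
pid = PIDController(kp=0.6, ki=0.05, kd=0.2, setpoint=90.0)
heading = 70.0
for _ in range(300):
    rudder = pid.update(heading, dt=1.0)
    heading += 0.1 * rudder  # toy model: heading responds linearly to rudder
```

It takes in information (the heading), makes a decision (the rudder), and acts: exactly the sense in which a control system is an agent.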

Andreas Welsch:

Awesome. That definitely resonates really deeply, because of the human element you mentioned: people who have done this job or this role for a decade or two, like you said, listen for different things and watch for different things. And those are often the things that are not in the operating manual.

Kence Anderson:

There we go. And the word that we're dancing around is expertise. I want to make a clear distinction between expertise and intelligence. Part of our scientific mindset, and I think this happens more in Silicon Valley than in other parts of the world, is the assumption that the more intelligence you have, the better you're going to do. But for those highly complex, nuanced decisions, that's not true. That's the reason why no one's going to hire you as a worker in a steel mill, and no one's going to hire me as a long-haul truck driver. We don't have the expertise. I would like to think I'm very intelligent, and I know that you're very intelligent, Andreas, but we don't have the expertise. So that equation is not true. There has to be a baseline of intelligence, as I mentioned, there are certain intelligence characteristics, but there also has to be expertise. And that's where that human element comes in.

Andreas Welsch:

Now you're making me even more curious. So, good thing we're only at the midpoint of the show. I'm looking at the chat, and there are two questions that I think are highly relevant and interesting. One is: what about agentic RAG? Is that used to teach an agent? Or, to ask the question a different way: how do you teach an agent?

Kence Anderson:

Beautiful. I'm going to ask everyone to buckle up for Kence's opinionated take on agents. First of all, I believe that high-performance agents that can operate in that quadrant, the high-complexity, high-value one, are going to take more than LLMs. It's going to take an engineered approach that combines multiple techniques. In fact, even the original LLMs that we know about used other techniques in them, like reinforcement learning. But statistically significant combinations of letters is not decision-making. It's not a decision-making paradigm at all. There are really only four decision-making paradigms. You can calculate what to do. You can search options for what to do. You can look up stored expertise, which is what an expert system or an expert rule is, or what you're doing when someone says, protect your queen in chess: you're looking up past stored expertise. Or you can learn by practicing. Those are the four decision-making paradigms. Statistical patterns of letter combinations is not one of them. Okay, now that we've got that out of the way, what are some examples of teaching? There are lots of examples of teaching in machine learning. One is AlphaZero playing chess. Its makers said: part of this agent, and I call it an agent because it's a decision-making system, is going to use supervised learning to evaluate the board, the strength of your position, whether your king is safe, all those kinds of things. Another part of the agent is going to look ahead and search for options using Monte Carlo tree search, looking for a good line of play. And another part of the agent is going to, what I call, set strategy by pointing that search at the right region of the space. That's teaching.
That's breaking down a task into skills, separating concerns: figuring out what's happening on the board and looking ahead for what to do are completely separate concerns. A teacher knows you can't learn those two things simultaneously, so they created separate modules. That's teaching. Another example is ChatGPT, when it was trained using reinforcement learning from human feedback, and humans were saying: that is not an appropriate response to this kind of prompt, and that is an appropriate response to this kind of prompt. That's teaching. Another is guidance-based rules. When using deep reinforcement learning, you can use what's called action masking, a set of rules during learning that says: you're not allowed to choose this right now, or you're not allowed to choose that right now. Or when model predictive control sets constraints and says: you cannot turn the wheel that hard on the car, or else you're going to flip it over. All of that is teaching. And sure, an agentic RAG approach to LLMs could very well be a great teaching method, but I just want to make sure we're breaking out of the paradigm that says all teaching for agents must use LLM-specific techniques. Because one, teaching goes far beyond that, and two, effective agents, in my opinion, are going to go beyond strictly LLMs.
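Action masking, one of the teaching examples given above, can be sketched like this. Everything in the sketch is made up for illustration (the actions, state fields, and pretend Q-values), and no real RL library is involved: rules forbid actions that are invalid in the current state before the agent picks one.

```python
# Illustrative action-masking sketch. Actions, state fields, and the
# pretend Q-values are invented for the example.

ACTIONS = ["open_valve", "close_valve", "increase_heat", "decrease_heat"]

def action_mask(state):
    """Teaching rules: mark which actions are allowed in this state."""
    mask = [True] * len(ACTIONS)
    if state["temperature"] > 95:        # too hot: never add more heat
        mask[ACTIONS.index("increase_heat")] = False
    if state["valve_open"]:              # valve already open
        mask[ACTIONS.index("open_valve")] = False
    return mask

def select_action(q_values, mask):
    """Greedy selection restricted to allowed actions only."""
    allowed = [(q, a) for q, a, ok in zip(q_values, ACTIONS, mask) if ok]
    return max(allowed)[1]

state = {"temperature": 98, "valve_open": True}
q_values = [0.9, 0.1, 0.8, 0.2]  # pretend these came from a trained network
chosen = select_action(q_values, action_mask(state))
```

Even though "open_valve" has the highest pretend Q-value, the mask rules it out, so the agent never wastes practice on, or executes, a forbidden action.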

Andreas Welsch:

That's why I'm so glad to have you on, to put into perspective all the different ways and options, especially when our industry is so focused on generative AI and large language models, right? Great to hear the fact-based and experience-based approach that you're sharing. Now, we've talked about the need for teaching and the opportunities of teaching your agents. I'm curious, what are some of the prerequisites? How do you actually teach that machine? Don't we already have all the data in the world that we need in these foundation models? What else do we need? Isn't everything already there?

Kence Anderson:

Yeah, that's a really great question. So first of all, let's talk about what expertise is. A lot of times in optimization, you'll see these graphs that look like maps, topographical maps really, that show an agent or an algorithm finding a solution. Often it's about local minima or local optima, where you're trying to find the highest point, or the lowest point, in that topography. Expertise is like guideposts or landmarks. It's understanding that geography because you've been there before. Now, you might not have a complete map. When I went to the Naval History Museum in Madrid, it was so amazing, because they had these maps of the world from like 1300, 1400, 1500 that were incomplete. You'd see the west side of South America, but on the Brazil side there was nothing there; they hadn't seen it yet. Or you'd see parts of different continents. That's what expertise is like. So when an expert says, protect your queen, or, always shoot a layup when you're close to the basket, they're not literally saying always shoot a layup when you're close to the basket. They're saying: based on how this task is structured, you're going to experience a much higher likelihood of success if you use this skill in this way; now I want you to practice it and identify all the exceptions. So what organizations need to do is look for areas where it takes extreme expertise to succeed. I hear a lot of executives saying: oh, it takes 10, 20, 30 years to be great at this, and my experts are leaving, they're retiring, the baby boomer generation, the Great Resignation, all those kinds of things. So you want to look for high-value skills. Almost all the agents I've ever designed are worth at least a million dollars of ROI for a single-digit improvement in the KPI; some are worth much, much more than that. So folks are saying, this is a high-value skill, maybe crucial to my operation. That's criterion one.
Criterion two: it takes a lot of expertise. It takes 10, 12, 15, 20 years to learn how to do this well, so there will be a small number of experts and a large number of people who aren't as good. I've had people say, for all sorts of use cases, from industrial factory control to financial planning and commodity trading, so many different decisions in the enterprise, that when the expert is at the controls, it goes 50, 60, even 100 percent better than when everyone else does it. That's your second criterion. And your third criterion is those decision-making, human-intelligence characteristics that we know agents can bring. When you combine those, you say: okay, these are the kinds of use cases where agents are going to radically outperform, and they're going to be able to store these invaluable skills. And then the second part of the answer to the question is that it's the expertise that you're actually teaching. There are going to be a lot of different ways. For example, knowledge graphs: folks are saying, okay, we can integrate knowledge graphs with LLMs. A knowledge graph or an ontology is a set of information or skills that you want to enforce in the way the LLM operates. That's teaching, too. So there will be lots of different methodologies for teaching machines, but it's always teaching existing expertise, that knowledge of the landscape and how it works.

Andreas Welsch:

I love how succinctly you're expressing that and how you're simplifying these complex constructs. That makes it very accessible. Now, I'm sure folks are wondering: this sounds great, agents will be here at some point. Maybe one part of the question is, when is that point going to be? I have a feeling part of the answer is: it's already here, otherwise you wouldn't be building this. The other part of the question, to me, is always: how do you get started? If you're in an organization, in a data or AI role, or an IT role, you know there's something coming, it's called agents, and it can help you with all these different things and help you make decisions under uncertainty. But what does it really take? Where do you start, whether it's learning more about it or actually getting your hands on it and trying it out?

Kence Anderson:

Yes. First, I want to point out a couple of pitfalls. There is this narrative, which has pitfalls in it, that says you must go and instrument IoT and collect all this new data in your organization before you can even think about analyzing that data; then maybe you can predict something; and then maybe you can do something. We know that narrative is false, because before all of those new techniques and technologies, control systems and operations research algorithms were already doing some of the work that agents do. So don't fall into that trap. But to your point earlier, data is important. Data is certainly important for training any agent. Any agent needs some expression of what the results of the actions it takes are going to be, and that's always going to come from your historical data. But you want to start by really understanding the technologies. And, shameless plug for my book: I wrote a book for O'Reilly in 2022 called Designing Autonomous AI. There's no math and no coding in it. What I've tried to do is outline the different agentic technologies, if you will. I wrote this book before the LLM craze, but you can easily add language to that picture. So I would encourage you, whether it's my book, my courses, or the great courses other people have out there, but not the math courses. The math of how machine learning works is not going to help you get started; the engineering of it will. What engineers tend to do, and I'm a mechanical engineer by training, is ask: what are the tools, the hammers, wrenches, and screwdrivers, what are the tasks that need to be done, and then match those up. That's the level at which you need to be thinking, even if you're a technical person, before you dive into the linear algebra and matrix math behind machine learning, if you want to do this in real life.
The second thing I would really encourage people to do is create hybrid teams. Data scientists by themselves are not going to solve all of this. There are people with expertise, and the people with expertise need help from the AI experts to understand these algorithms and how to put things together. I see people forming these hybrid teams, and they're working really well. And the last thing I'll say is: find levels of abstraction. Every new technology starts out at the infrastructure layer, where a very small set of deep experts creates everything from scratch at a very low code level. In order to do this in real life, you're going to need to find the next levels of abstraction, which are platforms, solutions, and apps. Platforms, solutions, and apps require different degrees of expertise from your teams and offer different levels of customization. But you need to get above the infrastructure level, above the deep experts, in order to do this at any scale.

Andreas Welsch:

Thank you so much. I think that brings us almost to the close of the show. I was wondering if you could summarize the three key takeaways for our audience today before we wrap up. There was so much good information in there.

Kence Anderson:

Thanks so much, Andreas. I really appreciate it. I would say, first of all, any intelligent system, in order to perform high-value tasks well, is going to need to be taught. You're going to need to teach it something. And if it's not working well, it may be because you're not really teaching it much of anything. That's true for humans, and it's true for machines, or intelligent algorithms, as well. The second thing I would say is that you can get started now by understanding the right things about data and LLMs, and understanding where each of the high-level pieces fits in, with things like deep reinforcement learning and optimization. And the third thing I would say is that the folks who take an engineered approach, a hybrid approach, to putting together these agents from the best building blocks, regardless of which technology is the most popular that day, are going to win.

Andreas Welsch:

I'm taking one final look at the chat, and I saw there was a question from Sean. Maybe we can get to that again. What were the decision-making paradigms that you mentioned earlier? I think it was the four different options that you shared.

Kence Anderson:

Yes. Math: you can calculate what to do next. You can always do that; the field of control theory covers that. You can search options for what to do next; that's what optimization algorithms do, and operations research is a field of study specifically dedicated to searching options for what to do next. You can recall stored expertise, which is what an expert system is, or what expert rules are. Or you can learn by practicing, which is what deep reinforcement learning, or reinforcement learning, seeks to imitate, and which humans do all the time.
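As a compact way to remember the four paradigms, here is a toy one-liner for each. Every function, rule, and number below is invented purely for illustration, not drawn from any real system.

```python
# The four decision-making paradigms as toy functions (illustrative only).

def calculate(error):
    # 1. Calculate what to do next (control theory), e.g. proportional control.
    return 0.5 * error

def search(options, score):
    # 2. Search options for what to do next (optimization / operations research).
    return max(options, key=score)

RULES = {"queen_attacked": "protect_queen"}
def recall(situation):
    # 3. Look up stored expertise (expert systems / expert rules).
    return RULES.get(situation)

def practice(policy, reward, lr=0.1):
    # 4. Learn by practicing (a reinforcement-learning-style parameter update).
    return policy + lr * reward
```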

Andreas Welsch:

Thank you so much, Kence, for sharing your expertise with us today. It was really wonderful hearing from you what's top of mind when it comes to agents, how to take a conscious approach to what to pursue, and what agents even are. I think that's super important when we're in a hype cycle and everybody's jumping on the next topic: understanding what it really means. So thank you so much for that.

Kence Anderson:

Thanks for having me, Andreas. It was really fun.