The Human Code
The Human Code" podcast unravels the intricate blend of technology, leadership, and personal growth, featuring insights from visionary leaders and innovators shaping the future. Host Don Finley dives deep into the human stories behind technological advancements, inspiring listeners at the crossroads of humanity and tech.
Unlocking AI's Potential in Consumer Goods: A Conversation with Kumar Srivastava
The Human Code: Kumar Srivastava on AI's Impact and Future
In this episode of 'The Human Code,' host Don Finley is joined by Kumar Srivastava, a creative strategist and CTO. Together, they delve into the integration and impact of AI in everyday life, focusing on its ability to make tasks more efficient and aid in decision-making. They discuss the evolving interaction between humanity and technology, touching upon the ethical considerations and future challenges posed by AI. Srivastava offers insights into the importance of understanding problems deeply, rather than just focusing on solutions, and how this approach can guide the effective application of evolving technologies. The conversation also explores the long-term societal impacts of AI and emphasizes the importance of retraining and adaptability in the workforce.
00:00 Introduction to The Human Code
00:49 Meet Kumar Srivastava: AI and Technology Enthusiast
01:08 The Intersection of AI and Everyday Life
03:23 Humanity's Dependence on Technology
06:42 The Future of Human-Technology Interaction
13:49 AI's Role in Product Design and Consumer Goods
16:53 The Ethical Implications of AI
27:04 Preparing for an AI-Driven Future
30:09 Conclusion and Sponsor Message
Welcome to The Human Code, the podcast where technology meets humanity, and the future is shaped by the leaders and innovators of today. I'm your host, Don Finley, inviting you on a journey through the fascinating world of tech, leadership, and personal growth. Here, we delve into the stories of visionary minds who are not only driving technological advancement, but also embodying the personal journeys and insights that inspire us all. Each episode, we explore the intersections where human ingenuity meets the cutting edge of technology, unpacking the experiences, challenges, and triumphs that define our era. So, whether you are a tech enthusiast, an aspiring entrepreneur, or simply curious about the human narratives behind the digital revolution, you're in the right place. Welcome to The Human Code.
In this episode, we are excited to welcome Kumar Srivastava, a creative strategist and chief technology officer with a passion for building delightful products. With a career deeply rooted in machine learning and technology, Kumar brings a unique perspective on how AI and technology intersect with humanity. Today, Kumar and I discuss how AI is integrated into everyday life, making tasks more efficient and aiding in decision-making processes; his vision for a future where intelligent systems handle mundane tasks, allowing humans to focus on their passions; and the importance of understanding problems in depth, rather than just focusing on solutions, in order to effectively apply evolving technologies. Join us as we delve into these topics with Kumar. This episode is packed with valuable perspectives that will inspire you to rethink how you approach technology and its role in your life. You won't want to miss it.
Don Finley:Welcome back for another episode of The Human Code with Kumar Srivastava. I gotta say, we've had a lot of fun talking earlier on our pre-show introduction, so let's get down to it. Kumar, thank you for being here. And also, what is it that excites you about that intersection between humanity and technology today?
Kumar Srivastava:Thank you. Thanks for inviting me. I'm super excited to join and talk about this stuff. My entire career has been somewhere along either the outskirts or the middle of ML and technology in general, so this is something I think about all the time for sure. I think technology is only as useful as how it impacts humanity. It doesn't exist, or it doesn't have any value, without that lens, without that context. We all have things that motivate us, that influence us, that drive us, and good technology, or useful technology, is technology that helps us meet or achieve our individual goals and objectives. And they could be anything: professional, personal, or somewhere in the middle. But really, if technology exists, its value is in how it makes us more human in our ability to achieve our goals and desires. So that's how I think about it: there's value to it only in the context of humanity.
Don Finley:And I think you're right, or at least I agree with you. We might both be wrong. But technology really does have to serve that human need, right? It has to fulfill something. At the same time, we're coming onto a technology now that is getting more legs. It is the first tool that we have that can actually make decisions for itself in some capacity. How do you feel about the state of our alignment with this technology?
Kumar Srivastava:We are so early on. There are two things here. One is that we are far along in the evolution of technology. We've seen a few iterations already, since electricity and then the internet and whatnot. But another way to look at it is that we're still at the beginning of what's actually about to happen. It's hard to imagine, because technology has progressed exponentially in its ability to deliver on what we might be looking for, so it's hard to tell how far along we might be in just the next 10 years, given the last 10 years. I feel that it's permeated every aspect of our lives in general, and I think it's good and bad. There's more information, which makes us more productive, which makes us more intelligent. We're smarter just because we've had access to Google for the last 10 years, and that probably caused this latest wave of innovation, especially with gen AI or LLMs. The idea of indexing content started with search engines, and then that became the input to creating these models that can predict the next word or the next phrase, and that's how you get gen AI. So we don't know what that next level will be, but I think it is everywhere all around us, and it's just getting better and better, and at some point it just becomes something that you expect to be around you all the time. And this is us in privileged positions, not having to imagine this or experience it much. But if a storm comes in and power gets knocked out, you suddenly realize that we are not prepared any longer to live in a world without technology, because it's just such a big part of us. Not just physically, where we won't have lights to turn on; emotionally, what do you do when you're not attached to a device or a stream of information coming in?
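To make that indexing-to-prediction jump concrete, here is a toy sketch, nothing like a real LLM, of a model that counts which word tends to follow which and then guesses the next one. All names and the example text are illustrative only.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, the words that follow it in the text."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model: dict, word: str):
    """Return the most frequent follower of `word`, if one was seen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# The "index" here is just a pile of text; real systems index the web.
model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> "cat"
```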
Don Finley:And you look at humanity and technology, and they're somewhat joined at the hip, right? The second that we lit that first fire, or rolled something on a wheel, we gained the productivity from it, and then we built society around it as well. So you can't really go back from where we're at, because even looking at what happened with global supply chains during the pandemic, we saw how fragile they are. Everybody was running to the store to get toilet paper. And if the lights go out, those are the kinds of challenges we would have to go back to. My favorite example is my Wall Street friends: they spend some time on Wall Street, they make some money, it's a very stressful job with long hours, and then they decide that they're going to go become farmers. The money's great, it helps them buy the farm, but when they get to the farm, they realize that they don't have the skills to be a farmer. And I think where we're at with technology is in that boat. We are tied to it, as a society and as individuals; we've learned how to interact in this way and we've grown into it. Where do you see that interaction changing between ourselves and technology in the future?
Kumar Srivastava:To your example, it's not just the skills, it's the patience. What technology has done is change our perception of how much patience is required for a task or a goal to be met or achieved. You can figure out how to do farming right in that example, but it requires patience and at least four or five iterations, which is probably over five years, because you have to do five crops to really get good at it. And we don't have that patience any longer. So I think the definition, or the idea, of humanity ends up changing with every technology revolution. What does it mean to be human? Maybe this is what it is: we're no longer designed to learn or understand things that take years and years of waiting and iterations, or it becomes a specialized skill set. Something that was very common and everyone had to go through is now an advantage to be able to do, because it's just not normal or expected any longer. So in the future, if you look at all the trends, around wearables and what they have become, around AR and VR, around LLMs building on top of search engines, we don't have to spend time and energy gathering information, and that's probably going to be the main driver of our interactivity with technology. The pattern is: there's a lot of information coming at you, there are going to be systems and software and tech that reduce that down, and that's the complexity of reducing all of this information into something that you can use to make a decision. And then there will be systems that help you make those decisions, and there'll be a specific mode in which you delegate the decisioning to a system that is getting its own information from all of these channels. That's the self-driving car. You delegate away the responsibility to do something that we as humans have been doing for 50, 60, 80 years to a system that collects its own information, makes its own decisions, and tries to get you to the objective that you have. It's the same thing with the common example of Netflix or Amazon: they know what you want to shop for or view, so you've delegated that responsibility. So our interactivity will be driven by that, sometimes in a good way, sometimes in a bad way. We will be able to pick and choose the decisions that we want to take ourselves, and the rest we'll be able to offload to intelligent systems and technology, both software and information processing, and actuators and mechanical things. Robotics and the advancements there are an example of that. It seems like most people want someone to wash the dishes, and we've already offloaded that to dishwashers; now the difference is a robot's going to place them in the dishwasher and then put them back onto the shelving. So yeah, we will offload the decisions we don't want, and our interactivity will be either in that entire process of offloading a particular decision, or in focusing the rest of our interactivity on systems that we want to interact with. So if you're doing something creative, like writing something or painting something, you might want to use technology to help you do that better, but you want to be part of the process. And there will be other interactions where you don't want to be part of the process.
Don Finley:Let me see if I can summarize this. You're seeing the interaction become a point where the human is deciding, hey, this is what I really want to be participating in, this is where I want to grow, this is what I want to provide to the world, and then having the intelligent systems surrounding that person help them either focus on that, accomplish that, or take away the things that don't align in that space. That's a really cool world. I dig that.
Kumar Srivastava:Yeah, and we see this already; there's nothing new here. If you look at word processing, it started with little red lines under text that you typed in, and now LLMs will write the whole essay for you. That's how it started. Self-driving is the same: it used to tell us that we are drifting into the second lane or there's a car in front of us, and now it can drive itself. So I think technology takes over the highly repetitive, high-frequency, mundane tasks that tend to have very little diversity in the possible outcomes or the choices you have to make. That easily gets offloaded to any sort of intelligent technology. And then there's anything that's creative, or where you want to remain in control. It's the same if you're building an intelligent system: you have to do a lot of grunt work to understand the data that's coming in, you have to move the data into a data processing environment, and then you have to produce the algorithms themselves to process that data. Everyone wants to write the algorithms that generate the very cool stuff. Nobody wants to do the grunt work. So there are always companies out there, startups and big companies, creating the software to make that grunt work easier, because people want to offload that work to someone else, to some other system, so that they can focus on the cool data science, which everyone wants to do. And I think that applies everywhere.
Don Finley:That's true with everything, and that's what you're getting at: 90% of the work can probably be automated away, or is not really something that people get excited for, but it's an absolute necessity. And I think it's probably in both of our best interests to reiterate how important data pipelining is, as well as goal setting, for these model generations. Because if you don't get those two things right, the model itself is inconsequential.
Kumar Srivastava:It doesn't work. There's no value to it. And the grunt work ironically needs the highest quality, and thus it's best suited for technology, because there's no variability from different people, different times of day, weekend versus weekday. The machine's going to do it the same way every time, within its error parameters. But as a human, if you had a bad day or didn't have coffee, you might miss something, and then it causes an issue five years later.
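To ground that point, here is a minimal, hypothetical sketch of the kind of deterministic pipeline check Kumar is describing, one that runs identically on every batch regardless of who triggers it or when. It assumes pandas is available; the column names and file path are made up for illustration.

```python
import pandas as pd

# Illustrative schema for a product-formulation dataset.
REQUIRED_COLUMNS = {"product_id", "ingredient", "quantity_g"}

def validate_batch(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the same checks on every run: schema, nulls, and ranges."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    if df[list(REQUIRED_COLUMNS)].isnull().any().any():
        raise ValueError("null values in required columns")
    if (df["quantity_g"] < 0).any():
        raise ValueError("negative quantities are not allowed")
    return df

# Downstream modeling only ever sees data that passed the same gate.
clean = validate_batch(pd.read_csv("formulations.csv"))  # hypothetical file
```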
Don Finley:That reminds me of a study done about judges and sentencing. They figured out that you want to get sentenced right after lunch. So there's a definite impact from what time of day you go in to get sentenced, and you could possibly end up in jail a while longer if the judge happens to be hungry. So that's a great point you bring up. I know you're working on some cool things right now as well, and they fall into the idea of taking away some of the time it takes to actually get results out. I imagine you're seeing plenty of benefit from doing that, because your customers are very happy with what you're providing. But additionally, how else do you see that you could better serve the community that you serve?
Kumar Srivastava:The biggest thing, what we do currently, is we focus on helping large companies design better products for their consumers. And it's very different, it's consumer packaged goods. So these are not consumer apps or software, it's still,
Don Finley:Is it like the items that you find in the grocery store?
Kumar Srivastava:Right, exactly.
Don Finley:Yeah.
Kumar Srivastava:Cookies and ketchup and mayo and so on. And the coolest part of this is that you are helping these companies design products that are healthier, better for the environment, cheaper to produce, and of the same quality, all of these objectives that you might think of. We build software that helps our customers create these products and bring them to market, and the coolest part there is that it's actually an application of technology, in this case machine learning and AI and gen AI, to create or enable something that I, as a person, as a consumer, interact with daily. I can walk into the store and see the products that the software I would have influenced or worked with helped enable our customers to bring to market. There's a real connection between a physical good, being able to design it and bring it to market, and then somebody being able to consume it. Which is different from other stuff I've done in the past. I started my career at Microsoft working on anti-spam and anti-phishing technology, which is, again, consumer focused, and you're experiencing the benefit of that technology. These are machine learning powered systems that would predict whether an incoming email is malicious or not and put it in the right folder, something that's very common now. This was 20 years ago; it was brand new then, and spam was a huge problem at the time. But again, that was a virtual experience of the benefit of technology. In my current role at Turing Labs, it's really about how we have influence: there's a direct line between the work we do, what we enable our customers to do, and the goods and services, actually goods, that consumers get to purchase and eat. And over the years I've realized that the value of high quality food is so critical to health and just humanity and longevity that being able to influence that process is actually personally very satisfying.
Don Finley:Oh, that's really cool. And it is true, you are actually helping to make a healthier world by helping the companies to design
Kumar Srivastava:Right. Right.
Don Finley:well. And that's a really nice little ad every time you go into the grocery store.
Kumar Srivastava:Yeah, absolutely.
Don Finley:Yeah. All right, so I'm going to softball this one up to you. What's the question that you're hoping I don't ask?
Kumar Srivastava:I think the biggest one, the one I'm always seeking answers to and different opinions on, is how pervasive and how destructive can AI become, right, with the whole AGI question. And there are much smarter people who are thinking about this problem. But that's the question I think about a lot: where does it go wrong, and where can it go right?
Don Finley:This is the question of the nuclear bomb versus nuclear energy: one is powering homes and one is destroying homes. What do you think the nuclear bomb is for AI? Is it literally just an AI that destroys homes?
Kumar Srivastava:No, I think there's a short term and a long term, and this is the conclusion I've reached. Because of the hype, we're seeing a lot of shift in the market, where we are changing how everything is done, and the assumption there is that you need a lot fewer people to do what used to require a lot more. The short-term negative impact is that it's still too early to see how big the impact will be. It's very similar to other shifts: things stabilize over time and you reach an equilibrium. Before online search became a big deal, you had to go to the library and read a bunch of books and make your notes. But then that technology became pervasive and was accessible to everyone, so it just shifted the playing field. There's a phase where some industry or some people have an unfair advantage because they have early access, but that's the risk with early adoption. Eventually, search became the de facto way of finding information. So I think right now it's too early to see what that stabilization will be with AI. And I feel that a lot of companies will be making decisions about changing how their organizations look, which has a real impact on a lot of real people, before realizing where things will stabilize. If you're in an environment where you're competing for the same consumers or the same customers, and you have this gen AI advantage because you can write marketing copy faster, then maybe a year later, maybe five years later, the entire market, all the people competing in that environment, have access to the same tech. At that point, just like with spell check and now writing content, everyone is equally more productive, but the work still needs to be done. And what that means is that, unless you're in an industry where you can completely eliminate the human, I see the problem as the definition of the work changing, and the bar to compete or to be differentiated changing and adjusting, but not going away. Which means I feel the problem is less about being in a hurry. We don't want to change the organizational structure too quickly, but rather see this as a problem of retraining or retooling, because it's too early. There's too much information in typical enterprises that is not found in any sort of AI yet, because most AI is trained on public sources, especially in the LLM world. So there's a lot of information that still has to be coded in, and then there's a lot of information that will be generated with the use and application of these new technologies in existing workflows, and that also has to be fed back into an intelligent system. So it's hard to tell. What I worry about is that a lot of the potential that exists in leveraging these internal sources of information, the parts still to be coded in, could be lost if organizations change too quickly.
Don Finley:I like how you're thinking about this, because it is a problem of adopting something too quickly before you understand it. And I can tell you from the investor space, we work with startups and help some of them fundraise, and I have my own startups that we're also raising funds for, and every investor that we talk to wants to talk about AI. They want to talk about it. Sometimes we have a really compelling story, like we know how we can adopt the technology to what we're doing, but then there are some businesses where you want to be like, no, we really need the human aspect of this. There is that connection, or that ingenuity, in the pipeline of information flow that makes it necessary to get someone involved. And we've seen this over the last 40, 50 years: company sizes are getting smaller in order to generate a certain amount of revenue, right? In the eighties, I probably needed 50 people in order to generate 10 million. Today, you can generate 10 million, even on an inflation-adjusted basis, with only a handful of people. Altman got on stage and said that he has a bet going with his friends that they're going to see the first one-person business with a billion-dollar valuation come about. But I like where you're at, where it's like, you know what, companies implementing this need to look at it not just as, hey, we're going to be saving bodies, but also, what is the quality of work that's going to be done by the people who remain, and are we really going to be able to cut or shift that much work off of people? So a methodical process, I think, is what you're looking at: create the experiments and go. But additionally, there's the retraining that's necessary.
Kumar Srivastava:Retraining, and shifts. Some aspect of it is that it just shifts the work from the input to the output. For example, no system is perfect. It might be that nine out of 10 times you're able to complete a workflow automatically because of some intelligence in there. But the one out of 10 where it fails is not going to be easy. If it's a service ticket, it's not like that ticket is somebody who forgot their password and that's the one you have to reset. The one thing that you cannot do with AI will be the most complex, and thus will require the most ingenuity and human attention to solve, because the easy stuff has been taken care of and the bar keeps getting higher. What is complex and harder, and the reason AI could not do it in the first place, is that there's not sufficient data to extract the pattern and train a model to solve those problems at scale. So it's hard to tell what the difference in the effort required will be. It might be that you definitely need fewer people, but you need them to be highly skilled and extremely creative to handle that error rate. Or it might be the other way around; it's going to vary by organization. I feel that because what you feed into the system is what you get out, the problem here is that the hardest problems generally don't have sufficient data, which means you cannot train a system to intelligently predict a good outcome in those situations, which means you cannot replace whatever you're trying to solve entirely with technology. And it's not a static world, which means the hardest problems will keep getting harder and will be different from what we are seeing today. Just like, I don't know why it's coming to mind, but superchargers: nobody had the problem, nobody thought about the problem, that if Tesla stopped building supercharger stations it would slow down the electric car market, because nobody else is solving that problem. That problem didn't exist five years ago, but now it exists. So now what do you design? How do you create systems and policies and environments where everyone else can do this, or convince Tesla to do it, or protect these things? The problems themselves change.
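A rough sketch of the nine-out-of-ten pattern Kumar describes might look like the snippet below: automate the cases a model is confident about and route the leftover, harder ones to a person. The classifier here is a stand-in with made-up names, not a real service-desk API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    label: str
    confidence: float  # 0.0 to 1.0

def route_ticket(text: str, predict: Callable[[str], Prediction], threshold: float = 0.9) -> str:
    """Auto-resolve high-confidence tickets; escalate the hard remainder."""
    result = predict(text)
    if result.confidence >= threshold:
        return f"auto-resolved as '{result.label}'"
    # The leftover cases are, by construction, the ones the model is least
    # sure about, which is exactly the work that stays with people.
    return "escalated to a human agent"

# Stand-in model that only recognizes password resets.
def fake_model(text: str) -> Prediction:
    if "password" in text.lower():
        return Prediction("password_reset", 0.97)
    return Prediction("unknown", 0.40)

print(route_ticket("I forgot my password", fake_model))             # auto-resolved
print(route_ticket("Billing looks wrong since March", fake_model))  # escalated
```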
Don Finley:Exactly. We invent new roles. The children of today, the number one job they want is YouTube creator. That didn't exist when I was growing up, and new roles keep coming about. I think for me, the nuclear bomb of the short term is that we are not able to retrain people fast enough for what comes out. Four years ago, we weren't talking about white collar, highly skilled, in-office jobs being automated right away, but ChatGPT definitely changed that. Whereas four years ago, we were really talking about what would happen if we got self-driving cars, and more specifically self-driving trucks, and the impact on the 10 million plus jobs that are there. What happens with those people?
Kumar Srivastava:That has the potential to destabilize, or impact in a negative manner, so many different things. Because this goes back to that saying, the empty mind is, what's the thing? I forget. But if people have nothing to do, then what do you do? Not everyone's going to start painting Picassos. It's going to create societal problems that we haven't, and I don't think we understand yet what they are.
Don Finley:And I agree with you. We're not going to fully grasp it, or I don't think we fully grasp the challenge yet. But I also have a feeling that the systems we have today to handle that sort of social flux, like unemployment or career assistance programs, aren't going to be able to handle the capacity with which we will see this change. And if we continue on this exponential curve, the amount of change that we'll be experiencing in the labor market will continue to increase.
Kumar Srivastava:Right.
Don Finley:And that spiral feels like a very bad place. But along these lines, bringing this back to things that we can control, what would you recommend to students of today, people who are early in their career, or even somebody who's mid-career? What are the skills they can look to adopt to be more effective, or to prepare themselves for the changes that are here today and coming soon?
Kumar Srivastava:Yeah, there's one constant thing that I apply to product design, software product design specifically, but it's generally applicable to everything: technology is change. Your solution can evolve over time to be better, faster, cheaper, but what remains constant, or more constant than the technology and the solution, is the problem itself. So understanding and being an expert on the problem is far more important than, say, being able to understand a tool like ChatGPT, write a query, or configure the tool by being a prompt engineer or something along those lines. It's more important to understand what the problem is and be an expert at the problem, because no problem is uniform or a single problem; there are always nuances to it. So one thing you can prepare yourself in, while staying adept and familiar and keeping track of what's actually happening on the technology side, is becoming an expert at quickly identifying what technology to use and how to apply it to a nuance of a problem. That requires a very deep understanding, being able to break down a problem into its constituent problems and subproblems and then think it through. You hear this in the AI world with multi-modal systems and multiple agents working together: that's all basically a problem that's been divided up into many smaller problems, with the appropriate technology applied to each. But that can only be done when you understand the problem in depth, and problems are constant. People will always want a healthier food item. People will always want a safer car. So understanding what makes a car not safe is more important than the current technology of building safe cars.
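One loose way to picture that decomposition idea in code: the framing of the problem and its subproblems stays stable, while the handler attached to each subproblem is the swappable technology of the day. Everything named below is hypothetical and deliberately simplistic.

```python
from typing import Callable, Dict

# Stand-in handlers; each could be swapped for a fancier tool later
# (a similarity search, a trained model, an LLM) without reframing the problem.
def parse_formulation(spec: str) -> dict:
    return {"ingredients": spec.split("+")}

def suggest_substitutes(state: dict) -> dict:
    swaps = {"sugar": "date paste"}  # placeholder for a real ingredient search
    state["substitutes"] = {i: swaps.get(i) for i in state["ingredients"]}
    return state

# The decomposition itself: named subproblems mapped to today's tools.
PIPELINE: Dict[str, Callable] = {
    "understand the current recipe": parse_formulation,
    "find healthier substitutes": suggest_substitutes,
}

def solve(spec: str):
    state = spec
    for step, handler in PIPELINE.items():
        state = handler(state)  # swap the handler, keep the step
    return state

print(solve("flour+sugar+butter"))
```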
Don Finley:I really like that divide, right? Because we can sometimes get too wrapped up in the latest flavor of the day. But the things that come back to us are the problems; they create the opportunity, and within that, we can decide what tools come into the mix.
Kumar Srivastava:And you want to design your solution so that it can handle the next evolution in the tool, because the tool is going to keep changing. What we have today is not going to be what we have a year from now, and it wasn't what we had a year ago. So if your solution cannot handle progression, it becomes obsolete and gets replaced. That's what's really critical: what makes the solution able to deal with changing technology, and how do you design that? And the only answer I've found, learned from, again, smarter people than I am, is focusing on the problem and being an expert at the problem, not at the solution.
Don Finley:Nice. I gotta say, it's been an absolute blast talking to you today. I really appreciate having you on the show and getting a chance to share your thoughts and wisdom with the rest of the audience too.
Kumar Srivastava:It's been my pleasure. Thank you for inviting me.
Don Finley:Thank you for tuning into The Human Code, sponsored by FINdustries, where we harness AI to elevate your business. By improving operational efficiency and accelerating growth, we turn opportunities into reality. Let FINdustries be your guide to AI mastery, making success inevitable. Explore how at FINdustries.co.