Infinite ML with Prateek Joshi
The podcast where Prateek Joshi interviews world-class AI founders and VCs. Each episode dives deep into a specific topic in AI. You can visit prateekj.com to learn more about the host.
Infinite ML with Prateek Joshi
The Long Tail of AI: Understanding and Resolving Edge Cases
Michael Kohen is the founder and CEO of SparkAI, a platform that delivers real-time resolutions to long-tail AI exceptions in production. SparkAI was acquired by John Deere. He was previously the co-founder of Wonder, and has also held roles at Zoox and Hero. He received his bachelor's degree from Harvard University.
Michael's favorite book: Star Maker (Author: Olaf Stapledon)
(00:00) Introduction and Definition of AI Edge Cases
(02:24) Examples of Edge Cases in Everyday Life
(03:09) Importance of Edge Cases in Various Industries
(04:09) Challenges in Resolving Edge Cases
(08:53) The Potential of Reinforcement Learning in Addressing Edge Cases
(10:48) Continuous Updating and Retraining of AI Models
(29:28) Lessons Learned in Handling Edge Cases
(31:39) Exciting AI Breakthroughs in Hardware Development
(36:18) Advice for Founders: Focus on Creating Value for Customers
--------
Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi
Prateek Joshi (00:01.572)
Michael, thank you so much for joining me today.
Michael Kohen (00:04.462)
Absolutely great to be here.
Prateek Joshi (00:06.564)
Let's start with the fundamentals. Can you explain what an AI edge case is?
Michael Kohen (00:16.142)
Absolutely. So there's probably a better technical definition out there, but the way I like to describe it is: an event or situation that, for whatever reason, looks different from what an AI model was trained to expect.
Prateek Joshi (00:38.629)
Interesting. And if you think about where an average person would see an edge case, so to bring it to life, can you provide a couple of examples of a notable edge case that I may have seen in real life?
Michael Kohen (00:57.07)
Sure, sure. Well, let me take a step back. One might think about edge cases as these things that are always unique, but that's not really the case. I like to think about edge cases in two categories. One is an event or situation that's actually perfectly mundane, but for whatever reason, in that moment, looks different, looks unique, to the AI model.
So that's one bucket. And the second bucket is a truly unique situation that an AI model was not at all trained to expect, that was not in any of the training sets, that nobody could have predicted. One of the really interesting things about edge cases is that we as humans really take them for granted. You see edge cases all day long when you're walking down the street; depending on where you live, you're constantly encountering people and things that are different. And you have a super high-performing model running in your head that's able to very accurately categorize what you're seeing and understand: do I avoid this person? Do I walk around? How do I handle this edge case? But think about the crazy things you might encounter just walking around New York City: a clown holding balloons, walking with a ladder, dressed in a colorful suit. These things are happening all day, every day, and we take them for granted. Then we put an AI model in the real world, and that's when things get interesting.
Prateek Joshi (02:50.02)
Perfect. And if you think about the AI products out there, they're live, people are using them. Why are edge cases so important? Like why do we care about them and how do they impact the performance of an average AI system?
Michael Kohen (03:09.87)
Yeah. I love, by the way, that we're spotlighting edge cases in this conversation, because they don't get a lot of attention and they should. Fundamentally, I think it really depends on the type of product you're trying to build. If it's a self-driving car, a multi-thousand-pound piece of equipment, edge cases are really impacting safety, and we can talk a little bit more about that. If it's a computer vision model that you're using to do...
Prateek Joshi (03:17.54)
Yeah.
Michael Kohen (03:38.894)
AI-based insurance inspection, it's really more about accuracy. If it's a robot running in a logistics center, it's really more about throughput. Fundamentally, though, I think all those things ladder up to the viability of your product in the market, and in effect to the magic, or the value, you're able to deliver for the customer. In many applications, getting things mostly right is actually totally okay, right? ChatGPT messes up all the time and we think it's cute.
But if you're dealing in other applications where accuracy is important, that last little bit really is the make or break of whether a customer can trust you in the critical path, trust you with their life in the case of a self-driving car, or consider the product valuable enough to pay for.
Prateek Joshi (04:30.98)
And we see a lot of edge cases, as you said, where a model is trained on a certain data set, it has all these patterns, and then it encounters edge cases in real life. Now, why is resolving edge cases so hard? What makes it difficult to deal with?
Michael Kohen (04:57.87)
I mean, it's the nature of the edge cases themselves, right? Fundamentally, edge cases by definition represent the long tail. That last 5% is just this super, super long tail of occurrences that is really, really hard to anticipate in advance. It's really hard to say, hey, we need to go prepare for this specific scenario, so let's go collect a bunch of data around it. You can do that, but it's really that times a million different variations of that occurrence, and different occurrences. So it's just super hard to explicitly train for, especially when you're in R&D mode and you're trying to manually collect data. This is why, and we can connect this to the work that we've done, we believe it's so important to get your product out into the real world as quickly as possible, have it bump into edge cases, and give you a sense of the full contours and shapes of the environment it's going to be running in.
Prateek Joshi (05:58.052)
And based on that experience, what industries are most affected by these AI edge cases? And why is that the case?
Michael Kohen (06:09.39)
I would say everybody is affected, universally, where it matters, right? That's kind of the question: where do you actually need to care about this? I think it's wherever your AI product is making a decision of consequence, where the cost of failure is either causing bodily harm or harm to property, or, less severely, it's more around what we were talking about earlier: maintaining the uptime, utilization, and accuracy of your product to the point where your customer can trust you. And again, it's really about that last little bit. You can do something 99% well, but if that last 1% torpedoes the overall performance of the system, it's really that 1% that's most important.
Prateek Joshi (07:12.644)
If you had all the time in the world and let's say you had all the resources, all the people, what are all the steps involved in resolving an edge case? Let's say you're out there, your system encounters an edge case, what happens?
Michael Kohen (07:22.926)
Hmm.
Michael Kohen (07:27.726)
Yeah. So the way to answer that is: right now, with the reality that we live in, the state of AI development, how do we handle it? And then there's a broader answer to this question, which is that there's probably a new branch of AI development that is yet to be discovered, maybe two more branches, that someone is going to earn a Nobel Prize for. For right now, we're not there. Right now, these edge cases exist, and our AI systems need to find a way to contend with them. How do you overcome them? The approach that we developed at the company that I founded, SparkAI, really anchors itself way back in the earlier days of self-driving cars, where we came to the realization that these things are not going away, and it's going to take a number of decades before AI models are powerful enough to just reason through these situations on their own.
So fundamentally, the way that we approach this is by marrying AI models, at the best of their abilities, with human cognition, reasoning, and logic capabilities, and at the interface, delivering human cognition and reasoning abilities to AI models in real time. That interface, that partnership, allows you to capture the best of both worlds.
Prateek Joshi (08:53.764)
You mentioned earlier that edge cases, it's a long tail of possibilities and it's not possible to like cover all of them during training. Now, how do you identify and classify a given edge case, right? What methodologies are used to say that, okay, this is an edge case and then this is, you know, type three edge case. Like, how do you do that?
Michael Kohen (09:20.622)
Sure. So it really depends; again, I'll always keep going back to this, it really depends on the product. Let's say you're deploying a robot into the world. At the most basic level, the way that robot functions is a prediction and a confidence score: how confident am I that what I'm looking at is obstacle category two? If I'm 99% sure that's obstacle category two, then I, the robot, know how to handle that. I'm going to drive around it. All good. Sometimes we're going to look at that and say, I think that's obstacle category two, but I'm actually only 30% sure. And it really comes down to the builders of that robot to say: at what confidence threshold are we confident enough to just move forward, versus needing to call for help, or stop, or whatever the triage path might be?
And again, just because of the nature of the long tail, it's not like you can write a list on day one of all the edge cases you're ever going to encounter. It's really about getting that product into the real world, having it bump into a bunch of different walls, figuratively, and then you'll see: hey, we're really struggling with this type of situation that we hadn't expected. Let's concentrate our training and our model development around that specifically; dust is an example.
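The prediction-plus-confidence triage Michael describes can be sketched in a few lines. This is an illustrative toy, not any real robot's code; the threshold value and the category name are made up for the example.

```python
# Hypothetical sketch of confidence-threshold triage: act autonomously above
# a builder-chosen cutoff, escalate (call for help / stop) below it.

CONFIDENCE_THRESHOLD = 0.95  # illustrative cutoff chosen by the robot's builders

def triage(prediction: str, confidence: float) -> str:
    """Decide whether the robot acts on its own prediction or escalates."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"proceed: handle '{prediction}' autonomously"
    return f"escalate: only {confidence:.0%} sure it's '{prediction}'"

print(triage("obstacle_category_2", 0.99))  # high confidence: drive around it
print(triage("obstacle_category_2", 0.30))  # low confidence: trigger the triage path
```

In practice the triage path itself (stop, call for help, slow down) is product-specific, which is the design decision Michael attributes to the robot's builders.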
Prateek Joshi (10:48.068)
The way you describe it brings me to my next question, about the potential of reinforcement learning. Obviously, it has found applications across different areas of AI. Now, in the case of resolving an edge case, what's the potential of reinforcement learning? Can it be useful, and if so, how much can it help? Or is it just not a useful framework for thinking about resolving edge cases?
Michael Kohen (11:20.078)
Yeah, so look, I'm not an ML practitioner in the traditional sense, so I'm sure there are others who would disagree, or there are probably ways to apply it. But again, just going back to the nature of edge cases, it's a little bit more complicated than action, reward, update policy. I think one of the more interesting angles to this is having those edge cases, as you encounter them, feed a training set. You use that training set to update your model, that model gets deployed, and then hopefully you're able to get better and better over time. But if you go back to what I was saying earlier, sometimes edge cases really aren't all that unique. They're totally mundane occurrences that, for whatever reason, on that day, with that lighting, in those weather conditions, that rock looked like a tree.
Dust is a really good example of this, where any human can look at a billion pictures of dust and say, yep, that's dust. But depending on the sensor stack that you have on the robot, dust can look like a truck; dust could be interpreted as a truck, could be interpreted as any different shape. So it's not like you can look at one picture of dust and say, OK, I now
Prateek Joshi (12:18.66)
Right.
Michael Kohen (12:46.702)
know how to teach the model to not recognize dust as a tree tomorrow. That dust cloud is going to look a little bit different. And that's where you need a little bit of a different take: really intentionally building a dust model that is able to detect dust or not dust, and then running it in parallel with the other model.
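The parallel dust-model idea can be sketched as a simple gating arrangement. Everything here is a stand-in: both "models" are toy functions on a dict, and the feature name, threshold, and fusion rule are assumptions for illustration, not how any real perception stack works.

```python
# Illustrative sketch of running a dedicated dust-vs-not-dust model alongside
# the main perception model, overriding spurious detections when the dust
# model is confident. Toy functions stand in for trained networks.

def dust_model(frame: dict) -> float:
    """Hypothetical binary classifier: probability the detection is just dust."""
    return frame.get("dust_score", 0.0)  # toy stand-in for a real inference call

def main_model(frame: dict) -> str:
    """Hypothetical main detector output."""
    return frame.get("detected_class", "unknown")

def fused_decision(frame: dict, dust_threshold: float = 0.8) -> str:
    # If the parallel dust model is confident the blob is dust, ignore the
    # main model's (possibly spurious) detection and keep driving.
    if dust_model(frame) >= dust_threshold:
        return "ignore: dust"
    return f"avoid: {main_model(frame)}"

print(fused_decision({"dust_score": 0.9, "detected_class": "tree"}))  # dust wins
print(fused_decision({"dust_score": 0.1, "detected_class": "log"}))   # real obstacle
```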
Prateek Joshi (13:06.468)
I want to talk about a couple of specific sectors where this is making a big difference. The first is agriculture. Robotics is making huge strides; it's revolutionizing the way we do agriculture. How does this edge case problem manifest itself? Or rather, let's talk about before versus after, meaning: how does work happen before they deal with edge cases, and how does it change after you come in and say, hey, here's a solution, and it's just so much better?
Michael Kohen (13:45.262)
Sure. Yeah, so let's talk about this. Again, there are so many; every industry, every product is impacted by edge cases. In the case of agriculture, I do agree it's one of the most exciting industries for the next generation of autonomy and automation products, and we can maybe talk a little bit more about that. Let's just talk about a very specific example. Let's say you had an autonomous tractor. That autonomous tractor is running around in the field, right? It's going up and down the rows, doing its thing. Surprisingly, agricultural fields occasionally have random things in them: logs, shopping carts, especially towards the roads, where things get dumped. So the tractor is driving and encounters something that it doesn't understand. It doesn't look like corn; it looks like something else. And before you actually incorporate some kind of solution for handling that edge case, what's going to happen? That tractor is going to see something that it doesn't understand, its prediction confidence is going to be very low, and it's going to have no other option but to stop. And at that point, what happens? You have a tractor somewhere on a 4,000-plus-acre field that is now stopped and has no way of overcoming that stoppage, because there's no way for it to update its understanding of what it is looking at.
So the net result is: it has to stop and call a farmer. That farmer has to stop what he's doing, drive all the way out to the tractor, and say, okay, hey, this is just a pile of leaves, or actually, this is a log, I need to move it. But that lack of the ability to triage that issue in real time means that while 95% of the time that tractor can be running perfectly, the 5% that it's stuck in the field burns an hour or two or more of productive time of that vehicle. So in aggregate, you've now lost that uptime and utilization of the product, and the product becomes less viable.
Michael Kohen (15:50.766)
And you can copy and paste that same exact example into really any kind of situation. In this case, uptime and utilization are really important. That 1% of occurrences where it just stops and eats hours really breaks the overall throughput and productivity of that product.
Prateek Joshi (16:10.244)
And for listeners who may not know, can you talk about how you come in, across industries, not specific to agriculture? You come in, and then what happens? How does an edge case get resolved really fast?
Michael Kohen (16:25.774)
Absolutely. Okay, so taking it back to the highest level, remember what we pursued here is this intertwining of AI decision-making with human cognition in real time. So take that example of a tractor in the field that encounters something it doesn't understand. Now, instead of breaking down, instead of just stopping and waiting, maybe for hours, in that moment, what the tractor does is call our service, SparkAI, programmatically. It says, hey, I'm looking at something I don't understand, and I'm going to send you some pictures of it. So it feeds, again programmatically, images from the field to the SparkAI cloud. On our side, we have human beings we call mission specialists who are explicitly trained for this use case. They have a standard operating procedure that they use: when something comes in, here's how we triage it, here's how we digest it.
Those mission specialists, and we have multiple of them working on the same task simultaneously, produce a resolution. They'll say: what you're looking at is nothing, actually, it's just dust. Or: what you're looking at is, you're right, a log. They'll produce that resolution and deliver it right back to the robot, again all programmatically. The robot will take that input and use it as just that, an input, in addition to all the other input that it's getting from the world. It says, okay, if what I'm looking at is category X, I know how to handle that, I drive around it. That entire exchange happens in a matter of seconds. And if you think about the before and after: the before is the tractor stuck there for hours, and the after is really a few seconds of a handshake that keeps that product running productively and safely. Most importantly, that stoppage is imperceptible to the end user, and you're actually able to get the value out of it and deliver for the customer.
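The handshake Michael walks through (robot sends images, multiple mission specialists label the same task, a resolution comes back) can be sketched as follows. The function, payload shape, and majority-vote consensus are all hypothetical illustrations; SparkAI's actual API and resolution logic are not public in this conversation.

```python
# Minimal sketch of the programmatic "call for help" exchange: several
# specialists label the same exception, and the robot receives a consensus
# resolution. All names and the voting scheme are illustrative assumptions.

from collections import Counter

def resolve_exception(images: list[str], specialist_labels: list[str]) -> dict:
    """Simulate multiple mission specialists labeling one exception;
    return the consensus resolution the robot would receive."""
    votes = Counter(specialist_labels)
    label, count = votes.most_common(1)[0]
    return {"resolution": label, "agreement": count / len(specialist_labels)}

# Tractor hits something it can't classify and sends frames for resolution.
result = resolve_exception(
    images=["frame_001.jpg", "frame_002.jpg"],
    specialist_labels=["dust", "dust", "log"],  # three specialists, same task
)
print(result)  # consensus resolution 'dust', with the agreement fraction
```

The robot then treats the returned resolution as one more input alongside its sensors, exactly as described above.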
Prateek Joshi (18:30.308)
That's great, getting it resolved that fast. So that's great for the tractor and the customer; they can get the asset back up and be more productive. But how do you think about the utilization of your own people? Because let's say you have to have N people sitting there waiting for the next edge case; if there are no edge cases, you still have to pay them. So how do you think about
Michael Kohen (18:50.446)
Sure.
Michael Kohen (18:57.582)
Yeah.
Prateek Joshi (18:59.492)
balancing the number of people you have versus utilizing their time, because you're still paying for it?
Michael Kohen (19:08.526)
Yeah. So this is where one of the layers of technology is really, really important: doing that load balancing, that dynamic load balancing and operational management. There are two ways you can do what I just described. Number one, you can have a thousand people sitting at their computers at the ready, ready to engage at a moment's notice. That's really expensive, especially when what we're talking about here is that 1-5% of events; in an ideal world, for the most part, they're not needed, the tractors are running perfectly. So one way to do this is to have a thousand people staring at a screen, and you're right, that's very expensive. We've taken the approach of minimizing that human footprint as much as possible, for exactly the reason you're describing. And the way you do that is really with the algorithms that we have running in between, and also the tooling that we deliver to wrap the human mission specialists and the human tasking. I'll talk a little bit about both. In terms of that routing algorithm and some of the tech that sits between the robot and the humans, there are a lot of things you want to balance. One is the response-time SLA: not every robot needs a response within a few seconds. Some can wait 30 seconds, some can wait 60. So you're servicing multiple products at the same time. You're constantly routing and prioritizing a queue that accepts an engagement and says, okay, this one has an SLA, a response-time requirement, of let's say 30 seconds, and I have someone who's going to be available in five seconds, so I'm going to hold it and then deliver it to that person. Obviously I'm describing this in very simplistic terms, but you can imagine a version of this that is incredibly sophisticated, that
Prateek Joshi (20:56.42)
Yeah, yeah, yeah.
Prateek Joshi (21:02.404)
Yeah.
Michael Kohen (21:03.63)
really packs that queue as tightly as possible and makes sure things are done as efficiently as possible. And you add all the other elements to this, right? Who in the network is best suited for this specific application, who has the most experience, who has the highest trust score, things of that nature. And that's where you optimize for availability, response time, a small operational footprint, and accuracy.
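A deliberately simplistic sketch of the SLA-aware routing Michael outlines: assign each incoming task to the specialist who frees up soonest, and check whether that still meets the task's response-time SLA. The data shapes, the fixed per-resolution time, and the greedy strategy are all assumptions for illustration; a real router would also weigh experience and trust scores, as he notes.

```python
# Hedged sketch of SLA-aware task routing: a min-heap of specialist
# availability times, greedy assignment, and an SLA check per task.

import heapq

def route(tasks: list[tuple[int, int]], specialist_free_at: list[int]) -> list[tuple[int, int, bool]]:
    """tasks: (arrival_seconds, sla_seconds) pairs.
    specialist_free_at: seconds until each specialist is next available.
    Returns (arrival, assigned_wait, sla_met) per task."""
    free = list(specialist_free_at)
    heapq.heapify(free)  # always pick the soonest-available specialist
    assignments = []
    for arrival, sla in sorted(tasks):
        wait = heapq.heappop(free)
        assignments.append((arrival, wait, wait <= sla))
        heapq.heappush(free, wait + 10)  # assume ~10 s per resolution (illustrative)
    return assignments

# Two tasks: one with a 30 s SLA, one that needs an answer within 5 s.
print(route([(0, 30), (1, 5)], specialist_free_at=[5, 12]))
```

Even this toy version shows the tension Prateek raises next: too few specialists and SLAs start failing, too many and they sit idle.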
Prateek Joshi (21:28.036)
The level of operational sophistication needed here, I think, is pretty high, because if you're too lax with it, you'll end up burning a mountain of cash, and if you're too tight, your SLAs will start triggering clauses; customers will say, hey, you said 30 seconds, I've been waiting for five minutes, what happened? So I think there's a lot of operational nuance here. The next exciting...
Michael Kohen (21:38.862)
Mm -hmm.
Michael Kohen (21:49.678)
That's right.
Michael Kohen (21:55.118)
We like to live on the edge.
Prateek Joshi (21:58.66)
That should be a tagline. That's amazing. Another exciting area is robotic fulfillment: all those huge warehouses where robots are doing their thing, filling e-commerce orders, for example. It's a great boost to productivity. Now, in this setup, similarly, can you take an example of before versus after? And...
Michael Kohen (21:59.438)
No pun intended. Yeah, exactly.
Michael Kohen (22:07.278)
Mm -hmm.
Prateek Joshi (22:26.628)
Let's take a very commonly encountered flavor of edge case here.
Michael Kohen (22:31.246)
Sure, sure, sure. Yeah, I'll talk about one product that we really had the privilege of working on, in depalletizing. So what's depalletizing? Think of a pallet of boxes, right, as things are moving through a logistics center. The specific product that we were working on was a robot doing mixed-SKU depalletizing. What that means is a pallet that has boxes of different shapes and sizes, totally non-uniform, kind of just thrown together onto a pallet. One of the holy grails in logistics robotics is robotic mixed-SKU depalletizing: taking a pallet that has a bunch of different boxes and, in effect, unpacking it. So there, I think you get where I'm going, which is that every so often you end up with a box that looks different, right? That is shiny instead of brown, is crooked, is crumpled.
For the most part, the robot knows how to handle that, but every so often it'll encounter a box that it doesn't know how to process, and it gets confused. It gets confused about the dimensions of the box, where it should pick it, whether it should pick that one or the one next to it, which of these is the highest box, what its next pick is. So in this case, 99.5% of the time that robot does extremely well, picking boxes in a couple of seconds flat each time. But every so often, it'll encounter one of these situations, and just like the tractor, it doesn't know what to do. In this case, what it does is it just stops. It stops and it calls for help. And somebody from across the floor has to pick their head up, notice that there's a flashing light, walk over to the robot, and move the box manually. All of that can take a minute, minutes, numerous minutes. As a result, the blended throughput of that robotic product is torpedoed, and suddenly you're bumping up against this question of ROI. Suddenly the person who bought that robot is saying, well, why am I buying this robot? Why shouldn't I just have somebody stationed here full-time, who's not going to make a mistake and can process, on a blended basis,
Michael Kohen (24:53.166)
about the same number of boxes as this robot that every so often falters? So here again, in this moment of confusion, the robot calls our service programmatically, and we provide the input: here's the box, here are the boundaries, here are the pick points. We do that in just a few seconds, and it keeps the throughput high.
Prateek Joshi (25:17.06)
Right. I think that's a very good question, because if the humans have to intervene a little too much, the person who bought the robot will ask that exact question: hey, why am I paying the human and the robot? I'm just doubling my cost. That's a great point. Now, across all these engagements and sectors, you're collecting an amazing amount of data on edge cases. How do they appear? What happens? How do they get...
Michael Kohen (25:31.566)
That's right.
Prateek Joshi (25:46.46)
Now, are you using that data to update and retrain AI models? Do you use it internally? Do you provide that to the customer? Or is it a combination of both?
Michael Kohen (26:00.43)
Sure. Yeah. So this maybe goes, philosophically, to how I think about commercializing robots, and really what I've seen in the past. You could do your best and try to focus on just building as big a training set as you can, and go off and do data collection runs. Or you can take your product, however good it is, push it into the real world, and have it bump up into these situations, knowing that there's a safety net to catch these events, and learn from those events. The best way to figure out the shape of the edge cases you're going to have to deal with is to go experience them, take that data, and then either intentionally go off and collect more data that looks like that specific edge case, or use synthetic data or gen AI. I think there's an interesting role to play there in taking one event that is extremely rare, that you're almost never going to encounter again, and then using some of the modern tools that are available to us today to amplify it.
Prateek Joshi (27:05.572)
Actually, this is not apples to apples, but it reminds me of how, in the pharma world, companies that can solve rare diseases are doing something very valuable, and they're valued more precisely because there isn't enough data, there aren't enough patients. They have to do a lot of work to figure out how to do this with the limited number of patients they can work with and test on. So it's not apples to apples, but if something is rare, and edge cases are by definition rare, then having a strong data set that captures it could be a very valuable thing. And let's say there's a model trained and deployed, and it keeps encountering a specific flavor of edge case over and over again, and they decide, okay, it's time to just include this in the model, and they update the model.
Now the next question is, how do you test a given AI model against edge cases? Is it the same as just adding that to the list of things you test? Or is there anything different you have to do to see: hey, I updated the model for this edge case, is it going to work?
Michael Kohen (28:21.614)
Yeah, it's a good question, because what you're saying there is: you've updated the model, but you have no guarantee you're going to bump up against that edge case again for however much time. So there's probably a more technical answer out there, but I think this is where testing comes into play. This is where simulation, to the best of our abilities, comes into play, to try to recreate those scenarios that you encounter.
Prateek Joshi (28:29.476)
Right. Yeah.
Michael Kohen (28:47.47)
Simulation, synthetic data, gen AI have a massive role to play here, and those tools are just getting more and more powerful every day. Nothing is going to compare to encountering it in the real world with a slightly different permutation than what you saw. But yeah, this is what makes them so hard.
Prateek Joshi (29:08.132)
Yeah. Now, you've dealt with edge cases, customers, and different sectors. What lessons have you learned, and would like to share with our listeners, about handling edge cases in live AI systems?
Michael Kohen (29:19.31)
Hmm.
Michael Kohen (29:28.526)
Yeah, well, that's a great question. I think...
Michael Kohen (29:34.317)
I think it comes down to this: you can't really ignore it. And that's not to say that every single product needs to; for some products, it's actually perfectly okay to be 70% accurate. Take robotic strawberry picking. There are products out there that are nowhere near a hundred percent accurate, maybe 90%, but they're able to find this corner of ROI where a certain level of accuracy is perfectly fine. So I think the first step is really just to be honest about how important this is to the actual product, to the actual end customer, to the value you're trying to deliver, and not overbake it. Again, in some cases you're okay launching with a product that's imperfect, but in a lot of other cases, you have to be honest with the idea that that last X percent of performance is really going to be the make or break of whether you can launch your product, whether customers are going to pay for it, and whether they'll actually use it. Those are things you'd rather find out early, so that you're architecting your product in the right way, selling it in the right way, and managing expectations about what the end customer can expect.
Prateek Joshi (30:53.7)
I have one final question before we go to the rapid fire round. Outside of edge cases, what technological breakthroughs in AI are you most excited about as you look forward from here?
Michael Kohen (30:57.998)
Right back.
Michael Kohen (31:09.39)
Sure. So again, I'm not a traditional ML practitioner. One of the things I'm particularly enthusiastic about is this greater emphasis not just on advancing the models themselves, but also on advancing the underlying hardware. There's one company in particular that I'm extremely excited about, and I'm actually an investor in, called Etched; they're building a specialized AI chip for transformer models.
And I think it's important not to forget about the underlying hardware that is powering these models, not to take for granted that this is the performance we can get, let's just build a bigger and bigger GPU. In the same way that the trajectory of AI development requires continuous innovation, in some cases in radically different branches, I'm particularly excited about companies like Etched and others just rethinking the underlying architecture.
Prateek Joshi (32:13.028)
Yeah, I agree with that one. I've been very excited about the hardware development here; there's so much scope, and I think a couple of very big outcomes are just waiting to happen. For the first time in a very long time, small groups are shipping chips, where in the past the common wisdom was that you needed a giant amount of capital to even get the basic version out. You mentioned Etched; there's also Groq. Small teams are building and shipping chips to take on these giants, so it's very exciting. And I do think there's a lot of scope to verticalize this and ship chips that are suited to a given use case, an order of magnitude more efficient. So yeah, I agree. All right, with that, we're at the rapid fire round. I'll ask a series of questions and would love to hear your answers in 15 seconds or less. You ready?
Michael Kohen (33:00.238)
Exactly right.
Michael Kohen (33:06.51)
Go for it.
Michael Kohen (33:11.822)
Do it.
Prateek Joshi (33:12.836)
All right, question number one. What's your favorite book?
Michael Kohen (33:17.774)
The Star Maker by Olaf Stapledon.
Prateek Joshi (33:21.06)
Wow, that's a first on the podcast. Great. All right. Next question. What has been an important but overlooked AI trend in the last 12 months?
Michael Kohen (33:32.846)
Well, I kind of cheated a little bit, because I already talked about the underlying hardware. But again, the emphasis is that we need to go beyond GPUs to deliver the future I think we all want.
Prateek Joshi (33:46.916)
What's the one thing about edge cases that most people don't get?
Michael Kohen (33:54.094)
They make or break your product. I think people often write off that last 5% because of how fast they've been able to make progress against the 95%. And for most products, that last 5% is where all the value is created. I don't think you or I would get into a self-driving car that was 95% accurate.
Prateek Joshi (34:14.756)
There's a 5% chance we might kill you, but 95% you'll be alive, most likely. So yeah, that's not going to work. Next question. What separates great AI products from the merely good ones?
Michael Kohen (34:19.426)
Yeah, yeah, no thanks.
Michael Kohen (34:31.982)
You know what I'm going to say: they consider the edge cases. But we talked plenty about that, so one other interesting thing, especially in the robotics world. I think the best robotics products consider where their pocket of value is along three axes. One is how narrow an ODD, operational design domain, they can live in: how simple an environment they can work in while delivering the most amount of ROI.
Prateek Joshi (34:34.468)
Yeah.
Eugh.
Michael Kohen (35:01.422)
And then the third axis is, you know, at what scale basically, what's the total market size. So a self-driving car driving around a block: narrow ODD, very low ROI. A self-driving tractor in an agricultural field: fairly narrow ODD, massive ROI, massive market size. Those are the real pockets of value.
Prateek Joshi (35:24.676)
You know, I love that ODD framework. I think it's so useful to think about when you build products and you ship products, that's really great. Next one. What have you changed your mind on recently?
Michael Kohen (35:39.662)
College. I actually don't think it's right for everyone, and it may be time to rethink the role of college as the default next step after high school.
Prateek Joshi (35:52.804)
What's your wildest AI prediction for the next 12 months?
Michael Kohen (35:58.83)
John Deere, a 187-year-old company, becoming the pacemaker for commercializing autonomous products.
Prateek Joshi (36:08.324)
Wow, okay, that's pretty wild. All right, final question. What's your number one advice to founders who are starting out today?
Michael Kohen (36:18.99)
So there are so many dimensions to being a founder: personal health, team, company, product, fundraising. I'm going to touch on one that's not particularly revolutionary, which is the customer. And I think it's just important for people to hear this again and again. If you always anchor yourself on what creates value for the customer, then customers will be willing to pay you for that value. You're going to be able to recruit a better and better team who get excited about delivering value for customers, and they'll be able to get investors excited about the team and the value you're delivering. So everyone kind of touches on this, but it is so true. Stay anchored on the customer.
Prateek Joshi (37:08.932)
You know, it's funny how people who are still slightly early, or in the middle, will talk about all the frameworks and this and that. But beyond that, I've heard this over and over again: when people get to a certain Zen state, they say, just create value for customers. It's as simple as that. Just ask how you're creating value for the customers and work backwards from that. There's no magic framework with 19 steps. It always comes back to that.
Michael, this has been a brilliant episode. Thank you so much for joining me today and sharing actually a ton of knowledge on this weird corner of AI that most people don't talk about, which is just handling edge cases. So thank you again.
Michael Kohen (37:51.47)
Absolutely, it's been a pleasure to be here. Thanks so much.