Infinite ML with Prateek Joshi
The podcast where Prateek Joshi interviews world-class AI founders and VCs. Each episode dives deep into a specific topic in AI. You can visit prateekj.com to learn more about the host.
Open Source x Health AI
Dan Caron is the founder and CEO of Health Universe, where they are building an open source collaboration platform for health AI. He was previously the founder and CTO of Dark Pilot. Prior to that, he was the founder and COO of RxREVU.
In this episode, we cover a range of topics including:
- State of play in Health AI
- Why open source in Health AI
- The founding of Health Universe
- Potential risks of using open source in healthcare
- Regulatory environment
- Open source vs commercial healthcare
- Impact of the open source approach on healthcare providers and global healthcare disparities
- The future of open source in Health AI
Dan's favorite book: Build (Author: Tony Fadell)
--------
Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi
Prateek Joshi (00:10.524)
Dan, thank you so much for joining me today.
Dan Caron (00:13.334)
Thanks so much for having me, Prateek.
Prateek Joshi (00:16.324)
Let's get right into it. What is the state of play in health AI in terms of what technologies are being built and also what is the need of the ecosystem?
Dan Caron (00:28.85)
Oh, wow. That is such a big and exciting question, Prateek. The state of play in health AI: it is one of the most interesting fields for machine learning and AI currently. It is actually garnering more investment than any other field in industry, if you can believe that. And really, health AI is all about creating better outcomes for patients, alleviating the pressure that doctors and clinicians are under, finding ways to improve outcomes, reduce costs, and improve revenues. I mean, you name it, every corner of healthcare is being inspected for potential opportunities to apply machine learning. So it's a really exciting time.
Prateek Joshi (01:18.252)
Amazing. Can you explain the role of open source software in the development of health AI?
Dan Caron (01:27.474)
Yeah, so open source software has seen success in many industries, right? And the reason it's so successful is that it's a worldwide collaborative effort: people can join from any organization that wants to commit some of their time, and there are organizations built entirely on open source that have significant budgets to contribute to developing software. It really allows for robust products to be built. Now, of course, not every open source project is a robust piece of software, but it has the potential to be, right? Because it can be inspected, it can be pored over with a fine-tooth comb. And really, the big question in health AI right now is: how do we know how to trust these algorithms? How do we know that an algorithm is safe to use with a patient?
And clinicians are wondering these things because they've been hearing for so long that machine learning and AI is coming for them, that it's going to take their jobs. While I think that fear mindset is not helpful, and really isn't how things are going to roll out, open source truly helps with the transparency piece. And that's really what Health Universe is all about. We are establishing a collection of open source and closed source machine learning and AI models for healthcare, really to bring people together and show them what's under the hood: what data has been used to train models, how the application works, what type of algorithm is being used, who is using it, how many executions it has. The transparency piece is huge, right? Because a lot of closed source software built for commercial purposes is not always easily understood, but open source allows for that transparency and code review. And that's really necessary for everyone to learn to trust the algorithms that are being developed these days.
Prateek Joshi (03:40.94)
I think it's a good stopping point to quickly talk about Health Universe. So for listeners who don't know, can you explain what Health Universe does?
Dan Caron (03:53.158)
So Health Universe is a platform that is aggregating machine learning models for healthcare. We allow people to post Python projects and we collect what the project does, any scientific papers that are associated with it, and we allow people to discover those projects on Health Universe in our app ecosystem. We allow people to run them right from the browser.
And that's really important because that means a doctor anywhere in the world could have access to, say, the latest breast cancer detection model. We're trying to democratize healthcare machine learning and make it available worldwide. Health Universe has a global mission to democratize health AI. And really, we see an opportunity to reduce the time it takes to bring research into the clinical workflow, because that can take years, or sometimes even decades. There are some amazing papers being published around health ML and health AI, but if that research just remains locked up in a PDF on arXiv, or published in Nature or Science or something, those publications do not always reach the patient as quickly as they could, right?
And so we believe that there's a world where researchers are writing Python code and deploying applications to Health Universe. We're facilitating the integration of those applications into the healthcare ecosystem, into EHRs like Epic and Cerner, et cetera, and really building the guardrails and the requirements necessary to make health AI available, transparent, and understandable to non-technical audiences as well. That's really important. Clinicians are not necessarily hip to the latest variant of transformer models, right? There's a lot of "how does this work?" and "can I trust this?" People who are not data scientists need context, they need metadata, they need to have
Prateek Joshi (06:03.55)
Right.
Dan Caron (06:19.054)
how they can use a tool, any given tool. And Health Universe provides that ecosystem, that curation, that trust and safety, and that exploration piece, to allow people to come and start using these tools and start to adopt them. We see a lot of tools being released by OpenAI, of course, and other organizations. And we're told, you know, be careful with this, it's not necessarily the be-all and end-all. But we see physicians adopting these tools because they're under such stress, and the burdens of medicine are often very steep. And so we're moving the needle and providing a bit more support for those users who need to understand the proper context for healthcare.
Prateek Joshi (07:15.5)
Amazing, there are so many moving parts and so many questions I have here. It's amazing, by the way. Okay, so let's go through it one by one. You talked about people, like builders, coming onto the platform with their machine learning models or applications, and they want to put them up on the platform so that people on the other side, maybe people who need this, can browse through them and use whatever they need. So for people who are building these apps or models, where do they get their data? And also, what are the rules around data usage?
Dan Caron (08:01.714)
Yeah, that's a great question. So data is a huge topic, right? And Health Universe does not prescribe specific sources of data. What we're focused on is the application layer. There are a lot of people focused on the data layer, and some of those organizations are integrating, or have integrated, with health systems, with pharma, with different aspects of the healthcare landscape.
And Health Universe is here to say: look, we're an application layer and we're curating these applications. Your applications can go and connect to different data sources, different APIs. You can train your models on whatever research data you get ahold of. All we're asking is that you provide insights, metadata, context, transparency, and a description of what that data is and how it's being used.
Data is something we have a long-term vision to support, a data universe, if you will. We're working on some really cool next-generation data engineering workflows, but that's not really ready for prime time yet. There's a lot of interesting work happening with synthetic data that makes research more viable, because you don't have to worry so much about PII, personally identifiable information, being stripped or leaked by accident. So synthetic data is a really interesting aspect of health AI right now. But for the most part, you could think of Health Universe as an application layer, and underneath it are Python applications that creators are deploying. Really, it's up to them to bring their own data at the moment, but also to provide transparency and insights around that data. So it's a really interesting time for health data, because we're starting to see this data being utilized in really innovative ways.
Prateek Joshi (10:11.268)
Right. And when the builders come to the platform, let's say they have an application that is useful to a group of healthcare providers. The healthcare providers come to you because you are the one vetting these applications, right? Because so many builders can come in. So can you talk about how you vet these applications and how you build trust with the healthcare providers?
Dan Caron (10:40.018)
Yes. Yeah. I mean, that really is the million-dollar question right now: how do we vet health models for trust and safety? And that is what we are squarely focused on. So my co-founder, Doug Fridsma, was Chief Science Officer at the ONC, the Office of the National Coordinator for Health IT, so he had one of the top health data jobs in the country.
And he was also president of the American Medical Informatics Association. We've been to workshops hosted by AMIA and by Beth Israel's division of clinical informatics, and we're part of working groups that are exploring the pathways to curating and enabling trust and safety. And my perspective, my personal perspective, is that we need an open source, collaborative approach that brings stakeholders to the table. Health Universe supports a validation stack. What that means is that any application deployed on Health Universe is initially given a status of prototype, meaning: this application has not been vetted, you should not trust it, and in fact you should consider the outputs to be dangerous, right? That is the default deployment validation status of an app. It can then move to stable, which means the application seems to be working well, it doesn't seem to be producing garbage outputs, things are shaping up, right? Then we move to peer reviewed, meaning this model has been published, and here's a link to the paper. And then, of course, institutionally validated, right? This application is in use, maybe at Stanford or Harvard or UCSF, and here's who's using it, here are the number of executions, here are some comments. And if it's pushed even further, maybe that application gets FDA approval. Now, part of my personal history is that I'm a type 1 diabetic.
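The validation ladder Dan describes could be sketched, purely as an illustration, in Python. The status names come from his description; the class, the one-rung-at-a-time promotion rule, and the evidence strings are hypothetical, not Health Universe's actual implementation:

```python
from dataclasses import dataclass, field
from enum import Enum


class ValidationStatus(Enum):
    """Trust levels from the validation stack, least to most vetted."""
    PROTOTYPE = 1                  # default on deployment; do not trust the outputs
    STABLE = 2                     # appears to work; no garbage outputs observed
    PEER_REVIEWED = 3              # backed by a published paper
    INSTITUTIONALLY_VALIDATED = 4  # in real-world use at a named institution
    FDA_APPROVED = 5               # cleared as a regulated medical device


@dataclass
class HealthApp:
    name: str
    status: ValidationStatus = ValidationStatus.PROTOTYPE
    evidence: list = field(default_factory=list)

    def promote(self, new_status: ValidationStatus, evidence: str) -> None:
        # Move one rung up the stack at a time, with supporting evidence.
        if new_status.value != self.status.value + 1:
            raise ValueError("apps must move through the validation stack in order")
        self.evidence.append(evidence)
        self.status = new_status


app = HealthApp("breast-cancer-detector")  # hypothetical app name
app.promote(ValidationStatus.STABLE, "sane outputs over a trial period")
app.promote(ValidationStatus.PEER_REVIEWED, "link to the published paper")
print(app.status.name)  # PEER_REVIEWED
```

The point of the sketch is the ordering: an app cannot jump straight from prototype to institutionally validated without passing through the intermediate levels of evidence.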
Dan Caron (13:01.774)
And I hacked an insulin pump. I reverse engineered the Omnipod insulin pump, and that reverse engineering effort allowed an iPhone application called Loop to become the first FDA-approved open source medical device that we're aware of, at least in the type 1 space, if not all spaces. That was a really meaningful project to me because it showed that an open source collaborative effort can produce a piece of software that's in use by over 40,000 people, last time I checked. And it's delivering a potentially deadly hormone, insulin, to patients, and adjusting that insulin dose roughly every five minutes. And there are very young children running the system, so there is zero, absolutely zero room for error. And we did that as an open source collective, and that allowed us to review the code very carefully, and it allowed people from all over the world to inspect it and look for bugs and problems.
And there were hundreds of people who worked on the project. It really showed that open source collaborative efforts can be extremely successful at creating real-world results for real people with real disease states. And Health Universe is my attempt at recreating that success at scale, for all the problems we face in healthcare.
And so we're looking at what the FDA is doing and saying. We're looking to certain groups, like AMIA, for guidance. We're participating in the conversation around regulation and health AI. And we're also bringing our own beliefs about how to build trust and safety
Dan Caron (15:23.122)
through open source collaborative efforts, bringing stakeholders together, looking under the hood, and giving people the ability to introspect these tools. Because it's important that folks understand how they work and have the proper context.
Prateek Joshi (15:48.228)
Right, right. And let's look outside of Health Universe for a second and talk about the nature of open source, which is global: any developer in any corner of the world who builds something useful can contribute it to the open source community and make it available. But on the other side, healthcare is a regulated industry, and the data you deal with is sensitive. So on a bigger scale, how should we manage the trade-off between keeping open source as an inclusive mechanism and dealing with the healthcare system? Let's say somebody builds an open source project outside of your jurisdiction. How should we handle that, given the regulation we can actually enforce here?
Dan Caron (16:48.362)
Yeah, that's a really great question. So I like to think about Health Universe as a matrix with different rows and columns. We have the data layer and the application layer, and then we have open source, maybe medium source, and closed source. So the application code may be open source and available for inspection, but maybe the dataset is private. That would be a mixed source approach. If that application is from a well-known institution or a very well-known researcher, that increases the trust. If the person posting the project is not affiliated with a university, and there's no transparency around the code, then that application is not going to have as much trust. It's really this mixed model, if you will, that allows certain parts to be available for introspection and other parts to be private, if that makes sense.
Prateek Joshi (18:03.808)
Yeah, that's actually interesting. And how do you look at today's regulatory frameworks? Or rather, how should the frameworks evolve to accommodate a movement like open source? Because today they favor large centralized companies, and there's a lot of red tape to go through. So how do you think the regulatory framework should evolve to accommodate, and maybe encourage, more open source?
Dan Caron (18:42.814)
Yeah, that's a great question. As I said, my personal opinion is that the nature of open source allows for more sunlight to be let in, and I think that sunlight is necessary to build trust. We see large commercial AI organizations get blowback sometimes because their models are not open source, and there have been some challenges that have sprung from that. So my recommendation to regulators is: the more open we can be with our algorithms and our training data, and the more metadata, context, and curation we can provide, the better picture we will have of the safety of an algorithm. And we need developers to really specify: what are the use cases here? What are the limitations? What is the intended use? There's a responsibility for algorithm developers as well, to share and document their understanding and their expertise with the model or application they've built.
And to share that in a way that non-technical users can understand. You know, some of the machine learning metrics, right? F1, precision, recall, area under the curve. Those metrics are not necessarily accessible to non-technical clinicians and physicians. There are some physicians diving into machine learning, and I have great respect for them, but for the most part physicians are busy and don't have time to learn, you know, the confusion matrix. So really, when it comes to regulating health AI, we need to provide as much context as possible. And we need to bring in all the stakeholders involved in the utilization of these apps and algorithms, so they can provide feedback and input via comments and suggestions.
Dan Caron (21:04.214)
People need to be able to highlight problems, right? There need to be feedback loops, and there needs to be external validation from the scientific and academic communities. All of that has to come together, because a neural network, for the most part, can be a very tricky beast to introspect, and we need as much clarity as possible in order to make sure that we do no harm.
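The metrics Dan names (precision, recall, F1, the confusion matrix) can be translated into the plain-English questions he argues clinicians actually need. This is an illustrative sketch; the question phrasings and the toy labels are invented, and real reporting would need far more context:

```python
def summarize_classifier(y_true, y_pred):
    """Turn raw binary predictions into plain-English numbers."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # correctly flagged cases
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false alarms
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # missed cases
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {
        "Of the cases the model flagged, how many were real?": f"{precision:.0%}",
        "Of the real cases, how many did the model catch?": f"{recall:.0%}",
        "Balanced score (F1, higher is better)": f"{f1:.2f}",
    }


# Toy data: 1 = condition present, 0 = absent.
truth = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 1, 0, 0, 1, 0, 1, 0]
for question, answer in summarize_classifier(truth, predicted).items():
    print(question, answer)
```

The underlying arithmetic is unchanged; only the presentation shifts from jargon to questions a non-technical reader can act on.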
Prateek Joshi (21:36.016)
Right. One of the things that I've been thinking about, given this discussion, is incentives, or rather the impact on various players in the ecosystem if open source takes off. It'd be fun to go through the exercise here. So let's say open source in health AI really takes off and the industry starts embracing it. Can you talk about how that's going to impact the following three players in the industry? One, the doctors and healthcare providers. Two, companies that sell software to the healthcare industry. And three, insurance companies.
Dan Caron (22:21.546)
Yeah, that's a great question. So we'll start with the first: physicians. Well, open source software has bolstered the tech community for years, right? And tech has benefited greatly from it. There are so many commercial products that use underlying open source software. Almost everything, right? Open source permeates all of tech, and that collaborative effort has allowed tech to always be at the forefront of innovation. So what does that mean for physicians? Well, it means that open source can bring that pace of innovation to healthcare, because healthcare is oftentimes, you know, 15 years behind the latest trends.
That's just a matter of people being conservative, being reticent to adopt the latest and greatest. There are a lot of security and IT requirements, and things just do not get implemented very quickly. And there's a lot of reservation and hesitation to move to the latest and greatest. I mean, there's a good reason for that, right? We're trying to protect people, and we should always do that. But healthcare in the United States is broken in many ways, right? I mean, if we're going to speak frankly, there are a lot of problems. 40% of physicians are thinking of quitting, finding an exit, or figuring out a way to move away from their primary career. And that is alarming, right? So we need to figure out ways to help physicians and clinicians.
Prateek Joshi (24:12.893)
Right.
Dan Caron (24:21.046)
to relieve the burden that they face and help them with their work. So I see open source software as a way of providing those solutions, in a worldwide collaborative effort to make things better not just for patients but for clinicians, and to give them better tools: faster tools, more accurate tools, more helpful tools.
And at least on Health Universe, they can contribute feedback and comments, they can favorite things, they can provide suggestions, and we want them to be part of the process. It's important, right? They're a stakeholder. They need to be able to say, hey, I don't understand how this works, can you explain it? Or, I wish the tool did this. And creating that feedback loop allows for rapid iteration and allows us to fix problems as they come up. That's what's really great about open source. The second audience was commercial software developers?
Prateek Joshi (25:27.229)
Right.
Dan Caron (25:28.494)
Commercial software developers, I think most of them are already familiar with open source. They probably leverage many open source tools to build their products; I would guess almost all of them do in one way or another. With Health Universe, what that means is that if they choose to really open source their algorithm or application, we think that's fantastic, right? But then you say, well, how do they make money?
Well, they can make money selling the application, or selling support and services. Just because code is open source doesn't mean you can't make money from it. There are tons of examples of open source software projects that do very well because they provide ancillary services, support, or customization, things like that. And frankly, a lot of times people don't necessarily want to replicate a code base and figure out how to make the changes they might want to make. So we see open source as a friend to the commercial software developer. And that's a given, right? That's played out in software and SaaS for decades, so it's not really an opinion; it's just a fact of the industry.
Insurance companies.
Dan Caron (27:00.931)
Well, insurance companies are going to be forced to adopt FHIR standards. They're going to be forced to adopt new data interoperability laws. And what that means is that if they're looking to reduce costs, which hopefully we're all trying to do in healthcare, they can benefit from open source projects that help their bottom line too. Predicting readmissions is a really great example: it's one of the highest costs to a payer, and being able to use machine learning models to predict readmissions is top of mind for a lot of folks. There are some really compelling models that do just that. So really, there's no player in this ecosystem that can't benefit from an open source approach to health AI.
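As a toy illustration of the kind of readmission prediction Dan mentions, here is a hypothetical additive risk score. The features, weights, and thresholds are invented for illustration and this is not a clinical tool; real readmission models are trained and validated on patient data:

```python
def readmission_risk(length_of_stay_days, admissions_last_year, chronic_conditions, age):
    """Toy additive readmission risk score. Illustrative only, not a clinical tool."""
    score = 0
    score += min(length_of_stay_days, 14)  # longer stays add risk, capped at 14 points
    score += 3 * admissions_last_year      # prior admissions are a strong signal
    score += 2 * chronic_conditions        # comorbidity burden
    score += 1 if age >= 65 else 0         # simple age flag
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"


print(readmission_risk(10, 2, 3, 70))  # high
print(readmission_risk(2, 0, 1, 40))   # low
```

Even a crude score like this shows why payers care: flagging the "high" group for follow-up is where the cost savings would come from.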
Prateek Joshi (27:59.364)
Right. I have a final question before we go to the rapid fire round. If you had to predict, what's the next big breakthrough that can happen in health AI? And also, what needs to happen to enable that big breakthrough?
Dan Caron (28:22.014)
Well, I mean, that's kind of a setup, and I'm extremely biased, right? I run a venture-backed startup in health AI; there's no way I can answer that without talking about what we're building. Health Universe is an infrastructure platform that allows for worldwide collaboration between researchers, data scientists, bioinformatics engineers, protein designers,
Prateek Joshi (28:28.576)
Hahaha
Dan Caron (28:49.582)
clinical informatics folks, clinicians, physicians, surgeons, and administrators alike, and even patients, right? And what we're doing is, instead of people committing code to GitHub or publishing a PDF of their research, we're saying: look, just publish your work, publish your research on Health Universe, and we can collaborate together. And that PDF, which might be some really fantastic research, is now available as an application that other researchers or clinicians can run and derive value from. We're unlocking innovation and making it available right from the browser. There's nothing to download, nothing to install; it's instant. Because a lot of times, if someone wants to implement a great piece of research, they have to find the code on GitHub, figure out how to create the Python environment or get the IT department to do it, and then look at the security requirements. I mean, these things take months or years, and that's why innovation takes so long to go from the bench to the bedside, right? So we're trying to dramatically shorten that by providing the cloud infrastructure for collaboration and for applications. So yes, that's a biased answer, but my big belief is that collaborative effort is going to unlock all of this innovation in health AI for the world.
Prateek Joshi (30:30.676)
Maybe another way: I'll ask the question differently. Outside of Health Universe, what is the next big breakthrough that can happen in health AI?
Dan Caron (30:37.762)
Sure, sure.
Dan Caron (30:44.578)
That's a great question. Okay, so what's really interesting right now are the multimodal models, right? Models that can take in image data, audio data, text data, and numerical data, and synthesize it into a clinical picture. That's really interesting and compelling, because you can start to produce analysis that is very rich, and that would take physicians a very long time to do. Let's go to an extreme, right? Let's say that all of my Fitbit data or Apple Watch data is available, all my medical records for the last 20 years are available, and then I have some MRI scans and CT scans, and all my blood work is available.
Now we have this very, very rich clinical picture. And with multimodal models, we're looking at high-dimensional analysis of all of this data to really pinpoint a diagnosis, or maybe a treatment that might have a high likelihood of being efficacious. And I mean, that is super exciting, right? Because that moves us out of this ChatGPT world into a world where we're using multimodal data to make some really compelling analyses.
Prateek Joshi (32:31.836)
Right. Amazing. With that, we're at the rapid fire round. I'll ask a series of questions and would love to hear your answers in 15 seconds or less. You ready? All right. Question number one. What's your favorite book? You can name more than one.
Dan Caron (32:43.553)
Yeah, sounds fun.
Dan Caron (32:50.594)
Oh man, that is a tough question. You know, recently I read Build by Tony Fadell. He worked at Apple, where he led the development of the iPhone, and he also built Nest, the thermostat company that was acquired by Google. When I read that book, every single page was gold.
Prateek Joshi (33:02.804)
Mm-hmm.
Dan Caron (33:18.382)
It's a witty, funny, engaging book filled with practical tips, and just a really amazing book for entrepreneurs. I think it's one of the better business books I've ever read, and I've read hundreds. So I would highly recommend Build by Tony Fadell.
Prateek Joshi (33:36.788)
Love it. All right, next question. What has been an important but overlooked AI trend in the last 12 months?
Dan Caron (33:46.55)
I would have to say probably causal ML and causality. It's my personal belief that, you know, people talk about the singularity and AI becoming conscious and all that, and I don't think we're going to get there with transformer models. I think we need to look at causal models.
Dan Caron (34:14.402)
20, 30 years or longer, right? He's been a big contributor to this field. And you learn in statistics, right, that correlation is not causation. Well, causality starts to ask: when is correlation causation? How do we know that correlation is causation? How do we take our directed acyclic graph of factors and manipulate them through experimentation to see when correlation is indeed causation? And that's really compelling, I think, for understanding the world, because humans learn cause and effect. When you're very young, you smile and you giggle and your parents laugh. Oh, that's good. Or you throw a glass on the ground and it breaks and your parents are mad, right? Cause and effect, and understanding that process, is, I think, overlooked a little bit in the current field of machine learning and AI. We're doing a lot of brute-force hacking, if you will, by just building models from data without necessarily looking into causality. And I think that's a really underappreciated field.
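Dan's point, that correlation is not causation and that interventions can tell them apart, can be shown with a small simulation. The scenario (a hidden confounder driving two unrelated variables) and all the numbers are invented for illustration:

```python
import random

random.seed(0)
N = 10_000

# Hidden confounder: summer heat drives both ice cream sales and drownings.
# Neither causes the other, but they will correlate strongly.
heat = [random.gauss(0, 1) for _ in range(N)]
ice_cream = [h + random.gauss(0, 0.5) for h in heat]
drownings = [h + random.gauss(0, 0.5) for h in heat]


def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5


# Observational data: strong correlation despite no causal link.
print(f"observed correlation: {corr(ice_cream, drownings):.2f}")

# Intervention (a crude do-operator): set ice cream sales independently of heat.
# The correlation with drownings disappears, revealing there was no causal link.
forced_ice_cream = [random.gauss(0, 1) for _ in range(N)]
print(f"correlation under intervention: {corr(forced_ice_cream, drownings):.2f}")
```

A model fit only on the observational rows would happily use ice cream sales to predict drownings; the experiment is what exposes the confounder, which is the gap Dan is pointing at.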
Prateek Joshi (35:48.716)
All right. What's the one thing about the US healthcare system that most people don't get?
Dan Caron (35:58.642)
Well, when it comes to health AI, I would say everyone's worried about trust and safety, and rightfully so. There's no doubt about it; that is the number one priority. But also, like I said earlier, 40% of doctors are thinking of quitting because they're so stressed out, so burnt out, so overworked. And the other thing is, there's an opportunity cost to not using health AI.
Physicians are human, clinicians are human, and they make mistakes. They make a lot of mistakes, right? Because they're under so much pressure and time constraints. And it's hard; being a physician is very difficult. So it's not their fault when they make mistakes. Well, maybe it is sometimes. But the point is that there's a lot of human error in medicine, and there have been approaches, like The Checklist Manifesto, to reduce the error rate. And yes, while health AI does have issues, hallucination and things like that, apples to apples, sometimes that error rate is much less than a human's. And we have to consider that full context when we think about trust and safety.
Prateek Joshi (37:19.408)
What separates great AI products from the good ones?
Dan Caron (37:27.45)
I would say ease of use and simplicity, right? When I first saw Midjourney creating artwork from text prompts, it was like magic. You signed up for Discord, you started chatting with the bot, and you started seeing whatever was in your imagination coming to life. It was a very simple experience: it's like you're texting with a friend, but it's bringing your imagination to life.
That's a simple experience. It's easy to use and it creates a quick effect. We had that same effect with ChatGPT from OpenAI, right? When we all first started using it, it was like, oh my God, this is magic. How long has it been now? It's been a little while, but I remember the first few months, the magic hadn't worn off. And even still, I'm like, this is impressive. This is still an impressive experience. That real simple, easy experience, I think, is what makes great AI products.
Prateek Joshi (38:35.868)
Right, next question. As a founder, what have you changed your mind on recently?
Dan Caron (38:42.454)
Hmm.
Dan Caron (38:47.138)
I'd say the Singularity. I used to look at the pace of AI, which is still astounding to me, and think conscious AI was a little bit closer. People say it's just silicon and plastic and metal, so it's never going to be conscious. But ultimately humans are made of inert elements too: water, carbon, for us carbon-based life forms. Is an atom conscious? I don't know, maybe it is. Is carbon conscious? I don't know. But we are a collection of these things that has aggregated into a human and produced consciousness.
I think until we really bring causality into ML and AI, that's going to be the missing ingredient. So I've pushed my timeline for the Singularity back a bit. But we'll see.
Prateek Joshi (39:54.204)
Right, right. What's your biggest AI prediction for the next 12 months?
Dan Caron (40:01.822)
Oh man, we're going to see text-to-video rapidly improve, and it's going to be everywhere, just like we're starting to see AI-generated images everywhere. I see them on LinkedIn all the time, I see them on Instagram all the time, they've made it onto the cover of Time magazine, and they're used in a lot of places now in print.
I think we're going to start to see text-to-video everywhere. We're going to see commercials on TV that are more fantastical, more interesting, more engaging, more rich, more compelling, and those are going to come from video models. That's going to be really exciting. And we're going to see people creating movies: short films and feature films, and very rapidly we're going to get some really cool, creative movies to watch.
Prateek Joshi (40:54.432)
Final question, what's your number one advice to founders starting out today?
Dan Caron (41:02.838)
Well, as a person who has started several businesses, you have to have a very compelling, big idea. It's got to be a big idea. If you want to do something grand, you need the support of investors, employees, partners, customers, and clients; you really need the support of as many people as you can to make something successful.
And people only get excited about, and only join, things they really want to be a part of. For me, with Health Universe, I thought long about it. I asked, what can I do for at least the next decade that I'm going to want to get out of bed every day and work on? And that was bringing machine learning and AI into healthcare: advancing outcomes, bringing innovation out of research and into the world at a much more rapid rate, helping people get better, helping find cures for diseases. All these big ideas that ML and AI can tackle, I want to build that infrastructure. That's what we're doing, and we have a group of people and investors who are very committed to that vision. It's a big vision and it's exciting.
So I would say, do something that is big, exciting, and audacious. Because two or three years after you start your company, if that vision isn't big and compelling, you're going to lose steam, you're going to be interested in other things, shiny objects are going to catch your attention, and your business idea might languish a bit. So don't make that mistake. That's not to say you can't build a great business in a niche and make money; that's wonderful. But if you want to be the founder of a high-growth startup, make sure your idea is big and compelling.
Prateek Joshi (43:09.708)
Amazing. That's a wonderful piece of advice, and I agree with it, because in the early days you have to attract people: teammates, customers, and investors. And people are very
emotional, irrational creatures. We're supposed to have mathematical, logical minds, but we make a lot of decisions on gut feel, and if something doesn't sound exciting, we're not drawn to it. So with that, it's a great piece of advice. And Dan, this has been a wonderful discussion. Thank you so much for coming onto the show and sharing your insights.
Dan Caron (43:42.999)
Yeah.
Dan Caron (43:52.114)
Oh, thank you so much. This has been really fun. I really appreciate it.