The Money Runner - David Nelson

Decoding the future of A.I. with tech visionary Jeff Huber

December 20, 2023 David Nelson, CFA Season 1 Episode 112

Join us in an eye-opening episode of The Money Runner, where host David Nelson sits down with Jeff Huber, an influential pioneer in the tech industry, to discuss the rapidly evolving world of Artificial Intelligence (AI). Jeff is the co-founder of Triatomic Capital, a former senior vice president at Google, the founding CEO at Grail and sits on the board of too many companies to mention. This podcast will shine a light on the power of AI and its implications for the future. Here's what you can expect. 
 
• The AI Revolution: How AI could be the most significant advancement since the rollout of electricity across the United States. 
• Expert Perspectives: Jeff Huber shares his journey from leading projects like Google Ads and Maps to spearheading innovative machine learning projects.
• Generative AI vs. Task-Specific AI: Understanding the difference and the future of AI applications. 
• Government's Role: What should the regulatory structure for AI look like?
• The Military and AI: An in-depth look at AI in defense, ethical concerns, and the balance between safety and innovation. 
• The Future with AI: Embracing change and preparing for a world where AI is as ubiquitous as electricity.


A.I. is the rage. And the question everyone wants answered is how artificial intelligence is going to change their lives for the better. Or is it a technology so powerful that in the wrong hands it could end life as we know it? If we're going to find out, we need to talk to those who were there at the beginning and laid the groundwork for what could be the biggest technology advancement since the dawn of electricity. Let's get started. Welcome to The Money Runner. I'm David Nelson. I'm very excited about today's interview. So much so, I flew all the way out to San Francisco and then on to Palo Alto to sit down in person with today's guest. Giving a proper introduction would take some time, but let me just hit on some of the high points. If you have used or collaborated on the Internet, Jeff Huber has touched your life. He has worked on and led some of the most influential companies on the planet. He was the CEO and founder of Grail, a firm dedicated to early cancer detection and spun out from Illumina, where he served on its board. Prior to that, he was senior vice president at Google. He co-founded the Life Sciences effort at Google, and he and his team developed some of the more profitable projects at Google, including Google Ads, Google Apps and Maps. He was recognized as one of the 100 most intriguing entrepreneurs in 2017. Jeff holds a bachelor's degree in computer engineering from the University of Illinois and a master's degree in business from Harvard. He's a visiting scholar at Stanford University's Department of Bioengineering. And he sits on the board of too many companies to mention. And finally, Jeff is a founding member of Triatomic, a leading venture capital firm focused on engineered biology, new energy, next-generation computing and engineered materials. Jeff, I know I left a lot out here, but welcome to The Money Runner. Thanks so much for being with us today. Thanks so much, David. I'm excited to be here. 
Jeff, I want to try something a little different here in this interview. We're going to kind of go stream of consciousness and maybe a little out of order. But right at the top, I want to touch on what some of my listeners want to know about. You are on the cutting edge of technologies and services that in some ways are destined to almost change life as we know it. High on that list is artificial intelligence, or A.I. What are the dangers for me, my job, my children and grandchildren? So I put to you: what is the biggest risk we face with a technology that could disrupt just about every platform we know of? So there is a lot of excitement around A.I., and as you highlight, there is some fear along with that. In the scheme of things, I would say I'm an optimist. I do think that this is a time of unprecedented change. With change comes opportunity, but with change can come fear as well. On the positive side, I personally, and then our firm that you mentioned, Triatomic Capital, are big believers in the positive potential of A.I. to essentially give all of us superpowers, where we now have access to intelligence and reasoning and capabilities that just weren't possible previously. I think the best way to consider or to think about the fear side of it is that it is going to introduce substantial change. And my encouragement would be to lean into the change. If you look back on periods of historic change in evolution, the people that were part of the change tended to do better than the ones that were riding along or were ultimately impacted by it. So my encouragement would be for everyone, the listeners here, to lean in, to learn, to be active users, to think about the ways that A.I., in the products that are available today, things like OpenAI's ChatGPT or Google's Bard, can impact how they think about the world, how they learn, how they do their job, so that they can be part of the change. Is government up to the task? 
Because even today, the Biden administration announced a new executive order focused on A.I. Is there a danger that they're going to overstep here and snuff this out before it's too late, or is the danger real and something that needs to be watched here? So I think the key role that the government can play is one around providing the safety net as part of the change that is happening. So thinking about how can there be incentives for businesses to help employees become trained, to kind of stay on the train as things are moving. Today's been a super hectic day, so I haven't had a chance to completely digest the executive order and all of the implications of what they got right or what they got wrong. But my encouragement around regulation, I think there's a positive role that regulation can and should play. My encouragement for the United States, for other countries, governments that are considering things, is really to balance optimism and responsibility, and to take a forward-leaning approach of what are the positive implications of this, but then with that element of responsibility, to make sure as many people come along with the change as possible. This is where A.I. can get a little scary, and maybe we really do need some regulation. Let's drop in on the conversation where Jeff weighs in on the military-industrial complex. One of the fears that I have: the military is going to want to use this. They probably already are, maybe first to find better ways to protect us, but, let's be honest, better ways to kill the enemy. And if we're using it, so are they. What's out there? I mean, I worry about some maniac being able to, you know, bioengineer some kind of new disease that's going to kill us all. There are enemies out there, and if they have this, how afraid should we be? Yeah. I mean, I think it's the reality of the modern era, and A.I. is being used already on the battlefield. 
If you think about applications of computer vision and the automation of drones and applications like that, it's already here. That said, I think it's going to be nearly impossible to regulate that, because it turns out that bad actors don't pay attention to the regulations or laws or guidance. So it is the reality. Again, though, I think the positive side of it is it can be used for signals intelligence, for defensive applications. So there is, on the positive side of it, sort of a positive arms race of the ways that these can be used for defensive and peace-enabling solutions as well. Reagan's dream back with Star Wars, some 30 or 40 years ago or even longer, was that there'd be a system in place that could end the idea of, you know, MIRV warheads flying all over the world and knocking out dozens and dozens of cities at a time, that we'd be able to defend ourselves. Is that a reality? Could that even happen? In limited applications, potentially. If you look at Israel's now famous Iron Dome, it steps in that direction. I think the reality of it is it's hard for any system to ever be perfect or impervious. So I would say it's still a work in progress that's unlikely to ever be completely accomplished, because adversaries come up with more advanced capabilities as well. That's the literal definition of an arms race. A lot of A.I. And I'm learning it as well. I'm using it in my work, and it's pretty fascinating. But I'm trying to understand the difference: there seems to be artificial intelligence designed for specific tasks, and then there is generative A.I. And if I'm understanding that right, it creates new content and data and in some ways mimics human intelligence, and that scares a lot of people, including me. Are we going to get there, or are we going to be able to tell the difference between the two? So I think it's useful to do a little bit of defining of what we mean by the term A.I., and then how A.I. has evolved over time. 
So interestingly, A.I. is a term that I've historically resisted using, because I'm really an engineer by background, and to me A.I. has been much more of a marketing term than something that means anything in actual underlying technology. And what would you call it, if you recall? So I have finally conceded, basically in the last year, because A.I. is so broadly used, but my personal definition of it is I use it as an umbrella term that encompasses underlying capabilities and technologies. And if you look at the last 20 years of A.I., I think there have been three pretty distinct chapters of A.I. Twenty years ago, 2003, 2004, when I was at Google, my teams built some of the first A.I. systems at Google, and those were machine learning systems that were really kind of statistical-regression-based models, where they were predicting future events based on historic data. So I'd call that the era of machine learning. And there you had kind of specific applications developed for specific purposes. So after we developed the first one, there were then 30 or 40 machine learning systems that proliferated across Google for things like spam detection in Gmail or fraud detection in transactions, building on that first system that we built for ads quality, or click-through-rate prediction. The second era was about a decade later, which was the era of deep learning, and that was using neural network technologies that, in some respects, were still big, complex systems, but they were a level conceptually more simple than the very purpose-built machine learning systems. You'll get a kick out of this. Listen to Jeff's comments about venture capitalist Marc Andreessen. And that led me to coin a phrase at Google that built on another observation from industry. I don't know if you remember, from kind of 2010, 2011 or so. 
Marc Andreessen of Andreessen Horowitz was very famous for coining the phrase that software eats the world, that everything was kind of being software-platform-ified. My observation at Google was that A.I. eats software, because we went from having 30 or 40 different, uniquely developed systems to being able to have a common system with deep-learning-based systems, where it was common code, still a specific instance for each application and tuned for each application, but a common code underneath. So A.I. eats software. The next chapter that we've just entered into, or that has exploded now into broader public consciousness, is, as you mentioned, generative A.I. using large language model systems. And if you look at those systems, they're the next level of conceptual simplicity, but the big difference is the amount of data and the amount of compute that you throw at the problems. And is it bigger or larger? It is bigger and larger: far, far more data and far more compute being applied. And that's led us at Triatomic to kind of go to the next chapter of observation, which is that in these A.I. systems now, the output is entirely defined by the data that you feed it. So we've encapsulated that as: in A.I. systems now, the data is the code. So back to your question about A.I.: that's the umbrella encapsulating machine learning, deep learning, and now generative A.I., large language model systems. There are other variations underneath those, but those are the largest categories. And in each case, as with the evolution of media, television didn't replace radio, it supplemented it. The Internet didn't replace television, it supplemented it. Each of these models, or eras, as it goes along, supplements what was there previously. Marc Andreessen also said that this would be bigger than the Internet. You think that's true? 
I will echo, actually, the observation that resonates most for me, which is John Doerr's. In the mid-to-late 1990s, with the first Internet boom, he famously said that the Internet was under-hyped, when everyone thought it was a bubble. And he ultimately came to be proven true. The only thing that seemed to have become a bubble was the stocks from that period of time. Correct. But the implications of it are significant, and we as a firm at Triatomic think that indeed A.I. is a big deal. I mean, we've compared it to, if you think in century-defining terms, the introduction of electricity, or not the introduction, but the broad-scale adoption of electricity that happened across the 1920s and 1930s. Electricity had existed previously, but that's when the penetration really took off, both residential and commercial, and certainly by the 1930s, 1940s, companies didn't say they used electricity, because it was so obvious and implicit and had value. We think that A.I. is that kind of inflection now happening in the 2020s, where just about every company is going to be an A.I. company, whether they recognize it or not. We do think that there will be significant differentiation between the ones that do it well and the ones that lag. I hope you enjoyed today's interview. And of course, you know what comes next. This is the part where I ask for your support. And no, it doesn't cost money. If you liked today's podcast, hit subscribe and let us know what you think. Also, don't forget to visit me on Substack, where I publish my blog and research. You'll find articles, charts, audio and video. Thanks for joining. I'm David Nelson.