AI Portfolio Podcast

Sanyam Bhutani: LLM Experimentation, Podcasting Insights, and AI Innovations

June 09, 2024 · Season 1, Episode 10 · Mark Moyou, PhD

Sanyam Bhutani is a leading figure in the data science community. He is a Senior Data Scientist at H2O.ai, with a previous tenure at Weights & Biases, and an International Fellow at fast.ai. As a Kaggle Grandmaster, his contributions to the field are widely recognized and highly respected.

In this episode, Sanyam delves into the nuances of fine-tuning and optimizing Large Language Models (LLMs). He explores the current state and future potential of LLMs, breaking down their architecture and functionality in a way that is accessible to both newcomers and seasoned data scientists, and shares practical insights and strategies for fine-tuning models to improve their performance and applicability.

📲 Sanyam Bhutani Socials:
LinkedIn: https://www.linkedin.com/in/sanyambhutani/
Twitter: https://x.com/bhutanisanyam1?lang=en

📲 Mark Moyou, PhD Socials:
LinkedIn: https://www.linkedin.com/in/markmoyou/
Twitter: https://twitter.com/MarkMoyou

📗 Chapters
00:00 Intro
02:46 200 days of LLMs
06:16 Venture Capital
08:40 Setting Goals in Public
09:45 Fine-tuning Experiment
14:02 Kaggle Grandmasters Team
15:55 Doing Challenges & Reading Research Papers
17:47 Hardest topic to learn in AI
19:05 Are you afraid to ask stupid questions?
20:43 Learning how LLMs work
22:54 Academic vs Product First Mindset
27:51 Training or Inference on LLMs
29:15 Favorite LLM Agent
32:10 How to go about learning LLMs?
36:55 Open Source LLMs on Research Papers
37:41 Capability of Modern GPUs
45:48 Journey to H2O.ai
50:07 Why Sanyam Stopped Podcasting
56:25 Podcasting Experience
58:39 Top Data Scientists
01:00:19 Advice for New Podcasts
01:03:32 Breaking into Data Science
01:12:23 Career Optimization Function
01:14:02 Making Progress Every Day
01:15:05 Advice for New Professionals
01:17:00 Book Recommendations
01:18:04 Rapid Round
