Unexplored Territory

#076 - AI Roles Demystified: A Guide for Infrastructure Admins with Myles Gray

May 27, 2024

In this conversation, Myles Gray walks through the AI workflow and its personas: the responsibilities of data scientists and developers in deploying AI models, the role of infrastructure administrators, and the challenges of deploying models at the edge. He explains quantization and why model accuracy matters, and describes the pipeline for deploying models, including the difference between unit testing (verifying a single module or function in isolation) and integration testing (verifying the interaction between different components or applications). The discussion also covers tools such as MLflow for storing and managing ML models, the emergence of smaller models as an answer to the resource constraints of large ones, the importance of collaboration between personas for security and governance, and the role of data governance policies in maintaining data quality and consistency.
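The quantization idea mentioned above can be sketched in a few lines. This is a minimal, hypothetical illustration of symmetric int8 weight quantization in pure Python, not how any particular framework implements it; real deployments would use a framework's quantization toolkit (PyTorch, TensorRT, etc.), which also handles activations, calibration, and per-channel scales.

```python
def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight is close to, but not exactly, the original --
# the model shrinks 4x (float32 -> int8) at a small cost in accuracy.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each weight now fits in one byte instead of four, which is why quantization matters so much for edge devices with tight memory and storage budgets; the rounding error is what makes accuracy validation after quantization essential.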

Takeaways

  • The AI workflow involves multiple personas, including data scientists, developers, and infrastructure administrators.
  • Data scientists play a crucial role in developing AI models, while developers are responsible for deploying the models into production.
  • Infrastructure administrators need to consider the virtualization layer and ensure efficient and easy consumption of infrastructure components.
  • Deploying AI models at the edge requires quantization to reduce model size and considerations for form factor, scale, and connectivity.
  • The pipeline for deploying models involves steps such as unit testing, scanning for vulnerabilities, building container images, and pushing to a registry.
  • Unit testing focuses on individual components, verifying the functionality of a single module or function within an application.
  • Integration testing verifies the interaction between different components or applications, ensuring the system works as a whole.
  • MLflow and other tools are used to store and manage ML models.
  • Smaller models are emerging as a solution to the resource constraints of large models.
  • Collaboration between different personas is important for ensuring security and governance in AI projects.
  • Data governance policies are crucial for maintaining data quality and consistency.
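The unit vs. integration distinction above can be made concrete with a toy two-step pipeline. This is a hypothetical example (the function names and the trivial "model" are invented for illustration), but the pattern is the standard one.

```python
def normalize(values):
    """Scale a list of numbers into [0, 1] (the single unit under test)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def predict(features):
    """Trivial stand-in for a model: the mean of the features."""
    return sum(features) / len(features)

# Unit test: exercises normalize() in isolation, with known input/output.
def test_normalize_unit():
    assert normalize([0, 5, 10]) == [0.0, 0.5, 1.0]

# Integration test: exercises the two components chained together,
# catching mismatches (wrong shapes, ranges, assumptions) between them.
def test_pipeline_integration():
    assert predict(normalize([0, 5, 10])) == 0.5
```

In practice these would live in a test suite run by a tool like pytest as one of the early stages of the deployment pipeline, before image builds and vulnerability scans.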
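The core idea behind model-management tools like MLflow, versioned storage of models with metadata, can be sketched with a toy in-memory registry. This is not MLflow's API; it is a simplified illustration of what such a registry tracks, with all names invented for the example.

```python
import time

class ModelRegistry:
    """Toy in-memory registry illustrating versioned model storage.
    Real tools (e.g. MLflow's Model Registry) add artifact stores,
    stage transitions, and experiment lineage on top of this idea."""

    def __init__(self):
        self._models = {}  # name -> list of (version, payload, metadata)

    def register(self, name, payload, **metadata):
        """Store a new version of a model and return its version number."""
        versions = self._models.setdefault(name, [])
        version = len(versions) + 1
        metadata["registered_at"] = time.time()
        versions.append((version, payload, metadata))
        return version

    def latest(self, name):
        """Return the most recent (version, payload) for a model."""
        version, payload, _metadata = self._models[name][-1]
        return version, payload

registry = ModelRegistry()
registry.register("churn-model", b"weights-v1", accuracy=0.91)
registry.register("churn-model", b"weights-v2", accuracy=0.93)
version, payload = registry.latest("churn-model")  # version 2
```

Keeping versions and metadata together is what lets the different personas collaborate: data scientists register candidates, developers promote a specific version into the deployment pipeline, and administrators can trace exactly which artifact is running where.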

Chapters

  • 00:00 Understanding the AI Workflow and Personas
  • 03:24 The Role of Data Scientists and Developers in Deploying AI Models
  • 08:47 The Responsibilities of Infrastructure Administrators
  • 15:25 Challenges of Deploying Models at the Edge
  • 20:29 The Pipeline for Deploying AI Models
  • 24:45 Unit Testing vs. Integration Testing
  • 28:22 Managing ML Models with MLflow and Other Tools
  • 32:17 The Emergence of Smaller Models
  • 39:58 Collaboration for Security and Governance in AI Projects
  • 46:32 The Importance of Data Governance



Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
