The Reasoning Show
AI moves fast. Thinking clearly matters more.
The Reasoning Show cuts through the hype to explore how the smartest people in enterprise AI actually make decisions — the strategy, the tradeoffs, and the hard lessons no press release mentions.
Every week, hosts Aaron Delp and Brian Gracely sit down with the founders building the tools, investors funding the shift, and operators running AI in the real world. Not hype. Not panic. Just clear-headed conversations with people who have to make actual decisions.
Because the AI revolution isn't just happening. It's being reasoned through.
New shows every Wednesday and Sunday.
Topics: Enterprise AI strategy · LLMs in production · AI leadership · Agentic AI · Digital Sovereignty · Machine Learning · AI startups · Cloud Computing
Validation and Guardrails for LLMs
Shreya Rajpal (@ShreyaR, CEO @guardrails_ai) talks about the need to provide guardrails and validation for LLMs, along with common use cases and Guardrails AI's new Hub.
SHOW: 797
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
- Learn More About Azure Offerings: Azure Migrate and Modernize & Azure Innovate!
- Azure Free Cloud Resource Kit: Step-by-step guidance, resources, and expert advice, from migration to innovation.
- CloudZero – Cloud Cost Visibility and Savings
- Find "Breaking Analysis Podcast with Dave Vellante" on Apple, Google and Spotify
- Keep up to date with Enterprise Tech with theCUBE
SHOW NOTES:
- Guardrails AI (homepage)
- Guardrails AI Hub
- Guardrails AI GitHub
- Guardrails AI Discord
- Shreya on TWIML podcast
- Guardrails AI on TechCrunch
Topic 1 - Welcome to the show. Before we dive into today’s discussion, tell us a little bit about your background.
Topic 2 - Our topic today is the validation and accuracy of AI with guardrails. Let’s start with the why… Why do we need guardrails for LLMs today?
Topic 3 - Where and how do you control (maybe validate is a better word) outputs from LLMs today? What are your thoughts on the best way to validate outputs?
Topic 4 - Will this workflow work with both closed-source (ChatGPT) and open-source (Llama 2) models? Would this process apply to training/fine-tuning, or more to inference? Would this potentially replace the humans in the loop that we see today, or is this completely different?
Topic 5 - What are some of the most common early use cases and practical examples? PII detection comes to mind, as do violations of ethics or laws, off-topic/out-of-scope requests, or simply something the model isn't designed to provide?
Topic 6 - What happens when validation fails? Does this create a loop where the model tries again?
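As background for Topics 5 and 6, here is a minimal sketch of output validation with the open-source guardrails-ai package, using the Hub's DetectPII validator. The on_fail policy controls what happens on a failure: "fix" redacts the offending spans, "reask" re-prompts the model (the loop scenario above), and "exception" raises. The exact keyword arguments shown are assumptions based on the project's public docs, not a definitive configuration.

```python
# Minimal sketch: checking text for PII with guardrails-ai.
# Assumes: `pip install guardrails-ai` and
#          `guardrails hub install hub://guardrails/detect_pii`
from guardrails import Guard
from guardrails.hub import DetectPII

# on_fail="fix" redacts detected entities; "reask" would instead
# re-prompt the LLM, and "exception" would raise on a violation.
guard = Guard().use(
    DetectPII,
    pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"],
    on_fail="fix",
)

outcome = guard.validate("Reach me at jane@example.com or 555-123-4567.")
print(outcome.validation_passed)  # did the text clear the validator?
print(outcome.validated_output)   # PII spans replaced with placeholders
```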
Topic 7 - Let’s talk about Guardrails AI specifically. Today you offer an open-source marketplace of validators in the Guardrails Hub, correct? As we mentioned earlier, almost everyone will want to implement a different set of guardrails. Is the best way to think about this as building blocks, with validators pieced together? Tell everyone a little bit about the offering.
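To make the building-block framing in Topic 7 concrete, here is a hedged sketch of composing multiple Hub validators on a single guard. The validator names (DetectPII, ToxicLanguage) come from the public Guardrails Hub; the parameters are assumptions drawn from its docs.

```python
# Sketch of the building-block idea: several Hub validators composed
# on one Guard. Assumes each validator was installed from the Hub,
# e.g. `guardrails hub install hub://guardrails/toxic_language`.
from guardrails import Guard
from guardrails.hub import DetectPII, ToxicLanguage

guard = Guard().use_many(
    DetectPII(pii_entities=["EMAIL_ADDRESS"], on_fail="fix"),
    ToxicLanguage(threshold=0.5, validation_method="sentence",
                  on_fail="exception"),
)

# Any LLM output routed through this guard is checked by both
# validators before it reaches the user.
outcome = guard.validate("Thanks for reaching out, happy to help!")
print(outcome.validation_passed)
```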
FEEDBACK?
- Email: show @ reasoning dot show
- Bluesky: @reasoningshow.bsky.social
- Twitter/X: @ReasoningShow
- Instagram: @reasoningshow
- TikTok: @reasoningshow
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Software Defined Talk
Software Defined Talk LLC
Dithering Preview
Ben Thompson and John Gruber
Everyday AI Podcast – An AI and ChatGPT Podcast
Everyday AI
Prof G Markets
Vox Media Podcast Network
Acquired
Ben Gilbert and David Rosenthal
Decoder with Nilay Patel
The Verge
theCUBE
SiliconANGLE Media