Edtech Insiders

Exploring the Role of Open Source in AI with Mark Heaps of Groq

June 10, 2024 | Alex Sarlin and Ben Kornell

Mark Heaps is the Chief Technology Evangelist at Groq® where he leads a diverse team focused on innovative and creative AI solutions with demo applications enabled by the Groq LPU™ Inference Engine. Mark is a longtime technology evangelist who has worked with Adobe, Google, Apple, and others on various projects ranging from digital imaging to AI systems used in some of the most popular applications today. He is passionate about democratizing AI in service of advancing human agency and loves exploring how the next generation of technology end-users will interact with conversational AI. Mark is also a die-hard foodie obsessed with finding the world's best pizza – so far a hole-in-the-wall joint in Croatia (is there any other kind?) is in the lead.

Recommended Resources:
🌐
groq.com
☁️
GroqCloud
📑
groq.com/docs

This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for Education Entrepreneurs.  Founded by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work just as hard as you do.

Transcript

Alexander Sarlin  00:04
Mark Heaps is the Chief Technology Evangelist at Groq®, where he leads a diverse team focused on innovative and creative AI solutions with demo applications enabled by the Groq LPU™ Inference Engine. Mark's a longtime technology evangelist who has worked with Adobe, Google, Apple and others on various projects ranging from digital imaging to AI systems used in some of the most popular applications today. He's passionate about democratizing AI in service of advancing human agency, and loves exploring how the next generation of technology end users will interact with conversational AI. Mark is also a die-hard foodie obsessed with finding the world's best pizza – so far a hole-in-the-wall joint in Croatia (is there any other kind?) is in the lead.

Ben Kornell  01:43
Welcome to Edtech Insiders. We're super excited to have the Chief Technology Evangelist at Groq, Mark Heaps. Welcome to Edtech Insiders.

Mark Heaps  01:55
Thanks for having me, guys. Glad to be here.

Ben Kornell  01:56
Before we dive into Groq and all of its features and technology, let's hear a little bit about your background. Mark, how did you come to this work at Groq? And where's the intersection with education and technology?

Mark Heaps  02:10
Yeah, so the position at Groq actually came about while I was running an agency and Groq was one of our clients. I was working very closely with Jonathan Ross, the CEO and founder, and I felt constantly like he was pushing me to think differently about things. I love that, when you get with these real visionary leaders and technologists who challenge your conventional thinking. So I found myself reaching out to them more and more, just to engage. And then eventually he said, Hey, man, why don't you come join the company? I think you really think about things differently, and I'd like to have you here. And so that's what landed me at Groq; we actually had some mutual connections at Google. I had been at a couple of agencies before Groq, including one that's pretty well known called Duarte. A lot of educators actually are very familiar with Duarte and some of the affiliates of Duarte; I was a director there. Before that I was at Google, and that was really where I first dove into the world of AI. I oversaw the book scanning project, where we were scanning all of the books from libraries around the world, so you can imagine I met with a lot of educators through those programs, the university in Ann Arbor, etc. That led into a number of other AI projects like autonomous vehicles, Google Voice, moderation for Street View, etc. And before that, I was actually a college professor for seven years, and I also ran an agency where we developed technologies for educators as well. So this is right in my wheelhouse. All that being said, it has nothing to do with my own educational background: my background is actually in design and fine art, my degree is in fine art, and my minor was in theater, which would seemingly have nothing to do with AI or computer science or anything similar. But I've always been a tech enthusiast, ever since I was a kid building my own computers and learning programming way back in the early 80s. So yeah, it's a bit of a mix, but I love it.

Alexander Sarlin  03:57
I imagine many of the people listening to this may not have heard of Groq, mostly because it is a new company. It's working in the generative AI space, and it's not always considered edtech, even though it has some really interesting educational applications. Can you just give an overview of what Groq is, and what its relationship is to other large language model type interfaces that people may know?

Mark Heaps  04:21
Yeah, sure. So at the core of the company, Groq builds the world's fastest infrastructure for what's called AI inference. For those that aren't familiar with the jargon, the inference side of AI is basically the logic or compute side of using AI technologies in production. So when you ask ChatGPT or some tool like that to generate something, it's inferencing, right? It's taking from its training and your request and saying, I infer you want this. So we build the infrastructure, everything from the silicon of the chip all the way up to the cloud service, and that's why we basically have a full stack that we can provide to developers and business leaders and administrators. In the past, where people might have had to consider buying very expensive hardware to implement AI technologies in their facility, today you're seeing more and more people do that through a cloud service where they provision and pay as you go. With Groq, we've got that deployment optionality. What we did that kind of changed everything for us in the last six months or so was we put everything in the cloud and made it publicly available for free and said, Please go in there, dive in and start experimenting. We've seen all kinds of users in there, but I do hear from a lot of educators, from the university down to the elementary and middle school level, actually using the tool, which is really exciting. But the thing I think most people need to know is that our secret sauce is we make the chip. And that's where people get confused, because they say, Well, are you like an NVIDIA? Or are you like an OpenAI? We're more on the full stack side, from the hardware through to the software, and then we depend on the open source communities for the actual AI models.

Ben Kornell  06:01
Yeah, so let's talk a little bit about the open source models. How are you seeing the battle between closed source and open source models playing out? And what are the implications for those of us who are building on top of large language models?

Mark Heaps  06:15
Yeah, so I think, you know, historically, if you know any of the history of Silicon Valley and the tech industry, especially software in general, open source has kind of always eaten closed source for breakfast, right? You can look at that in the Linux communities, the Red Hat communities, the cybersecurity communities, etc. Open source has always been a major thing. So I think the same is happening right now, where we obviously saw this rapid and meteoric rise of OpenAI and what they did with ChatGPT and more. And then suddenly you saw lots of little open source models being developed on GitHub repos and similar, but it took a large institution like Meta, when they rolled out the first version of Llama, which we got up and running very fast, for people to suddenly go, Oh my gosh, this is a commercially viable and competitive option. And since then, we've just seen a constant stream of very large, very high quality models being deployed as open source. And now people have tested those enough and refined those enough that we're talking to customers who say, Hey, this is good enough for me to run a Fortune 50 or Fortune 100 enterprise business on, we can build on top of this. So a couple of key points I think people should consider. One is iteration. The open source community is moving at rocket speed. We're seeing people take models like Llama or Mixtral, and then they're tuning and training them for their needs and putting them back out. Even when Llama got put out, there was an AI technologist online who within like a week said, Hey, I've retrained this with a larger context length, more than 4x the context length, so people can use it for these applications. That iteration speed doesn't really happen in a commercial closed source environment, right? They're thinking about product generational life cycles, how do we maximize return on the users and what we can profit from. The open source community is pushing it at an unbelievable pace. Our CEO says all the time that the speed of iteration is the speed of innovation, and I think that's why we've seen things move so fast. If there's a downside to it, it is, in fact, the rate of change. If you're an institution that maybe moves a little bit slower, and you're saying, hey, we want to establish a foundational part of our systems to run the business on, and suddenly you find out two months later that the model you built everything around has changed, there's a better version of it, or even a completely different model out there, what does that mean for you? I think people in those sorts of roles are now learning very quickly that they need to build with a fail-fast, agile methodology to be able to update their systems. This is why you're seeing the developer community use APIs where they call a model, like they do on Groq. We've changed models numerous times based on what the developer community is telling us, but it's literally a few minutes' process to change that over to a different model. So that's the one risky side: how are people going to keep up if they're building concrete systems? And, you know, I tell people all the time, don't build with concrete, build with sand.
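For developers who want to see what that swap looks like in practice, here is a minimal sketch using Groq's Python client; the model names and client details below are assumptions for illustration, so check groq.com/docs for the current list.

```python
# pip install groq
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def ask(prompt: str, model: str) -> str:
    """Send a single chat request to whichever model name is passed in."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = "Outline a 9th-grade lesson on photosynthesis."

# Swapping to a newer or different open source model is just a string change;
# none of the surrounding application code has to move.
print(ask(question, model="llama3-70b-8192"))    # assumed model id
print(ask(question, model="mixtral-8x7b-32768")) # assumed model id
```

Keeping the model name in a config value rather than hard-coding it throughout an application is what makes that "build with sand" approach practical.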

Alexander Sarlin  09:23
That's the modular approach: if people are building on top of an existing model, they shouldn't lock into any one model, because the models keep changing and evolving and keep overtaking one another in certain ways. So you mentioned that the secret sauce is the chip, the silicon, and you're using these open source models like Mixtral and Llama. You know, as a Groq user, I think one thing I would say the special sauce is, as somebody sitting down using it as an end user, is the speed. That's usually a real wow factor. I want to ask you two related questions. One is to talk to us about the speed of this fast inference, because it's really interesting to experience it, and I recommend anybody listening to this try it, because it is pretty amazing. The second is, there's a mode in Groq where you can actually ask it to give you answers in educator mode, where it specifically talks to you like a mentor or an educator. Fascinating features, right? Can you talk to us about both of these things?

Mark Heaps  10:21
Yeah, so I guess let's start with the speed side of it first. You know, the LPU, which is a category of processor that we created, people are learning about us now, but this has been an eight-year effort to get to this place with the technology. It was designed with inference in mind; it's meant to be fast. Whereas other processors, like a GPU, are really excellent at machine learning training, and currently are the industry standard for inference as well, they're designed to be able to do lots of different things at the same time, which means they're very, very effective at that, but they're not built to be optimal for inference. We built the chip in a way, using things like SRAM versus HBM and a number of other design-specific features, that allows us to go really fast. You can look at benchmarks on websites like Artificial Analysis; we're very often anywhere from 6 to 18x faster than sort of an incumbent legacy technology out there, like a GPU. And again, there's a place for them, and we're really a complement to that. But when you experience it, and that's one of our beliefs within the brand organization, everything should be "make it real." I don't want to show people charts, I don't want to show them that stuff. Go play with it. That's one of the reasons we made it free and publicly available. Now, the question we always get from people that have used it enough is, well, it's so fast, why would I care? Like, I can't read that fast, right? And I'm using it as an educator to refine things. I'm building tests, I'm doing all kinds of curriculum work. This is way faster than I can compute myself. So why do I need it that fast? What we're seeing as people move forward in the future is you don't have just an LLM. What you're going to do is a series and sequence of calls to different LLMs, which is a multi-agent modality. And so, I'm a Star Wars geek, so I would say think of this as Star Wars, right? C-3PO was a language droid, he knew all these languages in the galaxy, but R2-D2 was great at hacking systems and getting into the Death Star and flying, you know, starfighters, X-wings, etc. That's really the world we're moving into with AI. So imagine you have an LLM that you've given educator mode system prompts to in the back end, and it's looking at any request you put in and saying, how do I rewrite this to be optimal for the next model I'm passing it to? And then that model might be a writer or editor agent, and then you pass to that, and so on and so forth. So in the future, you wouldn't want agent-to-agent handoffs to run at the speed a human reads, as that would create a huge chain of bottlenecks. Whereas if that can happen at this lightning-fast speed that we provide, then you can go through lots of agents, and by the time it gets to you, it's still sub-one-second, real-time results. So that's kind of why, you know, we have the secret sauce.
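As a rough sketch of that multi-agent idea (this is illustrative only, not Groq's implementation; the agent roles, prompts, and model name are assumptions), one call rewrites the request and a second call acts as the writer agent:

```python
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])
MODEL = "llama3-70b-8192"  # assumed model id; any hosted chat model would do

def call(system_prompt: str, user_text: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    )
    return resp.choices[0].message.content

raw_request = "help my students get fractions"

# Agent 1: rewrites the raw request into a precise prompt for the next agent.
rewritten = call(
    "Rewrite the user's request as a clear, detailed prompt for a lesson-writing assistant.",
    raw_request,
)

# Agent 2: the writer/editor agent that actually produces the content.
lesson = call("You are a patient educator. Write a short lesson plan.", rewritten)
print(lesson)
```

Every hop in a chain like this is a full inference call, so per-call latency compounds quickly; that is the bottleneck Mark is describing.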

Ben Kornell  13:00
And then also, on top of it, just to jump in on that, part two: it's also 10 times cheaper than a call to the other LLMs. So how do you do that? Why is it so cheap? I mean, there are actually a few that I saw on your pricing list that are almost 100 times cheaper. Why is that?

Mark Heaps  13:21
Yeah, I think, you know, the really simple way to understand this, and I'm glad you brought this up, actually, because a lot of folks say, Oh, you guys are just trying to do a land grab, you're dropping your price, you're just trying to undercut everybody, and then you're gonna raise everything back up. And certainly businesses have done that in the past. The reason we can actually be cheap, the reason our investors are happy with the affordable pricing, is because we're not brokers of compute. If you think about every other provider, OpenAI doesn't make their chips, right? They don't make their infrastructure, they don't make their hardware. There's a reason that you hear Sam Altman out in the world right now saying, we're going to build our own chips. We're seeing the same trend across the board: everybody wants to make their own AI chips, we just happen to have done it before them. So when you own everything from the silicon up, you're removing all of these middlemen that cause you to inflate the price on your business. We don't have to do that, because all of our chips are actually made in North America, unlike a lot of the incumbents. The chip is actually made in Malta, New York right now by GlobalFoundries, it gets packaged in Canada, and then we assemble everything in Silicon Valley. So we have no supply chain challenges or increased costs there. We're also using a ten-year-old fabrication technology for the silicon. Whereas you might compare us to a GPU that's been designed in the last year or two, they're on 4-nanometer silicon, and that's a very expensive process. We're still on 14-nanometer silicon and outperforming everyone, and that's a much more affordable process for manufacturing a chip, and it stabilizes our supply chain. And if you look at other inference providers, they're even further removed than, say, an OpenAI. They're actually brokering in from the cloud providers, who also have to make their margin on top of everything. So for us, we can be very, very competitive because we own the entire stack.

Ben Kornell  15:07
And not to belabor the point, I want to talk about educator mode too, but I was talking to Tim on your team about one thing he described: a GPU is built for the complexity of simultaneous calculations. There's actually an electricity efficiency here, where, because the LPU is a simpler and more straightforward chip, you can coordinate the utilization of the bandwidth so that you've got essentially full efficiency of the processing power, rather than a GPU, where a lot of the load is taken up by the transmission and the complexity of the query. One, is that an accurate characterization? And then two, just take us down to the LPU design: is that kind of what the breakthrough is here?

Mark Heaps  16:04
Yeah. So, you know, GPUs were originally built to support video games, and then 3D rendering and video rendering, etc. So if you think about rendering a picture, you don't care if you're rendering pixel number 17 or pixel number 570 at the same time; there's no order to rendering a graphic, right, because the end result is all you care about. And in fact, that was a breakthrough. If you go back to the 80s, we used to see graphics loading line by line; I'm old enough to remember that stage. But when you got to GPUs, that wasn't the case. You had interlacing, and things were sort of loading and rendering in all these random patterns. But you don't want that in language. So when you get into AI, you have these transformer-based models that are using linear algebra, and a lot of AI is based on linear algebra. When you think about something linear, you want it to be sequential. You don't want to generate the 100th word of a piece of content before the 99th word; AI should be evaluating the context sequence of all those words so that it makes logical, usable phrases, right? Now, if you have a chip that's multi-core, with the silicon all spread out, and you're processing a randomization of words, then you need to introduce a piece of software that evaluates what it generated and actually puts it in the right order. So you end up with all this complexity of schedulers and more, which just slows things down and requires that entire chip to be charged with electricity the entire time, whether you're using an area of the surface die or not. That's not the case for us. If you were to look at a picture of our chip side by side with a GPU, it's a little like looking at an aerial view of Manhattan, which is this beautiful grid, versus something on a GPU that looks a little bit like New Delhi in India, where you're talking about hundreds of years of roads being developed and kind of chaotically intertwined. Now imagine managing traffic between those two different geographies; it's obviously much more complex in Delhi. And that's really what a GPU does. Now, the really radical part for us is that when Jonathan and the team conceived the idea, they wanted the chip to be deterministic. Basically, what that means is you want to know exactly how something is going to perform before you even run it, at compile time. They actually started with the compiler first, and went through three designs before they figured it out, and then they designed the silicon, the architecture, around the compiler. So what that means for us, compared to traditional compute, is that when data flows over our chip, there are literally no schedulers. There are no traffic signals in our city; everything flows beautifully at the exact same speed, and you know how fast that data is going to move before it's even taken off from the line. So imagine watching an F1 race and saying, before the car leaves, I can tell you exactly when it's going to come back to the finish line, every single time, down to the nanosecond. When you have that level of organization, you can then optimize power usage. A lot of people like to look at power usage as, well, what can this chip be charged with? But it's really about how much fuel it uses while it's running.
And so the LPU is advantaged in that way. In traditional chip design, data flows off of the main surface of the chip for all these extra functions; it's kind of like pulling into the pit lane, slowing down, doing some extra work, and then getting back on the track. That's not the case for us. We just rip all the way through, and it's almost like the crew moves with you on the track to help you execute things. This affords us a ton of efficiency once you get to scale. And that's probably the one area that most people like to criticize us on. They say, Well, if I'm going to use LPUs, it sounds like I have to use a very big system, I can't put this in my desktop tower. And that's absolutely true. People start with us at enterprise scale. And that was one of the things that kind of worked originally against our ethos of "we want to be able to provide AI for everyone." One of Jonathan's visions for Groq was, I don't want to see a world of the haves and have-nots; that poorly funded researcher at a college should have access to the same power of tools that the big boys do. So how do we guarantee that? Well, you put it in the cloud, and you say, I'm not going to charge you for a traditional provisioning of cloud compute, I'm literally going to let you pay for each token you use, and let us take the burden of how to make it more efficient, and actually make our profits as a business while keeping it extremely affordable for you. And if anyone's seen our material, the mission statement for the company is to drive the cost of compute to zero. Now, we'll probably never accomplish that goal, it's an infinite goal, but it reminds every employee at Groq, we call them Groqsters, to try to continue to improve this so that we can stay effective and efficient.

Ben Kornell  20:43
I so appreciate, one, you diving down into the technical, and then, two, giving us the analogies to make sense of it. Alex, did you want to probe more?

Alexander Sarlin  20:52
I just wanted to know about the educator mode?

Mark Heaps  20:55
Oh, yeah. So the educator role was just a fun thing in GroqChat, you know, on our homepage. It's not a competitive product to ChatGPT or things like that; we built it as part of the brand ethos, which was "make it real." We just wanted people to experience it, which it sounds like you guys have. When Jonathan and I were really diving into that interface, the first gen of it, we said, you know, I don't think most people understand prompt engineering. You can write a query in there as if you're in Google, like, you know, write me a curriculum about this, and it'll generate some copy for you. But really, how you say things in the prompt and how you can add extra components to your prompt can give you these different outputs. Most people don't realize you can write personas into your prompts, and what you see with applications is they build those sorts of things into the system prompt, the part you don't see, and it filters the result. So we sat down one day and said, Hey, what would it look like to write some prompt extensions, so that when someone writes a prompt and gets a result, they can then say, hey, reevaluate that with this prompt extension? One of those was "as an educator"; I think another one is "professional." We did a lot of really crazy ones the first time, things like make it funny, make it snarky, and that's actually pretty easy to do. I have one that I'm building right now with my son, who runs a D&D club at his school. He's writing all these descriptions of what it means to be a dungeon master, and the things you should lean into, like, don't be nice to your players, make it really tough. So now he's got a model where anyone playing online with him can ask the query, and it acts like the dungeon master. So the educator mode was that, and this goes back to the point of me being a former educator. I put that one in there because I was like, you know, I've written hundreds of sheets of curricula and courseware; this would just make it so much faster for me. And then you think about executives that need to do summarizing. Very often I get these long, long emails and I don't really have time to read them, so I can paste that into the model there in GroqChat and say, Please summarize this. And then there's one of the pull-downs that says, Give me three bullets of it, give me the takeaways. That's the goal. That's how these tools are going to change everyone's lives: it really isn't about replacing humans as much as it's human-plus, augmenting their abilities. How do you make them more effective more quickly? We've got a really cool tool that we just started showing everybody where you can put in a YouTube video. You just put the URL in and start asking your questions, and immediately it pops out answers. What it's doing in the background is transcribing the entire video, putting that into an LLM, and then you can just query the content of that video. Imagine how many of us have watched videos to learn something on YouTube and then said, Okay, what did he say in the first part? I've got to go back and write down that thing from the TED Talk. Now you just query it. So it's really handy.
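To make the persona idea concrete, here is a small sketch of how prompt extensions like "educator" or "professional" could sit in the system prompt; the persona wording and model name are invented for the example and are not Groq's actual system prompts.

```python
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

# Hypothetical prompt extensions: the persona lives in the system prompt,
# the part the end user never sees.
PERSONAS = {
    "educator": "Respond like a patient mentor: explain step by step and end with a practice exercise.",
    "professional": "Respond in a concise, formal business tone with clear action items.",
    "snarky": "Respond with playful sarcasm, but keep the facts accurate.",
}

def ask(prompt: str, persona: str = "educator") -> str:
    resp = client.chat.completions.create(
        model="llama3-70b-8192",  # assumed model id
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": prompt},
        ],
    )
    return resp.choices[0].message.content

# Same query, different filter on the result.
print(ask("Summarize this email in three bullets: ...", persona="professional"))
print(ask("Explain long division.", persona="educator"))
```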

Alexander Sarlin  23:42
And you can't even tell that you can do that from landing there. I'm going to try that. Very hidden.

Mark Heaps  23:48
Yeah, I'll shoot you guys a link later so you can play around with that tool. We're just developing it now.
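The YouTube tool itself isn't public yet, but the pipeline Mark describes, transcribe the video and then query the transcript with an LLM, can be sketched roughly like this; the caption library and model name are assumptions, and a production version would handle long transcripts by chunking them.

```python
# pip install youtube-transcript-api groq
import os
from youtube_transcript_api import YouTubeTranscriptApi
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def ask_video(video_id: str, question: str) -> str:
    # 1. Fetch the video's captions (a stand-in for transcribing the audio).
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    transcript = " ".join(seg["text"] for seg in segments)

    # 2. Put the transcript into the model's context and ask the question.
    resp = client.chat.completions.create(
        model="llama3-70b-8192",  # assumed model id
        messages=[
            {"role": "system", "content": "Answer using only the transcript provided."},
            {"role": "user", "content": f"Transcript:\n{transcript}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(ask_video("VIDEO_ID_HERE", "What did the speaker say in the first part?"))
```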

Ben Kornell  23:54
Yeah. So, you know, I think the last question is really just about relevance to the edtech sector here. For those who are building or developing in edtech, and for those that are running universities, how should they think about Groq in relation to their overall AI strategy? And how are you seeing people in our space using your tools?

Mark Heaps  24:17
Yeah, that's a really good question. I mean, as far as how people should think about Groq, there are going to be a number of providers out there that are going to be the back-end infrastructure for the tools and systems they build. I think what many people should think about is, as we stack a lot of these systems together, what's going to cause this to become unusable? Groq is really providing the performance and speed that folks need to be able to have that larger end-to-end infrastructure, so they should consider that right out of the gate. And of course, we're going to have a new generation of chips coming out next year that's 10x faster than the one we have now, so it's going to get even crazier. As for the way I'm seeing people use it, and this one isn't actually running on Groq right now, but I just discovered this: I live in Austin, Texas, and the entire AISD, the Austin Independent School District, just actually moved to having all of their STAAR testing and end-of-year exams graded by AI. To me, that's huge. I think I saw in the news article that they had four or five thousand human operators doing test evaluation, which of course has a high degree of variance, inconsistency, and subjectivity, whereas now they are moving to an AI model to run all of their grading systems. And I met a lot of parents here locally, who know me through my own kids, who reached out and said, Hey, I'm not comfortable with this, I don't want a robot grading my kid's work. And I said, you absolutely want a robot grading your kid's work, because it will be consistent every time in how it's evaluating it, as long as the school district has a program where you can challenge the grade and then bring in a human operator to evaluate, you know, how good a job did the bot do. That's literally what we did with book scanning at Google: the bots decided what the pages were, and then the humans evaluated the quality of the bot. If the humans were more right than the bot, then we used that to train the bot to be better. So I think there's a lot of round-tripping that's going to happen for people that are building these systems. But, you know, the breadth of user level is wild. You can see that a single educator can use this to write a curriculum, or can use it to create projects or tests. And then at the global maxima, you're seeing people use LLMs for things like observing network traffic, summarizing emails, and building templates that go out from a department chair or a dean of a school, right? There are massive programs that can be built with something that's relatively easy to implement in your workflow. We had one business, a private school network, like a Montessori network, and they came to us and said, Hey, could this thing write emails that sound more human and are a little bit more unique to all of our students and parents? I said, Absolutely, and I showed them how to do that really quick. They went, Oh my God, we could build 500 different emails in a matter of seconds, and it could be implemented into our CRM, that's our HubSpot, that goes out to everyone. I said, Absolutely. So, you know, I think we're really only beginning to scratch the surface.
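For a sense of what that bulk email workflow could look like, here is a hedged sketch; the student records and model name are invented for the example, and pushing drafts into a CRM such as HubSpot would go through that platform's own API.

```python
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

# Invented example records; a real system would pull these from the school's SIS or CRM.
students = [
    {"name": "Ava", "parent": "Ms. Rivera", "highlight": "led the class garden project"},
    {"name": "Theo", "parent": "Mr. Nilsen", "highlight": "improved his reading fluency"},
]

def draft_email(student: dict) -> str:
    resp = client.chat.completions.create(
        model="llama3-70b-8192",  # assumed model id
        messages=[
            {"role": "system",
             "content": "Write a warm, two-paragraph progress email from a teacher to a parent."},
            {"role": "user",
             "content": f"Parent: {student['parent']}. Student: {student['name']}. "
                        f"Highlight: {student['highlight']}."},
        ],
    )
    return resp.choices[0].message.content

# At fast inference speeds, hundreds of drafts like these take seconds to generate;
# a teacher or administrator would still review them before they go out.
for s in students:
    print(draft_email(s), "\n---")
```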

Ben Kornell  27:17
Was that Higher Ground Education? I got one of their emails today, and I don't know if that was AI-generated or not. You know, like you said, I think there's an iterative element of this, where we're going to constantly be tuning and evaluating. And I know there's a lot of fear around bias, and, you know, the LLMs are trained on a vast record of human instances where there has been bias. There was a study recently around hiring bias in AI versus human screeners, and it basically showed the exact same levels of bias as humans, unsurprisingly, because it was trained on historical data. So I think we will all have to be vigilant on that front. But I do think this idea of compute going to zero is really a critical unlock for education use cases; we do not have the same kind of resources to spend on AI compute as many other large enterprises, let's say legal, for example, do. And the other thing is, speed is so important when you're working with kids. You know, my son has an AI reader that uses speech recognition to deliver feedback on his reading, and the pace of it can be slow, because it's making complex calls back and forth and has visual and audio components to it. So I'm really excited for the future that you're bringing. And just for our listeners, that's Groq with a Q. If people want to find out more, it's groq.com. Is that right?

Mark Heaps  28:56
That is correct. Yeah, they can go there. If they want to play with GroqChat, they can log in with their regular Gmail through the browser and go ahead and start playing with it. And if they want to start getting into early app development, or if you have any developers, they can go to GroqCloud, which has a link on the page. That will take you to a developer console where you can generate an API key, start building your own applications, export code, and build whatever you want from there. It's literally three clicks, so it's all there and readily available. And then for those that really want to learn more about the tech, not specifically Groq, but just in general, if they go to groq.com/docs, there are a number of thought leader pages, spec pages, and more, where they can learn more about what we think about the industry and how people are building for tomorrow.

Ben Kornell  29:38
Fascinating conversation. Mark Heaps, Chief Technology Evangelist at Groq, we're so glad to have you evangelizing here, and we're really excited for you all to fuel innovation in education. So excited for the road ahead. Thank you.

Mark Heaps  29:57
Awesome. Thanks so much. I hope everyone has fun trying it out and we'll talk again soon.