Edtech Insiders

Week in Edtech 7/4/2024: Nvidia's Stock Buybacks, Instructure Acquires Scribbles, Apple Joins OpenAI Board, LAUSD and AllHere Whistleblower Reports, SCOTUS Ruling Impacts US DOE and More!

July 11, 2024 Alex Sarlin and Ben Kornell


Join Alex Sarlin and Ben Kornell as they explore the most critical developments in the world of education technology this week:

📈 Nvidia's expected stock buybacks and rumors of a French antitrust suit
🔍 Instructure's acquisition of Scribbles and its impact
🤖 Apple joins OpenAI board
📰 TIME and OpenAI content collaboration
📊 LAUSD and AllHere whistleblower reports
🏛️ SCOTUS ruling impacts US Dept of Education
📋 Third-party servicer (TPS) and regulatory updates from Phil Hill

Stay updated with the latest Edtech news and innovations. Subscribe to Edtech Insiders podcast, newsletter and follow us on LinkedIn!

This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for Education Entrepreneurs.  Founded by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work just as hard as you do.


Ben Kornell:

Hello, everybody. Happy Fourth of July. I hope you all had a great weekend. Hopefully it was filled with cookouts, friends and family, and fireworks. We're so excited to bring you our midsummer edition of the Week in Edtech. I'm Ben Kornell, alongside my co-host and co-founder Alex Sarlin. So great to connect. Lots of stuff going on as we round the corner of one school year into the next. But before we dive in, what's going on on the pod?

Alexander Sarlin:

So we have all these great interviews coming up. We talked to Rebecka Peterson, last year's National Teacher of the Year, who had all sorts of insights about what teaching looks like in this modern era. Next week we have Ethan and Lilach Mollick; Ethan Mollick has been a huge thought leader in AI and education. We talked to Sarah DeWitt, who's the head of PBS Kids, that's amazing, and Scott Kirkpatrick, the CEO of BrainPOP. So even in the depths of summer, keep your ears open and you'll hear some of the biggest names in edtech talking about how they're dealing with this vibrant and exciting, but also a little bit scary, era in the field. And I know we're in the dead of summer, but what can we look forward to?

Ben Kornell:

For the summer, there's a bunch of conferences. We just had ISTE two weeks ago, and there are AI conferences in Singapore, in Portugal, and in Australia coming up. So lots of really exciting things; we'll be bringing you some highlights from those conferences. In terms of Edtech Insiders, our next happy hour that we have on the books is in September for back to school, but I know there's a bunch of get-togethers; the higher ed group folks are having a gathering in San Francisco in August. So many, many people traveling across many parts of the world, connecting with that kind of global mindset. But I think a lot of the news recently has brought us back to focus mainly on the US, with lots and lots of drama happening both in the AI world and in the education world. Alex, where would you want to start?

Alexander Sarlin:

Well, this was an interesting week in AI. We usually start with AI and really go deep, but I feel like we could cover the AI headlines in just a couple of minutes and get to the juicy education stuff. Because it is exciting, and it is sort of this combination of really local and somewhat global, because these US-based companies are everywhere now, and they're really leading the AI revolution globally. I thought it was interesting that we saw two opposing headlines about Nvidia this week. One is that it is just doing so well; it continues to, you know, jockey for the title of most valuable company in the world. There was an article about how it basically has all this money and is going to have to buy back stock, and they talk about stock splits. We're not a financial advice podcast, but Nvidia stock has just been this crazy story over the last year. At the same time, we're getting the first rumor that Nvidia is in the sights of an antitrust suit, and this first one is actually coming from France, speaking of the global world. Nvidia makes the chips that are running almost the entire world's AI models; there are other players jumping in, but Nvidia has such a big head start that they're everywhere. So I thought that was interesting to see: this company just continues to explode, but you're also finally getting hints of at least one regulatory action that might begin to rein them in. What do you think of that? Or what other AI headlines are grabbing your attention this week?

Ben Kornell:

Yeah, I think Nvidia continues to be the ascendant chip developer, and if you think about things as a full stack, who's sitting at the bottom of the stack? It really is, you know, 95% Nvidia, and anyone else who's attempting the challenge is four or five years out. Whereas on the API level, which is really an enabling software and infrastructure layer, there's a sense that no one ever gets a lead of more than three to six months. And so we've seen a ton of new products and features, including Anthropic releasing some new Claude models, but also collaboration with their kind of GPT rival. It is really interesting to think about single user moving to multi-user, moving ultimately to everyone, and whether there are ways in which people within companies or across companies will be collaborating on AI platforms, with the AI performing almost as your peer rather than the expert, and also helping to organize all their work. But I think the shocker here in the Valley has been Apple getting a board seat at OpenAI. What folks here are talking about is that OpenAI is really moving from an upstart startup, competing on the value of ChatGPT's incredible, miraculous rise and their great revenues, to being almost an embedded player with all of the big tech players. We all know about the Microsoft relationship, but to have Microsoft and Apple as your core partners means that they don't have to win just on the value of the specs of their AI. And the truth is, there's a fast-follower dynamic here that makes any of those leads very costly and not that valuable in terms of ROI. Folks here are viewing it as a really shrewd move to get into bed with some of these other big tech players and establish OpenAI as an enduring, if not the enduring, AI player. So very, very interesting.
It does raise questions about what kind of backroom coercion, or collaboration, might be going on, and whether that will initiate regulatory review or oversight. But as it looks right now, they've done a masterful job of positioning themselves with strategic distributors, tech companies, and partners.

Alexander Sarlin:

And I would say, from an edtech perspective, these are basically two similar, but slightly different, distribution strategies. We've talked on the podcast about how Google is the big tech player here who is not part of any of this, as is Amazon, which has in many ways sided with Anthropic. Google has this incredible software distribution strategy, as well as hardware through Chromebooks, but mostly software. We saw this week Gemini being put into Gmail; we saw a headline about Gemini being connected to teens through their school accounts, because every teen has a Google school account, and that's a place where Gemini can go. So Gemini is getting in front of students mostly through Google's software and its ubiquitous SaaS services. And now OpenAI is working with Apple and Microsoft, two of the biggest hardware makers in the world, certainly through cell phones. I think what we can probably expect, from an edtech perspective, is that students will be exposed to both of these technologies: Gemini will reach them through their Google accounts and their Chromebooks, and ChatGPT will reach them through their phones and devices, through their app stores. So it's probably a good bet at this point, when we talk about OpenAI sort of versus Google, to say it's unlikely that students, through their tech stacks, are going to be in only one camp; I think both of them are going to be part of their lives, and of course of teachers' lives as well. I also thought it was interesting that OpenAI continues to sign these content deals. They announced this week a strategic content partnership with Time magazine, which is really interesting; Time is now owned by Marc Benioff of Salesforce, which is yet another tech player in here.
These deals don't make big headlines, but OpenAI continues to scoop up content partnerships with all of these storied media institutions, which, as an educator, you can imagine means these models are going to be trained on decades and decades of really solid reporting and "current events" going back 80 to 100 years. That's really interesting, and I think it could begin to play out in some of the edtech tools. If you can query ChatGPT and it can say, oh, here's an article about Franklin Roosevelt from Time magazine when he was Man of the Year in 1942, that's pretty interesting, and that's not something you can get through regular web search. So pretty cool. What did you make of any of that?

Ben Kornell:

You know, I think the focal point of what you're saying is essentially the consolidation of power here in big tech. This is what big tech has been doing for 20 years, but we really saw an opportunity for the democratization of AI, and this wing of the industry is going in the consolidation direction. Meanwhile, Meta is going the opposite direction, hoping that there's an open source revolution. But there's also been a lot of criticism in the media: when they say open source, is it really open source, or is it open weights? What people are finding is there are limits on some of these open source models; true open source would include all of the training data as well, and what we're really getting is open weights. So I think the battle is still to be played out. But I'd say maybe we're in the second inning now, and in the second inning of the battle we're seeing a consolidation of power around two major players, Google and OpenAI. If you want to say Microsoft's in there, it's in relation to OpenAI; if you want to say Apple's in there, it's in relation to OpenAI. The big outside player who's betting on an alternative future is Meta. And then Amazon is in a spot where you're not sure where they're going to play long term; they don't really have a commercially available consumer path, but man, is AWS a juggernaut in terms of cloud, and they've got AI built into it. And from a consumer standpoint, the AI underneath their sales motion is really powerful.
I will say that this excitement, or fear, of AI being in the lives of kids has also come to the forefront because of what's going on in the Los Angeles school district, which we talked about last pod. There have been some new updates that have come to light, which have to do with a whistleblower complaint around some of the data practices used by AllHere, the vendor for LA Unified. LA Unified launched this chatbot, called Ed, which was really intended to be your kind of personal assistant to help make sure you're having success at school. AllHere's kind of V1 value proposition was around chronic absenteeism and getting kids to school, so V2, creating a personalized chatbot, may have been a bridge too far for the startup. But actually, the dialogue around this is raising fundamental core issues that affect all of edtech, which is: how do we apply data privacy policies and laws in an AI environment? And I think we need to disentangle a little bit of what's coming up in the credible articles from The 74. One is overseas servers and student-identifiable data. That should be a very solvable issue. When you have a cloud provider, it's very easy to restrict where data flows, and it's a very common practice for school districts or states or any policymakers to say we don't want the data going to foreign shores. One reason is the possibility of it being hacked, but two is also, if there's a lawsuit, where's the jurisdiction for the lawsuit? Part of GDPR is fundamentally that data can't go outside of Europe, so everybody has built that infrastructure already, which is why, if it's true that student data was going overseas, that would be an issue. That's an unforced error, potentially.
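The data-residency restriction Ben describes can be sketched in a few lines: before any student record leaves a district's systems, check that the destination endpoint runs in an allowed cloud region. This is a minimal illustrative sketch, not any district's real policy; the region names, endpoint hostnames, and the endpoint-to-region map are all hypothetical.

```python
# Hypothetical district policy: data may only flow to US-based regions.
ALLOWED_REGIONS = {"us-east-1", "us-west-2"}

# Hypothetical map of vendor endpoints to the cloud regions hosting them.
ENDPOINT_REGIONS = {
    "analytics.example-vendor.com": "us-east-1",
    "ml.example-vendor.eu": "eu-central-1",
}

def residency_check(endpoint: str) -> bool:
    """Allow egress only to endpoints whose region is explicitly permitted.

    Unknown endpoints are blocked by default (deny-by-default posture).
    """
    region = ENDPOINT_REGIONS.get(endpoint)
    return region in ALLOWED_REGIONS

assert residency_check("analytics.example-vendor.com") is True
assert residency_check("ml.example-vendor.eu") is False   # overseas: blocked
assert residency_check("unknown.example.com") is False    # unknown: blocked
```

In practice, cloud providers expose this as configuration (for example, region pinning on storage buckets) rather than application code, but the deny-by-default logic is the same.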
The other issue, though, around pinging an LLM for personalized information about a student, is at the heart of what we've got to figure out. What does it mean to query or send a request to an AI bot? What data should or should not be allowed, and how is that data used or exposed? One very conservative strategy would be: don't send the data at all, or strip out any personally identifiable data, or even pre-anonymize data before you send it, so that you can still get personalized recommendations, and then have it de-anonymized on the way back, or something like that. Practically speaking, this creates real friction in the education space, and it's not entirely clear what these providers do with the data when they get a query. The second layer here is: do they use student data to train models? That, I think, is not allowed under current legislation. Now, you can anonymize the data and then train the models, and so is there a way to essentially instruct the AI partner not to train their models? There is for most of these models, but as you have blends of open source, as you have multiple agents doing multiple things, what is the protocol? A company that I'm a supporter of, called Dewey, thinks really extensively about data privacy: who gets to see what data, how is it identifiable, and how are you making meaning back to the educator or the student or the family so that it does feel personalized. And we've heard from some people in the space that you actually don't need that much personalized data to make it feel personal. So I think this is a really important gray area that we're going to see coming up time and again. What was your thought in reading the whistleblower complaint and the back and forth? I know The 74 has reached out to us; it's a dynamic situation.
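The pre-anonymization idea Ben outlines (scrub identifying fields before a query leaves the district, then map them back locally) can be sketched roughly as follows. This is a toy illustration of the pattern only: the field names, token format, and helper functions are hypothetical, and a real system would also have to handle quasi-identifiers that this simple swap does not catch.

```python
import uuid

def pseudonymize(record: dict, pii_fields: set) -> tuple[dict, dict]:
    """Swap PII values for opaque tokens; return the scrubbed record
    and the token-to-value mapping needed to reverse it locally."""
    mapping = {}
    scrubbed = {}
    for key, value in record.items():
        if key in pii_fields:
            token = f"TOKEN_{uuid.uuid4().hex[:8]}"
            mapping[token] = value
            scrubbed[key] = token
        else:
            scrubbed[key] = value
    return scrubbed, mapping

def reidentify(text: str, mapping: dict) -> str:
    """Restore original values in a model response before showing it
    to the student or family; the mapping never leaves the district."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

# Hypothetical student record; only "name" is treated as direct PII here.
student = {"name": "Jane Doe", "grade": 9, "absences": 12}
scrubbed, mapping = pseudonymize(student, pii_fields={"name"})

# `scrubbed` is what would be sent to an external model: the prompt
# carries an opaque token instead of the student's name.
prompt = f"Suggest an attendance plan for {scrubbed['name']}, grade {scrubbed['grade']}."

# If the model echoes the token back, it is re-identified only locally.
reply = f"Here is a plan for {scrubbed['name']}: arrive by 8am."
print(reidentify(reply, mapping))
```

The design point is that the external provider only ever sees tokens, while the token-to-name mapping stays inside the district's boundary, so the response can still feel personalized to the family.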

Alexander Sarlin:

I think that was a terrific overview and breakdown of some of the fundamental data issues at play here. A couple of thoughts. First off, the idea of a whistleblower in edtech just sort of blows my mind. Whistleblowers you associate with the tobacco industry, the oil industry, and obviously big tech; there are whistleblowers at Meta. But whistleblower, just as a concept, is really about an industry that's trying to hide some major things. And, you know, I know the AllHere people, and I think whistleblower is not the right term for what's going on here. The media narrative on this, as if AllHere, or LAUSD for that matter, are these sinister organizations trying to hide things and move student data without letting you know, is just such a false narrative. The story here seems so much simpler than that to me, which is that LAUSD, and especially Alberto Carvalho, wanted to be real first movers in AI. It's this huge movement, and they said, let's go big, let's do something that tries to be district-wide, that actually personalizes, to your point, that actually has data about each student so it can feed content to each student based on how they're doing in school, and it can feed their school bus schedule and their academic schedule. But then the issue is that the vendor they ended up contracting with was a relatively small company that, it's probably safe to assume at this point, may not have been ready in one way or another to handle this massive a rollout and this massive a product. And there's probably been all sorts of back and forth; there may have been payment issues. I don't want to speculate too much.
But this framing of a whistleblower, I mean, I think of some of the other edtech scandals. We try not to be a gossip rag, but you know the big stuff, like the Frank scandal, when the company was bought and they found it had been falsifying data. It makes me so sad when you get this narrative around edtech, that it is this place filled with schemers and players and you need whistleblowers, because I just think that's the furthest thing from the truth. I think the real issues are the ones you're pointing out, Ben: we are in a very fast-changing, dynamic space, and it's risky to be a first mover in AI. And sadly, I really wanted this to work. I don't know where it's at right now, but at this point it has become a sort of poster child for: okay, if you try to go big with AI before some of these policies are in place, before people really understand the privacy issues, there's a risk of it having at least a big press backlash, if not real problems in the actual implementation. So far it's not actually clear what's going to happen with the implementation, but the press has been really bad. The issues you're talking about are the real issues, but they're complex and nuanced. I talked to a friend of mine, Evan Hill Reese, who was the deputy counsel at Amplify for many years and is incredibly knowledgeable about IP, about AI, and about student data. He said something similar to what you just mentioned, which is that personally identifiable information is the third rail of data in edtech.
But there really isn't a great reason to send that kind of data around from system to system, even in the context of personalization. Personally identifiable means names, phone numbers, addresses, things you can trace back to an individual person. We're going to have to have better policies around this. This whistleblower was basically saying that PII, personally identifiable information, was being sent around to different parts of the system in conjunction with the student data without being properly anonymized or hashed or whatever people do in that world. And, you know, that is a problem. But framing it as whistleblowing, as if AllHere were trying to, you know, send data to Sweden or wherever, seems just absurd to me. So I'd like us to move out of this narrative, as if edtech players are trying to screw everybody over and fool investors and fool clients and fool districts. I just don't think that narrative has almost any ring of truth to it, and we're all trying so hard in a very difficult environment, especially with AI. Data privacy was a problem before AI; data privacy was a problem 10 years ago with inBloom. It's been a problem with interoperability; it has plagued this field forever.

Ben Kornell:

And I will just say on that point, there are other industries that have thrived with data privacy law, like healthcare, so this should not be a barrier to success for our field. But these gray areas are the ones where we need legislative clarity. And that brings me to our last story for today. I feel like we need to spend at least five minutes on the Chevron ruling, in which the Supreme Court of the United States basically limited the ability of federal agencies to set policies, and hold companies to account for those policies, without actual legislation or a ruling from the courts. The overarching takeaway is that it is a huge setback for the federal government's power to use non-legislative means to set policy rules and regulations. Of course, everyone sees, oh, it's Chevron, it's the EPA, all of this, but I don't think they realize how far-reaching it is. This will essentially undermine so much of the work of the US Education Department on things like data privacy policies and guidelines; for example, they have this Dear Colleague letter around using for-profit OPM providers in higher education. There are a bunch of ways in which, in the best of cases, they've tried to provide clarity for the education space so that people know what the rules of the road are. In the worst cases, they've stifled the sector, limiting companies' ability to do innovative things because of a more conservative approach to the education sector. All of that seems to be up in the air now. And with a good lawyer, most companies are going to be able to fight back against regulation that harms their business. But who's fighting for the consumers? Who's fighting for the kids? I think this really shines a light on basically two decades of dysfunction in Congress, where almost no policy has been passable.
I mean, basically since the Clinton era it's been really hard to pass legislation, and to get around that we've had federal agencies setting policies; that will no longer work. So it is a big deal. And in talking to other Edtech Insiders folks, some of this could come back to the states, which, as we know as edtech operators, is super hard when you've got 50 different states and 50 different regulatory regimes. And some of it could be legislated in Congress, but all of that legislation takes, you know, two years to get through; you don't know where it's going to land, and it creates a bunch of uncertainty, which basically freezes up the market. So big, big deal, big implications. What are your instant thoughts?

Alexander Sarlin:

I'll keep my comments short here, because this is a very complicated issue; we could probably spend hours talking about it. The Department of Ed already has not had that much power in the US education system. We've talked to officials from it who say, you know, we can make recommendations, but honestly this is a local-control country; the decisions are made at the state or local level. The thing they've tried to do, especially when it comes to higher ed over the last few years, is protect students from certain sorts of predatory practices in the higher ed space, specifically ways of signing students up for programs that leave them with huge student debt whether or not they finish their degree. That debt is carried by the student loan providers, so it has to be paid, and people can never get out of it; often veterans are the target of this. There have been some really problematic practices in higher ed around edtech. And one thing I'll note is that the Department of Ed has tried to protect higher education students from being subjected to predatory practices around tuition that comes through the federal government, which students then owe forever, even if they don't get their degree. So I'm a little concerned that, with this new ruling, there's a potential for those practices to start coming back into play. Anyway, that's it for us at Edtech Insiders. Thanks so much for being here, and you know, if it happens in edtech, you'll hear about it here on Edtech Insiders. Thanks for listening to this episode of Edtech Insiders. If you like the podcast, remember to rate it and share it with others in the edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.