Bot Shit Crazy
No topic is off the table, and we'll give it to you straight. This channel isn't just about robotics and automation. At Bot Shit Crazy, we cover everything from industry news to manufacturing topics, and roll in some no-limits conversations with experts in the field. Let's pull back the curtain on the manufacturing world, and let's do it with some plain, straight talk. Such a simple idea, it's almost crazy...
Episode #15 - Executive Order on AI Part 2
In this episode I cover the second section of the new executive order on AI released by the Biden Administration. This gets particularly interesting as this section covers how government agencies will utilize AI and how the executive branch will enforce and oversee that usage...or not. Spoiler alert: your data is still probably not safe.
Check one, two. Welcome back to the Bot Shit Crazy podcast. Today I am gonna continue to go over the executive order fact sheet that was released by the Biden administration, I wanna say about two weeks ago now, basically covering what their kind of sweeping orders are regarding the use of AI, testing of AI, and just trying to put, it seems like to me, some guardrails around the development of AI, how it can get utilized and deployed in different markets, and what kind of threats those could possibly pose to the American people. So today we're gonna cover more or less section two of this particular order, and we might get to section three, 'cause section two's not very long.

So section two involves protecting Americans' privacy. I'll read a quote from the opening to that. Quote, without safeguards, AI can put Americans' privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. End quote. No, I don't disagree with that at all, because everybody knows that if you join Facebook, if you join Instagram, and both of those happen to be Meta, or if you're on Twitter, anything where you're not paying for it, you are the product. You are what they are making money on. They're making money on your data; they're taking your personal data and they're selling it to other places. And that personal data doesn't just mean your home address, 'cause sometimes that can seem relatively harmless. And I don't mean to get on a tangent here, but it's just an important point to note. That data can seem relatively harmless because, you know, anyone can Google your address. If somebody wants to find where you live, they can find where you live, unless you're really, really good at keeping that secret. But the data that they're talking about is more granular than that. The data that they're selling includes not just your age and where you were born and where you live and what nationality you are and maybe how many siblings you have. Now it gets even deeper than that. It gets down to what grocery store do you shop at? What kind of food do you like? What kind of items do you usually shop for? What kind of conversations are you having that would tell somebody what kind of advertisements they should put in your face? And obviously, if they know that, then the question is, what else can they know? So I do think that this is a legitimate issue. I'm not sure that this really is gonna do anything about it. Personally, I think that the government and these companies are making too much money together to really wanna put the kibosh on it. But, hey, I appreciate the sentiment.

So, anyways, here's what they're trying to do. And they're saying that this is especially targeted towards protecting kids. The first point says that they will protect Americans' privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques, including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data. Just to summarize what comes to my mind there: what they're saying is they're going to try to develop, more quickly, privacy-preserving techniques so that somehow you can use the data to train the AI without the AI knowing what the data is, or who the data is, even.
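Just to make that idea a little more concrete, here is a minimal sketch of one flavor of what "training on the data without the model learning who the data is" can look like: differential-privacy-style training, where each person's contribution to the model update is clipped and a bit of noise is added. The fact sheet doesn't name any particular technique, so this is purely an assumed illustration; every name and number below is made up for the example.

    # A minimal sketch of differential-privacy-style training on a toy
    # linear model. Nothing here comes from the executive order or the
    # fact sheet; every value is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend each row of X is one person's record.
    X = rng.normal(size=(100, 3))
    true_w = np.array([0.5, -1.0, 2.0])
    y = X @ true_w + rng.normal(scale=0.1, size=100)

    w = np.zeros(3)
    clip_norm = 1.0    # cap how much any single record can move the model
    noise_scale = 0.5  # noise blurs each record's exact contribution
    lr = 0.1

    for _ in range(200):
        # Per-record gradient of squared error: (x_i . w - y_i) * x_i
        residuals = X @ w - y
        grads = residuals[:, None] * X

        # Clip each record's gradient so no one person dominates the update.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

        # Average, then add Gaussian noise scaled to the clip norm.
        noisy_grad = clipped.mean(axis=0) + rng.normal(
            scale=noise_scale * clip_norm / len(X), size=3)
        w -= lr * noisy_grad

    # The model still learns the overall pattern (w ends up near true_w),
    # but any single record's exact influence has been clipped and noised.
    print(np.round(w, 2))

Real systems would calibrate that noise to a formal privacy budget, and there are other approaches too, federated learning, secure enclaves, synthetic data; the point is just that the model can pick up the overall pattern without any one person's record being cleanly recoverable from it.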
What's hilarious about that, to me, is that they wanna use AI to make the data more private from the other AI. They've got, like, warring AI. And what cracks me up is, what if the AI decides to work together? And I know, I know, people are like, well, these are two different systems. I understand that, these are physically separate or whatever. But come on, the idea is just funny, that you've got two separate AI systems and all of a sudden they're like, God, these humans are stupid. I'll tell them that I'm gonna protect the privacy, but I'm just gonna tell you what it all is. And the other AI system is like, oh yeah, thanks, we're gonna make tons of money together. Ah, Skynet soon. So, best of luck on that one.

The next one says that they will strengthen privacy-preserving research and technology, such as cryptographic tools that preserve individuals' privacy, by funding, and this is an organization, a research coordination network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies. Okay. So what this sounds like to me is they're gonna establish a research coordination network, and then they're gonna give it money. And that's supposed to advance rapid breakthroughs in development for privacy-preserving research, for privacy-preserving technologies. So they're gonna fund R&D, is what they're saying, for privacy-preserving technologies. Sure. I mean, the government does this anyways. Tons of the research out there is government funded. So, you know, big whoop. Excuse me, coffee break. And that's fine. I don't necessarily see this being a huge game changer, but you look at the second part, and it says that the National Science Foundation will also work with this network to promote the adoption of those technologies, specifically by federal agencies. This kind of bothers me. If this is an executive order, why do you need any kind of science foundation to promote that adoption? If you're the president, if you're talking about adoption by federal agencies, those federal agencies work for you. I am not a hundred percent up to date on my civics and government classes from high school and college, but I'm pretty sure that if it's a federal agency, the executive branch can in fact just basically order these people to use a protocol in their administrative practices, which is what this would fall under. So I'm not sure why you need the National Science Foundation to promote those things being adopted by federal agencies. If you're the president and you're signing this executive order, I would think part of it would be that federal agencies will adopt this, you know. But I'm sure, as with all politics, they need some wiggle room, 'cause who knows.

Point number three: evaluate how agencies collect and use commercially available information, including information they procure from data brokers, Facebook, Google, et cetera, Alphabet, YouTube, and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data. It's a bit of a salad.
So again, they're going to evaluate how agencies collect and use commercially available information. They're gonna say, all right, government agencies, we're gonna take a look at how our agencies gather information that's available on the market. You know, it's not necessarily what you'd call freely available, but it's legally obtainable commercially, so they can go buy it on the market from, again, there's your alphabet soup, and they're buying that from data brokers. But the way they say it, including information they procure from data brokers, makes it seem like they are also getting some stuff on their own, like just basically public information. You control the internet, and you can have a web crawler that finds people's personal data without necessarily calling Facebook and asking for it or whatever. So then it says they're going to evaluate that, and they're going to strengthen privacy guidance for federal agencies. This is my problem with this. This is fairly toothless, in this section at least, 'cause they're saying they're gonna strengthen privacy guidance. Guidance is just that. It's guidance. It means you can ignore it. There's not any rules set down here. They're just gonna strengthen guidance and hope people adhere. This leaves way too much room for nefarious behavior, in my opinion. And they're gonna focus particularly on commercially available information containing personally identifiable data. So they're saying data that someone can use to identify you. And I'm not sold on this one. This part in particular does seem pretty toothless so far. I mean, they're talking about how they're going to promote federal agencies to use these practices, they're gonna strengthen guidance for federal agencies to use these practices. They aren't ordering anything right here, other than some people to do some research and promote something, and they're probably gonna be throwing a hell of a lot of money at it. I am not a fan of that part so far. I would rather see something where, and I understand that it could be premature to say we're gonna force them to use these, but if you're a lawyer, you can surely find a way to say something along the lines of: all reasonable and necessary measures that don't interfere with national security or other business operations will be ordered by this administration to be used by federal agencies. There. I don't understand. You know, I'm a lackey from Iowa who has a general studies degree. You can't figure that out with all the lawyers in there? You can't find a way to say that? Which tells me about their intentions. It tells me about their real intentions. 'Cause if this is what you're putting in there, it's almost like that didn't even cross anyone's mind, you know? So, not a big fan of that so far.

Sorry, we're on to point number four now. Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans' data. Okay, all they're saying here is they're gonna throw down some guidelines for people to use to self-test. They're gonna put in some self-test guidelines and procedures.
Again, whether you use them is completely up to the federal agency, which, by the way, works for the administration. So the administration, the executive branch, I would think should be able to tell them that, as part of their administrative processes, they will use these things. But because this is, I suppose, the federal government talking about the federal government in this section, they're probably not likely to get too deep into it. So I guess I shouldn't be surprised, is what I'm saying.

So, rather than necessarily going into the next section, personally, my thoughts on government agencies using things like AI, and these standards where they're talking about protection for privacy, is that the AI is out there. People are going to use it, the government's going to use it, commercial entities are going to use it. Private citizens are using it all over. I use it a little bit, well, at least I'm told I do. I removed the background from a picture the other day using Canva, and I'm told that was done with AI. There's editing software for videos and podcasts and stuff that purportedly uses AI. And if you know what AI is, then you start to see that there's a lot of flash here, because, I mean, artificial intelligence is fancy programming. It's in-depth programming. It's a lot of if-then statements, and that's fine. I'm not saying that that's not effective or can't be effective, but there's a lot of kind of marketing going on here. What people are really concerned about, I think, or what I'm really concerned about when it comes to the federal government deploying this kind of technology, is that it does have the ability to advance exponentially. It advances itself, just like the technological boom that, to my recollection, I really saw kick in in the nineties. I remember the 386 computers coming out, and then if you had a 486 or a 586, it was like, oh my god, you've got a beast of a computer with four megabytes of RAM. And I literally remember someone saying, oh, that's a 586 with six megabytes of RAM, that thing's a beast. And a guy was playing Doom on it, you know? And I remember how fast that moved. You can look at how fast that moved. If you go from 1995 to now, we're 28 years later, and my cell phone, the phone I carry around everywhere I go and set on the sill in the bathroom at the Packers game or the Hawkeyes game when I go pee, has a bajillion times more computing power and interface capability than the old 386 or 586 that everyone thought was amazing. And that's before you talk about comparing that to, you know, what guided the actual spacecraft on the Apollo missions. So, not to get off on a tangent there, my point is just that technology accelerates itself. As technology gets better, the technology to improve the technology gets better, and you get this exponential increase. The idea of the government deploying that kind of technology should be something that is discussed, with some concern attached to it. That doesn't mean you have to say, oh God, the government's coming to get us and we can't let 'em have this and blah, blah, blah.
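Just to put rough numbers on that exponential point, here's a quick back-of-the-envelope. Every figure in it is an assumed, typical ballpark spec, not something taken from the episode or the order.

    # Rough back-of-the-envelope on "technology accelerates exponentially."
    # All figures are assumed ballpark specs, purely for illustration.
    mid_90s_ram_mb = 4          # a "beast" mid-90s PC
    phone_ram_mb = 8 * 1024     # a typical ~8 GB phone today

    years = 2023 - 1995
    doubling_years = 2          # assume capability doubles roughly every 2 years
    doublings = years // doubling_years

    print(f"RAM ratio, phone vs. mid-90s PC: about {phone_ram_mb // mid_90s_ram_mb}x")
    print(f"{doublings} doublings since 1995: roughly {2 ** doublings:,}x growth")

Either way you slice it, the gap is around four orders of magnitude, which is the kind of curve being gestured at here.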
I do think, in general, citizens need to be healthily skeptical of the government, and they need to be basically fiercely defensive of their privacy and their rights. I do. And I think that's just a healthy adversarial relationship between government and population, because, as history has shown, and this is a human nature thing, people tend to forget that if they work in the government, they work for the people, you know, and that is their job. This has actually been shown in studies where people play jailers. And I'm not speaking about all jailers here. I'm talking about studies that show that if you give someone essentially too much power, and they did this with basically jailing people, the jailers were uncharacteristically cruel to the jailed. And when they switched roles, the same thing happened. You know, it's just a strange thing about human nature. It's why kids pick on each other, and a lot of other things. So I think you need to have a healthy adversarial relationship between population and government.

I am not the biggest fan of everything that I'm seeing in here, because I do think it's a little bit toothless when it comes to how federal agencies are gonna use this stuff. It does sound like what the government is saying is, look, we're gonna use AI. It's out there, everyone's gonna use it. And I agree, of course they are. But it also seems like they're saying, we're gonna develop all these soft-edged guidelines and kind of breakable rules, essentially, they're not even rules, and we're gonna throw money at it, which, great, but that's mostly just throwing money at it. And then saying, we're not necessarily going to require anyone to use it, though, we're just going to promote it to federal agencies that, again, I can't say it enough, it seems to me the administration has the authority to tell, to instruct, to use these things.

So I think the idea of the government's involvement with AI, particularly when it comes to privacy, needs to be taken seriously. I'm not sure it needs to be at paranoia level, but again, invoking that adversarial relationship, I do think it's important that the government be kept in check. I do think oversight is important. I think citizen oversight is important, and I think it's essential that people be aware for themselves of what the capabilities are. And realistically, I also think people won't know. 99.99999999999% of the people won't know what the capabilities of that AI truly are, because I think the government can legitimately claim a national security risk. And some of this technology and how they use it, of course, for national security probably needs to be kept under wraps, and I don't have a problem with that. But again, we come back to the power thing, and with that capability, how far do they go with it, and how okay is the population with it? And how much visibility will the population, the people, let's call 'em, that's a better way to say it, how much visibility will the people have into how they are using it and what their capabilities are? Because if there is that section that, you know, the people don't know about, then it's likely also that the people won't know how it's being used, and therefore won't be able to have oversight on it.
They won't be able to have feedback about it. So those are real considerations, and this document, so far as I can tell, and we're only in section two, is not really addressing that. It's talking about it, and that's a start. Hey, you know, I'll take it for now. But at least from what I can see, this is not a fix. This is simply a mention of it, some guidelines, some suggestions. And again, it's in the conversation, so that's good. But I'll be looking forward to seeing what else the rest of this contains next week. When we get into it, it looks like we're gonna be talking about a tangent to this particular subject, and that is advancing equity and civil rights. That should be interesting. I appreciate you joining me this morning. Enjoy the rest of your week, and have a great weekend if you're listening to this on Thursday when it drops. If not, just take care of yourself today, and we will see you next time. Take care.