Decode AI

Cloud AI Act with Raphael Koellner

Michael & Ralf Season 2024 Episode 7


In this conversation, Ralf, Michael, and Raphael Koellner discuss the AI Act and its implications for companies in Germany. They highlight the differing perspectives on the AI Act: some see it as over-regulation, others as necessary.
All three also talk about their first experiences with AI and share current use cases, such as cost estimation for insurance companies and contract analysis. They explain what the AI Act means and its four categories of AI systems.
In the ongoing debate, they discuss which authority is responsible for enforcing the AI Act and the need for a clear definition of AI.
They also cover the forbidden AI systems and the timeline for the implementation of the AI Act. The conversation explores the EU AI Act and its impact on companies, particularly in Germany. It discusses the need for guidelines and regulations to prevent the misuse of AI while still allowing for innovation. The role of an AI officer within companies is highlighted, as well as the importance of defining AI use cases and considering the business case. The conversation also touches on the potential positive impact of the AI Act on small and medium-sized businesses and the emergence of AI in various sectors.

Links
Raphael Köllner | LinkedIn

Keywords

AI Act, Germany, regulation, use cases, forbidden AI, categories, authority, definition, EU AI Act, guidelines, regulations, AI officer, AI use cases, business case, impact, small and medium-sized businesses, sectors



Do you want to take the moderation? Let's switch to English first so we don't hassle with the opening. Okay, I can do the opening if you want. Sure. Shall we give you a starting point? No, just go ahead. Hello and welcome to Decode AI, your podcast when it comes to learning, understanding and getting ahead with everything you do with AI. Today we have a special guest here, one of our MVP colleagues. But first let me say hello, Michael. Hello, Ralf. Yes, I am in this podcast as well. And I just realized we haven't had any proper introduction over the last episodes, so please check out our website decodeai.de to get more insights about us if you want to put a face to our voices. But now I think we can start and jump into our discussion with our guest. Yes, we have a guest here, and his name is Raphael. Raphael, maybe you can introduce yourself briefly so that we get a good picture of you. Briefly? Okay. My name is Raphael. I'm leading a Microsoft partner and law company, I'm one of these MVPs and Microsoft Regional Directors, and I'm more the regulation and compliance guy than the developer guy: someone who knows something about development, but more the lawyer, the compliance officer. And it's very interesting to bring another view on these topics. So I'm here and I'm happy to be here. And now it's your turn: what are your questions? Thanks for following our invitation. It's great to have you. Yeah, for sure. It is crucial and very important to see the legal perspective and the responsibility perspective on AI these days and to understand it. And I guess that companies have many questions for you regarding the things happening in regulation these days. But first of all, you gave us a little insight into what your daily work is; can you expand on that a little bit more? So, what is your daily work?
When it comes to a usual week, how does it look? That's very interesting. My week is 50% consulting and 50% being a corporate head of data privacy, a group DPO, worldwide. And yes, we are dealing with AI on both sides; AI and all of this is more like 60% of my weekly work. Right now it is rising a little, because everyone talks about this topic and we have more and more regulation, like the AI Act since last week. And not only in Europe: we also have new AI regulation in China and in South Korea, more and more regulation in this area, and it's very interesting. So my work today is data privacy, answering questions like: is it allowed to post pictures of people on a blog? A little bit the low-level things. And on the other side, all of the innovation things: we have AI in this company, how can we use it, how is it allowed to use it, how do we use it in a data-privacy-compliant way, and all of this. So that is 70% of my work, but the fun part, and 30% is community work, like this podcast and all of these conferences. And more and more people are asking me: is it allowed to use AI in this part, in these processes with personal data? Yeah, this is my daily work. Classical questions. And for the last six months it has felt a little bit like I'm back at university, because in regulation everything is new. We started with: what is AI as a definition? We have a lot of work in science, at universities, professors, a lot of papers we are writing, trying to define rules so that for all of these use cases we have some terms of use and regulation. And it's very, very hard. It's a new time since this year in these parts, a new era. But I'm very happy; a lot of courts and legal people are very happy about it, because they have a new topic. So we have a lot of experts now. So-called experts.
But yeah, that's been my daily work for some months now, with all of these questions. And you just mentioned, in part of your statement about what you are doing, that you are already deep into this interesting topic of AI. What was your first touch, your first experience with AI? My really first experience is now five to six years old, thinking together with Microsoft and LexisNexis, the legal database — part of it is used by the German governments — about creating a digital judge. Okay. We used the typical chatbot framework and the Q&A framework, took all the legal parts, the law and everything, and put it in. And we started from this typical Q&A with AI, a little bit like an AI search. So yes, we knew AI six years ago, but at the beginning the question was: okay, how do we create — yeah, it's more a helper for judges, and for advocates. For example, we started with all the legal topics around traffic and cars, because in that legal area it's typical: when you drive 10 kilometers per hour too fast, you get a fine of 20 euros and two points, for example. So we have a lot of values which are numbers, computer-friendly things. So we started with this, and it was very funny: we had, for example, a legal case — you drive on the Autobahn 50 kilometers per hour too fast and you have a little accident — and we put it in, and the result was: okay, this is your fine, you don't have to go to jail, but you have this much to pay and this many points. So it was really good, but not perfect. Our idea was that judges in Germany should follow a common line in their judgments, because today, for example, in Munich, in Bavaria, you get higher fines than in Niedersachsen for the same thing.
And we said, okay, with this bot we try to ensure that every lawyer and every judge can say: okay, we apply the same measure all over Germany. That was my first attempt to put AI in, to help in this use case. And today I see there are a lot of startups trying to do this; we had the idea five years before — like every time at Microsoft, we are five years ahead, but without the marketing. You already talked a little bit about the use cases you dealt with a few years ago. When we look at today, are there any use cases you can talk about? Yeah, we have use cases today in two spaces. The first space is the typical Copilot for Microsoft 365: help me write a text, help me write an email, you have a text, make it more formal, and so on. And for me the more interesting part, a bit like the legal bot and legal tech, is to analyze a contract — to analyze, for example, a data processing agreement or something like that. What I see today is that in this first space, AI is not very good at creating contracts or creating data privacy risk assessments. It can do this, but it's not at the level we need; the structure is not the structure we want. But as an analytics tool, we see it getting better and better. So yes, we still have to read everything, every sentence, but with these legal tools — and you can start with Copilot for Microsoft 365 — you can use a prompt like: give me the highlights of this contract, or give me the risks of this contract. And with OpenAI, when you build this on your own, it's typically good enough that you get 10 points out of the contract and you see: okay, beware here, you have to read these parts of the contract more closely. And this is a real challenge for us: we have contracts of 100 pages, or 20 pages, or 50 pages, and with this AI tool we say: okay, now we have the highlights, the 10 points, on pages 1, 10, 100, 150 — let's have a closer look at those parts. This is what I see more and more, and I think it helps every one of us to get a better view. And the second space, the third use case, is the insurance use case. We see it with Google Cloud and with Microsoft's cloud: insurance companies say, okay, you had an accident and your car has, yeah, damage on the front or the back, the glass is broken, whatever — you take a picture and the AI helps to say: okay, for this accident the result is you need €850 to repair the glass. And then some of these insurance companies — not a lot, some — say, okay, we pay these €850 directly to the people. So we are faster for the smaller accidents, the smaller claims. So I think these are some use cases out of my surrounding area which are really interesting, and we will see what happens in the future. I think we should keep an eye on it, because we had the same discussion two years ago with smart contracts and all of those parts, and we don't have smart contracts in daily work today. So we will see whether AI gets the same treatment, or whether we will really have AI in this area next year. But smart contracts, no, they are not part of daily work. Yep, these are some examples. Pretty cool to see such use cases, like cost estimation for insurance companies, and summarizations and top highlights out of contracts — cool use cases besides the Copilot one. Copilot is like a tool for me, not a use case. So I find it pretty interesting to listen to something like this; the insurance stuff is a really cool thing. With that said, what is the meaning of the AI Act to you and to companies these days, especially in Germany, if you can say anything about that?
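The contract-triage workflow described above — ask the model for the top risks, then jump to the flagged pages — can be sketched in a few lines. The prompt wording, helper names, and the parsing convention below are our own illustration, not the API of Copilot or any other product:

```python
# Sketch of the contract-triage pattern: build a "give me the risks" prompt for
# a chat model, then parse the numbered list it returns so each finding can be
# routed to a human reviewer. All names and formats here are illustrative.
import re

def build_triage_prompt(contract_text: str, n_points: int = 10) -> str:
    """Assemble the 'give me the risks of this contract' prompt."""
    return (
        f"List the {n_points} most important risks in the contract below. "
        "Answer as a numbered list, one risk per line, and name the clause "
        "or page each risk refers to.\n\n---\n" + contract_text
    )

def parse_highlights(model_reply: str) -> list[str]:
    """Extract the items of a numbered list ('1. ...', '2) ...') from a reply."""
    items = []
    for line in model_reply.splitlines():
        m = re.match(r"\s*\d+[.)]\s+(.*\S)", line)
        if m:
            items.append(m.group(1))
    return items

# A human still reads the flagged clauses: the model only points, it does not decide.
reply = "1. Liability cap missing (p. 12)\n2) Auto-renewal clause (p. 3)"
print(parse_highlights(reply))
```

The point of the parsing step is exactly what Raphael describes: the model's ten points become jump targets into a 100-page document, not a replacement for reading the flagged clauses.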
The AI Act is a little bit like: now we have GDPR, now we have the AI Act. The companies' questions are the same — we don't know what happens now, we don't know what we have to do. So this is the same period as the beginning of GDPR. And today we have the same two groups of people. The first group says: the AI Act is very bad, we won't have any innovation in Europe, we won't have any innovation in Germany, all of our companies will have to go to the US or to South Africa so they can work with AI, found new companies, develop new ideas. This is the first group, saying this regulation is too much, it's over-regulated. And we have the other group, which says: this regulation is not too little, it is in the right place, but it's not finished — we have to optimize it over the next years, but at least we have a regulation. The AI Act came faster than GDPR, for example: we got AI, and after two, two and a half years, we have an AI Act. That is very fast for the legal area; normally we need about 10 years to get a regulation. But here, the political part of our society knew that it needed a regulation in this area, in the context of ethics, for example — that was a main driver — to use AI in a good way. And, a little bit out of my perspective, it is also because we have more and more elections this year and next year, and our political friends and parties are a little bit afraid of a lot of fake news, of AI posting more on X or Twitter, on LinkedIn and everywhere. So we need a regulation to deal with it. And for me, we will see how it works in practice. We have had the AI Act since last week; there are use cases, a lot of them, but there are no judgments yet. What we do have is a lot of paperwork: we already have six legal books about the AI Act.
Yeah, they worked with the draft and published them now — and the draft changed only about 10% to the final version of the paper. So: okay, here is our book, here is our working book on the AI Act. For the first time in the legal area we have working books; we didn't have that for how to deal with AI. And for me it is very interesting that for the first time we have fast legal regulation, which defines some of our borders. For example, we have for the first time a fast definition of AI, a legal definition: what is AI? But for me, the legal definition in the AI Act is not perfect; IT people say that's not AI. What I like to test, and this is what I also tell in my sessions: I go to my grandma, or to someone who doesn't know anything about AI, and I read the legal definition from the AI Act to them, and they give me the answer: okay, you mean the Bible. Every time — 90% say: you mean the Bible. 10% think it is something about this new AI thing, but a lot of people say it is the Bible. So I think that is where we have to think about finding a better definition of AI. And this is what I say to everyone: in your company, find your definition of AI. What is AI for you, and where are the borders of AI? Because more and more companies are using AI for marketing reasons — also Microsoft: the Cognitive Services are now AI services. But no, not everything is AI. For example, I talked to a company yesterday. They told me: yeah, we have a new program for Exchange Server, with logic apps and now with AI, with rules to transport emails to the right people — everything with AI now. And I asked: okay, what part of it is AI? Well, we only have 50 rules. Okay, those rules are not AI; that's rules. And this is what I see: we need to talk more about AI, about the definition, about what AI is. But with the AI Act, we have a definition.
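Raphael's point — fifty fixed transport rules are not AI — is easy to make concrete. The mail router below is plain deterministic logic: every behavior was typed in by a person, nothing is inferred from data. The rule set and names are invented for illustration:

```python
# A hand-written mail router: ordinary lookup logic, not AI.
# The same input always hits the same hand-written rule; nothing is learned.
RULES = [
    ("invoice",  "accounting@example.com"),
    ("gdpr",     "privacy@example.com"),
    ("contract", "legal@example.com"),
]

def route_mail(subject: str) -> str:
    """Return the target mailbox for a subject line via fixed keyword rules."""
    s = subject.lower()
    for keyword, mailbox in RULES:
        if keyword in s:
            return mailbox
    return "info@example.com"  # fallback rule, also hand-written

print(route_mail("Invoice 2024-07"))   # accounting@example.com
print(route_mail("Team lunch"))        # info@example.com
```

A system like this only becomes an AI question when the routing is inferred from data rather than enumerated by hand — which is roughly the line the AI Act's definition tries to draw.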
And what we also get with the AI Act is four classes of AI. What I mean is: we have AI use cases, and now we have to bring these use cases under these four classes. So what do we have? We have minimal-risk AI systems, we have limited-risk AI systems, we have high-risk AI systems, and we have forbidden AI systems — systems it is not allowed to use at all. And it's very interesting; I think it was an intelligent move to say: okay, we have millions of use cases, and now we bring them into these four categories. And when a use case is, for example, in the high-risk category, more regulation follows as a result. The high-risk category is, for example, using AI in the health sector, or using AI in justice — so the lawyer bot from the beginning, five years ago, would today be a high-risk AI system. And also when, for example, the city of Leipzig uses AI, it is high-risk. So in this categorization we look at who is using the AI, in which area, and which data we use in this AI. We have these two parameters, and from these parameters we can map to the four categories. And — this is also a first for the German government — since yesterday the BSI, the national security authority, has a website that helps you classify your AI; they have built a little application to help companies with this regulation. And it is the first time in my life that I have seen us get a regulation and, three days later, a tool from our authority which helps us deal with this regulation and with the question: what happens with my AI
tool, what happens with my AI use case, what do we have to do. And the head of this national authority posted this morning: after one day we have more than 13,000 users of this new tool. And we see: okay, my tax money in this area is perfectly spent. They should do more like this — and they can see how many people are worried about AI and how many companies don't know what to do. Now we have this little tool, we have the support from the BSI, from the government, and we need more of this, because now we have the AI Act and these four categories, and we have to think about how to deal with them. For example — and this is what I see — we need new processes. The first result of the AI Act, in most companies, is that in the IT demand process, or in the development process of an AI tool, we need to categorize the AI tool under the AI Act. When it is, for example, a prohibited practice, we have a little more than half a year; with high-risk AI systems we have two to three years, to 2026 or 2027. It's a time period to prepare for the AI Act, to say: okay, what are the parameters for this AI system? It must be stable, secure — all that we know from NIS2 — what are our principles for building an AI tool now? And now we see: okay, we need more transparency and all of this. And for me it is very interesting, because in these areas — legal, justice and the health sector — the government and the cities and all of us say: okay, in these areas we need more digitalization, and maybe AI will help us be faster and have better results in the end. But I think what we need in this AI Act a little bit more is the people point of view. In the end we need people who know AI, who know how AI works, to control the AI. Because we all know: we put something into the AI and we get results; we put it in 100 times, and 80 or 90 times we get a different result. So we have to think about how to deal with that.
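The two-parameter mapping Raphael sketches — which area uses the AI, and which data and purpose are involved — can be mocked up as a toy classifier. The four tiers follow the Act's categories, but the mapping rules below are a deliberately crude illustration, not legal advice and not the BSI tool's actual logic:

```python
# Toy illustration of the AI Act's four-tier scheme: map a use case, described
# by sector, data kind and purpose, to a risk tier. The mapping is a crude
# sketch for illustration only; the real Act's rules are far more detailed.
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"

PROHIBITED_PURPOSES = {"social scoring", "manipulation"}   # examples from the talk
HIGH_RISK_SECTORS = {"health", "justice", "public administration"}

def classify(sector: str, data: str, purpose: str) -> Tier:
    """Very rough tier guess from (sector, data, purpose)."""
    if purpose in PROHIBITED_PURPOSES:
        return Tier.PROHIBITED
    if sector in HIGH_RISK_SECTORS or data == "biometric":
        return Tier.HIGH
    if purpose == "chatbot":          # interacts with people -> transparency
        return Tier.LIMITED
    return Tier.MINIMAL

print(classify("justice", "case files", "drafting aid"))   # Tier.HIGH
print(classify("retail", "product data", "chatbot"))       # Tier.LIMITED
```

The order of the checks matters: a prohibited purpose trumps everything, and high-risk sector or data trumps transparency duties — mirroring the way the Act's categories nest from forbidden down to minimal.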
But this is a little bit of what we will see in the next years. When we have a look — and I have my table here, because it is quite involved — the AI Act is in force now, but most of it applies from August 2026, and there are parts of the AI Act which apply on other dates. For example, the general provisions, including the definition, and the forbidden AI practices apply from the 2nd of February 2025. So we have only half a year until the forbidden things are forbidden — you can't prepare much, but they are forbidden; you are not allowed to use them. The second date, the 2nd of August 2026, is the main part of the AI Act: what an AI model is, use cases, governance, fines and responsibility and all of this. And then, as I said, on the 2nd of August 2027 comes the rest of the high-risk level. So they split it a little bit so that companies, and all of us, can deal a little bit better with this regulation. Let me quickly jump in. When you are talking about forbidden AI, about categories of risk and so on: who is going to say that an AI should be forbidden, and what is the reason for that? That's a very good question, because in Germany we have the discussion about who is the authority for the AI Act. We don't know. The BSI says: maybe us. But we also have more and more of the data privacy offices — the state data privacy authorities, for example from Bavaria, from Niedersachsen, from Baden-Württemberg, also from North Rhine-Westphalia — saying: we are the new authority for the AI Act, we will be the authority. So we have a fight between our authorities over who is responsible for the AI Act. And it is interesting: it is the first time the authorities fight over who is responsible for AI — over who is the authority that says, okay, you get a fine, or: okay, this is forbidden, and if you deal with this forbidden AI anyway, you get a fine. So it's very interesting. In this context: you say that there is not yet a list of forbidden AIs?
No, we don't have a list. We have a definition of what a forbidden AI is. For example, the definition of the category requires a very clear risk from the AI for security — for the security of Germany, for example, or for the safety of people. So it's not allowed to use AI to shoot people with a drone, for example. That is forbidden AI. And it's not only about the health of people or their inner well-being; it's also about the basis of these people's lives. All AIs in these parts are forbidden. The same goes for manipulation AIs — what I said about fake news in elections, that is also forbidden. And it's also forbidden, for example, to put people under an AI system where the AI says: okay, now you have to do this task with this tool and this tool and put these together — so that the AI controls the people. That is also forbidden. So it's these scoring, manipulation, high-risk and security parts which are forbidden. And it's very interesting: we have had the AI Act since last week, but in the Microsoft terms of use for OpenAI these things were already forbidden. Microsoft looked into the draft of the AI Act and put it into their contracts. So it's not allowed to use AI to recognize people's faces and make decisions over these people — for example, it's not allowed to film faces and make a decision like: shoot these people, kill these people. That is not allowed. So we have this regulation with these prohibitions, and in the terms of use of Microsoft, AWS and Google, these forbidden use cases are already in the contracts. So in real life, most of these use cases are already covered in the contracts of the AI tools. But not every AI tool we can use comes from these global players: you have AI tools from GitHub, for free, open source, which you can run on your laptop. And those also have to deal with this part. So, very interesting. You mentioned the categories.
They are very important, and one of the benefits we get from the AI Act. What are the main pillars, actually, where we can benefit or may have some restrictions from the AI Act? The target of the AI Act, and its main pillar, is that it gives us a border which defines — gives us the hints and the guideline — which AI use cases we can use, which AI use cases are allowed in our society. What we are seeing is that the AI Act is the result of discussions between lawyers, IT people, experts and everyone; it is the knowledge of last year and the beginning of this year, and we will see how AI changes in the next years. But the main pillar is that we now have the borders within which we can act, and we know what is forbidden, what lies outside the border. And the main target in this area is: okay, we know this, we know how to deal with AI. But the deeper pillar is that everyone, me included, wants the goal at the end that AI is not used to destroy society, to destroy us, to kill us. It's a little bit like dynamite: with dynamite, in a positive way, we get gold and everything, but with dynamite you can also kill people. It's the same here. It is forbidden to use AI to kill people, but it is possible; you can build such an AI. We have these scenarios, we see it in Ukraine today, and our industry has these use cases — there is, for example, the robot-dog replica with a gun on it, and an AI deciding who the people are that should be killed. And this is forbidden. For me, that is the main pillar: we now know how to use AI, and we get a guideline to use AI in a way that doesn't destroy our society. Got it. Sorry. That is very interesting, because it is good to understand that it is a kind of guideline rather than — as you said at the very beginning — just a catalog of fines or something like that. It is a law with some fines, some borders.
It's more about the general understanding and a common way to identify and specify AI in general and to clarify some questions: what are the use cases where AI is good for society, instead of being able to use AI for almost everything without any limitations? That's an interesting one for me to understand. It feels like a framework we can work within, like the Responsible AI framework from Microsoft. And as you said, it fits together the whole time. That's cool to know. So, to our audience out there: if you have acted within the Responsible AI framework, you should be fairly safe with your applications and shouldn't have to fear the EU AI Act at that stage. That's pretty cool. And Microsoft is working together with the authorities on this Responsible AI topic. From the beginning, Microsoft's head of legal, Brad Smith, has been going everywhere and speaking to bring this Responsible AI to a level where we can deal with it, so that it is the basis for bringing your ideas to life — but not every idea. And — this is what I like, and I have talked to Sarah, who owns the Azure AI Content Safety system — we have these Azure content safety APIs, and not only for Microsoft tools: it's also possible to have your AI on AWS, tunnel it to the Azure safety API and bring it back, so we have this control and filter system. But on the other side, we have these two groups again: the first group says every filter system crushes our AI, our AI innovation; the other says, okay, we need a little bit of control over what happens there, so that AI stays on the street and doesn't wander from the street into the dark parts of the woods beside the street. And yes, there are dark parts: go into the darknet and today you can buy AI tools with hacking components and everything for $50. But this is what we don't want to grow in our society.
We want to bring the value of AI to the front, and when something happens, we have the fines and the sanctions. But now we will see which authority will come and say: okay, this is not allowed, you get a fine, you have to stop this work. I like the idea of getting some rules and a general understanding. Do you think that will change anything for our personal lives? Like we already clarified, it's in some — or let's say many — of these large companies' contracts already, so it sounds like nothing new; we just have an official paper for it. Do you think it makes any impact on our personal lives, brings more protection, something like that? Yes, it does — and this is a very good question. We have the EU AI Act; we have acts in China, where China only allows AI from inside China, so it's not allowed to use OpenAI in China. Microsoft would perhaps like to deploy OpenAI in the Chinese government cloud, but today it's not available; it's not allowed. South Korea is similar. So in APAC we have more of these restrictive regulations. In Europe we have a regulation, but not as restrictive as in China, where it's forbidden to use AI from outside the country. I think with these AI regulations we have a fight between the economic systems of the world, carried out via AI regulation and AI tools. So we see today, and I think we will see it more and more in our daily lives: we use an application out of the US or Europe, and there is an application out of India, or out of Russia, or wherever, that has other functions, that is maybe more free, or deals with more high-risk functionality. And we will say: okay, we use one tool from here and one tool from there — we will see more and more differences, and we have to decide what we want. But what is the solution in the end? We don't know.
That is a big discussion for our society. And it's very interesting that only with the topic of AI and these new tools do we come to this high-level discussion of competition between social systems. I won't go into that in one of these podcasts, but in the end it is a question of how you protect your society, how open you are, and which economic or social system will deal best with it. Do I know whether Europe is better, or the US, Brazil, China or South Korea, the APAC part? We will see what happens in the end. It is a good question. But let me turn the question around to you: what do you think? Is it better to have an open system where you can create every AI use case you want? Or is it better to have a border? Or is it better to have borders so tight that only 10% of use cases are allowed, like in China? What do you think is better for AI innovation? I personally prefer that there are some guidelines and rules to avoid misuse of all this. And I think if it's too restrictive, we lose — we miss — some chances to get more innovative technology. But to forbid specific things, like harming any human life, or creating deep fakes for illegal purposes — I heard a story about someone who was in a video call with the C-level of a whole company, multiple people, and no one was real, it was all deep fake, and they decided to move money to an illegal account. That is the manipulation AI, and it is a forbidden AI. Exactly. So that's something which should be regulated, from my personal point of view. And we are already aware there is a darknet for today's technologies, and there will be a dark AI net or something like that. So, long story short, I'm absolutely for regulation, but not to crush everything and restrict it at a very low level. We should be open to work with AI and develop more ideas. Yeah, I see it in very much the same way as you.
I like the idea of having a framework that gives some guidelines on how to do it. And I also like that there is a possibility to estimate whether you are taking a risk or not — that there are criteria you can follow to put a system into one of those columns and say whether it is a risk position or not. Also having an idea about where the data is going to be processed, how the data is going to be treated, and what that system means to a human or to an employee within a company — I think all those points are very, very valid. What I also like is that the EU AI Act takes care that there is still freedom to be innovative, that it encourages the economy and the citizens to be creative, to create something fancy and cool, to build production support and assistance, or an advocate bot like you described. That is really crucial and important. I wouldn't say the better way is to do it like China or parts of APAC and strictly regulate everything down; that stops innovation, and from my understanding of how the world runs at the moment, at a certain point that also stops the economy. We will see what happens. But to bring it back to your question from the beginning: what happens now for the companies? We have to think about who in the company is responsible for these AI topics. We are talking about regulation, ethics and everything. Is the data privacy officer responsible for all of this? No. The compliance officer? No, that's more about contracts and all of this. The CISO? Is it a CISO topic? It's every single part of it. So I, like some others, say: okay, we need an AI officer in the company. Like the DSB, the compliance officer, we need an AI officer who handles all of these topics, talks with every legal, IT security and data privacy department, and brings AI onto this regulated, compliant path — so that we have some people who deal with it.
And it's very interesting that there is no formal definition of this AI officer yet. But the idea comes from legal people and from companies that said: okay, we need someone who does this. So companies today are starting with two or three people who search for, or 'shop', AI use cases in the company. And I think some of these people will come to realize that they are in effect an AI officer, not at C-level, but at a senior level, there to bring AI into the company and deal with it, because they are the person you can ask. For example: I'm in HR and I want to use AI. Okay, go to the AI officer and ask: I want to use this, is it allowed? Can you help me create a data flow diagram for the privacy officer? Can you help me with the security review? Do we need to ask our employees whether their data may be used for this AI, or is it already covered by the working contract? Can we clear it with the works council? So I think in the companies we need to think much more about this person. Call them AI officer or whatever you want; for me the point is that this role helps the company adopt AI in an effective way.

Because we all see companies starting with a company GPT or the Early Access Program for Microsoft 365 Copilot, and in this area we now see the biggest questions. Please make a podcast on this topic, by the way.

A podcast idea for ourselves? I think that's too much for today.

The question is how to measure whether AI is effective for a company. Does AI actually work for this company, does it help these people, or is it only a nice try? Is it AI we can use at all? Is it justified to put money into these AI topics, so that AI in the end is good for the company? This metric of AI effectiveness is one of the very big questions. We have the regulatory side. Okay.
We have an AI system that is okay with the AI Act, with data privacy and everything. But in the end, what is the metric showing that this AI helps the people? Our digital minister said AI will help every employee and bring more free time, because with AI they can work faster. But what does 'working faster' actually mean?

Yeah, this is very interesting. From my perspective, AI will bring a lot of efficiency into a company if it is used right, for instance to automate toil processes that involve a lot of repetitive work. So it can be very beneficial for the employee. But I think within a company there should be a council for AI questions, not only one person. As you've explained, there are many different topics to be discussed, so it should be a council of these people that decides at the end of the day.

In Germany we say 'Lenkungskreis', a typical German word for a working group of decision-makers.

I mean, there's the works council, there's the data protection officer, there is the CISO. But the person who organizes all of this can be the AI officer, okay? I'm doing a lot of advertising for this role, but the point is, it's too much to handle on the side. Say I'm the plant manager and I'm also supposed to deal with AI? No. As a plant manager you have plenty to do; you can't also cover AI. AI is too big for a setup where we say: okay, you have a hundred percent position, 80% is your normal work and 20% or 10% is AI. For me it's not 20%; it must be a 100% or at least 80% position. And this will be a very hard question for companies today, because we don't find these people. Where are these people? We have no universities, no study programs where they are trained. So what knowledge do they need, and who are these people? That's a good question: what is the profile of these people?
Who are the people dealing with AI that companies will be searching for? What is the profile of these people? That is very interesting. We have AI, but in Germany, for example, we don't have the background. We don't have schools and programs that train people to be an AI specialist, an AI officer, or AI knowledge workers. Or do we? There are some AI degree programs (Studiengänge), I don't know. But you know what I mean.

But you know, at the beginning of the era of search engines it was the same: no one had any experience with it. So we will grow into this, and the companies will learn how to work with it. I really second that it's necessary to have someone with a dedicated focus on AI, maybe not someone at a 100% level at the very beginning. I think it will grow, and it makes sense to go this way: use it, get more experience with it, and you will get more questions in the companies, like whether it can be used for this use case or that use case, not only for the legal part, not only for the compliance part, but also regarding what kind of AI technology can be used. So you need someone who is familiar with this topic, and this is nothing you can do just as a side hustle. That's a point I definitely echo.

There's also the question of how to define an AI use case. What is an AI use case? Those 50 rules you mentioned before?

Yeah, exactly. And this is also very interesting, for example for me as a data privacy guy and maybe a little bit of an AI person, only a little bit, not the 100% AI guy. I'm asking every time: what is the use case? And for everyone it's very hard to define a use case, even for tools we use every day like Microsoft Teams or Word. But now they have to define use cases for AI. I think this is a first step, and with these use cases I can say: okay, this is a category of AI, we can classify it. But we need a detailed use-case description.
So please define what AI is in your company and write these use-case definitions. Why do you want it, for what purpose, what is the target when you use it, which parameters tell you whether it is effective or not? Then we can check which data, and in particular which personal data, you will process in which area, and then you will see what you are dealing with.

Yeah, to me it's as always when it comes to a project where something must be developed, and that's the case with AI. It's not only about having a use case. You can find a ton of use cases out there, but to me it's also a question of the business case. Without a business case, don't even start thinking about it, because the business case is what pays the bills at the end of the day.

So, you were pretty impressed by the speed of the EU, how quickly they came up with the EU AI Act. Do you think that German companies are catching up that fast as well with all this AI stuff, or do you think they are a lot behind?

Clear answer: no. We can see it if we compare it with the GDPR. We have had the GDPR since 2018, and now it's 2024, a long time, and many companies are still not GDPR-ready. So now we have the AI Act, we have one year, and one year is too short. Many companies have to deal with the GDPR, with the TTDSG, with the Data Act, with the other EU regulations, with import and export rules, and with all of these new rules at the same time. Now we all have to deal with it, and this will be a really hard challenge. I think many companies will simply ask: what is the fine? Okay, we won't bother, just do it.

So do you expect that this will slow down the innovation around AI within the German economy? For some companies in the insurance, finance and banking sector, the regulated sector, yes.
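The use-case intake Raphael describes, asking for purpose, target, effectiveness parameters and the data involved, can be sketched as a small structured record. The following Python sketch is purely illustrative: the field names and the rough risk mapping are our own assumptions for the sake of the example, not terminology or logic prescribed by the AI Act itself.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One AI use case as the AI officer might record it (illustrative fields)."""
    name: str                          # e.g. "HR candidate screening"
    purpose: str                       # why the AI is used
    target_metric: str                 # how effectiveness will be measured
    data_categories: list = field(default_factory=list)  # which data is processed
    uses_personal_data: bool = False
    affects_employment: bool = False   # employment decisions tend to be high-risk
    manipulative: bool = False         # manipulative systems are forbidden

def classify(use_case: AIUseCase) -> str:
    """Very rough, hypothetical mapping of a use case onto an AI Act risk tier."""
    if use_case.manipulative:
        return "unacceptable"
    if use_case.affects_employment:
        return "high"
    if use_case.uses_personal_data:
        return "limited"
    return "minimal"
```

With a record like this, the HR example from above becomes something the AI officer can triage: `classify(AIUseCase("HR screening", "rank incoming CVs", "hours saved per hire", ["CV data"], uses_personal_data=True, affects_employment=True))` would return `"high"`, flagging that a data flow diagram and a works-council discussion are needed before rollout. A real classification must of course follow the Act's actual categories and legal advice, not a three-line function.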
But I think it is a chance for the middle market, because in the middle market, in these innovative cases, they have the chance to say: our risk is lower, not zero, but lower, and the fines are something we can handle, so just do it. It's more of this 'do it' mentality, while in an enterprise company you need a long IT demand process. So let me address the middle market as my last sentence: please, KMUs, middle market companies, use AI to move to the front of the economy. It can be a chance.

For the middle market. For the middle market, yeah. Definitely. For small and medium businesses. Yeah, for small and medium businesses it can be a chance to just do it. I see that some enterprise companies are now founding AI subsidiaries to bring this momentum into the enterprise. But I think it's now the chance for the middle market companies.

Cool. Definitely. And do you think there's a positive impact from this AI Act for that growing number of companies?

Yes, definitely. They can deal with it better than an enterprise company. Yes, we have this regulation and they have to comply with it, but what I mean is: they are in a position where they don't need to fulfill every last detail pedantically. I don't want to say 'middle market, go and ignore the regulations', but you know what I mean. It's less critical if they don't fulfill everything perfectly. So yes, fulfill everything, that's totally fine, but it's more flexible, you can work with it, and maybe you can be faster than the enterprise companies. We see this with the electric car builders: Tesla, once small, now big. And in Germany, for example, the postal company said: okay, we are building our own e-cars because no one else does. Now they are one of the biggest e-car builders in Germany.
So we now see the middle market and startups bringing AI to a level where some enterprise use cases are no longer needed. A very simple example: newspapers. There are hundreds of people there writing texts. And there's a startup using AI to create your personal newspaper: it selects the content, brings it together and creates your own newsletter, your own newspaper, your own news of the day. So you don't need a big newspaper company.

You would still need some content creators or journalists to have good content.

Correct. But the combining and assembling is faster and individual, and that is not something the big newspapers can do today. We all know that 80% of most local newspapers is copied DPA content stitched together, and only the middle section, two pages, is written by their own journalists. I think in this area we will see the results of AI very fast, and also in the adult content sector.

Yeah, but I see a risk there that the information supply for citizens is endangered, because people are not reading newsletters or newspapers anymore. More and more they are consuming TikTok, Instagram and other social media, and relate more to that than to official news. So that will bring up another risk. And I would be curious to see the development when one AI writes something and another AI starts to copy it. When that circuit starts, it will be interesting to see how the stories evolve.

Like we said with the GDPR, it can also be a chance: to reach the younger people on TikTok, Instagram and so on, to create smaller pieces of news and bring the news to them. Not the whole story, they won't watch Arte or 3sat all week, but in smaller pieces through these channels. Why not?

So, I would say we're almost at the end of the podcast. Yes.
It was a very, very interesting talk, and thank you a lot for the insights you gave here. Thank you very much. I think it's a topic whose effects we will be watching over the next years, and we will stay in touch. Maybe you can give us an update in a couple of months.

How the AI Act works after 100 days?

Yeah, why not? Good idea. Very interesting. I'm really looking forward to that; I couldn't wait to have you as our guest here. So now you get one sentence for our audience, whatever you want to shout out to them. One great sentence.

We can engrave this somewhere. This is a great one. Thanks for being our guest, Raphael. It was really a pleasure to talk to you and to listen to all the insights you gave us. You covered almost the whole area, including new use cases for our audience, and especially you caught us up on how to understand the EU AI Act at this stage. Michael and I would say we're very, very happy to follow up with you in a few months. Absolutely. And I'm really happy you gave some good explanations on this topic. I thought we would talk about a lot of stuff, but it's important, and you gave us the opportunity to understand it a little bit better. Thank you very much. Thank you.

So, thank you for listening to our podcast. Stay tuned, stay interested, sign up. Here we go. Bye bye. Take care, all. Thanks for listening. Bye. Bye bye.
