Bytesize Legal Updates | Fieldfisher

Bytesize Legal Update - The EU AI Act becomes law - what now?

July 18, 2024 | Fieldfisher | Season 2, Episode 6

It's finally here - the EU's Artificial Intelligence Act (EU AI Act) has now been published in the Official Journal of the European Union and will enter into force on the 1st of August 2024. So what happens now?

In this episode, we explore the timeline for implementation of the Act, the priorities businesses need to be considering, and the steps they can take now to prepare.

Cutting through hundreds of pages of new law, Fieldfisher's Flick Fisher, Richard Lawne and James Russell distill the key takeaways and outline the critical next steps for businesses.

Transcript

James: Hi there, I am James.

Flick: I'm Flick. 

Richard: And I'm Richard. 

James: And we're tech and data specialists from Fieldfisher Silicon Valley. So this past Friday, the EU's Artificial Intelligence Act was finally published in the Official Journal of the European Union. With hundreds of pages of new law to dig into, there's a whole bunch that we've got to unpack here.

So on today's episode of the Bytesize Legal Update, we're asking: what does this mean for businesses? And what do businesses using or developing AI need to do now? So, the ink's finally dry and the clock has started ticking. The EU AI Act will officially enter into force in a matter of weeks, but that doesn't mean we suddenly need to be fully compliant in a matter of days, does it?

Richard, before we start looking into some of the detail of who this law applies to and what they need to do, [00:01:00] can you maybe just go over what the timelines are here?

Richard: Yes, absolutely. There's no need to panic. The AI Act is law, as you say, but there is a staged roll-out of its provisions, so you don't suddenly need to be compliant by the end of the month; there is some time.

In that respect, let's talk through some of the most important dates. The first date that people should note in their diaries is the 2nd of February 2025. That's six months after the AI Act came into force, and it's when the prohibited practices provisions take effect, so when the strictest prohibitions on certain AI practices come into force.

This includes things like social scoring, real-time biometric identification, and certain manipulative AI practices. The second date that people should mark in their diaries is the 2nd of August 2025. That's one year after the AI Act came into force, and this is when the rules [00:02:00] applying to general purpose AI models come into force.

And these rules are going to apply to the largest frontier or foundation models, the biggest LLMs that perform general tasks. It's also important to note that pre-existing GPAI models, i.e. those models which already exist by the 2nd of August 2025, will have some additional time to comply. Then let's come on to the last important date.

That's the 2nd of August 2026. This is two years after entry into force of the Act, and probably the date that most of our listeners need to pay attention to. This is the date when the majority of the remaining provisions under the AI Act will come into force, including the most onerous obligations for high-risk AI systems and the transparency rules for the lower, limited-risk AI systems as well.

There are a number of other important dates under the Act, but those are the [00:03:00] three key dates to bear in mind.

James: That is really helpful, thanks Richard. So we've got a bit of a roadmap set out ahead of us, and this idea that we've got to be doing something to prepare. How do we go about prioritising that? What are the immediate priorities and next steps we need to be thinking about now? Flick, can you maybe speak to that?

Flick: Yeah, thanks, James. So I think the headline point is that the AI Act doesn't apply to all AI systems. It takes a risk-based approach, and so one of the first things that we want to make sure we're doing is to get a handle on what AI systems we have actually developed, deployed or plan to develop in the future. So doing some kind of inventory to assess what AI systems you've currently got or plan to develop is really crucial here, because once we've got a map of what systems you currently have, we can then start to determine whether or not those systems are indeed caught, or how they're regulated, by the Act.

So [00:04:00] by way of a quick refresher, there are basically three types of risk classification for AI systems under the Act. The first is the prohibited systems. Those include the AI systems which the EU has identified as the most dangerous, and cover practices like behavioral manipulation, social credit scoring, or untargeted facial recognition from CCTV footage. And we know from the dates that Richard has just flagged that we need to get a handle on whether we have any prohibited AI systems, because those rules are going to kick in within six months.

The next bucket that we need to be mindful of is the high-risk AI systems, and this is where we know the bulk of the obligations under the AI Act sit.

Those are permissible under the Act, but subject to a laundry list of compliance obligations to do with risk management, data governance, technical documentation, et cetera. And these include many services which have already been targeted by the EU for elevated product safety obligations under [00:05:00] existing consumer protection legislation.

So think of children's toys and vehicles, but also situations where we can imagine that the consequences could be particularly dramatic, like in the context of medical devices, employment and education. So, do you have any high-risk systems?

And then the final bucket is the more limited-risk systems, where there are going to be far fewer obligations. That's where the Act has some provisions dedicated to situations where it might not be clear to the user that they're interacting with an AI system or its outputs, as opposed to a real person or real content. So we're thinking about customer service chatbots, but also gen AI content that is functionally indistinguishable from authentic content, such as an artist's work or the recording of a celebrity speech. And that's where the EU is putting in place transparency obligations that have to be met, so that consumers are aware that they're interacting with a chatbot, et cetera.

So the first [00:06:00] task is: where does your AI system sit? And then the next is really to understand your particular role with respect to that AI system. Are you the developer of that AI system, or are you just a deployer, a user of that system? Or are you one of the other, ancillary operators in the system, a distributor for example? Because understanding where you sit within the different roles under the Act will determine the scope of your obligations.

And the spoiler alert is that the bulk of the obligations will apply to the developers of those high-risk systems. And then, as mentioned, there are also particular provisions for general purpose AI models, and there we need to determine whether we've just got a standard general purpose AI model or one that may have systemic risk, where there are going to be additional compliance obligations.

So figuring out what you've got, what risk categorization it falls within and what your role is: those are going to be the key next steps. And you [00:07:00] might find, having done that analysis, that actually you're not doing anything which is high risk under the AI Act and that you fall outside of it.

Richard: Yeah, and just to quickly follow up on that, it's important to remember that this is a risk-based regulatory framework and the purpose is really to target the highest-risk use cases of AI in the EU.

So, as Flick mentioned, there might actually be many use cases involving AI that either fall outside the scope of the AI Act or are only subject to very limited obligations. Most of the obligations fall on those which fall into the high-risk category.

James: That makes sense. So, we've now got an idea of how to find out whether or not we're in scope, and what the obligations will be depending on the level of risk of our product.

But what are the consequences of non-compliance? What if I'm not able to get up to speed in time? What happens?

Richard: Yeah, so the consequences of non-compliance can be quite serious. First of all, there's the potential for regulatory [00:08:00] fines. There's a tiered system for fines. At the most serious, upper end, we have fines of up to 7 percent of global turnover or 35 million euros, and that's with respect to infringement of the prohibited practices.

At the lower end of the scale, there are also fines of up to 3 percent of global turnover or 15 million euros, which apply to infringements of the rules for high-risk systems. And then further down we also have fines of up to 1 percent for failing to provide information, providing misleading information to regulators, or failing to cooperate and comply with their instructions. But apart from regulatory fines, there's also potential action that market surveillance authorities can take: they can suspend the use of an AI system, they can prohibit its use in the EU, or they can even issue orders for recall.

So, recalling product safety legislation, if they consider an AI system to be non-compliant [00:09:00] or particularly dangerous for EU consumers, they can issue a recall of that AI system and potentially ban it from the EU entirely.

Flick: Yeah, the other thing that's just worth mentioning there is that we've also got the AI Office, which is going to be the European body that monitors, supervises and enforces the AI Act. This office has just been established, and as I understand it there are five different units there, and they're frantically trying to recruit the number of people they need to properly run that office.

But it's fair to say that the AI Office is being given a mandate to fulfill a large number of tasks, which includes developing and publishing codes of practice and technical standards, and conducting testing and evaluation of AI models. They have a huge amount of work to do, and the big question is whether or not they're going to be able to meet some of the deadlines and push some of this content out fast enough. The Act tells us what we need to do, but it remains a little bit unclear how we do that [00:10:00] in practice without some of these supplemental codes of practice and technical standards, which we desperately need to be able to interpret our obligations and apply them in practice.

 Richard: And we do expect there's going to be a lot of movement in the next year or so around common codes of practice, technical standards, et cetera.

And a lot of those are not only motivated by the AI Act, but also by broader AI governance initiatives, and driven by commercial considerations and broader AI safety practices as well. So even though this legislation is quite targeted in who it applies to, and there is a staged rollout of the provisions with some time before those rules for high-risk systems come into play, we expect that in the meantime there are going to be emerging standards that companies will be looking to comply with and adhere to for other reasons: not just regulatory requirements, but commercial reasons and industry expectations as well.

James: Absolutely. Oh, well, thank you guys very much. I'm [00:11:00] conscious that we've had to cover a lot in a very short period of time. So I wonder, if you had to condense this all down into a few key takeaways, what are the key points that businesses need to be thinking about when it comes to the AI Act?

Flick: Yes, well, we want to reassure you: don't panic. The AI Act being published means it will shortly come into force, but as Richard has flagged, there's a staged rollout of its provisions, which means you've still got some time to prepare your compliance plan.

Now, I don't want to underplay the volume of work that is going to be required if you are dealing with, for example, a high-risk system. But there is a bit of time to make that assessment and work through some of those compliance obligations. And as I mentioned, we're still waiting on quite a bit of additional supplementary guidance and codes to help us figure that out.

But you can't sit back and not do that assessment, even though we have got time to work through it. So the real next priority is to figure out what you've got in terms of AI systems and how you might fit within the Act. Are you [00:12:00] a provider, a deployer, et cetera? And once you've figured that out, we can start to build a roadmap for what you need to do to comply with the Act in a more organized way. And it is going to be a heavy lift if, again, you're caught within that high-risk category.

So watch this space for more information from us. As a team, we're going to be pushing out more content on the AI Act to give more practical guidance as we all navigate this new piece of law. And let us know if we can help with anything; I'd say the best next step, if you have any questions, is to come and speak to Fieldfisher.

James: Absolutely, and as you say, Flick, we will put out new content as the EU releases guidance about the implementation process, and we'll be releasing further breakdowns and analysis, so do watch this space. Well, I think that's about all we've got time for. So thank you for joining us on this latest episode of Fieldfisher's Bytesize Legal podcast, your source for concise updates on the key legal developments in technology and data [00:13:00] protection law.

As Flick said, if you have any questions about today's update, don't hesitate to reach out to us. And if you did find it useful, do make sure to give us a like or review on your podcatcher of choice. Apart from that, I think all that's left for me is to say thanks to Richard and Flick for joining us and for all those interesting insights today.

Richard: Thanks, James. 

Flick: Thank you very much. 

James: Thank you all for listening. We'll see you next time.