Bytesize Legal Updates | Fieldfisher

Bytesize Legal Update - The SCHUFA case

December 15, 2023 | Fieldfisher | Season 1, Episode 11

The Court of Justice of the European Union handed down two landmark judgments last week concerning the credit reference agency SCHUFA, which consider what constitutes automated decision-making under Article 22 of the GDPR and the lawfulness of retaining public registry data for commercial purposes.

In our latest Bytesize Legal Update episode, Fieldfisher's Megan Ward and Flick Fisher discuss the two judgments in the SCHUFA case and why they are important for companies that provide fraud scoring services.



Transcript



Bytesize Legal Updates - SCHUFA
 [00:00:00] 
Megan: Hi, I'm Megan Ward, and I'm joined by Flick Fisher for today's Bytesize Legal Update. We're going to talk through two of the judgments handed down by the Court of Justice of the European Union last week in the SCHUFA case, and why they're important for you.
SCHUFA is one of the leading private credit agencies in Germany, and there were two cases brought against it. SCHUFA 1 is something of a landmark judgment: it's one of the first European court decisions to look at what would amount to automated decision-making under Article 22 of the GDPR.
SCHUFA 2 is important because it challenges the lawfulness of retaining public registry data for commercial purposes. We've split the reading on the cases this week, and I'm going to let Flick take us through SCHUFA 1. So, Flick, who is SCHUFA and what was the case about?
Flick: Yeah, thanks Megan. As you mentioned in the intro, [00:01:00] SCHUFA is a German credit reference agency that assigns credit scores to people based on the probability that they will repay a loan. They calculate a person's score by analysing their supposed characteristics and behaviour using mathematical and statistical procedures.
So we can probably assume that involves some form of machine learning models in the background. SCHUFA's customers, who are typically banks, would then use that score to decide whether or not to offer a person a loan, and if so, on what conditions. The case arose off the back of a complaint from a data subject who was refused a loan.
That lender had based its refusal on the individual's credit score. The individual went to SCHUFA and submitted a subject access request, and also requested that SCHUFA delete certain allegedly inaccurate [00:02:00] personal data about them. SCHUFA had attempted to give certain information to this individual.
They broadly told them what the score was, with some broad information about how it had been calculated, but not much more than that. So the question that arose in this case was really whether or not SCHUFA was doing automated decision-making within the scope of Article 22 of the GDPR, and therefore whether or not they were required to deliver up more information about how the score was being generated, and more generally whether they were lawfully allowed to be doing that automated decision-making.
Megan: And we've said it's a pretty landmark judgment, so what did the court have to say, and why was it so important?
Flick: Yeah. It's fair to say that, certainly in my experience across the industry of credit scoring [00:03:00] and fraud detection type services that generate probability-based scores, many in that industry have taken the position that they do not themselves make a decision with those scores; they just give the information to their customers, who use that score to inform the ultimate decision that is made, i.e. whether or not, in this case, to grant someone a contract.
And in this case, it was really a question of whether the decision is made just by virtue of the fact that the score has been generated automatically, using, we assume, machine learning models, and, because the customer in this case had leaned heavily on and used that score to make the resulting decision, whether that in fact meant that SCHUFA was doing automated decision-making within the meaning of Article 22. And before we dig into what the court said, I think it's useful to have a quick refresher on what Article 22 says. Now, this is the provision in the GDPR which specifically regulates [00:04:00] any kind of automated decision-making, including any profiling,
that produces legal effects or similarly significantly affects individuals in some way. So that is really talking about purely algorithmically driven scores that could impact people because of the decisions taken using the outputs from those algorithms: denial of a contract, denial of a loan, denial of medical or insurance cover, that type of thing.
So the law is really stepping in to make sure that consumers are protected in that scenario, by including a general right for individuals not to be subject to that kind of automated decision-making unless, in this case, the vendor doing that decision-making can prove that they have the individual's explicit consent, that it's something that's authorised by member state law, or that it's necessary for the performance of a contract.
And if none of those things [00:05:00] apply, then they shouldn't be doing that automated decision-making. And even if they can find a way to make it lawful, the GDPR also includes enhanced compliance obligations that apply to those doing the automated decision-making. So, as you can imagine, SCHUFA, who's in the business of generating scores, was trying to maintain the position that actually they don't do any decisioning based on the score.
It's really their customers that do that, so actually they weren't within the scope of Article 22. Now, spoiler alert: we know that the court did not side with them in this case. And actually, this is why we have a pretty landmark and potentially disruptive decision for those in the industry, because we can see a really broad and expansive interpretation of what constitutes automated decision-making.
In this case, the automated establishment of a probability value concerning someone's ability to service a loan, i.e. the credit score, is itself a decision within the meaning of [00:06:00] Article 22, because the customer was using that score, or drawing strongly on it, to action the ultimate decision, i.e. to refuse that person a loan.
So we can really see that the argument 'hey, we're just producing a score, and the ultimate decision is taken by a third party' is now not something that many can continue to make, because what's relevant here is how that score has been used. In particular, if that third party is drawing strongly on the score to take or inform the ultimate decision, then that's enough for the vendor who generated the score to be caught by Article 22. So in this case, SCHUFA's scoring process was deemed automated decision-making, and they are now subject to a whole host of additional compliance obligations.
But also, really importantly, the court emphasised that we should treat Article 22 as a general prohibition on automated decision-making that impacts individuals. [00:07:00] So unlike some of those other rights in the GDPR, such as the right of access or the right of deletion, which depend on the individual exercising them, this Article 22 provision should be treated as something that applies irrespective of whether or not someone exercises that right. It should be seen as a general prohibition, and only something you can do if you can fit within the conditions of Article 22. Quite a few implications there for those in the credit scoring industry and those using AI and machine learning models to automate algorithmically driven outputs that could have legal impacts on people.
Megan: And what do we think it means for those service providers engaging in similar sorts of activities?
Flick: It basically means that they're now going to have to assess more carefully whether or not what they're doing, and the ways in which their customers are using the score, could pull them within the scope of Article 22. And I think the interesting thing about this case is that the customer or third party [00:08:00] doesn't, it seems, have to be relying solely on that score.
They just have to be drawing strongly on it, so that it strongly informs the resulting decision. And to some extent, many vendors wouldn't necessarily have much control over how their customers are going to be using that score, but they may now be paying more attention to that.
They may want to try and contractually impose some extra conditions on the customers or the way in which they're using that score: requiring more manual review, or ensuring that it's weighted with other factors. I think that might have huge commercial implications and may not be something they can achieve, but there may be more review and consideration of how the score's ultimately being used. Assuming that they are caught by Article 22, it then means that they're going to have to find a legal basis to make that work under Article 22, and that's not that easy.
If they're relying on the explicit consent of the data subject, they're going to [00:09:00] have to work with the customer, probably, to collect that consent if they don't have a direct relationship. They're also going to have to be mindful of the requirements to put in place additional safeguards aimed at protecting individuals. Those safeguards require that they enable people to obtain human intervention, to understand the score, to express their view and ultimately to contest the decision.
So there might need to be a whole process in place there. There are also upfront transparency obligations, so they're going to have to provide meaningful information about their methods for generating the score and the consequences of that score. And there are also greater restrictions on using special category data for this type of scoring.
A lot of vendors are quite scared about giving too much information about the logic involved in generating those decisions, because often that's the secret sauce of their service. There's going to have to be a balancing act there, and some of the commentary in this case indicates that the risk of revealing trade secrets might not be an argument that they can lean on, [00:10:00] because ultimately we have to be mindful that the whole purpose of Article 22 is to give people clear transparency when significant decisions are being made about them.
So it's certainly going to force a greater review of some of these vendors' whole positioning around automated decision-making, both contractually with the customer and in terms of their public positioning as well.
Megan: Yeah, it's also quite timely, isn't it, as the judgment was handed down last week, in the same week that the EU AI Act was agreed, which will regulate credit scoring as a high-risk form of AI and introduce some similar increased obligations around transparency, explainability and things like that.
So I guess we'll have to wait and see.
Flick: Exactly right. Yeah. Megan, turning to the next case we were going to run through, which was SCHUFA 2: what happened there, and what are some of the key findings from that case?
Megan: Yep, SCHUFA 2. It relates to another of SCHUFA's processing activities. As part of its business model, SCHUFA also records and stores in its [00:11:00] own database information from public registries. In this particular case, it was the German Insolvency Register, and the information that they recorded and stored related to the granting of a discharge from remaining debts.
Now, that information was compiled and stored in the event that their contractual partners, i.e. their customers, asked them for it. And SCHUFA retained that information for three years after the entry was made in the registry, in accordance with a particular code of conduct which had been drawn up by the German Association of Credit Agencies.
However, the public registry itself had to comply with a six-month period for deletion, which was prescribed by German law relating to public records on insolvency proceedings. And the case ended up in the German administrative court following two data subject complaints to the German data protection regulator.
Those data subjects had requested that SCHUFA delete entries relating to decisions to discharge their remaining debts from SCHUFA's [00:12:00] database. SCHUFA refused to comply, explaining that it had a legitimate interest in continuing to process the data, that its activities were carried out in compliance with the GDPR, and that the six-month deletion period applicable to the registry didn't apply to SCHUFA.
Now, the regulator itself found that SCHUFA's processing was lawful, notably because SCHUFA complied with the three-year retention period set out in the code of conduct drawn up by the Association. But the data subjects brought an action against the regulator's decision, arguing that the regulator was obliged to take action on their deletion requests.
So the case went to the German courts. I won't go too much into their reasoning, but in summary, the court had doubts as to the lawfulness of a private agency storing public registry data in its own database. It said that even if that was lawful, the question remained as to whether the retention period was appropriate, and the court was of the view that [00:13:00] private credit agencies had to comply with the same retention period as the public register, which in this case would have been six months.
Now, the question referred to the CJEU that we're most interested in is whether it's permissible in principle for a private organisation to compile its own database existing in parallel to the national database, and how long that data can be retained for.
Flick: And what did they say then? What do people need to be mindful of as a result of this decision?
Megan: So the CJEU held that it was contrary to the GDPR for private organisations to keep such data for longer than the public register. In particular, the discharge from remaining debts is intended to allow the data subject to re-enter economic life and is therefore of existential importance to that person, because the information is still used as a negative factor when assessing their solvency.
The German legislator provided for a six-month retention period, and the court considered that, at the end of that period, the rights and interests of the data [00:14:00] subject took precedence over those of the public to have access to that information, and so the private organisation should only retain the information for as long as it is available in the public registry.
Flick: So a really helpful reminder there about the need to do that balancing test if we're looking to rely on legitimate interests: we can't just assume, because the data is in the public sphere, that we can continue to retain it for whatever period we decide. We always have to be mindful of the data subject's interests there.
It's quite fact-specific, though. Do you think it's potentially distinguishable from other scenarios or other examples of public registry data, like company officer information?
Megan: Yeah, I do think so. I think the court was particularly sensitive to the fact that the information related to a discharge from remaining debts, which was intended to allow the data subject to re-enter economic life, and arguments along those lines. So I do think it is potentially distinguishable.
It serves generally as a good reminder, though, that just because data is or was in the public [00:15:00] sphere or in a public registry, it doesn't mean that private companies have carte blanche over their use of the data and how long they retain it for. I think one of the key takeaways for companies engaging in similar sorts of activities would be: if you're collating and retaining data that's been made public on a public registry, don't retain it for longer than it's made available in the public register, and make sure that's in accordance with any prescribed local retention periods. Obviously, there were some specific German law requirements that were taken into consideration here.
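To make that retention takeaway concrete, here's a minimal sketch of the kind of check a private data holder might apply; the six-month registry period and three-year code-of-conduct period come from the case, but the function, structure and dates are purely illustrative.

```python
from datetime import date, timedelta

# Retention periods from the case: the public register's six months versus the
# trade body's three-year code of conduct. The code structure is an assumption.
REGISTRY_RETENTION = timedelta(days=182)    # roughly six months
INTERNAL_POLICY = timedelta(days=3 * 365)   # three years

def deletion_deadline(registry_entry: date) -> date:
    """Cap private retention at whichever period expires first; per SCHUFA 2,
    that is the public register's own deletion period."""
    return registry_entry + min(REGISTRY_RETENTION, INTERNAL_POLICY)

print(deletion_deadline(date(2023, 1, 1)))  # 2023-07-02, not three years later
```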
Flick: Yeah, thank you Megan, that's a really helpful takeaway. So, just to sum up both of these cases: SCHUFA 1 really emphasises an expansive view of automated decision-making. And just to emphasise, not all automated decision-making will be within the scope of Article 22, but if you are generating algorithmically driven outputs that could have legal impacts, denial of contract, etc., then just because you're [00:16:00] generating the score while a third party may be actioning the decision using that information doesn't take you outside the scope of Article 22. If you're a vendor doing that type of scoring, you may need to evaluate your practices and positioning on automated decision-making.
And then SCHUFA 2 reminds us of the need to be mindful of that balancing act and to consider the interests of the data subject and the types of data that we're processing. We can't assume, just because it's public data, that we can automatically build a legitimate interest argument for data retention.
So some helpful takeaways there. And certainly, I think SCHUFA 1 is a really landmark case, because we just haven't had any real decisions around how to interpret those Article 22 obligations, particularly in light of the increasing use of AI to drive decisions in day-to-day working practices, et cetera. We just need to be mindful of the expansive scope there. So thanks very much, Megan.
Megan: Thanks, Flick!
That's all from us for today. If you have any questions on [00:17:00] today's legal update, don't hesitate to reach out, and if you found it useful, please give us a like or review on your usual podcast channel. Thanks for tuning in. We'll see you next time.