On 7 December 2023, the Court of Justice of the European Union (CJEU) issued an important decision on how the EU General Data Protection Regulation (GDPR) governs AI-assisted decisions. The case arose in the financial services context, with the court holding that the GDPR’s AI rules apply when banks use credit scores to make consumer credit decisions. But the decision’s impact will likely not be limited to financial services. Regulators are already indicating it may also apply to other industries and business processes where AI increasingly plays a role, such as employment, healthcare, or housing. This post briefly summarizes the case and provides salient take-homes for companies.
What happened?
SCHUFA is Germany’s largest consumer credit rating agency. (The name is an acronym for Schutzgemeinschaft für allgemeine Kreditsicherung, or Protective Association for General Credit Security.) Like all credit agencies, SCHUFA collects information about consumers and processes it with algorithms to generate scores predicting whether consumers will meet financial commitments – such as scoring whether a consumer is likely to repay a loan. For U.S. readers, SCHUFA plays the role in the German market that the major credit rating agencies Equifax, Experian, and TransUnion play in the U.S. market.
When consumers apply for loans or other financial products at German banks, it is common for the bank to obtain the consumer’s SCHUFA score. A bank employee may manually review the SCHUFA score in deciding whether to grant the loan. Even so, according to the CJEU, “in almost all cases” the SCHUFA score determines whether the consumer is granted or denied credit.
Article 22 of the GDPR states that individuals “have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects … or similarly significantly affects him or her.” Such AI-facilitated decisions are permitted only (a) with the individual’s explicit consent, (b) where “necessary” for entering into or performing a contract with the individual, or (c) where expressly authorized by an enabling statute under EU or Member State law.
A German consumer whose loan application had been denied by a bank challenged SCHUFA’s credit-scoring practices in the German courts as violating Art. 22 GDPR. The courts faced a complex situation: SCHUFA used AI to generate the credit score, but did not make the actual credit decision. The bank, conversely, did not use AI to generate the score, but did make the credit decision – and that decision was made by an employee who reviewed the score. So, did SCHUFA make an “automated decision” that triggered Art. 22 GDPR, even though it didn’t make the ultimate decision about whether to extend credit? Did the bank make an “automated decision” by using an AI-generated score, even though its employees were the final deciders? Or did neither make the kind of “automated decision” that triggers Art. 22 GDPR? The German courts ultimately referred these questions to the CJEU.
The CJEU’s Decision
The CJEU began by making clear that it would not miss the forest for the trees: it would not read Art. 22 in a way that left no one responsible for complying with its rules on automated decision-making:
[I]n circumstances such as [these], in which three stakeholders are involved [i.e. SCHUFA, bank, and consumer], there would be a risk of circumventing Article 22 GDPR and, consequently, a lacuna in legal protection if [SCHUFA’s] establishment of the probability value must only be considered as a preparatory act and only the act adopted by the [bank] can … be classified as a ‘decision’ within the meaning of Article 22(1) [GDPR].
Accordingly, the CJEU noted that “an insufficient probability value [from SCHUFA] leads, in almost all cases, to the refusal of that bank to grant the loan applied for.” Thus, SCHUFA scores constitute “automated decision making” that triggers Art. 22 GDPR when a bank “draws strongly” on them “to establish, implement or terminate a contractual relationship” with a consumer.
In short, the CJEU held it didn’t matter that SCHUFA wasn’t making credit decisions using its AI-derived credit scores. Banks were making decisions based on those scores, and even if humans were involved in the decision making process, they were following the SCHUFA scores for “almost all” credit decisions. This was enough for the CJEU to conclude that SCHUFA scores as used by banks were “decision[s] based solely on automated processing … which produces legal effects … or similarly significantly affects [an individual]” under Art. 22 GDPR.
Interestingly, Germany has a national “enabling statute” – § 31 of the Federal Data Protection Act (FDPA) – that was passed specifically to allow SCHUFA to process personal data to create consumer credit scores. (We wrote about it back in 2017, when Germany passed its GDPR implementation statutes.) But in the lower-court proceedings, the German courts expressed doubts about whether § 31 FDPA complies with EU law. The CJEU therefore returned the case to the German courts to determine whether § 31 FDPA is sufficient to enable SCHUFA scoring (including banks’ use of SCHUFA scores for credit decisions).
Take-Homes
This case has the potential to significantly impact a number of industries. For the near term, businesses may want to consider the following:
- Financial services providers with EU business should promptly review and re-assess their consumer financial products for compliance. The immediate take-home of this case is that a financial institution that uses consumer credit scores for credit decisioning may be engaged in automated decision-making under Art. 22 GDPR – and its current processes may not comply.
- Prior to this case, financial institutions may have taken the position that interposing human review in the process of granting or denying consumer applications – even if the human reviewer typically followed the SCHUFA score – prevented Art. 22 GDPR’s “automated decision” rules from applying. That position may no longer be tenable.
- When making AI-powered decisions subject to Art. 22 GDPR, financial services providers must: (a) obtain consent to make an AI-powered decision, (b) permit consumers to contest the decision and express their point of view, and (c) enable consumers to “appeal” the decision to a human reviewer. All of this can require significant updates to business processes and resources.
- Under Art. 15 GDPR, consumers can also request information about how AI-powered decisions are reached, including “meaningful information about the logic involved.” This may require disclosing information about how scores are calculated – raising questions about whether financial services companies can obtain such information from their scoring vendors, and what role those vendors should play in fulfilling these requests.
- This case arose in the context of consumers applying for bank loans, but its reasoning could apply to other common consumer financial products and services, such as:
- Insurance policies
- Leases (e.g. automobile leases)
- Buy-now-pay-later arrangements or other microloans
- Consumer installment contracts (e.g. for appliances)
- This decision is not limited to financial services – it’s a general decision on AI issues that may have impacts across industries. The day after the CJEU issued its decision, the Hamburg Data Protection Commissioner (HDPC) issued a press release titled “Impacts of the SCHUFA Case on AI Applications.” It noted that employers are using AI to “pre-sort job applications,” and that medical institutions are using AI to “analyze which patients are particularly suited for a study.” The HDPC expressly views these types of non-financial algorithms as subject to the SCHUFA decision. It further noted that, thanks to the SCHUFA case, the scores such AI applications generate can no longer be viewed “as pure suggestions” for human reviewers. Instead, businesses will need to show that any human reviewer engages in meaningful, independent review of the AI output – in the HDPC’s words, “the person making the final decision needs expertise and enough time to question the machine-made initial decision.” Otherwise, it remains an “automated decision” subject to Art. 22 GDPR’s rules on consent, opt-out, and human review.
- What seems clear: European regulators will likely start viewing all AI applications as potentially subject to the SCHUFA decision, irrespective of industry or use case. Fields where scrutiny may be expected include (a) employment, (b) housing, (c) financial services (as outlined above), (d) healthcare, (e) communications (e.g., credit checks as a condition of internet access), and (f) any other service that can be deemed essential or important.
As a result, companies with EU business should identify where AI is used in their organizations. Any AI application that triggers Art. 22 GDPR will require the same compliance measures outlined above in the financial services context. For example, companies using AI-assisted recruiting tools in the EU would need to assess whether they must (a) obtain explicit consent from applicants, (b) enable applicants to contest AI scoring of their applications, and (c) enable applicants to obtain human review of their applications.
- What is not yet clear:
- How “impactful” a decision needs to be for Art. 22 GDPR to apply. For example, many companies use algorithms to “segment” their customers into interest groups so they can personalize the advertising sent to them. Would inferring that a consumer likes coffee more than tea – so he or she can receive coffee coupons (and not tea coupons) – be important enough to trigger the GDPR’s “automated decision making” rules? Similarly, if a taxi company uses an automated routing service, does Art. 22 GDPR apply when the routing service occasionally sends riders on slightly longer or slightly shorter routes? Decisions at this level seem addressable by other means (e.g., simple customer service) without having to break out the “big guns” of Art. 22 GDPR.
- When an AI-powered decision should be considered “necessary” for an individual to enter a contract with a company. European regulators traditionally take a narrow view of “necessity,” and the HDPC noted it should be viewed as an “exceptional case.” The only example the HDPC provided was “online platforms where an immediate and binding response is required” – which still leaves open the question of when such a response is “required.” Other European regulators suggested several years ago that AI could be “necessary” in the HR context if a company receives massive numbers of applications for a job posting – “automated decision-making may be necessary in order to make a short list of possible candidate[s].” Companies should in any case expect to have to justify any claim that AI is “necessary” for an agreement.
- Also unclear are questions of controllership, responsibility, and liability with respect to AI applications; market practice will likely develop before regulators or courts provide binding guidance. For the moment, AI customers and AI providers may need to work together to ensure that, where Art. 22 GDPR compliance needs to be built, it can be deployed as needed in customer-facing interactions.
As we have written before, all of the above is yet another reason companies should start inventorying the AI applications they use – and building out AI governance alongside privacy compliance, particularly now that a political agreement has been reached on the long-awaited EU AI Act.
- The U.S. already has rules similar to those at issue in the SCHUFA case. A number of U.S. state privacy statutes already contain rules permitting consumers to opt out of “decisions that produce legal or similarly significant effects” concerning them. This language originated in the GDPR. Thus, the SCHUFA decision, and the regulatory practice it fosters, may be relevant to how U.S. regulators go about enforcing AI rules in the U.S.
For example, in at least one U.S. state (Colorado), recent privacy regulations introduce concepts similar to where the CJEU landed in SCHUFA. Under the Colorado regulations, if a human reviews the output of “solely automated processing” – like AI – but does not engage in meaningful consideration of how the AI reached its result, the processing continues to be treated as “solely automated.” This resembles the CJEU’s holding that, if human reviewers almost always follow an AI-generated score, the decision remains “based solely on automated processing” under the GDPR.