STUDENT ESSAY Algorithmic Discrimination in Health Care: An EU Law Perspective

Volume 24/1, June 2022, pp. 93-103

Malwina Anna Wójcik

Introduction

Pursuant to article 168(7) of the Treaty on the Functioning of the European Union (TFEU), the organization of national health care systems and the definition of national health policy remain the exclusive competences of member states. In spite of clear differences in funding and management, European health care systems share common values of universality, access to good-quality care, equity, and solidarity, which presume a commitment to combating discrimination.[1] Nevertheless, in practice, significant divergences in access to and quality of health care persist within the European Union (EU), and vulnerable groups are often subject to discriminatory practices.[2] This problem is likely to be exacerbated by the growing deployment of artificial intelligence (AI) in medical diagnosis, prognosis, and benefit allocation. In spite of the presumed neutrality of technology, algorithmic decision-making is capable of perpetuating social inequalities and creating new patterns of discrimination.

This essay explores whether the EU’s current anti-discrimination legal framework offers adequate protection to patients who face automated discrimination. In order to answer this question, I analyze the problem of discrimination in health care from three perspectives: social, legal, and technological. I argue that EU anti-discrimination law, in its current state, is not well suited to address the challenges raised by algorithmic bias. Thus, there is an urgent need for reform.

The essay proceeds as follows. The first section explores the social perspective by mapping out discriminatory practices in health care. The next section addresses the legal perspective, introducing EU anti-discrimination law and discussing its pitfalls. This is followed by a discussion of the technological perspective that explores the use of AI in health care, its potential to remedy existing discriminatory practices, and its potential to reinforce discrimination. The following section analyzes the EU’s anti-discrimination legal framework in light of the algorithmic challenges and proposes reforms that could strengthen its resilience. The final section briefly examines the additional protection against algorithmic discrimination offered by the EU’s proposal for a regulation laying down harmonized rules on AI.

The social perspective: Discrimination in the provision of health care in the EU

In 2013, the Fundamental Rights Agency published a report surveying inequalities in access to and the quality of health care in selected member states.[3] The study focused on three particularly vulnerable groups within the migrant and ethnic minority population: women, older people, and young people with intellectual disabilities. It revealed that patients coming from these groups often face multiple discrimination, which means that they are discriminated against on more than one ground. In particular, two leading patterns of multiple discrimination emerged among the respondents: additive discrimination and intersectional discrimination.[4] Additive discrimination occurs when patients are simultaneously discriminated against on several grounds—such as race, ethnicity, religion or belief, sex, sexual orientation, age, or disability—and when each type of discrimination can be proven independently. For example, a disabled gay person can face discriminatory treatment in accessing health care because of both their disability and their sexual orientation. Intersectional discrimination, on the other hand, is not based on the additive character of discrimination grounds but rather on their unique synergy. For example, the experience of ethnic minority women who access reproductive health care is qualitatively different both from the experience of ethnic minority men and from the experience of white women.

According to the report, discrimination experienced by migrant and ethnic minority patients was either direct, when respondents were denied equal access to health care because of their characteristics, or indirect, when the respondents were treated equally but the treatment failed to account for their specific needs. For example, migrants often faced indirect discrimination because of linguistic, socioeconomic, and cultural barriers.[5] Vulnerable minority patients also experienced direct discrimination, such as delay or refusal of treatment, humiliating treatment, harassment, and forced treatment. The study found that in some cases the delay in treatment was caused by health care professionals’ lack of knowledge about conditions specific to particular ethnic minority groups, such as female genital mutilation.[6] Roma and Muslim women, as well as women with disabilities, were particularly likely to suffer undignified treatment as a result of intersectional discrimination, often in connection with violations of their reproductive rights; forced gynecological examinations, sterilizations, and abortions are among the examples given in the report.[7]

Many respondents claimed that they did not report the discrimination they suffered. This decision was caused mainly by their lack of knowledge of redress procedures, difficulties in proving the allegations, general mistrust in the effectiveness of the complaint process, and a fear of retaliation from health care or immigration authorities.[8] Moreover, the report indicated that a significant number of health care professionals have an insufficient understanding of the concept of discrimination. Interestingly, although many professionals were aware of the linguistic and structural barriers in accessing health care and found them problematic, they were hesitant to label them as discrimination.[9] Among the professionals who acknowledged discrimination, only a few were able to explain the problem of multiple discrimination and offer solutions.[10]

A recent study conducted by Equinet, the European Network of Equality Bodies, has shown that the existing patterns of discrimination in health care have been exacerbated due to the COVID-19 pandemic.[11] Multiple—and, in particular, intersectional—discrimination remains a problem, with socioeconomic status being the key intersecting ground.[12]

The legal perspective: EU anti-discrimination law

The issues of equality and nondiscrimination are addressed in both primary and secondary sources of EU law. The former include the founding treaties—that is, the Treaty on the European Union, the TFEU, and the European Charter of Fundamental Rights—and general principles of EU law, while the latter encompass legislative acts adopted by EU institutions pursuant to article 288 of the TFEU. For the purposes of this essay, the two most relevant types of secondary law instruments are directives and regulations. A directive is binding as to the result to be achieved, but it leaves member states with discretion over the mode of implementation. A regulation is directly applicable and binding in all member states.

According to article 2 of the Treaty on the European Union, equality is one of the founding values of the EU. Pursuant to article 3 of the treaty, equality, nondiscrimination, and social justice are also among the EU’s objectives. Furthermore, in Mangold, the European Court of Justice confirmed that nondiscrimination constitutes a general principle of EU law.[13] The European Charter, which applies to EU institutions and to member states when they implement EU law (art. 51(1)), protects everyone’s right of access to preventive health care and to benefit from medical treatment (art. 35). It also contains an open-ended anti-discrimination provision, which provides a non-exhaustive list of discrimination grounds (art. 21). Finally, pursuant to article 19 of the TFEU, the Council, acting unanimously after obtaining the European Parliament’s consent, “may take appropriate action to combat discrimination based on sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation.” While many secondary sources of EU law address the issue of discrimination based on these grounds, their scope of application differs.

In relation to individuals accessing health care services, only two EU directives apply: Directive 2000/43/EC (Race Equality Directive) and Directive 2004/113/EC (Goods and Services Directive). The former prohibits discrimination based on race and ethnic origin, inter alia, in the context of health care; the latter prohibits discrimination based on sex when accessing goods and services, including health care.[14] Both directives apply to direct and indirect discrimination in the private and public sector. Neither instrument explicitly protects against multiple discrimination. However, the Race Equality Directive makes reference to it in the preamble.[15] Both instruments operate on a reversed burden of proof. This means that if the claimant is able to present prima facie evidence of discrimination, the respondent must prove that his or her action did not constitute discrimination. The directives also provide for the establishment of equality bodies that are responsible for monitoring discrimination and protecting victims.[16]

Giacomo Di Federico points out three problems with EU anti-discrimination law in relation to health care.[17] First of all, the applicable directives do not prohibit discrimination based on religion or belief, disability, age, or sexual orientation in accessing health care. This is highly problematic because, as indicated in the previous section, patients are often subject to discrimination based on these characteristics. Second, individuals’ ability to bring a claim of discrimination on more than one ground is severely limited because the directives applicable in the field of health care neither define nor explicitly prohibit multiple discrimination. This is unsatisfactory because patients are rarely subject to discrimination on a single ground. The limited number of protected grounds means that patients can bring an additive multiple discrimination claim based only on sex and race or ethnic origin. Even for these grounds, particular hurdles arise in cases of intersectional discrimination because of the difficulty of finding a legitimate comparator for the disadvantaged group, as required by the law.[18] Third, the implementation of the directives varies among member states, especially when it comes to the structure and mandate of equality bodies; some states designate a single equality body while others favor multiple bodies specialized in a specific ground of discrimination.[19] Unfortunately, these complexities often contribute to the aforementioned phenomenon of underreporting and poor outcomes for complainants. In the context of health care, equality bodies experience particular difficulties due to the low number of complaints, problems with gathering evidence, lack of expertise to deal with the complexity of health care systems, lack of competences to make legally binding decisions, insufficient resources, inadequate understanding of the problem of discrimination among health care providers, and failure to implement equality bodies’ recommendations.[20]

Finally, it is worth underlining that individuals can rely on the anti-discrimination provisions of the directives and article 21 of the European Charter only when the situation falls within the scope of EU law.[21] Therefore, because of the EU’s limited competences in the area of health care, the situations in which patients can directly invoke EU anti-discrimination law are few.[22] The EU retains shared competence in the regulation of free movement of medical goods and services on the internal market (art. 4(2)(a) of the TFEU) and common safety concerns in public health matters (art. 4(2)(k) and art. 168(4) of the TFEU). The EU can also support, coordinate, or supplement the actions of member states in the protection and improvement of public health (art. 6(a) of the TFEU). However, as mentioned in the introduction, the organization of national health care systems remains the exclusive competence of member states, and thus no harmonization is possible in this regard (art. 168(7) of the TFEU). Therefore, it is mainly for the member states themselves to address the problem of discrimination in the field of health care. Unfortunately, this often leads to unequal levels of protection against discrimination in health care across the EU.

The technological perspective: Artificial intelligence and health care

Given that EU anti-discrimination law does not adequately address the nature of discrimination faced by patients in Europe, adding AI to this already complex picture raises new concerns. On the one hand, AI offers solutions that can help tackle existing discriminatory practices. On the other, it can also create new patterns of discrimination, some of which are difficult to detect and address. This section explains the use of AI in health care and explores its benefits and risks.

The use of artificial intelligence in health care

AI can be described as “computers’ ability to mimic human behavior and learn.”[23] The process of learning takes place through algorithms. An algorithm is a series of computational instructions that transforms the input value into the output value.[24] An important field of AI is machine learning, which allows the computer to detect patterns in data and use them to make predictions or decisions.[25] Machine learning algorithms are usually trained using “big data,” a collection of information of high volume, variety, and velocity.[26]
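
To make these definitions concrete, the short Python sketch below contrasts a conventional, hand-written algorithm with a machine learning model whose decision rule is inferred from training data. It is purely illustrative: the toy symptom scores, the threshold, and the variable names are my own assumptions and do not come from the sources cited above.

```python
# Illustrative sketch only: a fixed, hand-written rule versus a rule learned
# from data. The toy "symptom score" values are invented for demonstration.
from sklearn.linear_model import LogisticRegression

# A classical algorithm: a fixed series of instructions mapping input to output.
def fixed_rule(symptom_score: float) -> int:
    """Return 1 (refer the patient) if the score exceeds a hard-coded threshold."""
    return 1 if symptom_score > 5.0 else 0

# A machine learning model: the decision rule is detected in labelled examples.
X_train = [[1.0], [2.5], [4.0], [6.0], [7.5], [9.0]]  # symptom scores (inputs)
y_train = [0, 0, 0, 1, 1, 1]                          # past referral decisions (outputs)

model = LogisticRegression()
model.fit(X_train, y_train)  # pattern detection on the training data

new_patient = [[5.5]]
print(fixed_rule(5.5))                 # output of the hand-written rule
print(model.predict(new_patient)[0])   # output predicted from learned patterns
```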

In medicine, machine learning systems can be used for “prognostics, diagnostics, image analysis, resource allocation, and treatment recommendations.”[27] During the COVID-19 pandemic, the deployment of AI in health care has intensified. For example, experts have been working on developing algorithms that can diagnose COVID-19 through chest scan analysis or predict the severity of infection.[28]

However, a crucial concern regarding some machine learning algorithms is that the output they generate is not fully predictable, and sometimes it is not possible to explain why and how they have reached a decision. That is why some scholars refer to algorithmic decision-making in health care as “black-box medicine.”[29] An interesting example of an opaque system is IBM Watson, which is currently being tested as an evidence-based decision-support system for medical use.[30] Watson uses advanced machine learning techniques that allow the system to infer rules, develop classification models, make predictions, and make decisions based on the analysis of a large set of both structured and unstructured data, such as doctor’s notes.[31] Unfortunately, its data-driven approach makes Watson “unpredictable by design.”[32]

The benefits of artificial intelligence in combating discrimination in the field of health care

As noted above, health care providers discriminate against patients for two main reasons: they are biased (either openly or subconsciously) or they lack knowledge about health problems specific to minority groups. Both of these issues could be addressed by the use of AI.

First, algorithmic decision-making has the potential to avoid stereotypes inherent in human decision-making. For example, it is possible to train algorithms to be fairness-aware “through incorporating anti-discriminatory constraints during data processing or removing the sources of bias prior to processing.”[33] However, in order for these algorithms to be successfully designed and deployed, we need comprehensive data documenting patients’ experiences of discrimination in health care. This is necessary in order to identify vulnerable groups and correlations that can lead to discriminatory outcomes. Since different mathematical notions of fairness are in tension with one another, data concerning discrimination are essential to establish the most appropriate fairness criteria. As stated in the preceding sections, the lack of data on inequality remains a problem in the EU, especially as many discrimination cases in health care are unreported.
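
To illustrate what “removing the sources of bias prior to processing” can mean in practice, the sketch below implements one simple pre-processing intervention: reweighing training examples so that group membership and the favorable outcome become statistically independent before a model is trained, in the spirit of well-known reweighing techniques. The group labels, outcomes, and weights are hypothetical and serve only to show the mechanics; they are not drawn from the cited literature.

```python
# Minimal sketch of a pre-processing fairness intervention: reweighing training
# examples so that the protected attribute and the favourable outcome become
# statistically independent before training. All records below are invented.
from collections import Counter

# Each record: (group, favourable_outcome), where "group" is a protected attribute.
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

n = len(records)
group_counts = Counter(group for group, _ in records)
label_counts = Counter(label for _, label in records)
joint_counts = Counter(records)

def reweigh(group: str, label: int) -> float:
    """Weight = P(group) * P(label) / P(group, label); combinations that are
    under-represented in the historical data receive weights above 1."""
    expected = (group_counts[group] / n) * (label_counts[label] / n)
    observed = joint_counts[(group, label)] / n
    return expected / observed

for group, label in records:
    print(f"group={group} outcome={label} weight={reweigh(group, label):.2f}")
# These weights could then be passed to a learner (e.g., via sample_weight in
# scikit-learn) so the model does not simply reproduce the historical imbalance.
```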

Second, AI clinical-decision-support systems that are trained on a sufficiently large and diverse set of data could help health care practitioners fill in possible gaps in medical knowledge, especially when it comes to minority-specific health conditions. The added value of systems such as IBM Watson is that they can overcome the human cognitive limitations in collecting and processing information and are capable of outperforming human doctors in diagnosis.[34] Moreover, AI allows for the progress of personalized medicine that is individually tailored to the needs of patients.[35]

Third, it is also possible to adjust the algorithm’s output to account for the needs of specific ethnic or racial groups. For example, Alvin Rajkomar et al. suggest ways in which distributive justice could guide the development and implementation of AI in the field of health care, actively advancing health equity for protected groups.[36] Recently, a group-specific approach to data analysis has been widely discussed in the context of ensuring a more equitable pandemic response. Some European scientists and activists have urged that collecting epidemiologic and mortality data by race and ethnic origin is necessary to address the impact of COVID-19 on specific communities.[37] For example, as reported in the Fundamental Rights Agency’s bulletin, during the COVID-19 pandemic, Roma, whose underlying health problems make them more susceptible to severe symptoms of infection, have continued to experience discrimination when accessing health care.[38] On the other hand, certain commentators have warned against the use of racially tailored algorithms in health care, arguing, inter alia, that racial differences can in fact be genetic or socioeconomic and that race or ethnicity are elusive concepts that depend largely on self-identity.[39]

Fourth, the wide deployment of AI in health care, coupled with its comprehensive regulation at the European level, offers a chance to reinforce anti-discrimination protections for patients. As stated earlier, the scope of EU anti-discrimination law is limited in the field of health care because the organization of domestic social security systems is the sole competence of member states. On the other hand, the EU has competence to regulate AI technologies pursuant to articles 114 and 168(4)(c) of the TFEU (the internal market and the quality and safety of medical devices, respectively). Indeed, the EU is currently in the process of developing a complex regulatory framework for AI that has the potential to ensure a high degree of oversight over algorithms in health care, both before and after their implementation. The new regulation aims to minimize the risk of algorithmic discrimination and to help detect and rectify it.

Algorithmic discrimination in health care

Although AI can offer potential solutions to combat human bias, it can also widen existing divisions in the provision of health care. The most obvious concern is the inequitable deployment of new technologies, which are “disproportionately available to well-off, educated, young, and urban patients and to urban and academic medical centers.”[40] Innovative solutions such as personalized medicine are usually very costly and thus are likely to remain unavailable to poor and vulnerable groups, exacerbating inequalities in access to and the quality of health care.[41]

Apart from the question of availability, issues with the fairness of AI itself also arise. Sharona Hoffman and Andy Podgurski distinguish three main problems with algorithmic decision-making: measurement errors, selection bias, and feedback-loop bias.[42] Measurement errors relate to the quality of data. The “garbage in, garbage out” principle states that incomplete or misleading data inevitably lead to unsatisfactory algorithmic performance. The quality and interoperability of health data in the EU leave much to be desired. For example, during the COVID-19 pandemic, the inability to swiftly exchange and compare epidemiologic data hampered a coordinated response.[43] Moreover, due to the structural, linguistic, and socioeconomic barriers to accessing health care, vulnerable groups are likely to be significantly underrepresented in the main sources of health data, such as electronic health records. When big data on which the algorithm is trained are not representative of the target patient population, selection bias occurs. In this case, AI can produce unintended results, such as interpreting the lack of data as the lack of disease. For example, when an algorithm used to distinguish malignant and benign moles is trained on fair-skinned patients, it might fail to properly diagnose moles on people of color.[44] Similarly, algorithms deployed to detect cardiovascular diseases might underperform on women because most of the medical training data concern men.[45] Moreover, if the data reflect systemic bias toward different groups, existing patterns of discrimination can be entrenched in the algorithm; this is called feedback-loop bias. For example, according to the Fundamental Rights Agency, health care professionals often suspect immigrants, older people, and people with disabilities of exaggerating their health problems in order to claim benefits.[46] This harmful stereotype can, for instance, cause doctors to routinely administer incorrect doses of medicine to patients belonging to one of these groups. If these data are later fed to an algorithm, the output is likely to reaffirm human bias. This problem is especially difficult to detect, since even seemingly neutral data (such as place of residence) can be a proxy for a protected ground of discrimination (such as race or ethnic origin). An illustrative example of proxy discrimination is provided by an algorithm used to identify patients who are likely to miss their medical appointment. In this case, the system caused the overbooking of people of color because prior no-shows were a proxy for socioeconomic background, which in turn was a proxy for race.[47]
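
The proxy mechanism described above can be made concrete with a small, self-contained simulation. The sketch below trains a no-show predictor that never sees the protected attribute, yet produces systematically higher risk scores for one group because a correlated feature (prior no-shows) acts as a proxy. The synthetic data, group labels, and probabilities are my own invention for illustration; they are not the data from the cited study.

```python
# Minimal sketch of proxy discrimination: the model never receives the protected
# attribute, yet a correlated feature (prior no-shows) reproduces the disparity.
# All data are synthetic and purely illustrative.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_patient(group: str):
    # Structural barriers make prior no-shows more frequent in group "B".
    prior_no_show = random.random() < (0.6 if group == "B" else 0.2)
    missed_next = random.random() < (0.5 if prior_no_show else 0.2)
    return group, [int(prior_no_show)], int(missed_next)

data = [make_patient("A") for _ in range(500)] + [make_patient("B") for _ in range(500)]
X = [features for _, features, _ in data]  # the group label is never used as a feature
y = [label for _, _, label in data]

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]      # predicted probability of a no-show

for group in ("A", "B"):
    group_scores = [s for (g, _, _), s in zip(data, scores) if g == group]
    print(group, round(sum(group_scores) / len(group_scores), 2))
# Group "B" receives higher average no-show scores, so a scheduler that overbooks
# high-scoring patients would disadvantage that group without ever using the
# protected attribute directly.
```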

Facing the algorithmic challenge: The future of EU anti-discrimination law

The challenges raised by AI further call into question the effectiveness of EU anti-discrimination law in the field of health care, reinforcing already existing problems: the limited grounds of discrimination, the absence of protection in cases of multiple discrimination, and structural and evidentiary difficulties with pursuing a complaint.

First, the problem of proxy discrimination escapes the legal framework, which is based on specific protected grounds.[48] The European Court of Justice has not developed coherent criteria for assessing whether a proxy falls within the scope of protected categories. For example, in Dekker, the court accepted that discrimination based on pregnancy is a form of discrimination based on sex.[49] However, in Jyske Finans, the court ruled that unequal treatment based on the claimant’s country of origin and patronym could not constitute discrimination based on ethnic origin.[50] The problems with discrimination by proxy are exacerbated when it comes to health care, where protected grounds are limited to just three: race, ethnic origin, and sex. Moreover, because discovering previously unknown correlations lies in the very nature of algorithms, they are capable of discriminating in new, abstract ways, making the established categories redundant. The anti-discrimination directives appear inherently unsuitable to address this problem, as they are designed with a human perpetrator in mind. As humans, we use common sense to recognize discriminatory patterns in one another’s behavior. Thus, in law, discrimination and fairness are “contextual” concepts, and their determination is guided by judicial logic and intuition.[51] Unfortunately, the same tools are not equally effective against algorithmic discrimination, which is more subtle and unintuitive.

Second, as Raphaële Xenidis underlines, algorithms are likely to reinforce intersectional discrimination, which is already “a blind spot” in EU law.[52] She argues that the risk is particularly high in algorithmic profiling, which uses very precise identity data to classify subjects into distinctive subgroups.[53] Intersectional minorities are most likely to be underrepresented and misrepresented in datasets, which are infused with historic bias. Thus, if such a technology were used to allocate health care benefits, it would risk deepening intersectional discrimination, which is already pervasive in health care. At the same time, neither the anti-discrimination directives nor the case law of the European Court of Justice explicitly address the problem of intersectional discrimination.[54]

Third, the nature of algorithmic bias makes it difficult to establish that prima facie discrimination exists. In fact, it is entirely possible that the victims of algorithmic bias will never know that they were discriminated against.[55] Again, these concerns are particularly strong in health care, where general awareness of discrimination is low among both patients and health care providers. As stated in the previous sections, patients coming from vulnerable groups often refrain from reporting discrimination precisely because it is difficult to meet the high evidentiary burden required to prove it.

Last but not least, unless it can be proven that the developers of discriminatory algorithms were explicitly or implicitly biased, most cases of algorithmic discrimination would qualify as indirect discrimination. Thus, according to EU law, these discrimination claims could be quite easily rebutted by proving that the application of the algorithm is “objectively justified by a legitimate aim and the means of achieving that aim are appropriate and necessary.”[56] As noted by Daniel Schönberger, many algorithms used in health care are likely to fulfill the legitimate aim, suitability, and necessity requirements, and thus the outcome of the challenge is likely to depend on the proportionality test.[57] There is a risk that courts will find the deployment of algorithms of overall high accuracy proportionate, even if they disadvantage certain protected groups.

Two approaches can be taken to address the discrepancies between EU anti-discrimination law and algorithmic discrimination. On the one hand, a compelling paper by Sandra Wachter, Brent Mittelstadt, and Chris Russell combines legal, ethical, and technological perspectives in an attempt to propose a technical standard in AI development that will allow technology developers to detect discrimination early on and provide judges with the resources needed to reach a well-informed decision in cases of automated discrimination.[58] The authors argue that the “gold standard” of review developed by the European Court of Justice in Seymour-Smith, which defines disparity by assessing the effects on both the disadvantaged and the advantaged group, can be translated into the statistical method of conditional demographic disparity.[59] Importantly, this method does not offer a clear-cut answer as to whether unlawful discrimination has occurred. Instead, its purpose is to provide support for judicial assessment of automated discrimination by allowing the judiciary to identify possible group comparators and compare the distribution of outcomes among various protected groups. Conditional demographic disparity could help establish a common standard of assessment in algorithmic discrimination cases, while leaving judges with interpretative discretion when it comes to the final result. Accordingly, it could contribute to bridging the gap between technical and legal notions of fairness, embracing the contextual approach to equality favored by the European Court of Justice.
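
For illustration, the sketch below shows how conditional demographic disparity could be computed, under the common formulation in which demographic disparity for a protected group is its share among rejected individuals minus its share among accepted individuals, and the conditional measure averages this difference across strata defined by a legitimate conditioning attribute, weighted by stratum size. The records, strata, and outcome labels are invented for illustration and are not taken from the cited paper.

```python
# Minimal sketch of conditional demographic disparity (CDD). Demographic
# disparity (DD) for the protected group is its share among rejected individuals
# minus its share among accepted individuals; CDD averages DD across strata,
# weighted by stratum size. The records below are invented for illustration.
from collections import defaultdict

# Each record: (protected_group_member, accepted, stratum), where the stratum is
# a legitimate conditioning attribute such as a clinical need category.
records = [
    (True, False, "high_need"), (True, True, "high_need"),
    (False, True, "high_need"), (False, True, "high_need"),
    (True, False, "low_need"), (True, False, "low_need"),
    (False, False, "low_need"), (False, True, "low_need"),
]

def demographic_disparity(subset):
    accepted = [r for r in subset if r[1]]
    rejected = [r for r in subset if not r[1]]
    if not accepted or not rejected:
        return 0.0
    share_of_rejected = sum(r[0] for r in rejected) / len(rejected)
    share_of_accepted = sum(r[0] for r in accepted) / len(accepted)
    return share_of_rejected - share_of_accepted

strata = defaultdict(list)
for record in records:
    strata[record[2]].append(record)

cdd = sum(len(s) * demographic_disparity(s) for s in strata.values()) / len(records)
print(f"CDD = {cdd:.2f}")  # positive values mean the protected group is
                           # over-represented among rejections within strata
```

A figure of this kind does not decide the legal question; consistent with the essay's reading of the proposal, it merely gives judges a common starting point for identifying comparator groups and assessing disparity in context.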

On the other hand, Xenidis proposes ways in which existing concepts and doctrines of EU anti-discrimination law can be “tuned” to address the new challenges raised by AI.[60] In particular, she focuses on demarginalizing the concept of multiple discrimination, which is acknowledged in the preamble of the Race Equality Directive. Xenidis draws the reader’s attention toward doctrinal developments that favor the recognition of multiple discrimination as an established legal concept.[61] For example, the opinion of Advocate General Kokott in Parris, albeit not followed by the court, emphasizes that in order to reflect the nature of discrimination in real life, the court must analyze the discrimination factors in combination rather than in isolation.[62] Another promising development in the anti-discrimination jurisprudence is the relaxation of the link between the identity of the victim and the protected grounds.[63] The court is also increasingly willing to find direct discrimination without proof of actual harm to particular victims, when protected groups are directly targeted.[64] According to Xenidis, these approaches are particularly useful in dealing with the problem of proxy discrimination by algorithms, as they relax the burden of proof and introduce flexibility to the rigid framework of protected grounds.[65] Lastly, she argues that further flexibility can be achieved by fully exploiting the possibilities offered by the open-ended nature of article 21 of the European Charter and the general principle of nondiscrimination.[66] It is worth noting that article 21 was recently successfully invoked in a case of discrimination based on religion in the cross-border treatment context.[67]

Clearly, the resilience of EU anti-discrimination law against the challenges raised by automated discrimination is particularly low in the field of health care. The legal framework—which is already patchy and fails to address the nature of discrimination faced by many patients—is not likely to offer the desired level of protection. Hence, reforms are urgently needed to strengthen its resilience. Most importantly, the gap in protected grounds needs to be bridged. In this context, it is worth revisiting the proposal for a Horizontal Anti-Discrimination Directive, which would extend protection against discrimination based on religion or belief, disability, age, and sexual orientation to the areas covered by the Race Equality Directive.[68] Another much-needed reform proposed by the European Parliament’s amendment to the Horizontal Anti-Discrimination Directive is the prohibition of direct and indirect discrimination on multiple grounds.[69] The implementation of these proposals should be coupled with a coherent approach by the European Court of Justice, which should develop its future jurisprudence by embracing the flexibilities described by Xenidis. Lastly, efforts to find a common grammar between the legal and mathematical notions of fairness should continue in order to enable the detection and assessment of algorithmic discrimination.

The new Regulation on Artificial Intelligence: A source of additional protection?

EU anti-discrimination law can be adjusted to better address the challenges raised by algorithmic decision-making. Nevertheless, when it comes to health care, the applicability of anti-discrimination legislation remains limited because the EU’s competences in the area are mainly shared and supportive. Thus, EU anti-discrimination law alone does not provide sufficient protection to patients facing automated bias. In this context, it is interesting to consider how discrimination in health care is tackled by the recent proposal for an EU regulation harmonizing the rules on AI.[70]

The explanatory memorandum for the proposal states that the regulation would complement EU anti-discrimination law by minimizing the risk of algorithmic discrimination.[71] Moreover, the proposal acknowledges the need to ensure good-quality data (recital 44) and “non-discriminatory access to health data” (recital 45). The regulation provides for differentiated treatment of AI systems according to the level of risk they pose, ranging from unacceptable to minimal. AI systems posing unacceptable risk, such as those that violate fundamental rights by exploiting social vulnerabilities or manipulating human behavior, are prohibited.[72] Recital 27 defines high-risk systems as those that “have a significant harmful impact on the health, safety and fundamental rights of persons.” They are subject to strict obligations both before and after being placed on the market. According to article 6, there are two main categories of high-risk systems.

The first category comprises AI systems intended to be used as a safety component of products that are subject to third-party ex ante conformity assessment or AI systems that are themselves a product subject to third-party ex ante conformity assessment under EU harmonization legislation listed in annex II. AI that is either stand-alone software or an accessory to a medical device (e.g., software for a wearable device) can fall within the scope of the new Medical Devices Regulation 2017/745, which is listed in annex II of the AI regulation proposal.[73] The Medical Devices Regulation covers more AI-based systems than its predecessor, the Medical Devices Directive 93/42/EEC, as it expands the scope of medical purposes by including “prediction” and “prognosis” of a disease.[74] The conformity assessment procedure for medical devices depends on their classification into four categories: I (low risk), IIa (moderate risk), IIb (medium risk), and III (high risk). The class is ascertained on the basis of the device’s intended purpose and the inherent risks associated with it.[75] While class I requires only a self-assessment by the manufacturer, classes IIa, IIb, and III require a varying degree of intervention by a notified body.[76] According to rule 11 of annex VIII to the Medical Devices Regulation, software is classified as low risk unless it is used for medical diagnosis or therapy or to monitor physiological processes. In these cases, it falls under class IIa (moderate risk), IIb (medium risk), or III (high risk), depending on its possible impact on the state of health. This means that AI systems that are classified as medical devices of moderate, medium, or high risk would need to comply with both the Medical Devices Regulation and the additional ex ante and ex post risk assessments and safety requirements for high-risk systems under the proposed AI regulation. However, AI systems that are classified as low-risk medical devices, and thus are not subject to third-party ex ante assessment under the Medical Devices Regulation, would not be considered high-risk systems for the purpose of the AI regulation proposal.
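
Purely as a schematic illustration of the classification logic described in this paragraph, the sketch below maps a piece of medical software to a risk class in the spirit of rule 11 and then asks whether, on the essay's reading of article 6 of the proposal, it would count as high-risk AI. It deliberately simplifies both instruments; the function names, parameters, and impact labels are my own assumptions, and the sketch is not a substitute for a legal assessment.

```python
# Schematic, simplified sketch of the classification logic discussed above:
# rule 11 of annex VIII to the Medical Devices Regulation assigns medical
# software to a risk class, and a device requiring third-party conformity
# assessment is, on the essay's reading of article 6 of the AI regulation
# proposal, treated as high-risk AI. For illustration only.
from enum import Enum

class MDRClass(Enum):
    I = "low risk"
    IIA = "moderate risk"
    IIB = "medium risk"
    III = "high risk"

def mdr_class_for_software(diagnosis_or_therapy: bool,
                           monitors_physiology: bool,
                           impact_on_health: str) -> MDRClass:
    """Simplified reading of rule 11: diagnostic, therapeutic, or monitoring
    software is class IIa or above, depending on its possible impact on health."""
    if not (diagnosis_or_therapy or monitors_physiology):
        return MDRClass.I
    return {"serious": MDRClass.III, "significant": MDRClass.IIB}.get(
        impact_on_health, MDRClass.IIA)

def high_risk_under_ai_proposal(mdr_class: MDRClass) -> bool:
    """Classes IIa, IIb, and III require a notified body's involvement, which
    brings the system within the proposal's first category of high-risk AI;
    class I self-assessment does not."""
    return mdr_class is not MDRClass.I

triage_software = mdr_class_for_software(True, False, "significant")
print(triage_software, high_risk_under_ai_proposal(triage_software))  # MDRClass.IIB True
```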

The second category of high-risk systems comprises stand-alone systems listed in annex III, which mentions, inter alia, “access and enjoyment of essential private and public services.” Under this section, the annex explicitly includes determining eligibility for public assistance benefits and services and allocating emergency services, such as medical aid.[77] Thus, algorithms deployed to assess health care benefits or dispatch ambulances would likely fall within this category and attract a high level of protection. Nevertheless, it is less clear whether an algorithm identifying patients who are likely to miss appointments would be classified as high risk. It could be argued that such a system is simply an administrative tool used to avoid under-booking, not to assess eligibility for or the priority of benefits. Yet, as described above, a system of this kind can trigger discriminatory results for patients.

The proposed AI regulation provides additional safeguards against algorithmic discrimination by high-risk systems, setting obligations relating to risk management, data quality, technical documentation, transparency and provision of information to users, quality management systems, human oversight, robustness, accuracy, and cybersecurity.[78] However, certain improvements could be introduced to the proposal in order to tackle the problem of discrimination in health care more effectively. For example, the regulation could include a direct cause of action for people suffering discrimination by algorithms. It would also be desirable to broaden the list of high-risk systems in annex III to ensure that algorithms that cannot be classified as moderate-, medium-, or high-risk medical devices under the Medical Devices Regulation, but are nevertheless used in the context of health care, do not escape the higher level of scrutiny.

Conclusion

Apart from perpetuating social inequalities and violating fundamental rights, algorithmic discrimination calls into question the very usefulness of AI. In the case of health care, the stakes are particularly high, as the life and health of marginalized and vulnerable minority groups could be endangered. The potential success or failure of AI in the diagnosis of minority-specific health conditions and the equitable distribution of benefits ultimately depends on the availability of health data concerning these groups, who continue to face obstacles in accessing health care.

Unfortunately, current EU anti-discrimination law does not offer adequate protection to patients facing discrimination, much less to those facing algorithmic discrimination. Addressing this problem will be possible only if the social, legal, and technological perspectives on discrimination are analyzed together. Thus, EU anti-discrimination law in the field of health care should be reformed to better reflect the social experience of discrimination, which must be extensively surveyed by equality bodies. Moreover, even if automating the notion of fairness is neither possible nor desirable, law and technology must look for ways to develop common standards of assessing discrimination. Beyond anti-discrimination law, the additional protections under the proposed regulation on AI are also welcome in order to ensure that fairness is monitored in the design and implementation phases.

Malwina Anna Wójcik is a PhD student at the University of Bologna, Italy.

Please address correspondence to the author. Email: malwinaanna.wojcik2@unibo.it.

Competing interests: None declared.

Copyright © 2022 Wójcik. This is an open access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited.

References

[1] Conclusions of the EU Council of 2 June 2006, OJ 2006 C 146/1.

[2] G. Di Federico, “Access to healthcare in the European Union: Are EU patients (effectively) protected against discriminatory practices?,” in L. S. Rossi and F. Casolari (eds), The principle of equality in EU law (New York: Springer, 2017), pp. 229, 232.

[3] EU Fundamental Rights Agency, Inequalities and multiple discrimination in access to and quality of healthcare (Luxembourg: Publications Office of the European Union, 2013).

[4] Ibid., p. 7.

[5] Ibid., pp. 47, 59.

[6] Ibid., p. 55.

[7] Ibid., p. 75.

[8] Ibid., p. 9.

[9] Ibid., p. 63.

[10] Ibid.

[11] Equinet, Equality, diversity and non-discrimination in healthcare: Learning from the work of equality bodies (Brussels: Equinet, 2021), p. 11.

[12] Ibid., p. 9.

[13] Case C-144/04, Werner Mangold v. Rüdiger Helm.

[14] Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, OJ L 180, 19.7.2000, pp. 22–26 (Race Equality Directive), art. 3(1)(e); Council Directive 2004/113/EC of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services, OJ L 373, 21.12.2004, pp. 37–43 (Goods and Services Directive), art. 3(1) and recital 12.

[15] Race Equality Directive (see note 14), recital 14.

[16] Ibid., art. 13; Goods and Services Directive (see note 14), art. 12.

[17] Di Federico (see note 2), pp. 239–241.

[18] S. Fredman, “Intersectional discrimination in EU gender equality and non-discrimination law,” European Commission, Directorate-General for Justice and Consumers (2016), p. 65.

[19] N. Crowley, “Equality bodies making a difference,” European Commission, Directorate-General for Justice and Consumers (2019), p. 48.

[20] Equinet (see note 11), p. 11.

[21] Race Equality Directive (see note 14), art. 3; Goods and Services Directive (see note 14), art. 3; Charter of Fundamental Rights of the European Union, OJ C 326, 26.10.2012, pp. 391–407, art. 51(2).

[22] Di Federico (see note 2), p. 236.

[23] S. Hoffman and A. Podgurski, “Artificial intelligence and discrimination in health care,” Yale Journal of Health Policy, Law, and Ethics 19/3 (2020), pp. 1, 8.

[24] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to algorithms (Cambridge, MA: MIT Press, 2009), p. 5.

[25] I. G. Cohen, R. Amarasingham, A. Shah, et al., “The legal and ethical concerns that arise from using complex predictive analytics in health care,” Health Affairs 33/7 (2014), pp. 1139–1140.

[26] W. N. Price II, “Artificial intelligence in health care: Applications and legal implications,” SciTech Lawyer 14 (2017), p. 10.

[27] Ibid.

[28] E. Gómez-González and E. Gómez, Artificial intelligence in medicine and healthcare: Applications, availability and societal impact (Luxembourg: Publications Office of the European Union, 2020), p. 47; X. Jiang, M. Coffee, A. Bari, et al., “Towards an artificial intelligence framework for data-driven prediction of coronavirus clinical severity,” Computers, Materials and Continua 63/1 (2020), p. 537.

[29] Price II (see note 26).

[30] F. Lagioia and G. Contissa, “The strange case of Dr. Watson: Liability implications of evidence-based decision support systems in the health care,” in A. Santosuosso and C. A. Redi (eds), Law, Sciences and New Technologies (Pavia: Pavia University Press, 2020).

[31] Ibid., p. 83.

[32] Ibid.

[33] M. A. Wojcik, “Machine-learnt bias? Algorithmic decision making and access to criminal justice,” Legal Information Management 20/2 (2020), p. 99. See also European Parliament Research Service, Understanding algorithmic decision-making: Opportunities and challenges (March 2019), p. 46.

[34] Lagioia and Contissa (see note 30), p. 84.

[35] Gómez-González and Gómez (see note 28), p. 22.

[36] A. Rajkomar, M. Hardt, M. D. Howell, et al., “Ensuring fairness in machine learning to advance health equity,” Annals of Internal Medicine 169/12 (2018), pp. 866–872.

[37] V. Waldersee, “COVID tolls turn spotlight on Europe’s taboo of data by race,” Reuters (November 19, 2020). Available at https://www.reuters.com/article/uk-health-coronavirus-europe-data-insigh-idUKKBN27Z0K6.

[38] EU Fundamental Rights Agency, Bulletin #5: Coronavirus pandemic in the EU; The impact on Roma and travellers (Luxembourg: Publications Office of the European Union, 2020), p. 19.

[39] Hoffman and Podgurski (see note 23), pp. 21–22.

[40] T. C. Veinot, H. Mitchell, and J. S. Ancker, “Good intentions are not enough: How informatics interventions can worsen inequality,” Journal of the American Medical Informatics Association (2018), pp. 1080, 1081.

[41] E. C. Hayden, “This girl’s dramatic story shows hyper-personalized medicine is possible—and costly,” MIT Technology Review (2019). Available at https://www.technologyreview.com/s/614522/thisgirls-dramatic-story-shows-hyper-personalized-medicine-is-possibleand-costly/.

[42] Hoffman and Podgurski (see note 23), pp. 12–16.

[43] G. Di Federico, “Stuck in the middle with you … wondering what it is I should do. Some considerations on EU’s response to COVID-19,” EUROJUS 7 (2020), pp. 60, 71.

[44] A. S. Adamson and A. Smith, “Machine learning and health care disparities in dermatology,” JAMA Dermatology 154/11 (2018), p. 1247.

[45] C. Niethammer, “AI bias could put women’s lives at risk: A challenge for regulators,” Forbes (March 2, 2020). Available at https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/?sh=56af47fe534f.

[46] EU Fundamental Rights Agency (see note 3), p. 71.

[47] M. Samorani and L. G. Blount, “Machine learning and medical appointment scheduling: creating and perpetuating inequalities in access to health care,” American Journal of Public Health 110/4 (2020), p. 440.

[48] S. Wachter, B. Mittelstadt, and C. Russell, “Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI,” Computer Law and Security Review 41 (2021), p. 11.

[49] Case C-177/88, Elisabeth Johanna Pacifica Dekker v. Stichting Vormingscentrum voor Jong Volwassenen (VJV-Centrum) Plus.

[50] Case C-668/15, Jyske Finans A/S v Ligebehandlingsnævnet, acting on behalf of Ismar Huskic.

[51] Wachter et al. (see note 48), p. 65.

[52] R. Xenidis, “Tuning EU equality law to algorithmic discrimination: Three pathways to resilience,” Maastricht Journal of European and Comparative Law 27/6 (2020), pp. 736, 739–741.

[53] Ibid., p. 740.

[54] For the European Court of Justice’s refusal to recognize intersectional discrimination, see Case C-443/15, Parris v. Trinity College Dublin.

[55] Wachter et al. (see note 48), p. 10.

[56] Race Equality Directive (see note 14), art. 2(2)(b); Goods and Services Directive (see note 14), art. 2(b).

[57] D. Schönberger, “Artificial intelligence in healthcare: A critical analysis of the legal and ethical implications,” International Journal of Law and Information Technology 27/2 (2019), p. 184.

[58] Wachter et al. (see note 48), p. 47.

[59] Case C-167/97, Regina v Secretary of State for Employment, ex parte Nicole Seymour-Smith and Laura Perez; Wachter et al. (see note 48), p. 54.

[60] Xenidis (see note 52).

[61] Ibid., p. 742.

[62] Opinion of A.G. Kokott in Case C-443/15, David L. Parris v. Trinity College Dublin and Others at 153.

[63] Case C-83/14, ‘CHEZ Razpredelenie Bulgaria’ AD v. Komisia za zashtita ot diskriminatsia.

[64] Case C-507/18, NH v. Associazione Avvocatura per i diritti LGBTI – Rete Lenford.

[65] Xenidis (see note 52), pp. 748–751.

[66] Ibid., pp. 755–756.

[67] Case C-243/19, A v. Veselības ministrija.

[68] Proposal for a Council Directive on Implementing the Principle of Equal Treatment between Persons Irrespective of Religion or Belief, Disability, Age or Sexual Orientation, COM(2008) 426 final, OJ C 303/8.

[69] European Parliament legislative resolution of 2 April 2009 on the proposal for a Council directive on implementing the principle of equal treatment between persons irrespective of religion or belief, disability, age or sexual orientation, P6_TA(2009)0211, amendment 37.

[70] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence and Amending Certain Union Legislative Acts, COM(2021) 206 final.

[71] Ibid., p. 4.

[72] Ibid., art. 5.

[73] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (Text with EEA relevance) OJ L 117, 5.5.2017, pp. 1–175.

[74] For the full list of covered grounds, see Medical Devices Regulation 2017/745, art. 2(1).

[75] Ibid., art. 51.

[76] Ibid., art. 52.

[77] AI regulation proposal (see note 70), annex III, para. 5(a) and (c).

[78] Ibid., arts. 8–15.