EDITORIAL

Enter the Cyborgs: Health and Human Rights in the Digital Age

Volume 22/2, December 2020, pp. 1–6


Sara L. M. Davis and Carmel Williams

In Donna Haraway’s 1991 essay “A Cyborg Manifesto,” the feminist scholar of science and technology describes the cyborg as a “hybrid of machine and organism” living “on the boundary between fact and fiction”: “We are all chimeras, theorized and fabricated hybrids of machine and organism.”[1] In 2020, the world that Haraway imagined has arrived, accelerated by the isolation and surveillance enforced during the COVID-19 pandemic. In this special section, contributors explore the role of big data, technology, and artificial intelligence in the prevention, detection, tracing, and treatment of COVID-19 in a world being rapidly reshaped by the pandemic.

In 1991, Haraway observed that medicine was already witnessing people’s growing dependency on computers and other machinery. As of late 2020, the computer and the mobile phone have moved fully into the center of our lives. To mitigate the risk of COVID-19 transmission, a significant portion of the world’s population now works, socializes, shops, and seeks entertainment and love online. Recall the despair and the very real, immediate social isolation that follow the loss of your phone or internet connection. These increasingly essential tools now contribute to what Dutch informatician Sennay Ghebreab calls an “exocortex”: artificial intelligence (AI)-driven information that shapes and mediates our human judgments and behavior, making us vulnerable to manipulation and to perpetuating racial and gender discrimination.[2] Health has also gone digital, with digital contact tracing, real-time epidemiology, and remote diagnosis rapidly scaling up in many countries. Even after COVID-19 passes into history, the world is never going back to the days when a phone was something attached to a wall rather than a constant companion and an essential tool for looking up health information, including the latest local incidence rates.

Our call for papers for this special section predated COVID-19. We invited authors to examine, for example, the links between new technologies and the protection of economic, social, and cultural rights, because we believed that the impact of technologies on social rights was underexplored. We sought reflection on the risks posed by global reliance on data and on the ways that new technologies are changing power relations between states and the private sector. We did not foresee, of course, how quickly these changes would accelerate. As the papers in this section demonstrate, we were already well down this path before COVID-19, but the pandemic has provided the urgency, financial resources, government support, and often public compliance needed to accelerate these trends.

So if many of us are now cyborgs, mentally dependent on digital tools even if not physically attached to them, what does the cumbersome 20th-century structure of human rights have to offer? Does it counsel resistance to big data and digital technologies altogether? Some human rights activists argue forcefully for this position (for instance, Stop LAPD Spying, a US-based nongovernmental organization that campaigns against predictive policing and discriminatory data-driven prosecution, and the Feminist Data Manifest-No, which “refuses harmful data regimes”).[3] The authors in this special section largely accept that digital health is here to stay, but they agree that there are real threats, and real boundaries, that human rights tools, norms, and court decisions can help us manage in the digital age.

These risks are significant and frightening, as Lisa Cosgrove, Justin M. Karter, Mallaigh McGinley, and Zenobia Morrill show in their analysis of mental health surveillance. They describe two new technologies: digital phenotyping, which predicts mood from how a person taps, scrolls, and types on their phone, and the first-ever “digital” drug, an antipsychotic medication embedded with a sensor to monitor compliance. These authors, like many others in this collection, express concern about how such intimate data are used beyond their original purpose. Nearly all mental health apps, they find, send the data they collect to Facebook or Google for use in data analytics or marketing. Not only do vulnerable users of these apps have no agency over their own data, but there is no guarantee that the data will not be used to discriminate against them in the future.

Referring to COVID-19 as the “Pandemic Shock Doctrine,” Naomi Klein described the high-tech response to the pandemic as “a living laboratory for a permanent—and highly profitable—no-touch future.”[4] She characterized this dystopic future as one in which our every move, our every word, our every relationship is trackable, traceable, and data-mineable through unprecedented collaborations between governments and tech giants. As Sara (Meg) Davis notes in her perspective, Shoshana Zuboff shows in The Age of Surveillance Capitalism how tech giants use social media data and notifications to manipulate individual behavior for profit, as frighteningly illustrated in the Netflix documentary The Social Dilemma.[5] Davis and Rajat Khosla both identify the risks of partnerships between governments or global agencies, on the one hand, and data mining companies such as Palantir, on the other, in which data collected for humanitarian purposes are later used, for example, to prosecute the parents of immigrant children.

Related risks explored in this special section include threats to privacy, autonomy, and nondiscrimination. Several papers describe human rights abuses that have resulted directly from egregious data-sharing practices. In his viewpoint, Khosla describes the important work of Amnesty International’s Amnesty Tech program in monitoring the unlawful use of digital surveillance to “spy on, intimidate, threaten, or silence activists or to locate, detain, or imprison them.”

Several authors in the collection describe discrimination and other abuses linked to incomplete and inaccurate health data and algorithms, and to the commodification of data. Amy Dickens looks at the risks of public-private partnerships in which state agencies provide the public’s health or other data to the private sector to develop AI products that can then be sold back to the state. Such arrangements parallel the pharmaceutical sector, which has historically used state funding, as it does now for COVID-19 vaccine development, to develop products for private profit. It is incumbent upon states to recognize the high value of data, as well as the human rights implications of any data-sharing arrangements they enter into. There is, as Dickens explains, much more at stake than civil and political rights: the greater risk is that private actors are gaining ever more expansive monopoly powers that threaten future socioeconomic rights entitlements.

There is clearly something here that needs governing: cyborgs need rights, too. To address these risks, the authors find rich resources in the human rights tradition that can be used to better manage both the promise and the threats of digital technologies and AI in health. Sharifah Sekalala, Stéphanie Dagron, Lisa Forman, and Benjamin Mason Meier examine these principles in their paper, taking the view that surveillance and tracing technologies are more readily accepted by the public when they are clearly shaped and underpinned by transparency and other human rights safeguards. They outline how the Siracusa Principles could inform technology-related health decisions in an outbreak: Sekalala and colleagues’ recommendations could be invaluable to the committee reviewing the International Health Regulations, which incorporate a reference to the Siracusa Principles.[6]

Writing from a tech perspective in response to Sekalala and colleagues’ recommendations, Akarsh Venkatasubramanian agrees that more robust global regulation is needed. COVID-19, he suggests, offers an opportunity to build and strengthen a rights-based, equitable, and inclusive global governance structure, such as an international health data regulation, designed with geographical and sectoral representation and promoting responsible and appropriate digital health surveillance during and beyond emergencies. He also calls for the creation of registers or indices of approved technologies, similar to the Access to Medicine Index. He challenges us to think about how tech could be used to fulfill and promote the right to health, and how the law must create space for rapid changes in technology.

Nina Sun, Kenechukwu Esom, Mandeep Dhaliwal, and Joseph J. Amon outline much-needed ethical and human rights standards for the use of digital health technologies. They present practical strategies to mitigate risks and review mechanisms of accountability, showing how the International Covenant on Economic, Social and Cultural Rights, judicial rulings, and lessons learned from the work to address human rights and HIV could inform the governance of digital health. Importantly, their paper addresses the human rights obligations of the private sector, positioning the obligation not to cause adverse human rights impacts as a legal compliance issue. They stress the need for health technology assessments to prevent rights violations arising from data breaches, biases, and “function creep,” whereby data are used for purposes other than those for which they were collected. While acknowledging the potential of data technology to reduce health care costs and transform health systems, Sun and colleagues warn that the risk of rights violations is real and grounded in the experiences of populations already subject to discrimination, social marginalization, and surveillance. In the future development of digital health technologies, they urge attention to community-owned technologies, aligned with ethical and human rights principles, to advance accountability and justice.

For Louise Holly, the Convention on the Rights of the Child offers principles and guidance that could help ensure that children and youth are at the center of digital health policy. Data from a multitude of digital devices are captured about children even before they are born. While such data are usually captured for public health purposes (for example, biometric data are used to boost vaccination rates), Holly argues that there is insufficient consideration of the unintended consequences for the enjoyment of other rights. Health data that enable people to be located, she explains, can put children, particularly those from marginalized groups, at risk of discrimination or persecution. Information relating to a child’s health status may later be used by employers, insurance companies, and other third parties, again potentially undermining their equitable access to health care and other social rights.

With regard to planning, priority setting, and the provision of guidance and technical support, Davis and Carmel Williams each outline responsibilities of the World Health Organization (WHO), the Global Fund, and states to improve their assessment of needs and risks before scaling up (or financing the scale-up of) digital health. Looking at the more immediate right to health issues arising from the urgency with which WHO and other agencies are pressing low- and middle-income countries to digitize their health systems, Williams recommends that health rights impact assessments be conducted before any AI or data-driven project is undertaken. These assessments are necessary not just to examine the future ownership and costs of products but also to ensure that switching to data-driven systems will not weaken health systems by overlooking fundamental issues, such as whether a system has the capacity to sustain such initiatives. She warns that digital development projects risk being designed from afar, without regard for local knowledge or local contextual constraints. Both Williams and Davis contend that WHO and the multiagency AI for Good initiative, which encourages the use of data-driven technology to achieve the Sustainable Development Goals, limit their concerns about AI primarily to digital divide issues.

How could human rights experts and tech developers work together? Is there common ground to be found between these very different fields? Researchers from the tech community argue that there is. It is an often-quoted truism in human rights that political problems cannot be fixed with technical solutions, but Vinodkumar Prabhakaran and Donald Martin Jr. make a compelling case for doing exactly that. While Cosgrove and colleagues describe how Google extracts our most private data for its private gain, these two Google researchers propose combating racial and other forms of discrimination by cracking open the mysteries of algorithms through participatory approaches to machine learning design. They claim that this more diverse and inclusive approach helps overcome what Ghebreab identifies as a weakness in the technological “exocortex”: the algorithmic errors that arise when machine learning systems are informed by the biased understandings of only one gender, one ethnicity, or one socioeconomic demographic.

In their paper on feminist data, Shirin Heidari and Heather Doyle urge the application of an intersectional feminist lens to data itself, arguing that it is not sufficient to consider the intersection of gender with other dimensions of oppression only in deciding what data are collected. Rather, they urge critical reflection on the ways that data are collected and evidence is produced, calling for the adoption of feminist principles to shape global health data.

Heidari and Doyle conclude on a positive note: COVID-19 presents an opportunity to reshape our future, including our digital future. This opportunity, they say, is not a utopian dream. But, as they caution, the time to act is now. As many of the contributors to this special section make clear, if we do not protect human rights in the digital space, we risk not only the health and well-being of people today but also those of future generations.

These multidisciplinary voices are sorely needed. As the boundaries separating computers from humans blur, the disciplines engaged in governing new technologies must also begin to cross boundaries. The human rights linked to health must expand and adapt to a new domain, with new standards that build on old norms and encompass rapidly changing capabilities. Lawyers, feminists, anti-racists, decolonizers, social scientists, tech researchers, and patients’ advocates will need to find common languages in which to communicate with one another. Rather than generals fighting the last war, rights scholars and advocates will need to think ahead to a future shaped by technologies not yet in laboratories: a future in which our rapidly evolving and fragile cyborg selves will need protections and powers we cannot yet imagine. This special section is a first step into this brave new world of digital health and human rights, but it must not be the last.

Sara (Meg) Davis, PhD, is a research fellow at the Graduate Institute of International and Development Studies, Geneva, Switzerland.

Carmel Williams, PhD, is a researcher for the Human Rights, Big Data and Technology Project, Human Rights Centre, University of Essex, and Executive Editor of Health and Human Rights Journal.

References

[1] D. Haraway, “A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century,” in D. Haraway, Simians, cyborgs and women: The reinvention of nature (New York: Routledge, 1991), pp. 149–181.

[2] S. Ghebreab, “Let’s have FAITH,” lecture at Waanzin Festival, Tivoli Vredenburg, Utrecht (September 15, 2018). Available at https://www.sennay.net/single-post/2018/09/18/Lets-have-FAITH.

[3] Stop LAPD Spying Coalition. Available at https://stoplapdspying.org; Feminist Data Manifest-No. Available at https://www.manifestno.com.

[4] N. Klein, “Screen new deal,” Intercept (May 8, 2020). Available at https://theintercept.com/2020/05/08/andrew-cuomo-eric-schmidt-coronavirus-tech-shock-doctrine.

[5] S. Zuboff, The age of surveillance capitalism: The fight for a human future at the new frontier of power (London: Profile Books, 2019); The Social Dilemma, directed by Jeff Orlowski (2020; Netflix). Available at https://www.netflix.com/title/81254224.

[6] S. Zarifi and K. Powers, “COVID-19 symposium: Human rights in the time of COVID-19—Front and centre,” Opinio Juris (April 6, 2020). Available at http://opiniojuris.org/2020/04/06/covid-19-human-rights-in-the-time-of-covid-19-front-and-centre.