Sara (Meg) Davis
The forthcoming high-level UK AI Safety Summit focuses on existential threats posed by the rapid growth and proliferation of AI systems. Health goals—for example, the promise of more rapid and accurate diagnosis and treatment—are often cited as an underlying rationale for the rapid growth of AI. But in practice, without stronger AI governance, the profound inequalities and human rights issues in global health risk being amplified. Experts, practitioners, and advocates in health must engage, drawing on lessons learned from the best and worst of global health governance to demand that future AI governance be grounded in human rights principles, including transparency and accountability.
Sam Altman, CEO of OpenAI (the company behind ChatGPT), is among those sparking urgent calls for scrutiny and regulation of “frontier AI”, and OpenAI has successfully helped to frame the summit’s agenda. OpenAI defines frontier AI as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety”, and its framing paper emphasizes risks of misuse, abuse, and loss of control as AI systems grow, leading to heightened risks of bioterrorism and other catastrophes.
At the same time, however, OpenAI urges a soft approach to regulation, including corporate self-certification, and has threatened to leave the European Union to avoid tougher regulatory regimes.
The OpenAI approach, and that of other Big Tech leaders who have raised alarms about the existential threats posed by their own products, has been critiqued by AI ethics experts such as Meredith Whittaker and Timnit Gebru. They point to harms AI is already causing: gender inequalities, algorithmic discrimination, surveillance, and the use and abuse of biometrics such as facial recognition.
This is why it is crucial for global health actors to engage in discussions about AI regulation. The problems we already face with global health inequalities—discrimination, marginalization and exclusion, and unequal access to life-saving technologies and medicines—will only worsen if we do not ensure that AI governance is robust, democratic, and grounded in human rights.
The World Health Organization has called for AI systems to address these concerns: to protect human autonomy in health; promote human well-being and safety; ensure transparency, explainability and intelligibility; and promote AI that is responsive and sustainable. Echoing these calls, the UN Special Rapporteur on the right to health, Dr. Tlaleng Mofokeng, recently called on states to address digital inclusion, ensure access to affordable and reliable connectivity, promote digital literacy, and eliminate the digital gender divide that leaves many women without the means to get online.
In many countries, rapid digitization is actually a barrier to health and other government services for those most marginalized, and is not the enabler promised by techno-optimists. But these concerns are back-burnered on the agenda of the UK AI Safety Summit and similar high-level AI discussions, which are frequently closed to the public while including high-level political and private sector figures. Health is used rhetorically to justify urgent technological advances that benefit elites while the real-world challenges of making those advances work for all, particularly marginalized communities, are a low priority.
The foundations for future AI governance will be laid in the next year, at high speed. The UK AI Safety Summit is taking place on 1-2 November. Meanwhile, the UN Tech Envoy is convening a High-Level Advisory Body on Artificial Intelligence with the aim of making recommendations on the future of AI governance (perhaps extending to establishing a new UN agency) before the Summit of the Future in September 2024, and supporting a member state-led process to develop a Global Digital Compact for the same date. Health and human rights experts and advocates urgently need to be part of the conversation, and to raise the three following concerns.
Whose security are we prioritizing? Real-world AI-related harms are disproportionately experienced by women and minoritized communities in high-income countries, as well as by many others in low- and middle-income countries who lack a voice in US or UK tech governance. So whose security really counts? The critiques of the UK AI Safety Summit echo familiar critiques of the securitization of global health. These include reinforcing colonial inequalities: focusing narrowly on protecting wealthy countries from pandemics originating elsewhere, while neglecting faltering health systems at home and ignoring the equally critical and urgent needs of those in the Global South who contend with weak health systems and are locked out of access to vaccines and more. In many countries with draconian cybersecurity laws, the digital securitization discourse has itself become a cause of insecurity for those targeted by police and authoritarian states. We need to demand digital security for all, not only for elites.
The spectre of self-certification by corporations for AI governance ought to ring loud alarm bells in global health. We have been here before, recently and embarrassingly: the State Party Self-assessment Reports countries dutifully completed for pandemic preparedness led the US and UK to rank themselves highly, only to perform abysmally when they were tested in reality by COVID-19.
Just as rights-based advocacy has been demanding (fruitlessly) for the pandemic accord, any self-certification process for AI safety must have: independent review by experts; real social accountability mechanisms to enable communities to have a voice at every level of AI governance; and whistle-blower mechanisms to enable anyone to raise the alarm when AI systems cause real-world harms.
Meaningful participation in AI governance. Given the rapid pace of AI development, OpenAI rightly notes that laws and policies created now may not be fit for purpose a few years from now and may need repeated iterations. But how will this process include robust and democratic community voice at every level? Gebru warns, “I am very concerned about the future of AI. Not because of the risk of rogue machines taking over. But because of the homogeneous, one-dimensional group of men who are currently involved in advancing the technology.” In global health, we have already experienced the lopsided influence of the private sector, private foundations, and interested donor states in multi-stakeholder platforms—and without pressure for truly democratic and inclusive governance, we will see this repeated in AI governance. Communities and civil society need a strong voice to resist exploitative tokenism and promote meaningful participation.
In the Digital Health and Rights Project, an international consortium of which I am principal investigator, we are establishing one potential model of transnational participatory action research into digital governance that includes democratic youth and civil society participation from national to international levels. We will continue to document and share what we learn from the process, but there are clear principles and norms we can already draw on from the HIV movement to ensure that youth and community activists represent and are accountable to larger groups of individuals and civil society.
Will the UK AI Safety Summit consider all these concerns? It seems unlikely. Criticism has been faint and voiced by too few actors. But AI governance is marching on quickly, and health rights advocates cannot afford to wait for the outcome before leaping in.
In the 1980s, AIDS activists around the world mobilized to demand a seat at the table in clinical trials and in global health governance mechanisms. That movement reshaped the global health landscape and saved millions of lives. Today we need to demand a voice and strong human rights and global health protections in AI governance.
Sara (Meg) Davis, PhD, is Professor, Digital Health and Rights, at University of Warwick. E-mail: firstname.lastname@example.org