Human Rights and New and Emerging Technology

Wednesday 29 November – Friday 1 December 2023 WP3298

In partnership with the Foreign, Commonwealth and Development Office

Introduction

The development and deployment of new and emerging technologies (henceforth referred to as “emerging technologies”), including artificial intelligence (AI), biometrics and neuro-technologies, have profound implications for the enjoyment of human rights. The pace of innovation poses unique challenges to understanding both the potential positive and adverse impacts on human rights associated with their use, and to designing the most effective human rights responses. 

From 29 November to 1 December 2023, Wilton Park held a Human Rights and New and Emerging Technology dialogue, bringing together representatives from governments, industry, civil society, international organisations and academia. The discussions were designed to identify key human rights risks and opportunities, and to assess concrete steps to address them in the design, development and use of technologies. This included discussion of the levers available at international, regional and domestic level, and the role of the tech community. The case study of technology-facilitated gender-based violence (TFGBV) was also considered. This report summarises key themes and recommendations emerging from the discussions.

Theme 1: Mapping the opportunities and risks of new and emerging technology 

New and emerging technologies present opportunities to promote and protect human rights, as well as posing complex human rights risks. These risks can be specific to certain technologies, or to particular societal groups. It is therefore important to consider both the general human rights risks associated with new technologies, and the specific risks associated with certain technologies or groups. It is also necessary to prioritise the most urgent current risks, whilst taking action to mitigate longer-term risks.  

  1. Emerging technologies promise improved access to education and healthcare and democratised access to information. However, adverse impacts include harassment and abuse online, as well as the misuse of highly intrusive surveillance technology to chill civic space, and other violations of privacy rights.
  2. Whilst lessons can be drawn across technology areas, a tailored risk assessment and response is required to effectively tackle specific manifestations of adverse impacts associated with specific technologies. For example, the privacy impacts of intrusive surveillance tools differ from the impacts of artificial intelligence systems, and require a differentiated policy approach, engaging different stakeholders. To drive more effective action, stakeholders need to move away from making general human rights recommendations and towards articulating precise assessments of the human rights problem and potential solutions.
  3. Taking into consideration the particular risk profiles of at-risk communities was also highlighted as essential to properly targeting policy responses. This requires engaging affected communities in consultations on the design, development and deployment of these technologies, alongside civil society organisations, developers and tech companies, academic experts, policymakers and regulators.

“New and emerging technologies present opportunities to promote and protect human rights, as well as posing complex human rights risks.”

  4. Discriminatory outcomes associated with the use of technology, and the high number of elections due to take place in 2024, were identified as immediate cross-cutting priorities. Deepfakes, mis- and disinformation, and restrictions on internet access may undermine civil and political rights, and participation in public life. These immediate threats are at risk of being overlooked by an excessive focus on catastrophic future risks.

Recommendations

  • Stakeholders should map – and prioritise – specific human rights risks in the context of new and emerging technologies, and work together to better target collective efforts. Such a mapping should clearly identify the actors responsible for addressing them. 
  • Stakeholders should take forward targeted work to respond to the adverse human rights impacts which may impact on elections, including those associated with mis- and disinformation.
  • Governments should balance efforts to tackle current risks, with efforts to understand and mitigate future risks.
Case study

Technology-facilitated gender-based violence (TFGBV) provides a valuable case study of the multi-layered approach required to tackle tech-related human rights risks. A number of effective policy interventions in this space are not strictly human rights related, but have had a positive impact on the enjoyment of freedom of expression and human rights for those belonging to marginalised groups.

TFGBV refers to any act using information and communications technologies and other digital tools that results in harms or interference in the enjoyment of rights and freedoms, on the basis of gender. Women and girls, the LGBTQ+ community and other marginalised groups are most affected. The impacts of TFGBV include exclusion from civic and political spaces, career setbacks, disengagement from work and school, and harm to mental and physical health. This makes TFGBV a key barrier to digital inclusion that must be tackled to fully address the gender digital divide. Safety by design approaches were recognised as crucial tools to mitigate the risks of TFGBV. Increased investment in content moderation, including in minority languages, and support for survivors/victims of this abuse, are also important.

