
What if an Algorithm Decided Whether You Could Stay in Canada or Not?

October 11, 2018
topic: Innovation
tags: #Canada, #immigration, #xenophobia, #ArtificialIntelligence, #refugees
partner: NewsDeeply
located: Canada
Canada is rapidly expanding the use of AI in its immigration service.

By Petra Molnar, Samer Muscati

The detention of migrants at the U.S.-Mexico border; the wrongful deportation of 7,000 foreign students accused of cheating on a language test; racist or sexist discrimination based on a social media profile or appearance – what do these seemingly disparate examples have in common? In every case, an algorithm made a decision with serious consequences for people’s lives.

Algorithms and artificial intelligence (AI) are starting to augment human decision-making in Canada’s immigration and refugee system, with significant implications for the fundamental human rights of those subjected to these technologies.

In our new report with the Citizen Lab, we look at how Canada’s use of these tools risks creating a laboratory for high-risk experiments. These initiatives may place highly vulnerable people at risk of being subjected to unjust and unlawful processes that violate Canada’s domestic and international human rights obligations, with these technologies influencing decisions at multiple levels.

Since 2014, Canada has been introducing automated decision-making experiments in its immigration mechanisms, most notably to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. Recent announcements signal an expansion of the uses of these technologies in a variety of immigration decisions that are normally made by a human immigration official.

What constitutes automated decision-making? Our analysis examines a class of technologies that augment or replace human decision-makers, such as AI or algorithms. An algorithm is a set of instructions, a “recipe” designed to organize or learn from data quickly and produce a desired outcome. These outcomes can include recommendations, assessments and decisions.
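To make the “recipe” metaphor concrete, the toy sketch below (in Python) shows the kind of rule-based triage such a system might perform. Everything in it – the fields, the rules, the thresholds and the HIGH_RISK_COUNTRIES list – is invented for illustration and does not describe any system Canada actually uses.

    # Hypothetical illustration only: a toy triage "recipe" of the kind an
    # automated system might apply. All fields, rules and thresholds are invented.

    def triage_application(application: dict) -> str:
        """Return a recommendation for a human officer: 'fast-track',
        'standard review' or 'flag for scrutiny'."""
        score = 0

        # Each rule adjusts a score; the rules themselves encode the
        # designers' assumptions, which is where bias can enter.
        if application.get("documents_complete"):
            score += 2
        if application.get("prior_visa_refusals", 0) > 0:
            score -= 3
        if application.get("country_of_origin") in HIGH_RISK_COUNTRIES:
            score -= 2  # a proxy criterion like this can amount to discrimination

        if score >= 2:
            return "fast-track"
        if score >= 0:
            return "standard review"
        return "flag for scrutiny"

    # Invented list, purely for the sketch.
    HIGH_RISK_COUNTRIES = {"Examplestan"}

    print(triage_application({"documents_complete": True,
                              "prior_visa_refusals": 1,
                              "country_of_origin": "Examplestan"}))
    # Prints "flag for scrutiny": the outcome turns on the coded assumptions.

Even in this toy version, the line that penalizes an applicant’s country of origin shows how a single coded assumption can quietly tilt the outcome – exactly the kind of embedded bias the report warns about.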

We examined the use of AI in immigration and refugee systems through a critical interdisciplinary analysis of public statements, records, policies and drafts by relevant departments within Canada’s government. While these are new and emerging technologies, the ramifications of using automated decision-making in the immigration and refugee space are far-reaching. Hundreds of thousands of people enter Canada every year through a variety of applications for temporary and permanent status.

The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious human rights violations in the form of bias, discrimination and breaches of privacy, as well as issues of due process and procedural fairness. These systems will have real-life consequences for ordinary people, many of whom are fleeing for their lives.

Our analysis also relies on principles enshrined in international legal instruments that Canada has ratified, such as the International Covenant on Civil and Political Rights, the International Convention on the Elimination of All Forms of Racial Discrimination, and the Convention Relating to the Status of Refugees, among others. Where the responsibilities of private-sector actors are concerned, the report is informed by the United Nations Guiding Principles on Business and Human Rights. We also analyze similar initiatives occurring in Australia and the United Kingdom.

Marginalized and under-resourced communities, such as residents without citizenship, often have access to less robust human rights protections and to less legal expertise with which to defend those rights. Adopting AI without first ensuring responsible best practices and building in human rights principles at the outset will exacerbate preexisting disparities and lead to rights violations.

We also know that technology travels. Whether in the private or public sector, one country’s decision to implement particular technologies makes it easier for other countries to follow. AI in the immigration space is already being explored in various jurisdictions across the world, as well as by international agencies that manage migration, such as the U.N.

Canada has a unique opportunity to develop international standards that regulate the use of these technologies in accordance with domestic and international human rights obligations. It is particularly important to set a clear example for countries with weaker records on refugee rights and rule of law, as insufficient ethical standards and weak accounting for human rights impacts can create a slippery slope internationally. Canada may also be responsible for managing the export of these technologies to countries more willing to experiment on non-citizens and infringe the rights of vulnerable groups.

It is crucial to interrogate these power dynamics in the migration space, where private-sector interventions increasingly proliferate, as seen in the recent growth of countless apps for and about refugees. However, in the push to make people on the move knowable, intelligible and trackable, technologies that predict refugee flows can entrench xenophobia, as well as encourage discriminatory practices, deprivations of liberty, and denial of due process and procedural safeguards.

With technologies increasingly used to augment or replace immigration decisions, who actually benefits from them? While efficiency may be valuable, those responsible for human lives should not pursue it at the expense of fairness – fundamental human rights must hold a central place in this discussion. By placing such rights at the centre, the careful and critical use of these new technologies in immigration and refugee decisions can benefit both Canada’s immigration system and the people applying to make the country their new home.

Canada has clear domestic and international legal obligations to respect and protect human rights when it comes to the use of these technologies, and it is incumbent upon policymakers, government officials, technologists, engineers, lawyers, civil society and academia to take a broad and critical look at their very real impacts on human lives.

Immigration and refugee law is also a useful lens through which to examine state practices, particularly in times of heightened border security and screening measures, complex systems of global migration management, the increasingly widespread criminalization of migration and rising xenophobia. Immigration law operates at the nexus of domestic and international law and draws upon global norms of international human rights and the rule of law.

The views expressed in this article belong to the authors and do not necessarily reflect the editorial policy of FairPlanet.

This article originally appeared on Refugees Deeply. You can find the original here. For important news about the global migration crisis, you can sign up to the Refugees email list.

Content partner: NewsDeeply
Call to Action: Please sign the petition and help end the multi-million dollar business of criminalising immigrants!