Artificial intelligence systems have become a defining feature of the European Union’s approach to migration. From lie detectors to risk-profiling tools, AI systems are used to judge and control migrants in many different ways. Despite their possible benefits, many human rights organizations warn of the dangers of their use and have repeatedly cautioned EU member states about how little protection migrants receive under the EU’s new AI Act.
AI systems are increasingly being used to restrict and control migration, affecting millions of people fleeing their countries. In this context, they make predictions and evaluations about individuals, mainly to assess whether people trying to enter Europe pose a risk of unlawful activity or a security threat. At the EU’s borders, AI is used chiefly for automated border checks, algorithmic recognition and classification of objects, maritime domain awareness and even lie detection. Such systems now operate throughout the entire migration process: before entry, at entry, during a stay and during return. In addition, many of these systems, such as robotic systems in coastal areas or predictive analytics that forecast migration trends, shape the way governments and institutions respond to migration.
Regarding the EU’s AI Act, many have argued that although it categorizes some migration-related AI systems as “high-risk”, it still fails to address how they might further promote violence and discrimination against people in migration processes. What is more, systems that are harmful in migration contexts in systemic ways are overlooked by the Act altogether. This is a result of Article 83 of the AI Act, which states that the regulation shall not apply to AI systems that form part of large-scale IT systems such as the EU migration databases Eurodac and ETIAS (the European Travel Information and Authorisation System). In this way, even systems labelled “high-risk” are exempt from regulation.
The biggest threats these systems pose to migrants are over-surveillance; pre-judgements based on discriminatory assumptions and associations; criminalization; violations of privacy; and targeting and violence throughout the migration experience as a whole. For example, some of the high-risk systems underpinning ETIAS will target visa-exempt visitors travelling to the Schengen Area, deploying automated risk assessments that profile travellers on the basis of indicators such as education level. Another point that needs to be highlighted is the imbalance of power between migrants and the authorities that use these systems, which can lead to discrimination and the violation of fundamental rights. Moreover, automated decision-making in migration contexts reflects human biases, which further promotes criminalization and pre-judgement and ultimately undermines the reliability of the systems themselves.
Another aspect that must be borne in mind is the lack of transparency about how many AI systems function, a consequence of intellectual property rights and copyright. This relates to a further concern: although migration management is the responsibility of state authorities, the development of these AI tools lies mainly with the private sector. With private actors as the principal creators of these technologies, concerns arise not only about the lack of transparency but also about the protection of migrants’ data and privacy. Many also argue that private actors will exploit the loopholes in the EU AI Act to sell their products without proper checks, and that leaving the use of AI in migration control to member states will open the way for a global race towards ever more intrusive technologies designed to prevent or deter migration.
Still, one should not forget that these systems can also streamline procedures and produce positive outcomes. Gathering information to predict migration crises, for example, allows countries to prepare for the influx of new people and better secure the fundamental rights of those seeking asylum. Positive uses of AI systems were also observed during the COVID-19 crisis, through the algorithmic identification of asymptomatic travellers infected with the virus. Furthermore, AI systems at airports and border crossing points can enable fewer staff to monitor a larger area in less time and at lower cost.
While AI systems offer great opportunities for states, they also pose a serious threat to those in need of asylum. It is for this reason that states should take responsibility and ensure that AI systems are used in ways that foster positive outcomes rather than negative ones. Although many believe that AI systems can be used to deter immigration, in practice they push flows of people onto more dangerous and precarious routes. EU member states should therefore amend the EU AI Act so that it protects the rights and needs of those who seek asylum and protection, and ensure that migrants are not assessed by automated systems that foster discrimination and violence.
By The European Institute for International Law and International Relations.