Artificial Intelligence

New AI Programs Compromise the Rights of Helpless Migrants

With the rapid advancement of artificial intelligence, algorithmic border governance technology has become an ever-present factor in international migration. This new technology threatens human rights. AI increases racist and xenophobic sentiments against refugees and asylum seekers, who must also face militarized borders. A human rights-based approach should be applied to ensure migrants are treated with due dignity.

Vector of diverse people waiting in long queue with robot standing at counter on world map background © FGC / shutterstock.com

April 24, 2024 04:50 EDT

International borders can be sites of exclusion, violence and discrimination for those who do not qualify for seamless international travel. Exclusionary factors can include race, ethnicity, national origin, gender identity, sex, prior travel history, protection needs, migration status and more. Now, the border has become a trial ground for invasive monitoring technologies. Algorithmic border governance (ABG) technologies affect almost every aspect of a person's migration experience.

Recently, the Office of the United Nations High Commissioner for Human Rights has documented iris scans in refugee camps and artificial intelligence-driven surveillance systems installed at international borders. Social media is being used to monitor refugees and citizens. The US Department of Homeland Security uses an AI tool called Babel X, which connects a person's social security number to their location and social media accounts. Robodogs, autonomous robots that can move on four or even two legs, are being deployed as force multipliers on the Mexico–US border. These are just a few of the unregulated, uncontrolled experimental initiatives that are quickly taking root. Technological advancement makes migration more nightmarish than ever before.

Frontex aerial surveillance highlights the life-saving potential of drones and aircraft, which can help those in maritime crises. Saving lives at sea ought to be the priority; a startling 25,313 people have perished in the Mediterranean since 2014. As it turns out, however, these deaths have been linked to Frontex's surveillance strategy, which is in service of interceptions, not rescues.

More than 7,000 international students may have been unjustly removed from the country due to a flawed algorithm used by the UK government. They were erroneously accused of cheating on English language exams, with no evidence provided against them.

How can human rights professionals improve the dignity of individuals crossing international borders? How can they expose the reality of this terrible situation? How do migrants oppose these experiments? This piece examines some of the profound effects of ABG technologies on human rights through a human rights-based approach (HRBA).

ABG militarization and border AI

Racist and xenophobic sentiments against refugees, asylum seekers, migrants and stateless persons are increasing. These can be fuelled by the AI-driven militarization of borders and border governance. This involves tactics and policies that violate human rights, like pushbacks, extended immigration detention and refoulement. Pushbacks are operations that prevent people from reaching, entering or remaining in a territory. Immigration detention is the practice of detaining migrants, especially those suspected of illegal entry, until immigration authorities can decide whether or not to let them through. Refoulement is the practice of deporting migrants, often refugees or asylum seekers, back to their country of origin or another country where they may face harm.

UN agencies provide a wealth of information about the grave injustice and threats to human rights that migrants face at international borders. Threatened rights include freedom of movement, the prohibition against collective expulsion and refoulement, the right to seek asylum and many others. In these situations, algorithmic border technologies are used to further security goals. They highlight existing human rights problems and create new avenues for abuse.

The goal of ABG regulation must be to respect human rights. This strategy should be based on two main objectives. First, it should comprehend how poorly planned algorithmic management of border movement may result in unprotected human rights. Second, it should evaluate how newly emerging technology may exacerbate pre-existing issues.

States use new algorithmic technologies to identify individuals in transit near land, maritime and external borders. This technology includes ground sensors, surveillance towers, aerial systems, drones and video surveillance. AI has enabled tasks like movement detection and differentiation between people and livestock. New ABG initiatives have repurposed technology built for military or law enforcement applications, creating robodogs. States and regional bodies are also using AI to forecast migration flows, processing information from social media, Internet searches and cell phone data.

However, these efforts are primarily focused on stopping border crossings rather than assisting migrants. This has raised concerns among civil society groups, academia and international agencies. When used in a securitized approach to border regulation, these AI systems could potentially violate human rights, like the right to asylum or the ability to leave one's country of origin. The UN Working Group suggests that maritime surveillance technologies could instead help detect vessels in distress and support search and rescue activities, allowing those in peril to reach secure harbors.

In 2021, the UN Special Rapporteur on the human rights of migrants released a report highlighting the use of surveillance technology as a form of punishment, deterrent or targeting system. Migrants face significant danger at borders due to pushback policies, physical barriers and advanced monitoring technology. EU-funded pilot programs focus on automated deception detection, face-matching tools, biometrics and document authentication apparatus. One such program offers real-time behavior analysis that could uncover hidden intent through on-site observations and open-source mining.

Internalized borders and algorithmic risk assessments

As part of a goal to internalize borders, some states are attempting to identify individuals with irregular migration status through digital surveillance. This can happen years after the individual's initial entry into the nation. Investigative journalists show that some immigration agencies have accessed databases of other state institutions, which are typically protected from law enforcement by firewalls. These agencies have attempted to identify people with irregular immigration statuses, putting them in danger of detention or deportation.

Certain states allegedly utilize data brokers to obtain information about things major and minor: prior employment, marriages, bank and property records, vehicle registrations, even phone subscriptions and cable television bills. Academics and civil society organizations have demonstrated the chilling effect that digital border surveillance may have on individuals exercising their rights. These include rights to housing, healthcare and education. If they are discovered, migrants may face severe repercussions.

Reportedly, many migrants abstain from using record-keeping services that are essential to their family's wellbeing, including the child welfare and legal systems. They avoid these out of concern that law enforcement may access their information and use it to detain, prosecute and deport them.

Algorithmic risk assessments are used in border management, such as assigning higher risk scores to applications and referring them to human officers. These assessments are also used in some states to decide whether to detain migrants. Concerns about human rights arise when AI systems are applied in detention decisions.

Algorithms need large datasets to train on. These datasets may contain biased or unrepresentative information due to overrepresentation or underrepresentation of certain groups, particularly along the categories of race and ethnicity. The algorithm's weighting of input data and the results it generates also contribute to algorithmic bias. Researchers in the US have found that some algorithms may lean toward high-risk classifications in detention decisions, potentially leading to the detention of low-risk migrants. This is because algorithms' apparent impartiality and scientific character may corroborate human officers' prejudices, which can lead to discrimination against certain groups and reinforce stereotypes.
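The mechanism the researchers describe can be sketched with a toy calculation. The Python snippet below is purely illustrative: the groups, weights and figures are invented and do not come from any real border system. It shows how, when one group is overrepresented in past enforcement data, a score that leans on group-level history rates every member of that group as higher risk, even when individual circumstances are identical.

```python
# Illustrative toy model (not any real system): a naive risk score
# that blends an individual factor with a group's historical
# detention rate. Overrepresentation of one group in past
# enforcement data inflates the score for everyone in that group.

# Hypothetical historical records: (group, was_detained)
history = (
    [("A", True)] * 80 + [("A", False)] * 20    # group A: overpoliced
    + [("B", True)] * 20 + [("B", False)] * 80  # group B
)

def group_base_rate(group):
    # Fraction of past cases from this group that ended in detention.
    outcomes = [detained for g, detained in history if g == group]
    return sum(outcomes) / len(outcomes)

def risk_score(group, individual_risk):
    # Half the score comes from the group's historical detention rate.
    return 0.5 * individual_risk + 0.5 * group_base_rate(group)

# Two people with identical individual risk receive different scores:
print(risk_score("A", 0.1))  # 0.45, inflated by group history
print(risk_score("B", 0.1))  # 0.15
```

Because the historical data reflects past enforcement choices rather than actual behavior, the score launders those choices into an apparently objective number, which is the feedback loop described above.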

States may use technology like check-in and reporting software, digital ankle monitors and electronic tagging to substitute traditional detention methods. However, the Committee on the Protection of the Rights of All Migrant Workers and Members of their Families notes that these automated alternatives may have unfortunate consequences. They could further stigmatize migrants, lead to burdensome requirements, cause detentions and prompt a growth of algorithmic detention decision-making. Specific methods may impede people's freedom of movement and enhance monitoring, even if they are not considered confinement.

The role of data and the future

AI technologies and those employed generally in the ABG context rely heavily on data. Input data is entered into them directly, and additional data is produced as a byproduct of their deployment. The data many states store and use include biometric and biographic information obtained for identification and verification; data from social media accounts; data from automated border control systems such as smart tunnels; health monitoring data; educational records; and employment status. Commercial corporations, international organizations and other states may gather shared data too.

The EU's proposal to regulate artificial intelligence aims to exclude existing large-scale databases on criminal records, immigration and asylum from the usual safeguards offered for high-risk AI systems. Access to these databases allows immigration authorities to draw on data gathered for criminal justice purposes. This raises several potential human rights risks, like violations of the rights to equality, privacy and freedom from discrimination. Rights to life, liberty and security are in jeopardy as well if indiscriminate data sharing leads to detention and deportation.

There are few formal regulations governing the design and deployment of digital technologies used at borders. AI is broadly unregulated as well. Despite this, the use of ABG technologies does not occur in a regulatory vacuum. States must uphold international human rights law, and governments and businesses alike must abide by their human rights responsibilities.

However, when states use digital border technologies, noncompliance with these duties creates protection gaps. Firsthand accounts of those impacted by ABG technologies must be prioritized when implementing an HRBA framework for migration and ABG technology regulation. There need to be discussions between affected communities and policymakers, academics, technologists and civil society about the risks of new technologies and how to ensure they protect human rights. Mobile communities should be part of conversations about creating and using digital border technologies before their deployment, not after.


The views expressed in this article are the author's own and do not necessarily reflect Fair Observer's editorial policy.

