Imagine that your government could track your movements digitally, based solely on your physical appearance and perceived ethnicity or race. This is not the stuff of dystopian science fiction; it is happening now due to the widespread use of artificial intelligence (AI) tools.
One of the most egregious examples of the abuse of AI tools like facial recognition is their use in China’s repression of the Uighurs, an ethnic minority group that lives mostly in the country’s far-western Xinjiang province. From police checkpoints to detention camps where at least one million people are incarcerated, details have emerged about China’s effort to “reeducate” the mostly Muslim minority. Chinese authorities have even designed an app specifically to monitor the Uighurs’ activities.
But this phenomenon is not confined to China. Facial recognition software presents one of the largest emerging AI challenges for civil society, and new surveillance technologies are quietly being implemented and ramped up in order to repress minority voices and tamp down dissent. Other authoritarian governments have jumped on the facial recognition bandwagon. Although these technologies raise serious concerns over privacy and human rights, the international response to their use has been tepid.
In the United States, the reaction to this technology has been mixed. A New York district will soon become the first in the country to deploy facial recognition in its schools. Meanwhile, San Francisco recently became the first city to ban facial recognition software due to the potential for misuse by law enforcement and violations of civil liberties, and the Massachusetts town of Somerville has followed suit. In short, some local and national governments are moving ahead with facial recognition while others are cracking down on it.
Uneven Response
So why is this uneven response problematic? The short answer is that the same software used to help track and detain Uighurs in China can be employed elsewhere without proper technological vetting. While facial recognition software may be touted as a more efficient way to track and catch criminals, it is not a reliable or accurate tool. Human rights organizations have raised alarms about government use of such technologies, including accuracy issues with facial recognition software and its propensity to produce biased results.
Last year, a researcher at the Massachusetts Institute of Technology found that while commercially available facial recognition software can recognize a white face with almost perfect precision, it performs much worse for people of color, who are already among the groups most vulnerable to this technology.
As governments embrace facial recognition software, some tech companies have taken note of the related human rights issues. Microsoft recently declined to work with a law enforcement agency in California over concerns about the potential misuse of its products in policing minorities. An Israeli startup has developed a tool to help consumers shield themselves from invasive facial recognition technology that can violate their privacy.
Still, in most cases, companies cannot be trusted to regulate themselves. Amazon, which developed its own facial recognition software, offered it to US Immigration and Customs Enforcement (ICE), raising concerns that its technology could be used to target immigrants. There is still insufficient oversight of these companies and, more importantly, of the governments that continue to partner with them. As a result, these companies are complicit in the repression of groups vulnerable to this technology.
Going Forward
So what can policymakers and others do to combat the challenges presented by facial recognition technology? First, lawmakers around the globe need to craft legislation that limits their respective governments’ use of facial recognition software and restricts companies’ ability to export these tools abroad, as has been the case with other surveillance technologies.
Second, individual cities and countries across the world, beyond liberal bastions like San Francisco, should prohibit police from using facial recognition tools. Seattle and several cities across California have taken steps in this direction but have not gone as far as San Francisco.
Third, international bodies like the United Nations should take a more active role in advising governments on the intersection of tech tools and human rights. As Philip Alston, the UN special rapporteur on extreme poverty and human rights, recently said, “Human rights is almost always acknowledged when we start talking about the principles that should govern AI. But it’s acknowledged as a veneer to provide some legitimacy and not as a framework.” The UN is well placed to provide an international framework for tech governance, and it should do so.
Finally, human rights organizations have been raising concerns about facial recognition software and other AI tools for years, but instead of focusing exclusively on advocacy, they need to increase investment in public information campaigns. Consumers may be unaware that, by using the fingerprint or face-enabled features on their smartphones, they are providing biometric data to companies like Amazon that have cozy relationships with law enforcement. In some cases, law enforcement agencies have compelled suspects to use their faces to unlock their phones. A judge recently ruled that acts like these are illegal in the US, but the battle is far from over in other countries.
As AI tools become more advanced, governments and international bodies must work on country-specific and global frameworks for reining in emerging technology. Otherwise, tools like the Uighur tracking app and facial recognition software will become more and more widespread. As the troubling statistics on facial recognition show, there is too much risk of error to let these tools further threaten human rights worldwide.
*[Young Professionals in Foreign Policy is a partner institution of Fair Observer.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.