With major global military conflicts asserting themselves into the story of the twenty-first century, a parallel debate is being waged in academic, ethical and political circles about the appropriate uses for facial recognition technology (FRT) and other biometric systems in warfare.
In August – eighteen months after Russia began its invasion of Ukraine – Cambridge University Press published an article by Juan Espindola, a researcher at the National Autonomous University of Mexico, entitled “Facial Recognition in War Contexts: Mass Surveillance and Mass Atrocity”, which focuses on various uses of facial recognition technology by Ukrainian soldiers. Since then, the conflict in the Middle East has added new urgency to questions that the paper raises about what it calls “some of the most serious concerns with FRT in the context of war, including the infringement of informational privacy; the indiscriminate and disproportionate harms it may inflict, particularly when the technology is coupled with social media intelligence; and the potential abuse of the technology once the fog of war dissipates.”
Thus far, reports of facial recognition in the Israeli-Palestinian conflict have focused on uses related to identifying victims of the Hamas attack of October 7, which left at least 1300 dead, injured thousands more, and saw Hamas take Israeli hostages across the border into Gaza. Israel has used both Amazon’s Rekognition system and Corsight AI’s face biometrics to locate the missing and the dead.
Yet even before the current war, human rights groups had raised flags about how Israel was using facial recognition for mass surveillance and control of the Palestinian territories, and the Israeli government is likely to use every tool available to it in its current response to Hamas’ brutality.
Espindola points out that facial recognition still disproportionately hurts minority groups, writing that “the deployment of FRT in authoritarian and liberal democratic regimes alike to persecute ethnic groups, repress political dissidents, or conduct widespread unjustified surveillance – particularly when the technology is integrated into closed-circuit television, or CCTV, systems – has been aptly described as a political and social menace.”
However, his paper examines whether FRT deployment can be justified as a tool for espionage and counterintelligence, and leans heavily on the work of the French philosopher Cécile Fabre and her 2022 book Spying through a Glass Darkly: The Ethics of Espionage and Counter-Intelligence – which Espindola calls “the most systematic and rigorous defense of espionage and counterintelligence as a permissible, even mandatory, form of self-defense in the face of threats to fundamental rights.”
According to Fabre’s ethical framework, Espindola writes, Ukraine’s uses of FRT are justifiable on threat-prevention grounds, particularly deploying facial recognition to reveal Russian infiltrators amid Ukraine’s displaced citizenry and to identify Russian soldiers who commit war crimes. A third use – identifying the dead by posting images on social media – is ethically murkier, but may be justifiable on humanitarian grounds.
Espindola does devote significant space to the objections to facial recognition as a justifiable form of counterintelligence. Yet his conviction rarely wavers throughout the paper; a particularly remarkable passage on objections grounded in privacy states outright that “Ukraine’s technological feat with FRT has been accomplished precisely because the services of companies like PimEyes, FindClone, or, most controversial of all, Clearview AI violate informational privacy.”
In his conclusion, Espindola says of FRT in war, “there is a plausible case to make about the permissibility of its deployment both to acquire information to prevent harm and to fulfill humanitarian obligations in certain contexts.” He hedges somewhat in his final analysis, saying that “whether the wartime benefits of FRT outweigh its postbellum risks is a matter to be decided contextually.”
Regrettably, there will be no lack of fresh context in which to continue evaluating whether facial recognition and other biometrics belong on the battlefield.