As 2025 begins, anti-fraud and cybersecurity companies such as iProov, World and Pindrop are laying out their predictions for the future of biometrics, deepfakes, data breaches and AI regulation.
Deepfakes will continue to rise at an alarming rate, according to Pindrop Security, the company known for identifying the fake Joe Biden robocalls with its voice biometrics engine last year. In the first half of 2024, the U.S.-based company recorded a 1,400 percent increase in deepfake attacks, and that trend is set to continue, with more high-profile attacks expected.
The year 2025 could also see the first large-scale breach targeting confidential audio and video archives that could be used to train AI for malicious purposes, says the firm’s CEO and co-founder Vijay Balasubramaniyan.
The deluge of deepfakes could pose a special threat to the integrity of news and media, prompting calls for new content attribution technologies, adds UK biometric face verification company iProov.
For businesses, the danger is not just fake news, says the firm. Companies will need to invest heavily in defenses against deepfake digital injection attacks, including biometric authentication and identity verification, to prevent hiring scams such as the KnowBe4 deepfake incident. Cybersecurity company KnowBe4 discovered in July that a remote software engineer it had hired was a North Korean threat actor using a stolen U.S. identity and an AI-enhanced photograph.
If 2024 was the year of the deepfake, then 2025 needs to be the year of anti-deepfake regulation, according to Steven Smith, head of protocol at Tools For Humanity, the main developer of World, previously known as Worldcoin.
The regulatory space surrounding deepfakes is fragmented at best, and the U.S. still lacks federal laws addressing the creation, dissemination and use of deepfakes, Smith writes for Forbes. Some states, including Florida, Texas and Washington, have introduced their own regulations, but these efforts are still in the early stages. In the meantime, the world will need technological defenses, such as tools that prove the authenticity of individuals online.
World added deepfake detection to its World ID digital identity in October.
Meanwhile, the influx of deepfakes is reshaping the financial industry, says deepfake detection technology maker Reality Defender. Deepfakes are increasingly being used to bypass traditional security measures, pushing companies toward multi-factor authentication (MFA) to prevent unauthorized access.
These defenses include liveness detection; analysis of audio, video and images for AI manipulation; staff training to recognize deepfake impersonations; and fraud reduction processes such as callback verification protocols. Multimodal detection tools are key for financial organizations, according to the U.S.-based company, which recently reached US$33 million in an expansion of its Series A funding round.
Finally, in 2025, organizations will seek to reduce the number of cybersecurity tools they use, switching to unified platforms, predicts Palo Alto Networks. The cybersecurity company believes the move to a more holistic security architecture will be driven by the need to reduce complexity, increase efficiency and overcome shortages in cybersecurity skills.
To fight the deepfake threat, however, organizations will eventually have to adopt quantum-resistant defenses, including quantum-resistant tunneling, comprehensive crypto data libraries, and other technologies, says Simon Green, the firm’s president for Asia Pacific and Japan.
“As quantum computing continues to become more and more of a reality and potential threats loom, it will be essential to adopt these measures to keep pace with the rapidly evolving cyber landscape, prevent data theft, and ensure the integrity of critical systems,” says Green.
Article: 2025 deepfake threat predictions from biometrics, cybersecurity insiders