With general elections set to begin in April, India continues to crack down on deepfakes – by threatening the platforms used to share them. Deputy IT Minister Rajeev Chandrasekhar has warned tech companies that the government is ready to impose bans if they don’t put measures in place to detect and remove deepfakes, according to The Print. He says the ministry plans to amend IT regulations to establish rules requiring platforms to effectively fight against deepfakes.
“If a platform thinks that they can get away without taking down deepfake videos, or merely maintain a casual approach to it, we have the power to protect our citizens by blocking such platforms,” Chandrasekhar said at a press conference.
The government has already issued an advisory for apps to amend their terms of service accordingly. It will explicitly require the changes by law if apps aren’t doing enough to comply. “These are not things based on their ‘best effort’…these are illegalities and harms and cannot remain on your platforms,” said the minister.
The notice comes after international cricketer Sachin Tendulkar said a replication of his voice and face was used in an ad for a trading app. At the end of last year, Narayana Murthy, the co-founder of the prominent Indian IT company Infosys, warned the public after a deepfake of his likeness was used in a similar manner. Prominent Hindi film actors Alia Bhatt and Rashmika Mandanna have also been the targets of deepfakes.
At MediaNama’s virtual event this week, Deepfakes and Democracy, tech leaders in India discussed the threats and challenges that deepfake technology poses to elections and the national government more broadly.
Complications in combating deepfakes
So far, India’s efforts to fight deepfakes, during election season and otherwise, have placed the legal burden on platforms. But what are officials doing to hold the perpetrators themselves accountable?
Very little, it seems. Shivam Shankar Singh said during the MediaNama event that political ads are supposed to be approved by all “concerned parties,” and that when they are not, legal responsibility falls on the platform for failing to obtain that approval.
He noted that users who spread misinformation can be deplatformed after posting deepfake ads and are subject to criminal penalties, though these measures haven’t been effective so far. Some politicians have had cases filed against them, but those cases are typically withdrawn depending on who won the election. “These files just don’t go anywhere after the election is done because no one really cares about pursuing them,” said Singh.
Politicians have tried to sway elections through the use of violence and distributing liquor or money with little legal consequence, so it follows that leveraging deepfakes in campaigning will similarly go unchecked.
Some also worry that heavy-handed enforcement and frivolous reporting will curtail lawful freedom of expression, with anti-deepfake takedowns hitting content that either contains no deepfakes or is satirical in nature and marked as such.
The issue is further complicated by the fact that some deepfake content may falsely label itself as satire, which raises the question: when should satirical content be fact-checked?
Some user handles claiming to be satire are “actually basically political shields pushing out some political narrative… using satire as a smokescreen to hide what they want to. When we see something which is satire we don’t touch it,” said BoomLive editor Jency Jacob.
Content could also be cut and blended together to remove the satire label, mislead people, and misappropriate the original poster’s comments in a new post. The original content was “always meant to be satire, but when it’s passed forward and made viral on various other platforms, that labeling has been removed,” he noted. “That’s where we come in and say ‘This was meant to be satire by the original user.’”
Accurate labeling requires being able to distinguish content made with generative AI from real recordings, but presenters suggested that watermarks and deepfake detectors have yet to prove reliable.
Deepfakes and identity verification problems
Deepfakes also pose a threat to India Stack, the country’s digital public infrastructure (DPI), which uses the biometric Aadhaar digital ID for its identity layer. Today, the country relies on the system for biometric authentication to grant access to benefits, healthcare, financial transactions, taxes and other essential services.
There have already been a number of cases of fraud and other major security issues in the system. For instance, Aadhaar card data was found on the dark web at the end of last year. All of the processes that require identity verification could be further undermined by deepfake technology.
Deepfakes “will have a massive impact” on scams, said Saikat Datta, CEO of DeepStrat, during the event. “The whole infrastructure that we have built on ensuring that there is identity verification,” such as video KYC, “is now threatened in a big way,” because of deepfakes.
One promising tool is liveness detection, which can prevent spoofing, a class of attack in which fraudsters leverage something as small as a fingerprint lifted from a surface to access a victim’s accounts. Some studies show liveness detection can identify deepfakes with high accuracy and can evolve as deepfake engines grow stronger.
Article: India threatens to block platforms for spreading deepfakes ahead of elections