The United Kingdom has finally passed its controversial Online Safety Bill into law. The act will require companies to keep children away from certain content by enforcing age limits and age-checking measures, including biometric technology.
Under the new law, pornography sites will be required to prevent children from viewing content by checking their age. Platforms will also have to remove both illegal content and legal but potentially harmful content. This includes child sexual abuse material, animal cruelty, and illegal drug and weapon sales, as well as posts promoting suicide and self-harm, illegal immigration and terrorism.
The law also makes it easier to convict people who share intimate images or deepfakes without consent.
Companies will need to comply with a long list of new requirements. Those that fail to do so can expect fines of up to 18 million pounds (US$22.3 million) or 10 percent of their annual global turnover, whichever is greater. In some cases, their bosses may even face prison.
Enforcement of the act will be left to the UK’s communications regulator, Ofcom, which says it plans to draw up codes of conduct for companies, with its first draft coming on 9 November, the BBC reports.
The Online Safety Bill has found strong support among campaigners for children’s safety, including the National Society for the Prevention of Cruelty to Children (NSPCC) and the Equality and Human Rights Commission. Technology experts, however, have been raising concerns about the implications for privacy and free speech.
Wikipedia, one of the UK’s most visited websites, has said it won’t be able to comply with the rules because they violate the Wikimedia Foundation’s principles on collecting data about its users. Wikipedia founder Jimmy Wales has attacked the government’s approach as “age-gating” and selective censorship.
In a statement to Biometric Update, Julie Dawson, Chief Policy and Regulatory Officer at age verification provider Yoti, said that the Online Safety Act is not about excluding children from the internet but about giving them an age-appropriate experience.
“Effective age assurance technology can now make this a reality. The Act will also enable users to control what content they see and which users they interact with,” says Dawson.
While the law may turn out to be a boon for biometrics and age verification companies, legal experts have noted that compliance will require social platforms and websites to be more transparent and proactive in explaining their content moderation, which could become a significant burden.
“They have to put all of those processes in place as to how their decisions will be made, or they risk actually being seen as a platform that is controlling all kinds of free speech,” Emma Wright, technology lead at the law firm Harbottle & Lewis, told Wired.
The law has also been met with criticism for excluding disinformation and misinformation from moderation; according to Wright, those issues are handled by different government departments.
The most divisive part of the law, however, has been its intention to force messaging platforms to scan encrypted messages for child sexual abuse material. Platforms like WhatsApp, Signal and iMessage have threatened to leave the UK, saying they cannot access anybody’s messages without destroying privacy protections for all users.
The UK government has tried to provide reassurance, saying that Ofcom would only ask tech companies to access messages once “feasible technology” had been developed.