
Age verification comes to social media as age of unregulated use nears an end

If trends continue, social media is set to follow the path of cigarettes: an activity that benefited early on from lax age verification rules, but was identified over time as a threat to public health, leading to increasingly tight regulation. Moves to impose appropriate age restrictions on social media platforms have led some companies to tighten up their age assurance safeguards. But, as in the case of cigarettes, they have also prompted increased lobbying efforts funded by big tech companies, which have much to lose if their youthful user bases shrink.

UK report finds improvements in wake of kids code

A new report says legislation and increased regulation in the UK are having an impact, as social media firms make what researchers call a “flurry of improvements” to protect children’s safety and privacy. A release from the London School of Economics and Political Science (LSE), which collaborated on the report via its Digital Futures for Children center, says its team found 128 changes related to child safety and privacy made by Meta, Google, TikTok and Snap between 2017 and 2024.

Many appear to have been driven by the adoption of the Age Appropriate Design Code (AADC) in 2021. That year saw the four companies make 42 changes combined.

More than 60 percent of the 128 changes were made to default settings, meaning the products became safer out of the box rather than relying on users to find and enable protections themselves.

“This report illustrates the effective impact that regulation is having in protecting children’s safety and privacy online,” says Steve Wood, founder of PrivacyX Consulting, ex-deputy information commissioner and an author of the report. “The research highlights a shift towards substantive design changes that build in safeguards by default – from private account settings to restrictions in targeted advertising.”

There are still concerns, however, that reliance on parental controls (assessed as the second-most common variety of privacy measure) means many of the would-be benefits go unrealized in practice, since parents do not always switch them on.

Wood says the team intends to repeat the research in 2025 to assess further progress. In the meantime, the researchers have made eleven recommendations to improve child safety legislation and regulations, which are included in the full report, “Impact of regulation on children’s digital lives.”

Australia’s $6.5 million age assurance trial kicks off

Australia’s experiment with age assurance is set to begin. The AU$6.5 million (US$4.3 million) trial, says a LinkedIn blog from the Age Verification Providers Association (AVPA), encompasses “both age verification and age estimation technologies, to explore their efficacy in protecting children from encountering pornography and other high-impact online content.”

The AVPA hopes the trial will address some of the persistent questions dogging age assurance technologies: whether it puts users’ privacy at risk, how much it costs, and whether kids will simply use VPNs to circumvent age verification measures. “It is essential that a wide range of stakeholders across the states and territories are closely involved in the design, operation and evaluation of any trial,” says the blog. The AVPA recommends that civil society groups, trade associations, a range of regulators and the affected platforms, apps and websites all be represented on an advisory board.

The organization warns against simply repeating trials run elsewhere, particularly in the EU. It notes that eSafety’s age verification roadmap has already considered the results of a similar exercise funded by the European Commission in 2021-22, which became euConsent’s AgeAware: “That was a simple proof of concept which showed it was possible to do an age check once for one website, and then re-use it across other sites, even if their age assurance service was supplied by a competing provider. While the results of that were very positive, there is no point in simply repeating it in Australia, as this would deliver very little marginal benefit.”

Australia has indicated that it is moving in the direction of what eSafety Commissioner Julie Inman Grant calls a “double-blind tokenized approach.” The AVPA notes that euConsent has announced its intention to have its AgeAware program “guide the age assurance industry towards the use of an ecosystem that relies on age assurance providers creating tokens which users can choose to retain on their smartphone, tablet, PC or even any other connected device, and digital services can then interrogate to confirm if the user meets their age requirements.”
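To make that token flow concrete, here is a minimal sketch of how such an ecosystem could work. Everything in it (the issue_token/verify_token split, the claim fields, the use of an HMAC) is an illustrative assumption rather than euConsent’s actual design; a real deployment would use standardized token formats and public-key signatures so relying sites can verify tokens without being able to forge them.

```python
# Minimal sketch of a "double-blind" tokenized age check. Hypothetical API;
# a stdlib HMAC stands in for the public-key signatures a real scheme would use.
import hmac
import hashlib
import json
import os
import time

PROVIDER_KEY = os.urandom(32)  # secret held by the age assurance provider


def issue_token(threshold: int, threshold_met: bool) -> dict:
    """Provider checks the user's age once, then issues a reusable token.
    The token carries only a yes/no claim about an age threshold,
    never the user's identity or date of birth."""
    claim = {"over": threshold, "result": threshold_met, "iat": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def verify_token(token: dict, required_age: int) -> bool:
    """A website or app checks the signature and the age claim. It learns
    nothing about who the user is, and the provider is never told which
    service asked -- the two sides are "blind" to each other."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered with, or not issued by a trusted provider
    return token["claim"]["result"] and token["claim"]["over"] >= required_age


# The user keeps the token on their device and re-uses it across sites,
# even ones whose age assurance is supplied by a competing provider.
token = issue_token(threshold=16, threshold_met=True)
print(verify_token(token, required_age=16))  # True
print(verify_token(token, required_age=18))  # False: a 16+ token can't prove 18+
```

The “double-blind” property comes from the separation of roles: the provider that issues the token never learns which services later check it, and the services that check it receive only a signed yes/no claim, never the user’s identity.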

Social media age assurance debate could keep the door open for porn

Most people would agree that hardcore pornography should have age restrictions. But the issue is not as cut-and-dried with social media platforms: most technically have age restrictions (Instagram, for example, employs age assurance from Yoti), yet many also host a large chunk of users who are too young under the platforms’ own rules.

The Australian trial has some wondering if the now-standard cutoff age of 13 for social media users should be raised to 16. Even then, says Iain Corby, executive director of the AVPA, younger users might still get through. “For facial age estimation, the average error of the best in class is a year and a half, so if we were trying to control access for 16-year-olds, you’d have to expect a fair number of 14- and quite a few 15-year-olds to get through,” Corby says.
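Corby’s point can be sanity-checked with a back-of-the-envelope simulation. The sketch below assumes, purely for illustration, that estimation error is zero-mean and normally distributed with the 1.5-year mean absolute error he cites; real error distributions vary by age, demographic group and vendor.

```python
# Back-of-the-envelope simulation of facial age estimation at a 16+ gate.
# Assumption (ours, for illustration): zero-mean normal error whose mean
# absolute error matches the 1.5-year best-in-class figure quoted above.
import math
import random

MAE = 1.5                              # quoted mean absolute error, in years
SIGMA = MAE / math.sqrt(2 / math.pi)   # normal sigma giving that MAE (~1.88)
GATE = 16                              # age threshold being enforced


def pass_rate(true_age: float, trials: int = 100_000) -> float:
    """Fraction of users of a given true age whose *estimated* age clears the gate."""
    rng = random.Random(42)  # fixed seed for reproducibility
    passes = sum(true_age + rng.gauss(0, SIGMA) >= GATE for _ in range(trials))
    return passes / trials


for age in (14, 15, 16, 17):
    print(f"true age {age}: ~{pass_rate(age):.0%} estimated as {GATE}+")
```

Under that assumed model, roughly a seventh of 14-year-olds and nearly a third of 15-year-olds would clear a hard cutoff at 16, consistent with Corby’s warning; it is also one reason deployed systems often estimate against a buffer age and fall back to stronger verification for users near the threshold.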

But some think adding social media restrictions to the mix could be a stumbling block for moves to curb access to porn sites in Australia, access that government officials have linked to a national crisis of violence against women. A piece in the Courier Mail quotes Melinda Tankard Reist, director of the Collective Shout movement, who believes “the issue is too urgent” to lump in with social media. “Big tech companies have caused untold harm,” she says. “Every day of delay means millions more children are exposed.”

New York child safety bills face nearly $1M in lobbying spending

Social media firms are showing their less compliant side in the U.S. The New York Post reports that Google and Meta are spearheading “a fierce push to kill New York legislation aimed at protecting children online.”

According to the Post, big tech firms and their allies have already spent $823,235 lobbying lawmakers to kill Senate Bill S7694 to establish the Stop Addictive Feeds Exploitation (SAFE) for Kids Act and Senate Bill S7695A to establish the New York Child Data Protection Act. Among other things, the laws are designed to tighten the rules around algorithmic feeds for younger users.

Danny Weiss, chief advocacy officer at Common Sense Media and a supporter of the bills, observes that “they are spending a lot of money to oppose these bills, as if they pose an existential threat to New York.”

The top spender in the lobbying process? Meta, the parent company of Facebook and Instagram, which issued a statement opposing the laws on the rather thin grounds that “teens move interchangeably between many websites and apps, and different laws in different states will mean teens and their parents have inconsistent experiences online.”
