Australia limits social media for kids
At the end of November, after just a week of public debate, the Australian Parliament passed a law requiring age verification for some social media applications, with the House of Representatives voting 102 to 13.
The Online Safety Amendment (Social Media Minimum Age) Act 2024, which will take effect in 12 months, amends the Online Safety Act 2021 to require that providers of age-restricted social media platforms “take reasonable steps to prevent children who have not reached a minimum age from having accounts,” or face a civil penalty of 30,000 penalty units, or 150,000 penalty units for corporations.
(Penalty units are periodically revalued by Australia’s federal, state and territory governments. At press time, 30,000 penalty units were worth more than U.S. $6 million; 150,000 penalty units were worth more than U.S. $30 million.)
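For a rough sense of the arithmetic – assuming the Commonwealth penalty-unit value of A$330 that took effect in November 2024 and an exchange rate of roughly 65 U.S. cents to the Australian dollar – 30,000 × A$330 = A$9.9 million, or about U.S. $6.4 million, while 150,000 × A$330 = A$49.5 million, or about U.S. $32 million.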
For the applications affected by the law, the minimum age is 16.
Making kids safer?
Proponents of the bill, like Sen. Maria Kovacic of New South Wales, argued that their chief concern was children’s safety, and asserted that “safeguards” in the bill should reassure Australians concerned about data privacy, security and mandatory participation in the newly rolled-out national digital ID system. “This bill is not about restricting individual freedoms or lecturing parents on how to raise their children,” Sen. Kovacic said. “It’s about giving parents the tools and support they need to protect their kids from the very real dangers that are online.”
The Press spoke with Chris Ferry, executive director of Pennsylvania community programs and North Carolina residential programs for KidsPeace, about the new Australian law. KidsPeace is a nonprofit organization serving the behavioral and mental health needs of children, families and communities, and includes a psychiatric hospital, a range of residential treatment programs, accredited educational services and several foster care and community-based treatment programs. The charity has served more than 300,000 children since opening its doors in 1882.
Ferry notes that 64 percent of the world’s population uses at least one social media platform, with adults logging two and a half to four hours per day, and teens more than five hours daily.
“The harms of social media are well documented,” Ferry points out, naming cyberbullying and negative body image as two examples. However, he explains that 16 might not be the best age at which to stop worrying: “The brain doesn’t finish developing until mid to late 20s, and the last part to develop is the prefrontal cortex, which enables executive function.” Therefore, young people’s ability to develop and improve their interpersonal skills can be compromised by over-reliance on social media far beyond age 16.
And perhaps over-reliance, rather than social media itself, is the problem. Ferry points to the ancient Greek maxim “Everything in moderation, nothing to excess” and says that social media can be a positive experience for young people if used appropriately, and with limits.
Among the good things social media provides, Ferry says, are “self-affirmation, identity, self-expression, joining like groups, reconnecting with friends, sharing interests and activities.”
The problem with an outright ban, Ferry says, is that “it’s a hard line.” He adds, “I don’t think it’s if, but when teens in Australia find another route to access social media, through a different platform or through false identities.” Fear of missing out – known as “FOMO” – is also heightened when something is banned, he explains.
Ferry believes guidelines and parameters set and enforced by parents are a better route to helping young people develop a healthy relationship with social media. In his own family, Ferry uses rules like putting phones away during dinnertime, or when family comes to visit. By allowing teens to use social media, but keeping sensible limits on it, parents can build an environment of trust in which important issues are discussed. “If you keep the communication open, kids will tell you what they’ve seen,” Ferry says – something that might not happen if children are accessing a banned service through illicit means. Being consistent about the rules – and modeling good behavior as parents – helps: “If you put your phones away, you can have those conversations.”
Privacy and efficacy concerns
The Australian law requires affected social media companies to offer users an alternative to the national digital ID, such as facial scanning, to prove that they are 16 or older. This requirement did not assuage the concerns of MP Kylea Tink of North Sydney, who listed her constituents’ objections as including “sharing of data with third-party verification services, the creation of databases linking real identities to online accounts, requirements for platforms to track Australian user location and the creation of infrastructure [for] content monitoring – not to mention the potential for commercial surveillance of Australians.”
The Electronic Frontier Foundation, a nonprofit civil liberties organization, has raised concerns about age verification legislation in the United States over the past year, including Texas’s HB 1181 and New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act.
Paige Collings of EFF spoke with the Press about the group’s concerns about the privacy implications of what she termed “blanket bans.” Collings brought up the United Kingdom’s Online Safety Act 2023, which requires search services and user-generated content providers to use age verification to “prevent children from accessing harmful and age-inappropriate content.”
“There’s no doubt,” Collings acknowledges, “that this stuff is on social media, and it makes sense that parents and guardians don’t want to expose their children to this content.” However, she says, “Prohibiting [children below a certain age] from accessing the internet will not get rid of these harms [such as violence, self-harm, and demeaning sex stereotypes] in society, and it won’t prevent children from being exposed to them.” Material that many parents find objectionable is also available in print media, on television and in in-person interactions. “Just blocking children from this technology feels like such a shortsighted approach that is not going to mitigate the harms,” Collings explains.
The problem Collings sees is that “these blanket bans are so broad, yet they don’t actually tackle the issue.” She notes that the list of specific companies subject to new “safety” requirements is left up to regulators, such as Ofcom in the United Kingdom, and that safeguards such as deletion requirements are not included in the laws. In Australia, she points out, “they’re [in the process of] testing age verification strategies, and they passed a bill where they don’t know which technologies they’re going to use.” The Australian federal government funded a trial of age verification technologies in its May 2024 budget.
Meanwhile, lawful users bear the burden of exposing personal data to potential abuse and of sacrificing online anonymity, with a resulting chill on free speech. In its friend-of-the-court brief opposing the Texas age verification law, EFF raised several issues: online age verification affects all users, unlike in-person age verification for, say, alcohol sales, which applies only to customers who appear younger than 35; any data collected online is subject to potential (and likely) security breaches, as well as corporate and/or government sale or abuse; tens of millions of Americans lack valid driver’s licenses or other government-issued ID; and face-based age estimation tools are inaccurate.
Free speech issues at the ISP level
More disturbing to EFF is regulation of content at the internet service provider (ISP) level of the technology stack. Collings points to S-210, a bill now before the Canadian Parliament. If enacted, S-210 would (among other things) require internet service providers – the companies, such as AT&T and RCN, that provide internet access on computers and mobile phones – to either block adult content or verify the age of all customers, with the aim of ensuring that underage users cannot view sexually explicit material.
Collings notes that, as a proposed remedy for a very specific harm – children being exposed to sexually explicit material – the Canadian bill grants the government broad powers and imposes age verification on a very wide range of companies. “It’s got an extraordinarily broad parameter,” she says, explaining EFF’s position: “What we never want is for internet infrastructure intermediaries to regulate content at all. This specific bill in Canada doesn’t make a distinction: [age verification requirements are imposed on companies] hosting or transmitting [adult content].”