As more regions explore teenage social media access restrictions, two core issues stand out as key impediments to their future viability.
First, there’s the need for workable age-checking systems that can actually keep younger teens out of social apps. Thus far, no system has proven entirely effective in this respect, and the lack of commitment to a defined solution leaves gaps in enforcement.
The second issue is the implementation of any restrictions at scale to ensure a level playing field for all online providers. This would also stop children from simply logging onto some other, potentially less secure, platform if they are locked out of the main apps.
The current experiments have failed to account for both of these aspects, which is why teen social media bans have so far been ineffective.
Last week, Australia’s eSafety Office released its first official report into that nation’s trailblazing under-16 social media ban, which has now been in place for four months.
The report, which incorporated feedback from 898 parents and carers of children aged 8 to 15 years, showed that 70% of children under the age of 16 are still accessing and using social apps. The report also said that there has been no decline in reports of online harm since the implementation of the law in December.
That would suggest that the experiment has thus far been something of a failure. In response to the report, the eSafety Commission said it will put more focus on ensuring compliance among the platforms, with a view toward enacting further action for violations by the middle of the year.
That could see more teens pushed off the apps, for sure. But Australia’s approach was doomed from the beginning by its non-definitive enforcement measures, which essentially put the onus on each company involved to try its best to comply with the nation’s new laws.
The official wording of Australia’s ban states that: “A provider of an age‑restricted social media platform must take reasonable steps to prevent age‑restricted users having accounts with the age‑restricted social media platform.”
Australian authorities haven’t designated an official age-checking process that all platforms must use, though the government did determine, through its initial exploration of more than 60 technologies from 48 age assurance vendors, that there are viable ways for platforms to implement age checks.
But without a designated provider to ensure that the same rules apply to all platforms, there will be gaps in enforcement, depending on how each platform defines “reasonable steps.” That will make it increasingly difficult for authorities to actually penalize any platform under the legislation.
On that front, authorities in Ireland are already looking to take a different approach, by potentially implementing a new, universal digital ID system to facilitate age verification, according to Bloomberg. Authorities in Ireland are also considering an under-16 social media ban, and Bloomberg reported that this new system would involve a more uniform, prescribed approach to age verification, eliminating confusion around platform requirements.
That would solve the issue of enforcement, though as Meta has repeatedly proposed, a potentially even better solution would see age checks implemented at app store level, which would mean that users could verify their age once, then have that process apply to all downloads.
That also addresses the other core concern: targeting only selected platforms for age checks leaves other, likely less secure platforms as alternatives.
Many in favor of age restrictions seem to be of the belief that, if kids are locked out of social apps, they’ll stop using online services entirely, leading to a return to simpler days of bike rides and social gatherings. But that’s not realistic.
Online connection is now a central element of how young people socialize. That was further solidified by the COVID-19 pandemic, when global lockdowns forced kids to maintain social interaction entirely online. Meanwhile, gaming has become so central to youth culture that it’s virtually impossible to see that activity being reduced for future generations.
The way we socialize has changed, and if people are prevented from using one app or platform, they’ll just switch to another. So authorities can either take a “Whack-a-Mole” approach to legislation, and keep adding in the next platform to their rules, or they can look to address this element from the beginning, with more structural reform that doesn’t single out the platforms they see as problems right now.
App store-level verification addresses this, with the developers of each platform then required to register in appropriate age-level brackets. Apple and Google are keen to avoid this, as it puts the onus on them to take action, but it makes more sense than the current approaches, which see each platform implementing its own “reasonable steps” to align with the law.
The final issue, then, is deciding which age-checking process is the most effective, and which one should be implemented as the solution for all platforms to abide by.
None of them are perfect, and as Australia’s example shows, many digitally native teens are adept at breaking through checking systems, no matter how advanced those systems may be.
That’s the real dilemma that should be up for debate. The other problems can be addressed through an improved regulatory process and scaled implementation that applies to all platforms and providers.
Most of the current proposals fall short on at least one of these points, and until there’s agreement on a universal age-checking approach, the rest is largely irrelevant.