Out-Law News
Australia's online safety regulator is expected to closely monitor compliance with the ban. Anna Barclay/Getty Images
10 Dec 2025, 12:55 pm
Australians can now be required to provide information to verify their age, using methods offered by platforms such as TikTok, YouTube, Snapchat, Reddit, Instagram and Facebook, if they want to continue logging in. These methods could include facial recognition, identity documents or other means of establishing that a user meets the minimum age.
Veronica Scott, an expert in digital communications and privacy law at Pinsent Masons, said: “From today, Australia will introduce its minimum age laws for social media. In effect this is a ban on under-16s having accounts with many of the social media platforms they would normally use.”
The onus of verifying the age of users is on the platforms themselves, but the laws do not specify what technology or methods they must use. Instead, these platforms must now take ‘reasonable steps’ to prevent users under the age of 16 from having accounts, although children will still be able to access the sites without logging in.
“We expect compliance will be closely monitored by Australia’s online safety regulator. Failure by platforms to implement the minimum age restrictions could result in significant penalties. Other countries have been watching Australia’s plans to see what happens and are starting to consider how they can impose similar restrictions to address online harms to children, but not necessarily in such a blunt way,” she said.
Scott added: “There has been a lot of debate about what impact the ban will have. It is a blunt approach because it imposes a one-age-fits-all rule, without considering a phased approach or other options such as requiring parental consent, or taking into account the nature of the content or the safety features platforms offer. Also, children will still be able to access the platforms and their content, but won’t be able to have an account or share content.”
“The government isn’t aiming for perfection, but the technology and age assurance methods do have their limitations,” she said. “Children will try to find ways around it and parents may not enforce it.”
“Platforms are expected to detect and deactivate or remove existing underage accounts with care and clear communication, and to prevent re-registration or circumvention by users whose accounts have been deactivated or removed. Platforms are helping children prepare by enabling them to download their data before accounts are removed,” she said.
Platforms cannot rely on users’ self-declaration of age alone, which is not considered sufficient to meet the obligation under the law. According to guidance released by Australia’s online safety regulator, platforms must also offer an accessible review mechanism for users who believe they have been wrongly flagged as under 16.
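Neither the legislation nor the regulator’s guidance prescribes a particular technology for meeting these requirements. The sketch below is a purely illustrative way of expressing the two rules reported here, namely that self-declaration alone is insufficient and that a review path must exist; the signal names, confidence weights and the assess_account and request_review functions are hypothetical assumptions, not any platform’s actual implementation or a method endorsed by the regulator.

```python
from dataclasses import dataclass

# Hypothetical age-assurance sketch. Signal names, confidence values and
# thresholds are illustrative assumptions only; the Australian law and the
# eSafety Commissioner's guidance do not prescribe any particular method.

@dataclass
class AgeSignal:
    """One source of age evidence, e.g. self-declaration or facial estimation."""
    source: str           # "self_declaration", "facial_estimation", "id_document", ...
    estimated_age: float
    confidence: float     # 0.0 to 1.0: how much weight the platform gives this source

@dataclass
class AccountDecision:
    allowed: bool
    reason: str
    review_available: bool = True  # the guidance requires an accessible review path

def assess_account(signals: list[AgeSignal], min_age: int = 16) -> AccountDecision:
    """Decide whether an account may be kept, using more than self-declaration."""
    # Self-declaration on its own is explicitly insufficient under the guidance.
    independent = [s for s in signals if s.source != "self_declaration"]
    if not independent:
        return AccountDecision(False, "self-declaration alone is not sufficient")

    # Confidence-weighted average of the independent signals (an illustrative
    # heuristic, not anything mandated by the regulator).
    total = sum(s.confidence for s in independent)
    weighted_age = sum(s.estimated_age * s.confidence for s in independent) / total

    if weighted_age >= min_age:
        return AccountDecision(True, f"estimated age {weighted_age:.1f} meets minimum {min_age}")
    return AccountDecision(False, f"estimated age {weighted_age:.1f} is below minimum {min_age}")

def request_review(decision: AccountDecision) -> str:
    """The review mechanism for users who believe they were wrongly flagged."""
    if not decision.allowed and decision.review_available:
        return "Review opened: the user may submit additional age evidence."
    return "No review required."

if __name__ == "__main__":
    signals = [
        AgeSignal("self_declaration", estimated_age=18, confidence=0.2),
        AgeSignal("facial_estimation", estimated_age=15, confidence=0.7),
    ]
    decision = assess_account(signals)
    print(decision.reason)            # estimated age 15.0 is below minimum 16
    print(request_review(decision))   # the review path the guidance requires
</parameter>
```

In practice, platforms are trialling a far wider range of age assurance techniques, and any real system would also have to satisfy the privacy and data security obligations discussed below.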
Some platforms are excluded from the ban, including those whose primary purpose is messaging, email, education, professional development or health.
Scott said: “This will have a big impact on how children can communicate online and on the level of personal information that will need to be collected to enforce the minimum age. This means it will also create privacy and data security risks, which the law expects platforms to manage strictly; compliance with these obligations will be overseen by the Australian privacy regulator.”
“While it is also expected that the market will respond with the emergence of more tailored age-appropriate services, addressing the complex issue of technology use by children and enabling their safe access to online communication and relevant content will not be an overnight fix,” she said.