Out-Law News

Australia’s online safety regulator releases guidance on social media ban for under 16s


Australia’s online safety regulator has published guidance on how social media platforms such as TikTok, YouTube, Snapchat, Instagram and Facebook can comply with upcoming age assurance requirements.

The guidance provides more clarity on how regulated platforms can comply with Australia’s social media minimum age obligation.

The eSafety Commissioner has released its long-awaited regulatory guidance (55-page / 596.63kb PDF), which will require platforms to manage and maintain their own age assurance technology and policies.

From 10 December, social media platforms will be required under part 4A of the Online Safety Act 2021 to take ‘reasonable steps’ to prevent users under the age of 16 from having accounts, although under-16s will still be able to access the sites without logging in.

The guidance is principles-based: it does not prescribe any specific technology that must be used and encourages platforms to take a layered approach to age verification, according to the eSafety Commissioner.
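In practice, a layered approach might combine a weak signal such as self-declared age with stronger estimation or verification signals, deciding confidently only when a strong signal lands clearly on one side of the threshold. The Python sketch below is purely illustrative: the signal names, reliability weights and two-year buffer are assumptions, not anything set out in the guidance.

from dataclasses import dataclass

@dataclass
class AgeSignal:
    source: str           # e.g. "self_declaration", "facial_estimation", "id_check"
    estimated_age: float  # age in years implied by this signal
    strength: float       # assumed reliability weight: 0.0 (weak) to 1.0 (strong)

MINIMUM_AGE = 16
BUFFER_YEARS = 2  # assumption: treat estimates near the threshold as inconclusive

def assess(signals: list[AgeSignal]) -> str:
    """Return 'allow', 'restrict' or 'escalate' from layered signals."""
    # Self-declaration alone is explicitly insufficient under the guidance,
    # so it never decides the outcome on its own.
    corroborated = [s for s in signals if s.source != "self_declaration"]
    if not corroborated:
        return "escalate"  # request a stronger check, with alternatives to ID
    strongest = max(corroborated, key=lambda s: s.strength)
    if strongest.estimated_age >= MINIMUM_AGE + BUFFER_YEARS:
        return "allow"
    if strongest.estimated_age < MINIMUM_AGE:
        return "restrict"  # deactivate or remove with care and clear communication
    return "escalate"      # borderline: escalate to a stronger layer

# A borderline facial-age estimate escalates rather than deciding outright.
print(assess([AgeSignal("self_declaration", 18.0, 0.1),
              AgeSignal("facial_estimation", 16.5, 0.7)]))  # -> escalate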

Veronica Scott, an expert in privacy law at Pinsent Masons, said: “How platforms identify and engage with existing underage users will be a priority, with no legally enforced minimum standards or technology identified by the government. Platforms are expected to deal with existing underage users in a careful and kind manner.”

“Measures need to be reliable, robust and effective, and can’t remain static. This will require a range of measures, including in-house or third-party technology,” she said.

“A range of test data has been released from the trial, which should provide helpful contextual guidance on the effectiveness of different methods of age verification.”

Platforms are expected to detect and deactivate or remove existing underage accounts with care and clear communication, and prevent re-registration or circumvention by underage users whose accounts have been deactivated or removed.

Platforms cannot rely on users’ self-declaration of age alone, which is not considered sufficient to meet the legal obligation. An accessible review mechanism must also be offered for users who believe they have been wrongly flagged as under the age of 16, according to the guidance.
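What such a review mechanism looks like in practice is left to platforms. A minimal sketch, assuming a simple flag-and-appeal model; the state names and re-check step here are illustrative, not drawn from the guidance:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewRequest:
    user_id: str
    reason: str              # the user's explanation of why the flag is wrong
    status: str = "pending"  # pending -> upheld / overturned
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def open_review(flagged_users: set[str], user_id: str, reason: str) -> ReviewRequest:
    # Only users who actually carry an under-16 flag can seek review.
    if user_id not in flagged_users:
        raise ValueError("no active under-16 flag to review")
    return ReviewRequest(user_id=user_id, reason=reason)

def resolve(review: ReviewRequest, passed_recheck: bool,
            flagged_users: set[str]) -> None:
    # A fresh age check, ideally via a different method from the one that
    # produced the flag, decides the outcome; the flag is lifted on a pass.
    review.status = "overturned" if passed_recheck else "upheld"
    if passed_recheck:
        flagged_users.discard(review.user_id)

flagged = {"user-123"}
request = open_review(flagged, "user-123", "estimate was wrong; I am 17")
resolve(request, passed_recheck=True, flagged_users=flagged)
print(request.status, flagged)  # overturned set()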

Platforms cannot use government ID as the sole method of age verification and must always offer reasonable alternatives. The eSafety Commissioner has not mandated any specific technology or measures, instead placing the onus on the platforms themselves to prove they are taking reasonable steps.

Elly Krambias of Pinsent Masons said: “Platforms should consider what systems they can leverage, and what additional measures will need to be put in place to comply with these requirements.”

Platforms are also not expected to keep personal information from individual age checks, with record-keeping focusing on systems and processes rather than user-level data. The guidance also states that platforms should not automatically transfer underage users to other services without explicit user consent or opt-in.
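On record-keeping, one reading of that expectation is that records describe the system rather than the people passing through it. A minimal sketch, assuming hypothetical counter names; nothing here comes from the guidance itself:

from collections import Counter

class AgeAssuranceMetrics:
    """Aggregate, non-identifying records about the age check system itself."""
    def __init__(self) -> None:
        self.methods = Counter()   # which verification methods were used
        self.outcomes = Counter()  # allow / restrict / escalate tallies

    def record_check(self, method: str, outcome: str) -> None:
        # Record what happened, never who it happened to: no user IDs,
        # images, documents or dates of birth are retained.
        self.methods[method] += 1
        self.outcomes[outcome] += 1

metrics = AgeAssuranceMetrics()
metrics.record_check(method="facial_estimation", outcome="escalate")
metrics.record_check(method="alternative_check", outcome="allow")
print(metrics.outcomes)  # Counter({'escalate': 1, 'allow': 1})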

The guidance builds upon the recently released self-assessment guide, which will help platforms and services determine if they are age-restricted social media platforms. It also sets expectations for ongoing monitoring, transparency and fairness in relation to how platforms detect and deactivate underage accounts.

A review of the legislation will be required within two years of it taking effect on 10 December. The review will examine whether the measures introduced are effective and delivering the desired outcomes, or whether any changes to the scope or the minimum age are required.

Findings from the age assurance trial (170-page / 12,745kb PDF), released in early September, included that age assurance can be done in Australia privately, efficiently and effectively; that there are no substantial technological limitations preventing its implementation; and that, while a range of approaches exists, there is no one-size-fits-all solution.

The Australian government will also introduce a Children’s Online Privacy Code by December 2026, aimed at enhancing privacy protections for children. The code will apply to social media services, messaging apps, online games and cloud storage platforms, as well as any other service likely to be accessed by children.
