Out-Law / Your Daily Need-To-Know

OUT-LAW NEWS

US child safety measures set out in new AI policy framework

The new policy framework follows on from an AI action plan set out by US president Donald Trump in July 2025. Chip Somodevilla/Getty Images.
Online platforms could face new child safety obligations in the US in the future under a new AI framework set out by the Trump administration. The policy highlights policymakers' continuing focus on child online safety globally, according to a technology law expert.

Lauro Fava of Pinsent Masons was commenting after the US government published a new national policy framework for AI (4-page / 234KB PDF).

The framework is broad in scope but high-level in detail. It sets out the Trump administration’s policy on everything from data centre development to copyright protection around AI training, as well as its regulatory approach to AI more generally.

Broadly, the US government is in favour of a sector-specific approach to AI regulation – like that in place in the UK – and does not want to see new bodies or “burdensome” rules – whether at federal or state level – enacted. However, its framework sets out exceptions to this, especially in relation to child online safety.

It has called on Congress to “empower parents and guardians with robust tools to manage their children’s privacy settings, screen time, content exposure, and account controls” and further “establish commercially reasonable, privacy protective, age assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors”.

AI platforms and services likely to be accessed by children should also be obliged, under US law, to “implement features that reduce the risks of sexual exploitation and self-harm to minors”, it said. Among other calls to action, the US government said Congress should also “affirm that existing child privacy protections apply to AI systems, including limits on data collection for model training and targeted advertising”.

Fava said the recommendations place emphasis on parental controls as a mechanism for protecting children online, and he highlighted the limitations of that approach.

“While parental controls can form part of a broader child-safety ecosystem, experts caution against over-reliance on parental responsibility,” Fava said. “Not all children have parents or guardians who are able or willing to engage with platform-specific controls, and many families face practical constraints. Parents are often overwhelmed by the sheer number and diversity of services their children use, may be unaware of all relevant platforms, may not fully understand how particular services function, and frequently struggle to manage multiple, fragmented control systems.”

Fava said specific challenges would arise for platforms if they relied on parental attestation as an age assurance method.

“While age attestation by another individual is theoretically possible, it is notoriously difficult to implement effectively in digital services,” Fava said. “Platforms face fundamental challenges in determining when an attestation is needed – i.e. whether the user is a child – and in identifying who is providing the attestation, as well as in verifying that the person attesting is in fact a parent or legal guardian. Many other, more robust, methods are available, such as age estimation based on selfies and age inference based on user activity.”
