OUT-LAW GUIDE
The risks addressed by child online safety regulation
02 Mar 2026, 10:56 am
Young people today spend significant time online communicating with friends, completing school assignments, and playing games. Much of the emerging digital regulation globally is driven by the desire to ensure minors under the age of 18 are able to access age-appropriate digital experiences and to protect them from a broad range of risks that may be associated with online activity.
While there is consensus across numerous jurisdictions and services on the need to take robust action against certain material and activity, such as child sexual abuse material and terrorist content, there are broader categories of risk that vary by jurisdiction and are influenced by legal and cultural norms on appropriate speech and conduct.
Read more from Pinsent Masons on child online safety
- The UN principles shaping child online safety regulation
- UAE introduces new online restrictions with child safety legislation
- UK to consult on social media ban for children
- Under 16s social media ban now in force in Australia
Further complexity is added by the varying capacities of children, both within and across age groups. The definition of a “child” can vary by country, and age limits can be set differently for different activities. For example, the UN Convention on the Rights of the Child (UNCRC) refers to children as individuals under 18 years of age, while recognising that states may set a different age of majority in national law.
In the EU, the General Data Protection Regulation (GDPR) generally sets 16 as the age below which a ‘child’ cannot in principle consent to the processing of their personal data in the context of using online services, but each EU member state has flexibility to set that age anywhere between 13 and 16. In the US, the Children’s Online Privacy Protection Act (COPPA) requires parental consent for the collection or use of the personal information of children under 13.
Technology platforms have implemented a range of approaches to address these risks. Research and analysis into the risks and impacts for young people of spending time online are still at an early stage, and we expect the approaches taken by services and regulators to change further as more robust evidence of longitudinal impacts develops. Notwithstanding the lack of consensus on the appropriate legislative approach, in this guide we summarise some of the potential risks to children targeted by existing and developing legal and regulatory regimes.
The OECD’s typology of risks
In setting out the risks below, we have drawn on the OECD’s typology of risks (27-page / 1.19MB PDF), revised in 2020, which broadly categorises the risks to young people of being online. The EU adopted this approach when issuing its guidance on the protection of minors under Article 28 of the Digital Services Act. In the UK, statutory guidance for schools on keeping children safe in education groups risks into four of the five categories identified by the OECD. In this guide, therefore, we have organised the individual risks into the four categories identified by that guidance: content risks; conduct risks; contact risks; and consumer/commerce risks.
The OECD has also identified risks that cut across these categories and can have wide-ranging impacts on children’s lives: privacy risks, advanced technology risks, and health and wellbeing risks. We explore these cross-cutting risks in this guide too.
Classification of specific risks varies, and the relevance of risks will also depend on the purpose and content of a particular platform and its user base.
Although, as we explore below, a wide range of potential risks to minors online have been identified, it is less clear how prevalent these harms are and where users encounter them. This will depend on a number of factors, including how a platform serves content to users and whether the users have parental controls or other settings in place. This type of analysis is typically left to platforms through their risk assessment processes, so that they can consider where to direct their efforts to improve the safety of their platforms.
The EU’s ‘better internet for kids’ good practice guide on classifying and responding to online risks to children, dated February 2023, drew on results from an online survey of over 25,000 European child internet users aged between nine and 16, who reported that sexting and meeting new people on the internet were the most common online risks. Data from the Insafe network of helplines across Europe showed that in 2021-2022 cyberbullying was the problem most frequently reported to helplines, followed by relationships and sexuality, and potentially harmful content.
Content risks – harmful content
The OECD describes content risks as including circumstances where “the child passively receives or is exposed to content available to all internet users in a one-to-many relationship”. The OECD includes in this category hateful content, harmful content, illegal content, and disinformation. Certain types of harmful content have attracted particular legislative scrutiny, as discussed below.
Pornographic content
Pornographic content is commonly encountered by children online. According to UK regulator Ofcom’s children’s register of risks (335-page / 3.7MB PDF), the average age of children encountering pornography online is 13. One in four say they have seen pornography by the age of 11; one in 10 by the age of nine. Boys are more likely to encounter pornography online than girls.
Ofcom has noted that pornographic content can have impacts both for the children who encounter it and for society more broadly. Encountering pornography online may foster harmful attitudes to sex and relationships or harmful sexual behaviours, as well as negative psychological outcomes such as low self-esteem, concerns about body image and addiction.
Suicide and self-harm content
Under the UK’s Online Safety Act (OSA), content which encourages, promotes or provides instructions for suicide must be addressed by services that are likely to be accessed by children. The same applies to content that encourages, promotes or provides instructions for an act of deliberate self-injury, which can include cutting or other forms of self-harm such as burning or bruising. Minors who encounter this content online can find it triggering, and it can also potentially normalise the behaviour.
In its children’s register of risks, Ofcom noted that a wide range of suicide and self-harm content exists online. Ofcom referred to evidence that suggests that harmful suicide and self-harm content can manifest online in various forms, ranging from recovery content that could benefit some users but be detrimental to others, to more explicit content that actively promotes or glorifies these behaviours. It noted that the negative physical and psychological impacts of this type of content are well documented, and in the most severe cases exposure to such content can contribute to long-term mental health concerns, eating disorders, physical harm and death.
Eating disorder material
Some studies have identified a significant relationship between social media use and body image concerns and eating disorders (25-page / 1MB PDF). Others have identified that online platforms may promote idealised and stereotyped beauty standards, and that viewing such content can result in negative self-image and unhappiness among minors. Some websites provide virtual spaces in which teenagers can exchange ideas about their bodies and advice on how to lose weight.
Under the OSA, content which encourages, promotes or provides instructions for an eating disorder or behaviours associated with an eating disorder is treated as primary priority content, which means that services have a duty to prevent children of any age from encountering it. Ofcom has explained that some users that share this content may have experience of an eating disorder themselves, but even content that is “recovery focused” can be harmful to children.
Conduct and contact risks
The OECD uses the phrase “conduct risks” to refer to risks that children create when interacting with other children online. Minors can pose conduct risks to others or to themselves, for example by making themselves vulnerable.
These risks are distinguishable from contact risks where a child is the recipient or victim of risky or harmful contact in an interactive situation.
Where a child interacts with another child online, there may be a conduct risk for the child that is perpetrating the harmful activity and a contact risk for the child that is a victim. Much of the underlying harmful behaviour may be the same.
Cyberbullying
According to Ofcom, bullying content is content that is targeted at a person and either conveys a serious threat, is humiliating or degrading, or forms part of a campaign of mistreatment. Ofcom has set out contextual factors for identifying what it considers to be bullying content, noting that examples include content that persistently or repetitively targets individuals or groups with offensive or harmful content; content depicting or relating to a specific individual in an offensive or otherwise harmful way, shared to humiliate them; ‘pile-ons’; and serious threats or aggressive behaviours. Bullying can also occur in the form of exclusion from online chats or groups.
The EU’s report on its public consultation on the action plan against cyberbullying highlighted that 91% of respondents to the consultation identified threatening or insulting messages as the most common form of cyberbullying; 66% noted identity-based harassment; 59% referred to spreading false information; and 55% mentioned the non-consensual sharing of private information. Online exclusion, stalking, impersonation, and repeated unsolicited contact were also identified as forms of cyberbullying.
Bullying content may overlap with other types of content described below, such as when it is aimed at individuals on the basis of their race, religion, gender or sexual orientation.
Bullying can be a continuation or escalation of behaviour which begins offline. Ofcom’s research highlighted that bullying through “communications technology” is more likely to occur than bullying in person. Ofcom identified that the potential for anonymity may enable those engaged in bullying to trivialise the consequences of it.
Young people frequently raise cyberbullying as one of the online risks they are most concerned about. The Australian eSafety Commissioner’s report on digital use and risk found that 52% of children aged 10-15 surveyed had been cyberbullied at some point, with 38% reporting that they had experienced cyberbullying in the 12 months preceding the survey.
Risk-taking and challenges
The OSA defines as priority content that is harmful to children any content which encourages, promotes or provides instructions for a challenge or stunt highly likely to result in serious injury to the person who does it or to someone else. Ofcom has said that the onset of puberty drives neurobiological changes that influence cognitive development and increase risk-taking and impulsive behaviour as children undergo adolescence. This continues into the teenage years, when peer influence and the desire to fit in become particularly important.
This content can appear as videos on social media or video sharing services, or as images and text-based content.
Hate speech and online harassment
Hate content attacks groups or individuals based on their race, religion, nationality, sexuality or other protected characteristics.
Roughly two-thirds of adolescents are “often” or “sometimes” exposed to hate-based content, according to the US Surgeon General. The OSA defines as priority content that is harmful to children any content which is abusive and which targets any of the following characteristics: race; religion; sex; sexual orientation; disability; or gender reassignment. The OSA also regulates content which incites hatred against people: of a particular race, religion, sex or sexual orientation; who have a disability; or who have the characteristic of gender reassignment.
According to Ofcom, the online environment may encourage the sharing of abuse or hate content. This is because the greater potential for anonymity online may enable users to trivialise the consequences of their actions and break social norms that they would otherwise adhere to in face-to-face interactions. Ofcom has noted that children with listed characteristics are at heightened risk of seeing abusive content.
Ofcom noted that in some cases such content proliferates after significant national or international events, such as large sporting events like Euro 2020, terror attacks, or the 2024 Southport stabbings. Ofcom’s research on online experiences in 2025 found that one in five children had seen hateful content online.
Abuse of women and girls
There has been considerable commentary on the disproportionate impact that abusive online content has on women and girls. This content harms both those who are targeted by the abuse and those who are drawn into perpetrating it.
There has been increased scrutiny in recent years of misogynistic content online, which has been associated with the “manosphere,” which Ofcom defines as online spaces dedicated to men’s issues, where misogynistic views may proliferate. Ofcom explains that this can cover a wide range of content, including fitness and self-improvement, along with extremely misogynistic content which Ofcom explains is more likely to be found on closed groups or among “incel” (involuntarily celibate) communities. These communities can attract vulnerable or socially isolated users who may then adopt harmful views or mindsets.
The existence of misogynistic content online has a major impact on the experience of women and girls. Ofcom’s guidance on a safer life online for women and girls explained that there are a wide range of harms that threaten, silence, abuse and otherwise target women and girls online, negatively impacting their safety and ability to express themselves. Abuse of women and girls can take a variety of forms, including misogynistic or sexually abusive content, pile-ons and coordinated harassment, image-based sexual abuse, and stalking and coercive control. Ofcom’s research in 2025 found that 20% of 13-17 year olds had seen or experienced content which objectified, demeaned, or otherwise negatively portrayed women in the four-week period prior to the research.
Online child sexual exploitation and abuse (OCSEA)
OCSEA is a term used to describe the use of the internet or communication technologies to facilitate the sexual abuse of children and adolescents. This can include a variety of behaviours, including grooming, sexual extortion, sexting, live-streaming, and the sharing of child sexual abuse material (CSAM).
OCSEA does not have to directly involve the child who is the victim of abuse. Some OCSEA harms include the transmission of CSAM between adults.
The online world provides opportunities for criminals to create, obtain, and distribute CSAM. While technology such as AI has been developed to help identify CSAM, new technologies are also helping criminals find new ways to create it, such as through the use of generative AI tools.
Sexting, the exchange of sexual messages, may also be an example of a conduct risk posed by a child. A child may be self-producing CSAM, leading to social and even criminal consequences for the child involved.
Grooming is a term that broadly describes the tactics abusers use to build trust and rapport with a child in order to gain access to that child for the purpose of sexual activity or exploitation. The online world can provide ways for adults to target and connect with children they would otherwise have no connection to.
Grooming can occur on a range of platforms and may include instances when a child is being groomed to take sexually explicit images and/or ultimately meet face-to-face with someone for sexual purposes, or to engage in a sexual conversation online or, in some instances, to trade the child’s sexual images.
Online sexual coercion is a form of child sexual exploitation in which children are threatened or blackmailed, most often with the prospect of nude or sexual images of them being shared publicly, by a person who demands additional sexual content, sexual activity or money from the child.
This is a growing problem online which may be perpetrated by individuals who are not located physically near the victims. In some cases, the coercion may be carried out by organised criminal gangs seeking to blackmail people on a systematic basis.
Commerce/consumer risks
Children can also face risks online as consumers, and they may be less able than adults to identify and assess commercial practices targeting them.
Children online are subject to the same risks as other consumers. Children using video sharing and social media platforms may be at risk of marketing by influencers which they are unable to identify as advertising. Children are also frequent users of gaming platforms, where they may have difficulty understanding and assessing “loot boxes”.
Cross-cutting risks
Beyond the content, conduct, contact and consumer risks described above, there are cross-cutting risks posed by new technologies or to children’s health, wellbeing and privacy. These are risks which may arise across different contexts and services, even if the underlying content or contact a young user is engaging with is not harmful.
Privacy risks
A number of data protection frameworks expressly address the particular risks to children of being online.
As the UK Information Commissioner’s Office (ICO) explained in setting out its strategy behind its ‘children’s code’, also known as the Age Appropriate Design Code, children may be less aware of the risks associated with their lives online and the sharing of their data. This may leave them vulnerable to being inappropriately identified or targeted by strangers, having their locations tracked, or being sent harmful communications.
Serving children with targeted adverts based on data gathered from their online activity has been considered to raise privacy risks and other types of harm, such as encouraging children to spend significant sums of money based on the information presented to them, and some policymakers have taken action as a result. The EU’s Digital Services Act prohibits online service providers from processing children’s data for the purposes of serving targeted ads.
Contact risks can also move outside of the digital environment and into the real world, at times facilitated by location sharing features in the service being used. Location sharing can be helpful in some circumstances, for example for parents tracking a child’s progress home from school, but it can also reveal information about a child’s whereabouts to others.
Advanced technology risks
This category refers to new risks associated with emerging technologies that may not be well understood. Examples of advanced technology risks could include AI and the ‘internet of things’. For example, there may be risks associated with young people developing intense romantic or confessional attachments to AI chatbots, which may prove harmful. As young people are likely to be early adopters of new technologies, the inclusion of this category signals the need to assess at an early stage how new technologies may give rise to risks.
Health and wellbeing risks
This set of risks concerns the impact of online platform use on young people’s mental and physical health. These risks can arise in a variety of contexts.
A concern that has arisen from surveys of children’s online experiences is simply the amount of time that children spend online. For example, the US Surgeon General’s advisory published in 2023 reported that up to 95% of youths aged 13-17 said they use a social media platform, with more than a third saying they use social media “almost constantly”. Nearly 40% of children aged 8-12 reported using social media. There are concerns that the sheer amount of time children spend online interferes with other basic activities, such as sleep and physical activity. It may also cause attention problems and feelings of exclusion among adolescents.
Some organisations believe that features and functionalities of services made available to children that encourage children to remain online should be treated as a separate category of harm because they contribute to an “addictive” experience for children.
This is an area subject to ongoing research, and there is not yet consensus on the potential harmfulness of “sticky” features which encourage more screentime. Various platforms have proactively introduced measures to enable children, or more likely their parents, to regulate children’s screentime. Some regulations seek to address concerns about functionalities that may lead to excessive use by children, including recommender systems, autoplay, push notifications, and the use of popularity metrics such as “likes”.
Addressing risk with assessments and features
A central feature of many of the legislative regimes, such as the DSA and OSA, is the requirement that online services closely examine the different types of risks associated with the features and functionalities they offer and the user base that they reach. There is a significant amount of research to support online services in identifying and addressing the risks that may result from their service’s design and operation.
Platforms have responded to these requirements, and in some cases acted before they were legally required to, for example by stopping the serving of targeted ads to users who they know are children, or by limiting the duration for which location sharing features may be used. Services are also increasingly exploring ways to mitigate risks in other ways, such as through the use of age assurance for relevant parts of a service and by implementing measures to protect young people in the development process.