Out-Law Guide

Moderation, liability and terms of use


This guide is based on UK law. It was last updated in December 2008.

How to minimise the risks of liability for content

The operator of any site that hosts third party content – be it a website, blog, wiki or message board – must make a decision about whether and how to moderate that content. The main choices are: (a) do not moderate content at all; (b) moderate all content before it appears – i.e. check every submission for suitability; or (c) review all content after it appears. A robust takedown policy will be necessary for each of these approaches.

Unmoderated sites

When content is unmoderated, the quality of material posted is difficult to control. The legal advantage, however, is that it is easier to avoid liability for anything that is defamatory, infringing or otherwise unlawful. The only condition is that the operator provides a process for removing offending content expeditiously upon being made aware of it.

The process for the removal of content will in most cases involve a clear, easy-to-use facility on the site by which users can report inappropriate content to the operator. The operator must then have a clear internal process for dealing with complaints received.

We recommend that operators provide a link on each page of the website which clearly directs users to the process for reporting inappropriate content. Phrases such as "Report Abuse", "Complain about this content" or "Flag as inappropriate" are all commonly used as links. The link should take users to a page where the complaint can be detailed. Some users complain about statements without saying why they are complaining; others fail to specify where the offending content is. As an operator you are allowed to demand clarity in a complaint, which will help you to assess the merits of the complaint, and also reduce the number of spurious complaints.
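
Purely by way of illustration (this does not form part of the guide's legal guidance), a complaint-handling endpoint could refuse a report unless the complainant identifies the offending content and gives a reason. The sketch below assumes Python and the Flask framework; the route and field names are invented for the example.

    # Minimal sketch of a "Report Abuse" endpoint, assuming Python and Flask.
    # The route and field names are illustrative, not taken from the guide.
    from datetime import datetime, timezone

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    complaints = []  # in practice this would feed a ticketing system or a monitored inbox

    @app.route("/report-abuse", methods=["POST"])
    def report_abuse():
        content_url = (request.form.get("content_url") or "").strip()
        reason = (request.form.get("reason") or "").strip()

        # Require the complainant to say where the content is and why it is
        # objectionable, so that the merits of the complaint can be assessed.
        if not content_url or not reason:
            return jsonify(error="Please identify the content and explain your complaint."), 400

        complaints.append({
            "content_url": content_url,
            "reason": reason,
            "received_at": datetime.now(timezone.utc).isoformat(),
        })
        return jsonify(status="Complaint received"), 201

The complaint record would then feed whatever internal process the operator has for assessing and acting on reports.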

In the US in 2007, Facebook agreed to add safeguards to protect children after New York prosecutors threatened the social networking site with fraud charges for failing to live up to its own safety claims. Under the terms of a settlement, Facebook agreed to place "prominent and easily accessible" hyperlinks throughout its site, enabling the submission of complaints about offensive content or unwelcome contact. Further, Facebook agreed that it must respond to and begin addressing such complaints within 24 hours and must report to the complainant the steps it has taken to address the complaint within 72 hours. See: Facebook made basic error with poor user safeguards, says lawyer, OUT-LAW News, 18/10/2007.

Many corporate blogs and internal staff blogs will be unmoderated. They should still have a complaint mechanism. It may be appropriate to offer an anonymous complaints page if you think staff will be reluctant to report problems for fear of being seen to inform on their colleagues.

Moderated sites

When a site is moderated, either before content appears or shortly thereafter, the operator of the site assumes responsibility for the material that appears. If inappropriate content is posted on the site, and the moderators have failed to find it and deal with it appropriately, then the operator may become liable for that content.

This makes moderating content a relatively high-risk and labour-intensive approach, and as a result many sites choose not to moderate but to rely on a complaints process. However, it is readily accepted that there is a greater moral imperative to moderate the content of some types of sites – for instance those which are used by children. By moderating the site the operator puts its trust in the individuals who act as moderators, and those moderators must be given clear guidelines on how to fulfil their role.

Again, even with moderated sites, a simple complaint process should operate. The less time that offending content appears online, the fewer people will see it – and that could impact on liability or reduce an award of damages. (See: Count the readers before suing for internet libel, OUT-LAW News, 15/06/2006)

Terms and conditions of use and disclaimer

An automated footer can be added to each blog posting, providing a link to the site's terms and conditions and a disclaimer. A suitable disclaimer for a blog that does not moderate its postings might be:

"We do not vet and are not responsible for any information which is posted in this blog. All content is viewed and used by you at your own risk and we do not warrant the accuracy or reliability of any of the information. The views expressed are those of the individual contributors and not necessarily those of the company."

The terms and conditions might include the following:

  • a wide licence from the user, allowing the operator to use, reproduce and modify the content;
  • a notice to the user that the operator has absolute discretion as to what content is used, how it is managed and where it is posted, and that the operator may move or delete any content at any time;
  • a requirement that those under 18 obtain their parent or guardian's permission before posting any comments or materials.

They should also outline the types of behaviour that will be forbidden, such as:

  • posting unlawful, defamatory, obscene, threatening, offensive, harmful or otherwise objectionable content;
  • posting content that violates the legal rights of others or that could damage a computer (e.g. viruses);
  • advertising;
  • promoting an illegal act;
  • revealing any personal information about yourself or anyone else.

For the avoidance of doubt, these are some of the issues to address – not a comprehensive set of conditions suitable for adding to your site.

The incorporation of the terms and conditions is best done by asking users to check a box stating that they accept them before they can continue to make a posting. Some third party blogging software does not allow you to add a check box, so if you are forced to rely on a link, ensure that the link is prominently displayed before comments can be submitted.
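
To illustrate the point (as an assumed sketch rather than a prescribed implementation), a comment handler could simply reject any submission where the acceptance box has not been ticked. The field names are invented for the example.

    # Sketch: reject a comment unless the acceptance checkbox was ticked.
    # Field names ("accept_terms", "comment") are illustrative.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/comments", methods=["POST"])
    def submit_comment():
        # Browsers send "on" for a ticked checkbox by default.
        if request.form.get("accept_terms") != "on":
            return jsonify(error="You must accept the terms and conditions of use before posting."), 400

        comment = (request.form.get("comment") or "").strip()
        if not comment:
            return jsonify(error="Empty comment."), 400

        # Storing the comment is out of scope for this sketch.
        return jsonify(status="Comment accepted"), 201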

Tailored warnings

Some sites run higher risks than others. If a particular risk can be identified, the operator of that site should go to greater lengths than are described above.

For example, the video streaming site YouTube.com runs a very high risk of users uploading copyright-protected video clips without authority. Alive to this risk, YouTube makes users follow several steps before joining and before uploading any clip. This is an attempt by the operator to minimise its risk of being found liable for copyright infringement for hosting the content. If it were seen to encourage infringement by its users, YouTube could incur liability for contributory copyright infringement in the US. (Viacom has sued YouTube, alleging such encouragement; if Viacom succeeds, the outcome may also influence European law.)

In addition, leading media and internet companies, including the BBC, ITV, Google, AOL and mobile phone networks, have agreed to warn users when they publish material that may be offensive. The warnings are designed to enable parents and carers to exercise supervision over the content viewed by those they are responsible for. The Audiovisual Content Information Good Practice Principles will only apply to commercially-produced content and not to user-generated content. 

Linking

When blogs link to other websites, there is rarely any problem – unless the link goes to material that is offensive or that infringes the rights of others, such as unlicensed music files. Most sites allow contributors to post links and deal with complaints about those links as they would any other content complaint. (See also: Linking and framing)

Dealing with complaints

It is important that operators respond to complaints about content quickly, ideally within a matter of hours. This means that the process for reporting inappropriate content must work properly – for instance, it is no use if complaints feed into an email address that is only checked every couple of weeks. If you do not moderate content on your site, you can avoid liability for content if, in the wording of the relevant legislation, you act 'expeditiously' once you are made aware of the offending content.

The only fixed period in legislation is under the Terrorism Act (which allows a maximum period of two days for removing content before an offence is committed). While this legislation does not apply to all content on all sites, a failure to deal with a complaint before the expiry of two days will be difficult to defend.

The safest approach is to err on the side of caution. You may not know whether a complaint is valid, but provided your terms and conditions allow you unrestricted rights to remove content, you run no risk by removing or disabling access to the offending content. You can only afford to ignore complaints that are clearly baseless.
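
Expressed as a sketch (the data structures and function are assumptions, not a statement of what the law requires), the 'err on the side of caution' approach amounts to a simple rule: unless a complaint is clearly baseless, disable the content first and investigate afterwards.

    # Sketch of an "err on the side of caution" takedown rule.
    # The content store and complaint fields are illustrative assumptions.
    from datetime import datetime, timezone

    content_store = {}  # content_id -> {"body": str, "visible": bool}

    def handle_complaint(content_id: str, reason: str, clearly_baseless: bool = False) -> str:
        """Disable access to complained-of content unless the complaint is clearly baseless."""
        if clearly_baseless:
            return "Complaint rejected as clearly baseless; content left in place."

        item = content_store.get(content_id)
        if item is None:
            return "Content not found (it may already have been removed)."

        # Acting 'expeditiously': disable first, investigate afterwards.
        item["visible"] = False
        item["disabled_at"] = datetime.now(timezone.utc).isoformat()
        item["complaint_reason"] = reason
        return "Content disabled pending review."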

Problem users

Sometimes you will be asked to reveal the identity of a person who made an offensive posting. If you store the poster's personal details (typically only a username shows on the site), you should generally request a court order before revealing those details; otherwise you risk breaching the Data Protection Act (unless the request is made by a law enforcement agency with appropriate authority).

If you use a third party blogging service provider, such as TypePad, you are unlikely to know the identity of an external contributor and you can refer complainants to TypePad.

If you host the blog on your own servers, store the IP addresses of contributors together with the dates and times of access, and make it clear to users that you do this. See also: IP addresses and the Data Protection Act.

An IP address can often be traced to a particular ISP. The ISP is likely to require the production of a court order to reveal the personal data relating to the customer allocated that IP address at the specified date and time. Obtaining such a court order is relatively straightforward if there is, for example, a defamatory posting. If the ISP were to reveal its customer's details without receipt of a court order, the ISP risks breaching the Data Protection Act.

An advantage of using a third party host like TypePad is functionality that makes it easy to exclude bloggers that cause problems. If you have a TypePad account, you can configure it to forbid the posting of certain words; you can also forbid contributions from particular IP addresses.
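
To illustrate the last two points (recording contributors' IP addresses with dates and times, and excluding problem words or addresses), the following is a small self-hosted sketch; the banned lists and log format are assumptions rather than features of TypePad or any other product.

    # Sketch: record contributor IPs with timestamps and filter banned words and IPs.
    # Banned lists and the log format are illustrative assumptions.
    from datetime import datetime, timezone

    BANNED_WORDS = {"spamword"}     # placeholder entries
    BANNED_IPS = {"203.0.113.7"}    # placeholder (from a documentation address range)

    access_log = []  # in practice, write to durable storage and disclose this to users

    def accept_post(ip_address: str, text: str) -> bool:
        # Keep the IP address together with the date and time of access.
        access_log.append({
            "ip": ip_address,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

        if ip_address in BANNED_IPS:
            return False
        return not any(word in text.lower() for word in BANNED_WORDS)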

Wikis

A wiki is a type of website that allows users to add, remove, or edit web page content using a web browser. The first use of this Hawaiian word meaning 'fast' was in WikiWikiWeb, an application developed by Ward Cunningham (named after a holiday encounter with a Honolulu shuttle bus called the 'wiki wiki'). The best-known wiki is Wikipedia. The online encyclopedia currently has more than five million articles that users add to or edit at will.

Organisations may wish to add wikis to their own websites. There are different varieties. Some will be private, for editing by qualified users only; some will be public. Wikis can make all user changes live immediately or adopt a workflow that submits changes for approval by an editorial team.

The same moderation issues arise with wikis as with blogs: where the organisation approves changes to a wiki, the organisation may find itself liable for the content of that page; where a wiki operator does not perform an editorial function it will not be liable for third party content.

Most wikis should also provide a means of making a complaint about content, as with a blog. Where the changes are not moderated, quality control is difficult. For example, in 2005, The Los Angeles Times was forced to remove a wiki from its site that offered its readers the opportunity to review and rewrite its online editorials after the wiki was flooded with foul language and pornographic photographs.

Terms and conditions of use and disclaimers are important for wikis. Many of the issues to be addressed are the same issues as explained in our guide on corporate blogging. Other conditions may be appropriate – for example, do not delete author attributions or legal notices.

Every page should link to the conditions; and users should be forced to accept the terms and conditions of use before being allowed to make postings.

The nature of wikis is such that contributions may be more substantial than blog postings. The content often has a higher creative value – and that heightens the need for making clear who owns the content or how it can and will be used by others. These issues should be addressed clearly in the terms of use.

The disclaimer and the terms of use should also make it clear that the wiki operator does not support or promote any opinion or representation posted on the wiki, and that no warranties are given about the content of the wiki.

In addition to granting a wide licence for use of the work in the terms and conditions of use, the author of the work must waive his or her 'moral' rights in the work (including the right to be identified as author and the right to object to derogatory treatment of the work). To address the concern about infringement of a third party's intellectual property rights, the contributor should either confirm that he or she is the author of the material being posted or, where this is not the case, warrant that the necessary licence has been obtained from the third party author before posting. The contributor could also be required to protect the wiki operator (via an indemnity) from any liability arising to the extent that such a licence has not been obtained, although such an indemnity will be difficult or impossible to enforce in many cases.

The practice on some wikis, including Wikipedia, is to deal with ownership and licensing of content by reference to the provisions of the GNU Free Documentation Licence (the GFDL). The GFDL originated to deal with the licensing of documentation developed to accompany open source software.

Under the GFDL a user actively acknowledges that his contribution is subject to certain open source rules every time that he creates or modifies a work (by the relatively onerous requirements to provide notices, endorsements and footnotes on works which are to be licensed under the GFDL).

When content is submitted to or modified on a wiki, ongoing obligations to provide the notices, endorsements and footnotes required under the GFDL are unlikely to be practicable, or adhered to. Wikipedia reminds users that contributions are licensed under the GFDL before the user is able to make changes or submit content. This serves to make the user aware of the GFDL's application to content placed on the wiki. However, it is debatable whether the practical requirements of the GFDL are satisfied by the mechanics of Wikipedia.

Child protection

Many community and social networking websites offer an opportunity for children to communicate with friends and others with shared interests. But sites for children carry other risks.

In 2006 a Texas woman sued MySpace after her 14-year-old daughter was allegedly sexually assaulted by someone she had met through the site. The mother alleged that MySpace had not done enough to protect child users.

As mentioned above, Facebook was threatened with consumer fraud charges by the New York Attorney General in 2007 for failing to respond promptly to concerns over children's safety. The investigators accused Facebook of failing to respond, and at other times being slow to respond, to complaints lodged by investigators posing as parents of underage users, asking the site to take action against users who had harassed their children.

No site which allows the posting of user-generated content, or allows users to communicate with each other through the site, can completely eliminate the risks to children. If a site is for adults only, established access controls (like authenticating age using credit cards and other personal details) can be used to exclude children. However, it can be much more difficult to exclude adults from a site or an area of it which is intended for children only.

There is no question that there is an important role for parents to play in supervising their children's use of the internet – including what they do on social networking websites. However, if a site allows access to (or doesn't prevent access by) under-18s then the site must assume some responsibility for protecting those users. If a child suffers harm that could have been prevented by following good practice guidance the site would have difficulty in arguing that it was not at least partly liable.

The Home Office issued good practice guidance in December 2005 for the moderation of interactive services for children (35-page / 187KB PDF).

It states: "It is important for public interactive communication providers to undertake a risk assessment of their own service and the potential for harm to children in order to decide what safeguards are necessary, including the use of moderation."

The guidance stops short of making moderation of services for children mandatory.

The good practice model for providers of chat services (which could apply also to blogs, message boards and other interactive services) suggests that:

  • Clear prominent information should be displayed about the kind of service offered and the audience at which it is aimed. For example, is the chat room moderated or unmoderated? Is it aimed at a specific age group or type of person?
  • Clear prominent and accessible safety messages should be present on front pages and in chat rooms themselves.
  • Links should be available to online safety guides either on the site itself or on third party websites.
  • Clear and prominent safety messages should be visible when completing profiles, highlighting the information which will be in the public domain.
  • The user should be able to limit what personal information about them is made public, and children should be aware of the need for caution.
  • Children should be encouraged not to post their phone/mobile numbers, addresses or email addresses.
  • Service providers should provide and give due prominence to tools such as ignore buttons, alert buttons, grab and print functions and reporting mechanisms, and provide means at the user end to block private chat or Instant Messaging.
  • Service providers should establish and give due prominence to a system of receiving and responding appropriately to reports of incidents.
  • In moderated chat rooms specifically aimed at children, service providers should establish and give due prominence to an alert system (for example a panic button) at the top of each chat room page, ensure that moderators are properly screened and trained, and establish a means of reporting failure on the part of moderators to meet the user's expectations.

The guidance gives examples of the techniques used by abusers who attempt to 'groom' children at interactive spaces, such as asking for personal details or offering cheap tickets to pop concerts.

Moderators should know about these techniques and the relevant law. For example, under the Sexual Offences Act 2003, sexual grooming is a criminal offence (the crime of befriending a child online or by any other means with the intention of abusing them), as is sending a pornographic picture to a child. The Home Office guidance discusses the limitations of technical moderation and offers suggestions for the recruitment of human moderators (the Criminal Records Bureau should be consulted, for example) and their training.

In September 2008 the Government launched the UK Council for Child Internet Safety (UKCCIS). The body will advise the Government on how to increase protection from dangers posed by the internet. It will also police websites containing inappropriate content, write industry codes of practice for publishers and run advertising aimed at children on how to stay safe online. See: Government sets up online child safety watchdog, OUT-LAW News, 29/09/2008.

Contracts and children

Contracts are not always legally enforceable against under-18s. The age of legal capacity, for the purposes of contract law, is 18. Those under 18 are referred to as "minors" in the legislation.

Only certain types of contracts with minors are enforceable. These are:

  • Contracts for ‘necessaries’; and
  • Contracts of apprenticeship, education and service.

While many children may think that using chat rooms and Instant Messaging are ‘necessary’, in most cases the law will disagree. ‘Necessaries’ are considered to be things that relate immediately to the physical wellbeing of a minor, for example food, drink, clothing, lodging and medicine.

Where a contract does not fall within the types of enforceable contracts set out above, it will be ‘voidable’ at the option of the minor. This means that such contracts are valid but not binding on a minor to the extent that the minor may, at his or her option, ‘undo’ the contract and escape performance of his or her obligations. This could result in the minor demanding to be repaid money which he or she has paid under the contract.

What this means commercially is that while an operator may place terms and conditions on its site and thereby contract with minors, it must always bear in mind that, if the minor so chooses, he or she can refuse to meet his or her obligations under the terms and conditions.

Privacy and children

Any site for children should display prominent links to a privacy policy and terms and conditions of use (preferably on every page), which should be in language that is easily understandable by a child. You may want to provide links to online safety guides available either on the operator's site or on third party sites e.g. Think U Know and Chat Danger.

The Data Protection Act 1998 controls the processing of personal data in the UK and it requires those collecting and using personal data to obtain the full informed consent of those to whom the data relates. Clearly this presents added difficulties when collecting data from children.

In 2001 and 2007, the Information Commissioner published guidance for website operators whose sites are directed at children. The 2007 Good Practice Note on Collecting Personal Information Using Websites (9-page / 69KB PDF) states:

"Websites that collect information from children must have stronger safeguards in place to make sure any processing is fair. You should recognise that children generally have a lower level of understanding than adults, and so notices explaining the way you will use their information should be appropriate to their level, and should not exploit any lack of understanding. The language of the explanation should be clear and appropriate to the age group the website is aimed at. If you ask a child to provide personal information you need consent from a parent or guardian, unless it is reasonable to believe the child clearly understands what is involved and they are capable of making an informed decision."

In 2002, the Direct Marketing Association (DMA) published a Code of Practice for Commercial Communications to Children Online (7-page / 38KB PDF) that follows the tenor of the Commissioner's guidance but offers more detail.

A key point of the DMA guidance is that data should not be collected from under-14s without parental consent being obtained first. Note that this age limit of 14 is lower than the age limit of 18 used in the definition of "minors" for the purposes of contract law.

The key DMA guidelines regarding data collection from minors are as follows:

  • Websites that are directed to children must not collect personal data from children under 14 years of age without first obtaining a parent/guardian’s verifiable and explicit consent.
  • Websites that are directed to children, and that collect personal data from children, must not disclose personal data from children under 14 years of age without first obtaining a parent/guardian’s verifiable and explicit consent.
  • Websites that are directed to children, and that collect personal data from children, must require a child to give their age before any other personal information is requested. If the age given is under 14, the child should be precluded from giving further personal information until the appropriate verifiable and explicit consent has been given.
  • A notice informing children of the requirement for parental or guardian’s consent must be shown at the point where personal information is requested. This notice should be clear and prominent and written in language that will be easily understood by young children. It should include an explanation of the purposes for which data are being collected (i.e. for marketing purposes) and how that consent may be given to the service provider.
  • Websites must not make a child’s access to the site contingent on the collection of personal information or entice a child to divulge personal information with the prospect of a special prize or other offer.
  • Personal information relating to other people, for example parents, must not be collected from children.
  • Websites collecting personal information from children must post a privacy policy statement on their website. Such a statement must be understandable by a child audience and posted in a prominent location, both on the website’s start page and on any page where personal information is collected. The guidance sets out what such a statement should contain.

The ICO Practice Note also states that "[it] will not usually be enough to ask children to confirm their parents have agreed by using a mouse click. If you need parental consent but decide that verifying that consent will involve disproportionate effort, you should not carry out your proposed activity."
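
Purely as an illustrative sketch of the DMA rule above (not legal advice, and with invented function and variable names), a registration flow could ask for the child's age before anything else and hold under-14s until verifiable parental consent has been recorded; the threshold could be set to 13 where the US COPPA rules discussed below are the relevant benchmark.

    # Sketch of an age gate: ask for age first; collect no further personal data
    # from under-14s until verifiable parental consent is recorded (per the DMA guidance).
    # The threshold and the consent record are illustrative; use 13 where COPPA applies.
    PARENTAL_CONSENT_AGE = 14

    parental_consent_record = set()  # user IDs with verified parent/guardian consent

    def may_collect_personal_data(user_id: str, age: int) -> bool:
        """Return True if further personal information may be requested from this user."""
        if age >= PARENTAL_CONSENT_AGE:
            return True
        # Below the threshold: no further personal information until verifiable
        # and explicit parent/guardian consent has been obtained.
        return user_id in parental_consent_record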

In the US, the Children’s Online Privacy Protection Act (known as COPPA) sets out similar requirements to the DMA Code and the ICO Good Practice Note. COPPA requires commercial websites to obtain verifiable parental consent before collecting, using or disclosing personal information from children under 13 years old.

COPPA has implications for any UK-based website that collects data or information from or about children in the US. So any website that deals online with the US in any way should be aware of COPPA's requirements and include a COPPA statement. No enforcement action is known to have been taken against a UK-based operator under COPPA, but the legislation is written in a way that makes such action possible.

While the DMA cannot fine for a breach, COPPA is enforced in the US by the Federal Trade Commission. In September 2006, social networking site Xanga and its founders agreed to pay a $1 million fine to settle with authorities over allegations that it collected, used and disclosed personal details of children under 13 in breach of COPPA. (According to the FTC, the Xanga site stated that children under 13 could not join, but then allowed visitors to create Xanga accounts even if they provided a birth date indicating they were under 13).

Excluding children

Some sites will need or want to block access by under-18s. This is normally done by verifying a credit card in the name of the user, albeit an imperfect method (since a determined child may 'borrow' a parent's card). An alternative may be to require parental consent for users under the age of 18.

Again, this would help to ensure compliance with the Data Protection Act, and also help to make parents and under-18s responsible for their activity on the site – helping to enforce the terms and conditions of use.

An alternative would be to require offline parental consent only for those under the age of 14. Those between 14 and 18 would sign up using the normal online process, as they would if they were over 18. This will help to reduce the risk for children in the "most at risk" age groups and ensure compliance with the DMA Code of Practice. It does, nonetheless, present an element of legal and commercial risk in relation to those between 14 and 18: the operator would potentially be exposed to liability if these users sought to void the contract.
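
As a sketch of this tiered approach (the labels and threshold values are assumptions for illustration only):

    # Sketch: route sign-ups by age band, as described above.
    # The returned labels are placeholders for downstream handling.
    def signup_route(age: int) -> str:
        if age < 14:
            return "require-offline-parental-consent"  # the DMA "most at risk" group
        if age < 18:
            return "online-signup-minor"  # contract may be voidable at the minor's option
        return "online-signup-adult"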
