Out-Law News

The digital insurer – a growing duty to act?


John Salmon’s Financial Services blog

The Pinsent Masons financial services sector team brings you insight and analysis on what really matters in the world of financial services.     

With big data comes big responsibility. As insurers and other acquirers of increasingly detailed information about individuals begin to use data more effectively, they, and society as a whole, will have to address the question: does that insight bring additional legal and ethical burdens?

If an insurer knows that a car is being driven dangerously, or that the driver is involved in some criminal activity, is it obliged to act?

Insurers need to think about whether their increased use of advanced analytics technologies could give them an awareness of facts or circumstances about customers that triggers a legal duty to act. Could accurate data on the speed at which a vehicle normally travels, detailed vehicle diagnostics, an understanding of current driving conditions and even a driver's braking patterns present an insurer with an awareness of circumstances suggesting a customer's exposure to danger? Could it even be that, in time, insurers will acquire 'actual knowledge' of circumstances enabling them to spot health risks or criminal activities that require immediate attention?
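To make the question concrete, consider how little sophistication such 'awareness' actually requires. The sketch below is a minimal, purely illustrative rule-based check in Python; the field names, thresholds and sample data are hypothetical assumptions for the example, not any insurer's actual telematics system. It simply flags readings where a vehicle is well over the posted limit or brakes unusually hard. Once flags like these exist, the legal question of what the insurer must do with them follows.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical telemetry reading; the schema is illustrative only.
@dataclass
class Reading:
    timestamp: float        # seconds since the start of the trip
    speed_kmh: float        # recorded vehicle speed
    speed_limit_kmh: float  # posted limit at the vehicle's location
    decel_ms2: float        # braking deceleration, m/s^2 (positive = braking)

# Illustrative thresholds, chosen arbitrarily for this example.
SPEEDING_MARGIN_KMH = 20.0  # margin over the limit treated as dangerous
HARSH_BRAKE_MS2 = 6.0       # deceleration treated as a harsh-braking event

def flag_risky_events(trip: List[Reading]) -> List[str]:
    """Return human-readable flags for readings that might, in principle,
    give an insurer 'awareness of facts or circumstances' worth reviewing."""
    flags = []
    for r in trip:
        if r.speed_kmh > r.speed_limit_kmh + SPEEDING_MARGIN_KMH:
            flags.append(
                f"t={r.timestamp:.0f}s: {r.speed_kmh:.0f} km/h in a "
                f"{r.speed_limit_kmh:.0f} km/h zone"
            )
        if r.decel_ms2 >= HARSH_BRAKE_MS2:
            flags.append(
                f"t={r.timestamp:.0f}s: harsh braking ({r.decel_ms2:.1f} m/s^2)"
            )
    return flags

if __name__ == "__main__":
    # Invented sample trip: one speeding reading, one emergency-style stop.
    trip = [
        Reading(0, 48, 50, 0.5),
        Reading(30, 95, 50, 0.2),
        Reading(60, 52, 50, 7.1),
    ]
    for flag in flag_risky_events(trip):
        print(flag)
```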

This matters because insurers are moving towards integrating digital technologies that promise multi-channel customer engagement, advanced analytics capabilities and wholesale process automation into their businesses. But as they follow this path, they need to think carefully about new or heightened legal risks which may arise.      

Questions about companies' responsibility for customers' behaviour, based on the information those companies hold, have arisen before and have been answered by the courts. Court after court has asked: to what extent should Google bear responsibility for enabling its users to be directed to sites that infringe copyright? In the UK, both courts and public opinion have weighed in on the discussion as to whether, and to what extent, Twitter should be responsible for death threats and defamatory comments made using its platform.

While the High Court has reasoned that a man should not be criminally punished for tweeting that he would blow an airport 'sky high', public discussion has often supported the view that the platform should bear some of the blame.

The most recent example is that of Winter Olympian Elise Christie. Regarding the abusive tweets Christie complained of this week, a British Olympic Association spokesperson said "I do believe service providers must be part of the solution and probably could do more to keep these things from happening", according to one report.

But while the platforms themselves have on occasion apologised in cases of abuse, they have at the same time been quick to take the view that they should not be turned into quasi-law-enforcement bodies.

According to one report, Twitter has said that its "...official recommendation to victims of abuse puts the ball squarely in law enforcement’s court: 'If an interaction has gone beyond the point of name calling and you feel as though you may be in danger,' it says, 'contact your local authorities so they can accurately assess the validity of the threat and help you resolve the issue offline.'"

In so far as it relates to web services, European Union law has arrived at a general principle that sets out the standard of care to be expected. Legal liability generally takes effect where knowledge of wrongdoing is established, complaints are made and web providers fail to respond.

More specifically, the E-Commerce Directive provides that a website operator may not be held liable for someone else's activities where it does not have 'actual knowledge' of an illegal activity taking place in connection with its site, or where it is not aware of 'facts or circumstances' from which an illegal activity is apparent. The protection is lost where a website operator, upon obtaining such knowledge or awareness, fails to act 'expeditiously'. Laws in various EU member states, and court decisions made under them, reflect this standard.

There is, though, an even broader question which needs to be addressed here. Is it fair to compel the private sector to engage in law enforcement and public protection? The answer, I strongly believe, is something that cannot be decided simply within the framework of a legal discussion. It has to take into account the greater societal impact of how legal duties and obligations are formed.

Some would argue that there is good reason to place the burden of enforcement on the private sector. In many instances, private sector organisations will have a better idea than the state as to whether or not wrongdoing is taking place or what action should be taken to protect individuals.

But how far can service providers really be expected to be saddled with the cost burden of monitoring for legal wrongdoing? It is one thing to identify a car's location at a specific point in time. It is quite another to correlate that information with data indicating that a crime is taking place. If insurers are to be expected to undertake public service obligations, it may only be fair that they also receive public funding.

Even with funding, privacy and human rights concerns may arise if private organisations are effectively going to become organs of law enforcement. It may be that compelling private companies to act on insights gained through analytics is not fair on either the companies involved or the citizens whose data is being processed.
