
Out-Law Analysis 6 min. read

Why healthcare providers need a policy on AI ethics

Ethical considerations of AI in healthcare


Creating and publishing an artificial intelligence (AI) and ethics policy can help life sciences companies and healthcare providers build trust in their use of the technology.

Trust in AI systems is important both to persuading patients to share the data required to power algorithms and to avoiding the risk of bias and discriminatory outcomes.

An AI and ethics policy will guide the work of chief technology officers and heads of digital transformation, who are leading new AI projects in life sciences and healthcare, as they seek to streamline clinical and non-clinical processes, improve the diagnosis of diseases, and speed up the development of new medicines.

The ethical considerations

The use of AI raises a series of ethical issues that academics and policymakers have grappled with for years, and which businesses embracing the technology must also address.

Those issues range from how the use of AI may fundamentally alter the nature of work undertaken by humans, to how to address the risk that advances in machine learning create systems that become too smart for humans to control. There are further ethical questions around what happens, and who is responsible, when AI systems make a mistake; how the use of AI can respect privacy and freedom of expression; and how to ensure there is no inherent bias in the way AI systems function that might deliver inaccurate, discriminatory or even dangerous outcomes.

In Europe, high-level guidance developed by experts and endorsed by the European Commission is designed to help businesses address the ethical challenges.

The final Assessment List for Trustworthy AI, published in July 2020, translates the ethics guidelines into an accessible checklist that developers and deployers of AI can use, touching on seven requirements deemed essential to the ethical use of AI:

  • human agency and oversight;
  • technical robustness and safety;
  • privacy and data governance;
  • transparency;
  • diversity, non-discrimination and fairness;
  • societal and environmental wellbeing; and
  • accountability.

The law and regulation

Some ethical considerations are already baked into existing legal obligations and regulatory requirements. The use of AI commonly involves the processing of personal data, triggering data protection law requirements, for example.

Both the EU and UK versions of the General Data Protection Regulation (GDPR) require businesses to carry out data protection impact assessments before implementing AI or other new technologies, while further obligations that apply include requirements around the lawful processing of personal data, data accuracy, minimisation and security, profiling and record-keeping. The UK’s Information Commissioner’s Office (ICO) has developed guidance to help organisations implementing AI technology to comply with the requirements of data protection law.

As in financial services, there are specific regulatory requirements in life sciences and healthcare that are engaged by the use of AI. Because software such as AI can constitute a medical device, new EU medical device rules could apply. Product liability laws, which are set to be enhanced in the EU with a new AI Act, also apply, while advertising rules – which are particularly restrictive in the context of the marketing of pharmaceuticals – are also relevant.

Life sciences companies are also often bound by industry codes of practice, such as the Association of the British Pharmaceutical Industry (ABPI) code of practice that applies to pharmaceutical manufacturers in the UK. The ABPI code contains stipulations around transparency and ethical responsibility, for example.

Regulatory expectations continue to evolve. A report published earlier this year by the European Medicines Agency (EMA) recommended that regulators specifically address ethical aspects of AI use in the context of medicines regulation, and included a call for a framework to assess and validate AI as well as a framework to support the development of new guidelines.

Further developments are also expected in the UK. The AI Council, supported by The Alan Turing Institute, recently closed a consultation on a planned new national AI strategy which is likely to shape AI policy in the UK in the years ahead.

The next step – a policy

It is one thing for life sciences companies and healthcare providers to understand their legal and regulatory obligations and have an appreciation of the ethical guidance that exists, but quite another to form and articulate a policy that allows the organisation to meet the requirements and expectations in practice.

A leader in the sector in this regard is AstraZeneca. It has set out its own principles for ethical data and AI to guide the approach of its staff.

There is significant overlap between the AstraZeneca principles and the ethics guidelines and checklist endorsed by the European Commission. The principles revolve around five core themes:

  • Explainable and transparent;
  • Fair;
  • Accountable;
  • Human-centric and socially beneficial; and
  • Private and secure.

Among the specific commitments the company outlines, AstraZeneca promises to be “open about the use, strengths and limitations of our data and AI systems”, to ensure humans oversee AI systems, to ensure data and AI systems are secure, and to “act in a manner compatible with intended data use”. It also states that it anticipates and mitigates the impact of potential unfavourable consequences of AI through testing, governance, and procedures, and further promises to learn lessons from “unintended consequences” materialising from its use of AI.

AstraZeneca has said that its principles align with its broader code of ethics and values.

Novartis has also developed its own ethical principles for AI development, application and use. In a detailed document, the company has, among other things, specifically acknowledged the risk of bias and discriminatory outcomes from using unrepresentative data samples.

Fujifilm also has a group AI policy promising respect for basic human rights, fairness, security, accountability and transparency, and the upskilling of staff to enable “a high-level and appropriate use of AI”.

Sanofi is another company in the sector developing its own policy on the use and governance of AI. It has said it will be shaped around three principles:

  • AI should be used in the interest of patients;
  • The use of AI should not treat any groups of patients unfairly;
  • Dignity needs to be preserved, so patients should retain autonomy of thought, intention and action when making decisions regarding their healthcare.

Merck has taken a different approach. Currently, a bioethics advisory panel and a subsidiary digital ethics advisory panel guide the company’s approach to tackling ethical issues that arise in its business and research. The company is developing a new code of digital ethics and has stated that it believes patients and healthcare facilities will be “more likely to share data with a partner that adheres to a clear set of guidelines”.

Other companies in the sector are acknowledging ethical issues in growing numbers. Our analysis of corporate filings globally in the pharmaceuticals and biotech sectors found over 700 references to AI, machine learning or data in the context of ethics in 2020, though the vast majority of those references centred on data ethics rather than anything specific to AI.

Why a policy matters

Addressing ethical issues is at the heart of building trust in the use of AI in healthcare.

A ‘state of the nation’ survey of chief executives, senior managers and others working across the AI ecosystem in England examined the role of ethics in enabling AI use in health and care. The 2018 study – a collaboration between NHS England, NHS Digital, the UK government and the AHSN Network – uncovered widespread support for an ethical AI framework. Of the 106 respondents, 88% said such a framework was extremely or very important to building or preserving trust and transparency in AI use in healthcare.

In a foreword to the report, health secretary Matt Hancock said it is necessary for the public to “have confidence that AI (and the health data which fuels the development of new algorithms) is being used safely, legally and ethically, and that the benefits of the partnerships between AI companies and the NHS are being shared fairly”.

The body driving digital transformation in healthcare in England, NHSX, is taking its own steps to address the ethical challenges posed by AI use in healthcare. It has set up an NHS AI Lab, and its AI ethics initiative seeks to “ensure that AI products used in the NHS and care settings will not exacerbate health inequalities”.

The task for life sciences companies and healthcare providers is to articulate how they address the specific challenges that arise.

As developments in the market already highlight, there are different ways to achieve this – whether through a bespoke AI and ethics policy, establishing advisory panels or issuing a position statement or principles.

Life sciences companies and healthcare providers will also have to determine whether an AI ethics policy can be global or needs to be tailored for different cultures, values and expectations arising in different countries.

Further challenges for organisations include working through what happens when an ethical policy conflicts with commercial opportunity, and finding ways to protect and empower ethicists, support divergence of opinion on ethical matters and keep policies up-to-date as technology improves.

Fundamentally, any policy is only as good as the governance framework that underpins it. In the context of AI, governance around data use and sharing is of particular importance, so additional thought must be given to governance models that facilitate access to large data sets to improve the quality of data input and the integrity of outcomes, while ensuring patients have control over how their data is used, who has access to it and for what purposes.

On Thursday 1 July 2021, Pinsent Masons is hosting its second AI Healthcare Leadership Academy session on ‘Ethical Considerations of AI in Healthcare’ in collaboration with Intel and The Digital Leadership Forum. The session will explore crucial ethical concerns around implementing AI in healthcare practices, particularly with regard to data governance, privacy protection, regulatory landscapes, risk management, and security. Registration for the event is now open.
