Why a policy matters
Addressing ethical issues is at the heart of building trust in the use of AI in healthcare.
A 'state of the nation' survey of chief executives, senior managers and others working across the AI ecosystem in England examined the role of ethics in enabling AI use in health and care. The 2018 study – a collaboration between NHS England, NHS Digital, the UK government and the AHSN Network – uncovered widespread support for an ethical AI framework. Of the 106 respondents, 88% said such a framework was extremely or very important to building or preserving trust and transparency in AI use in healthcare.
In a foreword to the report, health secretary Matt Hancock said it was necessary for the public to "have confidence that AI (and the health data which fuels the development of new algorithms) is being used safely, legally and ethically, and that the benefits of the partnerships between AI companies and the NHS are being shared fairly".
NHSX, the body driving digital transformation in healthcare in England, is taking its own steps to address the ethical challenges posed by AI use in healthcare. It has set up the NHS AI Lab, whose AI ethics initiative seeks to "ensure that AI products used in the NHS and care settings will not exacerbate health inequalities".
The task for life sciences companies and healthcare providers is to articulate how they address the specific challenges that arise.
As developments in the market already highlight, there are different ways to achieve this – whether through a bespoke AI and ethics policy, establishing advisory panels or issuing a position statement or principles.
Life sciences companies and healthcare providers will also have to determine whether an AI ethics policy can be global or needs to be tailored for different cultures, values and expectations arising in different countries.
Further challenges for organisations include working through what happens when an ethical policy conflicts with commercial opportunity, and finding ways to protect and empower ethicists, support divergence of opinion on ethical matters and keep policies up to date as technology improves.
Fundamentally, any policy is only as good as the governance framework that underpins it. In the context of AI, governance around data use and sharing is of particular importance. Additional thought must therefore be given to governance models that facilitate access to large data sets – improving the quality of data input and the integrity of outcomes – while ensuring patients retain control over how their data is used, who has access to it and for what purposes.
On Thursday 1 July 2021, Pinsent Masons is hosting its second AI Healthcare Leadership Academy session, 'Ethical Considerations of AI in Healthcare', in collaboration with Intel and The Digital Leadership Forum. The session will explore crucial ethical concerns around implementing AI in healthcare practices, particularly with regard to data governance, privacy protection, regulatory landscapes, risk management and security. Registration for the event is now open.