
Out-Law Analysis

AI can improve aerospace operations if legal risk is managed


Aerospace companies have the potential to make their operations more efficient – and improve their health and safety and advance their decarbonisation objectives in the process – by harnessing the power of artificial intelligence (AI).

Legal risks arising in the context of AI use need to be managed, however, if the benefits are to be realised.

Analysis undertaken by Pinsent Masons has identified many businesses in the manufacturing sector that are already using, or considering using, AI tools – for example, to automate repetitive tasks or schedule and allocate human resources, improving productivity and reducing labour costs in the process, or to analyse energy use data and optimise consumption.

In aerospace specifically, AI can make pilot simulations more comprehensive, support engine inspections, assist in detecting and responding to cyber threats, and improve fuel efficiency.

AI can also support predictive maintenance of vital components such as jet engines by analysing data from sensors, flight logs and historical performance to predict when those components may fail – and it can be used in tandem with other technologies, such as virtual reality systems that support engine inspection and repair. This enables proactive maintenance scheduling that can minimise downtime, reduce costs and enhance safety, and it could have particular utility for companies whose machinery is prone to breakdowns when operating in extreme climates.
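
By way of illustration, the short sketch below shows, in deliberately simplified and hypothetical form, how such a predictive-maintenance model might be trained on sensor data to flag components for proactive inspection – the features, thresholds and data are all invented for the purposes of the example and do not reflect any particular operator's systems.

```python
# A minimal, illustrative sketch of the predictive-maintenance idea described
# above: train a classifier on (synthetic) engine sensor readings to estimate
# the probability that a component fails within a given horizon. All feature
# names, thresholds and data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-flight sensor features: exhaust gas temperature margin,
# vibration level, oil pressure, and cumulative flight hours.
X = np.column_stack([
    rng.normal(60, 15, n),    # EGT margin (deg C)
    rng.normal(1.0, 0.3, n),  # vibration (arbitrary units)
    rng.normal(45, 5, n),     # oil pressure (psi)
    rng.uniform(0, 30000, n)  # accumulated flight hours
])

# Synthetic label: failure within the next maintenance window, more likely
# with low EGT margin, high vibration and high accumulated hours.
risk = -0.03 * X[:, 0] + 2.0 * X[:, 1] + 0.00008 * X[:, 3]
y = (risk + rng.normal(0, 0.5, n) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Flag engines whose predicted failure probability exceeds a maintenance
# threshold - in practice, this is where human oversight remains important.
probs = model.predict_proba(X_test)[:, 1]
flagged = (probs > 0.5).sum()
print(f"{flagged} of {len(probs)} engines flagged for proactive inspection")
```

In a real deployment, the flagged list would feed into a human-reviewed maintenance workflow rather than trigger action automatically – a point returned to below.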

The recently announced agreement between Rolls-Royce and Aerogility, under which Rolls-Royce will use Aerogility’s AI-based ‘digital twin’ solution to, as one trade publication put it, “run multiple ‘what-if’ scenarios in rapid, largescale simulations of the business” and support “better decision-making for complex asset lifecycle management”, further highlights the potential of the technology in the maintenance context.
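
The idea of running many ‘what-if’ scenarios can be illustrated with a toy simulation. The sketch below is not Aerogility’s product or methodology – merely a hypothetical illustration of the general concept, comparing invented maintenance intervals for a fleet with a crude Monte Carlo wear model.

```python
# A toy illustration of the 'what-if' simulation idea: compare hypothetical
# maintenance intervals for a fleet by Monte Carlo simulation. The wear model
# and all numbers are invented for the example.
import random

def simulate_fleet(interval_hours: int, n_engines: int = 100,
                   horizon_hours: int = 50000, seed: int = 0) -> float:
    """Return average unplanned failures per engine over the horizon,
    under a crude, hypothetical wear model."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_engines):
        hours_since_service = 0
        for _ in range(horizon_hours // 100):  # step in 100-hour blocks
            hours_since_service += 100
            # Hypothetical: failure risk grows with hours since last service.
            if rng.random() < hours_since_service / 500000:
                failures += 1
                hours_since_service = 0  # repair after failure
            elif hours_since_service >= interval_hours:
                hours_since_service = 0  # scheduled maintenance
    return failures / n_engines

# Compare candidate maintenance policies across the simulated fleet.
for interval in (2000, 5000, 10000):
    print(f"{interval}h interval -> "
          f"{simulate_fleet(interval):.2f} failures per engine")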

However, there are legal issues that aerospace companies need to consider when implementing AI solutions.

As the European Commission put it in a report published in 2020, “the vast amounts of data involved, the reliance on algorithms and the opacity of AI decision-making, make it more difficult to predict the behaviour of an AI-enabled product and to understand the potential causes of a damage”.

Caution, therefore, is needed if decisions around the maintenance of components such as jet engines are to be outsourced to AI tools. A degree of human oversight is likely to remain appropriate, at least in the short term, to guard against errors in AI-led analysis and recommendations, given the potential for catastrophic accidents and the knock-on regulatory and reputational consequences.

Health and safety issues also need to be front and centre. Employees need to be adequately trained on the use of AI systems, including their limitations, potential biases, and the importance of maintaining human oversight. Over-reliance on technology can lead to deskilling and safety risks, while challenges relating to organisational structures, legacy systems and culture may need to be addressed when integrating AI into existing systems and processes.

Where businesses use collaborative robots (cobots), they need to guard against the risk that the robots’ increased mobility, and any scope for them to take decisions themselves based on self-learning algorithms, renders their actions less predictable – and more dangerous – for the human workers alongside them.

There is also the question of who would be responsible if, for example, AI recommendations lead to engine failures. If a manufacturer procures an AI solution from a third party and something goes wrong, there is scope for disputes over who is responsible. Courts will have to grapple with defining what constitutes a reasonable standard for an AI system and with assessing liability, and such disputes are likely to attract considerable public interest.

The issue of health and safety – and liability – in the aerospace industry was recently brought into sharp focus when an Alaska Airlines plane was forced to return to the airport minutes after departure following a mid-air “blow out” of a door panel. The incident has led to the grounding of Alaska Airlines’ fleet of Boeing 737-9 MAX aircraft. In Pinsent Masons’ experience, such an incident can lead to litigation – over liability for losses incurred while other aircraft are grounded pending investigation and resolution of the issues, and over delays to manufacturing which in turn postpone the fulfilment of new orders. While the cause of the Alaska Airlines incident is still being investigated, it is a reminder that the smallest of details can have serious consequences – and any AI systems deployed in this context will have to navigate those complexities too.

While the EU AI Act will introduce new transparency obligations in relation to AI systems – and separate proposed UK reforms will also promote appropriate transparency and explainability – AI systems can be something of a ‘black box’, where it is not easy to understand how or why decisions are made. Users of AI solutions need to be sure they understand what the AI system is doing and that they can get to answers if something goes wrong. Courts will require the same.

In its report, the European Commission further identified the connectivity associated with operating AI systems as a potential cyber risk. In the aerospace context, something as safety-critical as an engine is at risk of hacking if it is digitally connected and controlled. Companies will need to ensure that their pursuit of AI-led innovation does not create new safety risks.

There are also fundamental questions of data ownership to consider. Aerospace companies will need to determine who owns the data generated by AI systems – in relation to engine performance, maintenance logs, carbon footprints and predictive analytics, for example – and put in place clear agreements that address that question alongside related issues such as access rights and use restrictions.

In the context of the growing use of generative AI systems, aerospace companies will also want to consider the extent to which their use of such systems might raise intellectual property risks. Getty Images and the New York Times are just two examples of content creators that have brought legal proceedings against AI developers in recent months, claiming the developers have unfairly exploited their copyright-protected material for the purposes of training AI systems. It is not hard to imagine IP disputes of this nature broadening out to include litigation against the end-users of those AI systems – including in the industrial context – which underlines the need to manage licensing risk.

With growing cost pressures and hype around the potential of AI, it can be tempting to rush to implement the new technology solutions emerging on the market.

Aerospace companies can improve their efficiency and safety, and advance their decarbonisation agendas, by embracing AI tools. However, thought needs to be given to the evolving regulatory landscape and to existing legal issues – and to the extent to which contractual frameworks and insurance can be leveraged to protect against the risks companies face as they seek to keep pace with advances in the technology and its adoption by competitors.
