Out-Law News

EU says autonomous car makers should prioritise cybersecurity


An EU report has recommended that manufacturers of autonomous vehicles should make cybersecurity the central element of digital design to combat the risk of malicious attacks on software.

The report by the EU Agency for Cybersecurity (ENISA) and the Joint Research Centre (JRC) warned that the artificial intelligence (AI) systems in autonomous vehicles were vulnerable to intentional attacks designed to disrupt safety functions. Such attacks could take various forms, from manipulating the AI system or disrupting communication channels to physical interference, such as painting markings on a road to confuse navigation systems or placing stickers on a stop sign to prevent the vehicle from recognising it.

Cybersecurity expert Christina Kirichenko of Pinsent Masons, the law firm behind Out-Law, said the report would raise awareness of digital security issues in the transportation sector and could help accelerate regulatory action within Europe.

“ENISA gives a list of measures taken by the EU and internationally with regard to cybersecurity in transportation and motor vehicles in general, which highlights that there have been just a few concrete steps taken so far,” Kirichenko said.

Recent steps mentioned by ENISA include the European Commission’s proposal for a revised Directive on Security of Network and Information Systems, put forward at the end of 2020, as well as regulations on cybersecurity and software updates adopted by the United Nations Economic Commission for Europe in June 2020.

“In the EU, there is a lack of sufficient legislation, detailed technical requirements and standardisation for both AI and autonomous driving. The absence of clear, defined technical requirements or standards for autonomous driving would significantly decelerate the adoption of type approval for autonomous vehicles as well as vehicles with automated functions,” Kirichenko said.

Kirichenko said ENISA’s recommendations for coping with the cybersecurity challenges of autonomous driving were particularly important. She said that, in certain scenarios, they could be used as a guide to the minimum technical and organisational measures required to mitigate AI cybersecurity risks in autonomous driving.

The report suggests (58-page / 1.99MB PDF) that security assessments of AI components should be performed regularly throughout their lifecycle, to ensure that a vehicle always behaves correctly when faced with unexpected situations or malicious attacks.

It also recommends the adoption of continuous risk assessment processes supported by threat intelligence, which could enable the identification of potential AI risks and emerging threats related to the uptake of AI in autonomous driving. Proper AI security policies and an AI security culture should govern the entire supply chain for the automotive sector, according to ENISA.

The report includes detailed risk assessments for five hypothetical attack scenarios, which can be used by equipment manufacturers, suppliers and AI developers as guidance to conduct their own risk assessments.

Kirichenko said the development of legislation and regulations across the EU and in individual member states risked making the regulatory environment in this area tricky to navigate.

“In the end, the interplay of all existing and – first and foremost – future rules that could apply to cybersecurity in motor vehicles, connected cars, including cars with automated functions and autonomous driving cars, AI in general, as well as relevant cybersecurity and resilience rules in the telecommunications sector, could become uncomfortably complex for all stakeholders. Legislative actions should come quickly but be very well thought out,” Kirichenko said.
