Hello and welcome once again to the Pinsent Masons Podcast, where we keep you up to date with the most important developments in global business law and regulatory news every second Tuesday. I'm Matthew Magee, and I'm a journalist here at Pinsent Masons, and this week we look at how our health data might be handled and shared in the near future. We also follow up on Out-Law's exclusive reporting on how the European Commission is changing how it regulates AI in response to China's DeepSeek model.
But first, here's some business law news from around the world:
The infrastructure project planning process for England and Wales is to be streamlined further
The South African domestic aviation industry is under scrutiny from a parliamentary committee and
The UK's children's safety codes have been published
Consultation duties on developers of large infrastructure projects will be scrapped in amendments to the UK’s Planning and Infrastructure Bill. Developers for projects in England and Wales will no longer have to consult with certain agencies, local communities, landowners, and other stakeholders on plans for ‘nationally significant infrastructure projects’ (NSIPs) before applying for development consent. UK Housing and Planning Minister Matthew Pennycook said that some of the consultation had become a tick-box exercise that was driving perverse outcomes. Planning expert Robbie Owen said that the amendments could significantly reduce the time it takes developers to bring forward NSIP applications. “This will really help to simplify and shorten the pre-application process and reduce its cost, and is also likely to make it easier to bring forward minor changes to an application once it has been accepted for examination,” he said.
The South African low-cost domestic airline industry may face reforms in the future aimed at protecting consumers and ensuring fair competition as the sector faces increased scrutiny, experts have said. Competition law experts Andrew Attieh and Mark Thomas were commenting after the parliamentary Portfolio Committee on Trade, Industry and Competition and the Portfolio Committee on Transport met with aviation industry figures to discuss price control and regulation in the domestic low-cost industry. Attieh said: “A parliamentary inquiry could lead to changes in how the domestic low-cost airline industry in South Africa is regulated. However, what shape the inquiry will take, or the reforms that will be recommended, remain unclear.” The committees heard from the Competition Commission, which said that though there was significant competition in the airline market before the Covid-19 pandemic, it had noticed a post-pandemic shift in the market, with only one dominant airline in the low-cost market. This concentration raises concerns about reduced competition and its impact on consumers, it said.
The UK’s media regulator Ofcom has outlined the specific measures online service providers can take to meet obligations on child safety under the UK’s Online Safety Act (OSA). These include “safer algorithms”, effective content moderation systems, and the designation of someone accountable for compliance with the children’s safety duties under the Act. The measures are set out in new codes of practice published by Ofcom and relate to the higher levels of protection online for children that the Online Safety Act creates. The children’s safety codes that Ofcom has now issued, which are subject to parliamentary approval, set out the steps and measures providers of in-scope services can take to meet the obligations and to address the specific risks identified in the child risk assessment.
Data makes the world go round and this is as true in the world of health as anywhere else. Between computerised health records, public health statistics and personal fitness trackers, we have more information about our bodies and diseases than ever before. But it is stored all over the place by whoever happens to have gathered it, which makes it hard to use. There's an obvious public benefit to giving researchers access to this information to improve all our health, but this has to be balanced against the commercial interests of whoever spent a lot of time and money collecting, cleaning, organising and maintaining the databases of information. So health data policy is in something of a period of flux as governments and health agencies try to work out how to square some of these circles. The UK recently showed its hand, announcing the establishment of a register of UK health datasets. To help us understand what's going on I turned to Louise Fullwood, a Leeds-based health law expert who in a previous life worked as a medical researcher. I first asked her to outline what exactly health datasets are.
Louise Fullwood: Health data set is a broad term. It can be used to cover really anything to do with your health. It can be as broad as nonspecific to individuals, like the number of babies born on a day, number of admissions in hospital. It can be super specific about you, the patient, like your medical notes, your clinical notes. It can be written data from medical notes. It can be data from, say, blood or tissue samples. It can be imaging data like X-ray scans, MRI scans, and increasingly it could be data from health and fitness apps, from your iPhone, your sleep tracker. All of that is health data, so very broad. It may be collected from a number of sources. It could be from the NHS, from your hospital consultant, your GP, your physiotherapist, or it may have come from a private source, for example, a pharmaceutical company doing a clinical trial of which you might be a participant.
Matthew Magee: Health data is amongst the most sensitive information about us that there is, and so it attracts extra legal protection. While the benefits of sharing it are significant, it must also be protected for the sake of people's privacy and health. So working out how to get the benefits of sharing without damaging people's interests isn't easy. This is what the UK's recent announcement is about. It's begun to clarify in broad terms only how it wants to go about it.
Louise: What was announced a couple of weeks ago is a new service called the UK Health Data Research Service. This will make it a lot easier. It will make it faster for researchers. So it should hopefully turbocharge the ability of people to get access to data and to crunch it. And if you combine that now with what's been happening in AI, in machine learning, in those models, in big data, you're getting the technology developing to such a point where having access to lots of data and big data sets is going to hopefully have good outcomes. The two things are converging. So this is making it easier. And what it will do is to give approved researchers a single secure route to get health data. Because for the last few years there has been a register in the UK of these sorts of health data sets. But what they do is simply say, here is a hospital that has this data set, here is what the data set is. So researchers can look online, they can find appropriate data sets they might want to study and they can ask permission to get that. But what is different now is that rather than applying to each individual data custodian, there will be a central service. So imagine if you're a researcher looking at childhood cancers and you want to study data sets out there. You might find that Great Ormond Street Children's Hospital has one, Bristol Children's Hospital has one, Alder Hey has one. So what you've got to do then is to approach each of those data custodians, explain what you want to do, convince them you're going to look after that data properly, convince them they should share it with you and then negotiate a contract with them. So as you can imagine, that can take a very long time and be very difficult for researchers. What this announcement means is that there will be a single source, a single environment, so you will only need to apply once. So it's really going to speed things up, make lives a lot easier.
This is interesting to compare with what Europe is doing, the EU, because over the past five years they've been developing the European Health Data Space and Health Data Space Regulation. And what that has said is that if you are a data custodian, i.e., a person or organisation who's sitting on, controlling and maintaining a health data set, it is mandatory that you register this with the central authorities, who will then decide if that should be shared. And there are some issues with that, particularly from a pharmaceutical company perspective. So it will be very interesting to see whether this will be mandated in the detail of the announcement.
Matthew: We've heard Louise say that there are benefits to researchers in having easier access to all of this data. I asked her for a specific example and she explained how it'll affect clinical trials by pharmaceutical companies and research bodies, not only making them easier and cheaper to run, but making them more accurate and effective.
Louise: It is anticipated that this will be a benefit for clinical trials. It will make it easier for researchers to find patients. So for example, if they're looking at doing long-term follow-up of drugs, they will be able to locate patients who had those drugs. At the moment, it often is the case that a company wanting to do a clinical trial will sort of go through individual consultants or clinicians or GPs and they will try and find the patients associated with those clinicians. This makes it easier to just find the patients far more directly and it makes it a lot more streamlined. It also means that the data that can be obtained can be better quality. So for example, it will enable a researcher to find a data set that really meets its requirements. So rather than just taking from a general population, you can, say, compare a group of patients that you're giving a placebo to with a group of patients that you're giving the active drug to, and you can make sure that those two groups are as similar as possible. Because if you're just taking them from the general population, it may be that Group A actually differs from Group B in a number of ways. But what you can do when you have access to lots of data is pull out what you call a matched cohort. So you say right, I just want, say, men of this socio-economic class in this geographical region within this age group, and then that is the group I'm going to divide into two and give the placebo, give the active ingredient. So it is going to be much more specific and higher quality research outcomes.
Matthew: Under EU plans, data owners will have to give access to their data if an independent body says they should, and that body will decide what fees, charges, or compensation should be paid. Pharmaceutical companies behind those trials are not so far wholly happy with data sharing plans either in the UK or the EU. Drug and treatment development is eye-wateringly expensive and the data collected through trials, testing, analysis, and development is where a lot of the value generated by that investment sits. Those companies understandably want to protect their investments.
Louise: Interestingly, the authority which will give that final approval isn't the data owner itself, it will be the data access body for each country. But there are two aspects of this which cause a bit of consternation, particularly to pharmaceutical companies. One is the fact that this is mandated. So they may have information that they would not want to make available. But that won't be a choice, because if they don't make it available, there will be fines and sanctions. Second is the fact that the person making the final decision on whether to release data to a researcher will be these health data access bodies, and pharmaceutical companies are questioning whether they will always make the right decisions, particularly because pharmaceutical companies will have concerns about intellectual property and confidentiality of certain data. Another point is about the charging of fees, because at the moment there's quite a secondary industry in organisations who have put together databases, which to be honest take a lot of money to collate, cleanse and maintain up to date; a database like that involves a lot of investment. So people who've made that investment have the opportunity to make it available to others on a charging basis and that can be very helpful. The fee arrangements under the regulation are not fully clear, but they seem to indicate that it will essentially be kind of an at-cost basis. So that takes away a big part of that business. It takes away the ability of some data sets to be self-sustaining in the long run. So that continues to be an issue.
Matthew: The EU is further down this road than the UK, but it has set itself a timetable for action that stretches far into the future. Louise says this offers the UK an intriguing choice: to integrate and get the benefit of access to even more data all over Europe, or to go it alone and move more quickly.
Louise: I think it's interesting to think about how the UK may or may not follow the European Health Data Space Regulation, because that's just been published in March and it's going to have quite a slow rollout really over the next 10 years. And what the regulation says is that third countries, i.e., non-EU countries, of which the UK would be one, will be able to participate, will be able to access that EU-level data, but only if their country complies with those regulations and allows applicants in the EU to have equivalent access. So the question now for the UK government is do we follow and model ourselves on the EU model, which means that we can play and we can be in that bigger sandpit? Or do we use, conversely, the fact that we don't have to follow that slow timetable, we can be nimbler and actually get something moving faster and maybe think about integration at a later point? Because again, the specific guidelines on how the European Health Data Space will work in practice won't be published until autumn 2025 for the first set, with a second lot in spring 2026. So at the moment there's a lot of guesswork going on, a lot of feeling our way. So it'll be really interesting to see how the two progress.
Artificial intelligence systems are already powerful, are about to get even more powerful and are moving at lightning speed. They can bring enormous benefits but, of course, also risk causing enormous damage and upheaval. Governments have a duty to make sure they are used for the right purposes and in the right way. So far, so uncontroversial. But how governments actually do this – how they regulate AI – is something nobody can really agree on. There are policy tensions – comprehensive control, or light touch?; commercial tensions – protect citizens at all costs, or be welcoming to AI developers?; and the problem of comprehension – how can legislators control systems only very few people in the world really understand? Enter, a few weeks ago, DeepSeek, the Chinese AI app that its inventors claim was developed more quickly and using less computing power than other systems. It has spooked governments and businesses elsewhere because if those claims are true then it shows there is a way to create powerful systems more cheaply and quickly than was thought possible. And it has spooked legislators because they have been using a fairly crude measure of computing power to decide which models are powerful enough to merit regulation as ‘general purpose AI’, or GPAI, models – and, in particular, which GPAI models are to be considered as posing systemic risk and so subject to stricter rules. The measure of computing power used has been FLOPs, or floating point operations. But if DeepSeek can achieve the same results with less computing power, is FLOPs the right metric for determining an AI model’s capabilities and, by extension, the risk it poses? Amsterdam-based technology law expert Wouter Seinen has a job ahead of him to explain this to us, but we started with that higher bar for more powerful models, those that pose ‘systemic risk’. What is systemic risk and why does it matter?
Wouter Seinen: Systemic risk is a concept that the regulator has introduced to provide another layer of regulatory scrutiny on providers of AI systems that folk in the political arena consider to have the potential for real impact on society. And that could be all kinds of things. That could be a general capability for cybercrime. It could be that they have a capability to create fake news. I don't know, biological weapons, anything. It's actually a bit up to the fantasy of anyone that has watched a couple of James Bond movies: what can we do with those extremely powerful things? So the thinking was more conceptual: let's make sure that, whilst not stifling innovation, we have an extra bucket of suppliers of those AI systems that could be used for very risky or dangerous activities without these systems really being designed for that purpose. So it's clear that if you're creating a weapon system, the purpose of that is quite clear and you have a different type of regulation, but this is more indirect and that's also what the lawmaker is doing. They're actually asking all those providers that are in that bucket to come forward with additional auditing, additional research and reporting on how they have evaluated those risks and what steps they took to mitigate against them.
Matthew: I asked Wouter to explain what it is about DeepSeek that's changed the picture so quickly.
Wouter: DeepSeek is an example of all of a sudden another GenAI operator falling out of the blue sky and doing things in a different manner than we were used to. We knew of players like OpenAI and Microsoft, but I think DeepSeek opened the eyes of the regulators in a sense. We thought that this was a relatively stable market because it requires such enormous investments. So we can easily identify three or four giants and we'll just put them under a lot of scrutiny. And all of a sudden there was a new player, a new kid on the block. So could that happen again? That's one concern. That's probably also what has driven the lawmaker to find something that at least we can use as a regulator to get our arms around new market entrants.
Matthew: Our reporters at Pinsent Masons content service Out-Law exclusively revealed two weeks ago that EU policymakers were considering changing the AI Act because of this new development. And we can now report further moves to set threshold measures for what constitutes a general purpose AI model in the first place. The use of FLOPs is an attempt to categorise models in some kind of measurable, objective way to ensure the regulation works for technologies that we couldn’t even imagine just now. But Wouter has issues with the use of FLOPs as a measure at all.
Wouter: Well, I don't consider it a good measure. It's unusual for the lawmaker, which really loves to introduce technology-neutral concepts, to all of a sudden bring an almost hard-coded norm into play. It's not clear how exactly computing power consumption will evolve over time. It could develop very aggressively, and then maybe at some point your handheld computer or your iPhone could have that computing power; or, the other way around, it is very well possible that certain AI models and operators will find ways to run their magic in a much more efficient manner, which would all of a sudden put them under the threshold when the regulator actually wants them above it. DeepSeek already surprised some commentators because it appeared to run a model in a more computing power-efficient and cheaper manner than, for instance, ChatGPT. Then there was a question, well, are they cutting corners or not? But the thing is that they found a way to do it cheaper. So that's why I think it may prove a quite unreliable measure to use.
Matthew: So as we report today on Out-Law, the European Commission is consulting with businesses and the public about how to regulate GPAIs, while it also prepares a code of practice for GPAI providers, due this week or next. The sudden emergence of models such as ChatGPT and DeepSeek has driven regulators to adopt the more objective approach, based on FLOPs. Nobody thinks it’s perfect, but Wouter still thinks there are other ways to achieve the same aim.
Wouter: One option is an indirect approach. So you could have a commission of scientists working on a model to evaluate different parameters, like the size of the training data, the computing power used, the type of the model. That could be a more flexible approach. You would bring academia and other technicians into the equation to have a more technical discussion about where you want to draw the line, but that would need to be informed by, let's say, the functional requirements of the lawmaker. Another proxy could be the amount of training data that went in. And then there is the power for the Commission to designate certain players. I wouldn't be surprised if that power will also be used to designate certain players that are drawing a lot of attention because they're successful. People in Brussels think, well, let's just go for it. Let's just do a designation: you are a GenAI provider with systemic risk and we expect you to report on all your risk assessments and your controls, etc.
Thanks once again for listening and please do share with anybody who you think might be interested in legal news, analysis, and regulatory observations from all over the business world. It really helps us to reach the people who might find it useful. We really appreciate the time you spend with us. I know lots of people want your attention, so thanks for spending some of it with Pinsent Masons and our podcast. And until next time, goodbye. The Pinsent Masons Podcast was produced and presented by Matthew Magee for international professional services firm Pinsent Masons.