OUT-LAW NEWS 2 min. read

Why sports teams face AI regulatory risks during recruitment


The San Francisco 49ers are among the teams to have adopted AI in their recruitment processes ahead of the 2026 NFL Draft. Photo: Emilee Chinn/Getty Images


UK and European sports organisations must be careful or risk falling foul of data protection rules as they adopt AI to help with player recruitment, experts have warned.

American sports organisations are increasingly turning to automated decision-making and AI systems in their recruitment – a trend highlighted most recently in the NFL draft.

But as AI usage continues to grow and develop in the competitive world of player recruitment and transfers, teams and clubs may find themselves at increasing risk of breaching strict data rules in the UK and across Europe.

Dom White, a data protection expert with Pinsent Masons, warned that while AI‑supported recruitment is often framed as efficient, its increasing use in profiling, ranking and filtering players brings it squarely within the scope of UK and EU data protection law.

“Player recruitment typically involves the processing of identifiable personal data, and may also involve sensitive health or physiological information, particularly where performance analytics, injury history or predictive modelling are used,” he explained.

“These risks are likely to be heightened where similar tools are extended to youth academies or age‑grade pathways, particularly given the involvement of children’s data, longer‑term profiling, and the potential to limit future sporting opportunities.”

San Francisco 49ers general manager John Lynch told journalists ahead of the draft that the organisation had adopted AI tooling to help determine which players best fitted its roster requirements, adding: “I do think every team is probably using it in some form or fashion.”

In the UK, clubs are increasingly using sophisticated player databases – such as those employed by Brighton and Hove Albion in the English Premier League and Heart of Midlothian in the Scottish Premiership – to identify lesser-known players from around the world who fit their requirements, sifting through millions of individual performance and appearance statistics to find transfer targets.

However, the use of automated decision-making technology in recruitment has come firmly under the regulatory focus of the Information Commissioner’s Office (ICO), which recently issued draft guidance on the compliance requirements for adopting AI in hiring decisions.

The ICO’s guidance makes clear that meaningful human involvement must be real and not tokenistic, that individuals must be informed and given routes to challenge outcomes, and that profiling can still occur even in purely advisory systems.

White warned that while the ICO’s work has focused on employment contexts, the same risks are likely to arise in elite sport, where algorithmic assessments may have significant career consequences for athletes – meaning clubs and organisations could face questions around profiling and automated decision-making.

“Even where clubs emphasise that humans make the final decision, recent regulatory guidance makes clear that systems presented as ‘decision support’ can still trigger automated decision-making obligations if human involvement is not meaningful,” he said.

“As AI‑driven scouting tools become more widely adopted across domestic and international sport, clubs and governing bodies will need to ensure that innovation in recruitment is matched by robust data governance, clear accountability and compliance with profiling and transparency requirements.

“Success on the pitch does not displace the need for compliance off it.”
