Out-Law News
14 Jul 2023, 9:28 am
A new UK inquiry launched to examine how well artificial intelligence (AI) text generators are regulated should take time to consider the relationship between their size and performance, according to one legal expert.
The House of Lords’ Communications and Digital Committee’s study will scrutinise the work of government and regulators regarding so-called ‘large language models’ (LLMs) and issue recommendations to ensure the UK can respond to their opportunities and risks.
LLMs are a type of generative AI capable of producing human-like text, code and translations. Goldman Sachs recently estimated that generative AI could add around £5.5 trillion to the global economy over the next decade, though smaller and cheaper open-source models are also expected to proliferate.
Artificial intelligence expert Luke Scanlon of Pinsent Masons said it was “no surprise” to see more focus on the accuracy and performance of LLMs. “However, the relationship between size and performance of language models is unclear, and there is growing evidence that smaller language models can, in some circumstances, create opportunities that are not available through the use of LLMs alone.”
His comments came after a number of experts raised concerns over the accuracy of work produced using generative AI. Chatbots, for example, can generate contradictory or fictitious answers to questions posed to them by users.
The committee also said the training data used to develop LLMs can contain harmful content, and that intellectual property rights in such data remain uncertain. The complexity of machine learning algorithms can make it difficult to know whether an LLM might develop counterintuitive or perverse ways of performing tasks.
The committee warned that LLMs and similar tools could be harnessed to spread disinformation and to facilitate hacking, fraud and scams, and said it would examine potential safeguards, standards and regulatory approaches that promote innovation whilst managing risks.
“It will be interesting to see whether the evidence provided to the committee will bring concerns over the cost of, and access to, hardware and resources necessary for LLMs into the public conversation,” Scanlon said.
He added: “The wider discussion should also consider the extent to which smaller models can be deployed more readily in ways that enable greater accuracy, and other market developments that could have an influence on determining how quickly more reliable generative AI tools might become available.”
The deadline for written contributions to the committee’s inquiry is 5 September 2023.