Diversity and accountability
The WEDF’s emphasis on front-end teams taking responsibility for – or at least being aware of – ‘explainability’ issues is key. In the UK, the Department for Culture, Media and Sport has already stated that: “accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person – whether corporate or natural”.
In many large organisations, this accountability might sit with individuals who are one or more steps removed from the development of an AI system. It has previously been suggested that companies could implement an AI 'explainability' appraisal process for internal, regulatory and consumer use. The questions posed by the WEDF are highly relevant and could serve as building blocks for developing these governance processes.
How the guidelines will develop
Despite their many positive elements, the WEDF's new guidelines remain very general. This generality invites companies to use them as a starting point for developing their own bespoke approaches to the responsible use of AI – a process that will be shaped by the specific context and the complexity of the technology's use.
Overall, the way the guidelines are structured is very helpful, and should ensure that they remain relevant even as the technology and its means of development change. For example, in the context of generative AI, the WEDF's guidelines on data integrity and sourcing will remain relevant regardless of whether a model is trained on real sources obtained through web scraping or on synthetic data. Development of the WEDF guidelines is also an open forum project, so they will improve with greater and more diverse engagement over time.
Co-written by Krish Khanna and Marina Goodman of Pinsent Masons.