Businesses should consider internally what systems they have in place to mitigate some of the heightened risks that AI may pose. Where businesses do not have these controls in place, they should engage with their AI technology suppliers to discuss what controls they can offer as part of their solution. This could include graduated levels of human control or input, or a guardrail system which switches the AI off if it begins to produce incorrect outputs.
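By way of illustration only, the sketch below shows one way such a guardrail might be structured, assuming a hypothetical model interface and a business-defined validity check; actual vendor controls will differ and should be confirmed with the supplier.

```python
# Minimal sketch of a guardrail wrapper. The model interface and the
# is_valid_output check are hypothetical placeholders, not any vendor's API.
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class GuardrailWrapper:
    model: Any                              # the underlying AI system
    is_valid_output: Callable[[Any], bool]  # business-defined validity check
    failure_threshold: int = 3              # consecutive failures before shutdown
    _failures: int = field(default=0, init=False)
    _active: bool = field(default=True, init=False)

    def predict(self, request: Any) -> Any:
        if not self._active:
            raise RuntimeError("AI disabled by guardrail; human review required")
        output = self.model.predict(request)
        if self.is_valid_output(output):
            self._failures = 0
            return output
        self._failures += 1
        if self._failures >= self.failure_threshold:
            self._active = False  # switch the system off until a human intervenes
        raise ValueError("Output rejected by guardrail; escalate to a human operator")
```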
Another risk to consider from the outset is that of integration and implementation. AI providers will need to understand the company's IT estate, processes and data sets before rolling out a proof-of-concept model. Implementation is often protracted where the parties have not fully discussed issues such as legacy systems and the types of data to be analysed, for example whether the data is structured only or also includes unstructured material. Where these conversations occur at an early stage, realistic expectations and milestones can be agreed in principle and then translated into the contract.
Consider:
- What internal systems are in place to monitor the AI solution?
- Have you discussed what monitoring tools the AI provider has available?
- Are both parties clear on timescales for implementation?
Transparency
Businesses should be clear on the data sets that have been selected and used to train, test and deploy the AI. If the system has been trained on inaccurate data sets, or has not completed the relevant training, the level of errors in its outputs is likely to be much higher.
It is important for the business to engage with the AI service provider and to understand the services being provided. In particular, internal stakeholders should understand the underlying decision-making and evaluation process used by the AI tool, and where decision-making cannot be traced or explained, businesses should ensure that they have processes in place to deal with such circumstances.
Businesses should also discuss with AI providers how decisions and processes are logged and made available for review. From a regulatory perspective, the management board will remain accountable to regulators. The board therefore needs to be able to understand and explain, to the extent possible, the rationale behind decisions taken by AI systems. Where it cannot, it should consider whether it is comfortable using "black box" AI in particular areas of its business, such as customer-facing environments, or across the business at all.
Given the vast quantities of data used by AI systems, developing a transparent process can be challenging. However, businesses should consider documenting the data types being processed, where they are stored, which algorithms the AI system uses, the parameters set and where decisions are stored. At a high level this will assist in tracing erroneous data, should this be required, and may help the business explain decisions made by the AI to regulators.
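A minimal sketch of such a record-keeping approach is set out below, assuming a simple append-only JSON-lines file as the store; the field names are illustrative only, and in practice the logging mechanism would be agreed with the AI provider.

```python
# Minimal sketch of an audit log for AI decisions. The store (a JSON-lines
# file) and all field names are illustrative assumptions, not a standard.
import json
from datetime import datetime, timezone


def log_decision(log_path, model_name, model_version, parameters,
                 input_summary, decision):
    """Append one auditable record per AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,             # which algorithm/system was used
        "version": model_version,        # exact version, for reproducibility
        "parameters": parameters,        # settings in force at decision time
        "input_summary": input_summary,  # data types/sources, not raw personal data
        "decision": decision,            # the output to be explained if challenged
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: a record that lets an erroneous credit decision be traced later
log_decision("decisions.jsonl", "credit_scorer", "2.1.0",
             {"threshold": 0.7}, {"fields": ["income", "history"]}, "decline")
```

Even a simple log of this kind gives the board something concrete to point to when asked how a particular decision was reached.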
Businesses may also wish to consider whether their existing processes satisfy regulatory and ethical requirements of explainability, or whether additional measures need to be put in place.
Consider:
- What data sets are being provided?
- Can you access logs of the AI decision making process?
- How does the algorithm work, and can the decision-making models be tested? If so, can the testing be documented (see the sketch below)?
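On the last point, documented testing need not be elaborate. The sketch below, under assumed interfaces (the model object and the test cases are hypothetical placeholders), runs a fixed evaluation set through a decision model and persists the results so the exercise can later be evidenced to a regulator.

```python
# Minimal sketch of documented model testing: run a fixed evaluation set
# and write the results to a file that can be retained as evidence.
import csv


def run_documented_tests(model, test_cases, results_path):
    """test_cases: list of (input, expected_decision) pairs."""
    with open(results_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["input", "expected", "actual", "passed"])
        for case_input, expected in test_cases:
            actual = model.predict(case_input)
            writer.writerow([case_input, expected, actual, actual == expected])
```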
Governance
At a business level, it is important that the appropriate individuals and teams are in place to engage with the technology, oversee the outsourcing and work with the AI supplier on day-to-day issues.