Navigating Risk Horizons

The use and development of Artificial Intelligence

"Brain representing a motherboard.

Understanding and embracing technology is key to business resilience. However, it is essential to balance the urge to adopt the latest technological advancement with taking the necessary steps to check that innovative technology is fit for purpose. The Post Office scandal over the Horizon IT system, which resulted in the criminal conviction of over 700 branch managers and has been described as the most widespread miscarriage of justice in UK history, is an extreme example of how trusting technology without proper due diligence can go horribly wrong.

Fast forward to 2022 and the launch of ChatGPT, which accelerated the Artificial Intelligence (AI) revolution and led to AI being considered a game changer in the modern world. Whilst AI can bring transformative opportunities for businesses, it also comes with inherent risks that need to be borne in mind. As we move into 2024 and organisations increasingly integrate AI technologies into their operations, we are likely to see greater exposure to litigation arising out of allegations of incorrect information generated by AI, interference by AI with other IT systems, and accidents caused by AI applications.

Earlier this year, US software company Salesforce reported the results of a survey of over 500 senior IT leaders, which recorded that 67% of them were prioritising AI for their business within the next 18 months, with one-third naming it as a top priority. However, as the use of AI grows, the incidence of AI failures will increase, and businesses will need to expend significant budget and resources to establish clear guidelines and ethical standards for AI development and deployment.

Privacy issues may intensify, with the responsible handling of sensitive data becoming a paramount challenge. The evolving regulatory landscape may introduce compliance challenges, requiring businesses to navigate complex frameworks. Workforce displacement remains a persistent risk, necessitating thoughtful strategies for upskilling and reskilling employees. Moreover, the ethical use of AI, encompassing bias mitigation and transparency, demands heightened attention to avoid unintended consequences.

The behaviour of AI systems has already led to PR disasters. In 2019, CNN reported that Apple's new credit card was using an AI-driven algorithm which extended higher credit limits to men than to women. In one reported case, a male entrepreneur was given a credit limit 20 times that of his wife, despite her having a higher credit score. Amazon's face search and identification technology, 'Rekognition', has been accused of serious gender bias and of failing to identify darker-skinned individuals accurately. The Verge reported that, in tests, Rekognition made no mistakes when identifying the gender of lighter-skinned men, but it mistook women for men 19% of the time and mistook darker-skinned women for men 31% of the time.

Both instances led to extensive reputational issues for the companies involved, which could potentially have been avoided with proper internal scrutiny of the AI and its decisions. Depending on the business, there is also a risk that blind reliance on AI-generated content could lead to litigation and significant claims for compensation for the damage caused when AI fails.

To shield themselves against the risks accompanying AI integration, businesses must implement a comprehensive risk management strategy. First and foremost, a robust data governance framework is crucial, with an emphasis on data privacy, security, and compliance. Implementing ethical AI practices, including regular audits for bias and transparency, will help mitigate ethical concerns and ensure fair decision-making. Staying abreast of the evolving regulatory landscape, and adapting internal processes accordingly, will help mitigate the risks a business faces when using AI.

Establishing contingency plans for potential AI failures or disruptions is essential. Such plans should emphasise the need for a proactive and adaptable approach to technology-driven challenges. Companies should consider developing, before any incident occurs, a crisis management plan to address AI-related incidents that may detrimentally affect the company's reputation; this should include having the right professional service providers on hand to act quickly and help the business mitigate any fallout from an AI-related failure. Alongside this, a clear communication strategy will enable businesses to respond quickly and transparently to any issues that arise. By adopting these proactive measures, businesses can navigate the risks associated with AI integration and harness the transformative power of artificial intelligence responsibly.