Generative AI in Banking: Understanding the risks
By Bernard Lam on 15th January 2024 · Banking, Artificial Intelligence
With the launch of ChatGPT, 2023 saw a significant acceleration in the use of generative AI across sectors.
Everyone has tried ChatGPT by now, and large organisations are racing to implement solutions that will improve the experience of their clients as well as automate internal processes. Early use cases range from anti-money laundering programs to specialised consumer services.
The growth of AI has improved response efficiency and set the stage for digital assistants to do more than just listen. They will be able to understand and even anticipate the needs of banking consumers, marking a significant shift in service capabilities and user experience.
But along with the advantages come challenges. It's important to understand the downsides of this exciting technology and to assess the risks appropriately.
Unpredictable and inaccurate results
Generative AI can exhibit unpredictable behaviour, posing challenges for performance tests and risk evaluations. These AI models are capable of producing results that seem certain but are factually false—otherwise known as hallucinations. Such outputs aren't supported by the data the models were trained on, raising concerns especially in critical areas. For instance, if a model mistakenly identifies genuine transactions as fraudulent or vice versa, it can adversely impact customers. Instead of providing definitive answers, generative AI is designed to generate plausible responses, emphasising the need for further advancements in technology and protocols to ensure reliable confidence levels.
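One common safeguard against acting on plausible but wrong outputs is to escalate low-confidence predictions to a human rather than applying them automatically. The sketch below illustrates the idea for a fraud-detection setting; the function name, prediction format, and threshold value are assumptions for the example, not a description of any specific bank's system.

```python
# Illustrative sketch (not a production system): route model outputs that
# fall below a confidence threshold to human review instead of acting on
# them directly. All names and the 0.9 threshold are assumed for the example.

def route_prediction(label: str, confidence: float, threshold: float = 0.9) -> str:
    """Return an action for a fraud-detection prediction.

    Predictions below the confidence threshold are escalated to a human
    analyst rather than auto-applied, limiting the impact of a model that
    produces confident-sounding but incorrect answers.
    """
    if confidence >= threshold:
        return f"auto-{label}"   # act on the model's answer
    return "human-review"        # uncertain: escalate to an analyst

print(route_prediction("approve", 0.97))  # acted on automatically
print(route_prediction("block", 0.62))    # escalated
```

The threshold itself becomes a tunable risk control: lowering it routes more work to humans, raising it trusts the model more.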
Outputs often include bias
AI systems may produce unfairly discriminatory results because of flaws in training data or system design. Generative AI's ability to create content introduces a new level of complexity: unfairly biased content can be subtle and qualitative, making it harder to test for and monitor.
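Some forms of bias can still be measured quantitatively. A minimal sketch of one such check is the demographic parity gap: the difference in approval rates between two groups in a model's decisions. The records and field names below are invented purely for illustration.

```python
# A minimal sketch of one quantitative bias check: the demographic parity
# gap, i.e. how far apart approval rates are between two groups.
# The records and field names here are invented for illustration.

def approval_rate(records, group):
    """Fraction of records in `group` that the model approved."""
    members = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap = approval_rate(records, "A") - approval_rate(records, "B")
print(f"Demographic parity gap: {gap:.2f}")  # 0.67 - 0.33 = 0.33
```

Metrics like this cover only the quantifiable side; the subtle, qualitative bias in generated content still requires human review.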
The risks associated with Generative AI span multiple fields and demand coordination across organisations. There is no single most effective strategy to mitigate these risks, but several methods can help tackle them.
Ethical use of Generative AI
It's essential to define appropriate use cases and train models with suitable datasets, as risks are closely tied to specific applications. Human oversight is crucial, and may take many forms, from monitoring performance metrics to authorising every output in delicate circumstances like marketing.
Enhancing business awareness
Firms should focus on educating their employees on the risks and correct usage of AI. AI knowledge can range from highly specialist and technical to more general understanding, such as knowing how to write effective prompts and which activities are acceptable for publicly available AI tools.
Diversity in training data
Regardless of whether a model is created in-house or supplied by a vendor, firms need to have an accurate understanding of the dataset that was used to train it. Although no dataset can accurately reflect every individual equally, users of AI models should at least be aware of which demographic groups may be underrepresented. This transparency is essential for assessing and correcting any possible bias.
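Awareness of underrepresented groups can be made concrete with a simple representation check: compare each group's share of the training data against a reference population share and flag any shortfall. The sketch below assumes illustrative group labels, shares, and a tolerance of five percentage points; none of these figures come from real data.

```python
# A hedged sketch of a dataset representation check: compare each
# demographic group's share of the training data to a reference population
# share and flag groups that fall short. All numbers are illustrative.
from collections import Counter

def underrepresented(train_groups, population_shares, tolerance=0.05):
    """Return groups whose share of the data trails the population share
    by more than `tolerance` (expressed as a fraction, e.g. 0.05 = 5 pts)."""
    counts = Counter(train_groups)
    total = len(train_groups)
    flagged = []
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if observed < expected - tolerance:
            flagged.append(group)
    return flagged

train_groups = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
population = {"A": 0.50, "B": 0.30, "C": 0.20}
print(underrepresented(train_groups, population))  # ['B', 'C']
```

A check like this does not fix bias on its own, but it gives firms the transparency the paragraph above calls for: knowing which groups the model has seen too little of.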
Adapting risk frameworks
The majority of businesses already have strong risk frameworks in place, so adding another layer of risk management might create unnecessary complexity and be a risk in itself. A better solution is to modify current frameworks to take AI risks into consideration. This involves evaluating risk management enablers, updating governance frameworks, and refining AI-related risk appetite definitions.
In summary, while Generative AI holds many benefits for the finance and banking sectors, it also introduces complexities and potential pitfalls. Financial institutions must improve their understanding of the risks linked to Generative AI, developing strong mitigation strategies that adhere to regulatory requirements and ensure the responsible and effective use of AI.