We are physically and digitally at the precipice of change. As technology changes the way we work and consumer appetites evolve, legacy industries are tasked with carving out new avenues on digital terrain. At the same time, regulators are acting faster to protect consumers, while finding middle ground with innovators and industry representatives.
In these changing times, businesses need to be reactive enough to handle fresh challenges – and proactive enough to pounce on new opportunities.
The compliance boom is unsustainable
For banks and financial services firms, the compliance and risk management agenda has grown over the past decade to cope with an ever-expanding body of regulation. As HSBC’s Chief Compliance Officer Colin Bell recently stated: “You have to build an industrial-scale operation just to digest all the regulatory changes.”
This is far from exclusive to banking. Whether it’s IFRS 17 in the insurance sector, IR35 in the accountancy space, the continued refinement of EU customer data policies, or any number of logistical challenges associated with Brexit, businesses are scrambling for the right tools to react to the impact of legislation.
In response, the percentage of employees working in compliance, risk and other control functions at many of the world’s largest multinationals has soared over the past decade. But throwing money and resources at the problem, as banks have been doing, is not a sustainable way to continue.
Scale knowledge, not resources
There is a more scalable and efficient way for businesses to ease the load of risk and compliance assessment. By encoding the knowledge of their most experienced, specialist personnel, linking it to structured or unstructured data, and applying it to large-scale automated decision-making, an organisation can effectively industrialise its best expertise. Risk and compliance become far more manageable as a result.

The intricacies of legislation, and the often-fine margins between complying with and overstepping boundaries, require the nuance of an expert. These decisions are often finely balanced: one assessor’s green light may be another’s red light. To make judgements at scale, and consistently in line with a company’s recognised best practice, technology needs to be devised with human expertise at its heart.
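To illustrate the idea, here is a minimal sketch of what “encoding” an expert’s judgement might look like in practice. All rule names, fields and thresholds are illustrative assumptions, not a reference to any particular product: the point is that each check is explicit, named, and leaves a human-readable audit trail.

```python
# Hypothetical sketch: an expert's compliance checks expressed as explicit,
# named rules. Every field name and threshold here is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    reasons: list = field(default_factory=list)  # audit trail of rules applied

def assess_transaction(txn: dict, rules: list) -> Decision:
    """Apply each named expert rule in turn, recording what it saw and decided."""
    reasons = []
    for name, check in rules:
        ok, detail = check(txn)
        reasons.append((name, ok, detail))
        if not ok:
            return Decision(False, reasons)
    return Decision(True, reasons)

# The rules capture a specialist's judgement as inspectable code, not a black box.
rules = [
    ("amount_limit", lambda t: (t["amount"] <= 10_000, f"amount={t['amount']}")),
    ("sanctioned_country", lambda t: (t["country"] not in {"XX", "YY"},
                                      f"country={t['country']}")),
]

result = assess_transaction({"amount": 2_500, "country": "GB"}, rules)
# result.reasons lists, for every rule, its name, outcome and the data it saw.
```

Unlike an opaque statistical model, a decision produced this way can be audited by the compliance professionals who wrote the rules, not only by data scientists.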
Prepare for the changing audit function
“AI carries risks we don’t understand.” These were the jarring words of another large bank’s compliance chief in a recent study by The Economist. To a compliance professional, automated decision-making that operates beyond human understanding should be a source of dread. The notion that machine learning operates in a ‘black box’ is slightly dated – but the fact remains that the algorithmic calculations inside these systems can typically only be understood and audited by data scientists.
Indeed, many compliance functions in the finance industry may soon need to involve elements of data science in order to properly audit decisions carried out by (increasingly popular) neural networks or machine-learning systems. This presents an expensive quandary: data scientists are not cheap, and neither is training staff up to their level. So what is the economical choice in the long run? I believe bringing the experts back to the helm of automated decision-making offers an immediately and transparently auditable alternative.
Stay flexible with configurable technology
Besides being auditable, automation technology needs to be configurable – allowing continual tweaks and adjustments to the system’s logic or weighting. Any business whose rules for transactions and processes are set in stone is stuck in an old way of thinking; it will inevitably lack the tools to cope with an ever-diversifying net of consumers, competitors and legislation. For example, a fraud department may decide to take a new approach to risk – lowering its threshold for one type of fraud while increasing detection for other transaction types, in line with recent activity or research. If the fraud professionals themselves can feed their expertise directly into the automation model, their logic can be applied far more accurately, and with greater nuance, across large datasets.
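The fraud example above can be sketched in a few lines. This is a hypothetical illustration, assuming a simple weighted-signal model: the signal names, weights and thresholds are all invented for the example. What matters is that the risk weights and the flagging threshold live in configuration, so the fraud team can retune them without a redeployment or a data science project.

```python
# Hypothetical sketch: a fraud check whose weights and threshold are
# configuration, not code. All names and numbers are illustrative assumptions.

config = {
    "weights": {"velocity": 0.5, "amount": 0.3, "geo_mismatch": 0.2},
    "flag_threshold": 0.6,
}

def risk_score(signals: dict, cfg: dict) -> float:
    # Weighted sum of normalised risk signals (each expected in [0, 1]).
    return sum(cfg["weights"][k] * signals.get(k, 0.0) for k in cfg["weights"])

def flag(signals: dict, cfg: dict) -> bool:
    return risk_score(signals, cfg) >= cfg["flag_threshold"]

signals = {"velocity": 0.9, "amount": 0.4, "geo_mismatch": 0.0}
print(flag(signals, config))   # score 0.57, below the 0.6 threshold

# The fraud team decides to take a stricter view of this pattern:
# lowering the threshold is a configuration change, not a code change.
config["flag_threshold"] = 0.5
print(flag(signals, config))   # the same signals are now flagged
```

The same adjustment in a linear rules engine or a trained model would typically mean rewriting rule chains or rebuilding the dataset; here it is a single value the domain expert controls.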
The linear automation technology that many have embedded in their businesses leaves them in an operational straitjacket when new requirements arise. Trying to make adjustments to linear rules-based or ‘decision tree’ automation models can cause whole systems to topple like dominoes. This lack of hands-on adaptability is undoubtedly a reason why decision trees and robotic process automation (RPA) have failed to offer businesses wholesale, long-term transformation. KPMG estimates, for instance, that barely more than 1 in 10 enterprises have managed to reach industrialised scale with task-based RPA.
Typically, with machine-learning systems, making changes to how a model operates means adjusting the dataset – so businesses encounter the same problem as with auditing: the need for expensive data science specialists.
Value and amplify the power of human decision-making
It is timely, then, that the EU Commission, in the AI ethics guidelines it released this spring, emphasised the importance of “human-centric” AI for the common good.
The building and management of the two broadest churches of AI – machine learning and RPA – keep an organisation’s human experts at arm’s length, and in doing so forsake the potential for hands-on business customisation and flexible risk management. Bringing the experts back to the core of technology not only makes automated decisions more auditable and more configurable – it also repositions humans at the centre of our AI-powered future.
For more fintech news, follow The European.