Is your organisation future-proofed?
John E. Kaye

We are physically and digitally at the precipice of change. As technology changes the way we work and consumer appetites evolve, legacy industries are tasked with carving out new avenues on digital terrain. At the same time, regulators are acting faster to protect consumers, while finding middle ground with innovators and industry representatives.
In these changing times, businesses need to be reactive enough to handle fresh challenges – and proactive enough to pounce on new opportunities.
The compliance boom is unsustainable
For banks and financial services firms, the compliance and risk management agenda has grown steadily over the past decade to cope with an increasing abundance of regulation. As HSBC’s Chief Compliance Officer Colin Bell recently stated: “You have to build an industrial-scale operation just to digest all the regulatory changes.”
This is far from exclusive to banking. Whether it’s IFRS 17 in the insurance sector, IR35 in the accountancy space, the continued refinement of EU customer data policies, or any number of logistical challenges associated with Brexit, businesses are having to scramble for the right tools to react to the impact of legislation.
In response, the percentage of employees working in compliance, risk and other control functions in many of the world’s largest multinationals has soared over the past decade. But throwing money and resources at the problem, as banks have been doing, is not a sustainable way to continue.
Scale knowledge, not resources
There is a scalable and more efficient way for businesses to ease the load when it comes to matters of risk or compliance assessment. By encoding and scaling the knowledge of their most experienced, specialist personnel, linking it to structured or unstructured data, and applying it to the automation of large-scale decision-making, an organisation can effectively industrialise its best expertise. Risk and compliance become far more manageable as a result.
The intricacies of legislation, and the often-fine margins between complying or overstepping boundaries, require the nuance of an expert. These decisions are often finely balanced in terms of subjectivity: one assessor’s green light may be another’s red light. To be able to make judgements at scale, and consistently in line with a company’s recognised best practice, technology needs to be devised with human expertise at the heart of it all.
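To make the idea concrete, here is a minimal, hypothetical sketch of what “encoding expert knowledge” can look like in practice: an assessor’s judgement captured as named, self-explaining rules that return a reason alongside every decision, so outcomes stay auditable at scale. All field names, thresholds and country codes here are illustrative assumptions, not any real product’s API.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    customer_risk: str  # "low", "medium" or "high", as assessed by KYC staff

def assess(tx: Transaction) -> tuple[str, str]:
    """Return (decision, reason) so every outcome explains itself for audit."""
    # Rule captured from a hypothetical expert: high-risk customers above
    # a 10,000 threshold go to human review rather than auto-approval.
    if tx.customer_risk == "high" and tx.amount > 10_000:
        return "refer", "high-risk customer above the 10k review threshold"
    # Placeholder restricted-country list, purely for illustration.
    if tx.country in {"XX", "YY"}:
        return "block", "destination on the restricted-country list"
    return "approve", "no rule triggered"

decision, reason = assess(Transaction(12_500.0, "DE", "high"))
print(decision, "-", reason)  # → refer - high-risk customer above the 10k review threshold
```

Because each rule is explicit and carries its own rationale, a compliance reviewer can trace any decision back to the judgement that produced it, which is the auditability the article argues for.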
Prepare for the changing audit function
“AI carries risks we don’t understand.” These were the jarring words of another large bank’s compliance chief in a recent study by The Economist. To a compliance professional, automated decision-making that operates beyond human understanding should be a source of dread. The notion that machine learning operates in a ‘black box’ is slightly dated – but the fact remains that the algorithmic calculations that go into these systems can only be understood and audited by data scientists.
Indeed, many of the compliance functions in the finance industry may soon need to involve elements of data science in order to properly audit decisions that have been carried out by (increasingly popular) neural networks or machine learning systems. This presents an expensive quandary. Data scientists are not cheap, and neither is the cost of training staff up to their level. So what is the economical choice in the long run? I believe bringing the experts back to the helm of automated decision-making is an instantly and clearly auditable alternative.
Stay flexible with configurable technology
Besides auditable, automation technology needs to be configurable – allowing for continual tweaks and adjustments to the system’s logic or weighting. Any business that has its rules for transactions and processes inescapably set in stone is stuck in an old way of thinking. They will inevitably lack the tools to cope with an ever-diversifying net of consumers, competitors and legislation. For example, a fraud department may decide to take a new approach to risk, and so lower its threshold for a certain type of fraud while increasing detection for other transaction types, in line with recent activity or research. If the fraud professionals themselves can input their expertise into their automation model, that allows for a far more accurate, nuanced application of their logic across large datasets.
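The configurability described above can be sketched in a few lines: hold the decision thresholds as plain configuration, separate from the code path, so that fraud professionals can retune them as activity shifts. The transaction types, scores and cut-offs below are invented for illustration.

```python
# Hypothetical threshold configuration, owned and tuned by the fraud team.
# Lowering a value makes detection stricter for that transaction type.
THRESHOLDS = {
    "card_not_present": 0.60,  # tightened in response to recent activity
    "wire_transfer": 0.85,
}

def flag(tx_type: str, risk_score: float, thresholds=THRESHOLDS) -> bool:
    """Flag a transaction for review when its score meets the tuned cut-off."""
    # Unknown transaction types fall back to a cautious default threshold.
    return risk_score >= thresholds.get(tx_type, 0.50)

print(flag("card_not_present", 0.70))  # → True: above the tightened cut-off
print(flag("wire_transfer", 0.70))     # → False: below that type's cut-off
```

The point of the design is that a change of approach to risk is a one-line configuration edit, not a restructuring of the automation logic.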
The linear automation technology that many have embedded in their businesses leaves them in an operational straitjacket when new requirements arise. Trying to make adjustments to linear rules-based or ‘decision tree’ automation models can cause whole systems to topple like dominoes. This lack of hands-on adaptability is undoubtedly a reason why decision-trees or robotic process automation (RPA) have failed to offer businesses wholesale, long-term transformation. KPMG estimates, for instance, that barely more than 1 in 10 enterprises have managed to reach industrialised scale with task-based RPA.
Typically, with machine-learning systems, making changes to how a model operates means adjusting the dataset – meaning businesses encounter the same problem as with auditing, namely the need for expensive data science specialists.
Value and amplify the power of human decision-making
It is timely that when the EU Commission released its guidelines for AI ethics this spring, it emphasised the importance of “human-centric” AI for the common good.
The building and management of the two broadest churches of AI – machine learning and RPA – keep an organisation’s human experts at arm’s length, and in doing so forsake the potential for hands-on business customisation and flexible risk management. Bringing the experts back to the core of technology not only makes automated decisions more auditable and more configurable – it also repositions humans at the centre of our AI-powered future.
Further information
For more fintech news, follow The European.