Over $3 trillion in illicit funds flows through the global financial system annually. It’s a staggering number, yet it’s just that: a number. The real concern shouldn’t be the financial cost, but the human impact. It’s the estimated 27.6 million victims of human trafficking, the 50 million people affected by modern slavery, and the countless people who lose their life savings to scammers each year.
As an early employee of UK challenger bank Monzo, where I helped build its industry-first fraud detection system, and now chief scientist at Gradient Labs—an AI fraud prevention startup—I’ve witnessed the devastating impact of financial crime up close. It costs people their security, comfort, dignity, and often their lives—things far more valuable than money.

Yet technology offers hope. While AI draws criticism for its threat to jobs and livelihoods, it also has the potential to do real good: detecting, preventing, and disrupting financial crime at scale, not just to reduce losses but to protect lives.
Reevaluating measures of AI success
In financial services—and across a wide range of industries—early adoption of AI has been guided by a single goal: doing more with less.
Banks have deployed AI to process transactions faster, insurers to streamline claims handling, and retailers to optimize inventory management. Because it removes back-office inefficiencies and reduces headcount, AI is largely viewed as a tool for cutting costs, improving margins, and boosting productivity. In fact, 75% of business and technology leaders prioritize process effectiveness when evaluating AI tools, while 65% look at employee uptake and 50% consider sales impact. The focus is primarily on scalability and profitability. Yet this narrow view of AI as a revenue-generating machine risks missing its broader potential as a tool that benefits not only businesses but humanity as a whole.
In my experience, most entrepreneurs aren’t motivated by wealth, especially given that the odds of failure far outweigh the prospect of an IPO. They’re driven by the chance to solve real problems and create lasting change. So why do we measure success primarily in costs cut and money made? Perhaps we should reconsider our KPIs, putting lives improved and customers safeguarded at the top of the list.
Unlocking AI’s potential for good
I spent four years building machine learning models to detect financial crime—yet even the most advanced systems are inherently flawed. Most operate by flagging suspicious activity and alerting a human investigator, who then reviews the account and decides on a course of action. The problem? Human judgment is subjective, prone to error, and often biased. Two investigators can review the same data and reach entirely different conclusions. This leads to inconsistent outcomes, allowing criminals to slip through the gaps and exploit the system. Even if some accounts are closed and funds frozen, the profits from the activity that goes undetected make the risk worthwhile. And as long as even one criminal is profiting, at least one victim is suffering.
We may not be able to outsmart these criminals on our own, but we can build systems capable of doing so. Early AI tools have helped. However, as an industry, we need to be far more ambitious if we’re serious about dismantling these illegitimate operations. We have to make the work of criminals so laborious, slow, and unprofitable that it’s no longer worth the time, effort, or risk. The solution lies in training AI not to detect anomalies or replicate one human’s decisions, but to learn from thousands. By combining the best of human expertise with the speed, scale, and consistency of AI, we can move from detecting crime to truly preventing it. And, in doing so, protect far more than just profit margins.
Achieving this will require a shift in mindset. We must stop judging a tool’s worth purely by its financial benefits—would it reduce fraud losses enough to justify the cost?—and start valuing the number of human lives it would protect.
An opportunity to build public trust
Deepfakes, biased algorithms, privacy erosion, and the looming threat of mass job displacement—the public has plenty of reason to be sceptical about AI’s impact. Some 31% believe the technology will do more harm than good, more than double the number who believe its benefits outweigh the risks.
There’s no question that AI holds extraordinary potential, for good and bad. Ultimately, the direction it takes depends on us: the developers, researchers, and business leaders shaping these technologies. Trust will only erode further if we continue to build products that, while benefiting businesses, offer little real-world value. Earning the public’s confidence requires making people the priority—applying technology to solve human problems, especially those that threaten our security and well-being. Widespread acceptance and adoption hinge on it.