MAS puts AI at the centre of scam prevention

By Gemma Rolfe

Singapore’s financial regulator is moving from theory to practice in the use of artificial intelligence against financial crime.



The Monetary Authority of Singapore (MAS) is working with the Government Technology Agency of Singapore, the Singapore Police Force and five major banks on a proof-of-value project designed to test whether AI and machine learning can identify higher-risk accounts and transactions before scams crystallise into customer losses.

The initiative is significant because it uses real banking data rather than synthetic or heavily simplified test sets.

By drawing on historical transaction data and actual bank account numbers from participating institutions, MAS is seeking to determine whether collaborative model development can outperform the narrower fraud systems operated by individual banks.

A multi-bank model for financial-crime intelligence

Scam detection is notoriously difficult because criminal behaviour often cuts across institutions. A suspicious account at one bank may only become fully visible when linked to payments, transfers or customer activity elsewhere in the system.

By pooling data from five banks, MAS is attempting to build models capable of spotting patterns that a single institution might miss.
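The kind of cross-institution signal described above can be illustrated with a minimal sketch. The schema, field names and threshold here are illustrative assumptions, not MAS's actual design: it simply flags a receiving account that is funded from several distinct banks, a pattern no single institution could see in its own data alone.

```python
from collections import defaultdict

# Hypothetical pooled records: (sending_bank, sender_id, receiving_bank,
# receiver_id, amount). Field names are illustrative, not MAS's schema.
transactions = [
    ("BankA", "a1", "BankC", "m1", 900.0),
    ("BankB", "b7", "BankC", "m1", 850.0),
    ("BankD", "d3", "BankC", "m1", 980.0),
    ("BankA", "a2", "BankE", "e9", 120.0),
]

def flag_cross_bank_fanin(records, min_source_banks=3):
    """Flag receiving accounts funded from many distinct banks --
    a mule-like pattern invisible to any single institution."""
    sources = defaultdict(set)
    for send_bank, _sender, recv_bank, receiver, _amt in records:
        sources[(recv_bank, receiver)].add(send_bank)
    return [acct for acct, banks in sources.items()
            if len(banks) >= min_source_banks]

print(flag_cross_bank_fanin(transactions))  # [('BankC', 'm1')]
```

In this toy data, account `m1` at BankC receives funds from three different banks and is flagged; each contributing bank, seen alone, holds only one unremarkable transfer.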

This approach could mark an important shift in how financial-crime controls are developed. Rather than relying solely on bank-by-bank detection, Singapore is exploring a more collective intelligence model, where industry-wide signals strengthen the ability to intervene earlier.

For the payments sector, the implications are substantial. Faster payment systems and digital banking channels have increased the speed at which scams can unfold. Pre-emptive detection, therefore, is becoming less of a compliance enhancement and more of a core payments resilience requirement.

Privacy, security and trust remain central

The use of live account data inevitably raises questions about privacy and governance. MAS has sought to address this through a secure data-sharing environment, supported by policies and protocols designed to protect customer information.

Account numbers will be hashed, meaning only the contributing bank can identify its own customers.

That design choice is crucial. Financial institutions need richer shared data to fight scams, but public trust depends on strict limits around identification, access and use.
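MAS has not published the specific hashing scheme, but a keyed hash such as HMAC is one common way to achieve the property described: the shared environment sees only digests, while the bank that holds the secret key can re-derive a digest and identify its own customer. The key and account number below are assumptions for illustration only.

```python
import hashlib
import hmac

def hash_account(account_number: str, bank_secret: bytes) -> str:
    """Keyed hash of an account number. Without the contributing
    bank's secret, the digest cannot be mapped back to the account."""
    return hmac.new(bank_secret, account_number.encode(),
                    hashlib.sha256).hexdigest()

# Each bank holds its own secret; only digests enter the shared environment.
secret = b"bank-a-private-key"  # illustrative only
digest = hash_account("123-456-789", secret)

# The same bank can re-derive the digest to identify its own customer...
assert hash_account("123-456-789", secret) == digest
# ...while another bank, with a different secret, produces a different digest.
assert hash_account("123-456-789", b"bank-b-key") != digest
```

Because the keyed hash is deterministic within one bank, the pooled models can still link repeat activity on the same hashed account, without any other participant learning who the account holder is.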

The project therefore tests not only AI performance, but also the operational safeguards needed to make collaborative financial-crime analytics acceptable in a regulated banking system.

AI governance becomes operational

The MAS initiative also reflects a broader regulatory direction: AI in banking will increasingly be judged by evidence, not aspiration. Models must be accurate, explainable, secure and capable of being monitored under realistic conditions.

If the proof-of-value exercise is successful, MAS may expand the project to include broader datasets, more sophisticated models and additional financial-crime use cases. That would place Singapore at the forefront of AI-enabled scam prevention.

The deeper lesson is clear. Banks are no longer merely experimenting with AI at the edges of financial crime.

They are being pushed towards shared, testable and governed infrastructure. In payments, that may become the new foundation of trust.
