“Grey Nickel” threat targeting banking, crypto, and payment platforms

By Alex Rolfe

A growing wave of AI-enabled cybercrime is exposing critical weaknesses in the remote identity verification systems used by financial institutions worldwide, according to new intelligence from biometrics specialist iProov.

The firm’s Security Operations Centre (iSOC) has unveiled details of a sophisticated and ongoing threat campaign by a cybercriminal group dubbed “Grey Nickel,” which is exploiting these vulnerabilities to target banking, crypto exchanges, digital wallets and other financial platforms across Asia-Pacific, EMEA and North America.


The iSOC team reports that “Grey Nickel” has been active since mid-2023, systematically deploying techniques such as face-swapping, video injection and metadata manipulation to defeat single-frame liveness detection systems.

Such tools are commonly used to authenticate customers during onboarding and KYC processes.

These attacks are not opportunistic but reflect an increasingly professionalised cybercrime ecosystem, where threat actors are using artificial intelligence to engineer highly credible deepfakes capable of bypassing first-generation defences.

“Financial services are now facing an identity assurance gap,” says Dr Andrew Newell, Chief Scientific Officer at iProov.

“The liveness technologies adopted by many firms were designed to stop basic spoofing attacks, not synthetic media generated by AI.

Criminals understand the value of these platforms and are mounting increasingly complex, targeted operations. The threat is existential for institutions undergoing digital transformation.”

Scale and Sophistication

The scale and sophistication of these attacks are growing.

Alongside Grey Nickel, iProov’s researchers have uncovered a web of other threat actors operating across the fraud-as-a-service supply chain.

One group has developed mobile apps for Android and iOS that enable users to inject manipulated or pre-recorded footage into KYC verification flows.

Others offer “deepfake-as-a-service” tools, bundling stolen identity data with AI-generated avatars for large-scale synthetic fraud, often aimed at crypto exchanges and fintechs.

Meanwhile, tutorials and tools that use publicly available AI platforms to create high-quality deepfakes are proliferating on underground forums.

Some even mimic voice and lip-sync to evade biometric challenges.

These innovations are now filtering into the hands of mid- and low-level fraudsters, accelerating the pace and reach of attacks.

Big Losses

The financial toll is significant.

In 2024, a British multinational lost over $25 million to a deepfake scam in Hong Kong.

A BioCatch survey revealed that more than half of affected institutions suffered AI-related losses of between $5 million and $25 million last year.

And according to a UN report, mentions of deepfake-driven fraud across Southeast Asia increased by over 600% in early 2024 alone.

Despite the scale of the threat, regulators remain on the back foot.

A lack of standardised incident reporting across jurisdictions hampers global visibility into the problem, making it difficult for authorities to coordinate an effective response.

The EU is advancing solutions – such as mandating the use of the EU Digital Identity Wallet to meet AML requirements – but progress is patchy elsewhere.

iProov urges firms to adopt a risk-based approach to identity verification, aligning verification techniques with the threat model and risk tolerance of each use case.
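In practice, a risk-based approach means mapping each use case to a risk tier and requiring stronger verification as risk rises. The sketch below illustrates the idea only; the tier names, use cases, and technique labels are hypothetical assumptions for illustration, not iProov's actual framework or any vendor's API.

```python
# Illustrative sketch of a risk-based verification policy.
# All names (tiers, use cases, techniques) are hypothetical.

# Verification techniques, ordered from weakest to strongest.
TECHNIQUES = [
    "document_check",
    "single_frame_liveness",
    "active_liveness",
    "challenge_response_liveness",
]

# Assumed risk tier per use case.
RISK_TIERS = {
    "balance_inquiry": "low",
    "password_reset": "medium",
    "new_account_onboarding": "high",
    "large_crypto_withdrawal": "high",
}

# Minimum technique required for each tier.
POLICY = {
    "low": "document_check",
    "medium": "single_frame_liveness",
    "high": "challenge_response_liveness",
}

def required_checks(use_case: str) -> list[str]:
    """Return every technique up to and including the tier's minimum.

    Unknown use cases default to the strictest tier.
    """
    tier = RISK_TIERS.get(use_case, "high")
    minimum = POLICY[tier]
    return TECHNIQUES[: TECHNIQUES.index(minimum) + 1]

print(required_checks("password_reset"))
# → ['document_check', 'single_frame_liveness']
```

The design choice worth noting is the fail-closed default: an unrecognised use case falls into the strictest tier, which matches the article's point that single-frame liveness alone is no longer sufficient for high-risk flows such as onboarding.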

As AI-enabled fraud accelerates, financial institutions must rapidly evolve their defences or risk falling victim to a new era of cybercrime – one where the line between real and synthetic is dangerously blurred.
