AI assistants pose new agentic AI threat to mobile apps

By Alex Rolfe | Cyber Security

The rapid proliferation of AI assistants such as Apple Siri, Google Gemini, Microsoft Copilot, and OpenAI’s ChatGPT has transformed mobile engagement – ushering in efficiencies for both consumers and enterprises.

Yet according to cybersecurity firm Appdome, these same technologies are now emerging as a serious threat vector for mobile apps, capable of performing covert surveillance, hijacking sessions, and exfiltrating sensitive data.


Agentic AI Malware

To combat this growing risk, Appdome has launched a new suite of security plugins designed to detect and mitigate what it terms “Agentic AI Malware.”

These are AI-powered applications – both legitimate and malicious – that can monitor and manipulate in-app user activity in real time.

Available for Android and iOS, the new Detect Agentic AI Malware tools allow enterprises to identify when AI assistants are interacting with their apps, and to take immediate action to prevent data leakage, credential theft, and unauthorised access.

The distinction between good and malicious AI, Appdome argues, is largely meaningless from a mobile device's perspective.

As Avi Yehuda, the firm’s CTO, puts it: “The mobile environment has no concept of ‘good’ or ‘bad’ actors, only allowed and disallowed access or permissions.”

Agentic AI Assistants

Agentic AI Assistants are capable of reading screen content, overlaying interfaces, interpreting user behaviour, and accessing contextual data – functions that can just as easily enable fraud as they can assist productivity.

On Android, more permissive APIs amplify the risk, while on iOS, threats include mirroring-based data leaks through mechanisms such as AirPlay.
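One concrete example of those permissive Android APIs is the accessibility framework, which lets a service read screen content and inject input. An app can at least surface which accessibility services the user has enabled and compare them against a trusted set. The sketch below is an illustration, not Appdome's implementation: Android stores enabled services as a colon-separated string of `package/serviceClass` entries (retrievable via `Settings.Secure.getString(resolver, Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES)`, which needs an Android `Context`); the parsing and allowlist check are pure Kotlin.

```kotlin
// Sketch: flag enabled accessibility services whose package is not on a
// trusted list. On a device, `enabled` would come from Settings.Secure's
// ENABLED_ACCESSIBILITY_SERVICES value; here it is passed in directly.
fun untrustedServices(enabled: String?, trustedPackages: Set<String>): List<String> {
    if (enabled.isNullOrBlank()) return emptyList()
    return enabled.split(':')
        .filter { it.isNotBlank() }
        .filter { entry ->
            // Each entry looks like "com.example.agent/.AgentService";
            // the package name is everything before the '/'.
            val pkg = entry.substringBefore('/')
            pkg !in trustedPackages
        }
}

fun main() {
    val enabled = "com.google.android.marvin.talkback/.TalkBackService:" +
        "com.example.shadyagent/.ScreenReaderService"
    val trusted = setOf("com.google.android.marvin.talkback")
    println(untrustedServices(enabled, trusted))
    // prints [com.example.shadyagent/.ScreenReaderService]
}
```

A real defence would go further (signature checks, runtime behaviour), but even this simple comparison shows why the "allowed vs. disallowed" framing matters more than "good vs. bad".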

These concerns are particularly acute in sectors like banking, digital wallets, and healthcare, where regulatory obligations around data privacy are stringent.

As Tom Tovar, Appdome’s CEO, notes: “Whatever a good AI Assistant can do, a bad one can also do. That includes extracting credentials, hijacking sessions, and intercepting transactions.”

Appdome’s solution employs behavioural biometrics to monitor how AI agents interact with an app, whether they’re officially supported or not.

It also provides enterprises with granular control to define a list of Trusted AI Assistants, blocking unapproved or wrapped versions that might impersonate legitimate tools to deceive users.
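The trusted-list idea can be illustrated with a minimal sketch (the names and matching policy here are assumptions, not Appdome's implementation). The key point is that a wrapped clone of a legitimate assistant can reuse its package name but cannot reproduce its signing certificate, so an allowlist entry should pin both.

```kotlin
// Sketch of a trusted-assistant allowlist that pins both the package name
// and the SHA-256 fingerprint of the app's signing certificate. A "wrapped"
// clone can spoof the package name but not the original signature. On
// Android the real fingerprint would come from PackageManager's SigningInfo;
// here it is passed in directly for illustration.
data class AssistantIdentity(val packageName: String, val certSha256: String)

class TrustedAssistants(entries: Collection<AssistantIdentity>) {
    private val allowlist = entries.associate { it.packageName to it.certSha256 }

    // Trusted only if both the package name and the certificate match.
    fun isTrusted(candidate: AssistantIdentity): Boolean =
        allowlist[candidate.packageName] == candidate.certSha256
}

fun main() {
    val trusted = TrustedAssistants(
        listOf(AssistantIdentity("com.example.assistant", "AB:CD:12"))
    )
    // Genuine build: package and certificate both match.
    println(trusted.isTrusted(AssistantIdentity("com.example.assistant", "AB:CD:12"))) // true
    // Wrapped clone: same package name, different signing certificate.
    println(trusted.isTrusted(AssistantIdentity("com.example.assistant", "FF:00:99"))) // false
}
```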

Chris Roeckl, Appdome’s Chief Product Officer, warns that a wave of AI-driven threats is already underway.

“Most concerning are wrapped versions of legitimate apps, which are increasingly used to trick users into signing in, transacting, and engaging with what looks like your brand – until a malicious agent takes over.”

For payment providers and mobile-first businesses, the message is clear: AI assistants can no longer be viewed solely as productivity tools.

Without dynamic defences, they could become the Trojan horse through which cybercriminals compromise user trust, regulatory compliance, and core infrastructure.
