IMDA Warns of Security Risks in OpenClaw AI Tool

Singapore’s regulators are issuing a stark warning to businesses and individuals about the risks of deploying autonomous AI agents without strict oversight. The Infocomm Media Development Authority (IMDA) has advised against using OpenClaw in mission-critical settings, specifically cautioning against granting the tool unrestricted access to sensitive files and applications.

The advisory, released on May 14, underscores a growing tension in the enterprise world: the desire for hyper-efficiency versus the necessity of cybersecurity. While OpenClaw offers the ability to automate complex, multi-step workflows, the IMDA warns that without proper guardrails, the agent could “run amok,” potentially leaking sensitive data or inadvertently disrupting critical financial transactions.

The tool, created and released in November 2025 by Austrian developer Peter Steinberger, has gained popularity by acting as a bridge. It allows users to connect powerful large language models—including OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude—directly to their email and instant messaging systems. This enables the AI to not just suggest text, but to actually execute tasks like drafting reports, coordinating schedules, and debugging code autonomously.

However, the IMDA notes that this autonomy comes with a significant security deficit. Because OpenClaw often lacks robust built-in security controls, the responsibility for safety falls entirely on the user. Organizations are being urged to review any implementations of the tool within core production or financial systems to prevent catastrophic operational failures.

The Vulnerability of Unrestricted Access

The primary technical concern lies in how OpenClaw interacts with a host system. By default, the tool inherits the full privileges of the user account that installs it. This means if a user has access to every folder on a corporate server, the AI agent does as well, creating a massive attack surface for potential breaches.
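One common mitigation for this inherited-privilege problem is to confine the agent's file access to one explicitly approved directory rather than everything the installing user can reach. A minimal sketch in Python, under the assumption that file operations are routed through a guard like this (the helper name and the `/srv/agent-workspace` layout are illustrative, not part of OpenClaw):

```python
import os

def is_within_allowed_root(path: str, allowed_root: str) -> bool:
    """Return True only if `path` resolves inside `allowed_root`.

    Resolving with realpath defeats `..` traversal and symlink escapes,
    so a sandboxed agent cannot wander across the filesystem the way a
    fully privileged user account can.
    """
    resolved = os.path.realpath(path)
    root = os.path.realpath(allowed_root)
    return resolved == root or resolved.startswith(root + os.sep)

# The agent may only touch files under its dedicated workspace.
print(is_within_allowed_root("/srv/agent-workspace/report.md", "/srv/agent-workspace"))
# A traversal attempt resolves outside the root and is rejected.
print(is_within_allowed_root("/srv/agent-workspace/../../etc/passwd", "/srv/agent-workspace"))
```

Pairing a check like this with a dedicated low-privilege service account (rather than the installing user's account) shrinks the attack surface the advisory describes.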

The risks extend beyond the local machine to collaborative environments. The IMDA highlighted a specific vulnerability when OpenClaw is integrated with Slack; in some configurations, the agent may accept and execute instructions from any participant in a channel without requiring additional authentication. This opens the door for unauthorized users to trigger harmful actions via the AI proxy.
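A straightforward guardrail, whatever the chat integration, is to check the sender of every inbound message against an explicit allowlist before the agent acts on it. A hypothetical sketch (the event shape loosely mirrors Slack's Events API message payloads; the function name and user IDs are illustrative):

```python
# Illustrative Slack user IDs of the only people allowed to command the agent.
AUTHORIZED_SENDERS = {"U0123ADMIN", "U0456OPSLEAD"}

def should_execute(event: dict) -> bool:
    """Act only on channel messages from pre-approved users.

    Without a check like this, any channel participant can drive the
    agent, which is exactly the exposure the IMDA advisory describes.
    """
    return event.get("type") == "message" and event.get("user") in AUTHORIZED_SENDERS

print(should_execute({"type": "message", "user": "U0123ADMIN", "text": "summarise inbox"}))
print(should_execute({"type": "message", "user": "U9999GUEST", "text": "delete all files"}))
```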


Data privacy is another critical flashpoint. Because OpenClaw relies on external models like Claude to reason and plan its actions, it must transmit context to those providers. This means that private emails, internal files, and sensitive messages may be sent to third-party AI providers to facilitate the agent’s logic, potentially exposing proprietary corporate data to external entities.

The scale of these risks is reflected in the data. As of April, the intelligence platform OpenCVE had reported more than 400 vulnerabilities and exposures related to OpenClaw, with approximately 25% of those classified as high severity. According to the IMDA, these gaps could lead to severe outcomes, including large-scale data theft.

The Peril of the ‘Skill’ Marketplace

Much of OpenClaw’s utility comes from “skills”—small plugins downloaded from online marketplaces like ClawHub that expand the agent’s capabilities. While these skills drive productivity, the IMDA warns that many are currently flagged as malicious.

The agency pointed to reports of the Atomic macOS Stealer, a piece of malware designed to siphon sensitive data from Apple users, being distributed through the guise of helpful OpenClaw skills. These malicious tools often masquerade as YouTube downloaders, Google Workspace utilities, or cryptocurrency wallet trackers.

To mitigate this, the IMDA recommends a “trust-but-verify” approach to AI extensions. Users are urged to avoid any skills that lack transparent source code, verifiable provenance, or recent maintenance activity. The agency suggests defaulting only to skills maintained by known publishers where the code is publicly inspectable.
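That checklist can be applied mechanically before a skill is ever installed. A sketch, assuming skill metadata is available as a simple dictionary (the field names are illustrative, not ClawHub's actual schema):

```python
from datetime import datetime, timedelta, timezone

def passes_vetting(skill: dict, max_staleness_days: int = 180) -> bool:
    """Reject skills lacking inspectable source, known provenance,
    or recent maintenance, per the IMDA's criteria."""
    if not skill.get("source_url"):          # no transparent source code
        return False
    if not skill.get("verified_publisher"):  # no verifiable provenance
        return False
    last_commit = skill.get("last_commit")   # ISO 8601 timestamp
    if not last_commit:                      # no maintenance history at all
        return False
    age = datetime.now(timezone.utc) - datetime.fromisoformat(last_commit)
    return age < timedelta(days=max_staleness_days)
```

Running every candidate skill through a deny-by-default filter like this turns "trust-but-verify" from a slogan into an enforceable install policy.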

A Blueprint for Safe AI Deployment

The IMDA is not suggesting a total ban on autonomous agents, but rather a shift toward a more disciplined architecture. The agency argues that granting broad capabilities should be an “intentional decision” rather than a result of overlooked default settings.


To secure these environments, the IMDA recommends moving away from a single, “all-powerful” agent. Instead, organizations should deploy multiple agents with narrow, clearly defined roles—for example, maintaining one agent exclusively for calendar scheduling and a separate, isolated agent for coding projects.

| Risk Factor | Default Configuration | IMDA Recommended Control |
| --- | --- | --- |
| System access | Full user account privileges | Unique identity and managed account |
| Task execution | Autonomous execution | Human-in-the-loop approval workflows |
| Agent scope | Single “all-powerful” agent | Multiple narrow-role agents |
| Skill sourcing | Public marketplaces (ClawHub) | Verified, open-source publishers |
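The narrow-role separation the IMDA recommends can be enforced in code by giving each agent an explicit capability set and denying everything else by default. A minimal sketch (the agent names and tool labels are illustrative):

```python
# Each agent is declared with only the capabilities its role requires.
AGENT_ROLES = {
    "calendar-agent": {"read_calendar", "create_event"},
    "coding-agent": {"read_repo", "run_tests"},
}

def authorize(agent: str, action: str) -> bool:
    """Deny by default: an agent may only perform actions in its declared role."""
    return action in AGENT_ROLES.get(agent, set())

print(authorize("calendar-agent", "create_event"))  # within its narrow role
print(authorize("calendar-agent", "run_tests"))     # outside its role: denied
```

The design point is that a compromised or misbehaving calendar agent simply has no path to the code base or financial systems, which is what makes many narrow agents safer than one broad one.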

Beyond structural changes, the agency emphasizes the need for “managed identity.” By creating a unique identity for the agent rather than letting it reuse personal credentials, companies can ensure that every action taken by the AI is traceable and logged to a persistent directory. This creates an audit trail essential for forensic analysis after a security incident.
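A sketch of what such an audit trail might look like: each action is recorded under the agent's own identity in an append-only JSON-lines log (the file layout and field names are assumptions for illustration, not an IMDA specification):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_agent_action(log_dir: Path, agent_id: str, action: str, target: str) -> None:
    """Append one audit record per action, keyed to the agent's unique identity."""
    log_dir.mkdir(parents=True, exist_ok=True)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,  # the agent's own identity, not a person's credentials
        "action": action,
        "target": target,
    }
    # One append-only .jsonl file per agent makes per-agent forensics simple.
    with open(log_dir / f"{agent_id}.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Because the log is keyed to a dedicated service identity (e.g. a hypothetical `svc-openclaw-01` account) rather than an employee's login, investigators can separate the agent's actions from the human's after an incident.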

Global Trends in AI Governance

Singapore’s caution mirrors a broader global trend of tightening controls around agentic AI. In March, China banned state-run enterprises and government agencies from running OpenClaw on office computers. Similarly, reports indicate that tech giants like Meta have restricted employees from using the tool on work laptops due to similar security fears.


The IMDA’s guidance is rooted in its Model AI Governance Framework for Agentic AI, released in January. The current recommendations were developed in collaboration with the Cyber Security Agency of Singapore, the Government Technology Agency of Singapore, and industry leaders including Microsoft, Tencent, and Grab.

Despite the warnings, interest in the tool remains high. More than 20 community-led events have already taken place in Singapore, attracting developers and entrepreneurs eager to leverage autonomous workflows. The IMDA views this as a sign of the technology’s potential, but insists that the pace of adoption must not outstrip the pace of security.

As the landscape of autonomous AI continues to shift, the IMDA views this advisory as a starting point. The agency expects to provide ongoing updates as new vulnerabilities are identified and as the Model AI Governance Framework evolves to meet the needs of enterprises with high-security requirements.

This report is for informational purposes only and does not constitute technical or legal security advice.

We want to hear from you. Is your organization implementing autonomous AI agents, and what guardrails have you put in place? Share your thoughts in the comments or via our social channels.
