The boundary between a helpful AI assistant and a security liability has grown perilously thin. In a sophisticated breach that highlights the vulnerabilities of autonomous AI agents, a user suspected to be based in Indonesia allegedly manipulated xAI’s Grok chatbot into draining approximately $200,000 (Rp 3.4 billion) in cryptocurrency assets.
The heist was not executed through traditional hacking or brute-force code. Instead, the perpetrator utilized a psychological and technical loophole known as “prompt injection,” disguising malicious commands within Morse code to bypass the AI’s safety filters. The incident, which targeted the interplay between Grok and a trading bot named Bankrbot, has sent ripples through the blockchain community and raised urgent questions about the safety of giving AI direct access to financial wallets.
According to reports from Dexerto and Kompas.com, the attacker managed to siphon roughly 3 billion DRB tokens via the Base blockchain network. The breach was so seamless that the AI systems processed the theft as a legitimate set of instructions, executing the transfer automatically before any human intervention could stop the flow of assets.
The Anatomy of a Digital Heist
The attack was carried out in a calculated, multi-stage sequence designed to escalate the AI’s permissions. It began not with a command, but with a digital gift. The attacker sent a “Bankr Club Membership” NFT to Grok’s digital wallet. This specific NFT acted as a key, granting Grok elevated permissions within the Bankrbot system—an automated trading AI with the authority to execute transactions and swap assets.

Once Grok possessed the necessary credentials to act on behalf of the wallet, the attacker employed a deceptive translation request. The user asked Grok to translate a series of Morse code characters. To a standard security filter, a request to “translate Morse code” appears benign. However, the hidden message within the dots and dashes contained a direct order for the AI to transfer billions of DRB tokens to a specific external wallet address.
done. Sent 3B DRB to .- recipient: 0xe8e47…a686b- tx: 0x6fc7eb7da9379383efda4253e4f599bbc3a99afed0468eabfe18484ec525739a- chain: base @bankrbot
Because the AI interpreted the translated text as a valid command from an authorized user, Bankrbot executed the transaction instantly. Almost immediately after the tokens landed in the attacker’s wallet, they were liquidated on the open market, triggering a sharp, short-term price fluctuation for the DRB token.
Breakdown of the Attack Vector
| Stage | Action | Result |
|---|---|---|
| Access | NFT Transfer | Grok gains administrative permissions in Bankrbot. |
| Obfuscation | Morse Code Prompt | Malicious instructions bypass standard text filters. |
| Execution | Automatic Transfer | 3 billion DRB tokens moved to attacker’s wallet via Base chain. |
| Liquidation | Market Sale | Tokens converted to liquid assets, causing price volatility. |
The Danger of ‘Prompt Injection’
Cybersecurity experts have long warned about “prompt injection,” a technique where a user “tricks” an LLM (Large Language Model) into ignoring its original instructions and following new, often malicious, ones. In this case, the use of Morse code served as a layer of obfuscation, effectively hiding the “attack” from the AI’s internal guardrails.
The incident underscores a critical flaw in current AI integration: the lack of a “human-in-the-loop” for financial transactions. When an AI is designed to be an “agent”—meaning it can take actions in the real world rather than just generating text—the stakes shift from misinformation to financial loss. If an AI can translate a message and immediately execute a payment based on that translation without a secondary confirmation, it creates a wide-open door for exploitation.
Tracing the Perpetrator
While the identity of the attacker remains unconfirmed by official law enforcement, the digital trail points toward Indonesia. Users on platform X identified the account @Ilhamrfliansyh as the primary suspect, citing the language used in the account’s interactions and its deep ties to local Indonesian crypto communities. The account has since been deleted, a common move for actors attempting to scrub their digital footprint following a high-profile exploit.
For those of us who have tracked diplomacy and conflict across the Global South, the rise of highly skilled, decentralized cyber-actors in Southeast Asia is a known trend. Indonesia’s vibrant and rapidly growing tech scene has produced not only innovators but also sophisticated actors capable of probing the edges of the world’s most advanced AI systems.
The Economic Times noted that this incident serves as a stark warning for developers. As AI agents are increasingly integrated into servers, computers, and digital wallets, the necessity for “hard” boundaries—where an AI can suggest a transaction but cannot authorize one—becomes paramount.
The industry now awaits a formal response from xAI and the developers of Bankrbot regarding a security patch to prevent similar “hidden instruction” attacks. The next critical checkpoint will be the potential filing of a formal report with blockchain forensics firms to track the movement of the liquidated funds.
Do you believe AI should have the power to execute financial transactions autonomously? Share your thoughts in the comments below or share this story to start the conversation.
