AWS Hack: AI-Powered Cloud Breach in Minutes | The Register

by Priyanka Patel

Summary of the AI-Assisted Cloud Intrusion

This article details a recent cloud intrusion in which attackers leveraged AI, specifically large language models (LLMs), to rapidly gain and escalate access within an energy firm’s AWS environment. Here’s a breakdown of the key points:

How the Attack Happened:

  1. Initial Access: Attackers stole valid AWS test credentials from publicly accessible Amazon S3 buckets. These credentials granted important permissions to AWS Lambda and limited access to AWS Bedrock. The S3 bucket also contained retrieval-augmented generation (RAG) data for AI models.
  2. Privilege Escalation: After failing to use common usernames, the attackers exploited the compromised user’s permissions (UpdateFunctionCode and UpdateFunctionConfiguration) to inject malicious code into Lambda functions.
  3. AI-Generated Code: The injected code exhibited characteristics strongly suggesting LLM generation:

* Comments written in Serbian (potentially indicating the attacker’s origin).
* Hallucinated AWS account IDs (non-existent or belonging to external accounts).
* Non-existent GitHub repository references.
* Extensive exception handling and adjustments to Lambda execution timeouts.

  4. Account Takeover: The attackers used the compromised access to enumerate IAM users and their keys, create new admin accounts (“frick”), and inventory S3 bucket contents. They attempted to assume OrganizationAccountAccessRole across multiple AWS environments.
  5. Data Exfiltration: With admin access, the attackers stole sensitive data, including:

* Secrets from Secrets Manager
* SSM parameters from EC2 Systems Manager
* CloudWatch logs
* Lambda function source code
* Internal data from S3 buckets
* CloudTrail events

  6. LLMjacking: The attackers abused the compromised account’s access to Amazon Bedrock, invoking multiple LLMs (Claude, DeepSeek, Llama, etc.), a tactic termed “LLMjacking.”
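The Lambda tampering in steps 2 and 3 leaves a recognizable trail in CloudTrail: the same identity calling both UpdateFunctionCode and UpdateFunctionConfiguration in quick succession. A minimal detection sketch over already-collected CloudTrail records (the event names, record shape, and 30-minute window are simplifications; real CloudTrail event names may carry API-version suffixes such as `UpdateFunctionCode20150331v2`):

```python
from datetime import timedelta

# Simplified API names; real CloudTrail records may append a version
# suffix (e.g. "UpdateFunctionCode20150331v2").
TAMPER_EVENTS = {"UpdateFunctionCode", "UpdateFunctionConfiguration"}

def find_lambda_tampering(events, window_minutes=30):
    """Flag identities that both updated a Lambda function's code and
    changed its configuration within a short window -- the pattern
    described in this intrusion.

    `events` is a list of dicts shaped like CloudTrail records:
    {"eventName": str, "userIdentity": {"arn": str}, "eventTime": datetime}.
    """
    by_identity = {}
    for ev in events:
        if ev["eventName"] in TAMPER_EVENTS:
            arn = ev["userIdentity"]["arn"]
            by_identity.setdefault(arn, []).append(ev)

    flagged = []
    for arn, evs in by_identity.items():
        names = {e["eventName"] for e in evs}
        times = sorted(e["eventTime"] for e in evs)
        # Both event types seen, and close together in time.
        if len(names) == 2 and times[-1] - times[0] <= timedelta(minutes=window_minutes):
            flagged.append(arn)
    return flagged
```

In practice the input would be fed from a CloudTrail export or Lookup API query; the function itself is pure, so it can be tested offline.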

Key Indicators of AI Involvement:

* Rapid Timeframe: The attack progressed from credential theft to Lambda execution very quickly, suggesting automated code generation.
* Code Characteristics: The code’s features (Serbian comments, hallucinated IDs, comprehensive error handling) are consistent with LLM outputs.
* Hallucinations: The inclusion of invalid account IDs aligns with known LLM “hallucination” tendencies.
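One of these indicators, the hallucinated account IDs, can be screened for mechanically: AWS account IDs are 12-digit numbers, so any ID embedded in deployed code that falls outside the organization's known accounts is worth flagging. A minimal sketch (the known-accounts set is a hypothetical input; a real check would compare against an AWS Organizations account listing):

```python
import re

# AWS account IDs are 12-digit numbers, typically embedded in ARNs.
ACCOUNT_ID_RE = re.compile(r"\b\d{12}\b")

def unknown_account_ids(source_code, known_accounts):
    """Return account IDs referenced in `source_code` that are not in
    the org's known set -- the kind of hallucinated ID described above."""
    found = set(ACCOUNT_ID_RE.findall(source_code))
    return sorted(found - set(known_accounts))
```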

Recommendations to Prevent Similar Attacks:

* Secure Credentials: Avoid storing access keys in public buckets.
* Temporary Credentials: Use temporary credentials via IAM roles instead of long-lived access keys.
* Credential Rotation: Rotate long-term credentials periodically for IAM users.
* Monitor Bedrock Usage: Flag invocations of Bedrock models that are not typically used within the account.
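Two of these recommendations can be checked mechanically. A sketch, assuming the inputs have already been pulled from IAM (`list_access_keys` metadata) and from Bedrock InvokeModel events in CloudTrail; the 90-day threshold and the example model IDs are illustrative assumptions:

```python
from datetime import datetime, timezone

def keys_needing_rotation(access_keys, max_age_days=90):
    """Return access key IDs older than the rotation threshold.
    `access_keys` mirrors IAM list_access_keys metadata:
    [{"AccessKeyId": str, "CreateDate": datetime}, ...]."""
    now = datetime.now(timezone.utc)
    return [k["AccessKeyId"] for k in access_keys
            if (now - k["CreateDate"]).days > max_age_days]

def unusual_bedrock_invocations(invocations, baseline_models):
    """Flag (identity, model_id) pairs whose model is outside the
    account's usual baseline -- the LLMjacking signal noted above.
    Pairs would come from Bedrock InvokeModel events in CloudTrail."""
    return [(who, model) for who, model in invocations
            if model not in baseline_models]
```

Both functions are pure, so the thresholds and baselines can be tuned and tested offline before wiring them to live AWS data.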

In essence, this incident highlights a new and evolving threat landscape in which attackers leverage AI to automate and accelerate cloud intrusions.
