Security researchers are warning that artificial intelligence is now helping hackers break into cloud environments at alarming speed. A new report from the Sysdig Threat Research Team shows how attackers used AI tools to gain full administrative access to an Amazon Web Services account in under ten minutes.

The incident was observed in November 2025 and shows how the cloud attack lifecycle has changed: what once took hours or even days can now be done in minutes. According to the researchers, large language models were likely used to automate reconnaissance, generate attack code, and make decisions in real time during the intrusion.

The attack targeted an AWS environment and began with a simple but common mistake: valid AWS credentials stored inside publicly accessible S3 buckets. The buckets held Retrieval-Augmented Generation (RAG) data used by AI models. With the credentials in hand, the attackers moved quickly.
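For defenders, checking for this kind of exposure is straightforward. The minimal sketch below, using Python and boto3 with a hypothetical bucket name, tests whether a bucket enforces a public access block and scans its objects for access-key-shaped strings. It illustrates the check, not Sysdig's tooling.

```python
import re
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-rag-data"  # hypothetical bucket name

# 1. Does the bucket enforce a public access block?
try:
    cfg = s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"]
    if not all(cfg.values()):
        print(f"WARNING: {BUCKET} does not fully block public access: {cfg}")
except ClientError as e:
    if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print(f"WARNING: {BUCKET} has no public access block configured")
    else:
        raise

# 2. Do any objects contain access-key-shaped strings?
KEY_RE = re.compile(r"(?:AKIA|ASIA)[A-Z0-9]{16}")
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        head = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read(65536)
        if KEY_RE.search(head.decode("utf-8", errors="ignore")):
            print(f"Possible AWS access key in s3://{BUCKET}/{obj['Key']}")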

An AWS spokesperson said the incident was limited to a single customer account and did not impact AWS services or infrastructure.

“AWS services and infrastructure are not affected by this issue, and they operated as designed throughout the incident described. The report describes an account compromised through misconfigured S3 buckets. We recommend all customers secure their cloud resources by following security, identity, and compliance best practices, including never opening up public access to S3 buckets or any storage service, least-privilege access, secure credential management, and enabling monitoring services like GuardDuty, to reduce risks of unauthorized activity. AWS customers who suspect or become aware of malicious activity within their AWS accounts should follow guidance for remediating potentially compromised AWS credentials or contact AWS Support for assistance.”

The compromised credentials belonged to an IAM user with read and write access to AWS Lambda and limited access to Amazon Bedrock. The user also had the AWS-managed ReadOnlyAccess policy attached, which let the attackers explore a wide range of AWS services. Within minutes they had scanned Secrets Manager, Systems Manager, EC2, ECS, RDS, and CloudWatch.
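To illustrate how much visibility a ReadOnlyAccess-style policy grants, the sketch below (Python/boto3, using whatever credentials are configured in the environment) issues a sample of the list and describe calls involved. The attackers' exact calls are not public, so this is an approximation of the technique, not their script.

```python
import boto3

# Credentials come from the environment, exactly as they would for a stolen key.
session = boto3.Session()

# A sample of the list/describe calls that broad read-only access permits.
secrets = session.client("secretsmanager").list_secrets()["SecretList"]
params = session.client("ssm").describe_parameters()["Parameters"]
funcs = session.client("lambda").list_functions()["Functions"]
dbs = session.client("rds").describe_db_instances()["DBInstances"]

print("secrets:", [s["Name"] for s in secrets])
print("ssm parameters:", [p["Name"] for p in params])
print("lambda functions:", [f["FunctionName"] for f in funcs])
print("rds instances:", [d["DBInstanceIdentifier"] for d in dbs])
```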

After mapping the environment, the attackers abused two Lambda permissions: UpdateFunctionCode and UpdateFunctionConfiguration. They injected malicious code into an existing Lambda function called EC2-init, and after a few attempts the injected code generated new access keys for an admin-level account named “frick,” effectively giving them full control.
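The two Lambda operations named in the report are standard API calls. The sketch below (Python/boto3) shows how they can swap out a function's code and raise its timeout; the handler here is a placeholder standing in for the attacker's payload, and only the target function name "EC2-init" comes from the report.

```python
import io
import zipfile
import boto3

lam = boto3.client("lambda")
TARGET = "EC2-init"  # function name taken from the Sysdig report

# Placeholder handler standing in for the attacker's payload.
source = b'def handler(event, context):\n    return "injected"\n'
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("lambda_function.py", source)

# Replace the function's code, wait for the update, then raise its timeout.
lam.update_function_code(FunctionName=TARGET, ZipFile=buf.getvalue())
lam.get_waiter("function_updated_v2").wait(FunctionName=TARGET)
lam.update_function_configuration(FunctionName=TARGET, Timeout=300)
```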

Sysdig researchers noted several signs that AI tools were used during the attack. The injected Lambda code had detailed exception handling, a modified timeout setting, and even comments written in Serbian. The code also contained mistakes typical of AI-generated output: attempts to assume roles in nonexistent AWS account IDs, references to a GitHub repository that does not exist, and session names like “claude-session,” hinting at AI-assisted workflows.

Once inside, the attackers focused on persistence. They spread their activity across 19 different AWS principals, including multiple IAM roles and users. A new backdoor user named “backdoor-admin” was created and granted full AdministratorAccess, ensuring continued control even if some credentials were revoked.
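Defenders can hunt for exactly this pattern. The sketch below (Python/boto3) flags recently created IAM users with AdministratorAccess attached; the seven-day window is an arbitrary choice for illustration.

```python
import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=7)  # arbitrary window

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        policies = iam.list_attached_user_policies(UserName=user["UserName"])
        names = {p["PolicyName"] for p in policies["AttachedPolicies"]}
        if "AdministratorAccess" in names and user["CreateDate"] > cutoff:
            print(f"Recently created admin user: {user['UserName']}")
```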

The attackers then turned their attention to Amazon Bedrock. After confirming that model invocation logging was disabled, they began LLMjacking operations, hijacking the account's paid access to large language models. They invoked several foundation models, including Claude Sonnet, Claude Opus, DeepSeek R1, Llama models, and Amazon’s own Nova and Titan models, consuming expensive AI resources without authorization.
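The logging check the attackers performed maps to a single Bedrock control-plane call. A minimal sketch, assuming the boto3 bedrock client: if no logging configuration comes back, model invocations leave no invocation-level audit trail.

```python
import boto3

bedrock = boto3.client("bedrock")
resp = bedrock.get_model_invocation_logging_configuration()
if not resp.get("loggingConfig"):
    print("WARNING: Bedrock model invocation logging is disabled")
else:
    print("Logging config:", resp["loggingConfig"])
```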

Researchers also discovered a Terraform module designed to deploy another backdoor. This setup created a Lambda function that generated Bedrock credentials and exposed them through a public Lambda function URL with no authentication, making abuse even easier.
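A function URL with its auth type set to NONE is exactly what such a backdoor exposes, and it is easy to enumerate. The sketch below (Python/boto3) lists every Lambda function in a region and flags unauthenticated public URLs.

```python
import boto3
from botocore.exceptions import ClientError

lam = boto3.client("lambda")
for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        try:
            url = lam.get_function_url_config(FunctionName=fn["FunctionName"])
        except ClientError as e:
            if e.response["Error"]["Code"] == "ResourceNotFoundException":
                continue  # function has no URL configured
            raise
        if url["AuthType"] == "NONE":
            print(f"Unauthenticated URL on {fn['FunctionName']}: {url['FunctionUrl']}")
```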

The final stage of the attack involved large-scale resource abuse. The attackers queried more than 1,300 Amazon Machine Images (AMIs) related to deep learning and then launched a powerful p4d.24xlarge EC2 instance. This instance type costs over $32 per hour and can exceed $23,000 per month. They configured it with CUDA, PyTorch, and a publicly accessible JupyterLab server, giving them a backdoor that did not rely on AWS credentials.
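The monthly figure follows directly from the hourly rate: at roughly $32 per hour, a continuously running instance costs about 32 x 24 x 30, or over $23,000, per month. Spotting such launches is a cheap control; the sketch below (Python/boto3) flags running instances of a few expensive GPU types, with the type list an illustrative assumption.

```python
import boto3

# Instance types worth alerting on; an illustrative, not exhaustive, list.
GPU_TYPES = {"p4d.24xlarge", "p5.48xlarge", "p3.16xlarge"}

ec2 = boto3.client("ec2")
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for page in pages:
    for res in page["Reservations"]:
        for inst in res["Instances"]:
            if inst["InstanceType"] in GPU_TYPES:
                print(f"GPU instance running: {inst['InstanceId']} ({inst['InstanceType']})")
```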

To avoid detection, the attackers used IP rotation tools, changing their source IP address with almost every request. This made it harder for security systems to link activities together.

Sysdig says this incident shows how dangerous AI-assisted attacks can be when basic cloud security hygiene is missing. The researchers recommend strict least-privilege policies for IAM users, locking down Lambda permissions, keeping S3 buckets private, enabling logging for Amazon Bedrock, and closely monitoring IAM activity.

AWS users can reduce the risk of such attacks by following basic cloud security practices. S3 buckets should never be left public unless absolutely required. IAM users should follow least-privilege access and avoid attaching broad policies like ReadOnlyAccess unless needed. Access keys should be rotated regularly and never stored in public locations. Logging and monitoring tools such as CloudTrail, GuardDuty, and Bedrock model invocation logs should always be enabled. Regular security audits can help detect misconfigurations early before attackers can exploit them.
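Two of those recommendations can be applied with a few API calls. The sketch below (Python/boto3) enables an account-wide S3 public access block and turns on Bedrock model invocation logging; the log bucket name is hypothetical, and its bucket policy must separately allow Bedrock to write to it.

```python
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]

# Block public S3 access account-wide.
boto3.client("s3control").put_public_access_block(
    AccountId=account_id,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Turn on Bedrock model invocation logging (bucket name is hypothetical).
boto3.client("bedrock").put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {"bucketName": "example-bedrock-invocation-logs"},
        "textDataDeliveryEnabled": True,
    }
)
```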

Update: This article has been updated to include a statement from AWS clarifying that the incident involved a single compromised customer account and did not impact AWS services or infrastructure.
