A recent security breach on AWS showcases the alarming speed of AI-assisted cyberattacks. The Sysdig Threat Research Team observed an attacker gain administrative access to an AWS cloud environment in under ten minutes by leveraging AI for automation, and identified multiple indicators that large language models (LLMs) were used at several stages of the attack, from reconnaissance to writing malicious code and LLMjacking.

The attacker stole valid test credentials from public Amazon S3 buckets, gaining access to an IAM user with extensive permissions. After initial attempts with common admin usernames failed, the attacker escalated privileges by injecting code into a Lambda function, exploiting the compromised user's permissions. The injected code, written in Serbian, enumerated IAM users and their access keys, and its comprehensive exception handling further suggested LLM involvement. The attacker then attempted to assume the OrganizationAccountAccessRole, including with account IDs that did not belong to the victim organization, behavior consistent with AI hallucinations.

The breach ultimately exposed 19 AWS identities and sensitive data, including secrets, SSM parameters, CloudWatch logs, and CloudTrail events. The attacker also engaged in LLMjacking, abusing Amazon Bedrock access to invoke various models.

The security team emphasizes the need for robust identity security and access management practices, such as least-privilege permissions, restricted IAM policies, and secure S3 bucket configurations, to prevent similar AI-assisted cyberattacks.
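The least-privilege recommendation can be made concrete with a policy fragment. This is a generic sketch, not the victim's configuration: the bucket name and prefix are hypothetical, and the point is that a test user needs only read access to its test data, not the broad permissions the compromised IAM user held.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyTestData",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-test-bucket/test-data/*"
    }
  ]
}
```

A user scoped like this could not enumerate IAM users, inject Lambda code, or invoke Bedrock models, cutting off every escalation path described above.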
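To illustrate the cross-account role-assumption step, here is a minimal sketch (not the attacker's actual script) of how a tool might construct the ARNs passed to `sts:AssumeRole` for a list of account IDs. The account IDs below are placeholders; the hallucinated, non-victim IDs Sysdig observed would simply be extra entries in such a list.

```python
# Hypothetical illustration of the ARN construction behind an
# OrganizationAccountAccessRole assumption attempt. The role name is the
# AWS Organizations default; the account IDs are made up.
ROLE_NAME = "OrganizationAccountAccessRole"

def candidate_role_arns(account_ids):
    """Build the role ARN for each 12-digit account ID; an attacker would
    pass each of these as RoleArn to the STS AssumeRole API."""
    return [f"arn:aws:iam::{acct}:role/{ROLE_NAME}" for acct in account_ids]

# Two placeholder account IDs, including one outside the victim org --
# mirroring the hallucination-like behavior described above.
arns = candidate_role_arns(["111111111111", "222222222222"])
for arn in arns:
    print(arn)
```

Trying every ID in a stolen or generated list is cheap for the attacker, which is why CloudTrail monitoring for failed `AssumeRole` calls is a useful detection signal.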
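Because the initial foothold came from credentials left in public S3 buckets, a basic defensive check is to flag bucket policies that grant access to everyone. The helper below is a simplified sketch (the function name and sample policy are illustrative, not from the report); it only inspects the `Principal` field and ignores conditions, so a real audit would use AWS's own access analysis tooling.

```python
import json

def bucket_policy_is_public(policy_json: str) -> bool:
    """Return True if any Allow statement in an S3 bucket policy grants
    access to everyone (Principal "*" or {"AWS": "*"})."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare object
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*":
            return True
        if isinstance(principal, dict):
            aws = principal.get("AWS")
            if aws == "*" or (isinstance(aws, list) and "*" in aws):
                return True
    return False

# Sample policy that exposes every object in a (made-up) bucket.
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Principal": "*",
                   "Action": "s3:GetObject",
                   "Resource": "arn:aws:s3:::example-bucket/*"}],
})
print(bucket_policy_is_public(public_policy))
```

Enabling S3 Block Public Access at the account level prevents such policies from taking effect in the first place.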