Exploring the Threat Landscape for GenAI Cloud Workloads

With GenAI adoption rising significantly in various sectors over the past years, the threat landscape associated with GenAI will also evolve rapidly. Learn more about what security challenges can potentially threaten the cloud GenAI workloads.
28.10.2024
Kennedy Torkura
5 minutes
Exploring the Threat Landscape for GenAI Cloud Workloads
Contributors
Kennedy Torkura
Kennedy Torkura
Co-Founder & CTO
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

This article is published in collaboration with Sopra Steria Ventures promotes strategic collaborations and investments in start-ups to strengthen innovation in areas such as generative AI, cybersecurity, sustainability and blockchain. It is also published in Sopra Steria Digitale Exzellenz website in German language that can be accessed here.

Introduction

GenAI is transforming our lives in various ways, and organizations are rapidly exploiting its potential to enhance productivity, innovation, and business advantages. However, only some organizations can easily access GenAI despite the perceived benefits. Consequently, cloud service providers now provide GenAI-as-a-Service offerings, e.g., Amazon Bedrock, Azure AI Services, and Google Vertex AI. These GenAI-as-a-Service offerings drastically lower the entry barrier for organizations and fast-track development, deployment, and maintenance of GenAI workloads without bothering much about the infrastructure and knowledge requirements. 

However, these GenAI workloads introduce several security and safety challenges that require attention, given the increasing need for Responsible AI systems, e.g., the EU AI Act. Organizations must understand these risks and possible countermeasures to retain customer trust and business excellence and thwart potential cloud attacks.

                                   MITRE ATLAS Showing Several AI Threats (Source: atlas.mitre.org)

Top Threats Against GenAI Cloud Workloads 

LLMJacking

LLMJacking is an attack where criminals gain illegal access to Large Language Models (LLMs) by first stealing cloud credentials, e.g., AWS API Keys, and using the credentials to access cloud GenAI workloads e,g, LLMs. The primary motivation is to pass over the vast bills accrued to the victim account organization (up to $46,000 of LLM consumption/day). KrebsonSecurity recently published a story on LLMJacking attacks based on a research investigation conducted by Permiso Security. The report reveals the lucrativeness of LLMJacking for cybercriminals; a prominent cyber-criminal organization behind this attack generated $1 million in annualized revenue via a business model involving the reselling of sex chatbots. LLMJacking was first reported earlier this year by Sysdig Threat Research Team.

Data Poisoning 

Data poisoning attacks are malicious actions where attackers inject malicious or nonsensical data into the training datasets to corrupt the output produced by LLMs. Data is the core of GenAI; training and fine-tuning data must be easily accessible to Cloud GenAI workloads to allow for customization and contextualization via  Retrieval Augmented Generation (RAG) techniques.  The LLMs provided by CSPs are usually pre-trained and general-purpose; while this drastically reduces the effort and resources needed for training, the responses from the models need to be customized to meet specific use cases required by organizations.  However, fine-tuning data is commonly kept in S3 buckets, thus exposing the data to data-poisoning attacks; attackers are familiar with the vulnerabilities related to S3.

                            Data Poisoning Attack Against Amazon Bedrock GenAI Workload

Data Exfiltration

Similar to the data poisoning attacks, data used by Cloud GenAI workloads are attractive targets for exfiltration attacks. There are several reasons attackers would be interested in training data more than other types of data, the primary reason being the need to acquire proprietary information or intellectual property. Access to intellectual property might give a competing organization a considerable advantage. 

Prompt Injection

Prompt injection attacks are one of the most commonly discussed threat vectors against LLMs. Attackers implement this attack by crafting malicious prompts that trick LLMs into acting unintendedly, such as divulging sensitive information. Prompt injection occurs at the application layer. However, similar to other application security attacks, there are implications and challenges when applications are deployed on the cloud, e.g., to take advantage of rapid scaling and elasticity. With the need to scale GenAI applications, organizations would prefer central management of GenAI applications for the sake of resilience and scalability. This would eventually mean exposing prompt injection attacks to the infrastructure rather than the application layer.

Model Attacks 

Cloud GenAI workloads might include both provider-managed and customer-managed Machine Learning (ML) models. Provider-managed models are less vulnerable to attacks since the providers implement extra security measures, and these are the least exposed. However, customer-managed models are more susceptible to attacks due to higher levels of exposure. Customer-managed models are often stored in container image repositories (e.g., Azure Container Registry), virtual machines, or Kubernetes clusters.  Several ML image attack vectors are documented on the MITRE ATLAS, including Backdoor ML Model, Full ML Access, Evade ML Model, and Erode ML Model Integrity. Several of these model attacks were recently demonstrated against SAP AI Core by Wiz research.

Cloud Misconfiguration

Most cloud attacks are due to misconfigured cloud resources and customers' faults. Naturally, GenAI cloud workloads are not isolated from this menace. Some important aspects are Identity and Access Management (IAM) controls, encryption, and secret management. Each type of misconfiguration could have substantial security impacts on GenAI workloads, resulting in successful attacks.

Addressing Threats Against GenAI Workloads 

Several threats against GenAI cloud workloads are well known, and mitigation strategies are established. However, many of these threats are AI-specific and require novel mitigation strategies. Let’s examine two of these essential strategies for securing GenAI cloud workloads.

AI Security Posture Management

Several security posture management approaches, including CSPM and DSPM, are used to manage diverse aspects of the cloud. These approaches only partially cover the AI-specific gaps; hence, there is a need for a specific approach: AI Security Posture Management (AISPM).

With AISPM, organizations can quickly check for misconfigured GenAI cloud workloads and ensure that security best practices are implemented. Furthermore, organizations would leverage AISPM to monitor and ensure adherence to regulatory and compliance requirements, such as the EU AI Act.

                            AI-Red Teaming Executed Via the Mitigant Cloud Security Platform

AI Red Teaming

Though AISPM allows organizations to implement several security measures for GenAI workloads, it doesn’t meet all security and safety requirements. Responsible AI guidelines require organizations to ensure that GenAI workloads are secure and don’t generate toxic, biased, inappropriate, or factually incorrect content (hallucination). AI Red teaming is suitable for meeting these requirements via rigorous testing of Gen AI workloads.

Responsible AI With Sopra Steria and Mitigant  

Navigating Responsible AI effectively could be a hurdle for most organizations, but working with reliable partners could alleviate these challenges. Sopra Steria has extensive AI expertise and has supported several public and private organizations adopting secure and safe GenAI.  

Mitigant is an innovative startup pushing the limits of cloud security, including all the aspects related to GenAI cloud workloads. More about Mitigant: https://www.mitigant.io.

As a European management and technology consultancy, Sopra Steria has extensive experience in the development and implementation of AI strategies and makes its clients' cybersecurity strategies fit for the AI age. More about Sopra Steria: https://www.soprasteria.de/.

Ready to Secure Your Cloud Infrastructures?
Connect with the Mitigant Team and proactively protect your clouds today.

Join The Cloud Security Revolution Today!

Take control of your cloud security in minutes. No credit card required.
Start 30-day Free Trial