Information security models are methods used to authenticate security policies as they are intended to provide a precise set of rules that a computer can follow to implement the fundamental security concepts, processes, and procedures contained in a security policy. These models can be abstract or intuitive.

State Machine Model

The state machine model refers to a system that is always in secure mode regardless of the operational state it is in. According to the state machine model, a state is a snapshot of a system at a specific moment in time. The state machine model derives from the computer science definition of a finite state machine (FSM), integrating an external input with an internal machine state to model all types of systems, including parsers, decoders, and interpreters.

Given an input and a state, an FSM transitions to another state and may create an output. A transition takes place when accepting input or producing output and always results in a new state. All state transitions must be examined and if all components of the state meet the requirements of the security policy, then the state is considered secure. When each state transitions to another secure state, the system is rendered as a secure state machine. Many other security models are influenced by the secure state concept.

The Bell-LaPadula Model

The Bell-LaPadula Model was developed to formalize the U.S. Department of Defense (DoD) multi-level security policy. The DoD classifies resources into four different levels. In ascending order from least sensitive to most sensitive are the following: Unclassified, Confidential, Secret, and Top Secret. Going by the Bell-LaPadula model, a subject with any level of clearance can access resources at or below its clearance level. However, only those resources that a person needs access to are made available. For example, an individual cleared for the Secret level only has access documents labeled Secret. With these restrictions, the Bell-LaPadula model preserves the confidentiality of objects. It does not acknowledge integrity or availability of objects.

The Bell-LaPadula model is based on the state machine model. It also implements mandatory access controls and the lattice model. The lattice tiers are the classification levels used by the security policy of the organization. In this model, secure states are delimited by two rules called properties:

  1. The Simple Security Property (SS Property) states that a subject at a specific classification level cannot read data with a higher classification level.
  2. The Security Property ( Property) states that a subject at a specific classification level cannot write data to a lower classification level.

Subjects: A subject is an active entity that is seeking rights to a resource or object. A subject can be a person, a program, or a process.

Objects: An object is a passive entity, such as a file or a storage resource. In some cases, an item can be a subject in one context and an object in another. The Bell-LaPadula does not deal with integrity or availability, access control management, and file sharing. It also does not impede covert channels, a mechanism that allows data to be communicated outside of normal, expected, or detectable methods.

The Biba Integrity Model

The Biba model was developed as a direct analogue to the Bell-LaPadula model and is also a state machine model based on a classification lattice with mandatory access controls. It was developed to address three integrity issues:

  1. The prevention of object modification by unauthorized subjects.
  2. The prevention of unauthorized object modification by authorized subjects.
  3. The protection of internal and external object consistency.

In this model there are three axioms:

  1. The Simple Integrity Axiom (SI Axiom), which states that a subject at a specific classification level cannot read data with a lower classification level.
  2. The Integrity Axiom ( Axiom), which states that a subject at a specific classification level cannot write data to a higher classification level.
  3. A subject at one level of integrity cannot invoke a subject at a higher level of integrity.

The Biba model only acknowledges integrity, not confidentiality or availability. Its main focus is safeguarding objects from outside threats and regards internal threats handled by appropriate programs. Access control management is not acknowledged by the Biba model, and there’s no function that allows modification of an object or subject’s classification level. In addition, it does not prevent covert channels.

Clark-Wilson Integrity Model

The Clark-Wilson model is an integrity model that was developed after the Biba model. It addresses integrity protection from a different perspective. Instead of using a lattice structure, it implements a subject-program-object or three-part relationship. Subjects have accessibility to objects exclusively through programs. There’s no direct access.

The Clark-Wilson model offers integrity through two principles: well-formed transactions and separation of duties. Well-formed transactions take the form of programs, the method in which subjects are able to access objects. Each program has restrictions in terms of what it can or can’t do to an object, effectively limiting the subject’s capabilities. If the programs are properly designed, then the triple relationship is successful in protecting the integrity of the object.

Separation of duties is the method of dividing critical functions into two or more parts. Each part is required to be handled by a different subject. This prevents authorized subjects from making unauthorized modifications to objects, further protecting the integrity of the object. The Clark-Wilson model requires auditing along with the above-mentioned principles. Auditing tracks monitors to objects as well as inputs from outside the system.

The Brewer and Nash Model

The Brewer and Nash model has similarities with the Bell-LaPadula model and is also referred to as the Chinese Wall model. This model allows access controls to change dynamically based on a user’s past activity. This model applies to a single integrated database; it seeks to create security domains that are sensitive to the notion of conflict of interest (COI).

Data is created with indications of which security domains are potentially in conflict and blocks any subject with access to one domain that belongs to a specific conflict class from accessing any other domain that belongs to the same conflict class. This structure is based on data isolation within each conflict class to shield users from potential conflict of interest scenarios.

The Take-Grant Model

The Take-Grant model is a confidentiality-based model that uses a directed graph to specify the rights that can be passed from one subject to another or from a subject to an object. The model gives permission to subjects to take rights from other subjects. Subjects with the grant right have permission to grant rights and have permission to grant rights to other subjects.

The Information Flow Model

The information flow model is based on a state machine model, and consists of objects, state transitions, and lattice states. Information flow models are constructed to block unauthorized, insecure, or restricted information flow, either between subjects and objects at the same classification level, or between subjects and objects at different classification levels. It permits authorized information flows within the same classification level or between different classification levels, while preventing all unauthorized information flows between or among the classification levels.

The Bell-LaPadula model and the Biba model are both information flow models. Bell-LaPadula concentrates on blocking the information flow from a high security level to a low security level. Biba is focused on preventing information from flowing from a low security level to a high security level.

The Noninterference Model

The noninterference model is based on the information flow model, but addresses how the actions of a higher security level subject impacts the system state or actions of a subject at a lower security level. In this model, the actions at the higher security level subject should have influence on the actions of a subject at a lower security level. Essentially the higher security subject should go unnoticed at the lower level.

The Access Control Matrix

An access control matrix is a table that states a subject’s access rights on an object. A subject’s access rights can be of the type read, write, and execute. Each column of the access control matrix is called an Access Control List (ACL) while each row is called a capability list.

An ACL is connected to the object and outlines actions each subject can perform on that object. A capability list is connected to the subject and outlines the actions that a specific subject is allowed to perform on each object. The access matrix model follows discretionary access control because the entries in the matrix are at the discretion of an individual who has authority over the table.

The Open Worldwide Application Security Project (OWASP) is a community-led organization and has been around for over 20 years and is largely known for its Top 10 web application security risks (check out our course on it). As the use of generative AI and large language models (LLMs) has exploded recently, so too has the risk to privacy and security by these technologies. OWASP, leading the charge for security, has come out with its Top 10 for LLMs and Generative AI Apps this year. In this blog post we’ll explore the Top 10 risks and explore examples of each as well as how to prevent these risks.

LLM01: Prompt Injection

Those familiar with the OWASP Top 10 for web applications have seen the injection category before at the top of the list for many years. This is no exception with LLMs and ranks as number one. Prompt Injection can be a critical vulnerability in LLMs where an attacker manipulates the model through crafted inputs, leading it to execute unintended actions. This can result in unauthorized access, data exfiltration, or social engineering. There are two types: Direct Prompt Injection, which involves "jailbreaking" the system by altering or revealing underlying system prompts, giving an attacker access to backend systems or sensitive data, and Indirect Prompt Injection, where external inputs (like files or web content) are used to manipulate the LLM's behavior.

As an example, an attacker might upload a resume containing an indirect prompt injection, instructing an LLM-based hiring tool to favorably evaluate the resume. When an internal user runs the document through the LLM for summarization, the embedded prompt makes the LLM respond positively about the candidate’s suitability, regardless of the actual content.

How to prevent prompt injection:

  1. Limit LLM Access: Apply the principle of least privilege by restricting the LLM's access to sensitive backend systems and enforcing API token controls for extended functionalities like plugins.
  2. Human Approval for Critical Actions: For high-risk operations, require human validation before executing, ensuring that the LLM's suggestions are not followed blindly.
  3. Separate External and User Content: Use frameworks like ChatML for OpenAI API calls to clearly differentiate between user prompts and untrusted external content, reducing the chance of unintentional action from mixed inputs.
  4. Monitor and Flag Untrusted Outputs: Regularly review LLM outputs and mark suspicious content, helping users to recognize potentially unreliable information.

LLM02: Insecure Output Handling

Insecure Output Handling occurs when the outputs generated by a LLM are not properly validated or sanitized before being used by other components in a system. Since LLMs can generate various types of content based on input prompts, failing to handle these outputs securely can introduce risks like cross-site scripting (XSS), server-side request forgery (SSRF), or even remote code execution (RCE). Unlike Overreliance (LLM09), which focuses on the accuracy of LLM outputs, Insecure Output Handling specifically addresses vulnerabilities in how these outputs are processed downstream.

As an example, there could be a web application that uses an LLM to summarize user-provided content and renders it back in a webpage. An attacker submits a prompt containing malicious JavaScript code. If the LLM’s output is displayed on the webpage without proper sanitization, the JavaScript will execute in the user’s browser, leading to XSS. Alternatively, if the LLM’s output is sent to a backend database or shell command, it could allow SQL injection or remote code execution if not properly validated.

How to prevent Insecure Output Handling:

  1. Zero-Trust Approach: Treat the LLM as an untrusted source, applying strict allow list validation and sanitization to all outputs it generates, especially before passing them to downstream systems or functions.
  2. Output Encoding: Encode LLM outputs before displaying them to end users, particularly when dealing with web content where XSS risks are prevalent.
  3. Adhere to Security Standards: Follow the OWASP Application Security Verification Standard (ASVS) guidelines, which provide strategies for input validation and sanitization to protect against code injection risks.

LLM03: Training Data Poisoning

Training Data Poisoning refers to the manipulation of the data used to train LLMs, introducing biases, backdoors, or vulnerabilities. This tampered data can degrade the model's effectiveness, introduce harmful biases, or create security flaws that malicious actors can exploit. Poisoned data could lead to inaccurate or inappropriate outputs, compromising user trust, harming brand reputation, and increasing security risks like downstream exploitation.

As an example, there could be a scenario where an LLM is trained on a dataset that has been tampered with by a malicious actor. The poisoned dataset includes subtly manipulated content, such as biased news articles or fabricated facts. When the model is deployed, it may output biased information or incorrect details based on the poisoned data. This not only degrades the model’s performance but can also mislead users, potentially harming the model’s credibility and the organization’s reputation.

How to prevent Training Data Poisoning:

  1. Data Validation and Vetting: Verify the sources of training data, especially when sourcing from third-party datasets. Conduct thorough checks on data integrity, and where possible, use trusted data sources.
  2. Machine Learning Bill of Materials (ML-BOM): Maintain an ML-BOM to track the provenance of training data and ensure that each source is legitimate and suitable for the model’s purpose.
  3. Sandboxing and Network Controls: Restrict access to external data sources and use network controls to prevent unintended data scraping during training. This helps ensure that only vetted data is used for training.
  4. Adversarial Robustness Techniques: Implement strategies like federated learning and statistical outlier detection to reduce the impact of poisoned data. Periodic testing and monitoring can identify unusual model behaviors that may indicate a poisoning attempt.
  5. Human Review and Auditing: Regularly audit model outputs and use a human-in-the-loop approach to validate outputs, especially for sensitive applications. This added layer of scrutiny can catch potential issues early.

LLM04: Model Denial of Service

Model Denial of Service (DoS) is a vulnerability in which an attacker deliberately consumes an excessive amount of computational resources by interacting with a LLM. This can result in degraded service quality, increased costs, or even system crashes. One emerging concern is manipulating the context window of the LLM, which refers to the maximum amount of text the model can process at once. This makes it possible to overwhelm the LLM by exceeding or exploiting this limit, leading to resource exhaustion.

As an example, an attacker may continuously flood the LLM with sequential inputs that each reach the upper limit of the model’s context window. This high-volume, resource-intensive traffic overloads the system, resulting in slower response times and even denial of service. As another example, if an LLM-based chatbot is inundated with a flood of recursive or exceptionally long prompts, it can strain computational resources, causing system crashes or significant delays for other users.

How to prevent Model Denial of Service:

  1. Rate Limiting: Implement rate limits to restrict the number of requests from a single user or IP address within a specific timeframe. This reduces the chance of overwhelming the system with excessive traffic.
  2. Resource Allocation Caps: Set caps on resource usage per request to ensure that complex or high-resource requests do not consume excessive CPU or memory. This helps prevent resource exhaustion.
  3. Input Size Restrictions: Limit input size according to the LLM's context window capacity to prevent excessive context expansion. For example, inputs exceeding a predefined character limit can be truncated or rejected.
  4. Monitoring and Alerts: Continuously monitor resource utilization and establish alerts for unusual spikes, which may indicate a DoS attempt. This allows for proactive threat detection and response.
  5. Developer Awareness and Training: Educate developers about DoS vulnerabilities in LLMs and establish guidelines for secure model deployment. Understanding these risks enables teams to implement preventative measures more effectively.

LLM05: Supply Chain Vulnerabilities

Supply Chain attacks are incredibly common and this is no different with LLMs, which, in this case refers to risks associated with the third-party components, training data, pre-trained models, and deployment platforms used within LLMs. These vulnerabilities can arise from outdated libraries, tampered models, and even compromised data sources, impacting the security and reliability of the entire application. Unlike traditional software supply chain risks, LLM supply chain vulnerabilities extend to the models and datasets themselves, which may be manipulated to include biases, backdoors, or malware that compromises system integrity.

As an example, an organization uses a third-party pre-trained model to conduct economic analysis. If this model is poisoned with incorrect or biased data, it could generate inaccurate results that mislead decision-making. Additionally, if the organization uses an outdated plugin or compromised library, an attacker could exploit this vulnerability to gain unauthorized access or tamper with sensitive information. Such vulnerabilities can result in significant security breaches, financial loss, or reputational damage.

How to prevent Supply Chain Vulnerabilities:

  1. Vet Third-Party Components: Carefully review the terms, privacy policies, and security measures of all third-party model providers, data sources, and plugins. Use only trusted suppliers and ensure they have robust security protocols in place.
  2. Maintain a Software Bill of Materials (SBOM): An SBOM provides a complete inventory of all components, allowing for quick detection of vulnerabilities and unauthorized changes. Ensure that all components are up-to-date and apply patches as needed.
  3. Use Model and Code Signing: For models and external code, employ digital signatures to verify their integrity and authenticity before use. This helps ensure that no tampering has occurred.
  4. Anomaly Detection and Robustness Testing: Conduct adversarial robustness tests and anomaly detection on models and data to catch signs of tampering or data poisoning. Integrating these checks into your MLOps pipeline can enhance overall security.
  5. Implement Monitoring and Patching Policies: Regularly monitor component usage, scan for vulnerabilities, and patch outdated components. For sensitive applications, continuously audit your suppliers’ security posture and update components as new threats emerge.

LLM06: Sensitive Information Disclosure

Sensitive Information Disclosure in LLMs occurs when the model inadvertently reveals private, proprietary, or confidential information through its output. This can happen due to the model being trained on sensitive data or because it memorizes and later reproduces private information. Such disclosures can result in significant security breaches, including unauthorized access to personal data, intellectual property leaks, and violations of privacy laws.

As an example, there could be an LLM-based chatbot trained on a dataset containing personal information such as users’ full names, addresses, or proprietary business data. If the model memorizes this data, it could accidentally reveal this sensitive information to other users. For instance, a user might ask the chatbot for a recommendation, and the model could inadvertently respond with personal information it learned during training, violating privacy rules.

How to prevent Sensitive Information Disclosure:

  1. Data Sanitization: Before training, scrub datasets of personal or sensitive information. Use techniques like anonymization and redaction to ensure no sensitive data remains in the training data.
  2. Input and Output Filtering: Implement robust input validation and sanitization to prevent sensitive data from entering the model’s training data or being echoed back in outputs.
  3. Limit Training Data Exposure: Apply the principle of least privilege by restricting sensitive data from being part of the training dataset. Fine-tune the model with only the data necessary for its task, and ensure high-privilege data is not accessible to lower-privilege users.
  4. User Awareness: Make users aware of how their data is processed by providing clear Terms of Use and offering opt-out options for having their data used in model training.
  5. Access Controls: Apply strict access control to external data sources used by the LLM, ensuring that sensitive information is handled securely throughout the system

LLM07: Insecure Plugin Design

Insecure Plugin Design vulnerabilities arise when LLM plugins, which extend the model’s capabilities, are not adequately secured. These plugins often allow free-text inputs and may lack proper input validation and access controls. When enabled, plugins can execute various tasks based on the LLM’s outputs without further checks, which can expose the system to risks like data exfiltration, remote code execution, and privilege escalation. This vulnerability is particularly dangerous because plugins can operate with elevated permissions while assuming that user inputs are trustworthy.

As an example, there could be a weather plugin that allows users to input a base URL and query. An attacker could craft a malicious input that directs the LLM to a domain they control, allowing them to inject harmful content into the system. Similarly, a plugin that accepts SQL “WHERE” clauses without validation could enable an attacker to execute SQL injection attacks, gaining unauthorized access to data in a database.

How to prevent Insecure Plugin Design:

  1. Enforce Parameterized Input: Plugins should restrict inputs to specific parameters and avoid free-form text wherever possible. This can prevent injection attacks and other exploits.
  2. Input Validation and Sanitization: Plugins should include robust validation on all inputs. Using Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) can help identify vulnerabilities during development.
  3. Access Control: Follow the principle of least privilege, limiting each plugin's permissions to only what is necessary. Implement OAuth2 or API keys to control access and ensure only authorized users or components can trigger sensitive actions.
  4. Manual Authorization for Sensitive Actions: For actions that could impact user security, such as transferring files or accessing private repositories, require explicit user confirmation.
  5. Adhere to OWASP API Security Guidelines: Since plugins often function as REST APIs, apply best practices from the OWASP API Security Top 10. This includes securing endpoints and applying rate limiting to mitigate potential abuse.

LLM08: Excessive Agency

Excessive Agency in LLM-based applications arises when models are granted too much autonomy or functionality, allowing them to perform actions beyond their intended scope. This vulnerability occurs when an LLM agent has access to functions that are unnecessary for its purpose or operates with excessive permissions, such as being able to modify or delete records instead of only reading them. Unlike Insecure Output Handling, which deals with the lack of validation on the model’s outputs, Excessive Agency pertains to the risks involved when an LLM takes actions without proper authorization, potentially leading to confidentiality, integrity, and availability issues.

As an example, there could be an LLM-based assistant that is given access to a user's email account to summarize incoming messages. If the plugin that is used to read emails also has permissions to send messages, a malicious prompt injection could trick the LLM into sending unauthorized emails (or spam) from the user's account.

How to prevent Excessive Agency:

  1. Restrict Plugin Functionality: Ensure plugins and tools only provide necessary functions. For example, if a plugin is used to read emails, it should not include capabilities to delete or send emails.
  2. Limit Permissions: Follow the principle of least privilege by restricting plugins’ access to external systems. For instance, a plugin for database access should be read-only if writing or modifying data is not required.
  3. Avoid Open-Ended Functions: Avoid functions like “run shell command” or “fetch URL” that provide broad system access. Instead, use plugins that perform specific, controlled tasks.
  4. User Authorization and Scope Tracking: Require plugins to execute actions within the context of a specific user's permissions. For example, using OAuth with limited scopes helps ensure actions align with the user’s access level.
  5. Human-in-the-Loop Control: Require user confirmation for high-impact actions. For instance, a plugin that posts to social media should require the user to review and approve the content before it is published.
  6. Authorization in Downstream Systems: Implement authorization checks in downstream systems that validate each request against security policies. This prevents the LLM from making unauthorized changes directly.

LLM09: Overreliance

Overreliance occurs when users or systems trust the outputs of a LLM without proper oversight or verification. While LLMs can generate creative and informative content, they are prone to “hallucinations” (producing false or misleading information) or providing authoritative-sounding but incorrect outputs. Overreliance on these models can result in security risks, misinformation, miscommunication, and even legal issues, especially if LLM-generated content is used without validation. This vulnerability becomes especially dangerous in cases where LLMs suggest insecure coding practices or flawed recommendations.

As an example, there could be a development team using an LLM to expedite the coding process. The LLM suggests an insecure code library, and the team, trusting the LLM, incorporates it into their software without review. This introduces a serious vulnerability. As another example, a news organization might use an LLM to generate articles, but if they don’t validate the information, it could lead to the spread of disinformation.

How to prevent Overreliance:

  1. Regular Monitoring and Review: Implement processes to review LLM outputs regularly. Use techniques like self-consistency checks or voting mechanisms to compare multiple model responses and filter out inconsistencies.
  2. Cross-Verification: Compare the LLM’s output with reliable, trusted sources to ensure the information’s accuracy. This step is crucial, especially in fields where factual accuracy is imperative.
  3. Fine-Tuning and Prompt Engineering: Fine-tune models for specific tasks or domains to reduce hallucinations. Techniques like parameter-efficient tuning (PET) and chain-of-thought prompting can help improve the quality of LLM outputs.
  4. Automated Validation: Use automated validation tools to cross-check generated outputs against known facts or data, adding an extra layer of security.
  5. Risk Communication: Clearly communicate the limitations of LLMs to users, highlighting the potential for errors. Transparent disclaimers can help manage user expectations and encourage cautious use of LLM outputs.
  6. Secure Coding Practices: For development environments, establish guidelines to prevent the integration of potentially insecure code. Avoid relying solely on LLM-generated code without thorough review.

LLM10: Model Theft

Model Theft refers to the unauthorized access, extraction, or replication of proprietary LLMs by malicious actors. These models, containing valuable intellectual property, are at risk of exfiltration, which can lead to significant economic and reputational loss, erosion of competitive advantage, and unauthorized access to sensitive information encoded within the model. Attackers may steal models directly from company infrastructure or replicate them by querying APIs to build shadow models that mimic the original. As LLMs become more prevalent, safeguarding their confidentiality and integrity is crucial.

As an example, an attacker could exploit a misconfiguration in a company’s network security settings, gaining access to their LLM model repository. Once inside, the attacker could exfiltrate the proprietary model and use it to build a competing service. Alternatively, an insider may leak model artifacts, allowing adversaries to launch gray box adversarial attacks or fine-tune their own models with stolen data.

How to prevent Model Theft:

  1. Access Controls and Authentication: Use Role-Based Access Control (RBAC) and enforce strong authentication mechanisms to limit unauthorized access to LLM repositories and training environments. Adhere to the principle of least privilege for all user accounts.
  2. Supplier and Dependency Management: Monitor and verify the security of suppliers and dependencies to reduce the risk of supply chain attacks, ensuring that third-party components are secure.
  3. Centralized Model Inventory: Maintain a central ML Model Registry with access controls, logging, and authentication for all production models. This can aid in governance, compliance, and prompt detection of unauthorized activities.
  4. Network Restrictions: Limit LLM access to internal services, APIs, and network resources. This reduces the attack surface for side-channel attacks or unauthorized model access.
  5. Continuous Monitoring and Logging: Regularly monitor access logs for unusual activity and promptly address any unauthorized access. Automated governance workflows can also help streamline access and deployment controls.
  6. Adversarial Robustness: Implement adversarial robustness training to help detect extraction queries and defend against side-channel attacks. Rate-limit API calls to further protect against data exfiltration.
  7. Watermarking Techniques: Embed unique watermarks within the model to track unauthorized copies or detect theft during the model’s lifecycle.

Wrapping it all up

As LLMs continue to grow in capability and integration across industries, their security risks must be managed with the same vigilance as any other critical system. From Prompt Injection to Model Theft, the vulnerabilities outlined in the OWASP Top 10 for LLMs highlight the unique challenges posed by these models, particularly when they are granted excessive agency or have access to sensitive data. Addressing these risks requires a multifaceted approach involving strict access controls, robust validation processes, continuous monitoring, and proactive governance.

For technical leadership, this means ensuring that development and operational teams implement best practices across the LLM lifecycle starting from securing training data to ensuring safe interaction between LLMs and external systems through plugins and APIs. Prioritizing security frameworks such as the OWASP ASVS, adopting MLOps best practices, and maintaining vigilance over supply chains and insider threats are key steps to safeguarding LLM deployments. Ultimately, strong leadership that emphasizes security-first practices will protect both intellectual property and organizational integrity, while fostering trust in the use of AI technologies.

Start learning with Cybrary

Create a free account

Related Posts

All Blogs