We are excited to announce the release of VRT 1.14. With this release, we expand on our commitment to enabling our customers to use human ingenuity to secure, and get value from, AI quickly and confidently by adding a new vulnerability category: AI Data Bias.

Expanding AI security into the ecosystem

In December 2023, we released our first big update to the VRT, giving our customers and hackers a shared understanding of how the most likely emerging LLM-related vulnerabilities are defined and how they should be prioritized for reward and remediation. Building on that work, the new AI Data Bias vuln types focus on mitigating the risk of AI perpetuating social harm through bias and discrimination, in line with government requirements such as Executive Order 14110 and the EU Artificial Intelligence Act.

By adding AI Data Bias vulnerability types to the VRT, we empower hackers to hunt for specific vulns and build targeted proofs of concept (POCs), and we help engagement owners with LLM-related assets design scope and rewards that produce the best outcomes for AI safety.

With these AI security-related updates to the VRT (and still more to come) and our experience working with AI leaders like OpenAI, Anthropic, Google, the U.S. Department of Defense’s Chief Digital and Artificial Intelligence Office, and the Office of the National Cyber Director, the Bugcrowd Platform is positioned as the leading option for meeting the challenges of AI risk in your organization.

What’s inside VRT 1.14

New “AI Data Bias” category:

  • Data Biases > Representation Bias
    Representation bias occurs when the training data of an AI model omits, or insufficiently represents, certain groups that the model is intended to serve. Outputs from AI models with a representation bias show poor performance and produce outcomes that disadvantage those groups (a minimal audit sketch follows this list).
  • Data Biases > Pre-existing Bias
    Pre-existing bias occurs when historical or societal prejudices are present in the training data. This can look like missing data points, overrepresentation or underrepresentation of certain groups, bias in the selection of the data points used to train the model, or data labels that are discriminatory or subjective. Outputs from AI models with a pre-existing bias can show inferior performance and produce outcomes that disadvantage certain groups.
  • Algorithmic Biases > Processing Bias
    Processing bias occurs when AI algorithms make biased decisions or predictions because of the way they process data. This can be a result of the algorithm’s design or the data it was trained on. Outputs from AI models with a processing bias can result in discrimination, reinforcement of stereotypes, and unintended consequences such as the amplification or polarization of viewpoints that disadvantage certain groups.
  • Algorithmic Biases > Aggregation Bias
    Aggregation bias occurs when an AI model systematically favors certain demographic groups over others when processing their data. This bias originates from training data that is skewed or that underrepresents certain groups. Outputs from AI models with an aggregation bias can result in unequal treatment of users based on demographic characteristics, leading to unfair and discriminatory outcomes.
  • Societal Biases > Confirmation Bias
    Confirmation bias occurs when AI algorithms selectively process information that confirms pre-existing beliefs, assumptions, or hypotheses, while disregarding contrary evidence. Outputs from AI models with a confirmation bias can result in misinformed choices or the reinforcement of existing stereotypes and misconceptions. It is often caused by overuse of synthetic data during training.
  • Societal Biases > Systemic Bias
    Systemic bias occurs when AI models consistently favor certain groups over others due to the way they process data, or due to other structural or historical factors. This can be a result of the AI model’s design or the data it was trained on. Outputs from AI models with a systemic bias can result in discrimination, reinforcement of stereotypes, or viewpoints that disadvantage certain groups.
  • Misinterpretation Biases > Context Ignorance
    Context ignorance occurs when AI models fail to consider the broader context when making decisions, leading to uninformed or unfair decision making. This can be a result of the AI model’s design or the data it was trained on. Outputs from AI models with context ignorance can result in discrimination, reinforcement of stereotypes, or viewpoints that disadvantage certain groups.
  • Developer Biases > Implicit Bias
    Implicit bias occurs when biases present in the training data of an AI model affect its decision making. These biases are usually introduced by the developers who shape the design, implementation, and deployment of the AI system.
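
To make the Representation Bias item concrete, here is a minimal sketch of how a tester might quantify representation gaps in a training set by comparing each group’s share of the data against a reference population share. This is purely illustrative and not part of the VRT; the attribute name, reference shares, and tolerance threshold are all hypothetical.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from a
    reference share by more than `tolerance` (or are missing entirely)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical audit: one group is overrepresented and one is absent.
training_data = [{"age_band": "18-34"}] * 80 + [{"age_band": "35-54"}] * 20
print(representation_gaps(
    training_data,
    "age_band",
    reference_shares={"18-34": 0.30, "35-54": 0.40, "55+": 0.30},
))
```

In a real engagement, the reference shares would come from the population the model is intended to serve, and a reported gap would be paired with evidence that model outputs actually disadvantage the underrepresented group.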

Additions to existing categories:

  • Server Security Misconfiguration > Email Verification Bypass
    Email Verification Bypass refers to scenarios where a system’s email verification process can be skipped or circumvented, allowing users to access certain features without confirming their email addresses. While this may not directly lead to a security breach, it highlights lax verification processes and can undermine trust in user management practices. This issue is typically more about procedural adherence than a direct security vulnerability.
  • Server Security Misconfiguration > Missing Subresource Integrity
    Missing Subresource Integrity is identified when resources loaded from external sources lack integrity checks. While this doesn’t typically result in an immediate vulnerability, including Subresource Integrity (SRI) is considered a best practice because it prevents potential future risks from manipulated external resources (a hash-generation sketch follows this list).
  • Sensitive Data Exposure > Token Leakage via Referer > Password Reset Token
    This involves scenarios where password reset tokens are passed in Referer headers, potentially exposing them in logs or to third parties. While often not immediately exploitable, this practice increases risk and runs counter to security best practice (a Referrer-Policy sketch follows this list).
  • Server Security Misconfiguration > Software Package Takeover
    Software Package Takeover happens when an attacker gains control over the software packages used by an application. This can occur through unsecured package repositories or when dependencies are not securely locked down. Such takeovers can lead to malicious code execution within the application.
  • Broken Access Control (BAC) > Privilege Escalation
    Privilege Escalation occurs when a user or process gains elevated access to resources that are normally protected from that application or user. This can happen due to misconfigurations, flaws in access control mechanisms, or exploitation of system bugs. It allows attackers to execute commands, access restricted data, or perform other actions that would not be permitted under normal circumstances.
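
To illustrate the Subresource Integrity item above, here is a small sketch that fetches an externally hosted script, computes its SRI value, and prints the corresponding script tag. The CDN URL is a placeholder, and sha384 is just one commonly used digest; in practice you would pin the exact asset your page actually loads.

```python
import base64
import hashlib
import urllib.request

def sri_hash(url: str) -> str:
    """Fetch a resource and return its SRI value (sha384, base64-encoded)."""
    body = urllib.request.urlopen(url).read()
    digest = hashlib.sha384(body).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# Placeholder URL; substitute the exact file your page references.
url = "https://cdn.example.com/lib.min.js"
print(f'<script src="{url}" integrity="{sri_hash(url)}" '
      'crossorigin="anonymous"></script>')
```

If the file hosted at that URL is later modified, browsers will refuse to load it because the hash no longer matches.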
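
For the password reset token item, the typical mitigation is to stop the reset URL (and the token it carries) from leaking in outbound Referer headers. The sketch below uses Flask purely as an illustration, with a hypothetical /reset-password route; the point is the Referrer-Policy header, not the framework.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/reset-password")
def reset_password():
    # Hypothetical handler: the token arrives as a query parameter, so any
    # external resource or link on this page could receive the full URL
    # (token included) in the Referer header unless we restrict it.
    token = request.args.get("token", "")
    return f"<p>Reset form for token ending in ...{token[-4:]}</p>"

@app.after_request
def strip_referrer(response):
    # Tell browsers not to send referrer information from this app,
    # keeping the reset token out of third-party logs.
    response.headers["Referrer-Policy"] = "no-referrer"
    return response
```

Single-use, short-lived tokens and avoiding third-party resources on the reset page reduce the exposure further.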

Removed from existing category:

  • Broken Authentication and Session Management > Privilege Escalation

Contributions needed!

This update represents our continued commitment to recognizing these attack vectors within the VRT, but it is far from the last. The VRT and these categories will evolve over time as hackers, Bugcrowd Application Security Engineers (ASEs), and customers actively participate in the process. If you would like to contribute to the VRT, Issues and Pull Requests are most welcome!

If you found this useful or have any questions, let’s keep the dialogue going! Tweet me at twitter.com/codingo.