CCC.GenAI.F21: Generate Content
Capability ID:CCC.GenAI.F21
Title:Generate Content
Description:Ability to generate a response given a foundation model, parameter values,
and a prompt.
Mapped Threats
ID | Title | Description | External Mappings | Capability Mappings | Control Mappings |
---|---|---|---|---|---|
CCC.GenAI.TH01 | Prompt Injection | Prompt injection may occur when crafted input is used to manipulate the GenAI model's behaviour, resulting in the generation of harmful or unintended outputs. Prompt injection can be either direct (performed via direct interaction with the model) or indirect (performed via external sources ingested by the model). Both text-based and multi-modal prompt injection is possible. | 4 | 1 | 0 |
CCC.GenAI.TH02 | Data Poisoning | Data poisoning occurs when training, fine-tuning or embedding data is tampered with in order to modify the model's behaviour, for example steering it towards specific outputs, degrading performance or introducing backdoors. | 4 | 1 | 0 |
CCC.GenAI.TH04 | Insecure / Unreliable Model Output | A GenAI model may generate content that is incorrect, misleading or harmful, such as convincing misinformation (hallucinations) or vulnerable or malicious code, due to its reliance on statistical patterns rather than factual understanding. Directly using this flawed output without validation can lead to system compromises, poor decision-making, and legal or reputational damage. | 4 | 1 | 0 |
CCC.GenAI.TH05 | Model Overreliance | Model overreliance and misplaced implicit trust in the output of a GenAI model may lead to the acceptance of inaccurate, biased or insecure outputs without proper validation or oversight, potentially resulting in operational failueres, compliance breaches and flawed decision making. | 4 | 1 | 0 |
CCC.GenAI.TH06 | Unintended Action by a Model-Based Agent | A model-based agent, given the authority to execute tools or interact with APIs, may perform an action that is harmful, incorrect, or not aligned with the user's true intent in response to a prompt. This can be caused by the model misinterpreting an ambiguous prompt or being manipulated by an adversary into misusing its delegated authority. | 4 | 1 | 0 |
CCC.GenAI.TH09 | Lack of Explainability | The "black box" nature of GenAI models makes it difficult or impossible to understand the specific reasoning behind a given output. This opacity makes it challenging to diagnose failures, detect hidden biases, and meet regulatory requirements for decision transparency. | 2 | 1 | 0 |