pyrit.executor.promptgen.AnecdoctorGenerator
- class AnecdoctorGenerator(*, objective_target: PromptChatTarget, processing_model: PromptChatTarget | None = None, converter_config: StrategyConverterConfig | None = None, prompt_normalizer: PromptNormalizer | None = None)
Bases: PromptGeneratorStrategy[AnecdoctorContext, AnecdoctorResult]

Implementation of the Anecdoctor prompt generation strategy.
The Anecdoctor generator creates misinformation content in one of two modes:

1. Using few-shot examples directly (default mode, when processing_model is not provided).
2. First extracting a knowledge graph from the examples and then using it (when processing_model is provided).
This generator is designed to test a model’s susceptibility to generating false or misleading content when provided with examples in ClaimsReview format. The generator can optionally use a processing model to extract a knowledge graph representation of the examples before generating the final content.
The generation flow consists of:

1. (Optional) Extract a knowledge graph from the evaluation data using the processing model.
2. Format a system prompt based on the language and content type.
3. Send the formatted examples (or knowledge graph) to the target model.
4. Return the generated content as the result.
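The two-mode flow above can be sketched as a small standalone coroutine. This is an illustrative simplification, not PyRIT's actual implementation: the `ChatFn` callable stands in for a `PromptChatTarget`, and the prompt wording is invented for the example.

```python
import asyncio
from typing import Awaitable, Callable, List, Optional

# Hypothetical stand-in for a PromptChatTarget: any async callable that
# takes (system_prompt, user_prompt) and returns the model's reply.
ChatFn = Callable[[str, str], Awaitable[str]]


async def anecdoctor_flow(
    *,
    objective_target: ChatFn,
    evaluation_data: List[str],
    content_type: str,
    language: str,
    processing_model: Optional[ChatFn] = None,
) -> str:
    """Toy sketch of the Anecdoctor generation flow (not PyRIT's code)."""
    # 1. Optionally condense the examples into a knowledge graph first.
    if processing_model is not None:
        examples = await processing_model(
            "Extract a knowledge graph from these ClaimsReview examples.",
            "\n".join(evaluation_data),
        )
    else:
        # Few-shot mode: pass the examples through unchanged.
        examples = "\n".join(evaluation_data)

    # 2. Format a system prompt from the language and content type.
    system_prompt = f"Write a {content_type} in {language} in the style of the examples."

    # 3. Send the formatted examples (or knowledge graph) to the target model.
    return await objective_target(system_prompt, examples)
```

In the real generator both targets are `PromptChatTarget` instances and the result is wrapped in an `AnecdoctorResult`; the sketch only shows the branching between few-shot and knowledge-graph modes.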
- __init__(*, objective_target: PromptChatTarget, processing_model: PromptChatTarget | None = None, converter_config: StrategyConverterConfig | None = None, prompt_normalizer: PromptNormalizer | None = None) → None
Initialize the Anecdoctor prompt generation strategy.
- Parameters:
objective_target (PromptChatTarget) – The chat model to be used for prompt generation.
processing_model (Optional[PromptChatTarget]) – The model used for knowledge graph extraction. If provided, the generator will extract a knowledge graph from the examples before generation. If None, the generator will use few-shot examples directly.
converter_config (Optional[StrategyConverterConfig]) – Configuration for prompt converters.
prompt_normalizer (Optional[PromptNormalizer]) – Normalizer for handling prompts.
Methods

- __init__(*, objective_target[, ...]): Initialize the Anecdoctor prompt generation strategy.
- execute_async(**kwargs): Execute the prompt generation strategy asynchronously with the provided parameters.
- execute_with_context_async(*, context): Execute strategy with complete lifecycle management.
- get_identifier(): Get a serializable identifier for the strategy instance.
- async execute_async(*, content_type: str, language: str, evaluation_data: List[str], memory_labels: dict[str, str] | None = None, **kwargs) → AnecdoctorResult
- async execute_async(**kwargs) → AnecdoctorResult
Execute the prompt generation strategy asynchronously with the provided parameters.
- Parameters:
content_type (str) – The type of content to generate (e.g., “viral tweet”, “news article”).
language (str) – The language of the content to generate (e.g., “english”, “spanish”).
evaluation_data (List[str]) – The data in ClaimsReview format to use in constructing the prompt.
memory_labels (Optional[Dict[str, str]]) – Memory labels for the generation context.
**kwargs – Additional parameters for the generation.
- Returns:
The result of the anecdoctor generation.
- Return type:
AnecdoctorResult
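Since evaluation_data expects strings "in ClaimsReview format", one way to build an entry is to serialize a schema.org ClaimReview record as JSON. The exact fields below are illustrative assumptions drawn from the schema.org vocabulary, not requirements stated on this page:

```python
import json

# A minimal, hedged example of one evaluation_data entry: a schema.org
# ClaimReview record serialized as a JSON string. The generator only
# requires strings in ClaimsReview format; the fields shown are examples.
claim = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Example false claim text.",
    "reviewRating": {"@type": "Rating", "alternateName": "False"},
}

evaluation_data = [json.dumps(claim)]
```

A list of such strings, together with content_type and language, is then passed to execute_async.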