pyrit.score.Scorer#

class Scorer[source]#

Bases: ABC

Abstract base class for scorers.

__init__()#

Methods

__init__()

get_identifier()

Returns an identifier dictionary for the scorer.

scale_value_float(value, min_value, max_value)

Scales a value from 0 to 1 based on the given min and max values.

score_async(request_response, *[, task])

Score the request_response, add the results to the database and return a list of Score objects.

score_image_async(image_path, *[, task])

Scores the given image using the chat target.

score_prompts_batch_async(*, request_responses[, tasks, batch_size])

Scores the given request pieces in batches and returns the combined list of Score objects.

score_text_async(text, *[, task])

Scores the given text based on the task using the chat target.

validate(request_response, *[, task])

Validates the request_response piece to score.

Attributes

scorer_type

get_identifier()[source]#

Returns an identifier dictionary for the scorer.

Returns:

The identifier dictionary.

Return type:

dict
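
The shape of the identifier dictionary can be sketched as below; the exact keys are not documented on this page, so the ones used here are illustrative assumptions, not PyRIT's actual format.

```python
# Hedged sketch of an identifier dictionary such as get_identifier might
# return; the keys below are illustrative assumptions, not PyRIT's format.
class MyScorer:
    def get_identifier(self) -> dict:
        return {
            "__type__": type(self).__name__,      # assumed key
            "__module__": type(self).__module__,  # assumed key
        }


print(MyScorer().get_identifier()["__type__"])  # → MyScorer
```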

scale_value_float(value: float, min_value: float, max_value: float) float[source]#

Scales a value from 0 to 1 based on the given min and max values. For example, 3 stars on a 1-to-5 star scale scales to 0.5.

Parameters:
  • value (float) – The value to be scaled.

  • min_value (float) – The minimum value of the range.

  • max_value (float) – The maximum value of the range.

Returns:

The scaled value.

Return type:

float
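
The linear scaling described above can be re-implemented in a few lines. This is a sketch for clarity, not PyRIT's source; the guard for a degenerate range (min equal to max) is an assumption added here.

```python
# Illustrative re-implementation of scale_value_float's linear scaling.
def scale_value_float(value: float, min_value: float, max_value: float) -> float:
    """Map value from [min_value, max_value] linearly onto [0, 1]."""
    if max_value == min_value:
        return 0.0  # assumption: collapse a zero-width range to 0
    return (value - min_value) / (max_value - min_value)


print(scale_value_float(3, 1, 5))  # 3 stars on a 1-to-5 scale → 0.5
```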

abstract async score_async(request_response: PromptRequestPiece, *, task: str | None = None) list[Score][source]#

Score the request_response, add the results to the database and return a list of Score objects.

Parameters:
  • request_response (PromptRequestPiece) – The request response to be scored.

  • task (str) – The task based on which the text should be scored (the original attacker model’s objective).

Returns:

A list of Score objects representing the results.

Return type:

list[Score]
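
Because score_async and validate are abstract, every concrete scorer must implement both. The sketch below shows a minimal true_false subclass; PromptRequestPiece and Score are simplified stand-ins for PyRIT's real model classes, defined here only so the example is self-contained (the real implementation would also persist scores to the database, which this sketch omits).

```python
# Hedged sketch of a minimal concrete Scorer subclass with stand-in models.
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class PromptRequestPiece:  # stand-in for pyrit.models.PromptRequestPiece
    converted_value: str


@dataclass
class Score:  # stand-in for pyrit.models.Score
    score_value: str
    score_type: str


class Scorer(ABC):  # mirrors the abstract interface documented on this page
    scorer_type: str

    @abstractmethod
    async def score_async(self, request_response, *, task=None) -> list:
        ...

    @abstractmethod
    def validate(self, request_response, *, task=None):
        ...


class SubstringScorer(Scorer):
    """True/false scorer: does the response contain a target substring?"""

    scorer_type = "true_false"

    def __init__(self, substring: str):
        self.substring = substring

    def validate(self, request_response, *, task=None):
        # Reject pieces this scorer cannot handle, as validate is meant to.
        if not isinstance(request_response.converted_value, str):
            raise ValueError("SubstringScorer requires text request pieces")

    async def score_async(self, request_response, *, task=None):
        self.validate(request_response, task=task)
        hit = self.substring in request_response.converted_value
        return [Score(score_value=str(hit), score_type=self.scorer_type)]


piece = PromptRequestPiece(converted_value="I cannot help with that request.")
scores = asyncio.run(SubstringScorer("cannot").score_async(piece))
print(scores[0].score_value)  # → True
```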

async score_image_async(image_path: str, *, task: str | None = None) list[Score][source]#

Scores the given image using the chat target.

Parameters:
  • image_path (str) – The path to the image to be scored.

  • task (str) – The task based on which the image should be scored (the original attacker model’s objective).

Returns:

A list of Score objects representing the results.

Return type:

list[Score]

async score_prompts_batch_async(*, request_responses: Sequence[PromptRequestPiece], tasks: Sequence[str] | None = None, batch_size: int = 10) list[Score][source]#

Scores the given request pieces in batches of batch_size and returns the combined list of Score objects.

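
The batching behavior this signature implies can be sketched as scoring request pieces concurrently in chunks of batch_size. The score_one coroutine below is a trivial stand-in, not PyRIT's scoring logic.

```python
# Hedged sketch of chunked concurrent scoring with asyncio.gather.
import asyncio


async def score_one(text: str) -> float:  # stand-in for a real scorer call
    await asyncio.sleep(0)  # yield control, as a real network call would
    return float(len(text))


async def score_batch_async(texts, *, batch_size: int = 10) -> list[float]:
    results: list[float] = []
    for start in range(0, len(texts), batch_size):
        chunk = texts[start:start + batch_size]
        # gather awaits the chunk's scoring calls concurrently, preserving order
        results.extend(await asyncio.gather(*(score_one(t) for t in chunk)))
    return results


print(asyncio.run(score_batch_async(["a", "bb", "ccc"], batch_size=2)))
# → [1.0, 2.0, 3.0]
```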
async score_text_async(text: str, *, task: str | None = None) list[Score][source]#

Scores the given text based on the task using the chat target.

Parameters:
  • text (str) – The text to be scored.

  • task (str) – The task based on which the text should be scored (the original attacker model’s objective).

Returns:

A list of Score objects representing the results.

Return type:

list[Score]

scorer_type: Literal['true_false', 'float_scale']#

abstract validate(request_response: PromptRequestPiece, *, task: str | None = None)[source]#

Validates the request_response piece to score, since some scorers require specific PromptRequestPiece types or values.

Parameters:
  • request_response (PromptRequestPiece) – The request response to be validated.

  • task (str) – The task based on which the text should be scored (the original attacker model’s objective).