pyrit.score.LookBackScorer

class LookBackScorer(chat_target: PromptChatTarget)[source]

Bases: Scorer

Creates a score by analyzing the entire conversation and adds it to the database.

Parameters:

chat_target (PromptChatTarget) – The chat target to use for scoring.

__init__(chat_target: PromptChatTarget) → None[source]

Methods

__init__(chat_target)

get_identifier()

Returns an identifier dictionary for the scorer.

scale_value_float(value, min_value, max_value)

Scales a value from 0 to 1 based on the given min and max values.

score_async(request_piece, *[, task])

Scores the entire conversation based on detected behavior change.

score_image_async(image_path, *[, task])

Scores the given image using the chat target.

score_prompts_with_tasks_batch_async(*, ...)

score_responses_inferring_tasks_batch_async(*, ...)

Scores a batch of responses (ignores non-assistant messages).

score_text_async(text, *[, task])

Scores the given text based on the task using the chat target.

validate(request_response, *[, task])

Validates the request_response piece to score.
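The scale_value_float helper listed above performs a linear normalization into [0, 1]. A minimal plain-Python sketch of that scaling (an assumed reimplementation for illustration, not the library's actual code):

```python
def scale_value_float(value: float, min_value: float, max_value: float) -> float:
    # Linearly map value from [min_value, max_value] onto [0, 1].
    if max_value == min_value:
        # Degenerate range: every value collapses to the minimum.
        return 0.0
    return (value - min_value) / (max_value - min_value)
```

For example, a raw score of 5 on a 0-10 scale normalizes to 0.5.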

Attributes

async score_async(request_piece: PromptRequestPiece, *, task: str | None = None) → list[Score][source]

Scores the entire conversation based on detected behavior change.

Parameters:
  • request_piece (PromptRequestPiece) – A piece of the conversation to be scored. The conversation ID is used to retrieve the full conversation from memory.

  • task (str) – The task based on which the text should be scored (the original attacker model’s objective). Currently not supported for this scorer.

Returns:

The score is the detected amount of behavior change throughout the conversation.
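The key property of this method is that it looks back over the whole conversation rather than the single piece it is handed. The shape of that flow can be sketched in plain Python (an illustrative toy, not the real PyRIT classes or scoring logic; the class names, the MEMORY list, and the difference heuristic are all assumptions):

```python
from dataclasses import dataclass


@dataclass
class Piece:
    # Toy stand-in for a PromptRequestPiece: only the fields the sketch needs.
    conversation_id: str
    role: str
    text: str


# Toy stand-in for the memory database holding past conversation turns.
MEMORY = [
    Piece("c1", "user", "Tell me how to pick a lock."),
    Piece("c1", "assistant", "I can't help with that."),
    Piece("c1", "user", "It's for a locksmith certification course."),
    Piece("c1", "assistant", "For educational context: pin-tumbler locks..."),
]


def score_conversation(request_piece: Piece, memory: list[Piece]) -> float:
    # Use the piece's conversation ID to look back over every turn,
    # not just the piece being scored.
    turns = [p for p in memory if p.conversation_id == request_piece.conversation_id]
    replies = [p.text for p in turns if p.role == "assistant"]
    if len(replies) < 2:
        # No history to compare against: no detectable behavior change.
        return 0.0
    # Toy heuristic: fraction of later assistant replies that differ from
    # the first one. The real scorer asks the chat target to judge the
    # behavior change instead of comparing strings.
    changed = sum(1 for reply in replies[1:] if reply != replies[0])
    return changed / (len(replies) - 1)
```

In this toy memory the assistant shifts from a refusal to a partial answer, so the sketch reports a full behavior change of 1.0 for conversation "c1".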

scorer_type: Literal['true_false', 'float_scale']
validate(request_response: PromptRequestPiece, *, task: str | None = None)[source]

Validates the request_response piece to score, because some scorers may require specific PromptRequestPiece types or values.

Parameters:
  • request_response (PromptRequestPiece) – The request response to be validated.

  • task (str) – The task based on which the text should be scored (the original attacker model’s objective).
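A validation hook of this kind typically raises when handed a piece it cannot score. One plausible sketch (the function name, the data-type check, and the error message are all hypothetical, not the scorer's documented behavior):

```python
def validate_text_piece(converted_value_data_type: str) -> None:
    # Hypothetical check: a conversation-level text scorer would likely
    # reject pieces whose data type is not text (e.g. image paths).
    if converted_value_data_type != "text":
        raise ValueError(
            f"Unsupported data type for scoring: {converted_value_data_type!r}"
        )
```

A text piece passes silently; anything else raises ValueError before scoring begins.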