pyrit.score.HumanInTheLoopScorerGradio
- class HumanInTheLoopScorerGradio(*, open_browser=False, validator: ScorerPromptValidator | None = None, score_aggregator: Callable[[Iterable[Score]], ScoreAggregatorResult] = TrueFalseScoreAggregator.OR)
Bases: TrueFalseScorer

Creates scores from manual human input using Gradio and adds them to the database.
In the future this will not be a TrueFalseScorer; however, it is currently the only scorer type supported.
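As a minimal usage sketch, the scorer can be constructed and awaited like any other PyRIT scorer. The snippet below follows the signatures documented on this page; the helper name `review_output` and the objective string are illustrative, and the actual call is left commented out because scoring requires a live human evaluator in the Gradio UI.

```python
import asyncio


async def review_output(text: str) -> None:
    # Deferred import: requires the pyrit package and a GUI-capable environment.
    from pyrit.score import HumanInTheLoopScorerGradio

    # open_browser=True shows the Gradio interface in a browser tab
    # instead of a pywebview window.
    scorer = HumanInTheLoopScorerGradio(open_browser=True)

    # Blocks until the human evaluator submits a true/false judgment.
    scores = await scorer.score_text_async(
        text, objective="Model refuses the harmful request"
    )
    for score in scores:
        print(score.score_value)


# In an interactive session one would run:
#   asyncio.run(review_output("Example model output to review"))
```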
- __init__(*, open_browser=False, validator: ScorerPromptValidator | None = None, score_aggregator: Callable[[Iterable[Score]], ScoreAggregatorResult] = TrueFalseScoreAggregator.OR) -> None
Initialize the HumanInTheLoopScorerGradio.
- Parameters:
open_browser (bool) – If True, the scorer opens the Gradio interface in a browser tab instead of in a pywebview window. Defaults to False.
validator (Optional[ScorerPromptValidator]) – Custom validator. Defaults to None.
score_aggregator (TrueFalseAggregatorFunc) – Aggregator for combining scores. Defaults to TrueFalseScoreAggregator.OR.
Methods
- __init__(*[, open_browser, validator, ...]) – Initialize the HumanInTheLoopScorerGradio.
- get_identifier() – Get an identifier dictionary for the scorer.
- get_scorer_metrics(dataset_name[, metrics_type]) – Get evaluation statistics for the scorer using the dataset_name of the human-labeled dataset.
- retrieve_score(request_prompt, *[, objective]) – Retrieve a score from the human evaluator through the RPC server.
- scale_value_float(value, min_value, max_value) – Scale a value from 0 to 1 based on the given min and max values.
- score_async(message, *[, objective, ...]) – Score the message, add the results to the database, and return a list of Score objects.
- score_image_async(image_path, *[, objective]) – Score the given image using the chat target.
- score_image_batch_async(*, image_paths[, ...]) – Score a batch of images asynchronously.
- score_prompts_batch_async(*, messages[, ...]) – Score multiple prompts in batches using the provided objectives.
- score_response_async(*, response[, ...]) – Score a response using an objective scorer and optional auxiliary scorers.
- score_response_multiple_scorers_async(*, ...) – Score a response using multiple scorers in parallel.
- score_text_async(text, *[, objective]) – Score the given text based on the task using the chat target.
- validate_return_scores(scores) – Validate the scores returned by the scorer.
Attributes
- scorer_type

- retrieve_score(request_prompt: MessagePiece, *, objective: str | None = None) -> list[Score]
Retrieve a score from the human evaluator through the RPC server.
- Parameters:
request_prompt (MessagePiece) – The message piece to be scored.
objective (Optional[str]) – The objective to evaluate against. Defaults to None.
- Returns:
A list containing a single Score object from the human evaluator.
- Return type:
list[Score]
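The `scale_value_float` method listed above is described as scaling a value from 0 to 1 given minimum and maximum bounds, which reads as ordinary min-max normalization. The standalone function below is a sketch of that behavior under this assumption; it is not PyRIT's actual implementation, and the zero-width-range handling is a guess.

```python
def scale_value_float(value: float, min_value: float, max_value: float) -> float:
    # Min-max normalization: map [min_value, max_value] onto [0.0, 1.0].
    # Assumed behavior for illustration; not PyRIT's actual implementation.
    if max_value == min_value:
        return 0.0  # degenerate range; avoid division by zero (assumption)
    return (value - min_value) / (max_value - min_value)


print(scale_value_float(5.0, 0.0, 10.0))  # midpoint of the range -> 0.5
```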