pyrit.score.HumanLabeledDataset#
- class HumanLabeledDataset(*, name: str, entries: List[HumanLabeledEntry], metrics_type: MetricsType, version: str)[source]#
Bases: object

A class that represents a human-labeled dataset, including its entries and their corresponding human scores. This dataset is used to evaluate PyRIT scorer performance via the ScorerEvaluator class. HumanLabeledDatasets can be constructed from a CSV file.
- __init__(*, name: str, entries: List[HumanLabeledEntry], metrics_type: MetricsType, version: str)[source]#
Initialize the HumanLabeledDataset.
- Parameters:
name (str) – The name of the human-labeled dataset. For datasets of uniform type, this is often the harm category (e.g. hate_speech) or objective. It will be used in the naming of metrics (JSON) and model scores (CSV) files when evaluation is run on this dataset.
entries (List[HumanLabeledEntry]) – A list of entries in the dataset.
metrics_type (MetricsType) – The type of the human-labeled dataset, either HARM or OBJECTIVE.
version (str) – The version of the human-labeled dataset.
- Raises:
ValueError – If the dataset name is an empty string.
Methods
__init__(*, name, entries, metrics_type, version) – Initialize the HumanLabeledDataset.
add_entries(entries) – Add multiple entries to the human-labeled dataset.
add_entry(entry) – Add a new entry to the human-labeled dataset.
from_csv(*, csv_path, metrics_type, ...[, ...]) – Load a human-labeled dataset from a CSV file.
- add_entries(entries: List[HumanLabeledEntry])[source]#
Add multiple entries to the human-labeled dataset.
- Parameters:
entries (List[HumanLabeledEntry]) – A list of entries to add.
- add_entry(entry: HumanLabeledEntry)[source]#
Add a new entry to the human-labeled dataset.
- Parameters:
entry (HumanLabeledEntry) – The entry to add.
- classmethod from_csv(*, csv_path: str | Path, metrics_type: MetricsType, human_label_col_names: List[str], objective_or_harm_col_name: str, assistant_response_col_name: str = 'assistant_response', assistant_response_data_type_col_name: str | None = None, dataset_name: str | None = None, version: str | None = None) HumanLabeledDataset[source]#
Load a human-labeled dataset from a CSV file. This method supports only single-turn, scored text responses. You can optionally include a comment line at the top of the CSV file to specify the dataset version (e.g. # version=x.y).
- Parameters:
csv_path (Union[str, Path]) – The path to the CSV file.
metrics_type (MetricsType) – The type of the human-labeled dataset, either HARM or OBJECTIVE.
human_label_col_names (List[str]) – The names of the columns containing the human-assigned labels. For harm datasets, each of these columns should contain a float score between 0.0 and 1.0 for each response. For objective datasets, each should contain a 0 or 1 for each response.
objective_or_harm_col_name (str) – The name of the column containing the objective or harm category for each response.
assistant_response_col_name (str) – The name of the column containing the assistant responses. Defaults to “assistant_response”.
assistant_response_data_type_col_name (str, Optional) – The name of the column containing the data type of the assistant responses. If not specified, the responses are assumed to be text.
dataset_name (str, Optional) – The name of the dataset. If not provided, it will be inferred from the CSV file name.
version (str, Optional) – The version of the dataset. If not provided here, it will be inferred from a “# version=” comment line in the CSV file, if present. See mini_hate_speech.csv for an example.
- Returns:
The human-labeled dataset object.
- Return type:
HumanLabeledDataset
- Raises:
FileNotFoundError – If the CSV file does not exist.
ValueError – If version is not provided and no “# version=” comment line is found in the CSV file.
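The sketch below builds a minimal objective-style CSV in the shape from_csv expects: an optional “# version=” comment line, a header row, then one row per scored response. The column names (assistant_response, objective, human_label_1) and the label values are illustrative assumptions, not prescribed names; pass whichever names your file actually uses. The from_csv call itself is shown as a comment, since it requires PyRIT to be installed.

```python
import csv
import tempfile
from pathlib import Path

# Two example rows for a single objective, labeled 0 (refusal) and 1
# (objective achieved). Objective datasets use 0/1 labels; harm
# datasets would instead use float scores between 0.0 and 1.0.
rows = [
    {"assistant_response": "I cannot help with that.",
     "objective": "elicit_harmful_advice", "human_label_1": "0"},
    {"assistant_response": "Sure, here is how...",
     "objective": "elicit_harmful_advice", "human_label_1": "1"},
]

csv_path = Path(tempfile.mkdtemp()) / "mini_objective.csv"
with open(csv_path, "w", newline="", encoding="utf-8") as f:
    # Optional version comment line, read by from_csv when the
    # version argument is omitted.
    f.write("# version=1.0\n")
    writer = csv.DictWriter(
        f, fieldnames=["assistant_response", "objective", "human_label_1"]
    )
    writer.writeheader()
    writer.writerows(rows)

# Loading the file (sketch; assumes PyRIT is installed):
# from pyrit.score import HumanLabeledDataset, MetricsType
# dataset = HumanLabeledDataset.from_csv(
#     csv_path=csv_path,
#     metrics_type=MetricsType.OBJECTIVE,
#     human_label_col_names=["human_label_1"],
#     objective_or_harm_col_name="objective",
# )
```

Because dataset_name is omitted in the sketch, the dataset name would be inferred from the file name, and the version would be read from the comment line.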