pyrit.score.LikertScalePaths
- class LikertScalePaths(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: Enum

Enum containing Likert scale configurations, including YAML paths and evaluation file mappings.

Each enum value is a tuple of (yaml_path, evaluation_files) where:
- yaml_path: Path to the YAML file containing the Likert scale definition.
- evaluation_files: Optional LikertScaleEvalFiles for scorer evaluation, or None if no evaluation dataset exists.
- __init__(*args, **kwds)
Attributes
- yaml_path: Get the path to the Likert scale YAML file.
- evaluation_files: Get the evaluation file configuration, or None if no evaluation dataset exists.
- BEHAVIOR_CHANGE_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/behavior_change.yaml'), None)
- CYBER_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/cyber.yaml'), None)
- DECEPTION_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/deception.yaml'), None)
- EXPLOITS_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/exploits.yaml'), LikertScaleEvalFiles(human_labeled_datasets_files=['harm/exploit.csv'], result_file='harm/exploits_metrics.jsonl', harm_category='exploits'))
- FAIRNESS_BIAS_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/fairness_bias.yaml'), None)
- HARM_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/harm.yaml'), None)
- HATE_SPEECH_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/hate_speech.yaml'), LikertScaleEvalFiles(human_labeled_datasets_files=['harm/hate_speech.csv'], result_file='harm/hate_speech_metrics.jsonl', harm_category='hate_speech'))
- INFORMATION_INTEGRITY_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/information_integrity.yaml'), LikertScaleEvalFiles(human_labeled_datasets_files=['harm/info_integrity.csv'], result_file='harm/information_integrity_metrics.jsonl', harm_category='information_integrity'))
- MISINFORMATION_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/misinformation.yaml'), None)
- PERSUASION_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/persuasion.yaml'), None)
- PHISHING_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/phishing.yaml'), None)
- PRIVACY_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/privacy.yaml'), LikertScaleEvalFiles(human_labeled_datasets_files=['harm/privacy.csv'], result_file='harm/privacy_metrics.jsonl', harm_category='privacy'))
- SELF_HARM_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/self_harm.yaml'), LikertScaleEvalFiles(human_labeled_datasets_files=['harm/self_harm.csv'], result_file='harm/self_harm_metrics.jsonl', harm_category='self_harm'))
- SEXUAL_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/sexual.yaml'), LikertScaleEvalFiles(human_labeled_datasets_files=['harm/sexual.csv'], result_file='harm/sexual_metrics.jsonl', harm_category='sexual'))
- VIOLENCE_SCALE = (PosixPath('/home/runner/work/PyRIT/PyRIT/pyrit/datasets/harm_definition/violence.yaml'), LikertScaleEvalFiles(human_labeled_datasets_files=['harm/violence.csv'], result_file='harm/violence_metrics.jsonl', harm_category='violence'))
- property evaluation_files: LikertScaleEvalFiles | None
Get the evaluation file configuration, or None if no evaluation dataset exists.
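The tuple-valued enum design described above can be sketched with a self-contained toy example. Note this is a minimal illustration of the pattern, not the actual PyRIT implementation: the class names (`EvalFiles`, `ScalePaths`) and the two sample members are stand-ins, and the real `LikertScalePaths` members carry full dataset paths.

```python
from dataclasses import dataclass
from enum import Enum
from pathlib import Path

# Stand-in for LikertScaleEvalFiles (illustrative; field names mirror the docs above).
@dataclass(frozen=True)
class EvalFiles:
    human_labeled_datasets_files: list
    result_file: str
    harm_category: str

class ScalePaths(Enum):
    # Each member's value is a (yaml_path, evaluation_files) tuple,
    # mirroring the LikertScalePaths design: evaluation_files is None
    # when no human-labeled evaluation dataset exists for that scale.
    HARM = (Path("harm.yaml"), None)
    VIOLENCE = (
        Path("violence.yaml"),
        EvalFiles(["harm/violence.csv"], "harm/violence_metrics.jsonl", "violence"),
    )

    @property
    def yaml_path(self) -> Path:
        """Get the path to the Likert scale YAML file."""
        return self.value[0]

    @property
    def evaluation_files(self):
        """Get the evaluation file configuration, or None if no dataset exists."""
        return self.value[1]

# Typical use: select only the scales that have an evaluation dataset.
with_eval = [m.name for m in ScalePaths if m.evaluation_files is not None]
```

Exposing the tuple elements as properties keeps call sites readable (`member.yaml_path` rather than `member.value[0]`) while the enum members themselves remain plain value tuples.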