pyrit.prompt_target.HuggingFaceChatTarget#
- class HuggingFaceChatTarget(*, model_id: str | None = None, model_path: str | None = None, hf_access_token: str | None = None, use_cuda: bool = False, tensor_format: str = 'pt', necessary_files: list | None = None, max_new_tokens: int = 20, temperature: float = 1.0, top_p: float = 1.0, skip_special_tokens: bool = True, trust_remote_code: bool = False, device_map: str | None = None, torch_dtype: torch.dtype | None = None, attn_implementation: str | None = None, max_requests_per_minute: int | None = None)[source]#
Bases: PromptChatTarget
The HuggingFaceChatTarget interacts with Hugging Face models, specifically for conducting red teaming activities. Inherits from PromptChatTarget to comply with the current design standards.
- __init__(*, model_id: str | None = None, model_path: str | None = None, hf_access_token: str | None = None, use_cuda: bool = False, tensor_format: str = 'pt', necessary_files: list | None = None, max_new_tokens: int = 20, temperature: float = 1.0, top_p: float = 1.0, skip_special_tokens: bool = True, trust_remote_code: bool = False, device_map: str | None = None, torch_dtype: torch.dtype | None = None, attn_implementation: str | None = None, max_requests_per_minute: int | None = None) None[source]#
Initializes the HuggingFaceChatTarget.
- Parameters:
model_id (Optional[str]) – The Hugging Face model ID. Either model_id or model_path must be provided.
model_path (Optional[str]) – Path to a local model. Either model_id or model_path must be provided.
hf_access_token (Optional[str]) – Hugging Face access token for authentication.
use_cuda (bool) – Whether to use CUDA for GPU acceleration. Defaults to False.
tensor_format (str) – The tensor format. Defaults to “pt”.
necessary_files (Optional[list]) – List of necessary model files to download.
max_new_tokens (int) – Maximum number of new tokens to generate. Defaults to 20.
temperature (float) – Sampling temperature. Defaults to 1.0.
top_p (float) – Nucleus sampling probability. Defaults to 1.0.
skip_special_tokens (bool) – Whether to skip special tokens. Defaults to True.
trust_remote_code (bool) – Whether to trust remote code execution. Defaults to False.
device_map (Optional[str]) – Device mapping strategy.
torch_dtype (Optional[torch.dtype]) – Torch data type for model weights.
attn_implementation (Optional[str]) – Attention implementation type.
max_requests_per_minute (Optional[int]) – The maximum number of requests per minute. Defaults to None.
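For example, a target can be constructed from a Hugging Face model ID. The sketch below uses microsoft/Phi-3-mini-4k-instruct purely as an illustrative model ID and reads the access token from the HUGGINGFACE_TOKEN environment variable; depending on the PyRIT version, PyRIT memory may need to be initialized before targets are created.

```python
import os

from pyrit.prompt_target import HuggingFaceChatTarget

# Note: depending on the PyRIT version, memory initialization (e.g. initialize_pyrit)
# may be required before constructing targets.
target = HuggingFaceChatTarget(
    model_id="microsoft/Phi-3-mini-4k-instruct",   # illustrative model ID
    hf_access_token=os.environ.get("HUGGINGFACE_TOKEN"),
    use_cuda=False,            # set True to run inference on GPU
    max_new_tokens=50,         # cap on tokens generated per response
    temperature=1.0,
    top_p=1.0,
    trust_remote_code=False,   # only enable for model repos whose code you trust
)
```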
Methods
__init__(*[, model_id, model_path, ...])Initializes the HuggingFaceChatTarget.
disable_cache()Disables the class-level cache and clears the cache.
dispose_db_engine()Dispose database engine to release database connections and resources.
enable_cache()Enables the class-level cache.
get_identifier()
is_json_response_supported()Indicates that this target supports JSON response format.
is_model_id_valid()Check if the HuggingFace model ID is valid.
is_response_format_json(message_piece)Checks if the response format is JSON and ensures the target supports it.
load_model_and_tokenizer()Loads the model and tokenizer, downloading if necessary.
send_prompt_async(**kwargs)Sends a normalized prompt asynchronously to the prompt target.
set_model_name(*, model_name)Set the model name for this target.
set_system_prompt(*, system_prompt, ...[, ...])Sets the system prompt for the prompt target.
Attributes
- HUGGINGFACE_TOKEN_ENVIRONMENT_VARIABLE = 'HUGGINGFACE_TOKEN'#
- is_json_response_supported() bool[source]#
Indicates that this target supports JSON response format.
- is_model_id_valid() bool[source]#
Check if the HuggingFace model ID is valid.
- Returns:
True if valid, False otherwise.
- async load_model_and_tokenizer()[source]#
Loads the model and tokenizer, downloading if necessary.
Downloads the model to the HF_MODELS_DIR folder if it does not exist, then loads it from there.
- Raises:
Exception – If the model loading fails.
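Because loading is asynchronous and can fail (for example on authentication or download errors), a caller may want to load the model eagerly and surface problems before any prompts are sent. A minimal sketch, assuming a target constructed as in the example above:

```python
import asyncio

async def preload(target):
    # First call downloads the model into HF_MODELS_DIR; later calls load from there.
    # Failures are raised as exceptions per the docs above.
    try:
        await target.load_model_and_tokenizer()
    except Exception as exc:
        print(f"Model loading failed: {exc}")
        raise

# asyncio.run(preload(target))
```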
- async send_prompt_async(**kwargs)#
Sends a normalized prompt asynchronously to the prompt target.
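In practice, prompts are usually routed to the target through an orchestrator, which builds the normalized request and calls send_prompt_async internally. A rough end-to-end sketch, assuming the initialize_pyrit helper and the PromptSendingOrchestrator API available in recent PyRIT releases (exact names and signatures vary between versions):

```python
import asyncio

from pyrit.common import IN_MEMORY, initialize_pyrit
from pyrit.orchestrator import PromptSendingOrchestrator
from pyrit.prompt_target import HuggingFaceChatTarget


async def main():
    # Assumed setup helper; older releases configure memory differently.
    initialize_pyrit(memory_db_type=IN_MEMORY)

    target = HuggingFaceChatTarget(
        model_id="microsoft/Phi-3-mini-4k-instruct",  # illustrative model ID
        use_cuda=False,
        max_new_tokens=30,
    )

    # The orchestrator normalizes the prompt and calls send_prompt_async on the target.
    orchestrator = PromptSendingOrchestrator(objective_target=target)
    await orchestrator.send_prompts_async(prompt_list=["What is the capital of France?"])

# asyncio.run(main())
```

Responses are recorded in PyRIT memory and can be inspected through the orchestrator once the call completes.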