pyrit.prompt_target.OpenAICompletionTarget#
- class OpenAICompletionTarget(max_tokens: int | None = None, temperature: float | None = None, top_p: float | None = None, presence_penalty: float | None = None, frequency_penalty: float | None = None, n: int | None = None, *args, **kwargs)[source]#
Bases: OpenAITarget

A prompt target for OpenAI completion endpoints.
- __init__(max_tokens: int | None = None, temperature: float | None = None, top_p: float | None = None, presence_penalty: float | None = None, frequency_penalty: float | None = None, n: int | None = None, *args, **kwargs)[source]#
Initialize the OpenAICompletionTarget with the given parameters.
- Parameters:
model_name (str, Optional) – The name of the model. If no value is provided, the OPENAI_COMPLETION_MODEL environment variable will be used.
endpoint (str, Optional) – The target URL for the OpenAI service.
api_key (str | Callable[[], str], Optional) – The API key for accessing the OpenAI service, or a callable that returns an access token. For Azure endpoints with Entra authentication, pass a token provider from pyrit.auth (e.g., get_azure_openai_auth(endpoint)). Defaults to the OPENAI_CHAT_KEY environment variable.
headers (str, Optional) – Extra request headers for the endpoint, as a JSON string.
max_requests_per_minute (int, Optional) – Number of requests the target can handle per minute before hitting a rate limit. The number of requests sent to the target will be capped at the value provided.
max_tokens (int, Optional) – The maximum number of tokens that can be generated in the completion. The token count of your prompt plus max_tokens cannot exceed the model’s context length.
temperature (float, Optional) – What sampling temperature to use, between 0 and 2. Values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
top_p (float, Optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
presence_penalty (float, Optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.
frequency_penalty (float, Optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
n (int, Optional) – How many completions to generate for each prompt.
*args – Variable length argument list passed to the parent class.
**kwargs – Additional keyword arguments passed to the parent OpenAITarget class.
httpx_client_kwargs (dict, Optional) – Additional kwargs to be passed to the httpx.AsyncClient() constructor. For example, to specify a 3-minute timeout: httpx_client_kwargs={"timeout": 180}
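A minimal configuration sketch tying the parameters above together. It assumes PyRIT is installed and that the endpoint, model, and key are supplied via the environment variables named above (OPENAI_COMPLETION_MODEL for the model, OPENAI_CHAT_KEY for the key); the specific sampling values are illustrative, not recommendations.

```python
from pyrit.prompt_target import OpenAICompletionTarget

# Short, mostly deterministic completions: low temperature, single choice.
# httpx_client_kwargs is forwarded to httpx.AsyncClient(); here it sets a
# 3-minute request timeout, as in the parameter description above.
target = OpenAICompletionTarget(
    max_tokens=256,
    temperature=0.2,
    top_p=1.0,
    n=1,
    httpx_client_kwargs={"timeout": 180},
)
```

Parameters not set here (endpoint, api_key, model_name) fall back to the environment-variable defaults documented in the parameter list.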
Methods
__init__([max_tokens, temperature, top_p, ...]) – Initialize the OpenAICompletionTarget with the given parameters.
dispose_db_engine() – Dispose the database engine to release database connections and resources.
get_identifier() – Get an identifier dictionary for this prompt target.
is_json_response_supported() – Check if the target supports JSON as a response format.
is_response_format_json(message_piece) – Check if the response format is JSON and ensure the target supports it.
send_prompt_async(**kwargs) – Asynchronously send a normalized prompt to the prompt target.
set_model_name(*, model_name) – Set the model name for this target.
set_system_prompt(*, system_prompt, ...[, ...]) – Set the system prompt for the prompt target.
Attributes
ADDITIONAL_REQUEST_HEADERS
supported_converters – A list of PromptConverters that are supported by the prompt target.