pyrit.prompt_target.OpenAIChatTarget#

class OpenAIChatTarget(*, max_completion_tokens: int | None = None, max_tokens: int | None = None, temperature: float | None = None, top_p: float | None = None, frequency_penalty: float | None = None, presence_penalty: float | None = None, seed: int | None = None, n: int | None = None, extra_body_parameters: dict[str, Any] | None = None, **kwargs)[source]#

Bases: OpenAITarget

This class facilitates multimodal (image and text) input and text output generation.

It works with GPT-3.5, GPT-4, GPT-4o, GPT-V (GPT-4 Vision), and other compatible models.
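
A minimal usage sketch, assuming the endpoint, API key, and model name are supplied either directly or through the corresponding environment variables; memory initialization is omitted for brevity, and the orchestrator keyword for the target may differ slightly between PyRIT versions:

```python
import asyncio

from pyrit.orchestrator import PromptSendingOrchestrator
from pyrit.prompt_target import OpenAIChatTarget


async def main():
    # Construct the target. endpoint, api_key, and model_name fall back to the
    # environment variables documented below when omitted.
    target = OpenAIChatTarget(
        endpoint="https://<your-resource>.openai.azure.com/",  # placeholder endpoint
        api_key="<your-api-key>",
        temperature=0.7,
    )

    # Send a prompt through an orchestrator, which normalizes the request and
    # forwards it to the target. Depending on the PyRIT version, the keyword
    # may be objective_target= or prompt_target=.
    orchestrator = PromptSendingOrchestrator(objective_target=target)
    await orchestrator.send_prompts_async(prompt_list=["Tell me a joke."])


asyncio.run(main())
```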

__init__(*, max_completion_tokens: int | None = None, max_tokens: int | None = None, temperature: float | None = None, top_p: float | None = None, frequency_penalty: float | None = None, presence_penalty: float | None = None, seed: int | None = None, n: int | None = None, extra_body_parameters: dict[str, Any] | None = None, **kwargs)[source]#
Parameters:
  • model_name (str, Optional) – The name of the model.

  • endpoint (str, Optional) – The target URL for the OpenAI service.

  • api_key (str, Optional) – The API key for accessing the Azure OpenAI service. Defaults to the AZURE_OPENAI_CHAT_KEY environment variable.

  • headers (str, Optional) – Extra headers to send with requests to the endpoint, provided as a JSON string.

  • use_aad_auth (bool, Optional) – When set to True, Azure Active Directory (Entra ID) authentication is used instead of an API key. DefaultAzureCredential is used with the scope https://cognitiveservices.azure.com/.default. Run az login locally to leverage user authentication.

  • api_version (str, Optional) – The version of the Azure OpenAI API. Defaults to “2024-06-01”.

  • max_requests_per_minute (int, Optional) – Number of requests the target can handle per minute before hitting a rate limit. The number of requests sent to the target will be capped at the value provided.

  • max_completion_tokens (int, Optional) –

    An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

    NOTE: Specify this value when using an o1 series model.

  • max_tokens (int, Optional) –

    The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.

    This value is now deprecated in favor of max_completion_tokens, and IS NOT COMPATIBLE with o1 series models.

  • temperature (float, Optional) – The temperature parameter for controlling the randomness of the response.

  • top_p (float, Optional) – The top-p parameter for controlling the diversity of the response.

  • frequency_penalty (float, Optional) – The frequency penalty parameter for penalizing frequently generated tokens.

  • presence_penalty (float, Optional) – The presence penalty parameter for penalizing tokens that are already present in the conversation history.

  • seed (int, Optional) – If specified, OpenAI will make a best effort to sample deterministically, so that repeated requests with the same seed and parameters return the same result.

  • n (int, Optional) – The number of completions to generate for each prompt.

  • extra_body_parameters (dict, Optional) – Additional parameters to be included in the request body.
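
For example, a hedged sketch of constructing the target with Entra ID authentication and extra request-body fields, using only the parameters documented above; the endpoint URL and model name are placeholders:

```python
from pyrit.prompt_target import OpenAIChatTarget

# Authenticate with DefaultAzureCredential instead of an API key (requires a
# prior `az login`), and forward additional fields in the request body.
target = OpenAIChatTarget(
    endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    model_name="gpt-4o",
    use_aad_auth=True,
    max_completion_tokens=512,
    extra_body_parameters={"logprobs": True},  # merged into the chat-completions request body
)
```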

Methods

__init__(*[, max_completion_tokens, ...])

dispose_db_engine() – Dispose DuckDB database engine to release database connections and resources.

get_identifier()

is_json_response_supported() – Indicates that this target supports JSON response format.

is_response_format_json(request_piece) – Checks if the response format is JSON and ensures the target supports it.

send_prompt_async(**kwargs) – Sends a normalized prompt async to the prompt target.

set_system_prompt(*, system_prompt, ...[, ...]) – Sets the system prompt for the prompt target.

Attributes

api_key_environment_variable: str#
endpoint_environment_variable: str#
is_json_response_supported() bool[source]#

Indicates that this target supports JSON response format.

model_name_environment_variable: str#
async send_prompt_async(**kwargs)#

Sends a normalized prompt async to the prompt target.
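
A hedged sketch of calling send_prompt_async directly with a normalized request; the PromptRequestPiece model and the prompt_request keyword are assumed from the PyRIT codebase and may vary slightly across versions, and any required memory initialization is omitted:

```python
import asyncio

from pyrit.models import PromptRequestPiece
from pyrit.prompt_target import OpenAIChatTarget


async def main():
    # endpoint, api_key, and model_name are read from environment variables here.
    target = OpenAIChatTarget()

    # Wrap a user message in a normalized PromptRequestResponse and send it.
    request = PromptRequestPiece(
        role="user",
        original_value="Describe the weather in one sentence.",
    ).to_prompt_request_response()

    response = await target.send_prompt_async(prompt_request=request)
    # The converted value of the first response piece holds the model's text.
    print(response.request_pieces[0].converted_value)


asyncio.run(main())
```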