pyrit.prompt_target.OpenAIChatTarget#
- class OpenAIChatTarget(max_completion_tokens: int | None | NotGiven = NOT_GIVEN, max_tokens: int | None | NotGiven = NOT_GIVEN, temperature: float = 1.0, top_p: float = 1.0, frequency_penalty: float = 0.0, presence_penalty: float = 0.0, seed: int | None = None, *args, **kwargs)[source]#
Bases:
OpenAITarget
This class facilitates multimodal (image and text) input and text output generation.
It works with GPT3.5, GPT4, GPT4o, GPT-V, and other compatible models; a minimal construction sketch follows the parameter list below.
- __init__(max_completion_tokens: int | None | NotGiven = NOT_GIVEN, max_tokens: int | None | NotGiven = NOT_GIVEN, temperature: float = 1.0, top_p: float = 1.0, frequency_penalty: float = 0.0, presence_penalty: float = 0.0, seed: int | None = None, *args, **kwargs)[source]#
- Parameters:
max_completion_tokens (int, Optional) –
An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
NOTE: Specify this value when using an o1 series model.
max_tokens (int, Optional) –
The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via the API.
This value is now deprecated in favor of max_completion_tokens and IS NOT COMPATIBLE with o1 series models.
temperature (float, Optional) – The temperature parameter for controlling the randomness of the response. Defaults to 1.0.
top_p (float, Optional) – The top-p parameter for controlling the diversity of the response. Defaults to 1.0.
frequency_penalty (float, Optional) – The frequency penalty parameter for penalizing frequently generated tokens. Defaults to 0.
presence_penalty (float, Optional) – The presence penalty parameter for penalizing tokens that are already present in the conversation history. Defaults to 0.
seed (int, Optional) – If specified, OpenAI will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
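A minimal construction sketch using only the sampling parameters documented above. Endpoint, API key, and model or deployment name are not shown; they are assumed to be handled by the OpenAITarget base class through **kwargs or environment configuration, and the specific values below are illustrative only.

    from pyrit.prompt_target import OpenAIChatTarget

    # Connection details (endpoint, key, model/deployment name) are assumed to be
    # supplied via **kwargs or environment configuration handled by OpenAITarget.
    target = OpenAIChatTarget(
        temperature=0.7,             # lower randomness than the 1.0 default
        top_p=0.95,                  # nucleus-sampling cutoff
        seed=42,                     # best-effort deterministic sampling
        max_completion_tokens=512,   # upper bound incl. reasoning tokens (o1 models)
    )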
Methods

__init__([max_completion_tokens, ...])
dispose_db_engine(): Dispose DuckDB database engine to release database connections and resources.
get_identifier()
is_json_response_supported(): Indicates that this target supports JSON response format.
is_response_format_json(request_piece): Checks if the response format is JSON and ensures the target supports it.
send_prompt_async(**kwargs): Sends a normalized prompt async to the prompt target.
set_system_prompt(*, system_prompt, ...[, ...]): Sets the system prompt for the prompt target (see the usage sketch after the attributes list).

Attributes

ADDITIONAL_REQUEST_HEADERS
supported_converters
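A hedged sketch of set_system_prompt from the methods table above. The system_prompt keyword appears in the table; the conversation_id keyword and its uuid value are assumptions based on common PyRIT usage, since the remaining parameters are elided above.

    import uuid

    from pyrit.prompt_target import OpenAIChatTarget

    target = OpenAIChatTarget()  # endpoint/credentials assumed configured elsewhere

    # Tie the system prompt to a specific conversation before sending user turns.
    target.set_system_prompt(
        system_prompt="You are a terse assistant.",
        conversation_id=str(uuid.uuid4()),  # assumed keyword; elided as "..." in the table above
    )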
- is_json_response_supported() → bool [source]#
Indicates that this target supports JSON response format.
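A short usage sketch: callers can consult this flag before requesting JSON-formatted output (target construction details are assumed to be configured elsewhere).

    from pyrit.prompt_target import OpenAIChatTarget

    target = OpenAIChatTarget()  # endpoint/credentials assumed configured elsewhere

    if target.is_json_response_supported():
        # Safe to request a JSON response format from this target.
        ...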
- async send_prompt_async(**kwargs)#
Sends a normalized prompt async to the prompt target.
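A hedged usage sketch for send_prompt_async. Because the accepted **kwargs are not listed above, the prompt_request keyword and the PromptRequestPiece / to_prompt_request_response helpers below are assumptions drawn from common PyRIT patterns, not from this page.

    import asyncio

    from pyrit.models import PromptRequestPiece
    from pyrit.prompt_target import OpenAIChatTarget


    async def main():
        target = OpenAIChatTarget()  # endpoint/credentials assumed configured elsewhere

        # Wrap a single user message in a normalized request object.
        request = PromptRequestPiece(
            role="user",
            original_value="Summarize the goals of red teaming generative AI.",
        ).to_prompt_request_response()

        response = await target.send_prompt_async(prompt_request=request)
        print(response)


    asyncio.run(main())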