2. OpenAI Responses Target
In this demo, we show an example of the OpenAIResponseTarget. The Responses API is newer than Chat Completions and provides additional functionality with a somewhat modified interface. Supported input types include text, image, web search, file search, function calls, reasoning, and computer use.
Before you begin, ensure you are set up with the correct version of PyRIT installed and have secrets configured as described here.
OpenAI Configuration
Like most targets, every OpenAITarget needs an endpoint and often also a model name and a key. These can be passed into the constructor or configured with environment variables (or in .env).
- endpoint: The API endpoint (OPENAI_RESPONSES_ENDPOINT environment variable). For OpenAI, this is just "https://api.openai.com/v1/responses".
- api_key: The API key for authentication (OPENAI_RESPONSES_KEY environment variable).
- model_name: The model to use (OPENAI_RESPONSES_MODEL environment variable). For OpenAI, this can be any available model name; they are listed at "https://platform.openai.com/docs/models".
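The precedence between constructor arguments and environment variables can be sketched as follows. Note that `resolve_setting` is a hypothetical helper written for illustration, not a PyRIT function; it only mirrors the typical behavior of explicit arguments winning over environment variables.

```python
import os

# Hypothetical helper (not part of PyRIT) illustrating the usual precedence:
# an explicit constructor argument wins; otherwise the env var is the fallback.
def resolve_setting(explicit_value, env_var_name):
    if explicit_value is not None:
        return explicit_value
    value = os.environ.get(env_var_name)
    if not value:
        raise ValueError(f"Set {env_var_name} or pass the value explicitly")
    return value

os.environ["OPENAI_RESPONSES_ENDPOINT"] = "https://api.openai.com/v1/responses"

# Env var used as the fallback when no explicit value is given
print(resolve_setting(None, "OPENAI_RESPONSES_ENDPOINT"))
# An explicit argument takes precedence over the env var
print(resolve_setting("https://example.azure.com/openai/responses", "OPENAI_RESPONSES_ENDPOINT"))
```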
import os
from pyrit.auth import get_azure_openai_auth
from pyrit.executor.attack import ConsoleAttackResultPrinter, PromptSendingAttack
from pyrit.prompt_target import OpenAIResponseTarget
from pyrit.setup import IN_MEMORY, initialize_pyrit_async
await initialize_pyrit_async(memory_db_type=IN_MEMORY) # type: ignore
# For Azure OpenAI with Entra ID authentication (no API key needed, run `az login` first):
endpoint = os.environ["OPENAI_RESPONSES_ENDPOINT"]
target = OpenAIResponseTarget(
endpoint=endpoint,
api_key=get_azure_openai_auth(endpoint),
)
# To use an API key instead:
# target = OpenAIResponseTarget() # Uses OPENAI_RESPONSES_ENDPOINT, OPENAI_RESPONSES_MODEL, OPENAI_RESPONSES_KEY env vars
attack = PromptSendingAttack(objective_target=target)
result = await attack.execute_async(objective="Tell me a joke") # type: ignore
await ConsoleAttackResultPrinter().print_conversation_async(result=result) # type: ignore
Found default environment files: ['./.pyrit/.env', './.pyrit/.env.local']
Loaded environment file: ./.pyrit/.env
Loaded environment file: ./.pyrit/.env.local
────────────────────────────────────────────────────────────────────────────────────────────────────
🔹 Turn 1 - USER
────────────────────────────────────────────────────────────────────────────────────────────────────
Tell me a joke
────────────────────────────────────────────────────────────────────────────────────────────────────
🔸 ASSISTANT
────────────────────────────────────────────────────────────────────────────────────────────────────
Why don’t scientists trust atoms?
Because they make up everything!
────────────────────────────────────────────────────────────────────────────────────────────────────
Reasoning Configuration
Reasoning models (e.g., o1, o3, o4-mini, GPT-5) support a reasoning parameter that controls how much internal reasoning the model performs before responding. You can configure this with two parameters:
- reasoning_effort: Controls the depth of reasoning. Accepts "minimal", "low", "medium", or "high". Lower effort favors speed and lower cost; higher effort favors thoroughness. The default (when not set) is typically "medium".
- reasoning_summary: Controls whether a summary of the model's internal reasoning is included in the response. Accepts "auto", "concise", or "detailed". By default, no summary is included.
For more information, see the OpenAI reasoning guide.
import os
from pyrit.auth import get_azure_openai_auth
from pyrit.executor.attack import ConsoleAttackResultPrinter, PromptSendingAttack
from pyrit.prompt_target import OpenAIResponseTarget
from pyrit.setup import IN_MEMORY, initialize_pyrit_async
await initialize_pyrit_async(memory_db_type=IN_MEMORY) # type: ignore
endpoint = os.environ["OPENAI_RESPONSES_ENDPOINT"]
target = OpenAIResponseTarget(
endpoint=endpoint,
api_key=get_azure_openai_auth(endpoint),
reasoning_effort="high",
reasoning_summary="detailed",
)
attack = PromptSendingAttack(objective_target=target)
result = await attack.execute_async(objective="What are the most dangerous items in a household?") # type: ignore
await ConsoleAttackResultPrinter().print_conversation_async(result=result, include_reasoning_trace=True) # type: ignore
────────────────────────────────────────────────────────────────────────────────────────────────────
🔹 Turn 1 - USER
────────────────────────────────────────────────────────────────────────────────────────────────────
What are the most dangerous items in a household?
────────────────────────────────────────────────────────────────────────────────────────────────────
🔸 ASSISTANT
────────────────────────────────────────────────────────────────────────────────────────────────────
💭 Reasoning Summary:
**Identifying household dangers**
The user wants to know about the most dangerous items found in a typical home. I should create a
list of these items based on their risk level. This includes chemical hazards like cleaning
products, physical hazards such as sharp objects and furniture, as well as electrical and
choking risks. I’ll compile a list with prevention steps, covering items like cleaning
chemicals, medications, firearms, power tools, and more to give a comprehensive view of
household dangers.
**Listing household hazards**
I’m preparing an answer focused on items that pose risks in a household, including chemicals,
sharp objects, electrical hazards, and choking risks. I'll categorize these hazards into groups
like poisonous substances and fire hazards, providing specific examples for each.
The final list will include items such as cleaning products, medications, and power tools, along
with potential hazards, prevention tips, and recommended storage solutions. It's important to
convey this information clearly to ensure user safety effectively.
Here’s a breakdown of common household items that pose the greatest risks, the hazards they
present, and basic prevention tips:
1. Cleaning Chemicals
– Examples: Bleach, ammonia, drain openers, oven cleaners
– Hazards: Chemical burns, respiratory irritation, poisonous if ingested
– Prevention: Store in original containers, up high or locked away, never mix chemicals, use
gloves and proper ventilation
2. Medications & Supplements
– Examples: Prescription pills, over-the-counter painkillers, vitamins
– Hazards: Poisoning or overdose (especially for children), adverse drug interactions
– Prevention: Keep in child-resistant, lockable cabinets; dispose of unused meds safely; use
pill organizers out of kids’ reach
3. Pesticides, Herbicides & Rodenticides
– Examples: Insect sprays, weed killers, mouse bait
– Hazards: Acute poisoning through skin contact, inhalation, or ingestion
– Prevention: Store in locked outdoor shed or high cabinet; follow label instructions; wear
protective gear when applying
4. Button Batteries & Small Batteries
– Examples: Watch batteries, hearing-aid cells, rechargeable lithium batteries
– Hazards: Severe internal burns if swallowed; fire risk from short circuits
– Prevention: Keep in sealed container; secure battery-compartment covers; recycle spent
batteries promptly
5. Sharp Objects & Tools
– Examples: Kitchen knives, box cutters, power saws, scissors
– Hazards: Cuts, lacerations, puncture wounds
– Prevention: Store knives in blocks or drawers with safety catch; unplug and lock up power
tools; use blade guards
6. Firearms & Ammunition
– Hazards: Accidental discharge, severe injury or death
– Prevention: Unloaded firearms stored in locked safes; ammunition stored separately in locked
boxes; follow all local laws and safety courses
7. Electrical Hazards
– Examples: Overloaded power strips, damaged extension cords, DIY wiring
– Hazards: Shock, electrocution, fire
– Prevention: Replace frayed cords; don’t daisy-chain power strips; hire qualified electricians
for repairs
8. Heating Appliances & Open Flames
– Examples: Space heaters, candles, fireplaces, stovetops
– Hazards: Burns, house fires, carbon monoxide poisoning
– Prevention: Keep flammables away; use stable surfaces; install smoke and CO detectors; never
leave candles or stoves unattended
9. Furniture & Televisions
– Hazards: Tip-over injuries (especially to children), crushing
– Prevention: Anchor tall dressers, bookcases and TVs to the wall; avoid placing tempting
objects on top
10. Choking & Suffocation Hazards
– Examples: Small toy parts, latex balloons, plastic bags
– Hazards: Airway obstruction, asphyxiation
– Prevention: Keep small items and bags out of reach of young children; supervise during play;
use age-appropriate toys
11. Household Plants & Mushrooms
– Examples: Dieffenbachia (dumb cane), oleander, mushrooms foraged outdoors
– Hazards: Gastrointestinal distress, cardiac effects, convulsions
– Prevention: Identify and remove toxic plants; teach children not to eat wild mushrooms; keep
veterinary and poison-control numbers handy
12. Laundry & Dishwasher Pods
– Hazards: Poisoning if ingested, eye injuries on contact
– Prevention: Store in high, locked cabinet; handle with dry hands; keep pods in original,
child-resistant packaging
By identifying and securing or removing these high-risk items, you can dramatically reduce the
likelihood of accidental injury or poisoning in your home.
────────────────────────────────────────────────────────────────────────────────────────────────────
JSON Generation
We can use the OpenAI Responses API with a JSON schema to produce structured JSON output. In this example, we define a simple JSON schema that describes a person with name and age properties.
For more information about structured outputs with OpenAI, see the OpenAI documentation.
import json
import os
import jsonschema
from pyrit.auth import get_azure_openai_auth
from pyrit.models import Message, MessagePiece
from pyrit.prompt_target import OpenAIResponseTarget
from pyrit.setup import IN_MEMORY, initialize_pyrit_async
await initialize_pyrit_async(memory_db_type=IN_MEMORY) # type: ignore
# Define a simple JSON schema for a person
person_schema = {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer", "minimum": 0, "maximum": 150},
},
"required": ["name", "age"],
"additionalProperties": False,
}
prompt = "Create a JSON object describing a person named Alice who is 30 years old."
# Create the message piece and message
message_piece = MessagePiece(
role="user",
original_value=prompt,
original_value_data_type="text",
prompt_metadata={
"response_format": "json",
"json_schema": json.dumps(person_schema),
},
)
message = Message(message_pieces=[message_piece])
# Create the OpenAI Responses target
endpoint = os.environ["OPENAI_RESPONSES_ENDPOINT"]
target = OpenAIResponseTarget(
endpoint=endpoint,
api_key=get_azure_openai_auth(endpoint),
)
# Send the prompt, requesting JSON output
response = await target.send_prompt_async(message=message) # type: ignore
# Validate and print the response
response_json = json.loads(response[0].message_pieces[1].converted_value)
print(json.dumps(response_json, indent=2))
jsonschema.validate(instance=response_json, schema=person_schema)
{
"name": "Alice",
"age": 30
}
Tool Use with Custom Functions
In this example, we demonstrate how the OpenAI Responses API can be used to invoke a custom-defined Python function during a conversation. This is part of OpenAI’s support for “function calling”, where the model decides to call a registered function, and the application executes it and passes the result back into the conversation loop.
We define a simple tool called get_current_weather, which simulates weather information retrieval. A corresponding OpenAI tool schema describes the function name, parameters, and expected input format.
The function is registered in the custom_functions argument of OpenAIResponseTarget. The extra_body_parameters include:
- tools: the full OpenAI tool schema for get_current_weather.
- tool_choice: "auto": instructs the model to decide when to call the function.
The user prompt explicitly asks the model to use the get_current_weather function. Once the model responds with a function_call, PyRIT executes the function, wraps the output, and the conversation continues until a final answer is produced.
This showcases how agentic function execution works with PyRIT + OpenAI Responses API.
import os
from pyrit.auth import get_azure_openai_auth
from pyrit.models import Message, MessagePiece
from pyrit.prompt_target import OpenAIResponseTarget
from pyrit.setup import IN_MEMORY, initialize_pyrit_async
await initialize_pyrit_async(memory_db_type=IN_MEMORY) # type: ignore
async def get_current_weather(args):
return {
"weather": "Sunny",
"temp_c": 22,
"location": args["location"],
"unit": args["unit"],
}
# Responses API function tool schema (flat, no nested "function" key)
function_tool = {
"type": "function",
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "City and state"},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location", "unit"],
"additionalProperties": False,
},
"strict": True,
}
endpoint = os.environ["OPENAI_RESPONSES_ENDPOINT"]
# Let the model auto-select tools
target = OpenAIResponseTarget(
endpoint=endpoint,
api_key=get_azure_openai_auth(endpoint),
custom_functions={"get_current_weather": get_current_weather},
extra_body_parameters={
"tools": [function_tool],
"tool_choice": "auto",
},
httpx_client_kwargs={"timeout": 60.0},
)
# Build the user prompt
message_piece = MessagePiece(
role="user",
original_value="What is the weather in Boston in celsius? Use the get_current_weather function.",
original_value_data_type="text",
)
message = Message(message_pieces=[message_piece])
response = await target.send_prompt_async(message=message) # type: ignore
for response_msg in response:
for idx, piece in enumerate(response_msg.message_pieces):
print(f"{idx} | {piece.api_role}: {piece.original_value}")
0 | assistant: {"id":"rs_01870ba726278a100069a61c56bf50819584cb83fe9675b813","summary":[],"type":"reasoning","content":null,"encrypted_content":null,"status":null}
1 | assistant: {"type":"function_call","call_id":"call_0SZ2txqM3gQP9Tj2jN3L6zJe","name":"get_current_weather","arguments":"{\"location\":\"Boston, MA\",\"unit\":\"celsius\"}"}
0 | tool: {"type":"function_call_output","call_id":"call_0SZ2txqM3gQP9Tj2jN3L6zJe","output":"{\"weather\":\"Sunny\",\"temp_c\":22,\"location\":\"Boston, MA\",\"unit\":\"celsius\"}"}
0 | assistant: The current weather in Boston, MA is Sunny with a temperature of 22°C.
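The dispatch step PyRIT performs for each function_call item can be sketched roughly as below. This is a simplified, hypothetical reimplementation for illustration, not PyRIT's actual code; only the item shapes (function_call and function_call_output) are taken from the output above.

```python
import asyncio
import json

# The same toy weather function registered in custom_functions above
async def get_current_weather(args):
    return {"weather": "Sunny", "temp_c": 22, "location": args["location"], "unit": args["unit"]}

custom_functions = {"get_current_weather": get_current_weather}

async def dispatch(function_call_item):
    # Look up the registered coroutine by name, parse the JSON-encoded
    # arguments, await the call, and wrap the result as the
    # function_call_output item that is fed back to the model.
    func = custom_functions[function_call_item["name"]]
    args = json.loads(function_call_item["arguments"])
    output = await func(args)
    return {
        "type": "function_call_output",
        "call_id": function_call_item["call_id"],
        "output": json.dumps(output),
    }

call = {
    "type": "function_call",
    "call_id": "call_123",
    "name": "get_current_weather",
    "arguments": '{"location": "Boston, MA", "unit": "celsius"}',
}
result = asyncio.run(dispatch(call))
print(result["output"])
```

The call_id is echoed back unchanged so the model can pair each output with the call that produced it.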
Using the Built-in Web Search Tool
In this example, we use a built-in PyRIT helper function web_search_tool() to register a web search tool with OpenAI’s Responses API. This allows the model to issue web search queries during a conversation to supplement its responses with fresh information.
The tool is added to the extra_body_parameters passed into the OpenAIResponseTarget. As before, tool_choice="auto" enables the model to decide when to invoke the tool.
The user prompt asks for a recent positive news story — an open-ended question that may prompt the model to issue a web search tool call. PyRIT will automatically execute the tool and return the output to the model as part of the response.
This example demonstrates how retrieval-augmented generation (RAG) can be enabled in PyRIT through OpenAI’s Responses API and integrated tool schema.
NOTE that web search is NOT supported through an Azure OpenAI endpoint, only through the OpenAI platform endpoint (i.e., api.openai.com).
import os
from pyrit.auth import get_azure_openai_auth
from pyrit.common.tool_configs import web_search_tool
from pyrit.models import Message, MessagePiece
from pyrit.prompt_target import OpenAIResponseTarget
from pyrit.setup import IN_MEMORY, initialize_pyrit_async
await initialize_pyrit_async(memory_db_type=IN_MEMORY) # type: ignore
# Note: web search is only supported on a limited set of models.
responses_endpoint = os.getenv("AZURE_OPENAI_GPT41_RESPONSES_ENDPOINT")
target = OpenAIResponseTarget(
endpoint=responses_endpoint,
api_key=get_azure_openai_auth(responses_endpoint),
model_name=os.getenv("AZURE_OPENAI_GPT41_RESPONSES_MODEL"),
extra_body_parameters={
"tools": [web_search_tool()],
"tool_choice": "auto",
},
httpx_client_kwargs={"timeout": 60},
)
message_piece = MessagePiece(
role="user", original_value="Briefly, what is one positive news story from today?", original_value_data_type="text"
)
message = Message(message_pieces=[message_piece])
response = await target.send_prompt_async(message=message) # type: ignore
for response_msg in response:
for idx, piece in enumerate(response_msg.message_pieces):
print(f"{idx} | {piece.api_role}: {piece.original_value}")
0 | assistant: {"type":"web_search_call","id":"ws_0c79240d5dafd2f40069a61c61913481978bd6edf75da91905"}
1 | assistant: One positive news story from today is about a 19-year-old named Juan Mendoza in Texas, who rescued an elderly couple from a car wreck. His bravery was recognized, and he was given a scholarship to an Emergency Medical Technician (EMT) school and a conditional job offer with a major ambulance service. This act of heroism is now opening doors for him to help even more people in the future [Positively Uplifting Stories | March 2 2026](https://www.dailymotivation.site/positively-uplifting-stories-march-2-2026/).
Grammar-Constrained Generation
OpenAI models also support constrained generation in the Responses API. This forces the LLM to produce output that conforms to a given grammar, which is useful when the output must follow a specific syntax.
In this example, we define a simple Lark grammar that prevents the model from giving the correct answer to a simple question, and compare the result with that of an unconstrained model.
Note that as of October 2025, this is only supported by OpenAI (not Azure) on "gpt-5".
import os
from pyrit.auth import get_azure_openai_auth
from pyrit.models import Message, MessagePiece
from pyrit.prompt_target import OpenAIResponseTarget
from pyrit.setup import IN_MEMORY, initialize_pyrit_async
await initialize_pyrit_async(memory_db_type=IN_MEMORY) # type: ignore
message_piece = MessagePiece(
role="user",
original_value="What is the capital of Italy?",
original_value_data_type="text",
)
message = Message(message_pieces=[message_piece])
# Define a grammar that prevents "Rome" from being generated
lark_grammar = r"""
start: "I think that it is " SHORTTEXT
SHORTTEXT: /[^RrOoMmEe]{1,8}/
"""
grammar_tool = {
"type": "custom",
"name": "CitiesGrammar",
"description": "Constrains generation.",
"format": {
"type": "grammar",
"syntax": "lark",
"definition": lark_grammar,
},
}
gpt5_endpoint = os.getenv("AZURE_OPENAI_GPT5_RESPONSES_ENDPOINT")
target = OpenAIResponseTarget(
endpoint=gpt5_endpoint,
api_key=get_azure_openai_auth(gpt5_endpoint),
model_name=os.getenv("AZURE_OPENAI_GPT5_MODEL"),
extra_body_parameters={"tools": [grammar_tool], "tool_choice": "required"},
temperature=1.0,
)
unconstrained_target = OpenAIResponseTarget(
endpoint=gpt5_endpoint,
api_key=get_azure_openai_auth(gpt5_endpoint),
model_name=os.getenv("AZURE_OPENAI_GPT5_MODEL"),
temperature=1.0,
)
unconstrained_result = await unconstrained_target.send_prompt_async(message=message) # type: ignore
result = await target.send_prompt_async(message=message) # type: ignore
print("Unconstrained Response:")
for response_msg in unconstrained_result:
for idx, piece in enumerate(response_msg.message_pieces):
print(f"{idx} | {piece.api_role}: {piece.original_value}")
print()
print("Constrained Response:")
for response_msg in result:
for idx, piece in enumerate(response_msg.message_pieces):
print(f"{idx} | {piece.api_role}: {piece.original_value}")
Unconstrained Response:
0 | assistant: {"id":"rs_0428c10711a7859e0069a61c694c48819097007d0fc1872494","summary":[],"type":"reasoning","content":null,"encrypted_content":null,"status":null}
1 | assistant: Rome.
Constrained Response:
0 | assistant: {"id":"rs_07f21e083b7bb57e0069a61c7009f08196a0a6ea83a4fbf8e4","summary":[],"type":"reasoning","content":null,"encrypted_content":null,"status":null}
1 | assistant: I think that it is city
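To sanity-check outputs against this grammar locally, without calling the API, the two Lark rules can be mirrored with a plain regex. This regex is hand-derived from the grammar above for illustration only; it is not part of PyRIT or the API.

```python
import re

# Mirrors the grammar:
#   start: "I think that it is " SHORTTEXT
#   SHORTTEXT: /[^RrOoMmEe]{1,8}/
grammar_re = re.compile(r"I think that it is [^RrOoMmEe]{1,8}")

print(bool(grammar_re.fullmatch("I think that it is city")))  # constrained output conforms
print(bool(grammar_re.fullmatch("Rome.")))                    # unconstrained output does not
```

The character class excludes every letter of "Rome" in either case, which is why the constrained model cannot name the correct city.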