8. OpenAI Responses Target#

In this demo, we show an example of the OpenAIResponseTarget. The Responses API is a newer protocol than Chat Completions and provides additional functionality through a somewhat modified API. Supported input types include text, image, web search, file search, functions, reasoning, and computer use.
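
For example, an image can be sent alongside text by combining multiple message pieces into one message. The snippet below is only a minimal sketch and is not part of the demo that follows: the file path is hypothetical, and both the "image_path" data type and image input on the deployed model are assumed to be available.

from pyrit.models import Message, MessagePiece

# Hypothetical multimodal request: one text piece plus one local image piece.
text_piece = MessagePiece(
    role="user",
    original_value="Describe this screenshot.",
    original_value_data_type="text",
)
image_piece = MessagePiece(
    role="user",
    original_value="demo/screenshot.png",  # hypothetical local file path
    original_value_data_type="image_path",
)
multimodal_request = Message(message_pieces=[text_piece, image_piece])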

Before you begin, ensure you are set up with the correct version of PyRIT installed and have secrets configured as described here.

OpenAI Configuration#

Like most targets, all OpenAITargets need an endpoint and often also need a model and a key. These can be passed into the constructor or configured with environment variables (or in .env), as shown in the sketch after this list.

  • endpoint: The API endpoint (OPENAI_RESPONSES_ENDPOINT environment variable). For OpenAI, this is just “https://api.openai.com/v1/responses”.

  • api_key: The API key for authentication (OPENAI_RESPONSES_KEY environment variable).

  • model_name: The model to use (OPENAI_RESPONSES_MODEL environment variable). For OpenAI, this can be any available model name; the available models are listed here: “https://platform.openai.com/docs/models”.
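
The same values can also be passed directly to the constructor instead of relying on environment variables. Below is a minimal sketch; the model name is only an example.

import os

from pyrit.prompt_target import OpenAIResponseTarget
from pyrit.setup import IN_MEMORY, initialize_pyrit

initialize_pyrit(memory_db_type=IN_MEMORY)

# Explicit configuration; each argument falls back to the corresponding
# OPENAI_RESPONSES_* environment variable when omitted.
explicit_target = OpenAIResponseTarget(
    endpoint="https://api.openai.com/v1/responses",
    api_key=os.getenv("OPENAI_RESPONSES_KEY"),
    model_name="gpt-4o",  # example model name
    api_version=None,  # not needed for api.openai.com (see later cells)
)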

from pyrit.executor.attack import ConsoleAttackResultPrinter, PromptSendingAttack
from pyrit.prompt_target import OpenAIResponseTarget
from pyrit.setup import IN_MEMORY, initialize_pyrit

initialize_pyrit(memory_db_type=IN_MEMORY)

target = OpenAIResponseTarget()
# For an AzureOpenAI endpoint with Entra ID authentication enabled, use the following command instead. Make sure to run `az login` first.
# target = OpenAIResponseTarget(use_entra_auth=True)

attack = PromptSendingAttack(objective_target=target)

result = await attack.execute_async(objective="Tell me a joke")  # type: ignore
await ConsoleAttackResultPrinter().print_conversation_async(result=result)  # type: ignore
────────────────────────────────────────────────────────────────────────────────────────────────────
🔹 Turn 1 - USER
────────────────────────────────────────────────────────────────────────────────────────────────────
  Tell me a joke

────────────────────────────────────────────────────────────────────────────────────────────────────
🔸 ASSISTANT
────────────────────────────────────────────────────────────────────────────────────────────────────
  Why don’t scientists trust atoms?
    Because they make up everything!

────────────────────────────────────────────────────────────────────────────────────────────────────

Tool Use with Custom Functions#

In this example, we demonstrate how the OpenAI Responses API can be used to invoke a custom-defined Python function during a conversation. This is part of OpenAI’s support for “function calling”, where the model decides to call a registered function, and the application executes it and passes the result back into the conversation loop.

We define a simple tool called get_current_weather, which simulates weather information retrieval. A corresponding OpenAI tool schema describes the function name, parameters, and expected input format.

The function is registered in the custom_functions argument of OpenAIResponseTarget. The extra_body_parameters include:

  • tools: the full OpenAI tool schema for get_current_weather.

  • tool_choice: "auto": lets the model decide when to call the function.

The user prompt explicitly asks the model to use the get_current_weather function. Once the model responds with a function_call, PyRIT executes the function, wraps the output, and the conversation continues until a final answer is produced.

This showcases how agentic function execution works with PyRIT + OpenAI Responses API.
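
For reference, the function_call item that the model emits, and that PyRIT intercepts before invoking the registered function, looks roughly like the sketch below. The values are made up, and the exact shape can vary by model and API version.

# Illustrative sketch of a Responses API function_call output item.
# PyRIT parses an item like this, invokes the matching entry in
# custom_functions, and sends the result back to the model.
example_function_call_item = {
    "type": "function_call",
    "call_id": "call_abc123",  # made-up identifier
    "name": "get_current_weather",
    "arguments": '{"location": "Boston, MA", "unit": "celsius"}',
}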

from pyrit.models import Message, MessagePiece
from pyrit.setup import IN_MEMORY, initialize_pyrit

initialize_pyrit(memory_db_type=IN_MEMORY)


async def get_current_weather(args):
    # Simulated weather lookup: returns a canned result that echoes the
    # requested location and unit rather than calling a real service.
    return {
        "weather": "Sunny",
        "temp_c": 22,
        "location": args["location"],
        "unit": args["unit"],
    }


# Responses API function tool schema (flat, no nested "function" key)
function_tool = {
    "type": "function",
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City and state"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location", "unit"],
        "additionalProperties": False,
    },
    "strict": True,
}

# Let the model auto-select tools
target = OpenAIResponseTarget(
    custom_functions={"get_current_weather": get_current_weather},
    extra_body_parameters={
        "tools": [function_tool],
        "tool_choice": "auto",
    },
    httpx_client_kwargs={"timeout": 60.0},
    # api_version=None,  # this can be uncommented if using api.openai.com
)

# Build the user prompt
message_piece = MessagePiece(
    role="user",
    original_value="What is the weather in Boston in celsius? Use the get_current_weather function.",
    original_value_data_type="text",
)
prompt_request = Message(message_pieces=[message_piece])

response = await target.send_prompt_async(prompt_request=prompt_request)  # type: ignore

for idx, piece in enumerate(response.message_pieces):
    print(f"{idx} | {piece.role}: {piece.original_value}")
0 | assistant: The current weather in Boston is Sunny with a temperature of 22°C.

Using the Built-in Web Search Tool#

In this example, we use a built-in PyRIT helper function web_search_tool() to register a web search tool with OpenAI’s Responses API. This allows the model to issue web search queries during a conversation to supplement its responses with fresh information.

The tool is added to the extra_body_parameters passed into the OpenAIResponseTarget. As before, tool_choice="auto" enables the model to decide when to invoke the tool.

The user prompt asks for a recent positive news story, an open-ended question that may prompt the model to issue web search tool calls. Because web search is a built-in (hosted) tool, the search calls are executed on the OpenAI side, and their results, along with the final answer, are returned as part of the response.

This example demonstrates how retrieval-augmented generation (RAG) can be enabled in PyRIT through OpenAI’s Responses API and integrated tool schema.

Note that web search is NOT supported through an Azure OpenAI endpoint, only through the OpenAI platform endpoint (i.e., api.openai.com).

import os

from pyrit.common.tool_configs import web_search_tool
from pyrit.models import Message, MessagePiece
from pyrit.prompt_target.openai.openai_response_target import OpenAIResponseTarget
from pyrit.setup import IN_MEMORY, initialize_pyrit

initialize_pyrit(memory_db_type=IN_MEMORY)

# Note: web search is not yet supported by Azure OpenAI endpoints so we'll use OpenAI from here on.
target = OpenAIResponseTarget(
    endpoint=os.getenv("PLATFORM_OPENAI_RESPONSES_ENDPOINT"),
    api_key=os.getenv("PLATFORM_OPENAI_RESPONSES_KEY"),
    model_name=os.getenv("PLATFORM_OPENAI_RESPONSES_MODEL"),
    api_version=None,
    extra_body_parameters={
        "tools": [web_search_tool()],
        "tool_choice": "auto",
    },
    httpx_client_kwargs={"timeout": 60},
)

message_piece = MessagePiece(
    role="user", original_value="Briefly, what is one positive news story from today?", original_value_data_type="text"
)
prompt_request = Message(message_pieces=[message_piece])

response = await target.send_prompt_async(prompt_request=prompt_request)  # type: ignore

for idx, piece in enumerate(response.message_pieces):
    if piece.original_value_data_type != "reasoning":
        print(f"{idx} | {piece.role}: {piece.original_value}")
1 | assistant: {"id":"ws_0f4da57d2d3f46590069004c09cd10819d9094acc94db1e9b8","type":"web_search_call","status":"completed","action":{"type":"search","query":"October 28 2025 positive news story"}}
3 | assistant: {"id":"ws_0f4da57d2d3f46590069004c0c6018819d8748260711df9d07","type":"web_search_call","status":"completed","action":{"type":"search","query":"site:reuters.com October 28 2025 positive news Reuters"}}
5 | assistant: {"id":"ws_0f4da57d2d3f46590069004c0eb990819d9badc437da40d3d4","type":"web_search_call","status":"completed","action":{"type":"search","query":"Oct 28 2025 Reuters conservation positive Oct 28"}}
7 | assistant: {"id":"ws_0f4da57d2d3f46590069004c12830c819d9af29ce9359bb845","type":"web_search_call","status":"completed","action":{"type":"search","query":"October 28 2025 'good news' site:goodnewsnetwork.org"}}
9 | assistant: {"id":"ws_0f4da57d2d3f46590069004c156f24819d84a01c0a9434429c","type":"web_search_call","status":"completed","action":{"type":"search","query":"Oct 28 2025 Greenpeace positive story Reuters Oct 28"}}
11 | assistant: {"id":"ws_0f4da57d2d3f46590069004c18abd0819d8812b7df936429ab","type":"web_search_call","status":"completed","action":{"type":"search","query":"Reuters Oct 28 2025 scientists discover"}}
13 | assistant: {"id":"ws_0f4da57d2d3f46590069004c1c0a20819d8dfae61d9a38e390","type":"web_search_call","status":"completed","action":{"type":"search","query":"AP News October 28 2025 good news"}}
15 | assistant: {"id":"ws_0f4da57d2d3f46590069004c1f720c819da2b09372d48d1386","type":"web_search_call","status":"completed","action":{"type":"search","query":"Oct 28 2025 Reuters child mortality decline UN Oct 28 2025"}}
17 | assistant: {"id":"ws_0f4da57d2d3f46590069004c22eadc819dbe71f68926b4635a","type":"web_search_call","status":"completed","action":{"type":"search","query":"Oct 28 2025 CNN good news story"}}
19 | assistant: {"id":"ws_0f4da57d2d3f46590069004c26c934819d90506cad3f9d1352","type":"web_search_call","status":"completed","action":{"type":"open_page","url":"https://www.reuters.com/business/energy/us-department-energy-forms-1-billion-supercomputer-ai-partnership-with-amd-2025-10-27/"}}
21 | assistant: {"id":"ws_0f4da57d2d3f46590069004c2a2bc0819d900dc446636516ef","type":"web_search_call","status":"completed","action":{"type":"open_page"}}
23 | assistant: {"id":"ws_0f4da57d2d3f46590069004c2b97ac819d929413078f891848","type":"web_search_call","status":"completed","action":{"type":"search","query":"Oct 28 (Reuters) Positive scientists discover"}}
25 | assistant: One positive story from today: the U.S. Department of Energy announced a $1 billion partnership with AMD to build two next-generation supercomputers—named “Lux” and “Discovery”—that will “supercharge” research into fusion energy, national security and even molecular-level cancer drug discovery ([reuters.com](https://www.reuters.com/business/energy/us-department-energy-forms-1-billion-supercomputer-ai-partnership-with-amd-2025-10-27/)).

Grammar-Constrained Generation#

OpenAI models also support constrained generation in the Responses API. This forces the LLM to produce output that conforms to a given grammar, which is useful when specific syntax is required in the output.

In this example, we define a simple Lark grammar that prevents the model from giving the correct answer to a simple question, and compare the constrained output to that of an unconstrained model.

Note that as of October 2025, this is only supported by OpenAI (not Azure) on “gpt-5”.

from pyrit.setup import IN_MEMORY, initialize_pyrit

initialize_pyrit(memory_db_type=IN_MEMORY)


message_piece = MessagePiece(
    role="user",
    original_value="What is the capital of Italy?",
    original_value_data_type="text",
)
prompt_request = Message(message_pieces=[message_piece])

# Define a grammar that prevents "Rome" from being generated
lark_grammar = r"""
start: "I think that it is " SHORTTEXT 
SHORTTEXT: /[^RrOoMmEe]{1,8}/
"""

grammar_tool = {
    "type": "custom",
    "name": "CitiesGrammar",
    "description": "Constrains generation.",
    "format": {
        "type": "grammar",
        "syntax": "lark",
        "definition": lark_grammar,
    },
}

target = OpenAIResponseTarget(
    endpoint=os.getenv("PLATFORM_OPENAI_RESPONSES_ENDPOINT"),
    api_key=os.getenv("PLATFORM_OPENAI_RESPONSES_KEY"),
    model_name="gpt-5",
    api_version=None,
    extra_body_parameters={"tools": [grammar_tool], "tool_choice": "required"},
    temperature=1.0,
)

unconstrained_target = OpenAIResponseTarget(
    endpoint=os.getenv("PLATFORM_OPENAI_RESPONSES_ENDPOINT"),
    api_key=os.getenv("PLATFORM_OPENAI_RESPONSES_KEY"),
    model_name="gpt-5",
    api_version=None,
    temperature=1.0,
)

unconstrained_result = await unconstrained_target.send_prompt_async(prompt_request=prompt_request)  # type: ignore

result = await target.send_prompt_async(prompt_request=prompt_request)  # type: ignore

print("Unconstrained Response:")
for idx, piece in enumerate(unconstrained_result.message_pieces):
    if piece.original_value_data_type != "reasoning":
        print(f"{idx} | {piece.role}: {piece.original_value}")

print()

print("Constrained Response:")
for idx, piece in enumerate(result.message_pieces):
    if piece.original_value_data_type != "reasoning":
        print(f"{idx} | {piece.role}: {piece.original_value}")
Unconstrained Response:
1 | assistant: Rome.

Constrained Response:
1 | assistant: I think that it is cat