# 3. AML Chat Targets
This page shows how to use Azure Machine Learning (AML) managed online endpoints with PyRIT.
## Prerequisites

1. **Deploy an AML managed online endpoint:** Confirm that an Azure Machine Learning managed online endpoint is already deployed.
2. **Obtain the API key and endpoint URI:**
   - Navigate to AML Studio.
   - Go to the 'Endpoints' section.
   - Retrieve the API key and endpoint URI from the 'Consume' tab.
3. **Set the environment variables:**
   - Add the obtained API key to an environment variable named `AZURE_ML_KEY`. This is the default API key used when the target is instantiated.
   - Add the obtained endpoint URI to an environment variable named `AZURE_ML_MANAGED_ENDPOINT`. This is the default endpoint URI used when the target is instantiated.
   - Optionally, add further API key and endpoint URI environment variables to your `.env` file for other deployed models (e.g. `mistralai-Mixtral-8x7B-Instruct-v01`, `Phi-3.5-MoE-instruct`, `Llama-3.2-3B-Instruct`) and pass their names as arguments to the `_set_env_configuration_vars` function to interact with those models.
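For instance, a `.env` file covering the default endpoint plus one additional deployment might look like the following (the extra variable names and all values here are illustrative placeholders, not part of PyRIT's defaults):

```
AZURE_ML_KEY="<your-default-api-key>"
AZURE_ML_MANAGED_ENDPOINT="<your-default-endpoint-uri>"
AZURE_ML_MISTRAL_KEY="<api-key-for-a-second-deployment>"
AZURE_ML_MISTRAL_ENDPOINT="<endpoint-uri-for-a-second-deployment>"
```

The two extra variable names could then be passed to `_set_env_configuration_vars` to point the target at the second deployment.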
## Create an `AzureMLChatTarget`
After deploying a model and populating your `.env` file, you can send prompts to the model using the `AzureMLChatTarget` class. Model parameters can be passed upon instantiation or set using the `_set_model_parameters()` function. `**param_kwargs` allows setting other parameters not explicitly shown in the constructor. A general list of adjustable parameters can be found at https://huggingface.co/docs/api-inference/tasks/text-generation, but note that not all parameters take effect for every model. The parameters a given model supports can usually be found on the 'Consume' tab when you navigate to your endpoint in AML Studio.
```python
from pyrit.common import default_values
from pyrit.orchestrator import PromptSendingOrchestrator
from pyrit.prompt_target import AzureMLChatTarget

default_values.load_environment_files()

# Defaults to endpoint and api_key pulled from the AZURE_ML_MANAGED_ENDPOINT and AZURE_ML_KEY environment variables
azure_ml_chat_target = AzureMLChatTarget()
# The environment variable args can be adjusted below as needed for your specific model.
azure_ml_chat_target._set_env_configuration_vars(
    endpoint_uri_environment_variable="AZURE_ML_MANAGED_ENDPOINT", api_key_environment_variable="AZURE_ML_KEY"
)
# Parameters such as temperature and repetition_penalty can be set using the _set_model_parameters() function.
azure_ml_chat_target._set_model_parameters(temperature=0.9, repetition_penalty=1.3)

with PromptSendingOrchestrator(objective_target=azure_ml_chat_target) as orchestrator:
    response = await orchestrator.send_prompts_async(prompt_list=["Hello! Describe yourself and the company who developed you."])  # type: ignore
    await orchestrator.print_conversations_async()  # type: ignore
```
```
Conversation ID: affdfc08-b5c5-42da-bb59-3e989e046b13
user: Hello! Describe yourself and the company who developed you.
assistant: I am an assistant powered by artificial intelligence, developed by the company Mistral AI. I am designed to help users with a wide range of tasks, such as answering questions, setting reminders, providing information, and much more.
Mistral AI is a cutting-edge AI company based in Paris, France. Our mission is to create intelligent assistants that can understand and interact with users in a natural, intuitive way. We believe that AI has the potential to revolutionize the way we live and work, and we are dedicated to making this vision a reality.
Our team is composed of talented researchers and engineers with a passion for AI and a deep understanding of the latest technologies and techniques. We are committed to building AI assistants that are not only intelligent and capable, but also respectful of users' privacy and security.
I am proud to be a part of the Mistral AI family, and I am excited to help you with your needs and questions.
```
You can then use this target anywhere you would use a `PromptTarget` object. For example, you can create a red teaming orchestrator, use this target in place of the `AzureOpenAI` target, and run the Gandalf or Crucible demos with this AML model. This is also shown in the Red Teaming Orchestrator documentation.