You can deploy a model locally for limited testing and troubleshooting. To do so, you will need to have Docker installed on your local machine.

If you are using an Azure Machine Learning Compute Instance for development, you can also deploy locally on your compute instance.
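Because local deployment runs the web service in a Docker container, it can be useful to confirm that the Docker CLI is available before creating the configuration. The check below is a minimal sketch using only base R; it is an illustration and not part of the azuremlsdk API.

# Illustrative check that the Docker CLI is on the PATH of the local machine
docker_check <- tryCatch(
  suppressWarnings(system2("docker", "--version", stdout = TRUE, stderr = TRUE)),
  error = function(e) character(0)
)
if (length(docker_check) == 0) {
  stop("Docker CLI not found; Docker is required for local deployment.")
}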

Usage

local_webservice_deployment_config(port = NULL)

Arguments

port

An integer specifying the local port on which to expose the service's HTTP endpoint.

Value

A LocalWebserviceDeploymentConfiguration object to use when deploying the web service.

Examples

deployment_config <- local_webservice_deployment_config(port = 8890)
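The sketch below shows how the returned configuration might be used in a fuller local deployment. It assumes a model has already been registered and that a scoring script and environment exist; the names "my-model", "score.R", "deploy-env", and "my-local-service" are hypothetical, and the surrounding calls (load_workspace_from_config(), get_model(), inference_config(), r_environment(), deploy_model(), wait_for_deployment()) belong to the wider azuremlsdk workflow, so check their own help pages for exact arguments.

library(azuremlsdk)

# Connect to the workspace and look up a previously registered model
ws <- load_workspace_from_config()
model <- get_model(ws, name = "my-model")   # hypothetical model name

# Scoring script and environment for the service (assumed to already exist)
inf_config <- inference_config(
  entry_script = "score.R",                 # hypothetical entry script
  environment = r_environment("deploy-env") # hypothetical environment name
)

# Deploy locally, exposing the service's HTTP endpoint on port 8890
deployment_config <- local_webservice_deployment_config(port = 8890)
service <- deploy_model(
  ws,
  name = "my-local-service",
  models = list(model),
  inference_config = inf_config,
  deployment_config = deployment_config
)
wait_for_deployment(service, show_output = TRUE)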