R/webservice-local.R
local_webservice_deployment_config.Rd
You can deploy a model locally for limited testing and troubleshooting. To do so, you must have Docker installed on your local machine. If you are developing on an Azure Machine Learning compute instance, you can also deploy locally on that instance.
local_webservice_deployment_config(port = NULL)
port: An integer specifying the local port on which to expose the service's HTTP endpoint.
The LocalWebserviceDeploymentConfiguration object.
deployment_config <- local_webservice_deployment_config(port = 8890)
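A deployment configuration on its own does nothing; it is passed to a deployment call. The sketch below shows one way the config might be used end to end. It assumes a workspace object `ws`, a registered model `model`, and an `inference_config` object already exist (for example via get_workspace(), get_model(), and inference_config()); those names are placeholders, not values from this page.

```r
library(azuremlsdk)

# Local deployment config exposing the service's HTTP endpoint on port 8890.
deployment_config <- local_webservice_deployment_config(port = 8890)

# Sketch only: `ws`, `model`, and `inference_config` are assumed to have
# been created earlier and are not defined on this page.
service <- deploy_model(ws,
                        name = "my-local-service",
                        models = list(model),
                        inference_config = inference_config,
                        deployment_config = deployment_config)

# Block until the local Docker container is up and report progress.
wait_for_deployment(service, show_output = TRUE)
```

Because the service runs in a local Docker container, it can be exercised and debugged on the development machine before a deployment to remote compute is attempted.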