Installation

Install the azureml SDK package

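A minimal sketch of the install step, assuming the package is available from CRAN; `install_azureml()` sets up the Python dependencies the R package wraps:

```r
# Install the R package from CRAN, then set up the azureml Python
# dependencies it wraps (run once per machine or environment).
install.packages("azuremlsdk")
azuremlsdk::install_azureml()
```
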
Workspaces

Functions for managing workspace resources. A Workspace is the top-level resource for Azure Machine Learning. It provides a centralized place to work with all the artifacts you create when you use Azure ML.

Create a new Azure Machine Learning workspace

Get an existing workspace

Manages authentication using a service principal instead of a user identity

Load workspace configuration details from a config file

Write out the workspace configuration details to a config file

Get the default datastore for a workspace

Set the default datastore for a workspace

Delete a workspace

List all workspaces that the user has access to in a subscription ID

Get the details of a workspace

Get the default keyvault for a workspace

Add secrets to a keyvault

Get secrets from a keyvault

Delete secrets from a keyvault

List the secrets in a keyvault

Manages authentication and acquires an authorization token in interactive login workflows

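A minimal sketch of the workspace lifecycle these functions cover, assuming placeholder values for the subscription ID, resource group, and secret:

```r
library(azuremlsdk)

# Create a workspace (one-time setup); the subscription ID, resource group,
# and region below are placeholders.
ws <- create_workspace(name = "myworkspace",
                       subscription_id = "<subscription-id>",
                       resource_group = "myresourcegroup",
                       location = "eastus",
                       create_resource_group = TRUE)

# Persist the connection details so later sessions can simply reload them.
write_workspace_config(ws)
ws <- load_workspace_from_config()

# Workspace utilities: default datastore and key vault secrets.
ds <- get_default_datastore(ws)
kv <- get_default_keyvault(ws)
set_secrets(kv, list(dbpassword = "<secret-value>"))
```
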
Compute targets

Functions for managing compute resources. A Compute Target is a designated compute resource where you run your scripts or host your service deployments. Compute targets make it easy to change your compute environment without changing your code. Supported compute target types in the R SDK include AmlCompute and AksCompute.

Create an AmlCompute cluster

Get the details (e.g., IP address, port) of all the compute nodes in the compute target

Update scale settings for an AmlCompute cluster

Create an AksCompute cluster

Get the credentials for an AksCompute cluster

Attach an existing AKS cluster to a workspace

Detach an AksCompute cluster from its associated workspace

Get an existing compute cluster

Wait for a cluster to finish provisioning

List the supported VM sizes in a region

Delete a cluster

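A minimal sketch of provisioning and reusing an AmlCompute cluster; the cluster name and VM size are illustrative choices:

```r
library(azuremlsdk)

ws <- load_workspace_from_config()

# Provision a CPU cluster that autoscales between 0 and 4 nodes.
compute_target <- create_aml_compute(workspace = ws,
                                     cluster_name = "cpu-cluster",
                                     vm_size = "STANDARD_D2_V2",
                                     min_nodes = 0,
                                     max_nodes = 4)
wait_for_provisioning_completion(compute_target, show_output = TRUE)

# Later: look the cluster up by name, or tear it down when no longer needed.
compute_target <- get_compute(ws, cluster_name = "cpu-cluster")
# delete_compute(compute_target)
```
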
Working with data

Functions for accessing your data in Azure Storage services. A Datastore is attached to a workspace and is used to store connection information to an Azure storage service.

Upload files to the Azure storage a datastore points to

Upload a local directory to the Azure storage a datastore points to

Download data from a datastore to the local file system

Get an existing datastore

Register an Azure blob container as a datastore

Register an Azure file share as a datastore

Initialize a new Azure SQL database datastore

Initialize a new Azure PostgreSQL datastore

Initialize a new Azure Data Lake Gen2 datastore

Unregister a datastore from its associated workspace

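A minimal sketch of moving data through the workspace's default datastore; the local and remote paths are illustrative:

```r
library(azuremlsdk)

ws <- load_workspace_from_config()
ds <- get_default_datastore(ws)

# Upload a local folder to the datastore's backing storage,
# then pull it back down elsewhere.
upload_to_datastore(ds,
                    src_dir = "./data",
                    target_path = "datasets/iris",
                    overwrite = TRUE)
download_from_datastore(ds,
                        target_path = "./downloaded",
                        prefix = "datasets/iris")
```
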
Working with datasets

Functions for managing datasets. An Azure Machine Learning Dataset allows you to interact with data in your datastores and package your data into a consumable object for machine learning tasks. Datasets can be created from local files, public URLs, or specific file(s) in your datastores. Azure ML supports two Dataset types: FileDataset and TabularDataset.

Register a Dataset in the workspace

Unregister all versions under the registration name of this dataset from the workspace

Get a registered Dataset from the workspace by its registration name

Get a Dataset by ID

Return the named list for input datasets

Create a FileDataset to represent file streams

Get a list of file paths for each file stream defined by the dataset

Download file streams defined by the dataset as local files

Create a context manager for mounting file streams defined by the dataset as local files

Skip file streams from the top of the dataset by the specified count

Take a sample of file streams from the top of the dataset by the specified count

Take a random sample of file streams in the dataset, approximately by the probability specified

Split file streams in the dataset into two parts, randomly and approximately by the percentage specified

Create an unregistered, in-memory Dataset from Parquet files

Create an unregistered, in-memory Dataset from delimited files

Create a TabularDataset to represent tabular data in JSON Lines files (http://jsonlines.org/)

Create a TabularDataset to represent tabular data in SQL databases

Drop the specified columns from the dataset

Keep the specified columns and drop all others from the dataset

Filter a TabularDataset with timestamp columns to data after a specified start time

Filter a TabularDataset with timestamp columns to data before a specified end time

Filter a TabularDataset to data between a specified start and end time

Filter a TabularDataset to contain only the specified duration (amount) of recent data

Define timestamp columns for the dataset

Load all records from the dataset into a data frame

Convert the current dataset into a FileDataset containing CSV files

Convert the current dataset into a FileDataset containing Parquet files

Configure conversion to bool

Configure conversion to datetime

Configure conversion to 53-bit double

Configure conversion to 64-bit integer

Configure conversion to string

Defines options for how column headers are processed when reading data from files to create a dataset

Represents a path to data in a datastore

Represents how to deliver the dataset to a compute target

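A minimal sketch of the TabularDataset workflow, assuming the CSV path used with `data_path()` already exists on the datastore (the file name is illustrative):

```r
library(azuremlsdk)

ws <- load_workspace_from_config()
ds <- get_default_datastore(ws)

# Point a TabularDataset at a CSV file previously uploaded to the datastore,
# register it in the workspace, and read it back as a data frame.
iris_ds <- create_tabular_dataset_from_delimited_files(
  path = data_path(ds, "datasets/iris/iris.csv"))
register_dataset(ws, iris_ds, name = "iris", create_new_version = TRUE)

iris_ds <- get_dataset_by_name(ws, name = "iris")
df <- load_dataset_into_data_frame(iris_ds)
head(df)
```
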
Environments

Functions for managing environments. An Azure Machine Learning Environment allows you to create, manage, and reuse the software dependencies required for training and deployment. Environments specify the R packages, environment variables, and software settings around your training and scoring scripts for your containerized training runs and deployments. They are managed and versioned entities within your Azure ML workspace that enable reproducible, auditable, and portable machine learning workflows across different compute targets.

Create an environment

Specify a CRAN package to install in the environment

Specify a GitHub package to install in the environment

Register an environment in the workspace

Get an existing environment

Specify Azure Container Registry details

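A minimal sketch of defining, registering, and retrieving an environment; the environment name and CRAN packages are illustrative choices:

```r
library(azuremlsdk)

ws <- load_workspace_from_config()

# Define a reusable environment with a couple of CRAN dependencies,
# then register it in the workspace and retrieve it by name.
env <- r_environment(name = "train-env",
                     cran_packages = list(cran_package("caret"),
                                          cran_package("glmnet")))
register_environment(env, ws)
env <- get_environment(ws, name = "train-env")
```
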
Training & experimentation

Functions for managing experiments and runs. An Experiment is a grouping of the collection of runs from a specified script. A Run represents a single trial of an experiment. A run is the object used to monitor the asynchronous execution of a trial, log metrics and store output of the trial, and to analyze results and access artifacts generated by the trial. Supported run types in the R SDK include script runs (submitted with an estimator) and interactive logging runs.

Create an Azure Machine Learning experiment

Return a generator of the runs for an experiment

Submit an experiment and return the active created run

Wait for the completion of a run

Create an estimator

Create an interactive logging run

Mark a run as completed

Get the context object for a run

Get an experiment run

Get the details of a run

Get the details of a run along with the log files' contents

Get the metrics logged to a run

Get secrets from the keyvault associated with a run's workspace

Cancel a run

List the files that are stored in association with a run

Download a file from a run

Download files from a run

Upload files to a run

Upload a folder to a run

Log a metric to a run

Log an accuracy table metric to a run

Log a confusion matrix metric to a run

Log an image metric to a run

Log a vector metric value to a run

Log a predictions metric to a run

Log a residuals metric to a run

Log a row metric to a run

Log a table metric to a run

Generate a table of run details

Specify a CRAN package to install in the environment

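A minimal sketch of submitting a script run with an estimator and reading back its metrics, assuming an illustrative `train.R` script and the `cpu-cluster` target from the compute sketch above:

```r
library(azuremlsdk)

ws <- load_workspace_from_config()
exp <- experiment(ws, name = "train-iris")

# Configure a script run; the script, folder, and compute target name
# are illustrative and assumed to exist.
est <- estimator(source_directory = "./scripts",
                 entry_script = "train.R",
                 compute_target = "cpu-cluster",
                 cran_packages = list(cran_package("caret")))

run <- submit_experiment(exp, est)
wait_for_run_completion(run, show_output = TRUE)
metrics <- get_run_metrics(run)

# Inside train.R, metrics are logged against the current run context:
#   run <- get_current_run()
#   log_metric_to_run("accuracy", 0.95, run = run)
```
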
Hyperparameter tuning

Functions for configuring and managing hyperparameter tuning (HyperDrive) experiments. Azure ML’s HyperDrive functionality enables you to automate hyperparameter tuning of your machine learning models. For example, you can define the parameter search space as discrete or continuous, and a sampling method over the search space as random, grid, or Bayesian. Also, you can specify a primary metric to optimize in the hyperparameter tuning experiment, and whether to minimize or maximize that metric. You can also define early termination policies in which poorly performing experiment runs are canceled and new ones started.

Create a configuration for a HyperDrive run

Define random sampling over a hyperparameter search space

Define grid sampling over a hyperparameter search space

Define Bayesian sampling over a hyperparameter search space

Specify a discrete set of options to sample from

Specify a set of random integers in the range [0, upper)

Specify a uniform distribution of options to sample from

Specify a uniform distribution of the form round(uniform(min_value, max_value) / q) * q

Specify a log uniform distribution

Specify a uniform distribution of the form round(exp(uniform(min_value, max_value)) / q) * q

Specify a real value that is normally distributed with mean mu and standard deviation sigma

Specify a normal distribution of the form round(normal(mu, sigma) / q) * q

Specify a normal distribution of the form exp(normal(mu, sigma))

Specify a normal distribution of the form round(exp(normal(mu, sigma)) / q) * q

Define supported metric goals for hyperparameter tuning

Define a Bandit policy for early termination of HyperDrive runs

Define a median stopping policy for early termination of HyperDrive runs

Define a truncation selection policy for early termination of HyperDrive runs

Return the best performing run amongst all completed runs

Get the child runs sorted in descending order by best primary metric

Get the hyperparameters for all child runs

Get the metrics from all child runs

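A minimal sketch of a HyperDrive run over an estimator like the one above; the hyperparameter names, ranges, and metric name are illustrative and must match what the training script accepts and logs:

```r
library(azuremlsdk)

ws <- load_workspace_from_config()
exp <- experiment(ws, name = "tune-iris")

est <- estimator(source_directory = "./scripts",
                 entry_script = "train.R",
                 compute_target = "cpu-cluster")

# Randomly sample a continuous learning rate and a discrete batch size.
sampling <- random_parameter_sampling(list(learning_rate = uniform(0.001, 0.1),
                                           batch_size = choice(c(16, 32, 64))))

config <- hyperdrive_config(hyperparameter_sampling = sampling,
                            primary_metric_name = "accuracy",
                            primary_metric_goal = primary_metric_goal("MAXIMIZE"),
                            max_total_runs = 20,
                            policy = bandit_policy(slack_factor = 0.1),
                            estimator = est)

hd_run <- submit_experiment(exp, config)
wait_for_run_completion(hd_run, show_output = TRUE)
best_run <- get_best_run_by_primary_metric(hd_run)
```
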
Model management & deployment

Functions for model management and deployment. Registering a model allows you to store and version your trained model in a workspace. A registered Model can then be deployed as a Webservice using Azure ML. If you would like to access all the assets needed to host a model as a web service without actually deploying the model, you can do so by creating a model package.

Get a registered model

Register a model to a given workspace

Register a model for operationalization

Download a model to the local file system

Deploy a web service from registered model(s)

Create a model package that packages all the assets needed to host a model as a web service

Delete a model from its associated workspace

Get the Azure container registry that a packaged model uses

Get the model package creation logs

Pull the Docker image from a model package

Save a Dockerfile and dependencies from a model package to the local file system

Wait for a model package to finish creating

Create an inference configuration for model deployments

Get a deployed web service

Wait for a web service to finish deploying

Retrieve the logs for a web service

Retrieve the auth keys for a web service

Regenerate one of a web service's keys

Retrieve the auth token for a web service

Call a web service with the provided input

Delete a web service from a given workspace

Create a deployment config for deploying an ACI web service

Update a deployed ACI web service

Create a deployment config for deploying an AKS web service

Update a deployed AKS web service

Create a deployment config for deploying a local web service

Update a local web service

Delete a local web service from the local machine

Reload a local web service's entry script and dependencies

Initialize a ResourceConfiguration

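A minimal sketch of the register-deploy-invoke path, assuming an illustrative `score.R` scoring entry script, a model file saved under `outputs/`, and the environment registered in the Environments sketch above:

```r
library(azuremlsdk)
library(jsonlite)

ws <- load_workspace_from_config()

# Register a saved model file (path and names are illustrative).
model <- register_model(ws,
                        model_path = "outputs/model.rds",
                        model_name = "iris-model")

# score.R is an assumed scoring entry script; reuse the registered environment.
env <- get_environment(ws, name = "train-env")
inf_config <- inference_config(entry_script = "score.R",
                               source_directory = "./scripts",
                               environment = env)

# Deploy to Azure Container Instances, then call and inspect the service.
aci_config <- aci_webservice_deployment_config(cpu_cores = 1, memory_gb = 1)
service <- deploy_model(ws, "iris-service", list(model), inf_config, aci_config)
wait_for_deployment(service, show_output = TRUE)

invoke_webservice(service, toJSON(data.frame(sepal_length = 5.1,
                                             sepal_width = 3.5,
                                             petal_length = 1.4,
                                             petal_width = 0.2)))
# get_webservice_logs(service); delete_webservice(service)
```
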