
Module 4 - ACA State Store With Dapr State Management API

In this module we will replace the in-memory task store with a persistent key/value store (Azure Cosmos DB). By using the Dapr State Management Building Block, we will see how we can store data in Azure Cosmos DB without installing any Cosmos DB SDK or writing code specific to Azure Cosmos DB. Moreover, we will use Redis to store tasks when running the application locally. You will see that we can switch between different stores without any code changes, thanks to Dapr's pluggable state store feature: it is only a matter of adding new Dapr component files, and the underlying store changes. This page shows the state stores supported by Dapr.

dapr-stateapi-cosmosdb

Overview of Dapr State Management API

Dapr's state management API allows you to save, read, and query key/value pairs in the supported state stores. To try this out, without making any code changes or installing any NuGet packages, we can invoke the State Management API directly and store the data in Redis locally. When you initialized Dapr in your local development environment, it installed a Redis container instance locally, so we can use Redis to store and retrieve state. If you navigate to the path %USERPROFILE%\.dapr\components you will find a file named statestore.yaml. Inside this file, you will see the properties needed to access the local Redis instance. The state store template component file structure can be found on this link.
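For reference, the default statestore.yaml generated by dapr init typically looks similar to the sketch below (the exact contents may vary slightly between Dapr versions):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: actorStateStore
    value: "true"
```

Note the name value statestore: this is the name we will use in the API endpoint and, later, in our code.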

To try out the State Management API, run the Backend API from VS Code using one of the commands below. Remember to replace the placeholders with your own values:

~\TasksTracker.ContainerApps\TasksTracker.TasksManager.Backend.Api> dapr run --app-id tasksmanager-backend-api --app-port <web api application https port number found under properties->launchSettings.json. e.g. 7112> --dapr-http-port 3500 --app-ssl -- dotnet run
~\TasksTracker.ContainerApps\TasksTracker.TasksManager.Backend.Api> dapr run --app-id tasksmanager-backend-api --app-port <web api application https port number found under properties->launchSettings.json. e.g. 7112> --dapr-http-port 3500 --app-ssl -- dotnet run --launch-profile https

Now, from any REST client, invoke the POST request below against the endpoint http://localhost:3500/v1.0/state/statestore:

POST /v1.0/state/statestore HTTP/1.1
Host: localhost:3500
Content-Type: application/json
[
    {
        "key": "Book1",
        "value": {
            "title": "Parallel and High Performance Computing",
            "author": "Robert Robey",
            "genre": "Technical"
        }
    },
    {
        "key": "Book2",
        "value": {
            "title": "Software Engineering Best Practices",
            "author": "Capers Jones",
            "genre": "Technical"
        }
    },
    {
        "key": "Book3",
        "value": {
            "title": "The Unstoppable Mindset",
            "author": "Jessica Marks",
            "genre": "Self Improvement",
            "formats":["kindle", "audiobook", "papercover"]
        }
    }
]

What we've done here is the following:

  • The value statestore in the endpoint must match the name value in the global component file statestore.yaml
  • We have sent a request to store three book entries; you can put any JSON representation in the value property

To see the results visually, you can install a VS Code extension that connects to Redis. Several Redis extensions are available for VS Code; for this workshop we will use one named "Redis Xplorer".

Once you install the extension, it adds a tab named "REDIS XPLORER" under the Explorer section of VS Code. Next, connect to the local Redis server by adding a new Redis Xplorer profile: click the + sign in the "REDIS XPLORER" section. You will be asked to enter a nickname (e.g. dapr_redis) as well as the hostname and port. You can get the hostname and port by executing the following command in your PowerShell terminal:

docker ps

Look under the Ports column and use the server and port specified there. In the image below the server is 0.0.0.0 and the port is 6379. Use the values that you see on your own terminal.

dapr-stateapi-redis

After you connect to Redis locally, you should see three entries similar to the ones shown in the image below. Notice how each entry key is prefixed with the Dapr app id; in our case it is tasksmanager-backend-api. More about key prefix strategies later in this module.

dapr-stateapi-redis

To get the value of a key, issue a GET request to the endpoint http://localhost:3500/v1.0/state/statestore/{YourKey}. This returns the value from the state store. For example, executing GET http://localhost:3500/v1.0/state/statestore/Book3 returns the object below:

{
    "formats": [
        "kindle",
        "audiobook",
        "papercover"
    ],
    "title": "The Unstoppable Mindset",
    "author": "Jessica Marks",
    "genre": "Self Improvement"
}
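The same read can be scripted from a terminal. A minimal sketch, assuming the sidecar's HTTP port is 3500 as in the dapr run command above:

```shell
# Build the Dapr state endpoint URL for a given key.
DAPR_PORT=3500
STATE_STORE=statestore
KEY=Book3
URL="http://localhost:${DAPR_PORT}/v1.0/state/${STATE_STORE}/${KEY}"
echo "$URL"

# With the Backend API and its Dapr sidecar running, fetch the stored value:
# curl -s "$URL"
```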

Use Dapr Client SDK for State Store Management

Whereas the previous section demonstrated using the Dapr state store without any code changes, we will now make a change to the Backend API: we will create a new service named TasksStoreManager.cs, implementing the interface ITasksManager.cs, to store tasks data in the persistent store. Locally we will start testing with Redis; then we will change the state store to Azure Cosmos DB.

1. Add Dapr Client SDK to the Backend API

Similar to what we did in the Frontend Web App, we need to use the Dapr Client SDK to manage the state store. Add the package reference below to the Backend API project file (the .csproj file):

<ItemGroup>
    <PackageReference Include="Dapr.AspNetCore" Version="1.9.0" />
    <!-- Other packages are removed for brevity -->
</ItemGroup>

2. Create a new concrete implementation to manage tasks persistence

As you recall from the previous module, we were storing tasks in memory. Now we need to store them in Redis and, later on, Azure Cosmos DB. The key thing to keep in mind is that switching from Redis to Azure Cosmos DB won't require changing the code below, which is a huge advantage of using Dapr.

Add a new file named TasksStoreManager.cs under the folder named Services. This class implements the interface ITasksManager.

using Dapr.Client;
using TasksTracker.TasksManager.Backend.Api.Models;

namespace TasksTracker.TasksManager.Backend.Api.Services
{
    public class TasksStoreManager : ITasksManager
    {
        private static string STORE_NAME = "statestore";
        private readonly DaprClient _daprClient;
        private readonly IConfiguration _config;
        private readonly ILogger<TasksStoreManager> _logger;

        public TasksStoreManager(DaprClient daprClient, IConfiguration config, ILogger<TasksStoreManager> logger)
        {
            _daprClient = daprClient;
            _config = config;
            _logger = logger;
        }
        public async Task<Guid> CreateNewTask(string taskName, string createdBy, string assignedTo, DateTime dueDate)
        {
            var taskModel = new TaskModel()
            {
                TaskId = Guid.NewGuid(),
                TaskName = taskName,
                TaskCreatedBy = createdBy,
                TaskCreatedOn = DateTime.UtcNow,
                TaskDueDate = dueDate,
                TaskAssignedTo = assignedTo,
            };

            _logger.LogInformation("Save a new task with name: '{0}' to state store", taskModel.TaskName);
            await _daprClient.SaveStateAsync<TaskModel>(STORE_NAME, taskModel.TaskId.ToString(), taskModel);
            return taskModel.TaskId;
        }

        public async Task<bool> DeleteTask(Guid taskId)
        {
            _logger.LogInformation("Delete task with Id: '{0}'", taskId);
            await _daprClient.DeleteStateAsync(STORE_NAME, taskId.ToString());
            return true;
        }

        public async Task<TaskModel?> GetTaskById(Guid taskId)
        {
            _logger.LogInformation("Getting task with Id: '{0}'", taskId);
            var taskModel = await _daprClient.GetStateAsync<TaskModel>(STORE_NAME, taskId.ToString());
            return taskModel;
        }

        public async Task<List<TaskModel>> GetTasksByCreator(string createdBy)
        {
            var query = "{" +
                    "\"filter\": {" +
                        "\"EQ\": { \"taskCreatedBy\": \"" + createdBy + "\" }" +
                    "}}";

            var queryResponse = await _daprClient.QueryStateAsync<TaskModel>(STORE_NAME, query);

            var tasksList = queryResponse.Results.Select(q => q.Data).OrderByDescending(o=>o.TaskCreatedOn);
            return tasksList.ToList();
        }

        public async Task<bool> MarkTaskCompleted(Guid taskId)
        {
            _logger.LogInformation("Mark task with Id: '{0}' as completed", taskId);
            var taskModel = await _daprClient.GetStateAsync<TaskModel>(STORE_NAME, taskId.ToString());
            if (taskModel != null)
            {
                taskModel.IsCompleted = true;
                await _daprClient.SaveStateAsync<TaskModel>(STORE_NAME, taskModel.TaskId.ToString(), taskModel);
                return true;
            }
            return false;
        }

        public async Task<bool> UpdateTask(Guid taskId, string taskName, string assignedTo, DateTime dueDate)
        {
            _logger.LogInformation("Update task with Id: '{0}'", taskId);
            var taskModel = await _daprClient.GetStateAsync<TaskModel>(STORE_NAME, taskId.ToString());
            if (taskModel != null)
            {
                taskModel.TaskName = taskName;
                taskModel.TaskAssignedTo = assignedTo;
                taskModel.TaskDueDate = dueDate;
                await _daprClient.SaveStateAsync<TaskModel>(STORE_NAME, taskModel.TaskId.ToString(), taskModel);
                return true;
            }
            return false;
        }
    }
}
Curious about the code?

Looking at the code above, we inject DaprClient into the new service; DaprClient exposes a set of methods supporting CRUD operations. Notice how we use the state store named statestore, which must match the name in the component file.
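For a hypothetical creator value of user@mail.com, the query string built in GetTasksByCreator serializes to the following JSON:

```json
{
  "filter": {
    "EQ": { "taskCreatedBy": "user@mail.com" }
  }
}
```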

Note

The query API will not work against the local Redis store unless you install RediSearch on your machine, which is out of scope for this workshop. It will work locally once we switch to Azure Cosmos DB.
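Under the hood, DaprClient.QueryStateAsync posts the filter to the sidecar's state query endpoint, which is an alpha API at the time of writing. The raw HTTP equivalent is roughly:

```
POST /v1.0-alpha1/state/statestore/query HTTP/1.1
Host: localhost:3500
Content-Type: application/json

{
    "filter": {
        "EQ": { "taskCreatedBy": "<creator email>" }
    }
}
```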

3. Register the TasksStoreManager new service and DaprClient

Now we need to register the new service TasksStoreManager and DaprClient when the Backend API app starts up. Update Program.cs as shown below.

Note

Do not forget to comment out the registration of the FakeTasksManager service as we don’t want to store tasks in memory anymore.

var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddDaprClient();
builder.Services.AddSingleton<ITasksManager, TasksStoreManager>();
//builder.Services.AddSingleton<ITasksManager, FakeTasksManager>();
//Code removed for brevity

In principle you are now ready to run both applications and debug them: you could store new tasks, update them, delete existing tasks, and mark them as completed, with the data stored in your local Redis instance.

Info

For now, don't try running the application, as you will get an error when the query runs against the local Redis. As mentioned earlier, setting up RediSearch for the local Redis store is out of scope for this workshop. Instead, we will focus on wiring up Azure Cosmos DB as the store for our tasks.

Use Azure Cosmos DB with Dapr State Store Management API

1. Provision Cosmos DB Resources: Now we will create an Azure Cosmos DB account, a database, and a new container to store our tasks. You can use the PowerShell script below to create the Cosmos DB resources in the same resource group we used in the previous module. You need to set the $COSMOS_DB_ACCOUNT variable to a globally unique name. Remember to replace the placeholders with your own values:

$COSMOS_DB_ACCOUNT="<choose a unique cosmos db account name e.g. taskstracker-state-store-your initials here>"
$COSMOS_DB_DBNAME="tasksmanagerdb"
$COSMOS_DB_CONTAINER="taskscollection"

## Check if Cosmos account name already exists globally
az cosmosdb check-name-exists `
--name $COSMOS_DB_ACCOUNT

## if it returns false continue with the next command 
## else try a new unique name

## Create a Cosmos account for SQL API
az cosmosdb create `
--name $COSMOS_DB_ACCOUNT `
--resource-group $RESOURCE_GROUP

## Create a SQL API database
az cosmosdb sql database create `
--account-name $COSMOS_DB_ACCOUNT `
--resource-group $RESOURCE_GROUP `
--name $COSMOS_DB_DBNAME

## Create a SQL API container
az cosmosdb sql container create `
--account-name $COSMOS_DB_ACCOUNT `
--resource-group $RESOURCE_GROUP `
--database-name $COSMOS_DB_DBNAME `
--name $COSMOS_DB_CONTAINER `
--partition-key-path "/id" `
--throughput 400

Note

The primaryMasterKey connection string is only needed for local testing on the development machine; we'll use a different approach (managed identities) when deploying the Dapr component to the Azure Container Apps environment.

Once the script execution completes, we need to get the primaryMasterKey of the Cosmos DB account. You can do this using the command below. Copy the value of primaryMasterKey, as we will use it in the next step.

## List Azure CosmosDB keys
az cosmosdb keys list `
--name $COSMOS_DB_ACCOUNT `
--resource-group $RESOURCE_GROUP
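The output of az cosmosdb keys list is a JSON object similar to the sketch below; copy the primaryMasterKey value from it (key values elided here):

```json
{
  "primaryMasterKey": "<key value>",
  "primaryReadonlyMasterKey": "<key value>",
  "secondaryMasterKey": "<key value>",
  "secondaryReadonlyMasterKey": "<key value>"
}
```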

2. Create a Component file for State Store Management: Dapr uses a modular design in which functionality is delivered as components. Each component has an interface definition. All components are pluggable, so you can swap one component for another with the same interface.

Components are configured at design time with a YAML file, stored either in a local components folder within your solution or globally in the .dapr folder created when invoking dapr init. These YAML files adhere to the generic Dapr component schema, but each is specific to the component's specification.

It is important to understand that the component spec values, particularly the spec metadata, can change between components of the same component type. As a result, it is strongly recommended to review a component’s specs, paying particular attention to the sample payloads for requests to set the metadata used to interact with the component.

The diagram below, from the official Dapr documentation, shows some examples of components for each component type. We are interested in the State Stores components, specifically Azure Cosmos DB.

dapr-components

To add the state store component file, create a new folder named components under the directory TasksTracker.ContainerApps and add a new yaml file as shown below:

Info

You need to replace the masterKey value with your Cosmos DB account key. Remember this is only needed for local development and debugging; we will not use the masterKey when we deploy to ACA.

Replace the url value with the URI of your Cosmos DB account. You can get it from the Azure portal by navigating to the Cosmos DB account overview page. The URI should have the following structure: https://COSMOS_DB_ACCOUNT.documents.azure.com:443/.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.azure.cosmosdb
  version: v1
  metadata:
    - name: url
      value: <value>
    - name: masterKey
      value: "<value>"
    - name: database
      value: tasksmanagerdb
    - name: collection
      value: taskscollection
scopes:
  - tasksmanager-backend-api
Curious to learn more about the contents of the yaml file?
  • We've used the name statestore, which must match the name used in the TasksStoreManager.cs file. We have also set the metadata key/value pairs that allow us to connect to Azure Cosmos DB.
  • We've updated the other metadata keys such as database, collection, etc. to match the values of your Cosmos DB instance. For full metadata specs, you can check this page.
  • By default, all Dapr-enabled container apps within the same environment load the full set of deployed components. By adding scopes to a component, you tell the Dapr sidecar of each respective container app which components to load at runtime. Using scopes is recommended for production workloads. In our case, we have set the scope to tasksmanager-backend-api, the dapr-app-id associated with the container app that needs access to the Azure Cosmos DB state store. More about scopes can be found on this link.

Note

Dapr component scopes correspond to the Dapr application ID of a container app, not the container app name.

Now you should be ready to launch both applications and perform CRUD operations from the Frontend Web App, including querying the store. All your data will be stored in the Cosmos DB database you just provisioned.

If you have been running the different microservices using the debug and launch Dapr applications tasks in VS Code, remember to uncomment the following line inside the tasks.json file. This instructs Dapr to load the local project's components from ./components instead of the global components folder.

{
  "componentsPath": "./components"
}

If you have been using the Dapr CLI commands instead of the aforementioned debugging, you will need to run the Backend API with the --resources-path option, as follows.

Note

Remember to replace the placeholders. Remember to use https port number for the Web API application.

dapr run --app-id tasksmanager-backend-api --app-port <web api application https port number found under properties->launchSettings.json. e.g. 7112> --dapr-http-port 3500 --app-ssl --resources-path "../components" -- dotnet run
dapr run --app-id tasksmanager-backend-api --app-port <web api application https port number found under properties->launchSettings.json. e.g. 7112> --dapr-http-port 3500 --app-ssl --resources-path "../components" -- dotnet run --launch-profile https

Deprecation Warning

--components-path is being deprecated in favor of --resources-path. At the time of producing this workshop, --resources-path was not yet supported by the VS Code extension; hence the use of the property "componentsPath": "./components" in the tasks.json file. Check the extension documentation in case that has changed.

After creating a new record, navigate to the Data Explorer in the Azure portal for the Cosmos DB account. It should look like the image below:

cosmos-db-dapr-state-store

Key Prefix Strategies

When you look at the key stored per entry, for example tasksmanager-backend-api||aa3eb856-8309-4e68-93af-119be0d400e8, you will notice that the key is prefixed with the Dapr app id of the application that stored the entry, in our case tasksmanager-backend-api. There may be scenarios in which another service needs to access the same data store (not recommended, as each service should be responsible for its own data store); in that case you can change the default behavior.

This can be done by adding the metadata entry below to the component file. For example, to set the prefix to a constant value such as TaskId:

spec:
  metadata:
  - name: keyPrefix
    value: TaskId
If we need to omit the key prefix entirely, so state is accessible across multiple Dapr applications, we can set the value to none.
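A sketch of the same metadata entry with the prefix disabled entirely:

```yaml
spec:
  metadata:
  - name: keyPrefix
    value: none
```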

Configure Managed Identities in Container App

As we highlighted earlier, we will not use connection strings to connect our container app to Azure Cosmos DB when we deploy to ACA; the Cosmos DB master key/connection string was only used when debugging locally. Instead, we will rely on managed identities to allow our container app to access Cosmos DB. With managed identities you don't have to worry about storing keys securely and rotating them inside your application. This approach is safer and easier to manage.

We will be using a system-assigned identity with a role assignment to grant our Backend API container app permission to access data stored in Cosmos DB. We need to assign it a role for the Cosmos DB data plane. In this example we are going to use a built-in role, named Cosmos DB Built-in Data Contributor, which grants our application full read-write access to the data. You can optionally create custom, fine-tuned roles following the instructions in the official docs.

1. Create system-assigned identity for our container app

Run the command below to create a system-assigned identity for our container app:

az containerapp identity assign `
  --resource-group $RESOURCE_GROUP `
  --name $BACKEND_API_NAME `
  --system-assigned

This command creates an enterprise application (basically a service principal) within Azure AD, linked to our container app. The output will be similar to the one shown below. Note the principalId property, as we will use it in the next step.

{
  "principalId": "[your principal id will be displayed here]",
  "tenantId": "[your tenant id will be displayed here]",
  "type": "SystemAssigned"
}

2. Assign the container app system-identity to the built-in Cosmos DB role

Next, we need to associate the container app's system-assigned identity with the target Cosmos DB resource. You can read more about Azure built-in roles for Cosmos DB and how to create custom fine-tuned roles here. Run the command below to assign the Cosmos DB Built-in Data Contributor role to the container app's system-assigned identity.

Note

Make sure you save this principal id somewhere, as you will need it in later modules. Don't rely on it staying in PowerShell under $PRINCIPAL_ID, as this variable could be overwritten later. Remember to replace the placeholders with your own values:

$PRINCIPAL_ID = "<your principal id goes here>" # Principal Id after creating system identity for container app
$ROLE_ID = "00000000-0000-0000-0000-000000000002" # "Cosmos DB Built-in Data Contributor"

az cosmosdb sql role assignment create `
--account-name  $COSMOS_DB_ACCOUNT `
--resource-group $RESOURCE_GROUP `
--scope "/" `
--principal-id $PRINCIPAL_ID `
--role-definition-id $ROLE_ID

Deploy the Backend API and Frontend Web App Projects to ACA

Now we are ready to deploy all local changes from this module and the previous module to ACA. Before we do, we need one more addition.

We have to create a Dapr component schema file for Azure Cosmos DB that meets the spec defined by Azure Container Apps. The reason for this variance is that the ACA Dapr component schema is slightly simplified: it removes unnecessary fields, including apiVersion, kind, and redundant metadata and spec properties.

1. Create an ACA-Dapr Component file for State Store Management

It is recommended to separate the component files used when deploying to Azure Container Apps from the ones used when running the application locally (Dapr self-hosted).

Create a new folder named aca-components under the directory TasksTracker.ContainerApps, then add a new file named containerapps-statestore-cosmos.yaml as shown below:

Info

Remember to replace the url value with the URI of your Cosmos DB account. You can get it from the Azure portal by navigating to the Cosmos DB account overview page. The URI should have the following structure: https://COSMOS_DB_ACCOUNT.documents.azure.com:443/

componentType: state.azure.cosmosdb
version: v1
metadata:
  - name: url
    value: <The URI value of your cosmos database account>
  - name: database
    value: tasksmanagerdb
  - name: collection
    value: taskscollection
scopes:
  - tasksmanager-backend-api
Curious to learn more about the contents of the yaml file?
  • We didn't specify the Cosmos DB component name statestore when we created this component file. We are going to specify it once we add this dapr component to Azure Container Apps Environment via CLI.
  • We are not referencing any Cosmos DB Keys/Connection strings as the authentication between Dapr and Cosmos DB will be configured using Managed Identities.
  • We are setting the scopes array value to tasksmanager-backend-api to ensure the Cosmos DB component is loaded at runtime only by the appropriate container apps; in our case, only the container app with Dapr application ID tasksmanager-backend-api needs it. In future modules we will include another container app that needs to access Cosmos DB.

2. Build Frontend Web App and Backend API App images and push them to ACR

As we have done previously, we need to build both app images and push them to ACR so they are ready to be deployed to Azure Container Apps. To do so, continue using the same PowerShell console and paste the code below (make sure you are in the directory TasksTracker.ContainerApps):

az acr build --registry $ACR_NAME --image "tasksmanager/$BACKEND_API_NAME" --file 'TasksTracker.TasksManager.Backend.Api/Dockerfile' .

az acr build --registry $ACR_NAME --image "tasksmanager/$FRONTEND_WEBAPP_NAME" --file 'TasksTracker.WebPortal.Frontend.Ui/Dockerfile' .

3. Add Cosmos DB Dapr State Store to Azure Container Apps Environment

We need to run the command below to add the yaml file .\aca-components\containerapps-statestore-cosmos.yaml to Azure Container Apps Environment.

az containerapp env dapr-component set `
 --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
 --dapr-component-name statestore `
 --yaml '.\aca-components\containerapps-statestore-cosmos.yaml'

4. Enable Dapr for the Frontend Web App and Backend API Container Apps

So far, Dapr has not been enabled on the container apps we provisioned. Enable Dapr for both container apps by running the two commands below in the PowerShell console.

Info

Remember to replace the placeholders with your own values

az containerapp dapr enable --name $BACKEND_API_NAME `
                            --resource-group $RESOURCE_GROUP `
                            --dapr-app-id  $BACKEND_API_NAME `
                            --dapr-app-port  <web api application port number found under Dockerfile for the web api project. e.g. 5160>

az containerapp dapr enable --name $FRONTEND_WEBAPP_NAME `
                            --resource-group $RESOURCE_GROUP `
                            --dapr-app-id  $FRONTEND_WEBAPP_NAME `
                            --dapr-app-port  <frontend web app port number found under Dockerfile for the frontend web app project. e.g. 5071>
Curious to learn more about the command above?
  • We've enabled Dapr on both container apps and specified a unique Dapr identifier for the Backend API and Frontend Web App container apps. This dapr-app-id is used for service discovery, state encapsulation, and as the pub/sub consumer ID.
  • We've set the dapr-app-port, which is the port our applications listen on; Dapr uses it to communicate with our applications.

For a complete list of the supported Dapr sidecar configurations in Container Apps, you can refer to this link.

5. Deploy new revisions of the Frontend Web App and Backend API to Container Apps

The last thing we need to do here is to update both container apps and deploy the new images from ACR. To do so we need to run the commands found below.

## Update Frontend web app container app and create a new revision 
az containerapp update `
--name $FRONTEND_WEBAPP_NAME  `
--resource-group $RESOURCE_GROUP `
--revision-suffix v20230218


## Update Backend API App container app and create a new revision 
az containerapp update `
--name $BACKEND_API_NAME  `
--resource-group $RESOURCE_GROUP `
--revision-suffix v20230218-1

Tip

Notice that we used the revision-suffix property, so it is appended to the revision name, giving you better visibility into which revision you are looking at.
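Container Apps composes the revision name from the container app name and the suffix, separated by a double dash. A quick sketch (the app name here is the one used throughout this workshop; yours may differ):

```shell
# ACA revision names have the form "<container app name>--<revision suffix>".
APP_NAME=tasksmanager-backend-api
SUFFIX=v20230218-1
echo "${APP_NAME}--${SUFFIX}"
```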

Success

With this final step, we should be able to access the Frontend Web App, call the Backend API app through the Dapr sidecar, and store tasks in Azure Cosmos DB.

In the next module, we will introduce the Dapr Pub/Sub Building Block, which we will use to publish messages to Azure Service Bus when a task is saved. We will also introduce a new background service that will process those incoming messages and send an email to the task assignee.


Last update: 2023-03-26