
Using Dapr for pub/sub with Azure Cache for Redis


Stop the Simulation, TrafficControlService, FineCollectionService, and VehicleRegistrationService by pressing Ctrl-C in their respective terminal windows.

Step 1: Create Azure Cache for Redis

In this assignment, you will use Azure Cache for Redis as the message broker with the Dapr pub/sub building block. To be able to do this, you need to have an Azure subscription. If you don’t have one, you can create a free account at https://azure.microsoft.com/free/.

  1. Log in to Azure:

     az login
    
  2. Create a resource group:

     az group create --name rg-dapr-workshop-java --location eastus
    

    A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. In our workshop, all the databases, all the microservices, etc. will be grouped into a single resource group.

  3. Azure Cache for Redis is a fully managed, dedicated, in-memory data store for enterprise-grade cloud-native applications. It can be used as a distributed data or content cache, a session store, a message broker, and more. Because the cache name becomes part of a public hostname, it needs to be globally unique. Use the following commands to generate a unique name:

    • Linux/Unix shell:

      UNIQUE_IDENTIFIER=$(LC_ALL=C tr -dc a-z0-9 </dev/urandom | head -c 5)
      REDIS="redis-dapr-workshop-java-$UNIQUE_IDENTIFIER"
      echo $REDIS
      
    • PowerShell:

      $ACCEPTED_CHAR = [Char[]]'abcdefghijklmnopqrstuvwxyz0123456789'
      $UNIQUE_IDENTIFIER = (Get-Random -Count 5 -InputObject $ACCEPTED_CHAR) -join ''
      $REDIS = "redis-dapr-workshop-java-$UNIQUE_IDENTIFIER"
      echo $REDIS
      
  4. Create the Azure Cache for Redis:

     az redis create --name $REDIS --resource-group rg-dapr-workshop-java --location eastus --sku basic --vm-size C0 --redis-version 6
    

    The --sku parameter specifies the SKU of the cache to deploy; in this case, the Basic SKU. The --vm-size parameter specifies the size of the VM to deploy for the cache. Caches in the Basic tier are deployed in a single VM with no service-level agreement (SLA). The --redis-version parameter specifies the version of Redis to deploy; in this case, version 6.

  5. Get the hostname, SSL port and the primary key:

    • Linux/Unix shell:

      REDIS_HOSTNAME=$(az redis show --name $REDIS --resource-group rg-dapr-workshop-java --query hostName --output tsv)
      REDIS_SSL_PORT=$(az redis show --name $REDIS --resource-group rg-dapr-workshop-java --query sslPort --output tsv)
      REDIS_PRIMARY_KEY=$(az redis list-keys --name $REDIS --resource-group rg-dapr-workshop-java --query primaryKey --output tsv)
      echo "Hostname: $REDIS_HOSTNAME"
      echo "SSL Port: $REDIS_SSL_PORT"
      echo "Primary Key: $REDIS_PRIMARY_KEY"
      
    • PowerShell:

      $REDIS_HOSTNAME = az redis show --name $REDIS --resource-group rg-dapr-workshop-java --query hostName --output tsv
      $REDIS_SSL_PORT = az redis show --name $REDIS --resource-group rg-dapr-workshop-java --query sslPort --output tsv
      $REDIS_PRIMARY_KEY = az redis list-keys --name $REDIS --resource-group rg-dapr-workshop-java --query primaryKey --output tsv
      Write-Output "Hostname: $REDIS_HOSTNAME"
      Write-Output "SSL Port: $REDIS_SSL_PORT"
      Write-Output "Primary Key: $REDIS_PRIMARY_KEY"
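Before wiring up Dapr, you can optionally verify that the cache is reachable. The following sketch assumes redis-cli version 6 or later (which supports `--tls`) is installed locally and that the variables from the previous step are still set; redis-cli is not required for the rest of the assignment.

```shell
# Optional connectivity check (sketch): ping the cache over TLS.
# Assumes REDIS_HOSTNAME, REDIS_SSL_PORT and REDIS_PRIMARY_KEY are set
# and that redis-cli v6+ (which supports --tls) is installed.
if command -v redis-cli >/dev/null 2>&1; then
  redis-cli -h "$REDIS_HOSTNAME" -p "$REDIS_SSL_PORT" \
    -a "$REDIS_PRIMARY_KEY" --tls ping \
    || echo "Could not reach the cache; check the values printed above."
else
  echo "redis-cli not installed; skipping connectivity check."
fi
```

If the cache is reachable, the `ping` command prints `PONG`.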
      

Step 2: Configure the pub/sub component

  1. Open the file dapr/azure-redis-pubsub.yaml in your code editor.

     apiVersion: dapr.io/v1alpha1
     kind: Component
     metadata:
       name: pubsub
     spec:
       type: pubsub.redis
       version: v1
       metadata:
       - name: redisHost
         value: <replaceWithRedisHostName>:<replaceWithRedisSSLPort>
       - name: redisPassword
         value: <replaceWithPrimaryKey>
       - name: enableTLS
         value: "true"
     scopes:
       - trafficcontrolservice
       - finecollectionservice
    

    As you can see, this component uses a different pub/sub type (pubsub.redis), and its metadata section specifies how to connect to the Azure Cache for Redis instance created in step 1: the Redis hostname, SSL port, and primary key (used as the password) you retrieved in the previous step. For more information, see Redis Streams pub/sub component.

    In the scopes section, you specify that only the TrafficControlService and FineCollectionService should use the pub/sub building block. To know more about scopes, see Application access to components with scopes.

  2. Copy or move the file dapr/azure-redis-pubsub.yaml to the dapr/components folder.

  3. In the copied file, replace the redisHost and redisPassword values with the hostname, SSL port, and primary key you retrieved in step 1.

  4. Move the files dapr/components/kafka-pubsub.yaml and dapr/components/rabbit-pubsub.yaml back to the dapr/ folder if they are present in the components folder.
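If you prefer to script steps 2 and 3, the substitution can be done with sed. This is only a sketch: it assumes you run it from the repository root, that the shell variables from Step 1 are still set, and it uses GNU sed syntax (on macOS, use `sed -i ''` instead of `sed -i`):

```shell
# Sketch: copy the component file and fill in the placeholders (GNU sed).
# Assumes REDIS_HOSTNAME, REDIS_SSL_PORT and REDIS_PRIMARY_KEY are set.
SRC=dapr/azure-redis-pubsub.yaml
DST=dapr/components/azure-redis-pubsub.yaml
if [ -f "$SRC" ]; then
  cp "$SRC" "$DST"
  sed -i \
    -e "s|<replaceWithRedisHostName>:<replaceWithRedisSSLPort>|$REDIS_HOSTNAME:$REDIS_SSL_PORT|" \
    -e "s|<replaceWithPrimaryKey>|$REDIS_PRIMARY_KEY|" \
    "$DST"
else
  echo "Run this from the repository root (where the dapr/ folder is)."
fi
```

Afterwards, double-check the copied file to make sure no placeholder is left.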

Step 3: Test the application

You’re going to start all the services now.

  1. Make sure no services from previous tests are running (close the command-shell windows).

  2. Open a terminal window and make sure the current folder is VehicleRegistrationService.

  3. Enter the following command to run the VehicleRegistrationService:

    mvn spring-boot:run
    
  4. Open a terminal window and change the current folder to FineCollectionService.

  5. Enter the following command to run the FineCollectionService with a Dapr sidecar:

    dapr run --app-id finecollectionservice --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 --resources-path ../dapr/components mvn spring-boot:run
    
  6. Open a terminal window and change the current folder to TrafficControlService.

  7. Enter the following command to run the TrafficControlService with a Dapr sidecar:

    dapr run --app-id trafficcontrolservice --app-port 6000 --dapr-http-port 3600 --dapr-grpc-port 60000 --resources-path ../dapr/components mvn spring-boot:run
    
  8. Open a terminal window and change the current folder to Simulation.

  9. Start the simulation:

    mvn spring-boot:run
    

You should see the same logs as before, and the behavior of the application is exactly the same. But now, instead of being published and consumed via a Kafka topic, the messages flow through Redis Streams in Azure Cache for Redis.
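To convince yourself that the messages really go through Azure Cache for Redis, you can inspect the underlying stream with redis-cli (the Dapr Redis pub/sub component stores each topic as a Redis stream). This is only a sketch: it assumes redis-cli v6+ is installed, the variables from Step 1 are still set, and `TOPIC` below is a hypothetical placeholder for the topic name your services actually publish to.

```shell
# Sketch: inspect the Redis stream that backs a Dapr pub/sub topic.
# Assumes REDIS_HOSTNAME, REDIS_SSL_PORT and REDIS_PRIMARY_KEY are set.
# TOPIC is a hypothetical placeholder -- use your services' actual topic name.
TOPIC="your-topic-name"
if command -v redis-cli >/dev/null 2>&1; then
  redis-cli -h "$REDIS_HOSTNAME" -p "$REDIS_SSL_PORT" \
    -a "$REDIS_PRIMARY_KEY" --tls XLEN "$TOPIC" \
    || echo "Could not reach the cache."
else
  echo "redis-cli not installed; skipping."
fi
```

XLEN returns the number of entries in the stream; a number that grows while the simulation runs confirms that messages are being processed through Redis Streams.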
