Scaling SignalR Core Web Applications With Kubernetes

Ashwin Kumar
Published in The Startup
5 min read · Sep 8, 2020


SignalR with ASP.NET Core is an open-source library that provides real-time communication between the client and the server. With just a couple of lines of code, we can add this capability to any ASP.NET Core web application and leverage a powerful feature set.

In this blog, I’ll use a Microk8s cluster running locally to deploy the app, but the files and commands should work with any cluster without requiring too many changes. The full source code for the example described in this blog can be found here.

The application we’ll be working with is a simple .NET Core web app with a SignalR hub and an Angular frontend. The frontend is a simple chat application that exchanges messages with all connected clients.
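As a rough sketch of what such a hub looks like (the names here are illustrative; the actual hub in the repo may differ slightly):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// A minimal SignalR chat hub. Clients invoke SendMessage, and the hub
// broadcasts the message to every client connected to this server instance.
public class ChatHub : Hub
{
    public async Task SendMessage(string user, string message)
    {
        // Include the machine (pod) name so the UI can show which pod
        // served the message -- this becomes useful once we scale out.
        await Clients.All.SendAsync("ReceiveMessage", user, message,
            Environment.MachineName);
    }
}
```

The hub is then registered with `services.AddSignalR()` in ConfigureServices and mapped to a route (e.g. `endpoints.MapHub<ChatHub>("/chathub")`) in Configure.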

Let’s deploy it and see it work. Pull down the source code and navigate to the Kube folder in a terminal window. The images are already available publicly, so it should be easy to fire the app up in our cluster. Before you get started, make sure you have the correct context set,

kubectl config current-context

This should return the context you are currently on. If it isn’t the correct one, you can switch using the command,

kubectl config use-context <context-name>

Once on the correct context, let’s create a namespace where all our resources will go,

kubectl create namespace signalrredis

Once the namespace is created, let’s apply the secret YAML. It isn’t actually used right now, but it will be in later steps,

kubectl apply -f secret.yml --namespace signalrredis

Now, to run the deployment,

kubectl apply -f deployment.yml --namespace signalrredis

The deployment should create a single pod. We can now create the service (as a shortcut, you can use -n instead of --namespace),

kubectl apply -f service.yml --namespace signalrredis
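For reference, the service is essentially a standard ClusterIP service pointing at the deployment’s pods; a sketch along these lines (the name, labels, and ports are assumptions here — check service.yml in the repo for the exact values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: signalrredis
spec:
  selector:
    app: signalrredis   # must match the deployment's pod labels
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 80    # port the app listens on in the container
```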

We can also create an ingress to access the app, (make sure to enable ingress on your microk8s cluster or on minikube).

kubectl apply -f ingress.yml --namespace signalrredis
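The ingress simply routes the signalrredis.local host to the service; roughly like this (a sketch assuming the service name and port above — the API version and details in the repo’s ingress.yml may differ):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: signalrredis
spec:
  rules:
    - host: signalrredis.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: signalrredis
                port:
                  number: 80
```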

Once the ingress is created, we need to update our hosts file with the cluster’s IP and map it to the ingress host (signalrredis.local). With microk8s, the IP is just 127.0.0.1. With minikube, you can find the cluster’s IP using the command

minikube ip

Once you have the IP, update the hosts file by mapping the host name to the IP. On Windows, the hosts file is located at %systemroot%\system32\drivers\etc\hosts. On Linux, it is at /etc/hosts.

Map the IP to the local address (signalrredis.local)
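With microk8s on the local machine, the entry would look like this (substitute the output of minikube ip if you’re on minikube):

```
127.0.0.1    signalrredis.local
```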

Once the mapping is added, the app can be accessed from a browser using the local host name (http://signalrredis.local). Open it in multiple browser windows to see it work,

The messages should be exchanged successfully, and the pod that sent each message should also be displayed. Since we only have one pod, it’s the same name that’s displayed in both instances.

Great! Now let’s scale it by re-deploying with a higher pod count. Change the replicas property in the deployment.yml file to 10,

...
    app: signalrredis
spec:
  replicas: 10
...

Now, let’s re-deploy,

kubectl apply -f deployment.yml --namespace signalrredis

After the pods have been re-deployed, let’s test the app again,

Or rather, what didn’t happen? When we scaled, we ended up creating multiple hubs, but since each client connects to a single hub, messages are only propagated to the clients connected to that hub. Clients connected to any other hub will not receive the message, because the hubs do not talk to each other. If multiple browser instances happen to connect to the same pod, the solution might appear to work, but the moment a connection is made to a different pod, messaging stops working, since that client is connected to a different hub.

To fix this, we need a backplane that enables the hubs to communicate with each other. We can use an instance of Redis as the backplane. It only requires a single change to our code base, in the Startup.cs file,

services.AddSignalR().AddStackExchangeRedis("<redis_conn_str>");

The sample application already has this code change, and the backplane can be enabled by setting the environment variable RedisConfig__UseAsBackplane to true in deployment.yml, so let’s change it,

...
containers:
  - name: signalrredis
    image: ashwin027/signalrredis:latest
    env:
      - name: RedisConfig__UseAsBackplane
        value: "true"
...
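On the application side, the toggle might be wired up roughly like this (a sketch; the exact configuration keys and connection-string handling in the repo may differ — `RedisConfig:ConnectionString` in particular is an assumed key):

```csharp
// Read the flag from configuration. The env variable
// RedisConfig__UseAsBackplane maps to the "RedisConfig:UseAsBackplane" key.
var signalR = services.AddSignalR();

if (Configuration.GetValue<bool>("RedisConfig:UseAsBackplane"))
{
    // Use Redis as the backplane so messages fan out across all pods.
    signalR.AddStackExchangeRedis(Configuration["RedisConfig:ConnectionString"]);
}
```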

But before we run this, we need Redis running in the cluster. To install Redis, we’ll use Helm. Instructions on installing the Helm CLI can be found here. Once Helm is installed, we need to enable it in the cluster.

On microk8s, enable the Helm addon along with the storage addon (required for Redis) using the command,

microk8s enable helm3 storage

On minikube,

minikube addons enable helm-tiller

Once Helm is enabled, run the command below from the Kube folder in a terminal window to get Redis running,

helm upgrade sigredis ./redis/ --install --namespace signalrredis

Note: with microk8s, if you haven’t merged your microk8s config into your kubeconfig, use the command,

microk8s.helm3 upgrade sigredis ./redis/ --install --namespace signalrredis

After the Helm install, ensure that the Redis pods are up and running using either the Kubernetes dashboard or Lens.

Now that we have Redis working, let’s get the password for the Redis deployment,

kubectl get secret --namespace signalrredis sigredis -o jsonpath="{.data.redis-password}"

This should output a base64-encoded password that needs to be copied into the secret.yml file in your Kube folder,
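The value stays base64-encoded in the secret. If you want to sanity-check it (or encode a plaintext password yourself), plain base64 round-trips it — the password below is just an example value, not the real one:

```shell
# Encode a plaintext password to base64 (for pasting into a secret)
printf '%s' 'my-redis-password' | base64
# -> bXktcmVkaXMtcGFzc3dvcmQ=

# Decode a value pulled from the cluster to verify it
printf '%s' 'bXktcmVkaXMtcGFzc3dvcmQ=' | base64 --decode
# -> my-redis-password
```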

apiVersion: v1
kind: Secret
metadata:
  name: redispassword
data:
  redispassword: PASTE_PASSWORD_HERE
type: Opaque

Once the secret.yml file has been updated with the password, let’s apply it,

kubectl apply -f secret.yml --namespace signalrredis

We can now run the deployment again with the redis backplane flag set to true,

kubectl apply -f deployment.yml --namespace signalrredis

Once all the pods are deployed, we can test the app again,

When a message is submitted, you can see the pod name change in each browser, showing which pod the message came from. Our backplane is now fully functional!

If you have further questions on the topic, feedback on the article, or just want to say hi, you can hit me up on Twitter or LinkedIn.
