How many Azure SignalR Resources do I need? - c#

We are going to implement a chat feature within our application using Azure SignalR Service. This will be our first attempt at using SignalR.
We have also identified other areas of our application that can make use of SignalR, but those areas are not related to the chat feature.
Is it advised to create an Azure SignalR resource for each logically structured feature/area, or will a single Azure SignalR resource handle it all?
Thank you in advance.

You can scale your Azure SignalR resource to handle a large volume of traffic, but keep in mind that you specify the Azure SignalR connection string in one place at startup. So, if you did need to divide resources further, you would have to host multiple web apps, and design the application so that the features/areas don't need to share a SignalR connection.
My advice would be to start with the serverless Azure SignalR Service and scale it until you start to reach a capacity limit, which will depend heavily on your implementation and on how intelligently you route messages to the appropriate clients.
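To make the "one place at startup" point concrete, here is a minimal sketch of an ASP.NET Core (.NET 6+) app where a single Azure SignalR resource backs every hub. It assumes the Microsoft.Azure.SignalR NuGet package; the hub names, routes, and configuration key are illustrative, not from the question.

```csharp
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);

// A single Azure SignalR resource serves every hub in this app; the
// connection string is read from configuration once, here at startup.
builder.Services.AddSignalR()
    .AddAzureSignalR(builder.Configuration["Azure:SignalR:ConnectionString"]);

var app = builder.Build();

// Separate features get separate hubs, but they all share one resource.
app.MapHub<ChatHub>("/hubs/chat");
app.MapHub<NotificationHub>("/hubs/notifications");

app.Run();

// Hypothetical hubs for two unrelated features.
public class ChatHub : Hub { }
public class NotificationHub : Hub { }
```

Because the resource is bound once for the whole app, splitting features across multiple Azure SignalR resources would mean splitting them across multiple web apps, as described above.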

Related

SignalR connection limits in Azure Web App

I'm using ASP.NET Core SignalR in one of my ASP.NET Core MVC applications (.NET 6) which is hosted on Azure as a web app.
I'm struggling to find information on how many concurrent connections my web app can handle before SignalR can't accept any more.
I know that Azure provides a paid Azure SignalR service for which billing starts at 1000 concurrent connections. Does this indicate that my setup can only work with up to 1000 connections? So far, 400 concurrent connections have worked perfectly.
There are a few variables in play here, so nobody can tell you "Above X connections in a self-hosted SignalR solution, you need to use a SignalR service." Depending on how your solution is provisioned, one component or another may be the limiting factor.
For example, the App Service service limits show the maximum number of web sockets per Web App instance. For the Basic tier, it's 350. When you need 351, your options are:
Scale up your App Service Plan to Standard or higher.
Add an additional instance and use a Redis or Service Bus backplane.
Use SignalR service.
Disable websockets from SignalR and rely on something like long polling, which is limited by server resources.
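As a sketch of option 2 (scaling out with a backplane), here is what the Redis variant looks like in ASP.NET Core SignalR. It assumes the Microsoft.AspNetCore.SignalR.StackExchangeRedis package; the hub name and the "Redis" connection-string key are assumptions for illustration.

```csharp
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);

// Keep hosting SignalR yourself across multiple App Service instances and
// let Redis fan messages out between them, so a broadcast from one instance
// reaches clients connected to the others.
builder.Services.AddSignalR()
    .AddStackExchangeRedis(builder.Configuration.GetConnectionString("Redis"));

var app = builder.Build();
app.MapHub<ChatHub>("/chat");
app.Run();

public class ChatHub : Hub { }
```

The app code is otherwise unchanged; the backplane is purely a startup-time wiring decision.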
After you go to the Standard service tier and scale out to multiple Web App instances, you can get pretty far hosting SignalR yourself. We've run over 5K concurrently connected clients this way with four Standard S3 instances. Four is a misleading number because we needed the horsepower for other portions of our app, not just SignalR.
Hosting SignalR yourself imposes some constraints, and there are various creative ways you can hang yourself. For example, with ASP.NET Core SignalR you're required to use ARR affinity in a multi-instance environment. That sucks. And I once implemented a tight-polling reconnect after a connection was closed from the front end. It was fun when our servers went down for over two minutes, came back up, and we had a few thousand web browsers tight-polling trying to reconnect. And in a Standard-tier Web App, it's really hard to get a handle on just what percentage of memory and CPU multiple websocket connections are consuming.
So after saying all of this, the answer is "it depends on a lot of things." Having done this both ways, I'd go ahead and use SignalR service.
Firstly, I don't think it's right to try to calculate a concurrent-connection limit here. You used ASP.NET Core SignalR and published the app to Azure App Service without using Azure SignalR Service, so the limit is determined by App Service. We also know that ASP.NET Core SignalR uses websocket connections, so we should check the allowed "Web sockets per instance" value for the App Service pricing tier. But there are also some other considerations:
If you scale an app in the Basic tier to two instances, you have 350 concurrent connections for each of the two instances. For Standard tier and above, there are no theoretical limits to web sockets, but other factors can limit the number of web sockets. For example, maximum concurrent requests allowed (defined by maxConcurrentRequestsPerCpu) are: 7,500 per small VM, 15,000 per medium VM (7,500 x 2 cores), and 75,000 per large VM (18,750 x 4 cores).
If there are other Azure web apps in the same App Service plan, they will also affect the connection limit, which is why we always recommend putting the SignalR app in a separate App Service plan/server.
By the way, even if we could calculate a definite connection limit, we can't ignore the bandwidth limitation; just imagine each SignalR message being 1 MB in size.
Another point, in this section:
An app that uses SignalR needs to keep track of all its connections, which creates problems for a server farm. Add a server, and it gets new connections that the other servers don't know about. For example, SignalR on each server in the following diagram is unaware of the connections on the other servers. When SignalR on one of the servers wants to send a message to all clients, the message only goes to the clients connected to that server.
So when you choose to publish your .NET 6 SignalR app to an Azure web app, using Azure SignalR Service is always recommended, unless your number of connections stays small, your message sizes are not "big", and your pricing tier is relatively high. Otherwise, even if the connection count doesn't reach the limit, your app may still run into bandwidth and performance issues.
As per the official documentation, each unit supports a default and a maximum number of concurrent connections, as shown below.
In the Azure portal, the Pricing tier and Features page appears to show the maximum limit, and there is an Autoscale feature, as per the screenshot below.
As far as I know, and as per the Azure portal, we can't exceed 1,000 connections per unit, but if you want more you can raise an Azure feature request as well as a support ticket.

Azure service bus or just Azure web app when using SignalR

I've never used Azure before, and now I have a problem.
I am going to build a chat app in Xamarin.Forms using SignalR. The chat will have 1:1 and 1:all group messaging. It's for a small group of 700 to 1,000 people. Looking around the internet, I couldn't work out whether I have to pay for Azure App Service (Standard) + Azure Service Bus, just Azure Service Bus, or just Azure App Service.
Short answer: you only need to pay for Azure App Service to create a single Web App if you want to use SignalR.
The only reason you'll need Service Bus is if you decide to scale your Web App to multiple instances. To synchronize across multiple web apps, SignalR requires a messaging backplane. That's what Service Bus would be used for. Your other options for a SignalR messaging backplane are Redis (very fast), or Azure SQL (slower). I personally use Service Bus for my SignalR messaging backplane. But again, you do NOT need Service Bus if you're only using one instance for your web app.
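Since this question concerns classic ASP.NET SignalR (the Xamarin-era stack), here is a sketch of what the Service Bus backplane wiring looks like there, assuming the Microsoft.AspNet.SignalR.ServiceBus package. The topic prefix "chat" and the placeholder connection string are assumptions.

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

// Classic ASP.NET SignalR (not ASP.NET Core): the Service Bus backplane is
// registered once at startup, before SignalR is mapped. Only needed when
// the web app is scaled to more than one instance.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        string connectionString = "<your Service Bus connection string>";
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "chat");
        app.MapSignalR();
    }
}
```

On a single instance you would simply omit the UseServiceBus call, which is exactly the "you do NOT need Service Bus" case above.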
If you have an MSDN account you get a certain amount of free credits to put towards it, but each type of Azure service generally has a charge. Check out the pricing pages on Azure and you'll be able to gauge which is your best option; alternatively, if you aren't tied to Microsoft, you can look at Amazon's RabbitMQ offerings.
If you're unsure on the pricing, speak to a Microsoft Sales person, they've helped me out in the past and are quite good.

Communicating with a Compute role with Service bus vs. WCF

Given a simple high-level architecture, e.g. a cloud service with a web role and a compute role, under what circumstances would we choose WCF as the communication method between the web role and the compute role, rather than Service Bus?
There is a lot of documentation and many examples regarding Service Bus, but I would like to understand whether there are any platform benefits to using Service Bus rather than WCF.
Given that the calls are synchronous and short, e.g. a typical API call for getting data onto the website, would you choose WCF over queuing messages and replies onto a queue?
It would appear, logically, that for a synchronous call WCF would offer the least overhead and latency.
I don't fully understand whether the platform offers any "clever" tricks to keep Service Bus operating as quickly as a TCP connection over WCF (given the queuing overhead?), and would like to understand this further.
At the moment, if I were to pick an implementation for this type of call, I would choose WCF, which may be a little naive.
Just to be clear, the calls always return data; they are not long-running or fire-and-forget.
Thanks!
I think it depends on what specifically you want to do.
Service Bus is typically used more for what I would call constant-contact type interactions. It should be more performant, but more complex to set up. It also has bi-directional communication capabilities, so you get a lot of extra flexibility out of it.
I would swap WCF for the more modern Web API. Both solve the same core problem of serving up content; I think of that as just an API, not a platform for message passing and handling. APIs and messaging solve two different core problems.
I would actually solve the likely problem differently and use Azure Websites + WebJobs. It's the same sort of thing: you can bind the WebJob to an Azure queue, table, or blob and put messages on that storage mechanism, which the job picks up and does something with. I don't believe the web role should rely on content coming back from the job; instead, the job can hit a SignalR hub on the Azure website post-completion, which pushes state back down to the affected parties.
Reference Materials:
WebJobs: https://azure.microsoft.com/en-us/documentation/articles/web-sites-create-web-jobs/
SignalR: http://signalr.net/
Azure Web Apps: https://azure.microsoft.com/en-us/services/app-service/web/
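The WebJobs pattern described above can be sketched roughly as follows, using the Azure WebJobs SDK's queue binding. The queue name "work-items" and the function name are assumptions for illustration.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// The web app drops a message on a queue; this function is triggered when a
// message arrives, does the work, and on completion could call back into a
// SignalR hub on the website to push status down to clients.
public class Functions
{
    public static void ProcessWorkItem(
        [QueueTrigger("work-items")] string message,
        ILogger logger)
    {
        logger.LogInformation("Processing: {Message}", message);
        // ...do the work, then notify interested clients via SignalR...
    }
}
```

The key design point is the one made above: the web role never waits on the job synchronously; results flow back asynchronously through SignalR.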

Signalr on Azure: service bus required for calling a single client?

I read that SignalR on Azure requires a service bus implementation (e.g. https://github.com/SignalR/SignalR/wiki/Azure-service-bus) for scalability purposes.
However, my server only makes callbacks to a single client (the caller):
// Invoke a method on the calling client
Caller.addMessage(data);
If I don't need SignalR's broadcasting functionality, is an underlying service bus still necessary?
The Service Bus dependency is not something specific to Azure. Any time you have multiple servers in play, some of your signalR clients will have created their connection to a specific server. If you want to keep multiple servers in sync something needs to handle the server to server real time communication. The pub-sub model of service bus lines up with this requirement quite well.
dfowleR lists a specific case of this in the comments. Make sure you read down that far!
If you are running on a single server (without the SLA on Azure), SignalR will work just fine on a Cloud Service Web Role as well as the new Azure Web Sites. I did a screencast on this simple scenario, which does not take on a service bus dependency but only runs on a single server.
In order to support the load-balanced scenario, is it possible to establish a "server to server" SignalR PersistentConnection between multiple instances (i.e. on Azure)?
If so, we could use a SQL Azure table where all instances register at startup, so the newest can connect to the previous ones.

Live meeting/Shared view like real time screen sharing service on top of Azure?

What might be the considerations for building a real-time screen-sharing service (somewhere close to Shared View or Live Meeting) on top of Windows Azure? Please share your thoughts.
For this, it is obvious that we have to create a custom TCP/IP server, to which clients can connect and exchange (publish/retrieve) data in real time over a custom protocol on top of TCP/IP.
I think Azure supports TCP/IP only for the web role as of now, on ports 80 and 443? Please share your thoughts.
Wow - almost 2 years old and no accepted answer! As Joannes stated, realtime is going to be a challenge - you'll need to carefully evaluate what that means to you in terms of response time and latency.
Windows Azure Worker and Web Roles have evolved considerably since you asked this. You can now have up to 25 input (e.g. external-facing) endpoints in your deployment, spread across any combination of Web and Worker roles - you define the port #s - you're not limited to 80 and 443. You may also have up to 25 Internal endpoints (used for inter-role communication).
Designing to run a desktop-sharing service in Windows Azure involves the same basic considerations as designing for Windows Server (that's what the Windows Azure VMs are running, after all: Windows Server 2008 R2). You'll need to deal with authentication and authorization, through your own custom solution or possibly with Access Control Services.
Ok, there "is" one thing you'll need to keep in mind: Windows Azure VMs are stateless, and you shouldn't assume a user will always connect to the same VM instance (there's no way to direct-access a specific instance of a Web or Worker role). So, you'll need to externalize any type of session-specific data (which is very easy, with both SQL Azure and the Windows Azure Cache service being very simple to set up and use as session providers).
Low latency is still a tough case for cloud computing providers (Azure being no exception). I think that's going to be the toughest part of the design. Also, since the Nov '09 release, worker roles can have entry points too (though I'm not sure about port limitations).