SignalR connection limits in Azure Web App - c#

I'm using ASP.NET Core SignalR in one of my ASP.NET Core MVC applications (.NET 6) which is hosted on Azure as a web app.
I'm struggling to find information on how many concurrent connections my web app can handle before SignalR stops accepting new ones.
I know that Azure provides a paid Azure SignalR service for which billing starts at 1000 concurrent connections. Does this indicate that my setup can only work with up to 1000 connections? So far, 400 concurrent connections have worked perfectly.

There are a few variables in play here, so nobody can tell you "Above X connections in a self-hosted SignalR solution, you need to use a SignalR service." Depending on how your solution is provisioned, one component or another may be the limiting factor.
For example, the App Service service limits show the maximum number of web sockets per Web App instance. For the Basic tier, it's 350. When you need 351, your options are:
Scale up your App Service Plan to Standard or higher.
Add an additional instance and use a Redis or Service Bus backplane.
Use SignalR service.
Disable websockets from SignalR and rely on something like long polling, which is limited by server resources.
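For the backplane option, the wiring in .NET 6 is a one-line addition; a minimal sketch, assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis package is installed, with a placeholder configuration key and hub name:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSignalR()
    // Every instance publishes and subscribes through the same Redis
    // server, so a message sent from one instance reaches clients
    // connected to the others.
    .AddStackExchangeRedis(builder.Configuration["Redis:ConnectionString"]);

var app = builder.Build();
app.MapHub<ChatHub>("/chathub"); // ChatHub is a placeholder hub class
app.Run();
```

Each instance still terminates its own WebSocket connections, so the per-instance socket limits above continue to apply; the backplane only solves cross-instance message delivery.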
After you go to the Standard service tier and scale out to multiple Web App instances, you can get pretty far hosting SignalR yourself. We've run over 5K concurrently connected clients this way with four Standard S3 instances. Four is a misleading number because we needed the horsepower for other portions of our app, not just SignalR.
Hosting SignalR yourself imposes some constraints, and there are various creative ways to hang yourself. For example, with ASP.NET Core SignalR you're required to have ARR affinity enabled in a multi-instance environment. That sucks. And I once implemented a tight-polling reconnect after a connection was closed from the front end. It was fun when our servers went down for over two minutes, came back up, and we had a few thousand web browsers tight-polling trying to reconnect. And in a Standard-tier Web App, it's really hard to get a handle on just what percentage of memory and CPU multiple WebSocket connections are consuming.
So after saying all of this, the answer is "it depends on a lot of things." Having done this both ways, I'd go ahead and use SignalR service.

Firstly, I don't think it's right to try to calculate a hard limit on concurrent connections for Azure App Service. You're using ASP.NET Core SignalR and publishing the app to Azure App Service without Azure SignalR Service, so the limit is determined by App Service. ASP.NET Core SignalR uses WebSocket connections, so you should check the allowed "Web sockets per instance" value for your App Service pricing tier. But there are also some other considerations:
If you scale an app in the Basic tier to two instances, you have 350 concurrent connections for each of the two instances. For Standard tier and above, there are no theoretical limits to web sockets, but other factors can limit the number of web sockets. For example, maximum concurrent requests allowed (defined by maxConcurrentRequestsPerCpu) are: 7,500 per small VM, 15,000 per medium VM (7,500 x 2 cores), and 75,000 per large VM (18,750 x 4 cores).
If there are other Azure web apps in the same App Service plan, they also count against the connection limit; that's why we always recommend putting the SignalR app in a separate App Service plan/server.
By the way, even if we could calculate a definite connection limit, we can't ignore the bandwidth limitation; just imagine each SignalR message being 1 MB in size.
Another point, from this section of the docs:
An app that uses SignalR needs to keep track of all its connections, which creates problems for a server farm. Add a server, and it gets new connections that the other servers don't know about. For example, SignalR on each server in the following diagram is unaware of the connections on the other servers. When SignalR on one of the servers wants to send a message to all clients, the message only goes to the clients connected to that server.
So when you publish your .NET 6 SignalR app to an Azure web app, using Azure SignalR Service is always recommended unless your connection count stays small, your message sizes stay modest, and your pricing tier is relatively high. Otherwise, even if the connection count never reaches the limit, your app may still run into bandwidth and performance issues.
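For comparison, pointing the same app at Azure SignalR Service is mostly a configuration change; a minimal sketch, assuming the Microsoft.Azure.SignalR package is installed and using a hypothetical ChatHub:

```csharp
var builder = WebApplication.CreateBuilder(args);

// With no arguments, AddAzureSignalR reads the connection string from
// the "Azure:SignalR:ConnectionString" configuration key / app setting.
// The service then terminates the WebSocket connections instead of
// your Web App instance, so the per-instance socket limits no longer apply.
builder.Services.AddSignalR().AddAzureSignalR();

var app = builder.Build();
app.MapHub<ChatHub>("/chathub"); // placeholder hub name
app.Run();
```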

As per the official documentation, concurrent connections per unit have a default and a maximum limit, as below.
In the Azure portal, under Pricing tier and Features, that appears to be the maximum limit, and there is an Autoscale feature, as per the screenshot below.
As far as I know, and as per the Azure portal, we can't exceed 1,000 connections per unit, but if you want more you can raise an Azure feature request as well as a support ticket.

Related

How many Azure SignalR Resources do I need?

We are going to implement a chat feature within our application using Azure SignalR Service. This will be our first attempt at using SignalR.
We have also identified other areas of our application that can make use of SignalR, but those areas are not related to the chat feature.
Is it advised to create an Azure SignalR resource for each logically structured feature/area? Or will a single Azure SignalR resource handle it all?
Thank you in advance.
You can scale your Azure SignalR resource to handle a large volume of traffic but keep in mind that you can only specify the Azure SignalR connection string in one place at startup. So, if you did need even more resources, you would need to host multiple web apps to supply this kind of resource division. You would also need to design it such that the application does not need to share the SignalR connection between the features/areas of your app.
My advice would be to start with the serverless Azure SignalR service and scale it until you start to reach a capacity limit which will depend heavily on your implementation and intelligently sending messages to the appropriate clients.
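One way to keep the features logically separated on a single resource is to give each feature its own hub class and endpoint; a sketch with hypothetical hub names, assuming the Microsoft.Azure.SignalR package:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSignalR().AddAzureSignalR(); // one resource, one connection string

var app = builder.Build();
// The hubs share the resource's connection quota, but clients, groups,
// and messages in one hub are isolated from those in the other.
app.MapHub<ChatHub>("/hubs/chat");
app.MapHub<NotificationsHub>("/hubs/notifications");
app.Run();
```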

Azure resource types for long running, scheduled jobs

I have a .NET Core Console Application that I need to deploy to Azure and schedule to run once a day. The application creates a TCP socket to get market data. I need to schedule it to run in the morning; the application will receive a close message from the market near the end of the day and close automatically. Approximate run time is 16 hours, 5 days a week.
Here are the options I've researched:
Cloud Service, which might be deprecated (I'm having a hard time validating the comments I've read to this effect)
Service Fabric - but this really looks like it's tailored to stateless applications that can spin up and down for scale. In my case, it should always be a single instance (I do like the self-"healing"; if my service does go down, it would be great if it were automatically restarted or a new one spun up)
Azure Web Job and Azure Scheduler. It looks like I could set this to "always on" and add a settings file that has the cron configuration, but it seems like a waste of resources to have it "always on". This option also appears to be limited in its deployment options - I can't set up (that I can see) a git integration and auto-deploy. This does seem like the way to go
I'm looking for the pro's and con's of these options above for my use case, or any other options that I might have missed.
There's one thing that seems to be overlooked here. This part:
The application is creating a TCP Socket to get market data.
Is that 80/TCP or 443/TCP? Does it talk HTTP over one of those ports?
Because if your application talks a custom protocol over an arbitrary TCP port, you can't use WebJobs: the App Service sandbox does not allow arbitrary port binding. This applies to ingress only. For egress (outbound) there's no restriction; you can make raw TCP requests from the WebJob to any destination and port.
From https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#network-endpoint-listening:
Network endpoint listening
The only way an application can be accessed via the internet is through the already-exposed HTTP (80) and HTTPS (443) TCP ports; applications may not listen on other ports for packets arriving from the internet.
There's no need to involve the Azure Scheduler service. WebJobs have a built-in cron implementation which is completely free.
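For a triggered WebJob, that built-in cron schedule lives in a settings.job file deployed next to the WebJob executable; a sketch using the six-field NCRONTAB format (seconds first), with an assumed 6:00 AM weekday start:

```json
{
  "schedule": "0 0 6 * * 1-5"
}
```

The schedule fires in the App Service's configured time zone (UTC unless the WEBSITE_TIME_ZONE app setting says otherwise).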
Also, the Always On feature really means "hit this site with synthetic requests every couple of minutes so it serves a 200 OK", thus preventing the Application Pool from being unloaded from memory due to inactivity.
I would use the tank. Can't find anything wrong with the tank if you can pick your tank size. There's also very little maintenance with tanks.
Size (id)    Cores   RAM       Net bandwidth   Total disk size
--------------------------------------------------------------
ExtraSmall   1       0.75 GB   Low             19 GB
Small        1       1.75 GB   Moderate        224 GB
...
ServiceDefinition.csdef if you need to listen on a socket:
<Endpoints>
  <InputEndpoint name="aRawTCPEndpoint" protocol="tcp" port="54321" localPort="54321" />
</Endpoints>
Where does your application keep state? Memory/disk/off-the-box database? Cloud Service roles are stateless in nature and if one instance gets sick it's barbecued and a new one is spun up. It's crucial that state be kept off-the-box, in durable storage - Blob/Table storage, Azure SQL, DocumentDB, etc.
Imagine you built your house 6 years ago, using this material called ClassicBrick for the structure. It is a good material: strong, waterproof, scratch-resistant. But recently this newer and better material - let's call it Armritis (which, by the way, is designed to be used in BRIDGES, not houses, but I digress) - came out, which everybody tells you is better in every way. Do you tear down the house? Cloud Services are not deprecated, and until I see an official Microsoft roadmap saying otherwise I'm not going to entertain this in any way.
On the topic of Service Fabric, it CAN do stateful and it's actually one of its biggest selling points:
From https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-quick-start/:
Create a stateful service
Service Fabric introduces a new kind of service that is stateful. A stateful service can maintain state reliably within the service itself, co-located with the code that's using it. State is made highly available by Service Fabric without the need to persist state to an external store.
Azure Functions is also worth a good look if you can speak HTTP over standard ports.
First, if you don't want to waste resources, we need to compare the pricing of Cloud Services, Service Fabric, and Web Jobs & Scheduler. Here is the pricing calculator. Because your console application runs on a specific schedule, it's better not to pay while the job isn't running. So a Web Job (if you have a web app already) plus the Scheduler is a good choice for your purpose.
Using a cloud service for such a tiny job is like using a tank to go to work
Service Fabric is mainly for building Micro-services style application not a console apps or jobs that run once a day
Web Jobs require a web app, so the web app is the remaining option: it is the host that the Web Job runs in.
You can create the scheduler and make it run every day at a specific time, or execute it manually on demand
I would go with either one solution, in this priority:
App Service: Web Apps (Web Job)
Virtual Machine
Cloud Service
App Service: Web Apps (Web Job) provides a free plan, and the free plan has started supporting Web Jobs. You will be able to work with files, should you need that. Just as mentioned in other answers, simply set a scheduler. If you have doubts and think it's not kosher to do this on a website, then think of it as getting a free website (if you use a paid plan) as a bonus. Either way, everything runs on a machine - be it with a web server or without. Maybe you will even revive some long-forgotten web project of yours?
Cloud Service and Virtual Machine are both straightforward and simple. Gotta be honest, I haven't used Cloud Service, yet I think you can connect to it via Remote Desktop just like to an ordinary VM. You will have complete control. I would choose Virtual Machine over the Cloud Service though, just because it is cheaper.
Solutions that will NOT work:
Azure Scheduler does not fit here, because it only allows HTTP/S requests and posting messages to Azure Storage queues, Azure Service Bus queues, or Azure Service Bus topics.
Personally I would go with a WebJob without AlwaysOn, and use Azure Scheduler to fire the WebJob at the desired times using HTTPS. Then the WebJob can do the calls needed to get the data. This does not need AlwaysOn since the Scheduler call wakes it up anyway.
Azure Functions might also be worth a look, though they aren't meant for long-running tasks.
This would also probably be the cheapest option as the Web App can probably run on Free tier (depending on how long the job takes) and the Scheduler can also be Free tier.

Hundreds of threads from Worker Role connecting to SignalR on Web Role

My system has a Cloud Service with a Worker Role that reads messages from a queue (Azure Service Bus) and spawns a thread that uses the C# SignalR client to connect to a Cloud Service running a Web Role hosting the SignalR Hub. The worker thread runs for about 5 minutes doing various things including intermittently sending messages to the Hub - maybe 25 messages total. I am scaling out with Azure Service Bus topics - the default of 5. The Cloud Services are separate but reside in the same Virtual Network - the Worker Role points to the load balancer probes for the Web Role (but right now I am only running a single instance of each Role).
I am trying to determine the capacity of both the Worker Role (with the SignalR clients) and the Web Role (hosting the SignalR Hub).
I can run 200 concurrent threads on the Worker Role with each connecting, exchanging messages, and disconnecting cleanly. Neither Role experiences more than a 35% CPU spike during the testing. SignalR Performance counters all look great - there are no errors, no SSE or LP connections, and no scaleout queueing or scaleout errors.
When I try 300, suddenly all but 1 of my threads on the Worker Role cannot connect and experience TimeoutExceptions that read "Transport timed out trying to connect". I enabled tracing on the C# client in the Worker Role and I see that WebSockets, SSE, and LP all fail (Auto: Failed to connect to using transport webSockets/serverSideEvents/longPolling).
I am hoping to understand if:
a) my expectations are off - that I expect that I should be able to have more than 200+ concurrent connections from my Worker Role to my WebRole,
b) are the IIS settings for a Web Role adequate out of the box? Note that I have applied the SignalR performance changes supplied in the Wiki
c) are there Worker Role configurations / limitations with the number of concurrent connections I can make to a single source? Note that I applied the system.net configuration to allow a max of 1000.
d) is the type of Cloud Service size inhibiting me in any way? Both are set to "Medium" size which is 2 cores and 3.5 GB. Am I short-changing anything by stay small? The idea was to find the limits of this size server and then be able to apply more instances in real-time as needed.
It should be stated that if I add instances, I can get past this limitation. But I want to understand why my current bottleneck is 200.
Any ideas or comments are welcome. I'm kind of stuck.
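On point (c): for the .NET Framework SignalR client, the usual outbound-connection knob is ServicePointManager (or the equivalent system.net/connectionManagement element in app.config); a sketch, with 1000 mirroring the question's stated value:

```csharp
// In the Worker Role's OnStart(), before any SignalR connections are made.
// .NET Framework caps concurrent outbound HTTP connections per host at a
// small default, which can surface as transport timeouts under load.
System.Net.ServicePointManager.DefaultConnectionLimit = 1000;
```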

Signalr on Azure: service bus required for calling a single client?

I read that Signalr on Azure requires a service bus implementation (e.g. https://github.com/SignalR/SignalR/wiki/Azure-service-bus) for scalability purpose.
However, my server only makes callbacks to a single client (the caller):
// Invoke a method on the calling client
Caller.addMessage(data);
If I don't need SignalR's broadcasting functionality, is an underlying service bus still necessary?
The Service Bus dependency is not something specific to Azure. Any time you have multiple servers in play, some of your SignalR clients will have created their connection to a specific server. If you want to keep multiple servers in sync, something needs to handle the server-to-server real-time communication. The pub/sub model of Service Bus lines up with this requirement quite well.
dfowleR lists a specific case of this in the comments. Make sure you read down that far!
If you are running on a single server (without the SLA on Azure), SignalR will work just fine on a Cloud Service Web Role as well as the new Azure Web Sites. I did a screencast on this simple scenario that does not take on a service bus dependency, but only runs on a single server.
In order to support the load-balanced scenario, is it possible to establish a "server to server" SignalR PersistentConnection between multiple instances (i.e. on Azure)?
If so, we could use a SQL Azure table where all instances register at startup, so the newest can connect to the previous ones.

Live meeting/Shared view like real time screen sharing service on top of Azure?

What might be the considerations for building a real time screen sharing service (some where close to shared view or live meeting) on top of Windows Azure? Please share your thoughts.
For this, it's obvious that we have to create a custom TCP/IP server to which clients can connect and exchange (publish/retrieve) data in real time, over a custom protocol on top of TCP/IP.
I think Azure supports TCP/IP only for the web role as of now, on port 80 and 443? Please share your thoughts.
Wow - almost 2 years old and no accepted answer! As Joannes stated, realtime is going to be a challenge - you'll need to carefully evaluate what that means to you in terms of response time and latency.
Windows Azure Worker and Web Roles have evolved considerably since you asked this. You can now have up to 25 input (e.g. external-facing) endpoints in your deployment, spread across any combination of Web and Worker roles - you define the port #s - you're not limited to 80 and 443. You may also have up to 25 Internal endpoints (used for inter-role communication).
Designing to run a desktop-sharing service in Windows Azure would have the same basic considerations as when designing for Windows Server (that's what the Windows Azure VMs are running, afterall - Windows Server 2008 R2). You'll need to deal with authentication and authorization, through your own custom solution or possibly with Access Control Services.
Ok, there "is" one thing you'll need to keep in mind: Windows Azure VMs are stateless, and you shouldn't assume a user will always connect to the same VM instance (there's no way to direct-access a specific instance of a Web or Worker role). So, you'll need to externalize any type of session-specific data (which is very easy, with both SQL Azure and Windows Azure Cache service both very simple to set up and use as session providers).
Low latency is still a tough case for cloud computing providers (Azure being no exception). I think that's going to toughest part in the design. Then, since the Nov'09 release, worker roles can have entry points too (not sure about port limitations though).
