SQL Connections Max from .NET? - C#

Using .NET running in an Azure cloud service, with a process with many SQL Server connection strings (SQL authentication, not Windows), what is the maximum number of connections that can be made to different databases?
I am putting together a system that needs to read/write to many MSSQL instances on different hosts at the same time, and I am looking to gather information/documentation on limits. This is not the same as multiple connections going to the same database; this is, for example, 40 connection strings (and therefore 40 connection pools) to 40 different databases under different security contexts.
Thanks.

Maximum Capacity Specifications for SQL Server
User connections: 32,767
See also:
Maximum number of concurrent users connected to SQL Server 2008
For your question (the maximum number of connection pools): I looked at the ADO.NET source and found that the pools are stored (_poolCollection) in a ConcurrentDictionary, so the maximum number of pools is the maximum number of entries in the dictionary. The documentation says:
For very large ConcurrentDictionary objects, you can increase the maximum array size to 2 gigabytes (GB) on a 64-bit system by setting the <gcAllowVeryLargeObjects> configuration element to true in the run-time environment.
So I think there is no hard limit; it depends on the machine.
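As an illustration (not from the ADO.NET source, and the host/database names below are hypothetical), a minimal C# sketch of the behavior described: ADO.NET creates one pool per distinct connection string, lazily, on the first Open():

using System.Data.SqlClient;

class PoolDemo
{
    static void Main()
    {
        // 40 different databases under different credentials => 40 separate pools.
        for (int i = 1; i <= 40; i++)
        {
            string cs = $"Server=host{i};Database=db{i};User Id=user{i};Password=pw{i};";
            using (var conn = new SqlConnection(cs))
            {
                conn.Open(); // the first Open() for this exact string creates its pool
                // ... read/write against this database ...
            } // Dispose() returns the connection to that string's pool, not to a shared one
        }
    }
}

Each pool then grows independently up to its own Max Pool Size (default 100), so the practical ceiling is memory and server-side connection limits, not a fixed pool count.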

Related

How to set Max Pool Size of SqlConnection string in c# on asmx web service?

I have a physical server with SQL Server 2008 installed on it. Recently sqlservr.exe has been consuming excessive CPU and RAM.
When I investigated the reasons for this, I learned that it was due to the connections being opened and to connection pooling being enabled.
All my connections to my database are in a single asmx file, so there is no other connection string.
Here is my connection string:
Server=server;Database=dbxxx;User Id=sas;Password=sssxxxx;Max Pool Size=1000;
The program that connects to this web service has approximately 3 to 4 thousand users per day, and the web service runs approximately 20 thousand SQL queries per day.
The server has 8 GB of RAM; memory usage starts at around 1.7 GB in the morning, climbs to 7 GB, and never decreases.
Since the web service uses a single connection string, how should Max Pool Size be set? Is my current setting right, or is it excessive?
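The question does not include the data-access code, but a hedged sketch of the usual remedy: dispose every connection so it returns to the pool promptly, and leave Max Pool Size at its default of 100 unless monitoring shows genuine pool exhaustion. The method and query below are hypothetical:

using System.Data.SqlClient;

public static class Db
{
    // Default Max Pool Size is 100; raising it to 1000 tends to hide
    // leaked (never-disposed) connections rather than fix them.
    const string Cs = "Server=server;Database=dbxxx;User Id=sas;Password=sssxxxx;";

    public static int GetUserCount()
    {
        using (var conn = new SqlConnection(Cs))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Users", conn)) // hypothetical query
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        } // both are disposed here even if an exception is thrown,
          // so the underlying connection goes straight back to the pool
    }
}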

Memory is not released after executing queries with SqlBulkCopy

I get that the process terminates with all the records inserted in the database, but when I then look at Task Manager in Windows, the sqlservr.exe process still shows 3.663.263 (about 3.5 GB) in use, and the memory is not released.
You can control the server memory configuration by setting MAX SERVER MEMORY. Here is a good overview document: https://msdn.microsoft.com/en-us/library/ms178067.aspx
The minimum values you can set for max server memory are 64 MB on 32-bit systems and 128 MB on 64-bit systems.
Set min server memory and max server memory to span a range of memory
values. This method is useful for system or database administrators to
configure an instance of SQL Server in conjunction with the memory
requirements of other applications that run on the same computer.
Use min server memory to guarantee a minimum amount of memory
available to the SQL Server Memory Manager for an instance of SQL
Server. SQL Server will not immediately allocate the amount of memory
specified in min server memory on startup. However, after memory usage
has reached this value due to client load, SQL Server cannot free
memory unless the value of min server memory is reduced.
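To make that concrete, here is a C# sketch of capping max server memory via the documented sp_configure procedure (the value is illustrative, and the call requires ALTER SETTINGS permission; 'max server memory (MB)' is an advanced option, hence the first statement):

using System.Data.SqlClient;

static class ServerMemory
{
    public static void SetMaxServerMemoryMb(string connectionString, int megabytes)
    {
        const string sql = @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', @mb;
RECONFIGURE;";
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@mb", megabytes); // e.g. 4096 for a 4 GB cap
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}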

Is there a maximum connection limit for SQL Server?

I finished my desktop application using ADO.NET and SQL Server 2008. I planned to run my application on 10 computers connected to a SQL Server database on a Windows 7 computer via a local network.
Is there any problem if all computers are running simultaneously?
And what about SQL Server connection pools?
Note that when I try to find the active connections to my database on SQL Server using this command:
sp_who
I find more than 20 active connections.
Does this cause a problem?
If you're just looking at ten clients, SQL Server should be able to handle that.
Beyond that, it's just a matter of designing for that kind of load. For instance, you want to make sure you've got the right locks happening at the right times. In essence, having multiple concurrent clients accessing the same database is the same as having multiple, concurrent threads accessing the same variables on a single machine. The database is a bit smarter than just, say, an int, but you'll still want to make sure you're on top of how things happen.
For instance, say you have a table for Tasks that you want to complete, and it has a column for the ID and one for the LastCompleted time. On a single client, you might want to write something that accesses it like this:
1. Fetch the next one which has LastCompleted < DATEADD(HOUR, -1, GETDATE())
2. Do that task, which takes five minutes
3. UPDATE the table to set LastCompleted = GETDATE()
You could then complete all tasks every hour, pretty easily.
However, if you were faced with multiple clients accessing it, this would result in multiple clients grabbing the same task and doing it concurrently. In that scenario, you'd probably want another column to indicate InProgressAsOf as a date; then you can retry orphaned tasks as necessary, but you never risk overlapping work. And even with that, you'd probably want to use an OUTPUT clause on your UPDATE to make sure it is all atomic (see the C# sketch after these steps):
1. Fetch the result from UPDATE Tasks SET InProgressAsOf = GETDATE() OUTPUT DELETED.* WHERE LastCompleted < DATEADD(HOUR, -1, GETDATE()) AND InProgressAsOf < DATEADD(MINUTE, -10, GETDATE())
2. Do that task, which takes five minutes
3. UPDATE the table to set LastCompleted = GETDATE()
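A minimal C# sketch of that atomic claim (table and column names follow the example above; the Id column and TOP (1) are assumptions):

using System.Data.SqlClient;

static class TaskQueue
{
    // The UPDATE marks one due task in-progress and OUTPUT returns it in
    // the same statement, so two clients can never claim the same row.
    public static int? ClaimNextTask(SqlConnection conn)
    {
        const string sql = @"
UPDATE TOP (1) Tasks
SET InProgressAsOf = GETDATE()
OUTPUT DELETED.Id
WHERE LastCompleted < DATEADD(HOUR, -1, GETDATE())
  AND InProgressAsOf < DATEADD(MINUTE, -10, GETDATE());";
        using (var cmd = new SqlCommand(sql, conn))
        {
            object id = cmd.ExecuteScalar(); // null when no task is due
            return id == null ? (int?)null : (int)id;
        }
    }
}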
But yes, as long as you're on top of those nuances of multithreaded operations, you should be fine running ten concurrent connections against the same database.
For Windows 7 running SQL Server:
It appears that only 20 simultaneous connections are allowed from other devices. However, depending on the applications running on those 20 devices, more connections may be initiated, and that could be causing what you see.
Also make sure that you are properly closing connections within your application as necessary and not leaving connections open unnecessarily.
For any Windows Server OS, the following applies in case your need for simultaneous connections is greater:
From How to: Set User Connections (SQL Server Management Studio) - MSDN
To set user connections:
1. In Object Explorer, right-click a server and click Properties.
2. Click the Connections node.
3. Under Connections, in the Max number of concurrent connections box, type or select a value from 0 through 32767 to set the maximum number of users that are allowed to connect simultaneously to the instance of SQL Server.
4. Restart SQL Server.
Maximum connections, therefore, is 32,767 and this is true through SQL Server 2014.
You can run sp_configure to get the current running value as well.
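For example, a C# sketch of reading the current value (assumes sufficient permissions; 'user connections' is an advanced option, hence the first statement):

using System;
using System.Data.SqlClient;

static class ConfigCheck
{
    public static void ShowUserConnections(string connectionString)
    {
        const string sql = @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'user connections';";
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // sp_configure returns: name, minimum, maximum, config_value, run_value
                while (reader.Read())
                    Console.WriteLine("{0}: run_value = {1}", reader["name"], reader["run_value"]);
            }
        }
    }
}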

Out of proc SessionState memory management

We're using an out-of-proc session state service / ASP.NET session state. We know we're having problems with this, as it's been abused in the past (too much stored in session state, too often), so we're in the process of moving to a more scalable system.
In the meantime, though, we're trying to get our heads around how a session state service manages its memory and what limits we have. None of the Microsoft docs seem to go into any detail.
Specifically I want to know:
What are the limits on how much the standard out-of-proc session state service (the ASP.NET State Service, installed along with IIS in the Windows management console) can store on x64?
Is there a per-user limit?
There is no limit beyond that of the machine hosting the service. If it has 16 gigs of RAM, assuming a few gigs are used for other processes / the OS / etc., there would be something like 13 GB of memory available for the session data. The data is not persisted to disk so the data only ever exists in RAM / memory; this is why when you restart the service all sessions are gone. The memory is volatile and works like a RAM disk.
If you're reaching the memory limits of the machine hosting your session state service, you are either storing too much data per user, or have too many users storing a little data. You're already on the right track as the next step is moving to a distributed session state provider to scale correctly. This is often achieved via a distributed caching system which comes with a session state provider, or by writing your own provider against said system.
There is no per-user limit on data, but note that out of process communication always happens via serialization. Therefore, there is a practical limit as serializing/deserializing a gig of user data per request is going to be very slow no matter how you approach it.
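As a small illustration of that serialization point (the type below is hypothetical, not from the thread): with the out-of-proc state server, anything placed in Session must be serializable, and the whole object is re-serialized on every request that touches it, so session types should stay small.

using System;

// Out-of-proc session state serializes every stored object, so the type
// must be marked [Serializable] (non-serializable types fail at request end).
[Serializable]
public class CartSummary
{
    public int ItemCount { get; set; }
    public decimal Total { get; set; }
}

// In a page or handler (illustrative):
//   Session["cart"] = new CartSummary { ItemCount = 3, Total = 59.97m };
// A large object graph stored this way is serialized, sent to the state
// service, and deserialized back on every request that reads it.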

Detecting C# and SQL Server memory conflicts

I have a memory-intensive C# app that resides on the same server as SQL Server.
I have tweaks in my application to limit in-memory caching and I am aware of how to set (and have set) maximum memory limits on my SQL Server.
PROBLEM: When the C# app wants to use more memory than is available because of SQL Server's caching, my app slows down greatly, presumably because it is pushed into virtual memory (paging). I can prevent this most of the time by eyeballing how much RAM is available and setting the appropriate values in my custom C# code (whether to cache to RAM and/or to SQL Server) and in SQL Server's settings.
HOWEVER: Sometimes the machine's memory usage goes beyond my eyeballed boundaries due to other processes on the machine, typical OS needs fluctuating, etc.
I have noticed that SQL Server will often yield RAM to other processes, such as Chrome or MS Word, but it doesn't seem to do so for my process. I have a gut feeling that my C# app isn't actively using all of the cached data in SQL Server...
So, how do I detect when SQL Server won't yield the RAM to my application and/or how do I detect when my application cannot allocate additional bytes of physical RAM?
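One way to probe this from the C# side (not from the question, but both APIs are standard .NET): a performance counter reports machine-wide available RAM, and MemoryFailPoint asks the CLR whether a large allocation is likely to succeed before you commit to it. A sketch:

using System;
using System.Diagnostics;
using System.Runtime;

class MemoryCheck
{
    static void Main()
    {
        // Machine-wide free physical memory in MB, including RAM that
        // SQL Server has not yet claimed or has yielded back.
        using (var free = new PerformanceCounter("Memory", "Available MBytes"))
            Console.WriteLine("Available RAM: {0} MB", free.NextValue());

        try
        {
            // Ask the CLR whether ~500 MB (illustrative) could be allocated;
            // throws InsufficientMemoryException up front rather than failing
            // midway through building the cache. Note it checks virtual memory
            // too, so pair it with the counter above to avoid paging.
            using (new MemoryFailPoint(500))
            {
                // build the in-RAM cache here
            }
        }
        catch (InsufficientMemoryException)
        {
            // fall back: cache to SQL Server or disk instead of RAM
        }
    }
}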
