Is there a maximum connection limit to SQL Server - c#

I finished my desktop application using ADO.NET and SQL Server 2008. I planned to run my application on 10 computers connected to a SQL Server database on a Windows 7 computer via a local network.
Is there any problem if all computers are running simultaneously?
And what about SQL Server connection pools?
Note that when I try to find the active connections to my database on SQL Server using this command:
sp_who
I find more than 20 active connections.
Does this cause a problem?

If you're just looking at ten clients, SQL Server should be able to handle that.
Beyond that, it's just a matter of designing for that kind of load. For instance, you want to make sure you've got the right locks happening at the right times. In essence, having multiple concurrent clients accessing the same database is the same as having multiple, concurrent threads accessing the same variables on a single machine. The database is a bit smarter than just, say, an int, but you'll still want to make sure you're on top of how things happen.
For instance, say you have a table for Tasks that you want to complete, and it has a column for the ID and one for the LastCompleted time. On a single client, you might want to write something that accesses it like this:
Fetch the next one which has LastCompleted < DATEADD(HOUR, -1, GETDATE())
Do that task, which takes five minutes
UPDATE the table to set LastCompleted = GETDATE()
You could then complete all tasks every hour, pretty easily.
However, if you were faced with multiple clients accessing, this would result in multiple clients grabbing the same task multiple times, and doing it concurrently. In that scenario, you'd probably want another column to indicate InProgressAsOf as a date, then you can retry orphaned tasks as necessary but you never risk overlapping things happening. And even with that, you'd probably want to use OUTPUT clauses on your UPDATE to make sure it was all atomic.
Fetch the result from UPDATE Tasks SET InProgressAsOf = GETDATE() OUTPUT DELETED.* WHERE LastCompleted < DATEADD(HOUR, -1, GETDATE()) AND InProgressAsOf < DATEADD(MINUTE, -10, GETDATE())
Do that task, which takes five minutes
UPDATE the table to set LastCompleted = GETDATE()
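To make that concrete, here is a minimal ADO.NET sketch of the claim-and-complete pattern described above. The TOP (1), the ID column, and the connection string are illustrative assumptions, not part of the original schema.

using System;
using System.Data.SqlClient;

class TaskWorker
{
    // Connection string and schema details are placeholders.
    const string ConnStr = "Server=.;Database=Work;Integrated Security=true";

    static void Main()
    {
        using (var conn = new SqlConnection(ConnStr))
        {
            conn.Open();

            // Claim one task atomically: the UPDATE marks it in progress and the
            // OUTPUT clause returns the claimed row in the same statement, so two
            // clients can never grab the same task.
            var claim = new SqlCommand(
                @"UPDATE TOP (1) Tasks
                  SET InProgressAsOf = GETDATE()
                  OUTPUT DELETED.ID
                  WHERE LastCompleted < DATEADD(HOUR, -1, GETDATE())
                    AND InProgressAsOf < DATEADD(MINUTE, -10, GETDATE());", conn);

            object id = claim.ExecuteScalar();
            if (id == null) return;            // nothing due right now

            DoTask((int)id);                   // the five-minute task

            var done = new SqlCommand(
                "UPDATE Tasks SET LastCompleted = GETDATE() WHERE ID = @id;", conn);
            done.Parameters.AddWithValue("@id", (int)id);
            done.ExecuteNonQuery();
        }
    }

    static void DoTask(int id) { /* do the work */ }
}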
But yes, as long as you're on top of those nuances of multithreaded operations, you should be fine running ten concurrent connections against the same database.

For Windows 7 running SQL Server:
It appears that only 20 simultaneous connections are allowed from other devices (Source). However, depending on the applications running on those 20 devices, more connections may be initiated, and that could be causing what you see.
Also make sure that you are properly closing connections within your application as necessary and not leaving connections open unnecessarily.
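For example, a minimal sketch of the kind of connection handling meant here (the query and table name are hypothetical), so the connection is always returned to the pool:

using System.Data.SqlClient;

class Example
{
    static int CountOrders(string connectionString)
    {
        // The using blocks guarantee the command and connection are disposed
        // (and the connection returned to the pool) even if an exception is thrown.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn)) // dbo.Orders is just an example table
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}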
For any Windows Server OS, the following applies in case you need more simultaneous connections:
From How to: Set User Connections (SQL Server Management Studio) - MSDN
To set user connections
In Object Explorer, right-click a server and click Properties.
Click the Connections node.
Under Connections, in the Max number of concurrent connections box, type or select a value from 0 through 32767 to set the maximum number of users that are allowed to connect simultaneously to the instance of SQL Server.
Restart SQL Server.
The maximum number of connections, therefore, is 32,767, and this holds true through SQL Server 2014.
You can run sp_configure to get the current running value as well.
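As a quick check from code, you can read the same values from sys.configurations (which is what sp_configure reports); 0 means unlimited, i.e. up to the 32,767 maximum. A sketch only; the connection string is a placeholder.

using System;
using System.Data.SqlClient;

class UserConnectionsCheck
{
    static void Main()
    {
        const string connStr = "Server=.;Integrated Security=true"; // placeholder

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT value, value_in_use FROM sys.configurations WHERE name = 'user connections';",
            conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (reader.Read())
                    Console.WriteLine("configured = {0}, running = {1}",
                        reader["value"], reader["value_in_use"]);
            }
        }
    }
}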

Related

Application switched to live URL causes excessive DB usage

Very very strange issue here... Apologies in advance for the wall of text.
We have a suite of applications running on an EC2 instance, all connecting to an RDS instance.
We are hosting the staging and production applications on the same EC2 server.
With one of the applications, as soon as the staging app is moved to prod, over 250 or so connections to the DB are opened, causing the RDS instance to max out CPU usage and make the entire suite slow down. The staging application itself does not have this issue.
The issue can be replicated both by deploying the app via our Octopus setup and by physically copy-pasting the BIN/Views folder from staging to live.
The connections are instant, boosting the CPU usage to 99% in less than a minute.
Things to note...
Running the query from "how to see active SQL Server connections?" will show the bulk of the connections, none of which have a LoginName.
Resource Monitor on the FE server lists the connections, all coming from IIS and seemingly scanning all outbound ports while attempting to connect to the DB server on its port. (The FE server address and DB server address are blacked out in the screenshot, which shows only a snippet of all of the connections.)
The app needs users to log in to perform 99.9% of tasks. There is a public "Forgot your password" method that was updated to accept either a username or password. There was no change to the form structure or form action URL, just an extra check in the back end.
Other changes were around how data is displayed and payment restrictions under certain conditions, both of which require a login.
Things I've tried...
New app pools
Just giving it a few days to forget this ever happened
Not using Octopus to publish
Checking all areas that were updated between versions to see if a connection was not closed properly.
Really at a loss as to what is happening. This is the first time that I've seen something like this. Especially strange that staging is fine, but the same app on another URL/Connection string fails so badly.
The only thing I can think of would potentially be some kind of scraper polling the public form, but that makes no sense, because why isn't it happening with the current app...
Is there something in AWS that can monitor the calls that are being made? I vaguely remember something in NewRelic being able to do so.
Any suggestions and/or similar experiences are welcomed.
Edits.
Nothing outstanding in logs for the day of the issue (yesterday)
No incoming traffic to match all of the outbound requests
No initialisation is performed by the application on startup
Update...
We use ADO for most of our queries. A query was updated to get data from different tables. The method name and parameters were not changed, just the body of the query. If I use sys.dm_exec_sql_text to see what is getting sent to the DB, I can see that it IS the updated query being sent in each of the hundreds of connections. They are all showing as suspended, though... Nothing has changed with regard to how that query is sent to the server, just the body of the query itself...
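For reference, the kind of check described above looks roughly like this: join sys.dm_exec_requests to sys.dm_exec_sql_text to see the statement text and status of every current request. A sketch only; it requires VIEW SERVER STATE, and the connection string is a placeholder.

using System;
using System.Data.SqlClient;

class ActiveQueryDump
{
    static void Main()
    {
        const string connStr = "Server=rds-host;Database=master;User ID=...;Password=...;"; // placeholder

        const string sql = @"
            SELECT r.session_id, r.status, t.text
            FROM sys.dm_exec_requests AS r
            CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} {1}: {2}",
                        reader["session_id"], reader["status"], reader["text"]);
            }
        }
    }
}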
So, one of the other queries that was published in the update broke it. We reverted only that query and deployed a new version, and it is fine.
Strangely enough, it's one that is run in one form or another across the entire suite, but it just died under any sort of load that wasn't staging, which is why I assumed it would be the last place to look.

SQL Connections Max from .NET?

Using .NET running in an Azure cloud service, with a process with many SQL Server connection strings (SQL authentication, not Windows), what is the maximum number of connections that can be made to different databases?
I am putting together a system that needs to read/write to many MSSQL instances on different hosts at the same time and am looking to gather information/documentation on limits. This is not the same as multiple connections going to the same database, this is for example 40 strings (therefore 40 connection pools) to 40 different databases under different security contexts.
Thanks.
Maximum Capacity Specifications for SQL Server
User connections: 32767
See also
Maximum number of concurrent users connected to SQL Server 2008
For your question (the maximum number of connection pools): I looked at the ADO.NET code and found that the pools are stored (_poolCollection) in a ConcurrentDictionary, so
the maximum number of pools is the maximum number of entries in the dictionary. The documentation says:
For very large ConcurrentDictionary objects, you can increase the maximum array size to 2 gigabytes (GB) on a 64-bit system by setting the <gcAllowVeryLargeObjects> configuration element to true in the run-time environment.
I think there is no real limit in practice; it depends on the machine.
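In other words, each distinct connection string gets its own pool, and the pool collection has no hard-coded cap. A rough sketch (the connection strings below are placeholders):

using System.Data.SqlClient;

class PoolPerConnectionString
{
    static void Main()
    {
        // Each distinct string (different host, database, or credentials) maps to
        // its own entry in ADO.NET's pool collection, i.e. its own pool.
        string[] connectionStrings =
        {
            "Server=host1;Database=Db1;User ID=user1;Password=pw1;",
            "Server=host2;Database=Db2;User ID=user2;Password=pw2;",
            // ... 40 different strings would mean 40 separate pools
        };

        foreach (var cs in connectionStrings)
        {
            using (var conn = new SqlConnection(cs))
            {
                conn.Open();   // the first Open for a given string creates its pool
            }                  // Dispose returns the connection to that pool
        }
    }
}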

Very slow opening MySQL connection using MySQL Connector for .net

I am trying to solve the problem of very long response times from MySQL when opening a connection using the MySQL Connector for .net.
I have installed MySQL 5.5 running on an Azure VM (Server 2008) with --skip-name-resolve, and the database user accounts' host restrictions use IP addresses. I am using the latest MySQL Connector for .net in my WCF service running on Azure (in the same location, US East; I have been using a trial subscription, no affinity set). My connection string in the WCF service uses the internal IP address of the VM hosting MySQL as the server parameter value. I also have "pooling = true;Min Pool Size=2;" just in case (I have tried without these parameters too).
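For context, the setup described above boils down to something like this (all values here are placeholders, not the actual settings):

using MySql.Data.MySqlClient;

class ConnectionExample
{
    static void Main()
    {
        // Internal IP of the MySQL VM as the server value, with pooling enabled.
        var connectionString =
            "Server=10.0.0.5;Database=mydb;Uid=appuser;Pwd=secret;" +
            "Pooling=true;Min Pool Size=2;";

        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();   // this is the call that stalls after a few idle minutes
            // ... run queries ...
        }
    }
}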
When tracing the WCF service, the query response times once the service is running and processing requests are pretty good (even where each query result is unique and so not being cached), and I have no issues with the performance of MySQL provided it's getting hit frequently.
But the huge problem I haven't been able to crack is the length of time it takes to open the connection to MySQL after no calls to the database have been made for about 3 or 4 minutes. If no database calls are made for a few minutes, it takes 8 or 9 seconds or more to open the connection again. I wrapped the actual "conn.Open();" call with trace statements before and after, and this is the behaviour I see logged time and time again after a few minutes of inactivity.
Incidentally, I have also tried (and still am using) the 'using' style of connection handling to ensure that the MySQL Connector is managing the connection pool.
e.g.:
using (var conn = new MySqlConnection(Properties.Settings.Default.someConnectionString)) { /* ... statements ... */ }
I feel like I have reached a dead end on this one so any suggestions would be greatly appreciated.
I can explain why what you describe happens ("the length of time it takes to open the connection to MySQL after no calls to the database have been made for about 3 or 4 minutes; if no database calls are made for a few minutes it takes 8 or 9 seconds or more to open the connection again"):
Windows Azure Web Sites uses the concept of hot (active) and cold (inactive) sites: if a website has no active connections, the site goes into a cold state, which means the host IIS process exits. When a new connection is made to that website, it takes a few seconds to get the site ready and working again. Because you have a MySQL backend associated with this website, it takes a few seconds longer still to get the request served, since some time is taken by the IIS host process to start. That is why, after a few minutes of inactivity, the response time is longer.
You can see the following presentation for more details on Windows Azure Hot (active) and Cold (inactive) Websites:
http://video.ch9.ms/teched/2012/na/AZR305.pptx
At this time, I am not sure how you can keep the website always hot, whether moving to a shared website would help, or whether it is possible at all. What I can suggest is that you post your issue to the Windows Azure Web Sites forum, and someone from that team will provide an appropriate answer.

ORA-1000 Cursor count exceeded. Doesn't replicate on different servers

I have two different IIS servers that are running IIS 7.0 and running the same build of code for my ASP.NET web application with an Oracle back-end. They are both using the same Oracle database but when I run the application on one server, it causes a Cursor Count Exceeded error whereas on the other server the code runs perfectly fine and never encounters the error. The one that is "broken" just so happens to be the production server vs. the development server.
What would be the cause of this? And if there is a way to kill Oracle sessions in ASP.NET, how do you do it, besides waiting for them to time out?
Assuming it was the ORA-01000 error, the solution is simple: increase the value of open_cursors in the database configuration.
Assuming a release >= 10g and using spfile:
alter system set open_cursors = 512;
The change should be in effect immediately.
The default value (50?) is a bit low for many situations.
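If you want to confirm the running value from the application side, a sketch along these lines could work (it assumes the Oracle managed data provider, a login allowed to read v$parameter, and a placeholder connection string):

using System;
using Oracle.ManagedDataAccess.Client;

class OpenCursorsCheck
{
    static void Main()
    {
        const string connStr = "User Id=app;Password=secret;Data Source=orcl"; // placeholder

        using (var conn = new OracleConnection(connStr))
        using (var cmd = new OracleCommand(
            "SELECT value FROM v$parameter WHERE name = 'open_cursors'", conn))
        {
            conn.Open();
            Console.WriteLine("open_cursors = " + cmd.ExecuteScalar());
        }
    }
}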

C#, Sql Server 2008: Stream large result set to end user only works on some databases

I have a long running query that returns a large data set. This query is called from a web service and the results are converted to a CSV file for the end user. Previous versions would take 10+ minutes to run and would only return results to the end user once the query completes.
I rewrote the query so that it runs in a minute or so in most cases, and rewrote the way it is accessed so that the results are streamed to the client as they come into the ASP.NET web service from the database server. I tested this using a local instance of SQL Server as well as a remote instance without issue.
Now, on the cusp of production deployment, it seems our production SQL Server machine does not send any results back to the web service until the query has completed execution. Additionally, I found that another machine, which is identical to the remote server that works (they are clones), is also not streaming results.
The version of SQL Server 2008 is identical on all machines. The production machine has a slightly different version of Windows Server installed (6.0 vs 6.1). The production server has 4 cores and several times the RAM of the other servers; the other servers are single core with 1 GB of RAM.
Is there any setting that would be causing this? Or is there any setting I can set that will prevent SQL Server from buffering the results?
Although I know this won't really affect the overall runtime at all, it will change the end-user perception greatly.
tl;dr;
I need the results of a query to stream to the end user as the query runs. It works with some database machines, but not on others. All machines are running the same version of SQL Server.
The gist of what I am doing in C#:
var reader = cmd.ExecuteReader();
Response.Write(getHeader());                 // CSV header row
while (reader.Read())
{
    Response.Write(getCSVForRow(reader));    // one CSV line per row
    if (shouldFlush()) Response.Flush();     // push buffered output to the client
}
Clarification based on response below
There are 4 database servers, Local, Prod, QA1, QA2. They are all running SQL Server 2008. They all have identical databases loaded on them (more or less, 1 day lag on non-prod).
The web service is hosted on my machine (though I have tested remotely hosted as well).
The only change between tests is the connection string in the web.config.
QA2 is working (streaming), and it is a clone of QA1 (VMs). The only difference between QA1 and QA2 is an added database on QA2 not related to this query at all.
QA1 is not working.
All tests include the maximum sized dataset in the result (we limit to 5k rows at this time). The browser displays a download dialog once the first flush happens. This is the desired result. We want them to know their download is processing, even if the download speed is low and at times drops to zero (such is the way with databases).
My flushing code is simple at this time. Every k rows we flush, with k currently set to 20.
The most perplexing part of this is the fact that QA1 and QA2 behave differently. I did notice our production server is set to compatibility mode 2005 (90), whereas both the QA and local databases are set to 2008 (100). I doubt this matters. When I exec the sprocs through SSMS I see similar behavior across all machines: results stream in immediately.
Is there any connection string setting that could disable the streaming?
Everything I know says that what you're doing should work; both the DataReader and Response.Write()/.Flush() act in a "streaming" fashion and will result in the client getting the data one row at a time as soon as there are rows to get. Response does include a buffer, but you're pushing the buffer to the client after every read/write iteration which minimizes its use.
I'd check that the web service is configured to respond correctly to Flush() commands from the response. Make sure the production environment is not a Win2008 Server Core installation; Windows Server 2008 does not support Response.Flush() in certain Server Core roles. I'd also check that the conditions evaluated in ShouldFlush() will return true when you expect them to in the production environment (You may be checking the app config for a value, or looking at IIS settings; I dunno).
In your test, I'd try a much larger set of sample data; it may be that the production environment is exposing problems that are also present on the test environments, but with a smaller set of test data and a high-speed Ethernet backbone the problem isn't noticeable compared to returning hundreds of thousands of rows over DSL. You can verify that it is working in a streaming fashion by inserting a Thread.Sleep(250) call after each Flush(); this will slow down execution of the service and let you watch the response get fed to your client at about 4 rows per second.
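Adapting the snippet from the question, the throttled test would look something like this (test code only; it flushes every row and sleeps 250 ms so you can watch the rows trickle in):

var reader = cmd.ExecuteReader();
Response.Write(getHeader());
while (reader.Read())
{
    Response.Write(getCSVForRow(reader));
    Response.Flush();                       // push this row to the client now
    System.Threading.Thread.Sleep(250);     // ~4 rows per second
}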
Lastly, make sure that the client you're using in the production environment is set up to display CSV files in a fashion that allows for streaming. This basically means that a web browser acting as the client should not be configured to pass the file off to a third-party app. A web browser can easily display a text stream passed over HTTP; that's what it does, really. However, if it sees the stream as a CSV file, and it's configured to hand CSV files over to Excel to open, the browser will cache the whole file before invoking the third-party app.
Put a new task that builds this huge CSV file in a task table.
Run the procedure to process this task.
Wait for the result to appear in your task table with SqlDependency.
Return the result to the client.
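A rough sketch of that approach, using a hypothetical task table and SqlDependency for the notification (Service Broker must be enabled on the database; all names and values below are made up for illustration):

using System;
using System.Data;
using System.Data.SqlClient;

class CsvTaskWatcher
{
    const string ConnStr = "Server=.;Database=Reports;Integrated Security=true"; // placeholder

    static void Main()
    {
        SqlDependency.Start(ConnStr);

        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT ResultPath FROM dbo.CsvTasks WHERE TaskId = @id AND ResultPath IS NOT NULL",
            conn))
        {
            cmd.Parameters.Add("@id", SqlDbType.Int).Value = 42;   // the queued task

            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
                Console.WriteLine("Result is ready; return the CSV to the client.");

            conn.Open();
            cmd.ExecuteReader().Dispose();   // executing the command registers the notification

            Console.ReadLine();              // wait here for the OnChange callback
        }

        SqlDependency.Stop(ConnStr);
    }
}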
