SQL Server performance under high concurrent access - C#

A Windows service developed in C# and backed by SQL Server.
I create a SqlConnection and then call ExecuteReader, but I run into a problem under high concurrent access.
On my local machine, with the code below, one execution takes 1.2 s on average. The MyObject table has 40 columns and 400,000 rows.
for (int i = 0; i < 500; i++)
{
    Task.Factory.StartNew(() =>
    {
        Stopwatch sw = new Stopwatch();
        sw.Start();
        var serialNumber = "55292572";
        // Opens the connection, executes the SqlCommand, and closes the connection.
        var orderIdArr = ORM.GetObjects<MyObject>(t => t.PrimaryId == serialNumber).ToList();
        sw.Stop();
        Console.WriteLine(sw.ElapsedMilliseconds);
    });
}
I deployed this service to two Windows servers. As you can guess, I have about 500 customers, and they access this service very often.
In the production environment, one execution costs 10 s.
I don't know why this poor performance happens; on my local machine it only takes 1.2 s.
Could anybody give me some answers or some suggestions for improvement?
Many thanks!
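One commonly suggested mitigation for this pattern is to bound the number of queries in flight, so that 500 tasks don't all compete for pooled connections and server resources at once. A minimal sketch, assuming plain ADO.NET; the connection string, table, and column names are placeholders matching the question:
using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

class ThrottledQueries
{
    // Placeholder connection string; the real one comes from configuration.
    const string ConnStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true";

    // Allow at most 20 queries in flight; the rest wait instead of
    // piling up on the connection pool and the server.
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(20);

    static async Task QueryAsync(string serialNumber)
    {
        await Gate.WaitAsync();
        try
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "SELECT * FROM MyObject WHERE PrimaryId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", serialNumber);
                await conn.OpenAsync();
                using (var reader = await cmd.ExecuteReaderAsync())
                {
                    while (await reader.ReadAsync()) { /* map the row */ }
                }
            }
        }
        finally
        {
            Gate.Release();
        }
    }
}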

Related

SQLite poor performance issue in multi-user local-network environment

We use SQLite as a shared DB in our application. (I know this is not the best solution, but a server/client architecture was not possible.)
There are only a few users, a very small db, and just a few writes.
The application is written in C# and we use System.Data.SQLite.dll, but the problem also occurs with, for example, SQLiteDatabaseBrowser.
As long as only one user connects to the DB and queries some results, it is very fast, just a few milliseconds. One user can establish multiple connections and execute select statements in parallel; this also has no impact on the performance.
But as soon as another user from a different machine connects to the db, the performance becomes very poor for every connected user, and it stays poor until all connections/apps are closed.
After that, the first user to connect gets the good performance back, until the next user connects.
I tried many things:
PRAGMA synchronous = OFF
updated to the latest sqlite version (and created a new db file with that version)
DB file read-only
network share read-only for everyone
connection string with different options (nearly all of them)
different sqlite programs (our application and SQLiteDatabaseBrowser)
different host filesystems (NTFS and FAT32)
After that, I wrote a little app that opens a connection, queries some results, and displays the elapsed time, all in an endless loop.
Here is the code of this simple app:
static void Main(string[] args)
{
    SQLiteConnectionStringBuilder conBuilder = new SQLiteConnectionStringBuilder();
    conBuilder.DataSource = args[0];
    conBuilder.Pooling = false;
    conBuilder.ReadOnly = true;
    string connectionString = conBuilder.ConnectionString;

    while (true)
    {
        RunQueryInNewConnection(connectionString);
        System.Threading.Thread.Sleep(500);
    }
}
static void RunQuery(SQLiteConnection con)
{
    using (SQLiteCommand cmd = con.CreateCommand())
    {
        cmd.CommandText = "select * from TabKatalog where ReferenzName like '%0%'";
        Console.WriteLine("Execute Query: " + cmd.CommandText);

        Stopwatch watch = new Stopwatch();
        watch.Start();
        int lines = 0;
        using (SQLiteDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                lines++;
        }
        watch.Stop();

        Console.WriteLine("Query result: " + lines + " in " + watch.ElapsedMilliseconds + " ms");
    }
}
static void RunQueryInNewConnection(string pConnectionString)
{
    using (SQLiteConnection con = new SQLiteConnection(pConnectionString, true))
    {
        con.Open();
        RunQuery(con);
    }
    System.Data.SQLite.SQLiteConnection.ClearAllPools();
    GC.Collect();
    GC.WaitForPendingFinalizers();
}
While testing with this little app, I realised that it is enough to let another system take a file handle on the sqlite db to decrease the performance. So it seems that this has nothing to do with the connection to the db. The performance stays low until ALL file handles are released; I tracked it with procexp.exe. In addition, only the remote systems encounter the performance issue. On the host of the db file itself, the queries run fast every time.
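For completeness, taking such a file handle from a second machine needs nothing more than an open FileStream; a minimal sketch (the UNC path is a placeholder):
// Hold a read handle on the shared db file from a second machine;
// the UNC path below is a placeholder for the real share.
using (var handle = new FileStream(@"\\host\share\test.db",
                                   FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    Console.WriteLine("Holding file handle; press Enter to release.");
    Console.ReadLine();
}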
Has anybody encountered the same issue or has some hints?
Windows does not cache files that are concurrently accessed on another computer.
If you need high concurrency, consider using a client/server database.

Bandwidth usage monitor returns excessive numbers

I've got the following code that is supposed to measure the current download and upload speed. The issue I'm facing is that usages are often recorded that my network and/or internet connection can't even handle (above my bandwidth).
public static IStatistics GetNetworkStatistics(string interfaceName) {
    var networkStats = _interfaces[interfaceName];
    var dataSentCounter = new PerformanceCounter("Network Interface", "Bytes Sent/sec", interfaceName);
    var dataReceivedCounter = new PerformanceCounter("Network Interface", "Bytes Received/sec", interfaceName);

    float sentSum = 0;
    float receiveSum = 0;

    var sw = new Stopwatch();
    sw.Start();
    while (sw.ElapsedMilliseconds < 1000) {
        sentSum += dataSentCounter.NextValue();
        receiveSum += dataReceivedCounter.NextValue();
    }
    sw.Stop();

    Console.WriteLine("Download:\t{0} KBytes/s", receiveSum / 1024);
    Console.WriteLine("Upload:\t\t{0} KBytes/s\n", sentSum / 1024);

    networkStats.AddSentData(sentSum);
    networkStats.AddReceivedData(receiveSum);
    return networkStats;
}
Sample output (screenshot omitted):
As you can see, most of these entries indicate a pretty heavily used network, up to an excessive amount of almost 160 MB/s. I realize that you can't measure transfer speed with just one record (this is test data; in the actual application I use the mean of the latest 3), but even so: how could I ever receive 160 MB in one second? I believe it's safe to say that I must have made an error somewhere, but I can't find where.
One thought I had was to keep a counter in the loop to show me how many times PerformanceCounter.NextValue() was called (generally between 46 and 48), but in the end I believed this shouldn't matter: in the grand picture I'm still getting too large a number for just one second of bandwidth usage. I can't shake the feeling that the performance counter might be the cause, though.
Sidenote: the 160 MB/s number was recorded the moment I loaded a new YouTube video, and other (+1000 KB/s) readings usually occur when I refresh a tab, so it should be a (relative) reflection of my network usage.
Have I overlooked something in my approach?
Edit:
Upon following @Sam's advice and checking my results against the built-in perfmon.exe, I noticed that my bursts in bandwidth usage generally occur at the same time as those shown in Perfmon, but mine are way larger. I have tried to link the simultaneous bursts and find something in common, but it seems rather random (possibly because Perfmon might combine several results to get the current speed, whereas I'm only using the latest second).
The same goes for the lower numbers: Perfmon usually shows < 10 kbps whereas I'm constantly around 50 kbps.
Edit2:
This is the code I used in reference to @Hans' comment:
var initSent = dataSentCounter.NextValue();
var initReceived = dataReceivedCounter.NextValue();
Thread.Sleep(1000);
sentSum = dataSentCounter.NextValue() - initSent;
receiveSum = dataReceivedCounter.NextValue() - initReceived;
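For what it's worth, the usual pattern for rate counters such as Bytes Sent/sec is to call NextValue() once to prime the counter, wait, and then take the second NextValue() reading as the average rate over that interval, without summing or subtracting; a minimal sketch of that approach:
// Prime the counters: the first NextValue() of a rate counter returns 0.
var sent = new PerformanceCounter("Network Interface", "Bytes Sent/sec", interfaceName);
var received = new PerformanceCounter("Network Interface", "Bytes Received/sec", interfaceName);
sent.NextValue();
received.NextValue();

Thread.Sleep(1000);

// The second reading is already the average rate over the elapsed interval,
// so it is used directly instead of being summed or subtracted.
float sentPerSec = sent.NextValue();
float receivedPerSec = received.NextValue();

Console.WriteLine("Download:\t{0:F1} KBytes/s", receivedPerSec / 1024);
Console.WriteLine("Upload:\t\t{0:F1} KBytes/s", sentPerSec / 1024);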

Azure queue performance

For Windows Azure queues, the scalability target per storage account is supposed to be around 500 messages/second (http://msdn.microsoft.com/en-us/library/windowsazure/hh697709.aspx). I have the following simple program that just writes a few messages to a queue. The program takes 10 seconds to complete (4 messages/second). I am running the program from inside a virtual machine (in West Europe) and my storage account is also located in West Europe. I haven't set up geo-replication for my storage. My connection string is set up to use the http protocol.
// http://blogs.msdn.com/b/windowsazurestorage/archive/2010/06/25/nagle-s-algorithm-is-not-friendly-towards-small-requests.aspx
ServicePointManager.UseNagleAlgorithm = false;

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["DataConnectionString"]);
var cloudQueueClient = storageAccount.CreateCloudQueueClient();
var queue = cloudQueueClient.GetQueueReference(Guid.NewGuid().ToString());
queue.CreateIfNotExist();

var w = new Stopwatch();
w.Start();
for (int i = 0; i < 50; i++)
{
    Console.WriteLine("nr {0}", i);
    queue.AddMessage(new CloudQueueMessage("hello " + i));
}
w.Stop();

Console.WriteLine("elapsed: {0}", w.ElapsedMilliseconds);
queue.Delete();
Any idea how I can get better performance?
EDIT:
Based on Sandrino Di Mattia's answer, I re-analyzed the code I originally posted and found that it was not complete enough to reproduce the problem. In fact, I had created a queue just before the call to ServicePointManager.UseNagleAlgorithm = false; the code to reproduce my problem looks more like this:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["DataConnectionString"]);
var cloudQueueClient = storageAccount.CreateCloudQueueClient();
var queue = cloudQueueClient.GetQueueReference(Guid.NewGuid().ToString());

//ServicePointManager.UseNagleAlgorithm = false; // If you change the Nagle algorithm here, the performance will be okay.
queue.CreateIfNotExist();
ServicePointManager.UseNagleAlgorithm = false; // TOO LATE, the queue was already created without 'nagle'

var w = new Stopwatch();
w.Start();
for (int i = 0; i < 50; i++)
{
    Console.WriteLine("nr {0}", i);
    queue.AddMessage(new CloudQueueMessage("hello " + i));
}
w.Stop();

Console.WriteLine("elapsed: {0}", w.ElapsedMilliseconds);
queue.Delete();
The solution Sandrino suggested, configuring the ServicePointManager via the app.config file, has the advantage that the ServicePointManager is initialized when the application starts up, so you don't have to worry about timing dependencies.
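A minimal sketch of that app.config approach; these are standard system.net configuration elements, and the values are illustrative:
<configuration>
  <system.net>
    <settings>
      <servicePointManager expect100Continue="false" useNagleAlgorithm="false" />
    </settings>
    <connectionManagement>
      <add address="*" maxconnection="100" />
    </connectionManagement>
  </system.net>
</configuration>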
I answered a similar question a few days ago: How to achieve more than 10 inserts per second with azure storage tables.
Adding 1000 items to table storage took over 3 minutes, and with the changes I described in my answer it dropped to 4 seconds (250 requests/sec). In the end, table storage and storage queues aren't all that different: the backend is the same, the data is simply stored in a different way, and both are exposed through a REST API. So if you improve the way you handle your requests, you'll get better performance.
The most important changes:
expect100Continue: false
useNagleAlgorithm: false (you're already doing this)
Parallel requests combined with connectionManagement/maxconnection
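In code, those settings can be sketched like this, provided they run before the first request creates the service point; the values are illustrative:
// Must run before the first request to the storage endpoint,
// because a ServicePoint keeps the settings it was created with.
ServicePointManager.Expect100Continue = false;      // skip the 100-Continue handshake
ServicePointManager.UseNagleAlgorithm = false;      // don't buffer small requests
ServicePointManager.DefaultConnectionLimit = 100;   // allow parallel requests

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    ConfigurationManager.AppSettings["DataConnectionString"]);
var cloudQueueClient = storageAccount.CreateCloudQueueClient();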
Also, ServicePointManager.DefaultConnectionLimit should be increased before any service point is created; Sandrino's answer says the same thing, but using config.
Turn off proxy detection, even in the cloud (the autodetect setting in the proxy config element); it slows down initialization.
Choose well-distributed partition keys.
Colocate your storage account with your compute, and near your customers.
Design so you can add more storage accounts as needed.
Microsoft set the SLA at 2,000 tps on queues and tables as of July 2012.
I didn't read Sandrino's linked answer, sorry; I just came across this question while watching a Build 2012 session on exactly this topic.

Bad performance of mass insertion into Redis DB with the Sider .NET client

I need to insert about one million key-value pairs into a Redis DB. I have a Redis server instance on the same computer as my C# application. I use the Sider client to connect to Redis. All settings are default. The following code takes 4 seconds to execute:
redis_client.Pipeline(c =>
{
    for (int i = 0; i < 1000; ++i)
    {
        Console.Write("\r" + i);
        string key = "aaaaaaaaaaa" + i;
        string value = "bbbbbbbbbb";
        c.Set(key, value);
    }
});
I tried both the usual and the pipelined method of insertion. The standard Redis benchmark shows similar results. CPU and HDD are not the bottleneck; they have plenty of headroom for mass insertions into other databases. The official Redis benchmark page claims ~100,000 SET operations per second are possible. I get fewer than 1,000... What's the problem?
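One thing worth ruling out before blaming Redis or Sider is the Console.Write inside the loop, which can easily dominate a tight loop of small commands. A minimal variant of the snippet above without per-iteration console output, using the same Sider Pipeline API as in the question:
var sw = Stopwatch.StartNew();
redis_client.Pipeline(c =>
{
    for (int i = 0; i < 1000; ++i)
    {
        // No Console.Write here: per-iteration console output is synchronous
        // and can mask the actual Redis throughput.
        c.Set("aaaaaaaaaaa" + i, "bbbbbbbbbb");
    }
});
Console.WriteLine("elapsed: {0} ms", sw.ElapsedMilliseconds);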

.NET best practices for MongoDB connections?

I've been playing with MongoDB recently (It's AMAZINGLY FAST) using the C# driver on GitHub. Everything is working just fine in my little single threaded console app that I'm testing with. I'm able to add 1,000,000 documents (yes, million) in under 8 seconds running single threaded. I only get this performance if I use the connection outside the scope of a for loop. In other words, I'm keeping the connection open for each insert rather than connecting for each insert. Obviously that's contrived.
I thought I'd crank it up a notch to see how it works with multiple threads. I'm doing this because I need to simulate a website with multiple concurrent requests. I'm spinning up between 15 and 50 threads, still inserting a total of 150,000 documents in all cases. If I just let the threads run, each creating a new connection for each insert operation, the performance grinds to a halt.
Obviously I need to find a way to share, lock, or pool the connection. Therein lies the question. What's the best practice in terms of connecting to MongoDB? Should the connection be kept open for the life of the app (there is substantial latency opening and closing the TCP connection for each operation)?
Does anyone have any real world or production experience with MongoDB, and specifically the underlying connection?
Here is my threading sample using a static connection that's locked for insert operations. Please offer suggestions that would maximize performance and reliability in a web context!
private static Mongo _mongo;

private static void RunMongoThreaded()
{
    _mongo = new Mongo();
    _mongo.Connect();

    var threadFinishEvents = new List<EventWaitHandle>();
    for (var i = 0; i < 50; i++)
    {
        var threadFinish = new EventWaitHandle(false, EventResetMode.ManualReset);
        threadFinishEvents.Add(threadFinish);
        var thread = new Thread(delegate()
        {
            RunMongoThread();
            threadFinish.Set();
        });
        thread.Start();
    }
    WaitHandle.WaitAll(threadFinishEvents.ToArray());
    _mongo.Disconnect();
}
private static void RunMongoThread()
{
    for (var i = 0; i < 3000; i++)
    {
        var db = _mongo.getDB("Sample");
        var collection = db.GetCollection("Users");
        var user = GetUser(i);
        var document = new Document();
        document["FirstName"] = user.FirstName;
        document["LastName"] = user.LastName;
        lock (_mongo) // Lock the connection - not ideal for threading, but safe and seemingly fast
        {
            collection.Insert(document);
        }
    }
}
Most answers here are outdated and no longer apply, as the .NET driver has matured and had countless features added.
Looking at the documentation of the new 2.0 driver found here:
http://mongodb.github.io/mongo-csharp-driver/2.0/reference/driver/connecting/
The .NET driver is now thread safe and handles connection pooling. According to the documentation:
It is recommended to store a MongoClient instance in a global place, either as a static variable or in an IoC container with a singleton lifetime.
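A minimal sketch of that recommendation against the 2.0 driver API; the connection string, database, and collection names are illustrative:
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public static class MongoContext
{
    // One MongoClient per process: it is thread safe and manages the
    // connection pool internally, so it can be shared freely across threads.
    public static readonly MongoClient Client =
        new MongoClient("mongodb://localhost:27017"); // placeholder connection string

    public static async Task InsertUserAsync(string firstName, string lastName)
    {
        var collection = Client.GetDatabase("Sample")
                               .GetCollection<BsonDocument>("Users");
        await collection.InsertOneAsync(new BsonDocument
        {
            { "FirstName", firstName },
            { "LastName", lastName }
        });
    }
}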
The thing to remember about a static connection is that it's shared among all your threads. What you want is one connection per thread.
When using mongodb-csharp, you treat it like you would an ADO connection.
When you create a Mongo object, it borrows a connection from the pool, which it owns until it is disposed; after the using block, the connection goes back into the pool.
Creating Mongo objects is cheap and fast.
Example
for (var i = 0; i < 100; i++)
{
    using (var mongo1 = new Mongo())
    using (var mongo2 = new Mongo())
    {
        mongo1.Connect();
        mongo2.Connect();
    }
}
Database Log
Wed Jun 02 20:54:21 connection accepted from 127.0.0.1:58214 #1
Wed Jun 02 20:54:21 connection accepted from 127.0.0.1:58215 #2
Wed Jun 02 20:54:21 MessagingPort recv() errno:0 No error 127.0.0.1:58214
Wed Jun 02 20:54:21 end connection 127.0.0.1:58214
Wed Jun 02 20:54:21 MessagingPort recv() errno:0 No error 127.0.0.1:58215
Wed Jun 02 20:54:21 end connection 127.0.0.1:58215
Notice it only opened 2 connections.
I put this together using the mongodb-csharp forum:
http://groups.google.com/group/mongodb-csharp/browse_thread/thread/867fa78d726b1d4
Somewhat off-topic, but still of interest, is CSMongo, a C# driver for MongoDB created by the developer of jLinq. Here's a sample:
//create a database instance
using (MongoDatabase database = new MongoDatabase(connectionString)) {

    //create a new document to add
    MongoDocument document = new MongoDocument(new {
        name = "Hugo",
        age = 30,
        admin = false
    });

    //create entire objects with anonymous types
    document += new {
        admin = true,
        website = "http://www.hugoware.net",
        settings = new {
            color = "orange",
            highlight = "yellow",
            background = "abstract.jpg"
        }
    };

    //remove fields entirely
    document -= "languages";
    document -= new[] { "website", "settings.highlight" };

    //or even attach other documents
    MongoDocument stuff = new MongoDocument(new {
        computers = new [] {
            "Dell XPS",
            "Sony VAIO",
            "Macbook Pro"
        }
    });
    document += stuff;

    //insert the document immediately
    database.Insert("users", document);
}
Connection pooling should be your answer.
The feature is being developed (please see http://jira.mongodb.org/browse/CSHARP-9 for more detail).
Right now, for a web application, the best practice is to connect at BeginRequest and release the connection at EndRequest. But to me, that operation seems too expensive for each request without a connection pool, so I decided to keep a global Mongo object and use it as a shared resource for all threads. (If you get the latest C# driver from github right now, they have also improved the performance for concurrency a bit.)
I don't know the disadvantages of using a global Mongo object, so let's wait for another expert to comment on this.
But I think I can live with it until the feature (connection pooling) has been completed.
I am using the csharp-mongodb driver and its connection pool doesn't help me :( I have about 10-20 requests to mongodb per web request (150 users online on average), and I can't even monitor statistics or connect to mongodb from the shell; it throws an exception.
I created a repository which opens and disposes a connection per request. I relied on such things as:
1) The driver has a connection pool.
2) After my research (I posted some questions in user groups about this), I understood that creating a Mongo object and opening a connection is not a heavy operation.
But today my production went down :(
Maybe I have to keep an open connection per request...
Here is a link to the user group thread: http://groups.google.com/group/mongodb-user/browse_thread/thread/3d4a4e6c5eb48be3#
