Updating SQL Server database table every second - c#

I have a C# application that receives some data (about 700 values) per second.
I need to update all of these values in a column of a database table (about 700 rows) every second.
Is this possible within that time frame? What would be the recommended way?

Yes, it is possible. You will have to change a few default settings, namely the total transaction threshold and the user transaction threshold.
Check these out for more details:
Microsoft SQL Server Transactions Per Second too high indication
DBA: Very high transactions per second
SQL Server memory performance metrics : Part 5
Also, this question is better suited for https://dba.stackexchange.com/. Please do check that Stack Exchange site as well.
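On the client side, the cheapest way to push roughly 700 new values per second is usually one set-based statement rather than 700 individual UPDATEs. Below is a minimal sketch using a table-valued parameter; it assumes SQL Server 2008 or later, and the type dbo.PriceUpdate plus the table/column names Prices, Id, and Value are placeholders, not anything from the question.

using System.Data;
using System.Data.SqlClient;

// Assumed setup on the server (placeholder names):
//   CREATE TYPE dbo.PriceUpdate AS TABLE (Id INT PRIMARY KEY, Value DECIMAL(18,4));
static void UpdateBatch(string connectionString, DataTable batch)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(@"
        UPDATE p SET p.Value = u.Value
        FROM dbo.Prices p
        JOIN @updates u ON u.Id = p.Id;", conn))
    {
        // Send all ~700 (Id, Value) pairs to the server in one round trip.
        var p = cmd.Parameters.AddWithValue("@updates", batch);
        p.SqlDbType = SqlDbType.Structured;
        p.TypeName = "dbo.PriceUpdate";

        conn.Open();
        cmd.ExecuteNonQuery();
    }
}

One round trip per second carrying ~700 rows is well within what SQL Server handles comfortably, provided the join column is indexed.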

Related

Optimal way to iterate over rows from Oracle in C# Multi thread

I'm using an Oracle 12c database and I wonder what the optimal way is to iterate in C# over the result of a select query. Once I get the row values, I use them to do some work.
My idea is to use the full processor capacity, so I thought I would need one thread per CPU core. Each thread would have its own connection, which it uses to fetch (select count(*) from table where condition)/(cores) rows, and then each thread does the work.
Each table has more than 500,000 rows.
Am I right, or is there a better way to do this?
Thank you in advance, and I apologize for my English.
It all DEPENDS... :).
Does it all sit on the same box, i.e. Oracle and your program?
If not, then the number of connections should NOT be a function of the number of CPU cores! It should depend on your network infrastructure and the number of connections between the box running your program and the box running your database. Sometimes it is possible to get higher throughput over a single physical link by using multiple DB connections, but that has its own limits; it helps as long as the DB does not devote maximum resources to a single request but instead shares them across multiple requests, and when producing each row of the result takes time. That is best decided by benchmarking.
If your program and the database share the same box, that is a different scenario. In that case I would generally use at most half of the available cores on the application side, as the DB needs cores for its processing too, and the large number of result rows keeps arriving continually...
It also depends on what you do next with the rows.
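If you do go the multiple-connection route, a minimal sketch might look like the following. It assumes the Oracle.ManagedDataAccess provider and a numeric ID column you can partition on; the table and column names (my_table, id, payload) are placeholders.

using System;
using System.Threading.Tasks;
using Oracle.ManagedDataAccess.Client;

// Each worker opens its own connection and reads a disjoint ID range.
static void ProcessInParallel(string connectionString, long minId, long maxId, int workers)
{
    long chunk = (maxId - minId) / workers + 1;

    Parallel.For(0, workers, w =>
    {
        long from = minId + w * chunk;
        long to = Math.Min(from + chunk - 1, maxId);

        using (var conn = new OracleConnection(connectionString))
        using (var cmd = new OracleCommand(
            "SELECT id, payload FROM my_table WHERE id BETWEEN :p_from AND :p_to", conn))
        {
            cmd.BindByName = true;
            cmd.Parameters.Add("p_from", from);
            cmd.Parameters.Add("p_to", to);
            conn.Open();

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Do the per-row work here.
                }
            }
        }
    });
}

Whether the right number of workers is the core count, half of it, or something dictated by the network is exactly the benchmarking question raised above.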

C# - Mysql insert limit

I'm new to programming and databases. I've started learning MySQL and C#. So, I created a really simple test program in C# to test how many inserts it can do in a minute. (Just a simple infinite loop that inserts a short text into a column.) I am watching the dashboard in MySQL Workbench, and the problem is that the program can only insert 1000 queries/second. If I run 2-3 instances of the program at the same time, I can see 2-3 * 1000 queries/second.
Is there any limit in MySQL?
There's no built-in limit to insert rates.
Lots of things come into play when you're trying for high insert rates. For example:
How big is each row you're inserting?
How complex are the indexes on your target table? Index updates take time during INSERT operations.
Which access method does your table use? MyISAM is transactionless, so a naive program can push more rows/sec. InnoDB has transactions, so doing your inserts in batches of 1000 or so, wrapped in BEGIN / COMMIT statements, can speed things up.
How fast are the disks / ssds on your server? How much RAM does it have?
How fast are your client machines and the network between them and the MySQL server?
Are other programs trying to read the target table at the same time you're doing inserts?
You've mentioned that your total insert rate scales up approximately linearly for 2-3 instances of your insert program. That means the bottleneck is in your insert program, not the server, at that scale.
C#, like many language frameworks, offers prepared statements. They are a way to write a query once and use it over and over with different data values. It's faster. It's also safer if your data comes from an untrusted source (look up SQL injection).
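As a rough illustration (assuming the MySql.Data ADO.NET provider; the table t and column txt are placeholders):

using MySql.Data.MySqlClient;

static void InsertMany(string connectionString, string[] values)
{
    using (var conn = new MySqlConnection(connectionString))
    {
        conn.Open();

        using (var cmd = new MySqlCommand("INSERT INTO t (txt) VALUES (@txt)", conn))
        {
            var p = cmd.Parameters.Add("@txt", MySqlDbType.VarChar);
            cmd.Prepare();              // statement is parsed once on the server

            foreach (var v in values)
            {
                p.Value = v;            // only the value changes per row
                cmd.ExecuteNonQuery();
            }
        }
    }
}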
MySQL lets you insert multiple rows with a single INSERT operation. Faster.
INSERT INTO tbl (a, b, c)
VALUES (1,2,3),(4,5,6),(7,8,9);
MySQL offers the LOAD DATA INFILE statement. You can get astonishingly high bulk load rates with that statement if you need them.
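From C#, the connector exposes this through its bulk loader class; a rough sketch (the file path, table name, and delimiters are placeholders, and LOCAL loading must be enabled on both client and server):

using System;
using MySql.Data.MySqlClient;

static void BulkLoad(string connectionString)
{
    using (var conn = new MySqlConnection(connectionString))
    {
        conn.Open();

        var loader = new MySqlBulkLoader(conn)
        {
            TableName = "t",
            FileName = @"C:\data\rows.csv",   // placeholder path
            FieldTerminator = ",",
            LineTerminator = "\n",
            Local = true                      // send the file from the client machine
        };

        int rows = loader.Load();             // issues LOAD DATA [LOCAL] INFILE
        Console.WriteLine("{0} rows loaded", rows);
    }
}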

Inserting Large volume of data in SQL Server 2005

We have an application (written in C#) that stores live stock market prices in a database (SQL Server 2005). It inserts about 1 million records in a single day. Now we are adding some more market segments to it, and the number of records will double (2 million/day).
Currently the average insertion rate is about 50 records per second, the maximum is 450, and the minimum is 0.
To check certain conditions, I have used Service Broker (an asynchronous trigger) on my price table. It is running fine at this time (about 35% CPU utilization).
Now I am planning to create an in-memory dataset of the current stock prices. We would like to do some simple calculations.
Currently I am using an XML batch insertion method (OPENXML in a stored proc).
I want to know the different views of members on this.
Please describe your way of dealing with such a situation.
Your question is about reading, but the title implies writing?
When reading, consider (but don't blindly use) temporary tables to cache data if you're going to do some processing. However, by simple calculations I assume you mean aggregates like AVG, MAX, etc.?
It would generally be inane to drag data around, cache it in the client and aggregate it there.
If batch uploads:
SqlBulkCopy or similar into a staging table (see the sketch below)
A single write from the staging table to the final table
If single upload, just insert it
A million rows a day is a rounding error for what SQL Server (or Oracle, MySQL, DB2, etc.) is capable of.
Example: 35k transactions (not rows) per second
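A minimal sketch of the staging-table route mentioned above, using SqlBulkCopy (the table names Staging_Price and Price and their columns are placeholders):

using System.Data;
using System.Data.SqlClient;

static void InsertBatch(string connectionString, DataTable prices)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        // 1) Stream the batch into a staging table.
        using (var bulk = new SqlBulkCopy(conn))
        {
            bulk.DestinationTableName = "dbo.Staging_Price";
            bulk.BatchSize = 5000;
            bulk.WriteToServer(prices);
        }

        // 2) Move it to the final table with one set-based statement.
        const string sql = @"
            INSERT INTO dbo.Price (Symbol, TradeTime, Value)
            SELECT Symbol, TradeTime, Value FROM dbo.Staging_Price;
            TRUNCATE TABLE dbo.Staging_Price;";
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.ExecuteNonQuery();
        }
    }
}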

Is it a problem if I query SQL Server 2005 and 2000 again and again?

The Windows app I am constructing is for very low-end machines (Celeron with at most 128 MB of RAM). Of the following two approaches, which one is best? (I don't want the application to become a memory hog on low-end machines.)
Approach one:
Query the database with Select GUID from Table1 where DateTime <= #givendate, which returns more than 300 thousand records (but only one field, i.e. GUID: 300 thousand GUIDs). Then run a loop to carry out the next processing step of this software based on each GUID.
Approach two:
Query the database with Select Top 1 GUID from Table1 where DateTime <= #givendate again and again until all 300 thousand records are done. It returns only one GUID at a time, and I can do my next step of the operation.
Which approach do you suggest will use the least memory? (Speed/performance is not the issue here.)
PS: The database is also on the local machine (MSDE or SQL Server 2005 Express).
I would go with a hybrid approach. I would select maybe 50 records at a time instead of just one. This way, you aren't loading the entire number of records, but you are also drastically reducing the number of calls to the database.
Go with approach 1 and use SqlDataReader to iterate through the data without eating up memory.
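A minimal sketch of that (the query mirrors the one in the question; ProcessGuid is a placeholder for whatever the next processing step is):

using System;
using System.Data.SqlClient;

static void ProcessAllGuids(string connectionString, DateTime givenDate)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT GUID FROM Table1 WHERE [DateTime] <= @givendate", conn))
    {
        cmd.Parameters.AddWithValue("@givendate", givenDate);
        conn.Open();

        // The reader streams one row at a time, so the 300 thousand GUIDs
        // are never all held in client memory at once.
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Guid id = reader.GetGuid(0);
                ProcessGuid(id);   // placeholder for the next step
            }
        }
    }
}

static void ProcessGuid(Guid id) { /* per-GUID work goes here */ }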
If you only have 128 MB of RAM, I think number 2 would be your best approach... That said, can't you perhaps do this set-based with a stored procedure? That way all the processing would happen on the server.
If memory use is a concern, I would consider caching the data to disk locally. You can then read the data from the files using a FileStream object.
Your number 2 solution will be really slow, and put a lot of burden on the db server.
I would use a paging-enabled stored procedure.
I would do it in chunks of 1k rows and test from there up until I get the best performance.
usp_GetGUIDS #from = 1, #to = 1000
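Calling such a procedure from the client could look roughly like this (the procedure name and the @from/@to parameters follow the example above; everything else is an assumption):

using System;
using System.Data;
using System.Data.SqlClient;

static void ProcessInPages(string connectionString, int pageSize)
{
    int from = 1;
    while (true)
    {
        int rowsInPage = 0;

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("usp_GetGUIDS", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@from", from);
            cmd.Parameters.AddWithValue("@to", from + pageSize - 1);
            conn.Open();

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    rowsInPage++;
                    // process reader.GetGuid(0) here
                }
            }
        }

        if (rowsInPage < pageSize) break;   // last page reached
        from += pageSize;
    }
}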
This may be a totally inappropriate approach for you, but if you're that worried about performance and your machine is low spec, I'd try the following:
Move your SQL server to another machine, as this eats up a lot of resources.
Alternatively, if you don't have that many records, store the data as XML or in SQLite and get rid of SQL Server altogether?

SQL Design: Big table, thread access serialization

I have one BIG table (90k rows, roughly 60 MB) which holds info about free room capacities for about 50 hotels. This table has very few updates/inserts per hour.
My application sends async requests against this table (and joined tables) at most 30 times per second.
When I start 30 threads (with the default AppPool class in .NET 3.5 C#) at one time (each with a random valid SQL query string), only a few (roughly 4) are processed asynchronously and the other threads wait. Why?
Is it because of SQL Server 2008 table locking, or because of .NET itself? Or something else?
If it is a SQL problem, would it help if I split this big table into a one-table-per-hotel model?
My goal is to have at least 10 threads served at a time.
This table is tiny. It doesn't even qualify as a "medium-sized" table. It's trivial.
You could be full-table-scanning it 30 times per second, or copying the whole thing into RAM, and no server is going to be the slightest bit bothered.
If your data fits in RAM, databases are fast. If you're not seeing that, you're doing something REALLY WRONG. Therefore I also think the problems are all on the client side.
It is more than likely on the .NET side. If it were table locking, more threads would be processing, but they would be waiting on their queries to return. If I remember correctly, there's a property on thread pools that controls how many actual threads they create at once. If there are more pending threads than that number, they get in line and wait for running threads to finish. Check that.
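If the thread pool is the suspect, it is easy to inspect and raise its limits; a quick check (assuming the standard System.Threading.ThreadPool, which the .NET 3.5 async patterns use under the hood):

using System;
using System.Threading;

static void ShowThreadPoolLimits()
{
    int minWorker, minIo, maxWorker, maxIo;
    ThreadPool.GetMinThreads(out minWorker, out minIo);
    ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
    Console.WriteLine("Min worker/IO: {0}/{1}, max worker/IO: {2}/{3}",
                      minWorker, minIo, maxWorker, maxIo);

    // Threads above the minimum are created only gradually, which can make a
    // burst of 30 concurrent requests trickle through a few at a time.
    ThreadPool.SetMinThreads(30, minIo);
}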
Have you tried changing the transaction isolation level?
Even when reading from a table, SQL Server will lock it.
Try setting the isolation level to read uncommitted and see if that improves the situation,
but be advised that it is possible you will read "dirty" data; make sure you understand the ramifications if this is your solution.
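In ADO.NET that looks roughly like the sketch below (alternatively you can issue SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED on the connection, or add WITH (NOLOCK) hints per table). The table and column names are placeholders.

using System.Data;
using System.Data.SqlClient;

static int CountFreeRooms(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        // Run the query under READ UNCOMMITTED so readers don't block on writers;
        // be aware this can return dirty (uncommitted) data.
        using (var tx = conn.BeginTransaction(IsolationLevel.ReadUncommitted))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM dbo.RoomCapacity WHERE FreeRooms > 0", conn, tx))
        {
            int count = (int)cmd.ExecuteScalar();
            tx.Commit();
            return count;
        }
    }
}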
Rather than ask, measure. Each SQL query actually submitted by your application creates a request on the server, and the sys.dm_exec_requests DMV shows the state of each request. When a request is blocked, the wait_type column shows a non-empty value. You can judge from this whether your requests are blocked or not. If they are blocked, you'll also know the reason why.
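For example, a quick way to poll it from the client (sys.dm_exec_requests and these columns are standard SQL Server DMV names; viewing them requires the VIEW SERVER STATE permission):

using System;
using System.Data.SqlClient;

static void DumpBlockedRequests(string connectionString)
{
    const string sql = @"
        SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
        FROM sys.dm_exec_requests
        WHERE wait_type IS NOT NULL;";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine("session {0}: wait_type={1}, wait_time={2} ms, blocked by {3}",
                                  reader["session_id"], reader["wait_type"],
                                  reader["wait_time"], reader["blocking_session_id"]);
            }
        }
    }
}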
