Replication from a SQL Server database into process memory - C#

I have a couple of tables in a SQL Server database; most of them are updated only rarely, i.e. they are mostly read.
In order not to have to go to the database every time we read an entry, on startup we load all tables completely into the memory of our .NET process (the data is small enough), and at intervals of 10 seconds we reread the whole thing and replace our in-memory representation of the data.
This in-memory representation is then used for reading, so we don't have to go synchronously to the DB unless we want to update the data.
Suffice it to say, this currently hand-coded process (for each table we have to write code that does a SELECT * and handles the received rows) is tedious and bound to attract bugs during the maintenance cycle. In addition, it is obviously inefficient to always read the whole DB and reprocess all entries even when nothing has changed.
I can think of a couple of meaningful optimizations to the above procedure, but my point is, I don't want to have to do manually what looks like a feature that could come out of the box: The replication of a set of tables into memory of a process to speed up read access.
I guess if I went the ORM route and used NHibernate etc., I could get something like that in addition to the ORM layer (by means of caching and eager loading).
Now if I don't want the ORM part, just the replication of the lower relational level, is there anything that I can just switch on?

You can look at the metadata and build something generic that can load any table into whatever structure you like, or simply use an ADO.NET DataSet.
Also, instead of reloading your data on a timer even when it hasn't changed, you can subscribe to change notifications using SqlDependency.
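A minimal sketch of that approach, assuming a table dbo.MyTable with Id and Name columns (placeholder names) and that Service Broker is enabled on the database, which SqlDependency requires:

```csharp
// Sketch of change-driven reloading with SqlDependency instead of a timer.
// The table, columns, and connection string are placeholders. The query must
// follow the notification rules: explicit column list, two-part table name.
using System.Data;
using System.Data.SqlClient;

public class TableCache
{
    private readonly string _connStr;
    public DataTable Rows { get; private set; }

    public TableCache(string connStr) { _connStr = connStr; }

    public void Start()
    {
        SqlDependency.Start(_connStr);   // once per AppDomain
        Reload();
    }

    private void Reload()
    {
        using (var conn = new SqlConnection(_connStr))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM dbo.MyTable", conn))
        {
            // A dependency fires once; re-subscribing on every reload keeps it alive.
            var dep = new SqlDependency(cmd);
            dep.OnChange += (s, e) => Reload();

            conn.Open();
            var table = new DataTable();
            table.Load(cmd.ExecuteReader());
            Rows = table;                // swap in the fresh snapshot for readers
        }
    }
}
```

Readers then consult `Rows` instead of the database; the snapshot is only replaced when SQL Server actually reports a change.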


Safe pagination without a rowlock

I'm working with SQL Server and Entity Framework. I need to fetch a large number of records and process them.
I'm afraid of out-of-memory or other performance problems, so I want to implement the fetching and processing in batches. The problem is that between fetches the underlying data in the database might change, causing elements to be omitted (a record is removed, the next offset is applied to the select, but the data in the database has 'moved to the left', so the first item of the next page is skipped).
This is important because the purpose of the select is not just presentation but further processing.
I thought about setting the transaction's isolation level to repeatable read to avoid changes coming from other users, but the processing takes a lot of time and that would block changes to the table for the whole duration.
I also thought about paginating by key rather than by offset (SQL OFFSET/LIMIT). This way no data that was initially in the DB could be omitted; only changes that happen after pagination has started might be missed or picked up. However, the user is told the number of items to be processed at the beginning, and this number might turn out to be wrong, which we only learn at the end of the long process.
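To make the keyset idea concrete, here is a minimal self-contained sketch, with an in-memory list of keys standing in for rows (the names are made up):

```csharp
// Minimal sketch of keyset (seek) pagination: remember the last key seen and
// ask for rows with a larger key, instead of using an offset. A concurrent
// delete can then no longer shift the next page "to the left".
// In-memory ints stand in for rows; with EF the equivalent query is
// .Where(x => x.Id > lastKey).OrderBy(x => x.Id).Take(pageSize).
using System;
using System.Collections.Generic;
using System.Linq;

public static class KeysetDemo
{
    public static List<int> NextPage(IEnumerable<int> keys, int lastKey, int pageSize) =>
        keys.Where(k => k > lastKey).OrderBy(k => k).Take(pageSize).ToList();

    public static void Main()
    {
        var rows = new List<int> { 1, 2, 3, 4, 5, 6 };       // primary keys

        var page1 = NextPage(rows, lastKey: 0, pageSize: 3); // 1, 2, 3
        rows.Remove(2);                                      // a concurrent delete

        // Offset-based paging would now skip key 4; keyset paging does not:
        var page2 = NextPage(rows, lastKey: page1.Last(), pageSize: 3);
        Console.WriteLine(string.Join(",", page2));          // prints "4,5,6"
    }
}
```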
What would your advice be? Am I missing something?

ORM for stateful application. Does EF fit? Or any?

I need an ORM that is suitable for a stateful application. I'm going to keep entities between requests in a low-latency realtime game server with persistent client connections. There is only one server instance connected to the database, so no data can be changed from "outside" and the server can rely on its cache.
When a user remotely logs in to the server, his whole profile is loaded into server memory. Several higher-level services are also created for each user to operate on profile data and provide functionality. They can also have internal fields (state) to store temporary data. When the user wants to change his signature, he asks the corresponding service to do so. The service tracks how frequently the user changes his signature and allows it only once per ten minutes (for example); such a short interval is not tracked in the DB, it is temporary state. This change should be stored to the DB with only one query: UPDATE users SET signature = ... WHERE user_id = .... When the user logs off, the profile is unloaded from server memory after minutes/hours of inactivity. The DB here is only storage. This is what I call stateful.
Some entities are considered "static data" and are loaded only once at application start. They can be referenced from other, "dynamic" entities. Loading a "dynamic" entity should not require reloading the referenced "static data" entity.
Update/Insert/Delete should set/insert/delete only the changed properties/entities, even with a "detached" entity.
Write operations should not load data from the database first (perform a SELECT) to detect changes. (State can be tracked in a dynamically generated subclass.) I have the state locally; there is no point in loading anything. I want to keep tracking changes even outside the connection scope and "upload" the changes when I choose.
While performing operations, the references of persisted objects should not change.
A DB connection per user is not going to work; the expected number of concurrent users is in the thousands.
Entities from the "static data" can be assigned to "dynamic" entity properties (which represent foreign keys), and Update should handle this correctly.
Right now I'm using NHibernate, even though it's designed for stateless applications. It supports reattaching entities to a session, but that looks like very uncommon usage, requires me to rely on undocumented behavior, and doesn't solve everything.
I'm not sure about Entity Framework - can I use it that way? Or can you suggest another ORM?
If the server recreates (or, worse, reloads) user objects each time the user hits a button, it will eat CPU very fast. Scaling CPU vertically is expensive and has limited effect; by contrast, if you run out of RAM you can just go and buy more, similar to horizontal scaling but easier to code for. If you think another approach should be used here, I'm ready to discuss it.
Yes, you can use EF for this kind of application. Keep in mind that under heavy load you will get DB errors from time to time, and it is typically faster to recover from errors when your application tracks the changes rather than EF. By the way, you can use NHibernate this way too.
I have used Hibernate in a stateful desktop application with extremely long sessions: the session starts when the application launches, and remains open for as long as the application is running. I had no problems with that. I make absolutely no use of attaching, detaching, reattaching, etc. I know it is not standard practice, but that does not mean it is not doable, or that there are any pitfalls. (Edit: but of course read the discussion below for possible pitfalls suggested by others.)
I have even implemented my own change notification mechanism on top of that (a separate thread polling the DB directly, bypassing Hibernate), so it is even possible to have external agents modify the database while Hibernate is running, and to have your application take notice of these changes.
If you have lots and lots of stuff already working with Hibernate, it would probably not be a good idea to abandon what you already have and rewrite it, unless you are sure that Hibernate absolutely won't do what you want to accomplish.

I have roughly 30M rows to insert/update in SQL Server per day. What are my options?

I have roughly 30M rows to insert or update in SQL Server per day. What are my options?
If I use SqlBulkCopy, does it handle not inserting data that already exists?
In my scenario I need to be able to run this over and over with the same data without duplicating it.
At the moment I have a stored procedure with an update statement and an insert statement which read data from a DataTable.
What should I be looking for to get better performance?
The usual way to do something like this is to maintain a permanent work table (or tables) that have no constraints on them. Often these might live in a separate work database on the same server.
To load the data, you empty the work tables, blast the data in via BCP/bulk copy. Once the data is loaded, you do whatever cleanup and/or transforms are necessary to prep the newly loaded data. Once that's done, as a final step, you migrate the data to the real tables by performing the update/delete/insert operations necessary to implement the delta between the old data and the new, or by simply truncating the real tables and reloading them.
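The load step above could be sketched in C# with ADO.NET's SqlBulkCopy; the staging table name dbo.Staging_Jobs is a placeholder for whatever the real work table is called:

```csharp
// Sketch of the staging-table load described above: empty the work table,
// bulk-load the new data into it, then run whatever cleanup/migration SQL
// moves the delta into the real tables. dbo.Staging_Jobs is a placeholder.
using System.Data;
using System.Data.SqlClient;

public static class StagingLoader
{
    public static void Load(string connStr, DataTable newData)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            using (var truncate = new SqlCommand("TRUNCATE TABLE dbo.Staging_Jobs", conn))
                truncate.ExecuteNonQuery();

            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.Staging_Jobs";
                bulk.BatchSize = 10000;   // commit in chunks, not one giant transaction
                bulk.WriteToServer(newData);
            }

            // Final step: run the migration SQL or stored procedure that
            // implements the delta between dbo.Staging_Jobs and the real tables.
        }
    }
}
```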
Another option, if you've got something resembling a steady stream of data flowing in, might be to set up a daemon to monitor for the arrival of data and then do the inserts. For instance, if your data is flat files dropped into a directory via FTP or the like, the daemon can monitor the directory for changes and do the necessary work (as above) when files arrive.
One thing to consider, if this is a production system, is that doing massive insert/delete/update statements is likely to cause blocking while the transaction is in-flight. Also, a gigantic transaction failing and rolling back has its own disadvantages:
The rollback can take quite a while to process.
Locks are held for the duration of the rollback, so more opportunity for blocking and other contention in the database.
Worst of all, after everything happens you've achieved no forward motion, so to speak: a lot of time and effort, and you're right back where you started.
So, depending on your circumstances, you might be better off doing your inserts/updates/deletes in smaller batches so as to guarantee that you achieve forward progress. 30 million rows over 24 hours works out to c. 350 per second.
Bulk insert into a holding table, then perform either a single MERGE statement or an UPDATE followed by an INSERT. Either way, you want to compare your source table to your holding table to decide which action to perform.
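Assuming a real table dbo.Jobs keyed on Id with a Status column, and a holding table dbo.Staging_Jobs (all placeholder names), the single-MERGE variant could look like this:

```csharp
// Sketch of a single MERGE from a holding table into the real table.
// Table and column names (Staging_Jobs, Jobs, Id, Status) are placeholders.
using System.Data.SqlClient;

public static class MergeStep
{
    public static void Run(string connStr)
    {
        const string mergeSql = @"
            MERGE dbo.Jobs AS target
            USING dbo.Staging_Jobs AS source
               ON target.Id = source.Id
            WHEN MATCHED AND target.Status <> source.Status THEN
                UPDATE SET target.Status = source.Status
            WHEN NOT MATCHED BY TARGET THEN
                INSERT (Id, Status) VALUES (source.Id, source.Status);";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(mergeSql, conn))
        {
            cmd.CommandTimeout = 0;   // large merges can exceed the 30-second default
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

Because re-running the MERGE with the same staging data matches existing rows instead of inserting them again, this also satisfies the "run it over and over without duplicating data" requirement.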

What can affect NHibernate bulk insert performance?

I have various large data modification operations in a project built on c# and Fluent NHibernate.
The DB is SQLite (on disk rather than in memory, as I'm interested in performance).
I wanted to check the performance of these, so I created some tests that feed in large amounts of data and let the processes do their thing. The results from two of these processes have got me pretty confused.
The first is a fairly simple case of taking data supplied in an XML file, doing some light processing, and importing it. The XML contains around 172,000 rows, and the process takes a total of around 60 seconds to run, with the actual inserts taking around 40 seconds.
In the next process, I do some processing on the same set of data. So I have a DB with approx 172,000 rows in one table. The process then works through this data, doing some heavier processing and generating a whole bunch of DB updates (inserts and updates to the same table).
In total, this results in around 50,000 rows inserted and 80,000 updated.
In this case the processing takes around 30 seconds, which is fine, but saving the changes to the DB takes over 30 minutes, and it crashes before finishing with a SQLite 'disk or I/O error'.
So the question is: why are the inserts/updates in the second process so much slower? They work on the same table of the same database with the same connection. In both cases an IStatelessSession is used and ado.batch_size is set to 1000.
In both cases, the code that does the update looks like this:
BulkDataInsert((IStatelessSession session) =>
{
    foreach (Transaction t in transToInsert) { session.Insert(t); }
    foreach (Transaction t in transToUpdate) { session.Update(t); }
});
(Although the first process has no 'transToUpdate' loop, as it only does inserts; removing the update loop and doing just the inserts still takes almost 10 minutes.)
The transTo* variables are List<Transaction> instances holding the objects to be updated/inserted.
BulkDataInsert creates the session and handles the DB transaction.
I didn't understand your second process. However, here are some things to consider:
Are there any clustered or non-clustered indexes on the table?
How many disk drives do you have?
How many threads are writing to the DB in the second test?
It seems that you are experiencing an IO bottleneck that could be resolved by having more disks, more threads, indexes, etc.
So, assuming a lot of things, here is what I "think" is happening:
In the first test, your table probably has no indexes, and since you are just inserting data, it is a sequential insert in a single thread, which can be pretty fast, especially if you are writing to one disk.
Now, in the second test you are reading data and then updating it. The database has to find each record it needs to update. If you do not have any indexes, this "find" is basically a table scan, which will happen for each one of those 80,000 row updates. This will make your application really, really slow.
The simplest thing you could probably do is add a clustered index on the table for a unique key, and the best option is to use the columns that appear in the WHERE clause of your updates.
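As a sketch of that suggestion, assuming the updates locate rows by a TransactionId column and that the project uses the System.Data.SQLite provider (both are guesses about the actual schema and stack):

```csharp
// Sketch: add an index on the column the updates filter by, so each update's
// row lookup becomes a seek instead of a table scan. "Transactions" and
// "TransactionId" are placeholder names for the real table and key column.
using System.Data.SQLite;

public static class IndexSetup
{
    public static void AddLookupIndex(string connStr)
    {
        using (var conn = new SQLiteConnection(connStr))
        using (var cmd = new SQLiteCommand(
            "CREATE INDEX IF NOT EXISTS IX_Transactions_TransactionId " +
            "ON Transactions (TransactionId)", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```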
Hope this helps.
DISCLAIMER: I made quite a few assumptions
The problem was due to my test setup.
As is pretty common with NHibernate-based projects, I had been using in-memory SQLite databases for unit testing. These work great, but one downside is that closing the session destroys the database.
Consequently, my unit of work implementation contains a 'PreserveSession' property to keep the session alive and just create new transactions when needed.
My new performance tests use on-disk databases, but they still use the common code for setting up test databases and so have PreserveSession set to true.
It seems that having several sessions left open (even though they're not doing anything) starts to cause problems after a while, including the performance drop-off and the disk I/O error.
I re-ran the second test with PreserveSession set to false, and immediately the time dropped from over 30 minutes to under 2 minutes, which is more where I'd expect it to be.

Mid-tier caching for Windows Forms Application

I have a simple Windows Forms application written in C# 4.0. The application shows some of the records from a database and features a query option which is initiated by the user.
The records in the database can be called jobs.
Consider the two columns JobID and Status.
These are updated by two background services which in effect work like producer/consumer services; the status of a job is updated by these services running behind the scenes.
Now, the user has an option to query the records from the database, for example by status (Submitted, Processing, Completed). This can result in thousands of records, and the GUI might face performance glitches displaying that much data.
Hence it's important to display the query results in chunks, as pages. The GUI isn't refreshed until the user manually refreshes or makes a new query.
Since the jobs are constantly being updated by the services, a job's status can be different at any point in time. The basic requirement is that the pages show the data as of the time it was fetched from the DB.
I am using LINQ to SQL for fetching data from the DB. It's quite easy to use, but it doesn't offer the mid-level caching required to meet this demand. Caching the results in process memory can push memory usage to extremes if the number of records is very high. Unfortunately, LINQ to SQL doesn't provide any mid-tier caching facilities with its DataContext objects.
What is the preferable way to implement a paging mechanism with C# 4.0 + SQL Server + Windows?
Some alternatives I can think of are a duplicated table/DB which can temporarily store the results as a cache, or the Enterprise Library's Caching Application Block. I believe this is a typical problem faced by many developers. Which is the most efficient way to solve it? (NOTE: my application and DB run on the same box.)
While caching is a sure way to improve performance, implementing a caching strategy properly can be more difficult than it may seem. The problem is managing cache expiration, essentially ensuring that the cache stays synchronized to the desired degree. Therefore, before considering caching, consider whether you need it in the first place. From what I can gather from the question, the data model is relatively simple and doesn't require any joins. If that is the case, why not optimize the tables and indexes for pagination? SQL Server and LINQ to SQL will handle pagination for thousands of records transparently and with ease.
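As a sketch of that suggestion: LINQ to SQL translates Skip/Take into server-side paging, so only one page of rows crosses the wire. MyDataContext, Job, Status, and JobID below are placeholders for whatever the real model uses:

```csharp
// Sketch of server-side paging with LINQ to SQL: Skip/Take is translated to
// SQL (ROW_NUMBER() on SQL Server 2005+), so only one page of rows is
// materialized in the client. All type and member names are placeholders.
using System.Collections.Generic;
using System.Linq;

public static class JobPager
{
    public static List<Job> GetPage(MyDataContext db, string status,
                                    int pageIndex, int pageSize)
    {
        return db.Jobs
                 .Where(j => j.Status == status)
                 .OrderBy(j => j.JobID)        // a stable order is required for paging
                 .Skip(pageIndex * pageSize)
                 .Take(pageSize)
                 .ToList();                    // the page is a snapshot until re-queried
    }
}
```

Since `ToList()` detaches the rows from the live data, each fetched page naturally satisfies the "data as of fetch time" requirement without a separate cache.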
You are correct in stating that displaying too many records at once is prohibitive for the GUI, and it is also prohibitive for the user: no user wants to see more records than fit on the screen at any given time. Given the constraint that the data doesn't need to be refreshed until the user requests it, it is safe to assume that the number of queries will be relatively low. The additional constraint that the DB is on the same box as the application further supports the point that you don't need caching: SQL Server already does caching internally.
All advice about performance tuning states that you should profile and measure performance before attempting optimizations. As stated by Donald Knuth, premature optimization is the root of all evil.
