Improving Game Performance in C#

We are all aware of the popular trend of MMO games, where players face each other live. During gameplay, however, there is a tremendous flow of SQL inserts and queries, as outlined below:
There are on average/at minimum 100 tournaments online per 12 minutes, or 500 players per hour.
We store each player move in the Game Progress table.
A 12-round tournament with 4 players can produce 48 records,
plus around the same number again for spells or special items,
for a total of 96 records per tournament, or about 48,000 record inserts per hour (at 500 players/hour).
In response to my previous question (Improve MMO game performance), I changed the schema and we are no longer writing directly to the database.
Instead, we accumulate all values in a DataTable. Whenever the DataTable holds more than 100k rows (which can sometimes happen within the hour), the process writes it to a text file in CSV format. A separate background application frequently scans the folder for CSV files, reads any available CSV file, and stores the information in the server database.
Questions
Can another application access the DataTable held in the game application directly (reading it and clearing the records it has read), so that instead of writing to and reading from disk, we read and write directly from memory?
Is there anything quicker than a DataTable that can hold large amounts of data and yet stay fast for sorting and updating? We have to scan for user ids and update game status frequently (almost at every insert). It could be a cache utility, a fast scan/search algorithm, or even a collection model. Right now we use a foreach loop to go through all records in the DataTable and update a row if the user is present; if not, we create a new row. I tried using SortedList and custom classes, but that not only doubles the effort, it also increases memory usage tremendously, slowing down overall game performance.
thanks
arvind

Well, let's answer:
You can share objects between applications using Remoting, but it's much slower and makes the code less readable. There is another option that keeps all the work in memory rather than on disk: memory-mapped files. See http://msdn.microsoft.com/en-us/library/dd997372.aspx
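A minimal sketch of sharing data between the game process and a reader process through a named memory-mapped file; the map name, size, and record layout here are illustrative assumptions, not part of the original design:

    // Writer side (game process): a minimal sketch using a named memory-mapped file.
    // The map name "GameProgressMap", its 1 MB size, and the payload format are assumptions.
    using System;
    using System.IO.MemoryMappedFiles;
    using System.Text;

    class GameStateWriter
    {
        static void Main()
        {
            using (var mmf = MemoryMappedFile.CreateOrOpen("GameProgressMap", 1024 * 1024))
            using (var accessor = mmf.CreateViewAccessor())
            {
                byte[] payload = Encoding.UTF8.GetBytes("userId=42;round=3;score=120");
                accessor.Write(0, payload.Length);                      // length prefix at offset 0
                accessor.WriteArray(sizeof(int), payload, 0, payload.Length);
                Console.WriteLine("Wrote game progress to shared memory.");
                Console.ReadLine();                                     // keep the map alive for the reader
            }
        }
    }

    // Reader side (another process) would open the same map by name:
    //   using (var mmf = MemoryMappedFile.OpenExisting("GameProgressMap"))
    //   using (var accessor = mmf.CreateViewAccessor())
    //   {
    //       int length = accessor.ReadInt32(0);
    //       var buffer = new byte[length];
    //       accessor.ReadArray(sizeof(int), buffer, 0, length);
    //       string record = Encoding.UTF8.GetString(buffer);
    //   }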
You can use a NoSQL database of some kind (there are many out there: Redis, MongoDB, RavenDB); all of them are based on key-value access, and you should test their performance for your workload. Even better, some of these databases are persistent and can be used across multiple servers.
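For example, with Redis through the StackExchange.Redis client, per-user game state becomes a simple key-value write and read; the server address and key naming scheme below are assumptions:

    // Minimal sketch of key-value access with Redis via the StackExchange.Redis client.
    // The server address ("localhost") and the key scheme are illustrative assumptions.
    using StackExchange.Redis;

    class RedisGameStatus
    {
        static void Main()
        {
            var redis = ConnectionMultiplexer.Connect("localhost");
            IDatabase db = redis.GetDatabase();

            // write the latest status for a user
            db.StringSet("game:status:42", "round=3;score=120");

            // read it back
            string status = db.StringGet("game:status:42");
            System.Console.WriteLine(status);
        }
    }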
Hope this helps.

Using memcache would increase your performance
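Even without an external cache, replacing the per-insert foreach scan over the DataTable with a keyed in-memory collection helps; a minimal sketch, where the user-id key and the GameStatus shape are illustrative assumptions:

    // Minimal sketch: a Dictionary keyed by user id replaces the foreach scan over the
    // DataTable, so "update if present, otherwise add" becomes a single O(1) lookup.
    // The GameStatus fields are illustrative assumptions.
    using System.Collections.Generic;

    class GameStatus
    {
        public int Round;
        public int Score;
    }

    class GameProgressCache
    {
        readonly Dictionary<string, GameStatus> statusByUser =
            new Dictionary<string, GameStatus>();

        public void RecordMove(string userId, int round, int score)
        {
            GameStatus status;
            if (statusByUser.TryGetValue(userId, out status))
            {
                // user already present: update in place
                status.Round = round;
                status.Score = score;
            }
            else
            {
                // new user: create a new entry
                statusByUser[userId] = new GameStatus { Round = round, Score = score };
            }
        }
    }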

Related

SQL filtering with SELECT vs. lambda filtering in memory

I have a C# program for a church service where you can select a song from a list and display it on a projector or screen. The list is currently saved in .txt files (I made it several years ago), and at the moment I have about 270 different songs.
In the program you can filter and search by the title of the song and by its content. This is very expensive: every time I search for some text, the program checks each of the 270 song files, opening, reading, and closing every one of them.
Now I want to switch to a database (SQLite) to store the songs and search the songs and verses from there.
My question is the following: if I run a SELECT every time I want to look up, filter, or search for a song, which is better from a performance point of view? Is it better to load everything into memory and use lambda functions for the searches and filters, or is it better to run every operation through SQL?
Thanks!
PS: I'll add some numbers:
270 songs: about 1 MB of disk space in total.
5-6 strophes per song: about 1,620 strophes.
4-5 verses per strophe: about 8,100 verses.
The good news is that you do not have to care. Any database written in roughly the last 60 years will keep whatever it can in memory anyway, as long as there is enough memory. So, unless you unload SQLite or set unreasonably low memory limits, the database ends up in memory anyway. The (implied) assumption that every query hits storage would only be true if the database were being used very badly. Particularly given how small the data is (1 MB in total): we are not talking about a dozen-gigabyte in-memory database that might strain memory; this database is small enough to end up in the CPU cache.
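For completeness, the SQL route is a single parameterized query; a minimal sketch assuming the Microsoft.Data.Sqlite package and a hypothetical Songs(Title, Content) table in songs.db:

    // Minimal sketch of searching songs with a parameterized SQLite query.
    // The table and column names, and the database file name, are assumptions.
    using Microsoft.Data.Sqlite;

    class SongSearch
    {
        public static void Search(string text)
        {
            using (var connection = new SqliteConnection("Data Source=songs.db"))
            {
                connection.Open();
                var command = connection.CreateCommand();
                // Filter by title or content; SQLite keeps hot pages cached in memory.
                command.CommandText =
                    "SELECT Title FROM Songs WHERE Title LIKE $q OR Content LIKE $q";
                command.Parameters.AddWithValue("$q", "%" + text + "%");

                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        System.Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }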

Streaming large datasets into smaller chunks in C#

I frequently use large datasets in a C# console application (1 million rows with around 30-odd columns) that I need to process sequentially. These datasets are first extracted from a remote database; I can't extract them in smaller chunks because the round trips over the wire would be too expensive.
What kind of options do I have for breaking these into smaller chunks locally and reading them, say, 10,000 records at a time?
I don't have a lot of RAM, just around 2 GB or so. Is there an efficient way for me to page these datasets locally?
Edit:
Would it make sense to serialize the DataTable or List, store it in a local NoSQL repository, and then keep fetching 10,000 records at a time?
If you are using a web application, you can enable the "EnablePaging" attribute of your data source control.
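For a console application against SQL Server, the usual approach is to stream the result set with a data reader and process it in fixed-size batches, so memory stays flat regardless of the total row count; a minimal sketch, where the connection string, query, and ProcessBatch step are placeholders:

    // Minimal sketch of processing a large result set sequentially in fixed-size batches
    // without materializing it all in memory. Connection string, query, and the
    // ProcessBatch handler are placeholders/assumptions.
    using System.Collections.Generic;
    using System.Data.SqlClient;

    class BatchProcessor
    {
        const int BatchSize = 10000;

        static void Run(string connectionString, string query)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    var batch = new List<object[]>(BatchSize);
                    while (reader.Read())
                    {
                        var row = new object[reader.FieldCount];
                        reader.GetValues(row);          // copy the current row's values
                        batch.Add(row);

                        if (batch.Count == BatchSize)   // process and release each chunk
                        {
                            ProcessBatch(batch);
                            batch.Clear();
                        }
                    }
                    if (batch.Count > 0)
                        ProcessBatch(batch);            // final partial batch
                }
            }
        }

        static void ProcessBatch(List<object[]> rows)
        {
            // placeholder for the sequential processing step
        }
    }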

SQL performance degrading - loading 60,000 XML files - SSIS - XML source

I have an SSIS data flow task which loads XML data into a SQL database: more than 60,000 XML files. The first few thousand XML files get loaded into the table quickly, but as time progresses the loading speed drops drastically.
The first 10k files load in roughly 10 minutes, the next 10k take 25 minutes, and the performance slowly degrades from there. By the time all 60k+ files are loaded, the run takes around 4 hours.
Is there any way to keep performance in check and load the later files at the same speed as the initial ones?
I have tried bulk copy in C# too, but the issue exists there as well. Is there any workaround to improve performance?
Sharing parts of your code would make it easier for us to give you tips and ideas!
I believe this issue is memory related. Are you reading all of the files into memory before putting them into the SQL database?
Check Task Manager! If memory usage keeps growing and growing, you have a potential memory problem.
I don't know how the files are stored or named, but if you can, why not work with, say, 1-5,000 at a time, move them, and take the next batch?
Try doing it with multiple data flow tasks (DFTs) instead of a single DFT, limiting each one to around 5k-10k files. Hopefully this will reduce the overall time.
Also, the difference in time might be due to the indexing on the table. Remove the indexes, load the records, and reapply the indexes once loading is done. Querying record sets on an indexed table is fast, but performing inserts on an indexed table, especially for 60k+ records, is a time-consuming process.
1. Execute SQL Task (drop indexes before loading)
2. For Loop (multiple control flows for the XML file load)
3. Execute SQL Task (recreate indexes)
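The same drop-load-recreate pattern can also be driven from C# around whatever does the actual loading; a minimal sketch, where the table name, index name, and load step are illustrative assumptions:

    // Minimal sketch: drop the index, load in batches, then recreate the index.
    // The table name, index name, and the per-batch load step are assumptions.
    using System.Data.SqlClient;

    class IndexedBulkLoad
    {
        static void Run(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                Execute(connection, "DROP INDEX IX_XmlData_Key ON dbo.XmlData");

                // Load the XML-derived rows here in batches of roughly 5k-10k
                // (one SSIS DFT per batch, or SqlBulkCopy from C#).

                Execute(connection, "CREATE INDEX IX_XmlData_Key ON dbo.XmlData ([Key])");
            }
        }

        static void Execute(SqlConnection connection, string sql)
        {
            using (var command = new SqlCommand(sql, connection))
                command.ExecuteNonQuery();
        }
    }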

Listing more than 10 million records from Oracle with C#

I have a database that contains more than 100 million records, and I am running a query that returns more than 10 million of them. This process takes too much time, so I need to shorten it. I want to save the resulting record list as a CSV file. How can I do this as quickly and efficiently as possible? Looking forward to your suggestions. Thanks.
I'm assuming that your query is already constrained to the rows/columns you need, and makes good use of indexing.
At that scale, the only critical thing is that you don't try to load it all into memory at once; so forget about things like DataTable, and most full-fat ORMs (which typically try to associate rows with an identity-manager and/or change-manager). You would have to use either the raw IDataReader (from DbCommand.ExecuteReader), or any API that builds a non-buffered iterator on top of that (there are several; I'm biased towards dapper). For the purposes of writing CSV, the raw data-reader is probably fine.
Beyond that: you can't make it go much faster, since you are bandwidth constrained. The only way you can get it faster is to create the CSV file at the database server, so that there is no network overhead.
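A minimal sketch of the raw data-reader approach: stream each row straight to the CSV writer so nothing is buffered, quoting and escaping fields and formatting with InvariantCulture (the export helper below is an assumption, not a library API):

    // Minimal sketch of streaming an IDataReader straight to CSV, row by row, so nothing
    // is buffered in memory. The reader is whatever DbCommand.ExecuteReader returns
    // (e.g. from the Oracle managed provider); file name and query are up to the caller.
    using System;
    using System.Data;
    using System.Globalization;
    using System.IO;

    static class CsvExport
    {
        public static void Write(IDataReader reader, TextWriter output)
        {
            // header row
            var names = new string[reader.FieldCount];
            for (int i = 0; i < reader.FieldCount; i++)
                names[i] = Escape(reader.GetName(i));
            output.WriteLine(string.Join(",", names));

            // data rows, streamed one at a time
            var values = new object[reader.FieldCount];
            var fields = new string[reader.FieldCount];
            while (reader.Read())
            {
                reader.GetValues(values);
                for (int i = 0; i < values.Length; i++)
                    fields[i] = Escape(Convert.ToString(values[i], CultureInfo.InvariantCulture));
                output.WriteLine(string.Join(",", fields));
            }
        }

        // Quote every field and double any embedded quotes (RFC 4180 style).
        static string Escape(string field)
        {
            return "\"" + (field ?? string.Empty).Replace("\"", "\"\"") + "\"";
        }
    }

The caller would pass in whatever DbCommand.ExecuteReader returns, together with a StreamWriter over the target file, all inside the appropriate using blocks.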
Chances are pretty slim that you need to do this in C#. This is the domain of bulk data loading/exporting (commonly used in data warehousing scenarios).
Many (free) tools (I imagine even Toad by Quest Software) will do this more robustly and more efficiently than anything you could write yourself, on any platform.
I have a hunch that you don't actually need this for an end user (the simple observation being that the department secretary doesn't actually need to mail out copies of it; it is too large to be useful that way).
I suggest using the right tool for the job. And whatever you do:
do not roll your own datatype conversions;
use CSV with quoted literals, and remember to escape the double quotes inside them;
think about regional options (in other words: always use InvariantCulture for export/import!).
"This process takes too much time so i need to shorten this time. "
This process consists of three sub-processes:
Retrieving > 10m records
Writing records to file
Transferring records across the network (my presumption is you are working with a local client against a remote database)
Any or all of those issues could be a bottleneck. So, if you want to reduce the total elapsed time you need to figure out where the time is spent. You will probably need to instrument your C# code to get the metrics.
If it turns out the query is the problem, then you will need to tune it. Indexes won't help here, as you're retrieving a large chunk of the table (>10%), so what will help is improving the performance of the full table scan, for instance by increasing memory to avoid disk sorts. Parallel query could be useful (if you have Enterprise Edition and sufficient CPUs). Also check that the problem isn't a hardware issue (spindle contention, dodgy interconnects, etc.).
Can writing to a file be the problem? Perhaps your disk is slow for some reason (e.g. fragmentation) or perhaps you're contending with other processes writing to the same directory.
Transferring large amounts of data across a network is obviously a potential bottleneck. Are you certain you're only sending relevant data to the client?
An alternative architecture: use PL/SQL to write the records to a file on the dataserver, using bulk collect to retrieve manageable batches of records, and then transfer the file to where you need it at the end, via FTP, perhaps compressing it first.
The real question is why you need to read so many rows from the database (and such a large proportion of the underlying dataset). There are lots of approaches which should make this scenario avoidable, obvious ones being synchronous processing, message queueing and pre-consolidation.
Leaving that aside for now...if you're consolidating the data or sifting it, then implementing the bulk of the logic in PL/SQL saves having to haul the data across the network (even if it's just to localhost, there's still a big overhead). Again if you just want to dump it out into a flat file, implementing this in C# isn't doing you any favours.

SqlBulkCopy or BULK INSERT

I have about 6,500 files totalling about 17 GB of data, and this is the first time I've had to move what I would call a large amount of data. The data is on a network drive, but the individual files are relatively small (7 MB max).
I'm writing a program in C#, and I was wondering whether I would notice a significant difference in performance if I used BULK INSERT instead of SqlBulkCopy. The table on the server also has an extra column, so if I use BULK INSERT I'll have to use a format file and then run an UPDATE for each row.
I'm new to forums, so if there is a better way to ask this question, feel free to mention that as well.
Based on testing, BULK INSERT is much faster. After an hour of using SqlBulkCopy, I was maybe a quarter of the way through my data, and by then I had finished writing the alternative method (and had lunch). By the time I finished writing this post (~3 minutes), BULK INSERT was about a third of the way through.
For anyone who is looking at this as a reference, it is also worth mentioning that the upload is faster without a primary key.
It should be noted that one of the major causes for this could be that the server is a significantly more powerful machine, so this is not an analysis of algorithmic efficiency. However, I would still recommend using BULK INSERT, as the average server is probably significantly faster than the average desktop computer.
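For reference, the SqlBulkCopy route sidesteps the format-file issue with explicit column mappings, leaving the destination table's extra column at its default; a minimal sketch, where the table, columns, and CSV layout are assumptions:

    // Minimal sketch of SqlBulkCopy with explicit column mappings, so a destination
    // table with extra columns needs no format file. Table, column, and file layout
    // are illustrative assumptions.
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    class BulkLoader
    {
        static void Load(string connectionString, string csvPath)
        {
            // Build a small staging DataTable from one source file (columns assumed).
            var table = new DataTable();
            table.Columns.Add("UserId", typeof(int));
            table.Columns.Add("Score", typeof(int));
            foreach (var line in File.ReadLines(csvPath))
            {
                var parts = line.Split(',');
                table.Rows.Add(int.Parse(parts[0]), int.Parse(parts[1]));
            }

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var bulkCopy = new SqlBulkCopy(connection))
                {
                    bulkCopy.DestinationTableName = "dbo.Scores";
                    bulkCopy.BatchSize = 10000;
                    // Map only the columns present in the source; the server table's
                    // extra column is simply left to its default value.
                    bulkCopy.ColumnMappings.Add("UserId", "UserId");
                    bulkCopy.ColumnMappings.Add("Score", "Score");
                    bulkCopy.WriteToServer(table);
                }
            }
        }
    }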
