Any real-world, enterprise-grade experience with Transactional NTFS (TxF)? - c#

Background:
I am aware of this SO question about Transactional NTFS (TxF) and this article describing how to use it, but I am looking for real-world experience with a reasonably high-volume enterprise system where lots of blob data (say documents and/or photos) need to be persisted once transactionally and read many times.
We are expecting a few tens of thousands of documents written per day and reads of several tens of thousands per hour.
We could either store indexes within the file system or in SQL Server but must be able to scale this out over several boxes.
We must retain the ability to back up and restore the data easily for disaster recovery.
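For concreteness, the basic TxF pattern we would be relying on looks roughly like this (a minimal P/Invoke sketch of the documented Win32 entry points; error handling and the inevitable wrapper class are omitted, and the file contents are placeholders):

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class TxfSketch
{
    [DllImport("KtmW32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern IntPtr CreateTransaction(IntPtr attrs, IntPtr uow, uint options,
        uint isoLevel, uint isoFlags, uint timeout, string description);

    [DllImport("KtmW32.dll", SetLastError = true)]
    static extern bool CommitTransaction(IntPtr transaction);

    [DllImport("Kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFileTransacted(string name, uint access,
        uint share, IntPtr security, uint disposition, uint flags,
        IntPtr template, IntPtr transaction, IntPtr miniVersion, IntPtr extended);

    [DllImport("Kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    public static void WriteBlobTransacted(string path, byte[] blob)
    {
        // Create a kernel transaction; everything written under it is
        // invisible to other readers until CommitTransaction succeeds.
        IntPtr tx = CreateTransaction(IntPtr.Zero, IntPtr.Zero, 0, 0, 0, 0, "blob write");
        try
        {
            using (var handle = CreateFileTransacted(path,
                0x40000000 /* GENERIC_WRITE */, 0, IntPtr.Zero,
                2 /* CREATE_ALWAYS */, 0, IntPtr.Zero, tx, IntPtr.Zero, IntPtr.Zero))
            using (var stream = new FileStream(handle, FileAccess.Write))
                stream.Write(blob, 0, blob.Length);

            // If the process dies before this line, the file never appears.
            CommitTransaction(tx);
        }
        finally { CloseHandle(tx); }
    }
}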
The Question:
Any real-world, enterprise-grade experience with Transactional NTFS (TxF)?
Related questions:
Anyone tried distributed transactions using TxF where the same file is committed to two mirror servers at once?
Anyone tried a distributed transaction with the file system and a database?
Any performance concerns/reliability concerns/performance data you can share?
Has anyone even done something on this scale before where transactions are a concern?
Edits: To be more clear, I have researched other technologies, including SQL Server 2008's new FILESTREAM data type, but this question is specifically targeted at the transactional file system only.
More Resources:
An MSDN Magazine article on TxF called "Enhance Your Apps With File System Transactions".
A webcast called "Transactional Vista: Kernel Transaction Manager and friends (TxF, TxR)". This video quotes an overhead from using TxF of 2-5%, with the performance discussion starting about 25 minutes in. This is the first set of hard numbers I've found, and the video is a very good overview of how this works under the hood. At about 34:30, the speaker describes a scenario very similar to this question.
A Channel 9 screencast called "Surendra Verma: Vista Transactional File System". He talks about performance starting around 35 minutes in. No hard numbers.
A list of TxF articles on the B# .NET Blog.
A Channel 9 screencast called "Transactional NTFS".

I suppose "real-world, enterprise-grade" experience is more subjective than it sounds.
Windows Update uses TxF. So it is being used quite heavily in terms of frequency. Now, it isn't doing any multi-node work and it isn't going through DTC or anything fancy like that, but it is using TxF to manipulate file state. It coordinates these changes with changes to the registry (TxR). Does that count?
A colleague of mine presented this talk to SNIA, which is pretty frank about a lot of the work around TxF and might shed a little more light. If you're thinking of using TxF, it's worth a read.

Unfortunately, it appears that the answer is "No."
In nearly two weeks (one week with a 100 point bounty) and 156 views, no one has answered that they have used TxF for any high-volume applications as I described. I can't say this was unexpected, and of course I cannot prove a negative, but it appears this feature of Windows is not well known or frequently used, at least by active members of the SO community at the time of writing.
If I ever get around to writing some kind of proof of concept, I'll post here what I learn.

Have you considered FILESTREAM support in SQL Server 2008 (if you're using SQL Server 2008, of course)? I'm not sure about performance, but it offers transactionality and supports backup/restore.
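For what it's worth, the streaming read path looks roughly like this (a hedged sketch: the Documents table, Content column and Id key are invented, and it assumes FILESTREAM is enabled on the instance and database):

using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

static byte[] ReadDocument(string connectionString, int documentId)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction()) // FILESTREAM access requires a transaction
        {
            var cmd = new SqlCommand(
                "SELECT Content.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                "FROM Documents WHERE Id = @id", conn, tx);
            cmd.Parameters.AddWithValue("@id", documentId);

            string path;
            byte[] txContext;
            using (var reader = cmd.ExecuteReader())
            {
                reader.Read();
                path = reader.GetString(0);
                txContext = (byte[])reader[1];
            }

            using (var fs = new SqlFileStream(path, txContext, FileAccess.Read))
            using (var ms = new MemoryStream())
            {
                fs.CopyTo(ms); // a regular Stream over the NTFS-backed blob
                tx.Commit();
                return ms.ToArray();
            }
        }
    }
}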

While I don't have extensive experience with TxF, I do have experience with MS DTC. TxF itself is fairly performant. When you throw in MS DTC to handle multiple resource managers across multiple machines, performance takes a considerable hit.
From your description, it sounds like you are storing and indexing very large volumes of unstructured data. I assume that you also need the ability to search for this data. As such, I would highly recommend looking into something like Microsoft's Dryad or Google's MapReduce and a high performance distributed file system to handle your unstructured data storage and indexing. The best examples of high-volume enterprise systems that store and index massive volumes of blob data are Internet search engines like Bing and Google.
There are quite a few resources available for managing high-throughput unstructured data, and they would probably solve your problem more effectively than SQL Server and NTFS.
I know it's a bit farther out of the box than you were probably looking for... but you did mention that you had already exhausted all other search avenues around the NTFS/TxF/SQL box. ;)

Ronald: FileStream is layered on top of TxF.
JR: While Windows Update uses TxF/KTM and demonstrates its utility, it is not a high-throughput application.

Related

Trying to optimize I/O for MongoDB

I have an updater script that runs every few hours for various regions on a gaming server. I am looking to run this script more frequently and add more regions. Ideally I would love to spread the load of the CPU and I/O as evenly as possible. I used to run this script using mysql, but now the website uses mongodb for everything, so it kinda made sense to move the updater scripts to mongodb too. I am having really high I/O spikes when mongodb flushes all of the updates to the database.
The script is written in C#, although I don't think that's too relevant. More important is that we are doing about 500k to 1.2 million updates each time one of these scripts runs. We have done some small optimizations in the code and with indexes, but at this point we are stuck on how to optimize the actual mongodb settings.
Some other important information is that we do a full-document replace, like this:
update({'someIdentifier': 1}, $newDocument)
instead of a targeted update, like this:
update({'someIdentifier': 1}, { $set: { internalName: 'newName' } })
Not sure if this is a lot worse in performance than doing $set or not.
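For reference, with the official C# driver the two variants look roughly like this (a sketch only; the database, collection and field names are stand-ins for ours):

using MongoDB.Bson;
using MongoDB.Driver;

var db = new MongoClient("mongodb://localhost").GetDatabase("game");
var regions = db.GetCollection<BsonDocument>("regions");
var filter = Builders<BsonDocument>.Filter.Eq("someIdentifier", 1);

// Full-document replace, as the updater does today: the whole document
// is shipped over the wire and rewritten on disk.
var newDocument = new BsonDocument { { "someIdentifier", 1 }, { "internalName", "newName" } };
regions.ReplaceOne(filter, newDocument);

// Targeted $set, the alternative I am asking about: only the named
// field is sent and touched.
regions.UpdateOne(filter, Builders<BsonDocument>.Update.Set("internalName", "newName"));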
What can we do to try and spread the load out? I can assign more memory to the VM if that will help as well.
I am happy to provide more information.
Here are my thoughts:
1) Properly explain your performance concerns.
So far I can't really figure out what the issue is or if you have one at all. As far as I can tell you're doing around a GB of updates and are writing about a GB of data to the disk... not much of a shock.
Oh, and do some damn testing. "Not sure if this is a lot worse in performance than doing $set or not" - why don't you know? What do your tests say?
2) Check to see if there is any hardware mismatch.
Is your disk just slow? Is your working set bigger than RAM?
3) Ask on mongo-user and other MongoDB specific communities...
...simply because you might get a better answer there than the lack of answers here.
4) Consider trying TokuMX.
Wait, what? Didn't I just accuse the last guy of basically spamming his own product?
Sure, it's a new product that's only recently been introduced into Mongo (it appears to have had a mysql version for a bit longer), but the fundamentals seem sound. In particular it's very fast not only at insertions, but at updates/deletions too. It does this by not needing to actually go and make the changes to the document in question, but by storing the insertion/update/deletion message in a buffered queue as part of the index structure. As the buffer fills up it applies these changes in bulk, which is massively more efficient in terms of I/O. On top of that, it uses compression in storing data, which should additionally reduce I/O - there's physically less to write.
The biggest disadvantage I can see so far is that its best performance is seen with big data - if your data fits into RAM then it loses to B-trees in a bunch of tests. Still fast, but not as fast.
So yeah, it's very new and I would not trust it for anything without testing, and even then only for non-mission-critical stuff, but it might be what you're looking for. And TBH, as it's just a new index/store sub-system, it's more an optimisation for mongodb than a separate product - especially since index/storage systems in mongodb have always been a bit simple ('let's use memory-mapped files for caching', etc.).
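To illustrate the buffering idea in the abstract - this is only a conceptual sketch, nothing like TokuMX's actual fractal-tree code - the trick is that a mutation costs a cheap append until a whole batch can be applied at once:

using System.Collections.Generic;

class Mutation
{
    public string Key;
    public string Value;
    public bool IsDelete;
}

class BufferedStore
{
    readonly Queue<Mutation> buffer = new Queue<Mutation>();
    readonly SortedDictionary<string, string> store = new SortedDictionary<string, string>();
    const int FlushThreshold = 1024;

    public void Apply(Mutation m)
    {
        buffer.Enqueue(m);            // O(1): no random I/O at write time
        if (buffer.Count >= FlushThreshold)
            Flush();                  // one bulk pass instead of N seeks
    }

    public string Read(string key)
    {
        Flush();                      // readers must observe pending messages
        string v;
        return store.TryGetValue(key, out v) ? v : null;
    }

    void Flush()
    {
        while (buffer.Count > 0)
        {
            var m = buffer.Dequeue();
            if (m.IsDelete) store.Remove(m.Key);
            else store[m.Key] = m.Value;
        }
    }
}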

Caching big data

I have an application that monitors various systems in real time. I get different reports with different fields depending on the monitored application. We gather data in 3-minute intervals, and each 3-minute interval can be 120MB as raw JSON and 2-3MB as zipped or gzipped JSON. We zip the data and cache it to disk; to avoid database requests we read those caches back from disk, unzip them, and load the JSON data into the application. We hold these caches for 3 to 30 days depending on the report type.
For years we have used disk caching - zipping the 3-minute interval data and then saving it to disk. This has led me to use a lot of locks and mutexes.
I know I'm not the only one with this kind of problem. My cache is big. My question is: is there a better way to save this data and retrieve it? Memory caching is not a solution for me because 30 days of data can't fit in memory and I am not able to add memory to the server for this application. I need something else - something better than disk and without the use of locks.
P.S.: The application is also multi-threaded.
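For reference, the current scheme is essentially this (a simplified sketch, minus the locking that is the actual problem; paths and payloads are stand-ins):

using System.IO;
using System.IO.Compression;
using System.Text;

static class DiskCache
{
    public static void Write(string path, string json)
    {
        using (var file = File.Create(path))
        using (var gz = new GZipStream(file, CompressionMode.Compress))
        {
            byte[] bytes = Encoding.UTF8.GetBytes(json);
            gz.Write(bytes, 0, bytes.Length); // ~120MB of JSON shrinks to 2-3MB
        }
    }

    public static string Read(string path)
    {
        using (var file = File.OpenRead(path))
        using (var gz = new GZipStream(file, CompressionMode.Decompress))
        using (var reader = new StreamReader(gz, Encoding.UTF8))
            return reader.ReadToEnd();
    }
}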
I would consider a NoSQL storage engine. I am thinking of Redis in particular. Redis is an in-memory, fast key-value store with persistence, which should be a good fit for this kind of scenario. You can then defer most of the lock/consistency hassle to it.
A problem with Redis arises if you are really bound to a Windows environment. There is an "unofficial" port of Redis, done by Microsoft itself... but I admit that I would not be extremely confident using it in production.
As for a C# client/library, there is Booksleeve. This site (SO) uses it :) so I bet it is pretty stable!
Of course you will need to tailor Redis to your needs. Redis does offer persistence, and the persistence is configurable (see http://redis.io/topics/persistence). Also, it offers expiration of objects (http://redis.io/commands/expire), very handy for a cache-like mechanism, and the ability to build more complex, atomic commands starting from simpler ones.
I would use Redis to handle the in-memory cache, keeping all the (primary) keys in memory, with data both on disk and in memory. The in-memory data is associated with a volatile key. The primary key points to the in-memory key and to a file name; if the volatile key it points to has expired, you can reload the data from the file and access it again (see the sketch after the list below).
This is a complex solution, but it has several advantages:
it should be very fast
it offloads some of the locks/etc. burden to Redis
it should be easy to migrate from your solution to this one
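A rough sketch of that two-tier scheme (shown with StackExchange.Redis, Booksleeve's successor; the key names and the one-hour TTL are made up):

using System;
using System.IO;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

void Put(string id, byte[] zipped, string diskPath)
{
    db.StringSet("data:" + id, zipped, TimeSpan.FromHours(1)); // volatile in-memory copy
    db.StringSet("meta:" + id, diskPath);                      // permanent pointer to the file
}

byte[] Get(string id)
{
    RedisValue hot = db.StringGet("data:" + id);
    if (hot.HasValue)
        return hot;                                // served from memory

    string path = db.StringGet("meta:" + id);      // expired: fall back to disk
    byte[] bytes = File.ReadAllBytes(path);
    db.StringSet("data:" + id, bytes, TimeSpan.FromHours(1)); // re-warm the cache
    return bytes;
}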
Alternatively, Redis also offers a VM solution
http://oldblog.antirez.com/post/redis-virtual-memory-story.html, but I do not know how stable it is, nor have I ever tried it.
Another alternative is to explore other NoSQL solutions; since you mentioned JSON data, I would look at MongoDB.
Finally, a crazy idea... are you on a 64-bit machine?
Have you considered "letting the OS handle it", with a really big page file and page-file-backed memory-mapped files (or a standard file too)? Mind you, it might be a very BAD idea...! But it is something that maybe you could try out or research?
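If you want to experiment with that last idea, the building block is in System.IO.MemoryMappedFiles (a minimal sketch; the map name and capacity are arbitrary):

using System.IO.MemoryMappedFiles;

// A named, page-file-backed map: the OS pages it in and out as needed.
using (var mmf = MemoryMappedFile.CreateNew("bigCache", 1L * 1024 * 1024 * 1024))
using (var accessor = mmf.CreateViewAccessor(0, 1024))
{
    accessor.Write(0, 12345);
    int value = accessor.ReadInt32(0);
}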

Database recommendations needed -> Columnar, Embedded (if possible)

EDIT: As a result of the answers so far, I would like to add more focus to what I am zeroing in on: a database that allows writing in-memory (could be simple C# code) with persistence to storage, in order to access the data from within R. Redis so far looks the most promising. I am also considering actually using something similar to Lockfree++ or ZeroMQ, in order to avoid writing data concurrently to the database, but rather sending all the to-be-persisted data over a message bus/other implementation and having one "actor" handle all write operations to an in-memory db or other solution. Any more ideas besides Redis (some mentioned SQLite and I still need to test its performance)? Any other suggestions?
I am searching for the ideal database structure/solution that meets most of my below requirements but so far I utterly failed. Can you please help?
My tasks: I run a process in .Net 4.5 (C#) and generate (generally) value types that I want to use for further analysis in other applications, and therefore want to either preserve in-memory or persist on disk. More below. The data is generated within different tasks/threads, and thus a row-based data format does not lend itself well to this situation (because the data generated in different threads is generated at different times and is thus not aligned). Thus I thought a columnar data structure may be suitable, but please correct me if I am wrong.
Example:
Tasks/Thread #1 generates the following data at given time stamps
datetime.ticks / value of output data
1000000001 233.23
1000000002 233.34
1000000006 234.23
...
Tasks/Thread #2 generates the following data at given time stamps
datetime.ticks / value of output data
1000000002 33.32
1000000005 34.34
1000000015 54.32
...
I do not need to align the time stamps at .Net run-time; I am first and foremost after preserving the data, so I can process it within R or Python at a later point.
My requirements:
Fast writes, fast writes, fast writes: It can happen that I generate 100,000-1,000,000 data points per second and need to persist (worst case) or retain the data in memory. It's OK to run the writes on their own thread so this process can lag the data generation process, but the limitation is 16GB RAM (64-bit code); more below.
Preference is for columnar db format as it lends itself well to how I want to query the data later but I am open to any other structure if it makes sense in regards to the examples above (document/key-value also ok if all other requirements are met, especially in terms of write speed).
API that can be referenced from within .Net. Example: HDF5 may be considered capable by some, but I find its .Net port horrible. Something that supports .Net a little better would be a plus, but if all other requirements are met then I can deal with something similar to the HDF5 .Net port.
Concurrent writes if possible: As described earlier I like to write data concurrently from different tasks/threads.
I am constrained by 16gb memory (run .Net process in 64bit) and thus I probably look for something that is not purely in-memory as I may sometimes generate more data than that. Something in-memory which persists at times or a pure persistence model is probably preferable.
Preference for embedded but if a server in a client/server solution can run as a windows service then no issue.
In terms of data access I have a strong preference for a db solution for which interfaces from R and Python already exist, because I want to use the pandas library within Python for time series alignments and other analysis and run analyses within R.
If the API/library in addition supports SQL/SQL-like/Linq-like queries, that would be terrific, but generally I just need the absolute bare bones, such as loading columnar data between a start and end date (given the "key"/index is in such format), because I analyze and run queries within R/Python.
If it comes with a management console or data visualizer that would be a plus but not a must.
Should be open source or priced within "reach" (no, KDB does not qualify in that regard ;-)
OK, here is what I have so far, and again it's all I've got, because most db solutions simply fail already on the write performance requirement:
Infobright and Db4o. I like what I have read so far, but I admit I have not checked into any performance stats.
Something done myself. I can easily store value types in binary format and index the data by datetime.ticks; I would just need to somehow write scripts to load/deserialize the data in Python/R. But it would be a massive task if I wanted to add concurrency, a query engine, and other goodies. Thus I am looking for something already out there.
I can't comment -- low rep (I'm new here) -- so you get a full answer instead...
First, are you sure you need a database at all? If fast write speed and portability to R are your biggest concerns, then have you considered a flat file mechanism? According to your comments you're willing to batch writes out, but you need persistence; if those were my requirements I'd write a straight-to-disk buffering system that was lightning fast, then build a separate task that periodically took the disk files and moved them into a data store for R - and that's only if R reading the flat files wasn't sufficient in the first place.
If you can do alignment after-the-fact, then you could write the threads to separate files in your main parallel loop, cutting each file off every so often, and leave the alignment and database loading to the subprocess.
So (in crappy pseudo_code), build a thread process that you'd call with backgroundworker or some such and include a threadname string uniquely identifying each worker and thus each filestream (task/thread):
file_name = threadname + '0001.csv'  // or something
open(file_name for writing)
while(generating_data) {
    generate_data()
    while (buffer_not_full and very_busy) {
        write_data_to_buffer
        generate_data()
    }
    flush_buffer_to_disk(file_name)
    if(file is big enough or enough time has passed or we're not too busy) {
        close(file_name)
        move(file_name to bob's folder)
        increment file_name
        open(file_name for writing)
    }
}
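A hedged C# rendering of the same writer, one instance per producing thread (file naming and the roll policy are placeholders):

using System.IO;

class ThreadFileWriter
{
    readonly string threadName;
    readonly string doneFolder;   // "bob's folder" from the pseudocode
    int fileIndex;
    StreamWriter writer;

    public ThreadFileWriter(string threadName, string doneFolder)
    {
        this.threadName = threadName;
        this.doneFolder = doneFolder;
        Directory.CreateDirectory(doneFolder);
        writer = new StreamWriter(CurrentPath()) { AutoFlush = false };
    }

    public void Write(string line)
    {
        writer.WriteLine(line);   // buffered in memory until the next flush
    }

    // Call when the file is big enough, enough time has passed, or we're idle.
    public void Roll()
    {
        writer.Flush();
        writer.Dispose();
        File.Move(CurrentPath(), Path.Combine(doneFolder, Path.GetFileName(CurrentPath())));
        fileIndex++;
        writer = new StreamWriter(CurrentPath()) { AutoFlush = false };
    }

    string CurrentPath()
    {
        return threadName + fileIndex.ToString("D4") + ".csv";
    }
}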
Efficient and speedy file I/O and buffering is a straightforward and common problem. Nothing is going to be faster than this. Then you can just write another process to do the database loads and not sweat the performance there:
while(file_name in list of files in bob's folder sorted by date for good measure)
{
    read bob's file
    load bob's file to database
    align dates, make pretty
}
And I wouldn't write that part in C#, I'd batch script it and use the database's native loader which is going to be as fast as anything you can build from scratch.
You'll have to make sure the two loops don't interfere much if you're running on the same hardware. That is, run the task threads at a higher priority, or build in some mutex or performance limiters so that the database load doesn't hog resources while the threads are running. I'd definitely segregate the database server and hardware so that file I/O to the flat files isn't compromised.
FIFO queues would work if you're on Unix, but you're not. :-)
Also, hardware is going to have more of a performance impact for you than the database engine, I'd imagine. If you're on a budget I'm guessing you're on COTS hardware, so springing for a solid state drive may up performance fairly cheaply. As I said, separating the DB storage from the flat file storage would help, and the CPU/RAM for R, the Database, and your Threads should all be segregated ideally.
What I'm saying is that choice of DB vendor probably isn't your biggest issue, unless you have a lot of money to spend. You'll be hardware bound most of the time otherwise. Database tuning is an art, and while you can eke out minor performance gains at the top end, having a good database administrator will keep most databases in the same ballpark for performance. I'd look at what R and Python support well and that you're comfortable with. If you think in columnar fashion then look at R and C#'s support for Cassandra (my vote), Hana, Lucid, HBase, Infobright, Vertica and others, and pick one based on price and support. For traditional databases on a single commodity machine, I haven't seen anything that MySQL can't handle.
This is not to answer my own question but to keep track of all the databases I have tested so far and why they have not met my requirements (yet). Each time, I attempted to write 1 million single objects (1 long, 2 floats) to the database. For ooDBs, I stuck the objects into a collection and wrote the collection itself; similar story for key/value stores such as Redis, but I also attempted to write simple ints (1 million) to columnar dbs such as InfoBright.
Db4o: awfully slow writes: 1 million objects within a collection took about 45 seconds. I later optimized the collection structure and also wrote each object individually; not much love here.
InfoBright: Same thing, very slow in terms of write speed, which surprised me quite a bit as it organizes data in columnar format, but I think the "knowledge tree" only kicks in when querying data rather than when saving flat data structures/table-like structures.
Redis (through BookSleeve): Great API for .Net: full Redis functionality (though there are a couple of drawbacks to running the server on Windows machines vs. a Linux or Unix box). Performance was very fast... north of 1 million items per second. I serialized all objects using Protocol Buffers (protobuf-net; it and BookSleeve are both written by Marc Gravell). I still need to play a lot more with the library, but R and Python both have full access to the Redis DB, which is a big plus. Love it so far. The async framework that Marc wrote around the Redis base functions is awesome, really neat, and it works so far. I want to spend a little more time experimenting with the Redis lists/collection types as well, as I have so far only serialized to byte arrays.
SQLite: I ran purely in-memory and managed to write 1 million value-type elements in around 3 seconds. Not bad for a pure RDBMS; obviously the in-memory option really speeds things up. I only created one connection and one transaction, created one command and one parameter, and simply adjusted the value of the parameter within a loop and ran ExecuteNonQuery on each iteration. The transaction commit was then run outside the loop. (The pattern is reconstructed in the sketch after the HDF5 entry below.)
HDF5: Though there is a .Net port, and there also exists a library to somehow work with HDF5 files out of R, I strongly discourage anyone from doing so. It's a pure nightmare. The .Net port is very badly written; heck, the whole HDF5 concept is more than questionable. It's a very old and, in my opinion, outgrown solution for storing vectorized/columnar data. This is 2012, not 1995. If one cannot completely delete datasets and vectors out of the file in which they were stored, then I do not call that an annoyance but a major design flaw. The API in general (not just .Net) is very badly designed and written, imho; there are tons of class objects that nobody, without having spent hours and hours studying the file structure, understands how to use. I think that is somewhat evidenced by the very sparse amount of documentation and example code that is out there. Furthermore, the h5r R library is a drama, an absolute nightmare. It's badly written as well (often the file is not correctly closed upon writing due to a faulty flush, and it corrupts files), and the library has issues even being properly installed on 32-bit OSs... and it goes on and on. I write the most about HDF5 because I spent the most of my time on this piece of .... and ended up with the most frustration. The idea of a fast columnar file storage system, accessible from R and .Net, was enticing, but it just does not deliver what it promised in terms of API integration and usability, or lack thereof.
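For the record, the SQLite pattern mentioned above looked roughly like this (a reconstruction; the one-column schema is invented for illustration, using the System.Data.SQLite provider):

using System.Data;
using System.Data.SQLite;

using (var conn = new SQLiteConnection("Data Source=:memory:"))
{
    conn.Open();
    new SQLiteCommand("CREATE TABLE vals (v REAL)", conn).ExecuteNonQuery();

    using (var tx = conn.BeginTransaction())
    using (var cmd = new SQLiteCommand("INSERT INTO vals (v) VALUES (@v)", conn, tx))
    {
        var p = cmd.Parameters.Add("@v", DbType.Double);
        for (int i = 0; i < 1000000; i++)
        {
            p.Value = i * 0.5;        // adjust the parameter, reuse the command
            cmd.ExecuteNonQuery();
        }
        tx.Commit();                  // commit once, outside the loop
    }
}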
Update: I ditched testing VelocityDB simply because there does not seem to be any adapter available to access the db from within R. I am currently contemplating writing my own GUI with a charting library, which would access the generated data either from a written binary file, or have it sent over a broker-less message bus (ZeroMQ), or sent through LockFree++ to an "actor" (my GUI). I could then call R from within C# and have results returned to my GUI. That would possibly allow me the most flexibility and freedom, but it would obviously also be the most tedious to code. I am running into more and more limitations during my tests, and with each db test I warm to this idea more and more.
RESULT: Thanks for the participation. In the end I awarded the bounty points to Chipmonkey because he suggested part of what I considered the important points to the solution of my problem (though I chose my own, different solution in the end).
I ended up with a hybrid between Redis in memory storage and direct calls out of .Net to the R.dll. Redis allows access to its data stored in memory by different processes. This makes it a convenient solution to quickly store the data as key/value in Redis and to then access the same data out of R. Additionally I directly send data and invoke functions in R through its .dll and the excellent R.Net library. Passing a collection of 1 million value types to R takes about 2.3 seconds on my machine which is fast enough given that I get the convenience to just pass in the data, invoke computational functions within R out of the .Net environment and getting the results back sync or async.
Just a note: a fellow in a Delphi forum once posted a similar problem. I helped him with a simple ID-key-value database backend I wrote at that time (kind of a NoSQL engine). Basically, it uses a B-Tree to store triplets (32-bit ObjectID, 32-bit PropertyKey, 64-bit Value). I managed to save about 500k values/sec in real time (about 5 years ago). Of course, the data was indexed on all three values (ID, property-ID and value). You could optimize this by ignoring the value index.
The source I still have is in Delphi, but I would think about implementing something like that using C#. I cannot tell you whether it will meet your needs for performance, but if all else fails, give it a try. Using a buffered write should also drastically improve performance.
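A hedged C# starting point for the triplet idea (a sorted map standing in for the B-Tree; the value index is omitted, as suggested above):

using System;
using System.Collections.Generic;

struct TripletKey : IComparable<TripletKey>
{
    public uint ObjectId;
    public uint PropertyKey;

    public int CompareTo(TripletKey other)
    {
        int c = ObjectId.CompareTo(other.ObjectId);
        return c != 0 ? c : PropertyKey.CompareTo(other.PropertyKey);
    }
}

class TripletStore
{
    // Keyed on (ObjectID, PropertyKey), as in the Delphi engine described above.
    readonly SortedDictionary<TripletKey, long> index =
        new SortedDictionary<TripletKey, long>();

    public void Put(uint objectId, uint propertyKey, long value)
    {
        index[new TripletKey { ObjectId = objectId, PropertyKey = propertyKey }] = value;
    }

    public bool TryGet(uint objectId, uint propertyKey, out long value)
    {
        return index.TryGetValue(
            new TripletKey { ObjectId = objectId, PropertyKey = propertyKey }, out value);
    }
}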
I would go with a way of combining persistent storage (I personally prefer db4o, but you can use files as well, as mentioned above) and storing objects in memory this way:
use BlockingCollection<T> to store objects in memory (I believe you will achieve better performance than 1,000,000/s storing objects in memory), and then have one or more consuming threads which will take the objects and store them into the persistent database
// Producing thread
for (int i = 0; i < 1000000; i++)
    blockingCollection.Add(myObject);

// Consuming threads
while (true)
{
    var myObject = blockingCollection.Take();
    db4oSession.Store(myObject); // or write it to files or whatever
}
BlockingCollection pretty much solves the producer-consumer workflow, and if you use multiple instances of it and use AddToAny/TakeFromAny you can reach any kind of multithreaded performance.
Each consuming thread could have a different db4o session (file) to reach the desired performance (db4o is single-threaded).
Since you want to use ZeroMQ why not use memcache over Redis?
ZeroMQ offers no persistence as far as I know. Memcache also offers no persistence and is a bit faster than Redis.
Or perhaps the other way around: if you use Redis, why not use beanstalk MQ?
If you want to use Redis (for the persistence) you might want to switch from ZeroMQ to beanstalk MQ (also a fast in-memory queue, but one that also has persistence via logging). Beanstalk also has C# libs.

Is it reasonable to write a server application in C# in my case?

I want it to work on windows servers.
It will be a cloud type server - it'll consist of modules\parts running on different machines all over the world using http\tcp + upnp to connect to each other
There are going to be controlling\monitoring\observing modules on each machine to provide stats on performance
This net is going to be working with large amounts of VIDEO\AUDIO live streaming\broadcasting data
It is going to use FFMPEG for re-encoding and OpenGL, OpenCV and such for filtering (.NET wrappers exist and work BTW)
It will not use any WCF or IIS
I want to develop it in team of 2-4 developers, smart students.
So is it OK to create this in C# .Net, or should I not waste my time on the promises of ease it could provide to a developer, and go with C\C++?
So is it reasonable to write a server application in C# in my case?
Offtop - why not WCF
Warning: it gets way too subjective in here.
WCF is great when you are a big corp with relatively small data exchange per service session.
When you have video, LIVE video, it all gets complicated. Large amounts of data, lots of users stream in and out from your service at the same time.
Try to do live video streaming over http binding - then try it with other bindings and you'll see why I do not like the idea of live streaming with WCF: it is slow, with way too much overhead not needed for live streaming. And after all, have you ever seen a live video streaming app on WCF? No, you haven't. Maybe you have seen more-or-less live video on a Silverlight + IIS pair, which I do not like because it is just a Silverlight\WindowsMediaPlayer video streaming solution, while I want more than that.
I love to have cross-platform clients with rich UIs. And I do not like (this is all my personal opinion, so it is subjective) the Silverlight+IIS+WCF group. So what shall I do? Right: go to sockets and streams in such old and simple formats as FLV, with Flash as the back-end client. Simpler in development in some parts, and a more conservative way of doing live video over the web than the one you get from MS today.
I love Flash FLV live streaming because you just open a socket and start sending live FLV video data onto it (for each user an FLV header and then FLV "tags", one by one: video tag, audio tag, video tag, audio tag, etc.) and Flash plays it! With no special\unusual code. It is fast, easy to support, and does not require the client to have anything new\unusual. And on the server side you can make great use of that "tag" form of video\audio data representation.
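To make the "header then tags" framing concrete, here is a hedged sketch of the byte layout (from the public FLV spec; the socket plumbing and the encoded payload are assumed to exist):

using System;
using System.IO;

static class FlvFraming
{
    // Sent once per connected viewer, before any tags.
    public static void WriteHeader(Stream s)
    {
        byte[] header = {
            (byte)'F', (byte)'L', (byte)'V',
            0x01,                   // version
            0x05,                   // flags: audio + video present
            0x00, 0x00, 0x00, 0x09, // header size, big-endian
            0x00, 0x00, 0x00, 0x00  // PreviousTagSize0
        };
        s.Write(header, 0, header.Length);
    }

    // Then one of these per tag: video tag, audio tag, video tag, ...
    public static void WriteTag(Stream s, byte tagType /* 8 = audio, 9 = video */,
        uint timestampMs, byte[] payload)
    {
        var tag = new byte[11 + payload.Length + 4];
        tag[0] = tagType;
        tag[1] = (byte)(payload.Length >> 16);   // DataSize, 24-bit big-endian
        tag[2] = (byte)(payload.Length >> 8);
        tag[3] = (byte)payload.Length;
        tag[4] = (byte)(timestampMs >> 16);      // Timestamp, 24-bit big-endian
        tag[5] = (byte)(timestampMs >> 8);
        tag[6] = (byte)timestampMs;
        tag[7] = (byte)(timestampMs >> 24);      // TimestampExtended
        // tag[8..10] = StreamID, always 0
        Array.Copy(payload, 0, tag, 11, payload.Length);

        uint prev = (uint)(11 + payload.Length); // trailing PreviousTagSize, 32-bit BE
        tag[tag.Length - 4] = (byte)(prev >> 24);
        tag[tag.Length - 3] = (byte)(prev >> 16);
        tag[tag.Length - 2] = (byte)(prev >> 8);
        tag[tag.Length - 1] = (byte)prev;
        s.Write(tag, 0, tag.Length);
    }
}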
So that is, in short, why I just do not want to use WCF: it is hard to get live video playing out of it on the client side, and there are no general benefits for a live video server.
And when most of the live data goes through sockets, why bother with WCF for service management?
During the last half of 2009 and first half of 2010 I was getting into WCF, live video streaming, Silverlight and Flash, comparing the process of client\server creation and reading different formats with a team of very interesting developers. In general, at the end of the project we had lots of mini servers streaming live data and lots of different clients receiving it. Comparing all we had done, we came to conclusions close to the ones I present to you here.
That is why I do not want to use WCF in my nearest project: I do not want to think about how to deliver media data, I want to focus on filtering\editing it.
Why the question appeared
We started playing with FFmpeg\OpenCV in C, and it is pretty simple to manipulate data using them... in C... on Linux...
But when we started to play with their .Net bindings (we are now playing with Tao.FFmpeg) we found that in most cases we end up using C# Marshal a lot, and having 2 variables for each C analog (the problem of pointers) and so on. I hope we will not see such problems with Emgu CV, but still it makes me a little bit afraid...
I think it's entirely reasonable. The benefits of C# with regard to ease of development will greatly outweigh any performance drawbacks of not using C++.
C# is generally more cross-platform than C++. True, C++ is a cross-platform language, but there are large differences between the APIs that C++ programs use to interact with the system. C# and .Net/Mono have a much more standardized interface to the socket layer.
Finally, with ambitious projects like this, getting the project into a usable form is a much more important goal than getting the highest performance possible. Performance only matters if the project is complete. Write it in C# because that will give you the greatest odds of completion. Then worry about performance.
I'm not exactly sure why people have brought up Cross Platform concerns as clearly the OP has stated the app will run on Windows.
As to the actual questions.
Can you build a server application that communicates via tcp/http in C# that does not have to run in IIS? -> Yes.
Can you build a server application that is performant and scales in C#? -> Yes.
Can you do so with students? -> Maybe. Depends on the students... ;) But that is irrespective of the language in use.
Is this something I would do? Yes. We've done that. We have a c# app running on approximately 20,000 machines right now that are communicating effectively over tcp. We aren't using WCF, but we did decide to use RESTful style services over http for the data transfer.
Our biggest issue was simply tuning the app to transfer the "right" amount of data over the wire at a time. This network is for data collection and storage. It's averaging around 200GB of data collected a day.
UPDATE
I wanted to clarify a bit about the above app. The 20,000 machines at the above installation are clients (XP, Vista, 7, 2003 Server, and 2008 Servers). There's only one data collection point server in the mix. The clients post data to the server, when connected to a network, once every 45 seconds. Roughly 97% of the machines stay connected in this manner, the rest connect a couple times a week.
This works out to the server processing about 37 million requests a day.
Now, to be sure, each request is relatively small at around 5KB to 6KB each. However, the sheer number of requests shows that a C# application can handle managing those connections, which is the bigger part of the OP's problem.
Because the OP's files are large (Video), then the real issue is simply in data transfer. Which will be hindered more by hard drive speeds, as well as network speed and latency. Those issues are irrespective of which language you are working in and will limit the number of connections per server based on available bandwidth.
Working this out, let's limit it down to one server as an example. If you have a video rate of 400kb/s and a 25Mb connection to the internet, then that box could physically only handle around 62 simultaneous connections (25,000kb/s / 400kb/s = 62.5). Which is so FAR below the number of connections our app is doing as to be a rounding error.
Assuming perfect network conditions (which don't exist), pumping that internet connection up to 100Mb (which can be expensive) means a 4x increase in simultaneous connections, to around 250; still completely manageable.
However, the network is only one side of the equation. Drive speed on the servers matters a lot. You better have a good disk array capable of continuously delivering that amount of data. I know drives claim 3Gb/s transfer rates, but a drive which can saturate the channel has never been built. Which means serious planning and money in the server setup.
The point of all of this is to say that the language doesn't matter one bit in your situation. You have other much larger contention issues. With that being the case, go with the language that will help you get the project done faster.
Why stop at C#? If you (possibly) want cross-platform, write it in Python or similar; you'll find that the networking aspects of a scripting language are far better than C#'s (as that's pretty much the role scripting languages are put to nowadays: running web-based servers).
You'll find developer productivity is much improved over C# (just as C# has better productivity over C++), and there are lots of people who know and want to work on these systems. It sounds like performance of the servers themselves is of less importance than the networking, so it appears that script would be your best choice. Plus ffmpeg libraries are more tightly integrated with python using pyffmpeg than C# (well, mostly).
And it'd be a lot cooler, more fun, and very much cross-platform!
If you want C# and also cross-platform abilities, your development will have to target the Mono platform (or another cross-platform .NET runtime, if you can find one). You might have to give up VisualStudio, and maybe some Microsoft-specific libraries and tools, but you can still have C# on multiple platforms. Just make sure you start the multi-platform building and testing EARLY in the process or it will be hell to change things later.
If the target of the application is to run only on Windows platforms, I'm completely sure I would write this application in C#. Many applications like that could be running right now and we don't even know it.
If the target is to run on multiple platforms, you should first encapsulate all the problems that a non-Windows platform can bring to your application.
Why do you have to write it in C++ if, in this case, C# is capable of doing everything that C++ does? I would use C++ to program hardware-level things, like a robot or something else. To write a server application, C# will fit what you want very well; it was designed for these things.
And C# is cross-platform; you just need the right tool to make it work on a specific platform.

Can Access 2007 work well with 30 parallel users?

Can Access 2007 work well with 30 users in parallel through my C# program?
Thanks in advance.
Access is not very good for concurrent use. I have seen recommendations of a maximum of 10 people at one time.
To be honest, it depends on how these users are working and the load on it. However, it is not designed for such use (it is designed to be a desktop database, not an enterprise database), so it may fail under such usage. Use a database designed for your scenario - something like MySql or SQL Server Express, if you want to avoid extra costs.
See this article on 15seconds for a discussion of the suitability (or lack thereof) of Access for concurrent usage.
The Jet and ACE database engines can support 255 concurrent connections, not 255 users, and the two are not the same thing. The standard for interaction with a Jet/ACE data store is a single connection for each user, opened and then re-used throughout the session, which is why the limits get equated. However, it definitely is the case that under normal usage Jet/ACE may open more than one connection per user, so 255 is not even a reliable theoretical limit.
Jet/ACE interacts with a data file, and maintains locking via its locking file (*.LDB). Contention for the data file and the LDB file can easily overwhelm the file system's ability to keep up, so in general, the practical limit on number of users is much lower than the 255 theoretical limit (you'll note that 255 is one less than a power of 2, hint, hint).
In real-world scenarios, a properly designed Access application with a Jet/ACE data store running on a reliable network and stored on a server with a native Windows file system can be quite stable into the 20-30 users range. But it depends on what those users are doing. The more that are read-only, the higher the number of simultaneous users that can be supported.
Experienced Access developers report engineering apps to work with as many as 100 simultaneous users, but at that point, you basically have to rewrite as an unbound app, and then you're giving up most of the advantages of Access as front end in order to nurse along a back end that is better used with a smaller user population.
My basic rule is that any time a user population reaches 15 simultaneous users, I start talking to the client about upsizing to SQL Server, not because it's required, but because they need to get used to the idea that as usage grows, they're going to need to upsize. Whether that happens at 15 users or 20 or 30 depends on the nature of the particular app. As I said above, if many of the users are read-only for most of their session, you have more headroom than if everybody is adding/updating records most of the time.
Given that a C# app is going to be an unbound app, I wouldn't think that 30 users should be terribly problematic, but I'm not a C# programmer. If it's new development and there's any possibility that the user population will grow beyond 30 users, it just seems like a no-brainer to me to build with a server back end instead of with Jet/ACE.
I never did it with 2007, but I had problems in the past with the XP version and only 3 users working 8 hours a day.
So, based on my previous experience, try to avoid it. Making your customer change their requirements will be easier than dealing with the problems derived from using Access in a parallel environment. After all, also based on my experience... your customer will be changing their requirements almost every week! :D
May the Force be with you.
