Threading not possible - suggest another way to do parallel processing - C#

My C# code is not thread safe when I run it on multiple threads, because it involves lots of database connections and stored procedure executions. I still want to minimize the execution time, though. Can anyone suggest something for parallel or asynchronous processing, on either the database side or the .NET side?
I am stuck with threading and not able to apply it.
Thanks

There is not nearly enough information in your question to suggest an optimized solution. However, if you must deal with resources that are not thread safe, one strategy is to spawn separate processes to handle sub-tasks.
Keep in mind, though, that separate processes still would have to handle portions of the overall solution that do not step on each other.
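If that route fits, here is a minimal sketch of fanning sub-tasks out to separate processes; WorkerApp.exe and the batch-id argument are purely illustrative, not something from your project:

using System;
using System.Diagnostics;
using System.Linq;

class ProcessFanOut
{
    static void Main()
    {
        var batchIds = new[] { 1, 2, 3, 4 };

        // Start one worker process per batch; each process owns its own
        // database connections, so nothing has to be thread safe in-process.
        var workers = batchIds.Select(id => Process.Start(new ProcessStartInfo
        {
            FileName = "WorkerApp.exe",   // hypothetical worker executable
            Arguments = id.ToString(),    // tells the worker which batch to handle
            UseShellExecute = false
        })).ToList();

        foreach (var worker in workers)
            worker.WaitForExit();

        Console.WriteLine("All worker processes finished.");
    }
}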

Well, there are a couple of things that can be done, although I don't know how well they fit your situation.
On the database side you can do query optimization, indexing and other work that may help reduce query run time. Use a profiler to analyse the workload, and look at the query plans to check that the indexes are actually being used.
Use NOLOCK hints, but only on SELECTs where you know dirty reads are acceptable. (Not always good practice, but it is used.)
On the .NET side, implement properly synchronized threading that can process multiple requests; there is no way around reviewing your design. You can also do code optimization with a profiler, and you can use the Task Parallel Library as well, as in the sketch below.
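For illustration only, a minimal sketch of using the Task Parallel Library where each task owns its own connection, so nothing shared has to be thread safe; the procedure names and helper are made up:

using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

static void RunProcedure(string connectionString, string procedureName)
{
    // Each call gets its own connection, so nothing shared has to be thread safe.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(procedureName, connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        connection.Open();
        command.ExecuteNonQuery();
    }
}

static void RunInParallel(string connectionString)
{
    var tasks = new[]
    {
        Task.Factory.StartNew(() => RunProcedure(connectionString, "usp_ProcessOrders")),
        Task.Factory.StartNew(() => RunProcedure(connectionString, "usp_ProcessInvoices"))
    };
    Task.WaitAll(tasks);
}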
The other thing is that there could be an issue with your server itself. Check CPU and memory utilization.
(This is the least of my concerns.)

You should explain in more detail what your code is doing...
If you run a lot of loops, you should try Parallel.For / Parallel.ForEach:
http://msdn.microsoft.com/en-us/library/system.threading.tasks.parallel.aspx
With the Parallel class you can also queue tasks for ordered computation, or divide loops into blocks (partitions), which can improve overall performance; a sketch of both patterns is below...
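A minimal runnable sketch of both patterns (the Math.Sqrt work is just a placeholder for whatever your loop body does):

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class ParallelLoopSketch
{
    static void Main()
    {
        var items = Enumerable.Range(0, 1000).ToArray();
        var results = new double[items.Length];

        // Plain parallel loop over an index range.
        Parallel.For(0, items.Length, i =>
        {
            results[i] = Math.Sqrt(items[i]);   // placeholder per-item work
        });

        // The same loop divided into blocks (ranges), which reduces
        // per-iteration overhead when each iteration is very cheap.
        Parallel.ForEach(Partitioner.Create(0, items.Length), range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
                results[i] = Math.Sqrt(items[i]);
        });

        Console.WriteLine(results.Sum());
    }
}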
That's the best I can say with so little information.
Hope it helps.

Related

Maintaining a large number of open sockets in a C# application

I'm writing an application in C# where I want to maintain a large number of open socket connections (TCP) and be able to respond to data that comes in from any of them. Now, the normal thing to do (AFAIK) would be to call BeginRead on all of them, but as far as I know BeginRead spawns a thread that sits and waits for data, and I've read that threads don't scale very well when you use a lot of them. I might be wrong about this; if that's the case, please just tell me.
This will probably sound idiotic, but what I want is to maintain as many open socket connections as my machine/OS allows, while still being able to react to incoming data from any of them as fast as possible, and using as few system resources as possible while doing it.
What I've thought about is putting all of them in some kind of list and then running a single thread in a loop over them, checking for new data and acting on it (if there is any), though I suspect a loop like that would end up frying my CPU (because nothing in the loop blocks). Is there any way (simple or not, I don't mind getting into more complex algorithms to solve this) that I can achieve this? Any help would be appreciated.
No. The asynchronous methods do not spawn threads; they use I/O completion ports.
The older BeginXxx methods have been superseded; use the XxxAsync methods instead:
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.sendasync.aspx
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.receiveasync.aspx
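A rough sketch of the event-based receive pattern; error handling and buffer pooling are omitted, the class name is a placeholder, and the socket is assumed to be already connected:

using System;
using System.Net.Sockets;

class ReceiveSketch
{
    // Post an asynchronous receive; no thread is blocked while waiting for data.
    public void StartReceive(Socket socket)
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[4096], 0, 4096);
        args.UserToken = socket;
        args.Completed += OnReceiveCompleted;

        // ReceiveAsync returns false when the operation completed synchronously,
        // in which case the Completed event will not fire.
        if (!socket.ReceiveAsync(args))
            OnReceiveCompleted(socket, args);
    }

    private void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
    {
        var socket = (Socket)args.UserToken;

        if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
            return;   // connection closed or failed

        // Handle args.Buffer[0 .. args.BytesTransferred) here, then post the next receive.
        if (!socket.ReceiveAsync(args))
            OnReceiveCompleted(socket, args);
    }
}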

What should I make parallel? What will give me the most benefit? (.NET web business application, MVC + SL)

I'm working on a web application framework which uses MSSQL for data storage, mostly just does CRUD operations (but on arbitrarily complex structures), provides a WCF interface for a rich Silverlight admin and has an MVC3 display (and some basic forms like user settings, etc.).
It's getting quite good at being able to load, display, edit and save any (reasonably) complex data structure, in a user-friendly way.
But I'm looking towards the future and want to expand my capabilities (and it would be fun to learn new things along the way as well...), so I've decided (in light of what's coming in C# 5...) to try to get some parallel/async optimization... Now, I haven't even learned TPL and PLINQ yet, so I'm happy for any advice there as well.
So my question is: what are the possible areas where parallel processing may be of help, and where do TPL and PLINQ help me with that?
My gut tells me I could try saving branches of a data structure to the database in parallel (this is where I'd expect the biggest performance gain), and I could perform some complex operations (file upload, mail sending maybe?) in a multithreaded environment, etc. Can I build complex SL UI views in parallel on the client? (Creating 60 data-bound fields on a view can cause "blinking"...) Can I create partial views (menus, category trees, search forms, etc.) in MVC at once?
PS: If this turns into a "tell me everything about parallel stuff" thread, I'm happy to make it community wiki...
Remember that an ASP.NET web application is intrinsically a parallel application in any case. Requests can be serviced in parallel, and this is all managed by the ASP.NET framework. So there are two cases:
You have lots of users all hitting the site at once. In which case the parallel processing capability of the server is probably being used to capacity in any case.
You don't have lots of users all hitting the site at once. In which case the server is probably quite capable of dealing with the responses without parallel processing in a suitable fast response time.
Any time you start thinking about optimising something just because it might be fun, or because you just think you should make stuff faster, you are almost certainly guilty of premature optimization. Your efforts could almost certainly be better spent enriching the functionality of the framework, rather than making what is probably a plenty-fast-enough solution a little bit faster (at the cost of significantly increased complexity).
As for where TPL and PLINQ can really help: in my opinion the main advantage of these technologies is in the places in the application where you really do have a lot of long-running, blocking operations. For example, if you have a situation where you call out several times to an external web service, it can be a significant advantage to make those calls in parallel (see the sketch below). I would strongly question whether writing to a local database, or even a database on a different box on a local network, counts as a long-running blocking process to the extent that this kind of parallelisation is of any significant value.
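For example, a small sketch in which CallExternalService just sleeps to stand in for a slow external call (the method and service names are made up):

using System;
using System.Threading;
using System.Threading.Tasks;

class ParallelCallsSketch
{
    // Stand-in for a slow, blocking external web service call.
    static string CallExternalService(string name)
    {
        Thread.Sleep(1000);
        return name + " result";
    }

    static void Main()
    {
        var pricing   = Task.Factory.StartNew(() => CallExternalService("pricing"));
        var inventory = Task.Factory.StartNew(() => CallExternalService("inventory"));
        var shipping  = Task.Factory.StartNew(() => CallExternalService("shipping"));

        // Total wait is roughly the slowest call, not the sum of all three.
        Task.WaitAll(pricing, inventory, shipping);
        Console.WriteLine(pricing.Result);
    }
}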
Pretty much all the examples you list fall into the category of getting the PC to do something in parallel that it was previously doing in sequence. How many CPUs are on your server, and how many are really free when the website is under load? Making something parallel does not necessarily make it faster unless the process involved spends some measure of time sitting around doing nothing, waiting for an external event.
The first thing to do is ask the users / testers which bits seem slow. The only way to know for sure what's slowing you down is to use a profiler like dotTrace. The results are sometimes surprising.
If you do find something, parallel processing may not be the answer. You need to remember that there is an overhead in splitting tasks up, so if the task is fairly quick in the first place it could end up being slower. You also have to consider the added complexity, e.g. what happens if half a task succeeds and half fails? (Although TPL and PLINQ shield you from this to an extent.)
Have fun, but I'm wondering whether this is a case of (1) a solution chasing a problem, and (2) premature optimization.

Parallelism in .Net

I have been asked to show the benefits and limitations of parallelism and evaluate it for use within our company. We are predominantly a data-orientated business: essentially we load objects from the database, put them through some business logic, display them to the user, then save them back to the DB. In my mind there isn't too much in that pipeline that would benefit from running in parallel, but being fairly new to the concept I could be completely wrong. Would any part of that simple pipeline benefit from running in parallel? And are there any guidelines for how to implement this style of programming?
Also, are there any tools (preferably ones that come with VS2010) that would show where bottlenecks occur and be able to visually show what's going on when I click "Go" on a simple app that runs a given number of loops (pre-written simple maths loops, e.g. for i as integer = 1 to 1000 - do some calculations) in parallel, then in series?
I need to be able to display the difference using a decent profiling tool.
Yes, even from that simple model you could greatly benefit from parallelism.
Say for instance that during a load of your data you're doing something like this:
foreach (var dataRow in someDataSet)
{
    // put your data into some business objects here
}
you could optimize this with parallelism by doing something like this (Parallel.ForEach lives in the System.Threading.Tasks namespace):
Parallel.ForEach(someDataSet, dataRow =>
{
    // put your data into some business objects here
});
This could greatly increase your performance, depending on how much data you're processing here.
Each data row will now be processed in parallel instead of in sequence as in the typical foreach loop.
My suggestion to you would be to run some simple performance tests on an example as simple as this one and see what kind of results you get. Plot them out in a spreadsheet or something and show them to your team. You might be surprised by the results.
You may reap more benefit from implementing a caching layer (distributed or otherwise) than parallelizing your current pipeline.
With a caching layer, the objects you use frequently will reside in the in-memory cache, allowing for much greater read/write performance. There are a number of options for keeping the cache in sync, and these will vary depending on which vendor you choose.
I'd suggest having a look at MemCached and NCache and see if you think they would be a good fit.
EDIT: As far as profiling tools go, I've used dotTrace extensively and would highly recommend it. You can download a 30 day trial from JetBrains' website.
Certainly there are many tasks that can be parallelized; a detailed analysis can help, and bottlenecks are the obvious candidates.
This material can help you: Patterns for Parallel Programming: Understanding and Applying Parallel Patterns with the .NET Framework 4.
Possibly, but my general response to this sort of query would typically be: do you have any performance problems in your application(s)? If yes, then by all means investigate why and consider whether parallel execution can help. If not, then time is probably best spent elsewhere.
Have you checked out Microsoft's Parallel Computing with Managed Code site? It contains several articles on implementation guidelines discussing both when and how to use .Net 4's parallel features.

C# - Moving files - to queue or multi-thread

I have an app that moves a project and its files from preview to production using a Flex front-end and a .NET web service. Currently the process takes about 5-10 minutes per project. Aside from latency concerns, it really shouldn't take that long. I'm wondering whether or not this is a good use case for multi-threading. Also, considering the user may want to push multiple projects, or one right after another, is there a way to queue the jobs?
Any suggestions and examples are greatly appreciated.
Thanks!
Something that does heavy disk I/O typically isn't a good candidate for multithreading, since the disks can really only do one thing at a time. However, if you're pushing to multiple servers, or the servers have particularly good disk subsystems, some light threading may be beneficial.
As a note - regardless of whether or not you decide to queue the jobs, you will use multi-threading. Queueing is just one way of handling what is ultimately solved using multi-threading.
And yes, I'd recommend you build a queue to push out each project.
You should compare the speed of your code to just copying in Windows (i.e., Explorer or the command line) and to copying with something more advanced like TeraCopy. If your code is significantly slower than Windows, then look for parts of your code to optimize using a profiler. If your code is about as fast as Windows but slower than TeraCopy, then multithreading could help.
Multithreading is not generally helpful when the operation is I/O bound, but copying files involves reading from the disk AND writing over the network. That is two I/O operations, so if you separate them onto different threads it could increase performance. For something like this you need a producer/consumer setup with a circular (bounded) queue: one thread reading from disk and writing to the queue, and another thread reading from the queue and writing to the network. It's important to keep in mind that the two threads will not run at the same speed, so the reader must wait when the queue is full and the writer must wait when it is empty. Also, the locking strategy can have a big impact on performance here and could cause it to degrade to slower than a single-threaded implementation. A sketch of this pattern follows.
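A rough sketch of that pattern using BlockingCollection from .NET 4, which gives the wait-when-full / wait-when-empty behaviour without a hand-rolled circular queue; the chunk size and method shape are illustrative:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class CopyPipelineSketch
{
    static void Copy(string sourcePath, Stream destination)
    {
        // Bounded queue: the reader blocks when it gets too far ahead of the writer.
        var chunks = new BlockingCollection<byte[]>(boundedCapacity: 16);

        var reader = Task.Factory.StartNew(() =>
        {
            using (var source = File.OpenRead(sourcePath))
            {
                var buffer = new byte[64 * 1024];
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    var chunk = new byte[read];
                    Array.Copy(buffer, chunk, read);
                    chunks.Add(chunk);            // blocks if the queue is full
                }
            }
            chunks.CompleteAdding();              // tells the consumer we're done
        });

        var writer = Task.Factory.StartNew(() =>
        {
            foreach (var chunk in chunks.GetConsumingEnumerable())   // blocks if empty
                destination.Write(chunk, 0, chunk.Length);
        });

        Task.WaitAll(reader, writer);
    }
}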
If you're moving things between just two computers, the network is going to be the bottleneck, so you may want to queue these operations.
Likewise, on the same machine, the I/O is going to be the bottleneck, so you'd want to queue there, too.
You should try using the ThreadPool.
ThreadPool.QueueUserWorkItem(MoveProject, project);
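For that call to compile, MoveProject needs the WaitCallback shape (a single object parameter); Project here is just a stand-in for whatever your job type actually is:

// Handler matching the WaitCallback delegate; Project is a hypothetical job type.
static void MoveProject(object state)
{
    var project = (Project)state;
    // ... copy the project's files from preview to production here ...
}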
Agreed with everyone about the limited performance benefit of running the tasks in parallel.
If you have full control over your deployment environment, you could use Rhino Queues:
http://ayende.com/Blog/archive/2008/08/01/Rhino-Queues.aspx
This will allow you to produce a queue of jobs asynchronously (say from a WCF service being called from your Silverlight/Flex app) and consume them synchronously.
Alternatively you could use WCF and MSMQ, but the learning curve is greater.
When dealing with multiple files, using multiple threads usually IS a good idea as far as performance is concerned. The main reason is that most disks nowadays support Native Command Queuing.
I wrote an article recently on ddj.com about reading/writing files with multiple threads.
See http://www.ddj.com/go-parallel/article/showArticle.jhtml?articleID=220300055.
Also see related question
Will using multiple threads with a RandomAccessFile help performance?
In particular, my experience has been that when dealing with very many files it IS a good idea to use a number of threads. Contrary to common expectation, using many threads in many cases does not slow applications down as much as people assume.
Having said that, I'd say there is no other way to find out than trying the different approaches. It depends on very many conditions: hardware, OS, drivers, etc.
The very first thing you should do is point any kind of profiling tool towards your software. If you can't do that (like, if you haven't got such a tool), insert logging code.
The very first thing you need to do is figure out what is taking a long time to complete, and then why it is taking a long time to complete. That your "copy" operation as a whole takes a long time isn't good enough; you need to pinpoint the reason down to a method or a set of methods.
Until you do that, anything else you do to your code will likely be guesswork. My experience has taught me that when it comes to performance, 9 out of 10 reasons for things running slow come as surprises to the guy(s) who wrote the code.
So measure first, then change.
For instance, you might discover that you're in fact reporting progress of copying the file on a byte-per-byte basis, to a GUI, using a synchronous call to the UI, in which case it wouldn't matter how fast the actual copying can run, you'll still be bound by message handling speed.
But that's just conjecture until you know, so measure first, then change.

Concurrency issues while accessing data via reflection in C#

I'm currently writing a library that can be used to show the internal state of some running code (mainly fields and properties both public and private). Objects are accessed in a different thread to put their info into a window for the user to see. The problem is, there are times while I'm walking a long IList in which its structure may change. Some piece of code in the program being 'watched' may add a new item, or even worse, remove some. This of course causes the whole thing to crash.
I've come up with some ideas but I'm afraid they're not quite correct:
Locking the list being accessed while I'm walking it. I'm not sure this would work, since the IList being used may not have been locked for writing on the other side.
Let the code being watched be aware of my existence and provide some interfaces to allow for synchronization. (I'd really like it to be totally transparent, though.)
As a last resort, put every read access into a try/catch block and pretend as if nothing happened when it throws. (Can't think of an uglier solution that actually works).
Thanks in advance.
The only way you're going to keep things "transparent" to the code being monitored is to make the monitoring code robust in the face of state changes.
Some suggestions:
Don't walk a shared list - make a copy of the list into a local List instance as soon (and as fast) as you can. Once you have a local (non-shared) list of instances, no one can monkey with the list (see the sketch after these suggestions).
Make things as robust as you can - putting every read into a try/catch might feel nasty, but you'll probably need to do it.
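A minimal sketch of the first suggestion, assuming the watched object exposes the non-generic IList and that a change mid-copy is something the watcher simply tolerates:

using System;
using System.Collections;
using System.Collections.Generic;

static List<object> Snapshot(IList shared)
{
    var copy = new List<object>();
    try
    {
        // Copy as fast as possible; after this, nothing can change under our feet.
        foreach (var item in shared)
            copy.Add(item);
    }
    catch (InvalidOperationException)
    {
        // The list was modified while we were copying; keep the partial copy
        // (or clear it and retry, depending on how robust the watcher must be).
    }
    return copy;
}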
Option number 3 may feel ugly, but this looks to be similar to the approach the Visual Studio watch windows use, and I would choose that approach.
In Visual Studio, you can often set a watch on some list or collection and at a later point notice the watch simply displays an exception when it can't evaluate a certain value due to user or code state changes.
This is the most robust approach when dealing with such an open ended range of possibilities. The fact is, if your watching code is designed to support as many scenarios as possible you will not be able to think of all situations in advance. Handling and presenting exceptions nicely is the next best approach.
By the way, someone else mentioned that locking your data structures will work. This is not true if the "other code" is not also using locks for synchronization. In fact, both pieces of code must lock the same synchronization object, which is very unlikely if you don't control the other code. (I think you mention this in your question, so I agree.)
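To make that concrete, a tiny illustrative example; both the watched code and the watcher would have to agree on the same syncRoot for the locking to mean anything (the names are made up):

using System.Collections.Generic;

class SharedState
{
    private static readonly object syncRoot = new object();
    private static readonly List<int> sharedList = new List<int>();

    // In the code being watched:
    public static void AddItem(int item)
    {
        lock (syncRoot) { sharedList.Add(item); }
    }

    // In the watcher:
    public static List<int> Snapshot()
    {
        lock (syncRoot) { return new List<int>(sharedList); }
    }
}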
While I like Bevan's idea of copying the list for local read access, if the list is particularly large, that may not be a truly viable option.
If you really need seamless, transparent, concurrent access to these lists, you should look into the Parallel Extensions for .NET library. It is currently available for .NET 2.0 through 3.5 as a CTP. The extensions will be officially included in .NET 4.0 along with some additional collections. I think you would be interested in the BlockingCollection from the CTP, which would give you that transparent concurrent access you need. There is obviously a performance hit as with any threaded stuff that involves synchronization, however these collections are fairly well optimized.
As I understand it, you don't want ANY dependency/requirement on the code being watched, or to enforce any constraints on how that code is written.
Although this is my favourite approach to coding a "watcher", it means your application faces a very broad range of code and behaviours, which can cause it to crash.
So, as was said before me, my advice is to make the watcher "robust" as the first step. You should be prepared for anything going wrong anywhere in your code, because given the "transparency", many things can potentially go wrong! (Be careful where you put your try/catch; entering and leaving the try block many times can have a visible performance impact.)
When you're done making your code robust, the next steps would be making it more usable and dodging the situations that can cause exceptions, like the "list" thing you mentioned. For example, you can check whether the watched object is a list and is not too long, and if so first make a quick copy of it and then do the rest. This way you eliminate a large part of the risk that your code will throw.
Locking the list will work, because it is being modified, as you've observed via the crashing :)
Seems to me, though, that I'd avoid locking (because it seems that your thread is only the 'watcher' and shouldn't really interrupt).
On this basis, I would just try to handle the cases where you detect that something is missing. Is this not possible?
