I will be using MSMQ in C# to read messages, and I am putting this in a Windows Service, so in OnStart I will start reading messages using the queue.Receive method, which is a blocking/synchronous call. In the OnStop method I want to stop the queue with queue.Close(); queue.Dispose().
Are there any drawbacks to this approach?
Thanks
Ocean
This is not the correct approach. OnStart is called once when the service starts and should contain only initialization logic; it needs to return promptly. For example, start a thread there that calls Receive in a loop.
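A minimal sketch of that shape (the queue path and the processing are placeholders, and this assumes non-transactional reads):

using System;
using System.Messaging;
using System.ServiceProcess;
using System.Threading;

public class QueueReaderService : ServiceBase
{
    private MessageQueue _queue;
    private Thread _worker;
    private volatile bool _running;

    protected override void OnStart(string[] args)
    {
        _queue = new MessageQueue(@".\private$\myqueue"); // assumed queue path
        _running = true;
        _worker = new Thread(ReceiveLoop) { IsBackground = true };
        _worker.Start();
    }

    private void ReceiveLoop()
    {
        while (_running)
        {
            try
            {
                // A short timeout lets the loop notice _running changing.
                Message msg = _queue.Receive(TimeSpan.FromSeconds(5));
                // TODO: process msg here.
            }
            catch (MessageQueueException ex)
            {
                if (ex.MessageQueueErrorCode != MessageQueueErrorCode.IOTimeout)
                    throw; // or log and continue
            }
        }
    }

    protected override void OnStop()
    {
        _running = false;
        _worker.Join(TimeSpan.FromSeconds(10));
        _queue.Close();
        _queue.Dispose();
    }
}

The receive timeout is what keeps OnStop responsive without having to abort the thread.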
This is a fairly common pattern, but it has some drawbacks.
First, you should consider using a thread pool (or the .NET parallel libs in 4.0) to process your messages asynchronously. Whether or not your queue reader can be asynchronous depends a lot on your transaction pattern. Will the processing be atomic?
Second, you should also consider using a timer (System.Timers.Timer) which you start in OnStart and stop in OnStop, and which reads one or more messages from the queue on each timer event.
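Roughly, the timer variant could look like this (the queue path, polling interval and error handling are illustrative only):

using System;
using System.Messaging;
using System.Timers;

public class TimerQueueReader
{
    private readonly MessageQueue _queue = new MessageQueue(@".\private$\myqueue"); // assumed path
    private readonly Timer _timer = new Timer(1000); // poll once per second

    public void Start()
    {
        _timer.Elapsed += OnElapsed;
        _timer.Start();
    }

    public void Stop()
    {
        _timer.Stop();
        _queue.Close();
    }

    private void OnElapsed(object sender, ElapsedEventArgs e)
    {
        try
        {
            // Zero timeout: take a message if one is waiting, otherwise move on.
            Message msg = _queue.Receive(TimeSpan.Zero);
            // TODO: process msg (or loop here to drain several per tick).
        }
        catch (MessageQueueException ex)
        {
            if (ex.MessageQueueErrorCode != MessageQueueErrorCode.IOTimeout)
                throw;
        }
    }
}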
Third, you should seriously consider just using the WCF MSMQ binding, which handles a lot of the complexity of this stuff.
See: http://jamescbender.com/bendersblog/archive/2009/04/04/the-one-where-i-talk-about-the-msmq-binding.aspx
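For reference, a bare-bones one-way MSMQ-backed WCF service might be shaped like this (the contract name, queue address and security mode are assumptions, not something the linked post prescribes, and it assumes the transactional private queue already exists):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract(IsOneWay = true)] // MSMQ endpoints require one-way operations
    void SubmitOrder(string orderJson);
}

public class OrderService : IOrderService
{
    public void SubmitOrder(string orderJson)
    {
        // Process the dequeued message here.
    }
}

public static class HostProgram
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(OrderService));
        host.AddServiceEndpoint(typeof(IOrderService),
                                new NetMsmqBinding(NetMsmqSecurityMode.None),
                                "net.msmq://localhost/private/orders"); // assumed queue address
        host.Open();
        Console.WriteLine("Listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}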
Your approach looks fine to me. My only advice is to make sure that every machine on which you intend to deploy the Windows Service is either in the same Domain, or in Domains with mutual trust that reside within the same Forest. I had a problem recently with a solution I inherited that utilised MSMQ and worked much the same way as you have proposed above. It was tested as working on a single domain with no performance issues. Unfortunately, the client was in the process of a merger, and when it came time to implement the solution across the wider company it turned out that it had to be deployed on machines in different domains in different Forests, in which case MSMQ won't work at all and an entirely different approach had to be used.
Related
I have a C# .NET (4.7.2) REST API web app which needs to communicate (HTTP) periodically with a group of up to 100 devices.
Currently we basically have an event handler that initially starts a single Task.Run (containing the communication work*) per device. At the end of each such Task.Run an event is triggered so that the event handler fires again. So with 100 devices we have approximately 100 short-lived "background worker threads" running, which all die and trigger a Task.Run again within a period of ~3 seconds.
As it turns out this seems to be very expensive - in fact I suspect this architecture is causing severe problems such as occasional 'freezes'.
I understand that this is not best practice and that calling Task.Run is not free, but periodically spinning up as many as 100 threads should not be that big of an issue - at least that's what I thought.
I don't care if the Tasks being enqueued on the thread pool are worked off with a little delay because of Task management.
So I am wondering which architecture would be appropriate for a dynamically growing/shrinking background workload that consists mainly of "asyncable" code.
Best practices aside - is there really a big pitfall here with this Task.Run / event handler approach?
*The main work consists of establishing an HTTP connection and waiting for its result; finally some database reads/writes have to be done. So it could be done using async code.
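Roughly what I have in mind (DevicePoller, the endpoints and the 3-second interval are just placeholders):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class DevicePoller
{
    private readonly HttpClient _http = new HttpClient();

    // One long-lived async loop per device instead of re-queuing Task.Run per cycle.
    public Task RunAsync(IEnumerable<Uri> deviceEndpoints, CancellationToken ct)
    {
        var loops = new List<Task>();
        foreach (var endpoint in deviceEndpoints)
            loops.Add(PollLoopAsync(endpoint, ct));
        return Task.WhenAll(loops);
    }

    private async Task PollLoopAsync(Uri endpoint, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            try
            {
                // No thread is blocked while the request is in flight.
                string payload = await _http.GetStringAsync(endpoint).ConfigureAwait(false);
                await SaveToDatabaseAsync(payload).ConfigureAwait(false);
            }
            catch (HttpRequestException)
            {
                // Device unreachable; try again on the next cycle.
            }
            await Task.Delay(TimeSpan.FromSeconds(3), ct).ConfigureAwait(false);
        }
    }

    private Task SaveToDatabaseAsync(string payload)
    {
        // Placeholder for the real database read/write.
        return Task.FromResult(0);
    }
}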
I recommend you use Hangfire to do this continuously. You can access your devices via their APIs and have Hangfire manage the connections to all of them from your main application.
It can show you reports and the state of activities and threads, and you can program it. I found it to be more reliable and stable than running a thread myself!
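As a rough illustration only (the job id, cron schedule and PollAllDevicesAsync are placeholders, and this assumes a Hangfire server is already configured elsewhere):

using System.Threading.Tasks;
using Hangfire;

public static class DevicePollingJobs
{
    public static void Register()
    {
        // Hangfire persists the schedule and retries failed runs on its worker pool.
        RecurringJob.AddOrUpdate(
            "poll-devices",                 // job id (placeholder)
            () => PollAllDevicesAsync(),
            Cron.Minutely());
    }

    public static async Task PollAllDevicesAsync()
    {
        // Placeholder for the per-device HTTP and database work.
        await Task.Delay(100);
    }
}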
On the other hand, you can use the Observer design pattern in your sub-applications: when the timer or event fires, all subscribers in your code are notified and can respond.
You can read more here:
Observer Design Pattern
The Durable Task Framework is a perfect fit for the problem you describe.
Out of the box, you get:
the possibility to scale (the DTF architecture uses Service Bus, and each instance can process work)
control over the execution
the ability to configure the level of parallelism
Also, for long-running processing you can use Azure Durable Functions.
I'm looking for less technical and more conceptual answers on this one.
I am looking to build a WPF application using .NET 4.5 for controlling a rover (a glorified RC car). Here is the intended functionality:
The application and rover will communicate wirelessly by sending and receiving strings - JSON over TCP Socket.
The GUI will display multiple video feeds via RTSP.
A control panel - custom hardware - will connect to the computer via USB; its signals will be converted to JSON before being sent over the TCP connection to provide movement instructions.
The GUI will need to update to reflect the state of the control panel as well as the state of the rover based on data received.
I'm not sure which technologies to use to implement this, but from my research, BackgroundWorker, threads, and asynchronous techniques would be things to look into. Which of these seems like a good route to take? Also, should I use TCP sockets directly in the application, or should/could I use WCF to provide this data?
Any wisdom on this would be great. Thanks in advance.
EDIT:
Here is the final implementation used, and boy did it work out great:
Everything fell into place around using the MVVM pattern.
There were Views for the control panel and the networking component, each with a corresponding ViewModel that handled the background operations.
Updating the UI was done via databinding, not the Dispatcher.
Wireless communication was done asynchronously (async/await) via TcpListener along with Tasks.
Serial port communication was done asynchronously via SerialPort and Tasks.
Used ModernUI for the interface.
Used JSON.NET for the JSON parsing.
Here is a link to the project. It was done over the course of a month so it isn't the prettiest code. I have refined my practices a lot this summer so I'm excited to work on a refactored version that should be completed next year.
As you are using .NET 4.5, you don't need raw threads and BackgroundWorkers for your project, and you don't need to manage all of your threads yourself. WPF's Dispatcher is a very powerful tool for updating the UI from other threads.
For TCP communication I would suggest you use TcpClient and TcpListener with async callbacks (a rough sketch follows this answer), and use the Dispatcher for updating your UI.
For displaying cameras over RTSP, use VLC.Net, an open-source wrapper for the VLC library that is good at handling many real-time video protocols.
Use Tasks instead of threads, and set their priority according to your requirements.
You don't need WCF for your application.
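Here is a rough async/await sketch of the listener side on .NET 4.5 (the port, line-based framing and HandleMessage are assumptions):

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public class RoverListener
{
    private readonly TcpListener _listener = new TcpListener(IPAddress.Any, 9000); // port is an assumption

    public async Task RunAsync()
    {
        _listener.Start();
        while (true)
        {
            TcpClient client = await _listener.AcceptTcpClientAsync();
            // Handle each connection without blocking the accept loop.
            Task ignored = HandleClientAsync(client);
        }
    }

    private async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        using (var reader = new StreamReader(client.GetStream()))
        {
            string line;
            while ((line = await reader.ReadLineAsync()) != null)
            {
                HandleMessage(line); // e.g. deserialize the JSON and update a ViewModel
            }
        }
    }

    private void HandleMessage(string json)
    {
        // Placeholder: parse with JSON.NET and raise property changes for data binding.
    }
}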
As far as I can tell (I'm no expert), MS's philosophy these days is to use asynchronous I/O, thread pool tasks for lengthy compute operations, and have a single main thread of execution for the main part of the application. That main thread drives the GUI and commissions the async I/O and thread pool tasks as and when required.
So for your application that would mean receiving messages asynchronously, initiating a task on the thread pool to process each message, and finally displaying the results on the GUI when the task completes. It will end up looking like a single-threaded event loop application. The async I/O and thread pool tasks do in fact use threads; it's just that they're hidden from you in as convenient a way as possible.
I've tried (once) bucking this philosophy with my own separate thread handling all my I/O and an internal pipe connection to my main thread to tell it what's happening. I made it work, but it was really, really hard work. For example, I found it impossible to cancel a blocking network or pipe I/O operation in advance of its timeout (any thoughts from anyone out there more familiar with Win32 and .NET?). I was only trying to do that because there's no true equivalent to select() in Windows; the one that is there doesn't work with anything other than sockets... In case anyone is wondering 'why oh why oh why?', I was re-implementing an application originally written for Unix and naively didn't want to change the architecture.
Next time (if there is one) I'll stick to MS's approach.
Hopefully two simple questions relating to creating a server application:
Is there a theoretical/practical limit on the number of simultaneous sockets that can be open? (Ignoring the resources required to process the data once it has arrived.) If it's of relevance, I am targeting the .NET Framework.
Should each connection be run in a separate thread that's permanently assigned to it, or should a thread pool be used? The dedicated-thread approach seems simpler, but it seems odd to have 100+ threads running at once. Is this acceptable practice?
Any advice is greatly appreciated
Venatu
You may find the following answer useful. It illustrates how to write a scalable TCP server using the .NET thread pool and asynchronous socket methods (BeginAccept/EndAccept and BeginReceive/EndReceive).
That being said, it is rarely a good idea to write your own server when you could use one of the numerous WCF bindings (or even write custom ones) and benefit from the full power of the WCF infrastructure. It will probably scale better than most custom-written servers.
There are practical limits, yes. However, you will most likely run out of resources to handle the load long before you reach them; CPU or memory is more likely to be exhausted before the number of connections becomes the problem.
For maximum scalability, you don't want a separate thread per connection; rather, you would use an asynchronous model that only uses threads when servicing active connections (as in, ones currently receiving or sending data).
If I remember correctly (I did sockets a long time ago), the best way of implementing them is with the ReceiveAsync (.NET 3.5) / BeginReceive methods, using asynchronous callbacks that run on the thread pool. Don't open a thread for every connection; it is a waste of resources.
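A bare-bones sketch of that BeginAccept/BeginReceive shape (the port, backlog and buffer size are arbitrary, and error handling is omitted):

using System;
using System.Net;
using System.Net.Sockets;

public class AsyncSocketServer
{
    private class ClientState
    {
        public Socket Socket;
        public byte[] Buffer = new byte[4096]; // arbitrary buffer size
    }

    private readonly Socket _listener =
        new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    public void Start(int port)
    {
        _listener.Bind(new IPEndPoint(IPAddress.Any, port));
        _listener.Listen(100); // backlog
        _listener.BeginAccept(OnAccept, null);
    }

    private void OnAccept(IAsyncResult ar)
    {
        Socket client = _listener.EndAccept(ar);
        _listener.BeginAccept(OnAccept, null); // keep accepting new connections

        var state = new ClientState { Socket = client };
        client.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                            SocketFlags.None, OnReceive, state);
    }

    private void OnReceive(IAsyncResult ar)
    {
        var state = (ClientState)ar.AsyncState;
        int read = state.Socket.EndReceive(ar);
        if (read == 0)
        {
            state.Socket.Close(); // client disconnected
            return;
        }

        // TODO: process the first 'read' bytes of state.Buffer here.

        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                  SocketFlags.None, OnReceive, state);
    }
}

The callbacks run on I/O completion (thread pool) threads, so no thread is tied up per idle connection.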
I have a basic queue of tasks executing (a C# WinForms app that talks to 3 separate systems). Everything is great until one of the web services decides not to respond with its usual speed.
I'm not interested in speeding things up via multi-threading the jobs, but now that I am using it in production, I can see the benefit of having at least two threads running jobs - if one blocks and it's an anomaly, the other will keep truckin', and if both block, then probably any number of threads would, and I'll just deal with it.
So, the question is: is this a common pattern I just described, and is there a name for that pattern and/or some awesome reference or framework, or anything to help me not re-invent any wheels.
Additions based on Comments/Answers
The tasks can be run simultaneously. I have chosen not to pursue a multi-threaded design for the purposes of speed, but I am now considering it to attain consistent performance in the face of infrequent task latency.
My assumption is that every once in a while a call to a web service takes disproportionately longer to complete while still being considered non-exceptional. This has a non-negligible impact on the total run time of, say, N jobs if the average execution time is 1 second (including a host of disparate web service calls) and 0.0001% of the time a web service takes 15 seconds to respond.
Is a thread pool just another way of saying, "spin up worker threads and manage their state manually"? Or is there something that can help me manage the complexity? I worry that the chance of introducing bugs grows out of proportion to the benefits in this case...
I think I am looking for something similar to a thread pool, but one that only spins up additional threads when latency is detected.
If anyone can give me more info on what one of the comments refers to as a work-stealing thread, that sounds promising.
The reason I didn't use the BackgroundWorker component is that it seems to be built for the case when you know how many workers you want, and I'd ideally like to keep the design flexible.
PS: Thanks again.
Thanks!
It depends on how important is the order of the queue items and how important it is that an item is completed before the next one is processed.
If one item must be completely processed before the next one is, then basically you are stuck.
If not, you may decide to implement a simple pool of worker threads. If .NET 4.0 is an option, I would recommend using the Parallel Extensions for that, especially the AsParallel() and AsOrdered() methods.
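For example, a sketch along those lines (ProcessJob and the choice of two workers are placeholders):

using System;
using System.Collections.Generic;
using System.Linq;

public static class JobRunner
{
    // Stand-in for the real work (the web service calls etc.).
    private static string ProcessJob(string job)
    {
        return job.ToUpperInvariant();
    }

    public static List<string> RunAll(IEnumerable<string> jobs)
    {
        // Parallel execution, with results kept in submission order.
        return jobs
            .AsParallel()
            .AsOrdered()
            .WithDegreeOfParallelism(2) // e.g. two workers, as discussed in the question
            .Select(ProcessJob)
            .ToList();
    }
}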
It sounds like what you might be looking for here is the BackgroundWorker: a class that neatly encapsulates launching a worker thread, monitoring progress and receiving results. http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx. It's quick and easy to use: just wire up the DoWork, ProgressChanged and RunWorkerCompleted events and then start it.
The producer-consumer pattern may be the best fit for this, and using a queue (either Queue<T> wrapped in a lock or the new ConcurrentQueue<T>) is a good approach. It also gives you a place to recycle web service requests that fail due to timeouts or dropped connections.
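A minimal producer-consumer sketch along those lines (the worker count and CallWebService are placeholders; BlockingCollection is backed by a ConcurrentQueue<T> by default):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class JobQueue
{
    private readonly BlockingCollection<string> _jobs = new BlockingCollection<string>();

    public void Start(int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            Task.Factory.StartNew(() =>
            {
                // GetConsumingEnumerable blocks until a job is available and
                // completes once CompleteAdding() has been called.
                foreach (string job in _jobs.GetConsumingEnumerable())
                {
                    try
                    {
                        CallWebService(job);
                    }
                    catch (Exception)
                    {
                        // Log, and optionally re-queue the failed job here.
                    }
                }
            }, TaskCreationOptions.LongRunning);
        }
    }

    public void Enqueue(string job) { _jobs.Add(job); }

    public void Shutdown() { _jobs.CompleteAdding(); }

    private void CallWebService(string job)
    {
        // Placeholder for the real web service request.
    }
}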
If you want to use more than the maximum default of two simultaneous web connections then add this to your app.config (replace "10" with your new maximum):
<configuration>
  <system.net>
    <connectionManagement>
      <add address="*" maxconnection="10"/>
    </connectionManagement>
  </system.net>
</configuration>
If you're using .NET 4 you can also have a look at Tasks.
I’m looking for the best way of using threads considering scalability and performance.
In my site I have two scenarios that need threading:
UI trigger: for example the user clicks a button, the server should read data from the DB and send some emails. Those actions take time and I don’t want the user request getting delayed. This scenario happens very frequently.
Background service: when the app starts it triggers a thread that runs every 10 minutes, reads from the DB and sends emails.
The solutions I found:
A. Use thread pool - BeginInvoke:
This is what I use today for both scenarios.
It works fine, but it uses the same threads that serve the pages, so I think I may run into scalability issues. Can this become a problem?
B. No use of the pool – ThreadStart:
I know starting a new thread takes more resources than using a thread pool.
Can this approach work better for my scenarios?
What is the best way to reuse the opened threads?
C. Custom thread pool:
Because my scenarios occur frequently, maybe the best way is to create a custom thread pool?
Thanks.
I would personally put this into a different service. Make your UI action write to the database, and have a separate service which either polls the database or reacts to a trigger, and sends the emails at that point.
By separating it into a different service, you don't need to worry about AppDomain recycling etc. - and you can put it on an entirely different server if and when you want to. I think it'll give you a more flexible solution.
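A skeleton of such a separate service might look like this (the connection string, query, SMTP host and addresses are all placeholders):

using System;
using System.Data.SqlClient;
using System.Net.Mail;
using System.ServiceProcess;
using System.Timers;

public class EmailSenderService : ServiceBase
{
    private readonly Timer _timer = new Timer(TimeSpan.FromMinutes(10).TotalMilliseconds);

    protected override void OnStart(string[] args)
    {
        _timer.Elapsed += (s, e) => SendPendingEmails();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
    }

    private void SendPendingEmails()
    {
        // Placeholder connection string and schema.
        using (var conn = new SqlConnection("Data Source=.;Initial Catalog=App;Integrated Security=True"))
        using (var cmd = new SqlCommand("SELECT Address, Body FROM PendingEmails", conn))
        {
            conn.Open();
            var smtp = new SmtpClient("localhost"); // placeholder SMTP host
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    smtp.Send("noreply@example.com", reader.GetString(0), "Notification", reader.GetString(1));
                }
            }
            // TODO: mark the rows as sent (or delete them) so they aren't re-sent next time.
        }
    }
}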
I do this kind of thing by calling a web service, which then calls a method asynchronously using a delegate. The original web service call returns a Guid to allow tracking of the processing.
For the first scenario, use ASP.NET asynchronous pages. Async pages are a very good choice when it comes to scalability, because during async execution the HTTP request thread is released and can be reused.
I agree with Jon Skeet that for the second scenario you should use a separate service - a Windows service is a good choice here.
Out of your three solutions, don't use BeginInvoke. As you said, it will have a negative impact on scalability.
Between the other two, if the tasks are truly background and the user isn't waiting for a response, then a single, permanent thread should do the job. A thread pool makes more sense when you have multiple tasks that should be executing in parallel.
However, keep in mind that web servers sometimes crash, AppPools recycle, etc. So if any of the queued work needs to be reliably executed, then moving it out of process is probably a better idea (such as into a Windows Service). One way of doing that, which preserves the order of requests and maintains persistence, is to use Service Broker. You write the request to a Service Broker queue from your web tier (with an async request), and then read those messages from a service running on the same machine or a different one. You can also scale nicely that way by simply adding more instances of the service (or more threads in it).
In case it helps, I walk through using both a background thread and Service Broker in detail in my book, including code examples: Ultra-Fast ASP.NET.