I might be completely out in the woods on this one, but since TAP (Task-based Asynchronous Pattern; see this thread to learn more) separates background operation logic from your UI/ViewModel update code, it seems like it would be good to use it when communicating with a serial port.
However, the SerialPort class uses EAP (http://msdn.microsoft.com/en-us/library/jj152938%28v=vs.110%29.aspx), or at least I believe so.
So my question is essentially: can TAP be used with a serial port, and should it be?
//Rikard
PS. If someone could create TAP/EAP/APM tags (or simply an Asynchronous-Patterns tag) so this question can be properly tagged, I would be most grateful (I lack the rep for it).
As we know, TAP is good for heavy computational operations in memory and bad for IO-bound operations, and communicating via serial ports is an IO-bound operation.
Hence, I don't think it's a good idea to change from EAP to TAP. Here's why:
For instance, your code starts a task which sends some data to a COM port. At the other end, a device picks up that data to process it, and the processing may take several seconds. During that time, would you hold up the task, and hence the thread, for several seconds before releasing it back to the processor? That would be really inefficient, IMHO.
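For reference, a minimal sketch of the event-driven approach you would be keeping (the port name and baud rate are placeholders):

// Sketch only: handling incoming data with the SerialPort.DataReceived event.
// "COM3" and 9600 are placeholder settings.
using System;
using System.IO.Ports;

class SerialEventSketch
{
    static void Main()
    {
        var port = new SerialPort("COM3", 9600);
        port.DataReceived += (sender, e) =>
        {
            // Fires on a worker thread whenever data arrives; nothing blocks meanwhile.
            string data = port.ReadExisting();
            Console.WriteLine("Received: " + data);
        };
        port.Open();
        Console.ReadLine();   // keep the process alive while data comes in
    }
}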
PS: Could you reveal what you are trying to achieve?
Related
I'm looking for less technical and more conceptual answers on this one.
I am looking to build a WPF application using .NET 4.5 for controlling a rover (a glorified RC car). Here is the intended functionality:
The application and rover will communicate wirelessly by sending and receiving strings - JSON over TCP Socket.
The GUI will display multiple video feeds via RTSP.
A control panel - custom hardware - will connect to the computer via USB; its signals will be converted to JSON before being sent over the TCP connection to provide movement instructions.
The GUI will need to update to reflect the state of the control panel as well as the state of the rover based on data received.
I'm not sure which technologies to use to implement this, but from my research, BackgroundWorkers or Threads, and Asynchronous techniques would be things to look into. Which of these seems like a good route to take? Also, should I use TCP Sockets directly in the application or should/could I use WCF to provide this data?
Any wisdom on this would be great. Thanks in advance.
EDIT:
Here is the final implementation used, and boy did it work out great:
Everything fell into place around using the MVVM pattern.
There were Views for the control panel and the networking component which each had a corresponding ViewModel that handled the background operations.
Updating the UI was done via databinding, not the Dispatcher.
Wireless communication was done asynchronously (async/await) via TcpListener along with Tasks (a rough sketch follows this list).
Serial port communication was done asynchronously via SerialPort and Tasks.
Used ModernUI for interface.
Used JSON.NET for the JSON parsing.
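As a rough illustration of the networking item above: the port number, the Command type, and newline-delimited framing are assumptions for illustration only, and JsonConvert comes from JSON.NET.

// Sketch: accepting one client with TcpListener and reading newline-delimited JSON.
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;
using Newtonsoft.Json;

class Command
{
    public string Action { get; set; }
    public int Speed { get; set; }
}

class RoverListenerSketch
{
    public static async Task ListenAsync()
    {
        var listener = new TcpListener(IPAddress.Any, 9000);   // port is a placeholder
        listener.Start();
        using (TcpClient client = await listener.AcceptTcpClientAsync())
        using (var reader = new StreamReader(client.GetStream()))
        {
            string line;
            // Each line is assumed to carry one JSON-encoded command.
            while ((line = await reader.ReadLineAsync()) != null)
            {
                Command cmd = JsonConvert.DeserializeObject<Command>(line);
                Console.WriteLine(cmd.Action + " at speed " + cmd.Speed);
            }
        }
    }
}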
Here is a link to the project. It was done over the course of a month so it isn't the prettiest code. I have refined my practices a lot this summer so I'm excited to work on a refactored version that should be completed next year.
As you are using .NET 4.5, you don't need to use Threads and BackgroundWorkers for your project, and you don't need to take care of all of your threads yourself. WPF's Dispatcher is a very powerful tool for handling the UI from other threads.
For TCP communication I would suggest you use TcpClient and TcpListener with async callbacks, and use the Dispatcher for updating your UI.
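For example, inside the window's code-behind a callback can hop onto the UI thread roughly like this (a sketch; statusText is a hypothetical TextBlock):

// Sketch: marshaling a result from a network callback onto the WPF UI thread.
// statusText is a hypothetical TextBlock declared in the window's XAML.
private void OnDataReceived(string message)
{
    // We are on a worker thread here, so hop over to the UI thread first.
    Application.Current.Dispatcher.BeginInvoke(new Action(() =>
    {
        statusText.Text = message;
    }));
}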
For displaying cameras over RTSP, use VLC.Net, an open-source wrapper for the VLC library that is good for handling many real-time video protocols.
Use Tasks instead of Threads, and set their priority according to your requirements.
You don't need WCF for your application.
As far as I can tell (I'm no expert), MS's philosophy these days is to use asynchronous I/O, thread pool tasks for lengthy compute operations, and have a single main thread of execution for the main part of the application. That main thread drives the GUI and commissions the async I/O and thread pool tasks as and when required.
So for your application that would mean receiving messages asynchronously, initiating a task on the thread pool to process each message, and finally displaying the results on the GUI when the task completes. It will end up looking like a single-threaded event-loop application. The async I/O and thread pool tasks do in fact use threads; it's just that they're hidden from you in as convenient a way as possible.
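A minimal sketch of that shape (ReceiveMessageAsync, Parse, Result, and statusText are hypothetical names, just to show where each piece runs):

// Sketch of the pattern: await the I/O, push heavy work to the pool, finish on the UI thread.
private async Task HandleNextMessageAsync()
{
    string raw = await ReceiveMessageAsync();          // async I/O: no thread is blocked waiting
    Result result = await Task.Run(() => Parse(raw));  // lengthy compute runs on a thread pool task
    statusText.Text = result.Summary;                  // after the awaits we're back on the UI thread
}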
I've tried (once) bucking this philosophy with my own separate thread handling all my I/O and an internal pipe connection to my main thread to tell it what's happening. I made it work, but it was really, really hard work. For example, I found it impossible to cancel a blocking network or pipe I/O operation in advance of its timeout (any thoughts from anyone out there more familiar with Win32 and .NET?). I was only trying to do that because there's no true equivalent to select() in Windows; the one that is there doesn't work with anything other than sockets... In case anyone is wondering 'why oh why oh why?', I was re-implementing an application originally written for Unix and naively didn't want to change the architecture.
Next time (if there is one) I'll stick to MS's approach.
Say I have one thread for my MMORPG server running an async socket and async packet handler, and in that one thread I have a static World that contains all entities in the game.
Would there be any threading issues if, say, the async packet handler receives an Attack message, resulting in a search of the entities in the world to figure out the target,
while at the same time the static World's Proc method is growing the Dictionary containing the monster entities, adding extra monsters that have spawned?
If this is all on the same thread, will the server explode?
will the server explode?
Yes, you can run into problems ("explode") because the async stuff is running on a different thread (even though you didn't create that thread explicitly) and it might access a shared object (the world) at the same time as your main thread. Many data structures (including the Dictionary) are not designed for this scenario and might crash or return the wrong answer.
The typical approach is to use locks to protect your shared objects: take the lock before modifying it, do whatever modification, and then release the lock. This way, only one thread at a time accesses the world (and its dictionary) and so everything remains consistent. Explosion averted.
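A minimal sketch of that, assuming the world keeps its monsters in a Dictionary (Monster is a hypothetical entity type):

// Sketch: one lock object guards every access to the shared world state.
// (Dictionary lives in System.Collections.Generic; Monster is hypothetical.)
private static readonly object worldLock = new object();
private static readonly Dictionary<int, Monster> monsters = new Dictionary<int, Monster>();

// Called from the async packet handler:
Monster FindTarget(int id)
{
    lock (worldLock)
    {
        Monster target;
        monsters.TryGetValue(id, out target);
        return target;
    }
}

// Called from the world's Proc method while spawning:
void Spawn(Monster m)
{
    lock (worldLock)
    {
        monsters.Add(m.Id, m);
    }
}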
Another way would be to switch to a more synchronous form of networking, perhaps for example avoiding completion handlers and instead waiting to hear from each of the players, and then acting on the inputs. This can be done very simply, but the simple way has drawbacks: any one slow player can slow the whole thing down. So sadly you're probably going to have to deal with some complexity, one way or another.
If I give the answer in one line: the server will explode, because network activity and game logic are in the same thread, and as you have mentioned, you will be needing high network usage.
I would seriously like it if you had a look at F#. As far as I understood from the question, it has all the things that you need; a few things like collection changes and async come by default in the language. Even Node.js is also worth trying. But again, it all depends on the requirements. Still, I'll try to explain a few keywords that may help you make a decision.
Non-blocking: It means the thread will not be blocked on events; it will not sit waiting for one function to finish before another executes. But that doesn't mean you can't block it. In any case, it is a single thread.
Async: It is somewhat like that, but in C# 5 async comes as a keyword, so you don't have to do the threading part of the programming yourself.
Parallel processing: In game development, parallel processing is important. You can do that with multiple threads, or just use the TPL.
In the case of a UI-based game (where there are many objects), I highly recommend that you separate the processing thread and the UI thread to improve the user experience; otherwise the FPS will go down while you are processing data.
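For example, a sketch of keeping the processing off the UI thread with the TPL (GameState, UpdateWorld, and RenderFrame are hypothetical names):

// Sketch: the heavy game-state update runs on the thread pool via Task.Run,
// so the UI thread is free to keep the frame rate up.
private async Task TickAsync(GameState state)
{
    await Task.Run(() => UpdateWorld(state));   // CPU-bound processing off the UI thread
    RenderFrame(state);                         // back on the UI thread for drawing
}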
Let me know if any further information is needed.
The server will not go down if you take a little care with it.
If on the same thread, then no. If you are doing all the work mentioned on a single thread, then there's no issue. However, as stated, if you are accessing a "shared" object instance across threads, then yes, there will be an issue, and locking will be required (using a "lock(){...}" block).
As your user base increases, you will have to keep an eye on the number of threads generated, or event messages if using a non-blocking event model for incoming requests.
On a different, yet related note, keep an eye on this C# based MMO server (with scripting support): https://dreamspace.codeplex.com/ - it may become a big help to MMO game creators very soon (will support Construct 2 by default).
I'm in need of some advice in proper coding:
I'm working on a program where multiple serial connections are used. Each communication line has a controller working as an abstraction layer. Between the controller and the serial port, a protocol is inserted to wrap the data in packages, ready for transfer. The protocol takes care of failed deliveries, resending etc.
To ensure that the GUI won't hang, each connection line (protocol and serial port) is created on a separate thread. The controller is handled by the main thread, since it has controls in the GUI.
Currently, when I create the threads, I have chosen to create a message loop on them (Application.Run()), so instead of polling buffers and yielding when there is no work, I simply invoke the thread (BeginInvoke) and use the message loop as a buffer. This currently works nicely, with no serious problems so far.
My question now is: Is this "good coding", or should I use a while loop on the thread and poll buffers instead, or some third thing?
I would like to show code, but so far it is several thousand lines of code, so please be specific if you need to see any part of the code. :)
Thank you.
Using message loops in each thread is perfectly fine; Windows is optimized for this scenario. You are right to avoid polling, but you may want to look into other event-based designs that are more efficient still, for example preparing a package for transfer and calling SetEvent to notify a thread that it's ready, or a semaphore and thread-safe queue as Martin James suggests.
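For illustration, a sketch of the thread-safe-queue alternative using BlockingCollection (Package and WriteToSerialPort are hypothetical stand-ins for your protocol layer):

// Sketch: a thread-safe queue instead of a per-thread message loop.
// (BlockingCollection lives in System.Collections.Concurrent.)
private readonly BlockingCollection<Package> outgoing = new BlockingCollection<Package>();

// Worker thread: sleeps efficiently until something is queued.
void SendLoop()
{
    foreach (Package p in outgoing.GetConsumingEnumerable())
    {
        WriteToSerialPort(p);   // hypothetical send routine on the connection's thread
    }
}

// Any thread (e.g. the controller on the UI thread) can hand off work:
void QueuePackage(Package p)
{
    outgoing.Add(p);
}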
I'm not 100% sure what you are doing here, but with a bit of 'filling in' it doesn't sound bad. :)
When your app is idle (no comms), is CPU use 0%?
Is your app free of sleep(0)/sleep(1), or similar, polling loops?
Does it operate with a reasonably low latency?
If the answers are three 'YES', you should be fine :)
There are a few, (very few!), cases where polling for results etc. is a good idea, (eg. when the frequency of events in the threads is so high that signaling every progress event to the GUI would overwhelm it), but mostly, it's just poor design.
Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs which saves on the allocation).
I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them.
But in my app, I'm the client and I just need to listen to one server sending market tick data over one tcp connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives.
If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues:
The threadpool threads will have default priority so it seems they will be strictly worse than my own thread which has Highest priority.
I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads because there's no guarantee that they represent N unique market data messages because TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me.
Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?
UPDATE:
So I think that I was misunderstanding the async pattern in (2) above. I would get a callback on one worker thread when there was data available. Then I would begin another async receive and get another callback, etc. I wouldn't get N callbacks at the same time.
The question still is the same though. Is there any reason that the callbacks would be better in my specific situation where I'm the client and only connected to one server.
The slowest part of your application will be the network communication. It's highly likely that you will make almost no difference to performance for a one thread, one connection client by tweaking things like this. The network communication itself will dwarf all other contributions to processing or context switching time.
Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data.
Why is that going to happen? If you have one socket, you Begin an operation on it to receive data, and you get exactly one callback when it's done. You then decide whether to do another operation. It sounds like you're overcomplicating it, though maybe I'm oversimplifying it with regard to what you're trying to do.
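In other words, the usual shape is one outstanding receive at a time; a rough sketch (HandleBytes is a hypothetical stand-in for your stream parsing, and error handling is omitted):

// Sketch: one BeginReceive outstanding at a time; the callback consumes the
// bytes and then posts the next receive, so callbacks never pile up N at once.
private void StartReceive(Socket socket, byte[] buffer)
{
    socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive,
                        Tuple.Create(socket, buffer));
}

private void OnReceive(IAsyncResult ar)
{
    var state = (Tuple<Socket, byte[]>)ar.AsyncState;
    int bytesRead = state.Item1.EndReceive(ar);
    if (bytesRead > 0)
    {
        HandleBytes(state.Item2, bytesRead);     // feed the bytes to your stream parser
        StartReceive(state.Item1, state.Item2);  // post the next receive
    }
}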
In summary, I'd say: pick the simplest programming model that gets you what you want; considering choices available in your scenario, they would be unlikely to make any noticeable difference to performance whichever one you go with. With the blocking model, you're "wasting" a thread that could be doing some real work, but hey... maybe you don't have any real work for it to do.
The number one rule of performance is only try to improve it when you have to.
I see you mention standards but never mention problems, if you are not having any, then you don't need to worry what the standards say.
"This class was specifically designed for network server applications that require high performance."
As I understand, you are a client here, having only a single connection.
Data on this connection arrives in order, consumed by a single thread.
You will probably lose performance if you instead receive small amounts on separate threads, just so that you can assemble them later in a serialized - and thus effectively single-threaded - manner.
Much Ado about Nothing.
You do not really need to speed this up, you probably cannot.
What you can do, however, is dispatch work units to other threads after you receive them.
You do not need SocketAsyncEventArgs for this. This might speed things up.
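For example, a sketch (ReadNextMessage and ProcessTick are hypothetical placeholders for your own receive and handling code):

// Sketch: the receive loop stays on its single thread; each decoded message is
// handed to the thread pool so receiving the next one isn't delayed.
byte[] message = ReadNextMessage();                       // your existing blocking receive
ThreadPool.QueueUserWorkItem(_ => ProcessTick(message));  // the work unit runs on a pool thread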
As always, measure & measure.
Also, just because you can, it does not mean you should.
If the performance is enough for the foreseeable future, why complicate matters?
What exactly do I need delegates, and threads for?
Delegates act as the logical (but safe) equivalent to function-pointers; they allow you to talk about an operation in an abstract way. The typical example of this is events, but I'm going to use a more "functional programming" example: searching in a list:
List<Person> people = ...
Person fred = people.Find( x => x.Name == "Fred");
Console.WriteLine(fred.Id);
The "lambda" here is essentially an instance of a delegate - a delegate of type Predicate<Person> - i.e. "given a person, is something true or false". Using delegates allows very flexible code - i.e. the List<T>.Find method can find all sorts of things based on the delegate that the caller passes in.
In this way, they act largely like a 1-method interface - but much more succinctly.
Delegates: Basically, a delegate is a way to reference a method. It's like a pointer to a method; you can set it to different methods that match its signature and use it to pass a reference to that method around.
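A small sketch of that idea (illustrative names only):

// A delegate type describes a method signature...
delegate int Transform(int value);

class DelegateSketch
{
    static int Double(int value) { return value * 2; }
    static int Square(int value) { return value * value; }

    static void Main()
    {
        // ...and an instance can point at any method that matches it.
        Transform t = Double;
        System.Console.WriteLine(t(5));   // 10
        t = Square;                       // re-point it at a different method
        System.Console.WriteLine(t(5));   // 25
    }
}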
A thread is a sequential stream of instructions that execute one after another to complete a computation. You can have different threads running simultaneously to accomplish a specific task. A thread runs on a single logical processor at a time.
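A small sketch of two threads running at once (purely illustrative):

// Sketch: a second thread counts in the background while the main thread carries on.
using System;
using System.Threading;

class ThreadSketch
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            for (int i = 0; i < 3; i++)
            {
                Console.WriteLine("worker: " + i);
                Thread.Sleep(100);
            }
        });
        worker.Start();                                 // runs concurrently with the code below
        Console.WriteLine("main thread keeps going");
        worker.Join();                                  // wait for the worker to finish
    }
}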
Delegates are used to add methods to events dynamically.
Threads run inside of processes, and allow you to run 2 or more tasks at once that share resources.
I'd suggest doing a search on these terms; there is plenty of information out there. They are pretty fundamental concepts, and wiki is a high-level place to start:
http://en.wikipedia.org/wiki/Thread_(computer_science)
http://en.wikipedia.org/wiki/C_Sharp_(programming_language)
Concrete examples always help me so here is one for threads. Consider your web server. As requests arrive at the server, they are sent to the Web Server process for handling. It could handle each as it arrives, fully processing the request and producing the page before turning to the next one. But consider just how much of the processing takes place at hard drive speeds (rather than CPU speeds) as the requested page is pulled from the disk (or data is pulled from the database) before the response can be fulfilled.
By pulling threads from a thread pool and giving each request its own thread, we can take care of the non-disk needs for hundreds of requests before the disk has returned data for the first one. This will permit a degree of virtual parallelism that can significantly enhance performance. Keep in mind that there is a lot more to Web Server performance but this should give you a concrete model for how threading can be useful.
They are useful for the same reason high-level languages are useful. You don't need them for anything, since really they are just abstractions over what is really happening. They do make things significantly easier and faster to program or understand.
Marc Gravell provided a nice answer for 'what is a delegate.'
Andrew Troelsen defines a thread as
...a path of execution within a process. "Pro C# 2008 and the .NET 3.5 Platform," APress.
All processes that are run on your system have at least one thread. Let's call it the main thread. You can create additional threads for any variety of reasons, but the clearest example for illustrating the purpose of threads is printing.
Let's say you open your favorite word processing application (WPA), type a few lines, and then want to print those lines. If your WPA uses the main thread to print the document, the WPA's user interface will be 'frozen' until the printing is finished. This is because the main thread has to print the lines before it can process any user interface events, i.e., button clicks, mouse movements, etc. It's as if the code were written like this:
do
{
ProcessUserInterfaceEvents();
PrintDocument();
} while (true);
Clearly, this is not what users want. Users want the user interface to be responsive while the document is being printed.
The answer, of course, is to print the lines in a second thread. In this way, the user interface can focus on processing user interface events while the secondary thread focuses on printing the lines.
The illusion is that both tasks happen simultaneously. On a single-processor machine, this cannot be true since the processor can only execute one thread at a time. However, switching between the threads happens so fast that the illusion is usually maintained. On a multi-processor (or multi-core) machine, this can be literally true, since the main thread can run on one processor while the secondary thread runs on another processor.
In .NET, threading is a breeze. You can utilize the System.Threading.ThreadPool class, use asynchronous delegates, or create your own System.Threading.Thread objects.
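To stay with the printing example, a sketch with an explicit Thread might look like this (lines is a hypothetical variable holding the document contents):

// Sketch: the slow printing work runs on its own thread, so the main thread
// can keep processing user interface events.
var printThread = new Thread(() => PrintDocument(lines));
printThread.IsBackground = true;   // don't keep the process alive just for printing
printThread.Start();

ProcessUserInterfaceEvents();      // the UI stays responsive in the meantime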
If you are new to threading, I would throw out two cautions.
First, you can actually hurt your application's performance if you choose the wrong threading model. Be careful to avoid using too many threads or trying to thread things that should really happen sequentially.
Second (and more importantly), be aware that if you share data between threads, you will likely need to synchronize access to that shared data, e.g., using the lock keyword in C#. There is a wealth of information on this topic available online, so I won't repeat it here. Just be aware that you can run into intermittent, not-always-repeatable bugs if you do not do this carefully.
Your question is too vague...
But you probably just want to know how to use them in order to have a window, a time-consuming process running, and a progress bar...
So create a thread to do the time-consuming process and use delegates to update the progress bar! :)
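Something like this sketch (WinForms-style; StartWork, DoSliceOfWork, and progressBar are hypothetical names, and Invoke marshals the delegate onto the UI thread):

// Sketch: a worker thread does the slow job and reports progress back to the
// UI thread through a delegate passed to Control.Invoke.
void StartWork()
{
    new Thread(() =>
    {
        for (int step = 0; step <= 100; step += 10)
        {
            DoSliceOfWork();                                    // hypothetical slow step
            Invoke(new Action(() => progressBar.Value = step)); // delegate runs on the UI thread
        }
    }).Start();
}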