I'm currently building a C# application which will automatically authenticate a user against certain network resources when they connect to specific wireless networks.
At the moment, I'm using the Managed Wifi API to discover when a user connects / disconnects from a wireless network. I have an event handler, so that when any of these activities occurs, one of my methods is called to inspect the current state of the wireless connection.
To manage the state of the application, I have another class which is called the "conductor", which performs the operations required to change the state of the application. For instance, when the wireless card connects to the correct network, the conductor needs to change the system state from "Monitoring" to "Authenticating". If authentication succeeds, the conductor needs to change the state to "Connected". Disconnection results in the "Monitoring" state again, and an authentication error results in an "Error" state. These state changes (if the user requests) can result in TrayIcon notifications, so the user knows that they are being authenticated.
My current idea involves having the method used to inspect the current state of the wireless call the "authenticate" or "disconnect" methods within the state manager. However, I'm not sure if this is an appropriate use of the event handler -- should it instead be setting a flag or sending a message via some form of IPC to a separate thread which will begin the authentication / disconnection process?
In addition to the event handler being able to request connection / disconnection, a user can also perform it via the tray icon. As a result, I need to ensure these background operations are not blocking the tray's interactions with the user.
Only one component should be able to request a change of the system state at any time, so I would need to use a mutex to prevent concurrent state changes. However, how I should synchronize the rest of these components is a slight mystery to me.
Any advice or literature I should read would be appreciated. I have no formal training in the C# language, so I apologize if I've misstated anything.
EDIT: Most importantly, I want to verify that an event will be executed as a separate thread, so it cannot block the main UI. In addition, I want to verify that if I have an event handler subscribed to an event, it will handle events serially, not in parallel (so if the user connects and disconnects before the first connection event is processed, two state changes will not be occurring simultaneously).
Any advice or literature I should read would be appreciated. I have no formal training in the C# language, so I apologize if I've misstated anything.
That explains a few things. :)
I would read up on threads, event handling, and creation of system tray icons/interfaces.
It is important to note the following:
Events are processed on the same thread they are raised from. If you want the processing of an event not to lock up the GUI, the handler will need to move the work onto a different thread.
When an event is fired, it passes the appropriate arguments to all the methods in its invocation list. This is pretty much the same as calling one method which in turn calls all the others (see the EventFired example below). The purpose of events is not simply to call methods, since we can do that already; it is to call methods which may not be known when the code is compiled (for example, the click event on a button control is not known when the library containing the control is compiled). In short, if you can call the method directly instead of using an event, then do so.
void EventFired(int arg1, object arg2)
{
    // Raising the event is effectively just calling each subscribed method in turn.
    SubscribedMethod1(arg1, arg2);
    SubscribedMethod2(arg1, arg2);
    SubscribedMethod3(arg1, arg2);
    SubscribedMethod4(arg1, arg2);
    SubscribedMethod5(arg1, arg2);
    SubscribedMethod6(arg1, arg2);
    SubscribedMethod7(arg1, arg2);
}
If you want to prevent the user interface from locking up, do the work on another thread. Remember, though, that user interface elements (forms, buttons, grids, labels, etc.) can only be accessed from the thread that owns them. Use the Control.Invoke method to call back onto that thread.
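For example, here is a minimal sketch of handing work to a background thread and marshalling the result back with Control.Invoke. The form, label and Authenticate method are hypothetical names, and StartAuthentication is assumed to be called after the form is shown:

using System;
using System.Threading;
using System.Windows.Forms;

public class AuthForm : Form
{
    private readonly Label statusLabel = new Label { Dock = DockStyle.Top };

    public AuthForm()
    {
        Controls.Add(statusLabel);
    }

    // Hypothetical stand-in for the real long-running authentication.
    private string Authenticate()
    {
        Thread.Sleep(2000);
        return "Authenticated";
    }

    // Call this from the UI thread once the form is shown.
    public void StartAuthentication()
    {
        new Thread(() =>
        {
            string result = Authenticate();   // runs off the UI thread

            // Marshal the UI update back onto the thread that owns statusLabel.
            statusLabel.Invoke(new Action(() => statusLabel.Text = result));
        }).Start();
    }
}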
Removing an option from the interface is not a good way to prevent race conditions (the user starting a connect/disconnect while one is already running), as the user interface will be on a different thread and could be out of sync (it takes time for separate threads to sync up). While there are many ways to resolve this problem, the easiest for someone new to threading is to use a lock around the value. This way .NET will make sure only one thread can change the setting at a time. You will still need to update the user interface so the user knows the operation is in progress.
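A minimal sketch of that lock-based guard (class, field and method names are hypothetical):

class StateGuard
{
    private readonly object stateLock = new object();
    private bool transitionInProgress;

    // Returns false if a connect/disconnect is already running.
    public bool TryBeginTransition()
    {
        lock (stateLock)
        {
            if (transitionInProgress)
                return false;
            transitionInProgress = true;
            return true;
        }
    }

    public void EndTransition()
    {
        lock (stateLock)
        {
            transitionInProgress = false;
        }
    }
}

Both the tray icon and the wifi event handler would call TryBeginTransition before starting work, so only one of them can kick off a state change at a time.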
Your general design sounds fine. You could use two to three threads: one for the user interface (tray icon), one for checking for new network connections, and one (which could be merged with the connection check) that checks the internet connection.
Hope this helps, let us know if you need more (or accept an answer).
As an optional alternative...
If I were you, and since you're starting anew anyway, I would seriously consider the Rx Reactive Extensions.
It gives a completely fresh look at events and event based programming and helps a lot exactly with the things you're dealing with (including synchronizing, dealing with threads, combining events, stopping, starting etc. etc.).
It might be a bit of a 'steep curve' to learn at the start, but again, it might be worth it.
hope this helps,
To me it seems that you're going to overengineer the project.
You basically need to implement an event on the conductor and subscribe to it in the main application. That's it.
If only one component may change the state at a time but more than one component can request a change, then using some sync mechanism, like the Mutex you noted, is a perfectly valid choice.
Hope this helps.
If you want to have at most one state change pending at any time, it is probably best to have the event handlers of the external events you are listening to hold a lock during their execution. This ensures an easy programming model because you are guaranteed that the state of your app does not change underneath you. A separate thread is not needed in this particular case.
You need to make a distinction between the current state of the application and the target state. The user dictates the target state ("connected", "disconnected"). The actual state might be different. Example: the user wants to be disconnected but the actual state is authenticating. Once the authentication step is completed the state machine must examine the target state:
targetState == connected => set current state to connected
targetState == disconnected => begin to disconnect and set state to disconnecting
Separating actual and target state allows the user to change his mind any time and the state machine to steer towards the desired state.
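A minimal sketch of that separation (the state names, conductor class and Begin* placeholders are hypothetical):

enum ConnState { Disconnected, Authenticating, Connected, Disconnecting }

class Conductor
{
    private ConnState current = ConnState.Disconnected;
    private ConnState target  = ConnState.Disconnected;

    // The user (tray icon) only ever changes the target state.
    public void RequestConnect()    { target = ConnState.Connected; Steer(); }
    public void RequestDisconnect() { target = ConnState.Disconnected; Steer(); }

    // Called when the authentication step has just completed successfully.
    private void OnAuthenticated()
    {
        if (target == ConnState.Connected)
        {
            current = ConnState.Connected;       // user still wants to be connected
        }
        else
        {
            current = ConnState.Disconnecting;   // user changed their mind meanwhile
            BeginDisconnect();
        }
    }

    private void Steer()
    {
        if (current == ConnState.Disconnected && target == ConnState.Connected)
        {
            current = ConnState.Authenticating;
            BeginAuthentication();               // calls OnAuthenticated when done
        }
        else if (current == ConnState.Connected && target == ConnState.Disconnected)
        {
            current = ConnState.Disconnecting;
            BeginDisconnect();
        }
        // If an operation is already in flight, just wait; its completion
        // callback will look at the target state again.
    }

    // Hypothetical placeholders for the real asynchronous operations.
    private void BeginAuthentication() { }
    private void BeginDisconnect() { }
}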
It's hard to give a precise answer without seeing the whole (proposed) structure of your app. But in general, yes, it's OK to use an event handler for that sort of thing, though I'd probably move the actual implementation out to a separate method so that you can more easily trigger it from other locations.
The comment about disabling the "Connect" button sounds right on to me, though it's quite conceivable you might need other forms of synchronization as well. If your app doesn't need to be multi-threaded, though, I'd steer away from introducing multiple threads just for the sake of it. If you do, look into the Task APIs that are included as part of the Task Parallel Library. They abstract a lot of that stuff fairly well.
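For instance, a minimal TPL sketch (assuming .NET 4; Authenticate and UpdateTrayIcon are hypothetical methods, and ConnectAsync is assumed to be called from the UI thread):

using System;
using System.Threading.Tasks;

class TrayConnector
{
    // Hypothetical stand-ins for the real authentication and tray-update code.
    private string Authenticate() { return "Connected"; }
    private void UpdateTrayIcon(string status) { Console.WriteLine(status); }

    // Call this from the UI thread (e.g. a tray-menu click handler).
    public void ConnectAsync()
    {
        var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

        Task.Factory.StartNew(() => Authenticate())                    // background work
            .ContinueWith(t => UpdateTrayIcon(t.Result), uiScheduler); // back on the UI thread
    }
}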
And the comment about not over-thinking the issue is also well-taken. If I were in your shoes, just beginning with a new language, I'd avoid trying to get the architecture just right at the start. Dive in, and develop it with the cognitive toolset you've already got. As you explore more, you'll figure out, "Oh, crap, this is a much better way to do that." And then go and do it that way. Refactoring is your friend.
Related
So - I have this external assembly that I'm using. It fires an event DataReceived. Then I'm doing some database operations which may fail due to problems with the data or because of some errors in the code. It would be great if I could "bubble up" the exception into the GUI. In my case I would need a blocking call to the GUI because of the way the assembly works. I'm not sure if this is a good idea but right now that's the only thing that comes to mind based on how the external code works.
The assembly assumes that if the callback (the event handler) returned safely then the data was processed successfully, which may not be the case. Of course I would have to deal with the error in some way, but that would mean that the server on the other side always assumes that the data was handled correctly.
My questions are:
Can I throw the exception into the GUI? If so, how?
How can I handle the exception in my event so that the assembly doesn't think I processed the data? Do I need some kind of blocking call/exception into the GUI? (Is this even possible?)
On a side note: Isn't that assembly broken by design somehow? Why would it automatically assume that everything went fine just based on if the callback returned?
I don't think that this is broken by design. When you receive the event, you are informed that something has changed in the source. You should then only do what is needed to grab the information you need from the source, and do any further processing decoupled from the source. For that purpose, within the event handler I would simply grab the data (maybe from the source, maybe from the event args) and put it into a ConcurrentQueue. Within my class another Task is running that uses a BlockingCollection to retrieve the elements from this queue and process them. If anything fails, simply Invoke() onto the GUI thread and inform the user about what happened.
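A minimal sketch of that producer/consumer split (the payload type, ProcessData and ReportError are hypothetical; the real event signature depends on your assembly):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class DataPump
{
    private readonly BlockingCollection<string> queue =
        new BlockingCollection<string>(new ConcurrentQueue<string>());

    public DataPump()
    {
        // One long-running consumer task drains the queue in order.
        Task.Factory.StartNew(Consume, TaskCreationOptions.LongRunning);
    }

    // Event handler: only copy the data and return quickly.
    public void OnDataReceived(string payload)
    {
        queue.Add(payload);
    }

    private void Consume()
    {
        foreach (var item in queue.GetConsumingEnumerable())
        {
            try
            {
                ProcessData(item);   // database work, may throw
            }
            catch (Exception ex)
            {
                ReportError(ex);     // e.g. form.Invoke(...) to show it in the GUI
            }
        }
    }

    // Hypothetical placeholders.
    private void ProcessData(string data) { }
    private void ReportError(Exception ex) { }
}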
Ah, and another approach, instead of using a ConcurrentQueue, would be to use Rx. With that you can subscribe to an event and observe it on a different thread by using ObserveOn(), which leads to nearly the same result (in this case) but with a more LINQ-like syntax.
If I have one thread for my MMORPG server running an async socket and async packet handler, and in that one thread I have a static World that contains all the entities in the game:
Would there be any threading issues if, say, the async packet handler receives an Attack message, resulting in a search of the entities in the world to figure out the target,
while at the same time the static World's Proc method is increasing the size of the Dictionary containing the monster entities, adding extra monsters that have spawned?
If this is all on the same thread, will the server explode?
will the server explode?
Yes, you can run into problems ("explode"), because the async stuff is running on a different thread (even though you didn't create that thread explicitly) and it might access a shared object (the world) at the same time as your main thread. Many data structures (including Dictionary) are not designed for this scenario and might crash or return the wrong answer.
The typical approach is to use locks to protect your shared objects: take the lock before modifying it, do whatever modification, and then release the lock. This way, only one thread at a time accesses the world (and its dictionary) and so everything remains consistent. Explosion averted.
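A minimal sketch of such locking around the shared world (the World and Monster types and method names are hypothetical):

using System.Collections.Generic;

class Monster
{
    public int Id;
}

class World
{
    private readonly object worldLock = new object();
    private readonly Dictionary<int, Monster> monsters = new Dictionary<int, Monster>();

    // Called from the async packet handler (completion-port thread).
    public Monster FindTarget(int id)
    {
        lock (worldLock)
        {
            Monster target;
            monsters.TryGetValue(id, out target);
            return target;
        }
    }

    // Called from the world-update loop.
    public void Spawn(Monster m)
    {
        lock (worldLock)
        {
            monsters.Add(m.Id, m);
        }
    }
}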
Another way would be to switch to a more synchronous form of networking, perhaps for example avoiding completion handlers and instead waiting to hear from each of the players, and then acting on the inputs. This can be done very simply, but the simple way has drawbacks: any one slow player can slow the whole thing down. So sadly you're probably going to have to deal with some complexity, one way or another.
If I give the answer in one line: the server will explode, because network activity and game logic are in the same thread, and as you have mentioned you will need heavy network usage.
I would seriously suggest that you have a look at F#. As far as I can tell from the question, it has the things you need, such as observable collection changes, and async is built into the language by default. Even Node.js is worth trying. But again, it all depends on the requirements. Still, let me explain a few keywords that may help you make a decision.
Non-blocking: the thread will not block on events. It will not wait for one function to finish before executing another. But that doesn't mean you can't block it; in any case it is a single thread.
Async: somewhat like the above, but in C# 5 async comes as a language keyword so you don't have to do the threading part of the programming yourself.
Parallel processing: in game development parallel processing is important. You can do it with multiple threads or just use the TPL.
In the case of a UI-based game (where there are many objects) I highly recommend that you separate the processing thread and the UI thread to improve the user experience; otherwise the FPS will drop while you are processing data.
Let me know if any further information needed.
The server will not go down if you take a little care with it.
If on the same thread, then no. If you are doing all the work mentioned on a single thread, then there's no issue. However, as stated, if you are accessing a "shared" object instance across threads, then yes, there will be an issue, and locking will be required (using a "lock(){...}" block).
As your user base increases, you will have to keep an eye on the number of threads generated, or event messages if using a non-blocking event model for incoming requests.
On a different, yet related note, keep an eye on this C# based MMO server (with scripting support): https://dreamspace.codeplex.com/ - it may become a big help to MMO game creators very soon (will support Construct 2 by default).
I have a very long-running workflow that moves video files around between video processing devices and then reports the files' state to a database which is used to drive a UI.
At times the users press a button on the UI to "Accept" a file into a video storage server. This involves copying a file from one server to another.
They have asked if this activity can be cancelled.
I've looked at the WF4 documentation and I can't see a way to roll back part of a workflow.
Is this possible, and what technique should I use?
There are two basic built-in activities for reverting work:
The TransactionScope activity for ACID transactions
The CompensableActivity for long-running work
With the CompensableActivity you add activities to the compensation handler to undo work previously done. The Compensate activity can be used to trigger compensation. If no compensation is triggered, the confirmation handler runs instead, either automatically at the end of the workflow or when you use the Confirm activity.
See A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 by Matt Milner for more details.
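A minimal sketch of a CompensableActivity built in code (the WriteLine activities stand in for the real copy and undo steps, which are hypothetical here):

using System.Activities;
using System.Activities.Statements;

class Program
{
    static void Main()
    {
        Activity workflow = new Sequence
        {
            Activities =
            {
                new CompensableActivity
                {
                    Body = new WriteLine { Text = "Copying file to the storage server..." },
                    // Runs only if compensation is triggered later (e.g. via Compensate).
                    CompensationHandler = new WriteLine { Text = "Removing the partial copy..." },
                    // Runs when the work is confirmed (automatically at workflow
                    // completion, or explicitly via the Confirm activity).
                    ConfirmationHandler = new WriteLine { Text = "Copy confirmed." }
                },
                new WriteLine { Text = "Rest of the workflow..." }
            }
        };

        WorkflowInvoker.Invoke(workflow);
    }
}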
Okay, so let's first say that the process of "rolling back" what was already uploaded will have to be done by hand, so wherever you're storing those chunks you'll need to clean them up by hand when the user cancels.
Now, on to the workflow itself: in my opinion you could set up your Flowchart as described below.
Alright so let's break down this workflow. The entire service should be correlated on some client key so that way you can start the service with Start once per client to keep the startup costs down.
Next, when said client wants to start a transfer, you'll call BeginTransfer, which will move into the transfer loop. The transfer loop is set up so that you can cancel between chunks if necessary by calling CancelTransfer.
That same branch, in this model, is used to finish the transfer as well, because it gets out of the loop; so when you're done transferring chunks just call CancelTransfer (if you don't like that, just set up a different branch that looks exactly the same).
Finally, when you're in the process loop, you can SoftExit the entire workflow and shut it down so that you can kill it softly if there is necessary maintenance or when the client is finished with its connection it needs to call SoftExit to dispose of it.
Not sure if I totally understand your scenario, but I think you would need to run your transfer process on an asynchronous thread that from time to time checks a "cancel" variable and performs a rollback when it is set. This variable can be modified on the main thread by your UI.
Of course, this will only allow you to cancel between transfers, not in the middle of one single transfer.
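A minimal sketch of that idea using a CancellationTokenSource as the "cancel" variable (CopyChunk and RollBack are hypothetical methods):

using System.Threading;
using System.Threading.Tasks;

class TransferJob
{
    private readonly CancellationTokenSource cts = new CancellationTokenSource();

    public void Start(string[] chunks)
    {
        Task.Factory.StartNew(() =>
        {
            foreach (var chunk in chunks)
            {
                if (cts.Token.IsCancellationRequested)
                {
                    RollBack();      // undo whatever was already copied
                    return;
                }
                CopyChunk(chunk);    // one unit of work between cancellation checks
            }
        });
    }

    // Called from the UI thread (e.g. a Cancel button).
    public void Cancel()
    {
        cts.Cancel();
    }

    private void CopyChunk(string chunk) { /* hypothetical file copy */ }
    private void RollBack() { /* hypothetical cleanup */ }
}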
I am programming a TAPI application which uses the state pattern for dealing with the different states a TK can be in. Incoming and outgoing calls are recorded via an ObservableCollection in a ListView (call journal). The call data gets compared with contacts stored in a SQL-Server database to determine possible matches. That information is then used to update the call journal. All this in real time of course and all governed by/in the different states of the FSM (finite state machine).
To distinguish calls, I use the call ID provided by TAPI. When the phone rings or I start calling out, a new record including its call ID is added to the call journal, the customer database is searched for the number, and certain data in the journal is updated accordingly. When proceeding through the different call states, the application dynamically updates the journal (e.g. changing an icon that visually shows the state of the specific call, etc.).
Exactly those updates to the ObservableCollection are giving me headaches, as they need to happen in a certain order. For example, when receiving a call, the associated state will create a new entry in the ObservableCollection. When the state changes, the new state might try to update the collection even though it is not clear whether the entry to be changed has been added yet. The states happen to switch really fast, apparently faster than the collection can be updated.
Would some kind of message queue be a possible/good solution? If so, how could such a message queue be implemented - in the context of either a state machine or an ObservableCollection. I am not looking for complete solutions, but any information which I cannot easily find via google or stackoverflow would be appreciated.
Edit: greatly rephrased the question.
Edit: I added my own solution for the problem, but will wait and see if there is possibly someone with a better idea.
Have you checked whether the result of FirstOrDefault is null? This can happen if no element with given id exists in the collection.
For example:
var element = this.FirstOrDefault(p => p.ID == id);
if (element != null) {
// Do something with element.Number.
}
Or you could just call First and see if you get InvalidOperationException.
--- EDIT ---
I see from your comment that you seem to be accessing the same ObservableCollection from multiple threads concurrently. If that is the case, you need to protect the shared data structure with locking. It is entirely possible that one thread begins inserting a new element at the exact moment the other one is searching for it, leading to all sorts of undefined behavior. According to the MSDN documentation for ObservableCollection:
"Any instance members are not guaranteed to be thread safe."
As for debugging, you can "freeze" other threads and so you can concentrate only on the thread of interest without excessive "jumping". See the Threads panel, right-click menu, Freeze and Thaw options.
Updating the ObservableCollection is a long-running process, at least compared to receiving and handling the TAPI events. This can lead to race conditions where a call state that has to edit a call entry cannot find it, because that state acquired the lock for writing/updating the collection before the call state that actually has to add the entry. Also, not handling the TAPI events in the proper order would break the state machine.
I decided to implement a simplified command pattern. The TAPI events, which used to trigger the performance-heavy state transitions directly, now get added to a thread-safe, non-blocking and observable command queue. When a command gets enqueued, the queue class starts executing (and dequeuing) the commands in a new thread, that is, it triggers the proper call states in the finite state machine, until there are no commands left in the queue. If a dequeuing thread is already running, no new thread is created (multi-threading would lead to race conditions again), and the queue class blocks reentrancy to make sure that only one command is ever executed at any one time.
So basically: all TAPI-events (the invoker) are added to a queue (the client) in the order they are happening, as fast as possible. The queue then relays the TAPI information to the receiver, the finite state machine performing the business logic, taking its time but making sure the information gets updated in the proper order.
Edit: Starting from .NET 4.0 you can use the ConcurrentQueue<T> class to achieve the same result.
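A minimal sketch of such an on-demand draining queue built on ConcurrentQueue<T> (the command is just an Action here; the class and field names are hypothetical):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class CommandQueue
{
    private readonly ConcurrentQueue<Action> commands = new ConcurrentQueue<Action>();
    private int isDraining;   // 0 = idle, 1 = a drain task is running

    // Called from the TAPI event handlers, in the order the events arrive.
    public void Enqueue(Action command)
    {
        commands.Enqueue(command);

        // Start a single drain task only if none is currently running.
        if (Interlocked.CompareExchange(ref isDraining, 1, 0) == 0)
            Task.Factory.StartNew(Drain);
    }

    private void Drain()
    {
        Action command;
        while (commands.TryDequeue(out command))
            command();            // drive the state machine, one command at a time

        Interlocked.Exchange(ref isDraining, 0);

        // If something was enqueued after the last TryDequeue but before the
        // flag was reset, make sure it still gets processed.
        if (!commands.IsEmpty && Interlocked.CompareExchange(ref isDraining, 1, 0) == 0)
            Task.Factory.StartNew(Drain);
    }
}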
What exactly do I need delegates and threads for?
Delegates act as the logical (but safe) equivalent to function-pointers; they allow you to talk about an operation in an abstract way. The typical example of this is events, but I'm going to use a more "functional programming" example: searching in a list:
List<Person> people = ...
Person fred = people.Find( x => x.Name == "Fred");
Console.WriteLine(fred.Id);
The "lambda" here is essentially an instance of a delegate - a delegate of type Predicate<Person> - i.e. "given a person, is something true or false". Using delegates allows very flexible code - i.e. the List<T>.Find method can find all sorts of things based on the delegate that the caller passes in.
In this way, they act largely like a 1-method interface - but much more succinctly.
Delegates: basically, a delegate is a type that references a method. It's like a pointer to a method; you can point it at different methods that match its signature and use it to pass a reference to that method around.
A thread is a sequential stream of instructions that execute one after another to complete a computation. You can have different threads running simultaneously to accomplish a specific task. A thread runs on a single logical processor.
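A small sketch of both ideas together (the delegate type and method names are hypothetical):

using System;
using System.Threading;

class Demo
{
    // A delegate type: "any method that takes a string and returns nothing".
    delegate void Logger(string message);

    static void ConsoleLog(string message) { Console.WriteLine(message); }

    static void Main()
    {
        Logger log = ConsoleLog;             // point the delegate at a matching method
        log("hello from the main thread");

        // A second thread runs its own stream of instructions concurrently.
        var worker = new Thread(() => log("hello from a worker thread"));
        worker.Start();
        worker.Join();
    }
}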
Delegates are used to add methods to events dynamically.
Threads run inside of processes, and allow you to run 2 or more tasks at once that share resources.
I'd suggest having a search on these terms; there is plenty of information out there. They are pretty fundamental concepts, and Wikipedia is a high-level place to start:
http://en.wikipedia.org/wiki/Thread_(computer_science)
http://en.wikipedia.org/wiki/C_Sharp_(programming_language)
Concrete examples always help me so here is one for threads. Consider your web server. As requests arrive at the server, they are sent to the Web Server process for handling. It could handle each as it arrives, fully processing the request and producing the page before turning to the next one. But consider just how much of the processing takes place at hard drive speeds (rather than CPU speeds) as the requested page is pulled from the disk (or data is pulled from the database) before the response can be fulfilled.
By pulling threads from a thread pool and giving each request its own thread, we can take care of the non-disk needs for hundreds of requests before the disk has returned data for the first one. This will permit a degree of virtual parallelism that can significantly enhance performance. Keep in mind that there is a lot more to Web Server performance but this should give you a concrete model for how threading can be useful.
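As an illustration, a minimal sketch of queuing each request onto the thread pool (the dispatcher class and HandleRequest are hypothetical):

using System;
using System.Threading;

class RequestDispatcher
{
    // Hypothetical request handler: reads from disk/database, builds the page.
    private void HandleRequest(object state)
    {
        Console.WriteLine("Handling {0}", state);
    }

    // Called as each request arrives; returns immediately so the next
    // request can be accepted while pool threads wait on the disk.
    public void OnRequestArrived(string requestUrl)
    {
        ThreadPool.QueueUserWorkItem(HandleRequest, requestUrl);
    }
}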
They are useful for the same reason high-level languages are useful. You don't need them for anything, since really they are just abstractions over what is really happening. They do make things significantly easier and faster to program or understand.
Marc Gravell provided a nice answer for 'what is a delegate.'
Andrew Troelsen defines a thread as
...a path of execution within a process. "Pro C# 2008 and the .NET 3.5 Platform," APress.
All processes that are run on your system have at least one thread. Let's call it the main thread. You can create additional threads for any variety of reasons, but the clearest example for illustrating the purpose of threads is printing.
Let's say you open your favorite word processing application (WPA), type a few lines, and then want to print those lines. If your WPA uses the main thread to print the document, the WPA's user interface will be 'frozen' until the printing is finished. This is because the main thread has to print the lines before it can process any user interface events, i.e., button clicks, mouse movements, etc. It's as if the code were written like this:
do
{
    ProcessUserInterfaceEvents();
    PrintDocument();
} while (true);
Clearly, this is not what users want. Users want the user interface to be responsive while the document is being printed.
The answer, of course, is to print the lines in a second thread. In this way, the user interface can focus on processing user interface events while the secondary thread focuses on printing the lines.
The illusion is that both tasks happen simultaneously. On a single-processor machine this cannot be literally true, since the processor can only execute one thread at a time; however, switching between the threads happens so fast that the illusion is usually maintained. On a multi-processor (or multi-core) machine it can be literally true, since the main thread can run on one processor while the secondary thread runs on another.
In .NET, threading is a breeze. You can utilize the System.Threading.ThreadPool class, use asynchronous delegates, or create your own System.Threading.Thread objects.
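For example, a minimal sketch of the printing scenario using a System.Threading.Thread (the class and its PrintDocument method are hypothetical, not the System.Drawing.Printing type):

using System.Threading;

class WordProcessor
{
    // Called from the UI thread when the user clicks Print.
    public void OnPrintClicked()
    {
        var printThread = new Thread(PrintDocument);
        printThread.IsBackground = true;   // don't keep the app alive just to print
        printThread.Start();
        // The UI thread returns immediately and keeps processing events.
    }

    // Hypothetical long-running print work.
    private void PrintDocument()
    {
        Thread.Sleep(5000);   // stand-in for spooling the pages
    }
}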
If you are new to threading, I would throw out two cautions.
First, you can actually hurt your application's performance if you choose the wrong threading model. Be careful to avoid using too many threads or trying to thread things that should really happen sequentially.
Second (and more importantly), be aware that if you share data between threads, you will likely need to synchronize access to that shared data, e.g. using the lock keyword in C#. There is a wealth of information on this topic available online, so I won't repeat it here. Just be aware that you can run into intermittent, not-always-repeatable bugs if you do not do this carefully.
Your question is too vague...
But you probably just want to know how to use them in order to have a window, a time-consuming process running, and a progress bar...
So create a thread to do the time-consuming process and use delegates to update the progress bar! :)
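A minimal WinForms sketch of that (the form, progress bar and DoLongWork are hypothetical):

using System;
using System.Threading;
using System.Windows.Forms;

public class WorkForm : Form
{
    private readonly ProgressBar progressBar =
        new ProgressBar { Dock = DockStyle.Top, Maximum = 100 };

    public WorkForm()
    {
        Controls.Add(progressBar);
    }

    protected override void OnShown(EventArgs e)
    {
        base.OnShown(e);
        // Start the time-consuming work on its own thread.
        new Thread(DoLongWork) { IsBackground = true }.Start();
    }

    private void DoLongWork()
    {
        for (int i = 1; i <= 100; i++)
        {
            Thread.Sleep(50);   // stand-in for one step of the real work

            // Use a delegate (via Invoke) to update the bar on the UI thread.
            progressBar.Invoke(new Action(() => progressBar.Value = i));
        }
    }
}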