Scenario
I've written a distributed application in C# using WCF.
It uses Client/Server architecture, implementing the Publisher/Subscriber design pattern for "pushing" new data to the client.
The server side is hosted in a Windows service; the client is a Windows Forms app.
The server-side continually loops through a series of processes and sends the results to the client.
I want to add a whole area to the application for monitoring everything that is going on server-side.
Problem
Here is where I am a bit stuck - I can't decide how I should monitor this stuff.
Thoughts
Do I create an object for storing lots of different information - logs of where the process is up to in the server-side loop, any exceptions, errors, etc.?
I guess the real question is, how can I successfully maintain a monitoring aspect of the application that gives me relevant information?
Perhaps a central cache on the server-side that gets "snapped" at a point in time every so often and updates the client with the info?
Do you want to know what's currently going on in the server or do you also want to keep a history of what has happened?
If you only want to know what is going on at this moment, my solution would be to maintain the current server state in-memory (this shouldn't be too hard) and have the monitoring client call the server when it wants to know what is happening.
If you want to keep a history of what has happened, you need some data store where the server can write events to. The monitoring client can then read this data store to show what is happening now and what has happened in the past. Even better would be if the client did not have direct access to this data store but instead contacts the server to obtain the relevant information. This way you hide the implementation details of your monitoring history from the client.
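As a rough sketch of that split (current state vs. history), the server could expose a small WCF contract that the monitoring client calls; every type and member name below is hypothetical, not from the original application:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical snapshot of what the server is doing right now.
[DataContract]
public class ServerStatus
{
    [DataMember] public string CurrentProcess { get; set; }
    [DataMember] public DateTime LastUpdated { get; set; }
    [DataMember] public string LastError { get; set; }
}

// Hypothetical record of a single past event, for the history case.
[DataContract]
public class ServerEvent
{
    [DataMember] public DateTime Timestamp { get; set; }
    [DataMember] public string Message { get; set; }
}

[ServiceContract]
public interface IMonitoringService
{
    // The monitoring client polls this to see what is happening at this moment.
    [OperationContract]
    ServerStatus GetCurrentStatus();

    // The monitoring client asks the server for history; the store behind this
    // (file, embedded database, in-memory ring buffer) stays hidden from the client.
    [OperationContract]
    List<ServerEvent> GetEventsSince(DateTime since);
}
```

Because the client only sees the contract, you can later change how the history is stored without touching the monitoring app.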
Related
I would like to make an app that periodically, sporadically and automatically downloads some data from a list of user-defined sites, so it can then analyze and show historical graphs and other reports based on that data.
If I were to do this in Windows, I'd use the system Task Scheduler; if I were in Unix, I'd use cron; if I were in Android I'd use services. I would like to know how to do it in iOS.
As far as my research goes, this is not trivial in iOS, as there is no public interface for doing this. There are however, some workarounds to get this done:
Pull the historical data when the app awakens: Not possible, because I am not the provider of the data, and most of the data providers I will support don't store or offer access to historical data.
Download the data myself and have the clients pull it when awakening: Not desirable. Not only does this require additional costly infrastructure on my side (which would mean charging my users for what I intend to be a free app), but some of the content providers also require login credentials. I'd rather not ask for my users' login information to access information they can get themselves.
Save a timestamp from the last update and download data when the user puts the app in the foreground if the timestamp is expired: This doesn't serve my purposes, because data may (and is expected to) rapidly change in time. The entire purpose of this app is to automatically download this data periodically so all the historical data is available once the user opens the app again.
Use local notifications: It's pretty much the same as before. It requires user interaction to start the app, and the entire point of the app is to get this data even when the user is not using the device.
Use push notifications: Since these are just notifications that require user interaction to awaken the app, they can't be used, for the same reason as local notifications. It seems you can process all pending push notifications once the app awakens, but I have read that you can't define custom fields for these notifications.
Use background tasks: This technically seems the most promising of all options, but this is only available for very specific types of apps. I guess that a "Newsstand app" is the closest I can get, and it is actually meant to download data in the background. However, as it is named, it is meant for downloading "magazine or newspaper issues". Whether what I want to do can be classified as this is completely up to the app reviewer, and I'd rather not make an app that may get rejected on a technicality.
So, my question is: are there any other ways to do this that I am not aware of? Are there any apps that already do something similar?
Your assessment is correct. Your only 2 options are to host your own service that periodically downloads the data (your second bullet point) or use Newsstand. For Newsstand, it's possible that your app could fit the definition; it may just depend on how you characterize the app.
Your only choice in iOS really is to go with server-side infrastructure. Don't be afraid of charging the user; if the service you're providing is really useful, people will pay. I do get that it's a lot of extra work, etc, but it really is the only way.
Newsstand apps can only download data once a day, and they still require a server-side push notification to start the download, so you would have to put some infrastructure in place. More importantly, though, Apple is actually quite strict about being in the Newsstand; I've been through this a few times: you don't necessarily have to be a magazine/periodical, but your app should be primarily used for content distribution.
I think you have one further option, location-based updates, but this depends on your users moving around regularly.
See e.g. http://blog.instapaper.com/post/24293729146 and http://blog.news.me/post/21643399885/introducing-paper-boy-automatically-download-your-news
Please don't get confused by the title of this question; I don't know the exact technical term for what I want to accomplish :). My requirement may be a little strange, and I have already implemented it, but I need a best practice/method to do it properly.
Here is my situation.
I am developing a client-system-monitoring Windows application (tracking software on the client side and monitoring software on my system). I have many systems connected to a LAN, plus one monitoring system. If certain actions happen on a client system, I get notified. I cannot use any database on my network, so what I am doing is this: since my system is also connected to the LAN, I shared one folder on it. Whenever an action happens on a client system, the tracking software creates a file describing the event in the shared folder on my system. The monitoring software uses a timer that checks the shared folder for new files at a certain interval (15 minutes). If a file is found, the monitoring system knows an event has happened and displays it.
But the problem is that I will only get notified after up to 15 minutes. I also don't think this is the best way; there may be better methods. Is there any way to register an event directly with my monitoring application from the client machine?
Please NOTE: I cannot use any Database for this purpose.
Any suggestions will be appreciated.
Take a look at SignalR - it provides real-time notifications and can be used exactly as you describe.
You would not require a database (but remember if your server isn't running you will miss events - this may or may not be acceptable).
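A minimal sketch of how that could look with SignalR, assuming the hub is self-hosted in the monitoring application (via the SignalR self-host package and MapSignalR) and the tracking software uses the SignalR .NET client; the hub name, method name, and address below are made up for illustration:

```csharp
using System;
using Microsoft.AspNet.SignalR;          // hub, hosted in the monitoring application
using Microsoft.AspNet.SignalR.Client;   // client, used by the tracking software

// Hub self-hosted in the monitoring application (name is illustrative).
public class TrackingHub : Hub
{
    // Tracking software on each client machine calls this instead of writing a file.
    public void ReportEvent(string machine, string description)
    {
        // Hand the event to the monitoring UI here (raise an event, update a grid, ...).
        Console.WriteLine("{0}: {1}", machine, description);
    }
}

// Tracking side: connect and report an event (in practice you would keep the connection open).
public static class Tracker
{
    public static void Report(string description)
    {
        var connection = new HubConnection("http://monitoring-pc:8080/signalr"); // assumed address
        var proxy = connection.CreateHubProxy("TrackingHub");
        connection.Start().Wait();
        proxy.Invoke("ReportEvent", Environment.MachineName, description).Wait();
    }
}
```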
Take a look at FileSystemWatcher. This will monitor directories and raise events. In my experience it works well, but it can fail with large amounts of traffic.
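If you keep the shared-folder design, here is a rough sketch of replacing the 15-minute timer with a FileSystemWatcher; the share path and file extension are assumptions:

```csharp
using System;
using System.IO;

public static class SharedFolderMonitor
{
    public static FileSystemWatcher Start()
    {
        // Watch the existing shared folder and react as soon as a tracking file appears.
        var watcher = new FileSystemWatcher(@"\\monitoring-pc\TrackingEvents") // assumed share
        {
            Filter = "*.txt" // assumed file extension used by the tracking software
        };
        watcher.Created += (sender, e) =>
        {
            // Read and display the event; in practice you may need to retry briefly
            // if the tracking software is still writing the file.
            string eventText = File.ReadAllText(e.FullPath);
            Console.WriteLine("New event: " + eventText);
        };
        watcher.EnableRaisingEvents = true;
        return watcher; // keep a reference so the watcher isn't garbage collected
    }
}
```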
This sounds like a perfect candidate for MSMQ (MS Message Queue) and Triggers.
Create an MSMQ queue that all your tracking software instances can write to. Then have an MSMQ trigger (perhaps connecting to a front end through WCF/named pipes) display an alert in your monitoring software.
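A rough sketch of the queueing side using System.Messaging; the queue path and message format are assumptions, and you could either configure an MSMQ trigger or listen in code as shown below:

```csharp
using System;
using System.Messaging;

public static class TrackingQueue
{
    // Local path on the monitoring PC; remote tracking machines would address it as
    // "FormatName:DIRECT=OS:monitoring-pc\private$\trackingEvents" (assumed names).
    private const string Path = @".\private$\trackingEvents";

    // Tracking software: write one message per event.
    public static void Send(string description)
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Send(description, "TrackingEvent");
        }
    }

    // Monitoring software: receive messages as they arrive (an alternative to an MSMQ trigger).
    public static void Listen()
    {
        var queue = new MessageQueue(Path)
        {
            Formatter = new XmlMessageFormatter(new[] { typeof(string) })
        };
        queue.ReceiveCompleted += (sender, e) =>
        {
            Message message = queue.EndReceive(e.AsyncResult);
            Console.WriteLine("Event: " + message.Body);
            queue.BeginReceive(); // keep listening for the next message
        };
        queue.BeginReceive();
    }
}
```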
You may want to use the WCF framework.
Here are two links that can help you:
wcf-tutorial-events-and-callbacks
wcf-tutorial-basic-interprocess-communication
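In the spirit of those tutorials, a minimal sketch of the WCF route: the monitoring application self-hosts a service and the tracking software reports events to it. The contract, class, and address below are illustrative only:

```csharp
using System;
using System.ServiceModel;

// Contract hosted by the monitoring application (names are illustrative).
[ServiceContract]
public interface ITrackingService
{
    [OperationContract(IsOneWay = true)]
    void ReportEvent(string machine, string description);
}

public class TrackingService : ITrackingService
{
    public void ReportEvent(string machine, string description)
    {
        // Hand the event to the monitoring UI (raise an event, update a list, ...).
        Console.WriteLine("{0}: {1}", machine, description);
    }
}

// Self-hosting inside the monitoring application:
//   var host = new ServiceHost(typeof(TrackingService),
//       new Uri("net.tcp://monitoring-pc:9000/tracking"));        // assumed address
//   host.AddServiceEndpoint(typeof(ITrackingService), new NetTcpBinding(), "");
//   host.Open();
// The tracking software then calls ReportEvent through a ChannelFactory<ITrackingService>.
```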
I'm not even sure how to ask this question, but I'll give it a shot.
I have a program in C# that reads values from sensors on a manufacturing line that are indicative of the line's health. These values update every 500 milliseconds. This is done for four lines. I would like to write an "overview" program that can access these values over the network to give a good summary of how the factory is doing. My question is: how do I get the values from the C# programs on the lines to the C# overview program in real time?
If my question doesn't make much sense, let me know and I'll try to rephrase it.
Thanks!
You have several options:
MSMQ
Write the messages in MSMQ (Microsoft Message Queuing). This is an (optionally) persistent and fast store for transporting messages between machines.
Since you say that you need the messages in the other app in near real time, it makes sense to use MSMQ, because you do not want to write logic in that app for handling large amounts of incoming messages.
Keep MSMQ in the middle and take out what you need and, most importantly, when you can.
WCF
The other app could expose a WCF service which can be called by your realtime app each time there's data available. The endpoint could be over net.tcp, meaning low overhead, especially if you send small messages.
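For illustration, each line-side program could push its readings through a net.tcp channel roughly like this; the contract name, members, and address are assumptions:

```csharp
using System;
using System.ServiceModel;

// Contract exposed by the overview application (illustrative).
[ServiceContract]
public interface ILineHealthService
{
    [OperationContract(IsOneWay = true)]
    void PushReading(string lineId, double value, DateTime takenAt);
}

public class LineReporter : IDisposable
{
    private readonly ChannelFactory<ILineHealthService> factory;
    private readonly ILineHealthService channel;

    public LineReporter()
    {
        // Small messages every 500 ms are comfortably within net.tcp's capabilities.
        factory = new ChannelFactory<ILineHealthService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://overview-pc:9000/lineHealth")); // assumed address
        channel = factory.CreateChannel();
    }

    public void Report(string lineId, double value)
    {
        channel.PushReading(lineId, value, DateTime.UtcNow);
    }

    public void Dispose()
    {
        factory.Close();
    }
}
```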
Other options include what has been said before: database, file, etc. So you can make your choice between a wide variety of options.
It depends on a number of things, I would say. First of all, is it just the last value of each line that is interesting for the 'overview' application, do you need multiple values to determine line health, or do you perhaps want to have a history of values?
If you're only interested in the last value, I would directly communicate this value to the overview app. As suggested by others, you have numerous possibilities here:
Raw TCP using TcpClient (may be a bit too low-level).
Expose a http endpoint on the overview application (maybe it's a web application) and post new values to this endpoint.
Use WCF to expose some endpoint (named pipes, net.tcp, http, etc.) on the overview application and call this endpoint from each client application.
Use MSMQ to have each client enqueue messages that are then picked up by the overview app (also directly supported by WCF).
If you need some history of values, or you need multiple values to determine line health, I would go with a database solution. Then again you have to choose: does each client write to the database, or does each client post to the overview app (using any of the communication means described above), which then writes to the database?
Without knowing any more constraints for your situation, it's hard to decide between any of these.
You can use named pipes (see http://msdn.microsoft.com/en-us/library/bb546085.aspx) to have a fast way to communicate between two processes.
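A bare-bones sketch of that named-pipe approach; the pipe name, host name, and line-based message format are all assumptions:

```csharp
using System;
using System.IO;
using System.IO.Pipes;

public static class PipeExample
{
    // Overview side: accept one connection and read readings line by line.
    public static void RunServer()
    {
        using (var server = new NamedPipeServerStream("lineHealth", PipeDirection.In)) // assumed name
        {
            server.WaitForConnection();
            using (var reader = new StreamReader(server))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    Console.WriteLine("Reading: " + line);
                }
            }
        }
    }

    // Line side: connect to the overview machine and write one reading.
    public static void SendReading(string lineId, double value)
    {
        using (var client = new NamedPipeClientStream("overview-pc", "lineHealth", PipeDirection.Out)) // assumed host
        {
            client.Connect(1000); // wait up to one second for the server
            using (var writer = new StreamWriter(client) { AutoFlush = true })
            {
                writer.WriteLine(lineId + ";" + value);
            }
        }
    }
}
```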
A database. Put your values into a database and have the other app pull them out of that same database. This is a very common solution to this problem and opens up worlds of new scenarios.
See: Relational database
I've got a C# service that currently runs single-instance on a PC. I'd like to split this component so that it runs on multiple PCs. Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine.
Data synchronization can be done by the DB, so that should not be much of an issue. My current idea is to use some kind of load balancer that splits and sends the incoming requests to the array of PCs and makes sure the work is actually processed.
How would I implement such a functionality? I'm not sure if I'm asking the right question. If my understanding of how this goal should be achieved is wrong, please give me a hint.
Edit:
I wonder if the idea given above (a load balancer splits work packages among PCs and checks for results) is feasible at all. If there is some kind of already-implemented solution to this seemingly common problem, I'd love to use that solution.
Availability is a critical requirement.
I'd recommend looking at a Pull model of load-sharing, rather than a Push model. When pushing work, the coordinating server(s)/load-balancer must be aware of all the servers that are currently running in your system so that it knows where to forward requests; this must either be set in config or dynamically set (such as in the Publisher-Subscriber model), then constantly checked to detect if any servers have gone offline. Whilst it's entirely feasible, it can complicate the scaling-out of your application.
With a Pull architecture, you have a central work queue (hosted in MSMQ, SQL Server Service Broker or similar) and each processing service pulls work off that queue. Expose a WCF service to accept external requests and place work onto the queue, safe in the knowledge that some server will do the work, even though you don't know exactly which one. This has the added benefits that each server monitors its own workload and picks up work as-and-when it is ready, and you can easily add or remove servers to/from this model without any change in config.
This architecture is supported by NServiceBus and the communication between Windows Azure Web & Worker roles.
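To make the pull side concrete, each processing service could run a loop like the one below against a shared MSMQ queue; the queue path and the string work-item format are assumptions:

```csharp
using System;
using System.Messaging;

public class WorkerService
{
    // Shared work queue that every processing server pulls from (assumed path).
    private readonly MessageQueue queue =
        new MessageQueue(@"FormatName:DIRECT=OS:queue-host\private$\work")
        {
            Formatter = new XmlMessageFormatter(new[] { typeof(string) })
        };

    public void Run()
    {
        while (true)
        {
            // Blocks until work is available; MSMQ delivers each message to exactly one
            // receiver, so servers naturally take work as-and-when they are ready.
            Message message = queue.Receive();
            ProcessWorkItem((string)message.Body);
        }
    }

    private void ProcessWorkItem(string workItem)
    {
        Console.WriteLine("Processing: " + workItem);
    }
}
```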
From what you said, each PC will require a full copy of your service -
"Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine."
Otherwise you won't be able to move its work to another PC.
I would be tempted to have a central server which farms out work to individual PCs. This means that you would need some form of communication between the machines and would have to keep a record on the central server of what work has been assigned where.
You'll also need each machine to measure its CPU load and reject work if it is too busy.
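One rough way to take that CPU measurement is a PerformanceCounter; the 80% threshold below is just an arbitrary example:

```csharp
using System.Diagnostics;
using System.Threading;

public static class LoadCheck
{
    private static readonly PerformanceCounter Cpu =
        new PerformanceCounter("Processor", "% Processor Time", "_Total");

    // Returns true if this machine is currently too busy to accept more work.
    public static bool IsTooBusy()
    {
        Cpu.NextValue();               // the first sample always reads 0, so sample twice
        Thread.Sleep(500);
        return Cpu.NextValue() > 80f;  // arbitrary example threshold
    }
}
```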
A multi-threaded approach to the service would make good use of those multiple processor cores that are ubiquitous nowadays.
How about using a server and multi-threading your processing? Or even multi-threading on a PC as you can get many cores on a standard desktop now.
This obviously doesn't deal with the machine going down, but could give you much more performance for less investment.
You can look at Windows clustering, but you will have to handle a set of issues that depend on the behaviour of the service (you can add more details about the service itself so I can answer).
This depends on how you want to split your workload. This is usually done by:
Splitting the same workload across multiple services
This means the same service is installed on different servers and does the same job. Assume your service reads huge amounts of data from the DB servers, processes it to produce huge client-specific data files, and finally sends these data files to the clients. In this approach, all the services installed on the different servers do the same work, but they split it between them to increase performance.
Splitting parts of the workload across multiple services
In this approach each service is assigned individual jobs and works towards a different goal. In the above example, one service is responsible for reading data from the DB and generating the huge data files, and another service is configured only to read the data files and send them to the clients.
I have implemented the second approach in one of my projects, because it let me isolate and debug errors in case of any failures.
The usual approach for a load balancer is to split service requests evenly between all service instances.
For each work item (request) you can store the relevant information in a database. Each service should then also have at least one background thread checking the database for abandoned work items.
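As a sketch of that abandoned-work check, each service could run something like the following on a background timer; the table and column names are invented for illustration:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

public static class AbandonedWorkMonitor
{
    private static Timer timer; // kept as a field so it isn't garbage collected

    public static void Start(string connectionString)
    {
        timer = new Timer(_ => Reclaim(connectionString), null,
                          TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    private static void Reclaim(string connectionString)
    {
        // Hypothetical schema: WorkItems(Id, Owner, LastHeartbeat, Status).
        // Take over items whose owner has not reported progress for five minutes.
        const string sql =
            @"UPDATE WorkItems
              SET Owner = @me, LastHeartbeat = GETUTCDATE()
              WHERE Status = 'InProgress'
                AND LastHeartbeat < DATEADD(minute, -5, GETUTCDATE())";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@me", Environment.MachineName);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```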
I would suggest that you publish your service through WCF (Windows Communication Foundation).
Then implement a "central" client application which can keep track of available providers of your service and dish out work. The central app will act as scheduler and load balancer of the tasks to be performed.
Check out Juval Löwy's book on WCF ("Programming WCF Services") for a good introduction to this topic.
You can have a look at NGrid : http://ngrid.sourceforge.net/
or Alchemi : http://www.gridbus.org/~alchemi/index.html
Both are grid computing frameworks with load balancers that will get you started in no time.
Cheers,
Florian
We are developing a Windows Forms application that will be installed on about 1,000 employee pcs. Users may run multiple instances of the application at the same time. The clients are all on a single intranet.
Changes in the application may cause database record changes, which in turn must be communicated to the other clients so their UIs are updated.
Our team has talked about two different approaches:
1. Multicast packets
The source client modifies the records and then sends out a multicast packet with a payload indicating that something has changed. The other clients receive this and fetch the data specified (see the sketch at the end of this section). We need to account for the cases when the packet is not received, falling back onto actively retrieving the data.
My question at this point is: how does a client know it didn't receive a packet? (You don't know what you don't know.) This brings us to some sort of event log with timestamps in the database, where UI controls track the last time they were updated. They come into focus, check their timestamps, and update as needed.
Someone else said the UI elements should just reload every time they come into focus (think modes in Outlook, bringing controls to the front of a stack workspace with CAB), and that the multicast is there to tell clients that their current context has changed. If they miss it, they work with stale data until they change modes and come back.
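For reference, a bare-bones sketch of the multicast announcement described above, using UdpClient; the group address, port, and payload format are assumptions:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

public static class ChangeNotifier
{
    private static readonly IPAddress Group = IPAddress.Parse("239.1.2.3"); // assumed group
    private const int Port = 50000;                                          // assumed port

    // Source client: announce that a record has changed.
    public static void Announce(string recordKey)
    {
        using (var sender = new UdpClient())
        {
            byte[] payload = Encoding.UTF8.GetBytes(recordKey);
            sender.Send(payload, payload.Length, new IPEndPoint(Group, Port));
        }
    }

    // Other clients: listen for announcements and re-fetch the named record.
    public static void Listen()
    {
        using (var receiver = new UdpClient(Port))
        {
            receiver.JoinMulticastGroup(Group);
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] payload = receiver.Receive(ref remote);
                string recordKey = Encoding.UTF8.GetString(payload);
                // fetch the updated record here and refresh the UI
            }
        }
    }
}
```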
2. WCF and Callbacks
Clients register with WCF contracts for callbacks over a TCP binding. The primary technical concern with this is the server maintaining many open sockets. We have read up on how the connection isn't open in the traditional sense; it is put to sleep for a maximum of 90 seconds and then re-established at that point. We also read about the maximum number of open connections a Windows Server 2003 machine can handle, and how to modify that in the registry.
If we have 1,000 open socket connections to a server is this going to fall apart?
If anyone has faced this same situation and tried or evaluated the WCF approach we would love to hear about it.
I have not implemented a situation like this. However, I would think that one of the duplex bindings would not necessarily have a high overhead.
It all depends on how much information the server needs to send back to the clients. I understand you said the information will be used for them to update their UI. However it seems possible that they may not all need the same amount of information at the same time. For instance, if information about the Western region has changed, all 1000 clients may want to know that there is a change, and they may all want to update summary-level information about the Western region, but perhaps only 1/4 of them may need to see the details of the change.
If this is the case, then I'd recommend that the callback only provide information about what has changed, mostly at a summary level. Let those clients who are interested in the details of the change ask for the details. You might even go as far as to provide all the details for the top one or two levels of hierarchy, then for the rest, just include information saying "this changed at time". That way, depending on the level of hierarchy being viewed by a particular client, the client could then ask or not ask.
If necessary, you could batch updates together. If the clients only need to be updated once per second, then you could accumulate the changes for the last second and send them all at once.
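A small sketch of that batching idea, accumulating changes and flushing them once per second; the delegate that actually notifies the clients (e.g. wrapping the WCF callbacks) is left as a placeholder:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class UpdateBatcher
{
    private readonly ConcurrentQueue<string> pending = new ConcurrentQueue<string>();
    private readonly Action<List<string>> notifyClients; // placeholder for the real callback fan-out
    private readonly Timer timer;

    public UpdateBatcher(Action<List<string>> notifyClients)
    {
        this.notifyClients = notifyClients;
        // Flush accumulated changes once per second instead of one callback per change.
        this.timer = new Timer(_ => Flush(), null,
                               TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));
    }

    // Called whenever a database record changes.
    public void Add(string changeSummary)
    {
        pending.Enqueue(changeSummary);
    }

    private void Flush()
    {
        var batch = new List<string>();
        string change;
        while (pending.TryDequeue(out change))
        {
            batch.Add(change);
        }
        if (batch.Count > 0)
        {
            notifyClients(batch);
        }
    }
}
```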
You may also want to use some of the Peer to Peer bindings for some tasks. Perhaps the clients in a particular area of your business would like to know a little about what each other are working on - that sort of thing.