We are developing a Windows Forms application that will be installed on about 1,000 employee PCs. Users may run multiple instances of the application at the same time. The clients are all on a single intranet.
Changes in the application may cause database record changes, which in turn must be communicated to the other clients so their UIs are updated.
Our team has talked about two different approaches:
1. Multicast packets
The source client modifies the records and then sends out a multicast packet whose payload indicates that something has changed. The other clients receive this and fetch the data specified. We need to account for the case where the packet is not received, falling back to actively retrieving the data.
My question at this point is: how does a client know it didn't receive a packet? (You don't know what you don't know.) Which brings us to some sort of event log with timestamps in the database, with UI controls tracking the last time they were updated. When they come into focus, they check their timestamps and update as needed.
Someone else said the UI elements would just reload every time they come into focus (think modes in Outlook, bringing controls to the front of a stack workspace with CAB), and that the multicast would just tell clients that their current context has changed. If they miss it, they work with stale data until they change modes and come back.
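To make option 1 concrete, here is roughly the kind of thing we are picturing (a sketch only; the multicast group, port, and payload format are placeholders, not a design we have settled on):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Sketch only: the modifying client broadcasts a tiny "entity X changed" notice;
// the other clients listen and re-fetch the data themselves.
class ChangeNotifier
{
    // Placeholder multicast group and port for the intranet.
    static readonly IPAddress Group = IPAddress.Parse("239.0.0.222");
    const int Port = 4567;

    public static void SendChange(string entity, long id)
    {
        using (var udp = new UdpClient())
        {
            byte[] payload = Encoding.UTF8.GetBytes(entity + ":" + id);
            udp.Send(payload, payload.Length, new IPEndPoint(Group, Port));
        }
    }

    public static void Listen(Action<string> onChange)
    {
        var udp = new UdpClient();
        // Allow several instances on the same machine to share the port.
        udp.Client.SetSocketOption(SocketOptionLevel.Socket,
                                   SocketOptionName.ReuseAddress, true);
        udp.Client.Bind(new IPEndPoint(IPAddress.Any, Port));
        udp.JoinMulticastGroup(Group);
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] data = udp.Receive(ref remote);     // blocks
            onChange(Encoding.UTF8.GetString(data));   // e.g. "Order:42"
        }
    }
}
```

A client that misses a packet would still fall back on the timestamp check described above.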
2. WCF and Callbacks
Clients register with WCF contracts for callbacks over a TCP binding. The primary technical concern is the server maintaining many open sockets. We have read that the connection isn't held open in the traditional sense; it is put to sleep for a maximum of 90 seconds and then re-established. We have also read about the maximum number of open connections a Windows Server 2003 machine can handle, and how to modify that in the registry.
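Roughly, the duplex contract shape we have in mind looks like this (a sketch only; the names are placeholders):

```csharp
using System;
using System.ServiceModel;

// Sketch of the duplex contract we are considering (placeholder names).
[ServiceContract(CallbackContract = typeof(IChangeCallback))]
public interface IChangeSubscription
{
    [OperationContract]
    void Subscribe();      // server stores OperationContext.Current.GetCallbackChannel<IChangeCallback>()

    [OperationContract]
    void Unsubscribe();
}

public interface IChangeCallback
{
    [OperationContract(IsOneWay = true)]
    void RecordsChanged(string entity, DateTime changedAtUtc);
}
```

The service would be exposed over netTcpBinding, keep each client's callback channel, and invoke RecordsChanged whenever records change.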
If we have 1,000 open socket connections to a server is this going to fall apart?
If anyone has faced this same situation and tried or evaluated the WCF approach we would love to hear about it.
I have not implemented a situation like this. However, I would think that one of the duplex bindings would not necessarily have a high overhead.
It all depends on how much information the server needs to send back to the clients. I understand you said the information will be used for them to update their UI. However it seems possible that they may not all need the same amount of information at the same time. For instance, if information about the Western region has changed, all 1000 clients may want to know that there is a change, and they may all want to update summary-level information about the Western region, but perhaps only 1/4 of them may need to see the details of the change.
If this is the case, then I'd recommend that the callback only provide information about what has changed, mostly at a summary level. Let those clients who are interested in the details of the change ask for the details. You might even go as far as to provide all the details for the top one or two levels of the hierarchy, and for the rest just include information saying "this changed at <time>". That way, depending on the level of hierarchy being viewed by a particular client, the client could then ask or not ask.
If necessary, you could batch updates together. If the clients only need to be updated once per second, then you could accumulate the changes for the last second and send them all at once.
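As a rough sketch of that batching idea (purely illustrative; the publish delegate stands in for however you invoke the callbacks):

```csharp
using System;
using System.Collections.Generic;
using System.Timers;

// Accumulate change summaries and flush them to subscribers once per second.
class ChangeBatcher
{
    private readonly object _sync = new object();
    private readonly List<string> _pending = new List<string>();
    private readonly Timer _timer = new Timer(1000);     // flush interval in ms
    private readonly Action<string[]> _publish;          // e.g. wraps the WCF callbacks

    public ChangeBatcher(Action<string[]> publish)
    {
        _publish = publish;
        _timer.Elapsed += (s, e) => Flush();
        _timer.Start();
    }

    public void Add(string changeSummary)
    {
        lock (_sync) { _pending.Add(changeSummary); }
    }

    private void Flush()
    {
        string[] batch;
        lock (_sync)
        {
            if (_pending.Count == 0) return;
            batch = _pending.ToArray();
            _pending.Clear();
        }
        _publish(batch);    // one callback per second instead of one per change
    }
}
```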
You may also want to use some of the Peer to Peer bindings for some tasks. Perhaps the clients in a particular area of your business would like to know a little about what each other are working on - that sort of thing.
I am new to GUIs, and I have encountered a problem in my client-server program.
My program is like a "customer support" tool, where multiple clients can use it from different computers simultaneously. My problem is that when one client changes some info, it is inserted into the DB, but the other clients will not see it unless I add a "Refresh" button to my GUI.
I want the GUI to be dynamic and react to other clients' actions. How can I get around this issue?
EDIT:
1. .NET 4,
2. SQL Server,
3. The actions happen after a button click
Basically, you have two options: push or poll. Push (some central server announcing the change to all the listeners) is more immediate, but demands suitable infrastructure. It also depends on the number of clients you need to support, and how many events are passing through the system. Personally, I'm a big fan of redis pub/sub for this (it is actually what we use for the live updates here on stackexchange, coupled with web-sockets). But in some cases you can get the database to provide change notifications directly (personally I prefer not to use this). You may also be able to use events over something like WCF from a central app-server, but that depends on there only being one app-server, which doesn't sound like a good idea to me.
The other option is polling - i.e. have the application automatically query the system periodically (every minute perhaps) to see if the data being displayed has changed. If you can, using the timestamp/rowversion is a cheap way of doing this.
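For example, a minimal polling sketch against SQL Server using a rowversion column (the table, column, and interval here are hypothetical):

```csharp
using System;
using System.Data.SqlClient;
using System.Linq;
using System.Windows.Forms;

// Polls once a minute and refreshes the grid only when something actually changed.
// Assumes a hypothetical table "SupportTickets" with a SQL Server rowversion column "RowVer".
class TicketPoller
{
    private byte[] _lastVersion = new byte[8];
    private readonly Timer _timer = new Timer { Interval = 60000 };

    public TicketPoller(string connectionString, Action refreshGrid)
    {
        _timer.Tick += (s, e) =>
        {
            using (var con = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT TOP 1 RowVer FROM SupportTickets ORDER BY RowVer DESC", con))
            {
                con.Open();
                var current = cmd.ExecuteScalar() as byte[];
                if (current != null && !current.SequenceEqual(_lastVersion))
                {
                    _lastVersion = current;
                    refreshGrid();   // only touch the UI when the data has changed
                }
            }
        };
        _timer.Start();
    }
}
```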
I would like to make an app that periodically, sporadically and automatically downloads some data from a list of user-defined sites, so it can then analyze and show historical graphs and other reports based on that data.
If I were to do this in Windows, I'd use the system Task Scheduler; if I were in Unix, I'd use cron; if I were in Android I'd use services. I would like to know how to do it in iOS.
As far as my research goes, this is not trivial in iOS, as there is no public interface for doing this. There are however, some workarounds to get this done:
Pull the historical data when the app awakens: Not possible, because I am not the provider of the data, and most of the data providers I will support don't store or offer access to historical data.
Download the data myself and have the clients pull it when awakening: Not desirable. Not only does this require additional costly infrastructure on my side (which would mean charging my users for what I intend to be a free app), but some of the content providers also require login credentials. I'd rather not ask for my users' login information to access information they can get themselves.
Save a timestamp from the last update and download data when the user puts the app in the foreground if the timestamp is expired: This doesn't serve my purposes, because data may (and is expected to) rapidly change in time. The entire purpose of this app is to automatically download this data periodically so all the historical data is available once the user opens the app again.
Use local notifications: It's pretty much the same as before. It requires user interaction to start the app, and the entire point of the app is to get this data even when the user is not using the device.
Use push notifications: Since these are just notifications that require user interaction to awaken the app, they can't be used, for the same reason as local notifications. It seems you can process all pending push notifications once the app awakens, but I have read that you can't define custom fields for these notifications.
Use background tasks: This technically seems the most promising of all options, but this is only available for very specific types of apps. I guess that a "Newsstand app" is the closest I can get, and it is actually meant to download data in the background. However, as it is named, it is meant for downloading "magazine or newspaper issues". Whether what I want to do can be classified as this is completely up to the app reviewer, and I'd rather not make an app that may get rejected on a technicality.
So, my question is: are there any other ways to do this that I am not aware of? Are there any apps that already do something similar?
Your assessment is correct. Your only two options are to host your own service that periodically downloads the data (your second bullet point) or use Newsstand. For Newsstand, it's possible that your app could fit the definition; it may just depend on how you characterize the app.
Your only choice in iOS really is to go with server-side infrastructure. Don't be afraid of charging the user; if the service you're providing is really useful, people will pay. I do get that it's a lot of extra work, etc., but it really is the only way.
Newsstand apps can only download data once a day, and they still require a server-side push notification to start the download, so you would have to put some infrastructure in place. More importantly, though, Apple is actually quite strict about what belongs in Newsstand; I've been through this a few times: you don't necessarily have to be a magazine/periodical, but your app should be primarily used for content distribution.
I think you have one further option, location based updates, but this depends on your users moving around regularly.
See e.g. http://blog.instapaper.com/post/24293729146 and http://blog.news.me/post/21643399885/introducing-paper-boy-automatically-download-your-news
I'm not even sure how to ask this question, but I'll give it a shot.
I have a program in C# which reads in values from sensors on a manufacturing line that are indicative of the line's health. These values update every 500 milliseconds. I have four lines that this is done for. I would like to write an "overview" program which will be able to access these values over the network to give a good summary of how the factory is doing. My question is: how do I get the values from the C# programs on the lines to the C# overview program in real time?
If my question doesn't make much sense, let me know and I'll try to rephrase it.
Thanks!
You have several options:
MSMQ
Write the messages in MSMQ (Microsoft Message Queuing). This is an (optionally) persistent and fast store for transporting messages between machines.
Since you say that you need the messages in the other app in near real time, it makes sense to use MSMQ because you do not want to write logic in that app for handling large amounts of incoming messages.
Keep MSMQ in the middle and take out what you need and, most importantly, when you can.
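As a rough illustration using System.Messaging (the queue path and message type are placeholders; a queue on another machine would need a FormatName-style path instead of the local private path shown):

```csharp
using System;
using System.Messaging;

// Hypothetical message type the line programs send to the overview program.
public class LineHealthReading
{
    public string LineId { get; set; }
    public double Value { get; set; }
    public DateTime TakenAt { get; set; }
}

class LineHealthQueue
{
    // Placeholder local queue path for the sketch.
    const string Path = @".\Private$\LineHealth";

    static MessageQueue Open()
    {
        var queue = MessageQueue.Exists(Path)
            ? new MessageQueue(Path)
            : MessageQueue.Create(Path);
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(LineHealthReading) });
        return queue;
    }

    // Called by a line program every time it has a new reading.
    public static void Send(LineHealthReading reading)
    {
        using (var queue = Open())
            queue.Send(reading, reading.LineId);
    }

    // Called by the overview program; blocks until a message arrives.
    public static LineHealthReading Receive()
    {
        using (var queue = Open())
            return (LineHealthReading)queue.Receive().Body;
    }
}
```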
WCF
The other app could expose a WCF service which can be called by your realtime app each time there's data available. The endpoint could be over net.tcp, meaning low overhead, especially if you send small messages.
Other options include what has been said before: database, file, etc. So you can make your choice between a wide variety of options.
It depends on a number of things, I would say. First of all, is it just the last value of each line that is interesting for the 'overview' application or do you need multiple values to determine line health or do you perhaps want to have a history of values?
If you're only interested in the last value, I would directly communicate this value to the overview app. As suggested by others, you have numerous possibilities here:
Raw TCP using TcpClient (may be a bit too low-level).
Expose a http endpoint on the overview application (maybe it's a web application) and post new values to this endpoint.
Use WCF to expose some endpoint (named pipes, net.tcp, http, etc.) on the overview application and call this endpoint from each client application.
Use MSMQ to have each client enqueue messages that are then picked up by the overview app (also directly supported by WCF).
If you need some history of values, or you need multiple values to determine line health, I would go with a database solution. Then again, you have a choice: does each client write to the database, or does each client post to the overview app (using any of the communication means described above), which then writes to the database?
Without knowing any more constraints for your situation, it's hard to decide between any of these.
You can use named pipes (see http://msdn.microsoft.com/en-us/library/bb546085.aspx) to have a fast way to communicate between two processes.
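For example, a bare-bones sketch with System.IO.Pipes (the pipe name and message format are made up):

```csharp
using System.IO;
using System.IO.Pipes;

// Server side (e.g. the overview program): wait for a connection and read one line.
class PipeServerExample
{
    public static string ReadOneMessage()
    {
        using (var server = new NamedPipeServerStream("lineHealthPipe", PipeDirection.In))
        using (var reader = new StreamReader(server))
        {
            server.WaitForConnection();
            return reader.ReadLine();              // e.g. "Line3;0.97"
        }
    }
}

// Client side (a line program): connect to the overview machine and send a value.
class PipeClientExample
{
    public static void SendMessage(string overviewMachine, string message)
    {
        using (var client = new NamedPipeClientStream(overviewMachine, "lineHealthPipe", PipeDirection.Out))
        using (var writer = new StreamWriter(client))
        {
            client.Connect(5000);   // give up after 5 seconds
            writer.WriteLine(message);
            writer.Flush();
        }
    }
}
```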
A database. Put your values into a database and have the other app pull them out of that same database. This is a very common solution to this problem and opens up worlds of new scenarios.
See: relational database
I am trying to build a client-server application using:
C#, MySQL Server
The idea is: I have two PCs (clients) connected to another PC (server),
as shown here:
My questions:
How do I show live data on both clients, so that when one changes a table, the view changes on the other PC?
How do I build a method to manage the clients' access to shared resources (the DB) to prevent errors?
Edit: I don't need source code, just a path to walk through to cross the road.
There are two broad approaches to choose from.
1) Have each client periodically poll the server for updates. Not recommended but easy to implement.
2) Have the server notify the clients of changes. Much more efficient but can be tricky to implement.
To notify clients about changes made by other clients, you should do the following:
Aside from your connection threads you should store references to all currently connected clients, in some kind of synchronized collection (to make sure there are no race conditions).
Now, if any client commits any changes, the server iterates over the other clients and notifies each of them about the change, either with an "Entity X has changed, you should load it again" message or by just pushing the updated entity to the client, hoping that the client will react accordingly.
If you use the first approach, the client now has the choice of either loading the updated entity immediately or loading it the next time it is accessed. The second approach forces the client to cache the data (or not, since the client may just cache the ID and reload the entity later, as if the server had only notified it about the update, like in the first approach).
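A rough sketch of that bookkeeping (the callback interface is a placeholder for whatever transport you end up using: a socket wrapper, a WCF callback channel, etc.):

```csharp
using System;
using System.Collections.Generic;

public interface IClientCallback
{
    void EntityChanged(string entityName, int entityId);
}

// Server-side registry of connected clients, guarded by a lock so that
// connect/disconnect and notifications don't race each other.
public class ClientRegistry
{
    private readonly object _sync = new object();
    private readonly List<IClientCallback> _clients = new List<IClientCallback>();

    public void Add(IClientCallback client)    { lock (_sync) _clients.Add(client); }
    public void Remove(IClientCallback client) { lock (_sync) _clients.Remove(client); }

    // Called after a client commits a change: tell everyone else to reload.
    public void NotifyOthers(IClientCallback origin, string entityName, int entityId)
    {
        IClientCallback[] snapshot;
        lock (_sync) snapshot = _clients.ToArray();
        foreach (var client in snapshot)
        {
            if (ReferenceEquals(client, origin)) continue;
            try { client.EntityChanged(entityName, entityId); }
            catch { Remove(client); }   // drop clients whose connection has gone away
        }
    }
}
```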
If you can (for whatever reason) not trust the concurrent-access safety of your database, you should employ something like a single-threaded task queue (in the simplest case; there are more optimized approaches, which allow parallel reads, prioritization and such, but implementing those is really a pain).
First, you might want to consider a middle tier that interacts with both the clients and the DB (ASP? COM? custom built?). Otherwise, the individual clients will most likely need timers to check the last time the DB was updated.
As for the sharing issue, it is a database. Databases are designed for concurrent access, so I'm not sure about the error part. If you are using C# and are really worried about it, ADO.NET has a "pessimistic" mode for connecting to the DB, but at the cost of performance.
Scenario
I've written a distributed application in C# using WCF.
It uses Client/Server architecture, implementing the Publisher/Subscriber design pattern for "pushing" new data to the client.
The server-side is hosted in a windows service, the client is a windows forms app.
The server-side continually loops through a series of processes and sends the results to the client.
I want to add a whole area to the application for monitoring everything that is going on server-side.
Problem
Here is where I am a bit stuck - I can't decide how I should monitor this stuff.
Thoughts
Do I create an object for storing lots of different information - logs of where the process is up to in the loop on the server-side, exceptions if any, errors, etc.?
I guess the real question is, how can I successfully maintain a monitoring aspect of the application that gives me relevant information?
Perhaps a central cache on the server-side that gets "snapped" at a point in time every so often and updates the client with the info?
Do you want to know what's currently going on in the server or do you also want to keep a history of what has happened?
If you only want to know what is going on at this moment, my solution would be to maintain the current server state in-memory (this shouldn't be too hard) and have the monitoring client call the server when it wants to know what is happening.
If you want to keep a history of what has happened, you need some data store where the server can write events to. The monitoring client can then read this data store to show what is happening now and what has happened in the past. Even better would be if the client did not have direct access to this data store but instead contacts the server to obtain the relevant information. This way you hide the implementation details of your monitoring history from the client.
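For the "current state only" option, here is a rough sketch of what that in-memory state could look like, exposed through a service operation the monitoring client can call (all names are illustrative):

```csharp
using System;
using System.ServiceModel;

// A single, lock-protected snapshot that the server-side loop updates
// and a monitoring client can request on demand.
public class ServerStatus
{
    public string CurrentStep { get; set; }
    public DateTime LastLoopCompleted { get; set; }
    public int ErrorsSinceStart { get; set; }
}

[ServiceContract]
public interface IMonitoring
{
    [OperationContract]
    ServerStatus GetCurrentStatus();
}

public class MonitoringService : IMonitoring
{
    private static readonly object Sync = new object();
    private static readonly ServerStatus Status = new ServerStatus();

    // Called from the server-side loop as it progresses.
    public static void Update(Action<ServerStatus> change)
    {
        lock (Sync) { change(Status); }
    }

    public ServerStatus GetCurrentStatus()
    {
        lock (Sync)
        {
            // Return a copy so the caller never sees a half-updated snapshot.
            return new ServerStatus
            {
                CurrentStep = Status.CurrentStep,
                LastLoopCompleted = Status.LastLoopCompleted,
                ErrorsSinceStart = Status.ErrorsSinceStart
            };
        }
    }
}
```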