Scenario:
4 users launch separate instances of the same client program (WinForms) that is connected to a database-backed To-Do list.
The first user selects the 3rd To-Do list item.
How do I update/refresh the other 3 users' screens to reflect that item #3 is no longer available?
My thought was a table that contains the last update date-time stamp. Then a timer would check every few seconds to see if there have been any changes.
UPDATE1:
Thanks to all - there are definitely a number of valid answers.
I went with a simpler version of the scenario that Icemanind recommended.
As Lucas suggested, you can implement a 'Push'-style system where, whenever an entity is modified, the change is 'Pushed' to the other connected users. This can be a bit complex. Working with a legacy system, the way we handle this is through a 'Change Number' column, but really it can be anything that is updated each time the record is modified.
When a user attempts to modify an entity we query the database to row-lock that entity where the 'Change Number' reflects the 'Change Number' the user currently has.
If the lock is successful the user is able to update/delete the entity. When they are done they 'Save/Commit' and 'Change Number' on the entity is increased.
If they fail to get the row-lock and the 'Change Number' was the same, we display a message that the entity they requested is in use by another user. If the 'Change Number' was different then the message states they must refresh their view.
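A simplified optimistic variant of the 'Change Number' scheme above can be sketched like this (table and column names are illustrative, not from the original post; the row-lock step is folded into a single guarded UPDATE):

```csharp
using System.Data.SqlClient;

public static class TodoRepository
{
    // Returns true if the update succeeded; false means the row's
    // ChangeNumber no longer matches, i.e. another user modified it
    // first and this user must refresh their view.
    public static bool TryUpdateItem(string connStr, int itemId,
                                     int expectedChangeNumber, string newText)
    {
        const string sql = @"
            UPDATE dbo.TodoItems
            SET    ItemText     = @text,
                   ChangeNumber = ChangeNumber + 1
            WHERE  Id = @id AND ChangeNumber = @expected;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@text", newText);
            cmd.Parameters.AddWithValue("@id", itemId);
            cmd.Parameters.AddWithValue("@expected", expectedChangeNumber);
            conn.Open();
            // 0 rows affected means the guard failed: show the
            // "refresh your view" message described above.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}
```

The guarded UPDATE is atomic, so the check-and-increment cannot race with another user's save.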
Yes. The best way to do this is to implement a "push" type system. Here is how it would work. Whenever someone clicks something on the client end, the client would send a message to the server. The server would need to receive this signal and then the server would send out a refresh message to all clients connected to the server.
I don't know how your client or server is coded, but you'll want to create a thread on the server that "listens" for incoming messages from clients; once it receives a message, it puts it into a queue, then returns to listening for more messages. A second thread on the server processes the messages in the queue.
On the client side, you'll also want a second thread that listens for incoming messages from the server. Once a message is received, it can process the message and take whatever action is necessary.
A pretty decent tutorial on client/server and socket programming can be found here: http://www.codeproject.com/KB/IP/serversocket.aspx
Of course, it is a guide. You will need to modify it as you see fit.
Hope this makes sense and good luck!
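The two-thread server described above (one accepting and queueing messages, one draining the queue and broadcasting a refresh) could be sketched roughly like this. Port, framing, and error handling are all placeholders, not from the original answer:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Threading;

public class RefreshServer
{
    private readonly TcpListener _listener = new TcpListener(IPAddress.Any, 9000);
    private readonly BlockingCollection<byte[]> _queue = new BlockingCollection<byte[]>();
    private readonly List<TcpClient> _clients = new List<TcpClient>();

    public void Start()
    {
        _listener.Start();
        new Thread(AcceptLoop) { IsBackground = true }.Start();
        new Thread(ProcessLoop) { IsBackground = true }.Start();
    }

    private void AcceptLoop()
    {
        while (true)
        {
            var client = _listener.AcceptTcpClient();
            lock (_clients) _clients.Add(client);
            // Dedicated reader per client: receive a message, queue it,
            // go back to listening.
            new Thread(() => ReadLoop(client)) { IsBackground = true }.Start();
        }
    }

    private void ReadLoop(TcpClient client)
    {
        var buffer = new byte[1024];
        var stream = client.GetStream();
        int read;
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            var msg = new byte[read];
            Array.Copy(buffer, msg, read);
            _queue.Add(msg); // hand off to the processing thread
        }
        lock (_clients) _clients.Remove(client); // client disconnected
    }

    private void ProcessLoop()
    {
        // Second thread: drain the queue and broadcast a refresh
        // message to every connected client.
        foreach (var msg in _queue.GetConsumingEnumerable())
        {
            lock (_clients)
                foreach (var c in _clients)
                    c.GetStream().Write(msg, 0, msg.Length);
        }
    }
}
```

The client side would mirror the `ReadLoop` idea: a background thread blocking on `stream.Read` and marshalling received refresh messages onto the UI thread.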
You could implement a 'Push' system where when 1 user updates something, the server sends an update message to all connected clients.
I'd opt for an isDirty timestamp/flag on the items. No need to get all the items again, and no need to create a difficult push system. Re-read the items changed since the last call every now and then.
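The timestamp-polling approach above could look something like this in a WinForms client. The `dbo.TodoItems.LastModified` column and the event name are hypothetical; a `System.Windows.Forms.Timer` is used so the tick runs on the UI thread:

```csharp
using System;
using System.Data.SqlClient;
using System.Windows.Forms;

public class ChangePoller
{
    private readonly Timer _timer = new Timer { Interval = 5000 }; // every 5 s
    private readonly string _connStr;
    private DateTime _lastSeen = DateTime.MinValue;

    // Raised when newer rows exist; the form re-queries just those rows.
    public event EventHandler ChangesDetected;

    public ChangePoller(string connStr)
    {
        _connStr = connStr;
        _timer.Tick += (s, e) => CheckForChanges();
        _timer.Start();
    }

    private void CheckForChanges()
    {
        using (var conn = new SqlConnection(_connStr))
        using (var cmd = new SqlCommand(
            "SELECT MAX(LastModified) FROM dbo.TodoItems", conn))
        {
            conn.Open();
            var result = cmd.ExecuteScalar();
            if (result is DateTime latest && latest > _lastSeen)
            {
                _lastSeen = latest;
                ChangesDetected?.Invoke(this, EventArgs.Empty);
            }
        }
    }
}
```

Only one cheap scalar query runs per tick; the full item reload happens solely when something actually changed.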
If you're using SQL Server as your store, and ADO.NET or something on top of that for your data access, you should also check into SQL Dependency.
Basically, you can create a query from a table, and you can instruct SQL Server to notify you if any of the data in that query result changes.
See some intro articles on the topic:
Using SqlDependency for data change events
SqlDependency: creating a smarter cache
Using SqlDependency to monitor SQL Database changes
Query notifications in ADO.NET 2.0
It requires a bit of initial setup effort, but the huge advantage here is: you'll be notified automagically when something you're interested in (which is part of your query result set) changes, and you can react to that. No messy polling or anything....
Related
I have a webform with certain fields whose values are displayed on it. These fields are editable. There is a list of (time-consuming) things that need to happen following the edits:
1. All the edited values have to be entered into the database.
2. A scheduler picks up these values and runs certain modules.
So when edits are made in quick succession, we can't go ahead and perform these updates immediately. Before we enter the values into the database, we check whether two edits were made to the same field; if so, we pick only the latest edit and enter it into the db. So the edited values have to be picked up periodically (the interval must be configurable) so that we can avoid running the expensive updates twice.
I am planning to design this using push notification and a queue. That is, whenever an edit is made it will be pushed into the queue. The queue will be checked periodically to collect the updates, and finally the data is pushed into the database. Is there any better way of doing this? Sorry for the lengthy explanation.
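The "keep only the latest edit per field" buffer described above collapses naturally into a dictionary keyed by field name; a periodic flush job then drains it and writes one update per field. This is a minimal sketch, with all names invented for illustration:

```csharp
using System.Collections.Generic;

public class EditBuffer
{
    private readonly object _gate = new object();
    private Dictionary<string, string> _pending = new Dictionary<string, string>();

    // Called on every edit; a later edit to the same field simply
    // overwrites the earlier one, so duplicates never reach the DB.
    public void Push(string field, string value)
    {
        lock (_gate) _pending[field] = value;
    }

    // Called by the configurable periodic flush; returns the batch
    // and starts a fresh one atomically.
    public Dictionary<string, string> Drain()
    {
        lock (_gate)
        {
            var batch = _pending;
            _pending = new Dictionary<string, string>();
            return batch;
        }
    }
}
```

The flush timer would call `Drain()`, write each surviving value to the database, and let the scheduler pick them up from there.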
This sounds like you need something like NServiceBus:
http://particular.net/nservicebus
You can get further information about what service bus does here:
What is a servicebus?
I would consider looking into SignalR, which allows bi-directional communication between the client and the server. It sounds like you have quite a lot of processing "kicking off", and you may be able to orchestrate it better by using SignalR to push/pull info to and from the server as necessary - just something worth considering that might help!
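For a sense of scale, a SignalR hub that fans an edit notification out to every other connected client can be very small. This sketch assumes classic SignalR 2.x (the `Microsoft.AspNet.SignalR` package); the hub name, method name, and client-side handler name are all invented:

```csharp
using Microsoft.AspNet.SignalR;

public class EditsHub : Hub
{
    // A client calls this after saving an edit; SignalR invokes the
    // dynamic "editSaved" handler on every other connected client.
    public void EditSaved(string fieldName)
    {
        Clients.Others.editSaved(fieldName);
    }
}
```

SignalR handles the connection management (WebSockets with fallbacks), which is most of the hard part of a hand-rolled push system.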
I am developing a Client-Server application using C# .NET Winforms with SQL Server 2008. The client pc's connect to the database server via the LAN.
The task I want to achieve is when an insert (or update or delete) is performed on one client-PC, all other clients must get that update in real-time.
I am currently using timers, so that each client queries the database every 15 seconds then refreshes the gridviews, combo boxes and list boxes. But this makes the application slow and bulky to use.
What is the correct method to use in such scenario. What are such operations called (correct terminology)? Should I use windows services? or same application with background threads ?
First of all, it's Windows, so it can never be truly real-time.
The solution that Igby Largeman suggests is certainly possible. It does have the disadvantage that it can cause very heavy network traffic, because every time something changes in the database, the change is broadcast to all the clients.
You also have to consider the possibility that something clogs up the communication between the server and one or more clients, so real-time is out of the question.
that's tricky!
If you really want a user with a grid open on PC A to see data inserted on PC B without performing any action, you will need a timer to refresh the grid. But I don't think this is a good approach; you can easily overload the system.
The good practice here is to display only the data being manipulated. For example, let's say you want to alter a client's name. You build a search form with a grid where the user can enter search parameters (to filter the data), and once the client is found and altered, you perform another search against the DB to get and display the new data.
But let's say another user had the same grid open, showing that client, before you performed the alteration. It is showing the old value, but once they click on it to see the details, you'll perform a search against the DB to get the new data, so that would be OK.
We have a reporting app that needs to update its charts as the data gets written to its corresponding table (the report is based on just one table). Currently we keep the last-read sessionid + rowid (a unique combo) in memory, and a polling timer does a select where rowid > the one we have in memory, to get the latest rows added. The timer runs every second or so, and the fast SQL reader does its job well. So far so good.
However, I feel this is not optimal, because sometimes there are pauses in the data writes by design (a user clicking the pause button on the system that writes the data, for example). Meanwhile our timer keeps hitting the db and does not get any new rows. No errors or anything. How is this situation normally handled? The app that writes the data is separate from the reporting app, and the two apps run on different machines.
Bottom line: how do I get data into a C# app as and when it is written into a SQL Server table, without polling unnecessarily? Thank you.
SQL Server has the capability to notify a waiting application for changes, see The Mysterious Notification. This is how SqlDependency works. But this will only work up to a certain threshold of data change rate. If your data changes too frequently then the cost of setting up a query notification just to be immediately invalidated by receiving the notification is too much. For really high end rates of changes the best place is to notify the application directly from the writer, usually achieved via some forms of a pub-sub infrastructure.
You could also attempt a mixed approach: poll for changes in your display application and only set up a query notification if there are no changes. This way you avoid the cost of constantly setting up Query Notifications when the rate of change is high, but you also get the benefits of not polling once the writes settle down.
Unfortunately the only 'proper' way is to poll, however you can reduce the cost of this polling by having SQL wait in a loop (make sure you WAITFOR something like 30ms each loop pass) until data is available (or a set time period elapses, e.g. 10s). This is commonly used when writing SQL pseudoqueues.
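The wait-in-a-loop pattern described above might look like this in T-SQL (the table name is illustrative; the 30 ms sleep and 10 s budget are the figures suggested in the answer):

```sql
-- Long-poll: return as soon as rows appear, or after ~10 s regardless.
DECLARE @deadline datetime = DATEADD(SECOND, 10, GETDATE());

WHILE NOT EXISTS (SELECT 1 FROM dbo.WorkQueue)
      AND GETDATE() < @deadline
BEGIN
    WAITFOR DELAY '00:00:00.030';  -- sleep ~30 ms each pass
END;

SELECT * FROM dbo.WorkQueue;       -- empty result set if the deadline hit
```

The client re-issues the call immediately after each return, so it is still polling, but at the cost of one round trip per ~10 seconds instead of one per second.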
You could use extended procs - but that is fragile, or, you could drop messages into MSMQ.
If your reporting application is running on a single server then you can have the application that is writing the data to SQL Server also send a message to the reporting app letting it know that new data is available.
However, having your application connect to the server to see if new records have been added is the most common way of doing it. As long as you do the polling on a background thread, it shouldn't affect the performance of your application at all.
You will need to push the event out of the database into the realm of your application.
The application will need to listen for the message. (you will need to decide what listening means - what port, what protocol, what format etc.)
The database will send the message based on the event through a trigger. (you need to look up how to use external application logic in triggers)
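As a starting point for the trigger half, the simplest variant just records each change in a notification table that a listener (SqlDependency, Service Broker, or the polling loop above) can watch. Table and trigger names here are purely illustrative:

```sql
-- Fires on any write to dbo.Items and leaves a breadcrumb for listeners.
CREATE TRIGGER trg_Items_Changed
ON dbo.Items
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ChangeEvents (OccurredAt) VALUES (GETDATE());
END;
```

Sending a network message directly from the trigger (the "external application logic" the answer mentions) is usually done via Service Broker rather than extended procs, for the fragility reasons noted above.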
I have a form with a list that shows information from a database. I want the list to update in real time (or almost real time) every time something changes in the database. These are the three ways I can think of to accomplish this:
1. Set up a timer on the client to check every few seconds: I know how to do this now, but it would involve making and closing a new connection to the database hundreds of times an hour, regardless of whether there was any change.
2. Build something like a TCP/IP chat server, where every time a program updates the database it also sends a message to the TCP/IP server, which in turn sends a message to the client's form: I have no idea how to do this right now.
3. Create a web service that returns the date and time the table was last changed, and have the client compare that time to the last time it updated: I could figure out how to build a web service, but I don't know how to do this without making a connection to the database anyway.
The second option doesn't seem like it would be very reliable, and the first seems like it would consume more resources than necessary. Is there some way to tell the client every time there is a change in the database without making a connection every few seconds, or is it not that big of a deal to make that many connections to a database?
Try the SqlDependency class. It will fire an OnChange event whenever the results of its SqlCommand change.
EDIT:
Note that if there are large numbers of copies of your program running, it can generate excessive server load. If your app will be publicly available, it might not be a good idea.
Also note that it can fire the event on different threads, so you'll need to use Control.BeginInvoke to update your UI.
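A rough sketch of that SqlDependency usage, including the `Control.BeginInvoke` hop back to the UI thread, might look like this. It assumes `SqlDependency.Start(connStr)` was called once at startup, Service Broker is enabled on the database, and the table/method names are invented. Note the query must use explicit column names and two-part table names to be notification-eligible:

```csharp
using System.Data.SqlClient;
using System.Windows.Forms;

public partial class TodoForm : Form
{
    private readonly string _connStr;

    private void Subscribe()
    {
        using (var conn = new SqlConnection(_connStr))
        using (var cmd = new SqlCommand(
            "SELECT Id, ItemText, IsTaken FROM dbo.TodoItems", conn))
        {
            var dep = new SqlDependency(cmd);
            dep.OnChange += OnDataChanged;
            conn.Open();
            // The command must actually execute for the notification
            // to be registered with SQL Server.
            cmd.ExecuteReader().Close();
        }
    }

    private void OnDataChanged(object sender, SqlNotificationEventArgs e)
    {
        // OnChange fires on a worker thread: marshal to the UI thread
        // before touching any controls.
        BeginInvoke(new MethodInvoker(() =>
        {
            Subscribe();  // notifications are one-shot: re-register first
            ReloadGrid(); // hypothetical method that re-queries and rebinds
        }));
    }

    private void ReloadGrid() { /* re-query the table and rebind the grid */ }
}
```

The one-shot re-registration inside the handler is easy to forget; without it you only ever see the first change.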
You can use Event Notifications in SQL Server to raise events in .Net letting you know that the data has changed. See article linked below.
Eric
SQL Server Event Notification
With ASP.NET you can cache query results in memory and set up a dependency that registers with the SQL Server. When something within the data changes, the cache is refreshed automatically. Looking into this might point you in a good direction.
http://msdn.microsoft.com/en-us/library/ms178604.aspx
We are developing a Windows Forms application that will be installed on about 1,000 employee pcs. Users may run multiple instances of the application at the same time. The clients are all on a single intranet.
Changes in the application may cause database record changes, which in turn must be communicated to the other clients so their UIs are updated.
Our team has talked about two different approaches:
1. Multicast packets
The source client modifies the records and then sends out a multicast packet with a payload that something has changed. The other clients receive this and fetch the data specified. We need to account for the cases when the packet is not received, falling back onto actively retrieving the data.
My question at this point is how does a client know it didn't receive a packet? (don't know what you don't know) Which brings us to some sort of event log with timestamps in the database, and UI controls track the last time they were updated. They come into focus, check their timestamps, and update as needed.
Someone else said the UI elements would just reload every time they come into focus (think modes in outlook, bringing controls to the front of a stack workspace with CAB). And that the multicast is to update the clients that their current context has changed. If they miss it they work with stale data until they change modes and come back.
2. WCF and Callbacks
Clients register with WCF contracts for callbacks over a TCP binding. The primary technical concern with this is the server maintaining many open sockets. We have read up on how the connection isn't open in the traditional sense; it is put to sleep for a maximum of 90 seconds and then re-established at that point. We also read about the maximum number of open connections a Windows 2003 Server machine can handle, and how to modify that in the registry.
If we have 1,000 open socket connections to a server is this going to fall apart?
If anyone has faced this same situation and tried or evaluated the WCF approach we would love to hear about it.
I have not implemented a situation like this. However, I would think that one of the duplex bindings would not necessarily have a high overhead.
It all depends on how much information the server needs to send back to the clients. I understand you said the information will be used for them to update their UI. However it seems possible that they may not all need the same amount of information at the same time. For instance, if information about the Western region has changed, all 1000 clients may want to know that there is a change, and they may all want to update summary-level information about the Western region, but perhaps only 1/4 of them may need to see the details of the change.
If this is the case, then I'd recommend that the callback only provide information about what has changed, mostly at a summary level. Let those clients who are interested in the details of the change ask for the details. You might even go as far as to provide all the details for the top one or two levels of hierarchy, then for the rest, just include information saying "this changed at time". That way, depending on the level of hierarchy being viewed by a particular client, the client could then ask or not ask.
If necessary, you could batch updates together. If the clients only need to be updated once per second, then you could accumulate the changes for the last second and send them all at once.
You may also want to use some of the Peer to Peer bindings for some tasks. Perhaps the clients in a particular area of your business would like to know a little about what each other are working on - that sort of thing.