I want to rewrite my program, which currently uses DataSets over WinForms, and move it to WPF.
Currently the program uses Citrix for user login.
When someone performs an action on the data, the main thread applies the business logic (BI) to the change and sends it back to the server, or receives new (or modified) data from the server and adds it to the cache.
The problem today is the extensive use of locks and unlocks every time a user works on the data or a message arrives from the server.
I'm looking for a data entity or some other way to work multithreaded on the client side.
That means I would like every thread to be able to apply the BI to the data and communicate with the server while staying synchronized with all the other users and their changes.
I looked at Entity Framework, but it is not thread safe, meaning that when an update arrives from the server I'll need to lock my EF context to apply it, and lock it again when the user works on the data inside it.
Is there any way to do this more easily, without making the programmer lock/unlock the data every time?
If you are creating a multithreaded application, you cannot avoid locks entirely.
Here are a few things you can apply while using EF:
Don't use a single shared context with locks (no singleton pattern).
Instantiate and dispose one context per request, combined with some concurrency control mechanism (e.g. optimistic concurrency).
Avoid locking on the context as much as possible.
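For example, the context-per-operation pattern looks roughly like this (a minimal EF6-style sketch; MyDbContext, Items, and the property names are placeholders for your own model):

```csharp
using System;
using System.Data.Entity;                 // EF6
using System.Data.Entity.Infrastructure;  // DbUpdateConcurrencyException

public void ApplyChange(int entityId, string newValue)
{
    // One short-lived context per operation: nothing is shared across
    // threads, so no locks are needed around the context itself.
    using (var db = new MyDbContext())
    {
        var item = db.Items.Find(entityId);
        item.Value = newValue;
        try
        {
            db.SaveChanges(); // optimistic concurrency is checked here
        }
        catch (DbUpdateConcurrencyException)
        {
            // Another thread/user changed the row first: reload and
            // retry, or surface the conflict to the caller.
        }
    }
}
```

Conflicts between threads are then handled by the database's concurrency check rather than by client-side locks.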
My ApplicationUser model contains a property:
public bool SubscribedToNewsletter { get; set; }
I would like to make sure that whenever I update its value in the database, an external API is called to add or remove the user from a list in my email automation system, without me having to call the method manually, so that the two stay synchronized regardless of the programmer's intentions.
Is there built-in functionality for this in ASP.NET? Or do I have to extend the UserManager class and centralize all the calls that update the database?
Calling an external API to keep in sync with your application data is a little more complicated than making a simple change in a domain model.
If you did this, would you call the API before or after you persist changes to the database? If before:
How do you make sure that the change is going to be accepted by the DB?
What if the API call fails? Do you refuse to update the DB?
What if the API call succeeds but the application crashes before updating the DB or the DB connection is temporarily lost?
If after:
The API could be unavailable (e.g. outage). How do you make sure this gets called later to keep things in sync?
The application crashes after updating the DB. How do you make sure the API gets called when it restarts?
There are a few different ways you could potentially solve this. However, bear in mind that by synchronising to an external system you lose the ACID semantics you may be used to, and your application will have to deal with eventual consistency.
A simple solution would be to have another database table that acts as a queue of API calls to be made (it's important that this is ordered by time). When the user's subscription is updated, you add a row with the relevant details as part of the same DB transaction. This ensures the request to call the API is always recorded together with the update.
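As a rough sketch (assuming EF and hypothetical ApiCallQueue/ApiCallQueueItem names), the queue row is added in the same SaveChanges call, and therefore the same transaction, as the data change:

```csharp
using (var db = new ApplicationDbContext())
{
    var user = db.Users.Find(userId);
    user.SubscribedToNewsletter = subscribed;

    // Record the pending API call in the same transaction as the data
    // change, so the request exists if and only if the update commits.
    db.ApiCallQueue.Add(new ApiCallQueueItem
    {
        CreatedAt = DateTime.UtcNow, // the queue is processed in time order
        UserId = user.Id,
        Action = subscribed ? "subscribe" : "unsubscribe"
    });

    db.SaveChanges(); // one transaction: both rows or neither
}
```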
Then you would have a separate process (or thread) that polls this table. You could use pg_notify to support push notifications rather than polling.
This process can read the row (in order) then call the relevant API to make the change in the external system. If it succeeds, it can remove the row. If it fails, it can try again using an exponential back-off. Continued failures should be logged for investigation.
The worst case scenario now is that you have at-least-once delivery semantics for updating the system (e.g. if API call succeeded but process crashed before removing the row then the call would be made again when process restarted). If you needed at-most-once, you would remove the row before attempting to make the call.
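A minimal sketch of such a worker (ApiCallQueue, Attempts, CallExternalApiAsync and Log are placeholders, and the back-off cap is arbitrary):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public async Task ProcessQueueAsync(CancellationToken ct)
{
    while (!ct.IsCancellationRequested)
    {
        using (var db = new ApplicationDbContext())
        {
            // Always take the oldest pending row first.
            var item = db.ApiCallQueue.OrderBy(q => q.CreatedAt).FirstOrDefault();
            if (item == null)
            {
                await Task.Delay(TimeSpan.FromSeconds(5), ct); // idle poll interval
                continue;
            }

            try
            {
                await CallExternalApiAsync(item); // your email-automation client
                db.ApiCallQueue.Remove(item);     // at-least-once: remove only after success
                db.SaveChanges();
            }
            catch (Exception ex)
            {
                item.Attempts++;                  // persist the failure count
                db.SaveChanges();
                Log(ex);                          // continued failures need investigation
                var delay = TimeSpan.FromSeconds(Math.Min(Math.Pow(2, item.Attempts), 300));
                await Task.Delay(delay, ct);      // exponential back-off, capped
            }
        }
    }
}
```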
This is obviously glossing over some of the details and would need to be modified for a high-throughput system, but it should hopefully explain some of the principles.
I usually tackle this sort of thing with LISTEN and NOTIFY plus a queue table. You send a NOTIFY from a trigger when there's a change of interest, and insert a row into a queue table. A LISTENing connection notices the change, grabs the new row(s) from the queue table, actions them, and marks them as completed.
Instead of LISTEN and NOTIFY you can just poll a queue table; LISTEN and NOTIFY are an optimisation.
To make this reliable, either the actions you take must be in the same DB and done on the same connection as the update to the queue, or you need to use two-phase commit to synchronise actions. That's beyond the scope of this sort of answer, as you need a transaction resolver for crash recovery etc.
If it's safe to call the API multiple times (it's idempotent), then on failure midway through an operation it becomes fine to just execute all entries in the pending queue table again on crash recovery/restart/etc. You generally only need 2PC etc if you cannot safely repeat one of the actions.
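For illustration, the listening side in C# could look roughly like this with Npgsql (the queue_changed channel name and ProcessPendingQueueRows are assumptions, and the trigger that issues the NOTIFY is not shown):

```csharp
using Npgsql;

using (var conn = new NpgsqlConnection(connectionString))
{
    conn.Open();
    conn.Notification += (sender, e) =>
    {
        // A new queue row exists: drain the queue table and action the rows.
        ProcessPendingQueueRows();
    };

    using (var cmd = new NpgsqlCommand("LISTEN queue_changed", conn))
        cmd.ExecuteNonQuery();

    while (true)
        conn.Wait(); // blocks until a NOTIFY arrives on this connection
}
```

Remember to drain the queue table once on start-up (and after a crash) before waiting, since notifications sent while you were not listening are lost.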
I need an ORM that is suitable for a stateful application. I'm going to keep entities between requests in a low-latency realtime game server with persistent client connections. There is only one server instance connected to the database, so no data can be changed from "outside" and the server can rely on its cache.
When a user logs in to the server remotely, their whole profile is loaded into server memory. Several higher-level services are also created for each user to operate on the profile data and provide functionality. They can also have internal fields (state) to store temporary data. When a user wants to change their signature, they ask the corresponding service to do so. The service tracks how frequently the user changes their signature and allows it only once per ten minutes (for example); such a short interval is not tracked in the DB, it is temporary state. The change should be stored to the DB by executing only one query: UPDATE users SET signature = ... WHERE user_id = .... When the user logs off, their profile is unloaded from server memory after minutes/hours of inactivity. The DB here is only a storage. This is what I call stateful.
Some entities are considered "static data" and are loaded only once at application start. These can be referenced from other, "dynamic" entities. Loading a "dynamic" entity should not require reloading the referenced "static data" entity.
Update/Insert/Delete should set/insert/delete only the changed properties/entities, even with a "detached" entity.
Write operations should not load data from the database first (perform a SELECT) each time just to detect changes. (State can be tracked in a dynamically generated subclass.) I have the state locally; there is no point loading anything. I want to keep tracking changes even outside of a connection/session scope and "upload" the changes when I choose.
While performing operations, the references of persisted objects should not be changed.
A DB-connection-per-user model is not going to work; the expected concurrent user count is in the thousands.
Entities from the "static data" can be assigned to "dynamic" entity properties (which represent foreign keys), and Update should handle this correctly.
Right now I'm using NHibernate, despite it being designed for stateless applications. It supports reattaching entities to a session, but that looks like very uncommon usage, requires me to rely on undocumented behavior, and doesn't solve everything.
I'm not sure about Entity Framework - can I use it that way? Or can you suggest another ORM?
If the server recreates (or, especially, reloads) user objects each time a user hits a button, it will eat CPU very fast. CPU scales vertically, expensively, and with limited effect; by contrast, if you run out of RAM you can just go and buy more, like horizontal scaling but easier to code for. If you think another approach should be used here, I'm ready to discuss it.
Yes, you can use EF for this kind of application. Keep in mind that under heavy load you will get occasional DB errors, and typically it's faster to recover from errors when your application tracks the changes rather than EF. By the way, you can use NHibernate this way too.
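For what it's worth, EF can be made to issue exactly the single-column UPDATE the question asks for by attaching a detached entity and marking only the changed property. A minimal sketch, assuming EF6 and hypothetical GameDbContext/Users names:

```csharp
using (var db = new GameDbContext())
{
    // Attach the detached, in-memory entity; nothing is loaded from the DB.
    db.Users.Attach(user);

    // Mark only the property that actually changed, so the generated SQL is
    // UPDATE users SET signature = ... WHERE user_id = ... and nothing else.
    db.Entry(user).Property(u => u.Signature).IsModified = true;

    db.SaveChanges();
}
```

Tracking which properties changed while the entity lives outside any context remains your application's job (or that of a generated proxy), as the question itself suggests.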
I have used Hibernate in a stateful desktop application with extremely long sessions: the session starts when the application launches and remains open for as long as the application is running. I had no problems with that. I make absolutely no use of attaching, detaching, reattaching, etc. I know it is not standard practice, but that does not mean it is not doable, or that it has hidden pitfalls. (Edit: but of course read the discussion below for possible pitfalls suggested by others.)
I have even implemented my own change-notification mechanism on top of that (a separate thread polling the DB directly, bypassing Hibernate), so it is even possible to have external agents modify the database while Hibernate is running, and to have your application take notice of these changes.
If you have lots and lots of stuff already working with hibernate, it would probably not be a good idea to abandon what you already have and rewrite it unless you are sure that hibernate absolutely won't do what you want to accomplish.
First, I apologize for the seemingly dumb question; I'm not very strong with databases.
I'm re-designing a desktop application in C# for use over a local network; it is basically a highly specialized ticket-tracking system. Essentially, when users launch the application they'll be asked for their credentials to gain access to the system, and then the application will query the central database (currently a MySQL server running on a local machine) and display the data on screen.
My question is: if four users are connected and two of them enter new data, what is the most efficient method of letting each user know about the new data? Would it be to simply query the database on a timer and update the application with the new data? Or would it be better to create a server application that sits between the users and the database server, performs the queries itself, and notifies each connected user of updated data?
It all depends on how important it is to notify the clients about changes in your database in real time. If your clients have no issue with a delay of a minute or two, you can probably go for the timer approach. But if they really need the data in real time (a delay of less than 1-2 seconds), go for the other approach: create a separate service that polls the database and notifies the client applications of any update. For this you can make use of socket listeners.
Hope that helps!
4 users? On a local LAN? Using simple, indexed queries? Just poll the DB from the clients. Kick off a thread at application start up and have it run a query every 2-5 seconds, then notify the user using whatever is appropriate for background threads updating GUIs in .NET.
This is straightforward, don't over think it. The "hardest" part is the asynchronous notification of the user (which depending on your GUI layout and required actions is probably not a big deal either, thus the quotes).
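A minimal sketch of that polling thread, with hypothetical PollForNewTickets and OnNewTickets methods (here via a timer plus SynchronizationContext to get back onto the UI thread):

```csharp
using System;
using System.Threading;

private Timer _pollTimer;
private SynchronizationContext _ui;

public void StartPolling()
{
    _ui = SynchronizationContext.Current; // capture on the UI thread

    _pollTimer = new Timer(_ =>
    {
        var newTickets = PollForNewTickets(); // runs the query off the UI thread
        if (newTickets.Count > 0)
            _ui.Post(state => OnNewTickets(newTickets), null); // marshal to the GUI
    }, null, TimeSpan.Zero, TimeSpan.FromSeconds(3)); // poll every ~3 seconds
}
```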
I am trying to build a client-server application using:
C#, MySQL Server
The idea is that I have two PCs (clients) connected to another PC (the server),
as shown here:
My questions:
How do I show live data in both clients, so that when one changes a table, the view changes on the other PC as well?
How do I build a method to manage the clients' access to shared resources (the DB) to prevent errors?
Edit: I don't need source code, just a path to walk through to cross the road.
There are two broad approaches to choose from.
1) Have each client periodically poll the server for updates. Not recommended but easy to implement.
2) Have the server notify the clients of changes. Much more efficient but can be tricky to implement.
To notify clients about changes made by other clients, you should do the following:
Aside from your connection threads, you should store references to all currently connected clients in some kind of synchronized collection (to make sure there are no race conditions).
Now, if any client commits any changes, the server iterates over the other clients and notifies each of them about the change, either with an "Entity X has changed, you should load it again" message or by just pushing the updated entity to the client, hoping that the client will react accordingly.
If you use the first approach, the client then has the choice of either loading the updated entity immediately or loading it the next time it is accessed. The second approach forces the client to cache the data (or not, since the client may just cache the ID and reload the entity at another time, as if the server had only notified it about the update, like in the first approach).
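A bare-bones sketch of that broadcast (with a hypothetical ClientConnection type that has a thread-safe Send method):

```csharp
using System;
using System.Collections.Concurrent;

// A ConcurrentDictionary serves as the synchronized collection of clients.
private readonly ConcurrentDictionary<Guid, ClientConnection> _clients =
    new ConcurrentDictionary<Guid, ClientConnection>();

public void BroadcastChange(Guid senderId, string entityId)
{
    foreach (var pair in _clients)
    {
        if (pair.Key == senderId)
            continue; // the client that made the change already knows

        // "Entity X has changed" style message; the client decides whether
        // to reload the entity now or the next time it is accessed.
        pair.Value.Send("CHANGED " + entityId);
    }
}
```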
If you can (for whatever reason) not trust the concurrent-access safety of your database, you should employ something like a single-threaded task queue (in the simplest case; there are more optimized approaches that allow parallel reads and prioritization, but implementing those is a real pain).
First, you might want to consider a middle tier that interacts with both the clients and the DB (ASP? COM? custom-built?). Otherwise, the individual clients will most likely need timers to check the last time the DB was updated.
As for the sharing issue: it is a database, and databases are designed for concurrent access, so I'm not sure about the error part. If you are using C# and are really worried about it, ADO.NET has a "pessimistic" mode for connecting to the DB, but it comes at the cost of performance.
I have an application that, once started, will get some initial data from my database; after that, some functions may update or insert data into it.
Since my database is not on the same computer as the one running the application, and I would like to be able to freely move the application server around, I am looking for a more flexible way to insert/update/query data as needed.
I was thinking of using a web API on a separate thread in my application, with some kind of list where this thread tries to push the pending updates every X minutes; once a given entry has been applied, it is removed from the list.
This way, instead of being held up by the database queries and the like, the application would run freely, queuing whatever has to be updated/inserted, etc.
The main point here is that I can run the functions without worrying about connectivity issues to the database or related problems, since all the changes are queued to be applied to it.
Is this approach OK? Bad? Are there better recommendations for this scenario?
On "can access DB through some web server instead of talking directly to DB server": yes this is very common and recommended approach. It is much easier to limit set of operations exposed through custom API (web services, REST services, ...) than restrict direct communication with DB.
On "sync on separate thread..." - you need to figure out what are requirements of the synchronization. Delayed sync may be ok if you don't need to know latest data and not care if updates from client are commited to storage immediately.