My ApplicationUser model contains a property:
public bool SubscribedToNewsletter { get; set; }
I would like to make sure that whenever its value is updated in the database, an external API is called to add or remove the user from a list in my email automation system, without me having to call that method manually, so the two stay in sync regardless of the programmer's intention.
Is there built-in functionality for this in ASP.NET, or do I have to extend the UserManager class and centralize all the calls that update the database?
Calling an external API to keep in sync with your application data is a little more complicated than making a simple change in a domain model.
If you did this, would you call the API before or after you persist changes to the database? If before:
How do you make sure that the change is going to be accepted by the DB?
What if the API call fails? Do you refuse to update the DB?
What if the API call succeeds but the application crashes before updating the DB or the DB connection is temporarily lost?
If after:
The API could be unavailable (e.g. outage). How do you make sure this gets called later to keep things in sync?
The application crashes after updating the DB. How do you make sure the API gets called when it restarts?
There are a few different ways you could potentially solve this. However, bear in mind that by synchronising with an external system you lose the ACID semantics you may be used to, and your application will have to deal with eventual consistency.
A simple solution would be to have another database table that acts as a queue of API calls to be made (it's important that this is ordered by time). When the user's subscription flag is updated, you add a row with the relevant details as part of the same DB transaction. This ensures the request to call the API is always recorded alongside the update.
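To make that concrete, here is a rough sketch in plain ADO.NET (System.Data.SqlClient); the NewsletterSyncQueue table, the AspNetUsers column names and the surrounding variables (connectionString, userId, subscribed) are all assumptions for illustration:

    // Update the user and record the pending API call in one transaction (an "outbox").
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            using (SqlCommand update = new SqlCommand(
                "UPDATE dbo.AspNetUsers SET SubscribedToNewsletter = @subscribed WHERE Id = @id",
                conn, tx))
            {
                update.Parameters.AddWithValue("@subscribed", subscribed);
                update.Parameters.AddWithValue("@id", userId);
                update.ExecuteNonQuery();
            }

            using (SqlCommand enqueue = new SqlCommand(
                "INSERT INTO dbo.NewsletterSyncQueue (UserId, Subscribed, CreatedAtUtc) " +
                "VALUES (@id, @subscribed, SYSUTCDATETIME())",
                conn, tx))
            {
                enqueue.Parameters.AddWithValue("@id", userId);
                enqueue.Parameters.AddWithValue("@subscribed", subscribed);
                enqueue.ExecuteNonQuery();
            }

            tx.Commit();   // both the user update and the queued API call commit or roll back together
        }
    }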
Then you would have a separate process (or thread) that polls this table. You could use pg_notify to support push notifications rather than polling.
This process can read the row (in order) then call the relevant API to make the change in the external system. If it succeeds, it can remove the row. If it fails, it can try again using an exponential back-off. Continued failures should be logged for investigation.
The worst case scenario now is that you have at-least-once delivery semantics for updating the external system (e.g. if the API call succeeded but the process crashed before removing the row, the call would be made again when the process restarted). If you needed at-most-once semantics, you would remove the row before attempting to make the call.
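A rough sketch of that polling worker; QueueRow, ReadOldestRow, DeleteRow, newsletterApi and logger are all placeholders you would substitute with your own data access and API client:

    while (!cancellationToken.IsCancellationRequested)
    {
        QueueRow row = ReadOldestRow(connectionString);   // SELECT TOP 1 ... ORDER BY CreatedAtUtc
        if (row == null)
        {
            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
            continue;
        }

        TimeSpan delay = TimeSpan.FromSeconds(1);
        while (true)
        {
            try
            {
                await newsletterApi.SetSubscriptionAsync(row.UserId, row.Subscribed);
                DeleteRow(connectionString, row.Id);   // only remove after the call succeeded => at-least-once
                break;
            }
            catch (Exception ex)
            {
                logger.Warn("Newsletter API call failed, retrying in " + delay + ": " + ex.Message);
                await Task.Delay(delay, cancellationToken);
                long doubled = Math.Min(delay.Ticks * 2, TimeSpan.FromMinutes(10).Ticks);
                delay = TimeSpan.FromTicks(doubled);   // exponential back-off, capped at 10 minutes
            }
        }
    }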
This is obviously glossing over some of the details and would need to be modified for a high-throughput system, but it should hopefully explain some of the principles.
I usually tackle this sort of thing with LISTEN and NOTIFY plus a queue table. You send a NOTIFY from a trigger when there's a change of interest, and insert a row into a queue table. A LISTENing connection notices the change, grabs the new row(s) from the queue table, actions them, and marks them as completed.
Instead of LISTEN and NOTIFY you can just poll the queue table; LISTEN and NOTIFY are an optimisation.
To make this reliable, either the actions you take must be in the same DB and done on the same connection as the update to the queue, or you need to use two-phase commit to synchronise actions. That's beyond the scope of this sort of answer, as you need a transaction resolver for crash recovery etc.
If it's safe to call the API multiple times (it's idempotent), then on failure midway through an operation it becomes fine to just execute all entries in the pending queue table again on crash recovery/restart/etc. You generally only need 2PC etc if you cannot safely repeat one of the actions.
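If you happen to be on PostgreSQL, the LISTEN side might look roughly like this with Npgsql; the queue_changed channel name and the DrainQueueTable helper are assumptions, and the corresponding trigger on the queue table would issue NOTIFY queue_changed:

    // Requires the Npgsql package.
    using (var conn = new NpgsqlConnection(connectionString))
    {
        conn.Open();
        conn.Notification += (sender, e) => { /* a NOTIFY arrived; Wait() below returns */ };

        using (var cmd = new NpgsqlCommand("LISTEN queue_changed", conn))
            cmd.ExecuteNonQuery();

        while (true)
        {
            DrainQueueTable();     // process whatever is currently in the queue table
            conn.Wait(30000);      // block until a notification arrives (30 s timeout as a safety net)
        }
    }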
Related
I want to rewrite my program that currently uses DataSets over WinForms and move it to WPF.
Currently the program uses Citrix for the users to log in.
Now, when someone performs some sort of action on the data, the main thread commits the BI on the change and sends it back to the server, or gets new (or modified) data from the server and adds it to the cache.
The problem today is the extensive use of locks and unlocks every time that a user is working on the data or a message arrives from the server.
I'm looking for a data entity or some way to work multithreaded in my client side.
That means I would like every thread to be able to commit the BI on the data and communicate with the server while staying synchronized with all the other users and their changes.
I looked at EF, but it is not thread safe, meaning that when an update arrives from the server I'll need to lock my EF context to update it, and again when the user works on the data inside the context.
Is there any way to do it more easily without making the programmer lock/unlock the data every time?
If you are creating a multithreaded application, you cannot avoid locks entirely.
Here are a few things you can apply while using EF:
Don't use a single shared context with locks (no singleton pattern).
Instantiate and dispose one context per request, together with some concurrency control (e.g. optimistic concurrency), as in the sketch below.
Avoid locking on the context as much as possible.
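For example, a hedged sketch of "one context per operation" with optimistic concurrency; MyDbContext, Orders and the Processed flag are placeholders:

    // One short-lived context per operation; no shared context, no explicit locks.
    // DbUpdateConcurrencyException lives in System.Data.Entity.Infrastructure (EF6).
    public void MarkProcessed(int orderId)
    {
        using (MyDbContext db = new MyDbContext())
        {
            Order order = db.Orders.Find(orderId);
            order.Processed = true;

            try
            {
                db.SaveChanges();   // optimistic concurrency is checked here
            }
            catch (DbUpdateConcurrencyException)
            {
                // Someone else changed the row since we read it: reload, merge or retry.
                throw;
            }
        }
    }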
I have a data entry ASP.NET application. During one complete data entry, many transactions occur. I would like to keep track of all those transactions so that if the user wants to abandon the data entry, all the transactions of which I have been keeping a record can be rolled back.
I am using SQL Server 2008, .NET Framework 4.0, and C#.
This is always a tough lesson to learn for people that are new to web development. But here it is:
Each round trip web request is a separate, stand-alone thread of execution
That means, simply put, each time you submit a page request (click a button, navigate to a new page, even refresh a page) then it can run on a different thread than the previous one. What's more, even if you do get the same thread twice, several other web requests may have been processed by the thread in the time between your two requests.
This makes it effectively impossible to span simple transactions across more than one web request.
Here's another concept that you should keep in mind:
Transactions are intended for batch operations, not interactive operations.
What this means is that transactions are meant to be short-lived, and to encompass several operations executing sequentially (or simultaneously) in which all operations are atomic, and intended to either all complete, or all fail. Transactions are not typically designed to be long-lived (meaning waiting for a user to decide on various actions interactively).
Web apps are not desktop apps. They don't function like them. You have to change your thinking when you do web apps. And the biggest lesson to learn, each request is a stand-alone unit of execution.
Now, above, I said "simple transactions", also known as lightweight or local transactions. There's also what's known as a Distributed Transaction, and to use those requires a Distributed Transaction Coordinator. MSDTC is pretty commonly used. However, DTs perform much more slowly than LWTs. Also, they require that the infrastructure be set up to use a DTC.
It's possible to span a transaction over web requests using a DTC. This is done by "enlisting" in a Distributed Transaction, and then somehow sharing this transaction identifier between requests. But this is a lot of work to set up and deal with, and it has a lot of error-prone situations. It's not something you want to do if you have other options.
In general, you're better off adding the data to a temporary table or tables, and then when the final save is done, transfer that data to the permanent tables. Another option is to maintain some state (such as using ViewState or Session) to keep track of the changes.
One popular way of doing this is to perform operations client-side using JavaScript and then submitting all the changes to the server when you are done. This is difficult to implement if you need to navigate to different pages, however.
From your question, it appears that the transactions are complete when the user exercises the option to roll them back. In such cases, I doubt if the DBMS's transaction rollback semantics would be available. So, I would provide such semantics at the application layer as follows:
Any atomic operation that can be performed on the database should be encapsulated in a Command object. Each command will implement the undo method that would revert the action performed by its execute method.
Each transaction would contain a list of commands that were run as part of it. The transaction is persisted as-is so that it can be operated on later.
The user would be provided with a way to view these transactions that can be potentially rolled back. Upon selection of a transaction by user to roll it back, the list of commands corresponding to such a transaction are retrieved and the undo method is called on all those command objects.
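A minimal sketch of that command abstraction (all type names are illustrative; needs System and System.Collections.Generic):

    public interface IUndoableCommand
    {
        void Execute();
        void Undo();
    }

    // Example: inserting a row can be undone by deleting it again.
    public class InsertOrderLineCommand : IUndoableCommand
    {
        private readonly OrderLine _line;
        private readonly IOrderRepository _repository;   // placeholder data-access abstraction

        public InsertOrderLineCommand(OrderLine line, IOrderRepository repository)
        {
            _line = line;
            _repository = repository;
        }

        public void Execute() { _repository.Insert(_line); }
        public void Undo()    { _repository.Delete(_line.Id); }
    }

    // An application-level "transaction" is just the list of commands it ran,
    // rolled back later by calling Undo() in reverse order.
    public class AppTransaction
    {
        private readonly List<IUndoableCommand> _commands = new List<IUndoableCommand>();

        public void Run(IUndoableCommand command)
        {
            command.Execute();
            _commands.Add(command);
        }

        public void Rollback()
        {
            for (int i = _commands.Count - 1; i >= 0; i--)
                _commands[i].Undo();
        }
    }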
HTH.
You can also store them in a temporary table and move those records to your original table at a later stage.
If you are just managing transactions during a single save operation, use TransactionScope. But it doesn't sound like that is the case.
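For completeness, the single-save case with TransactionScope looks roughly like this (SaveHeader/SaveDetails stand in for your own operations):

    // Namespace: System.Transactions (add a reference to System.Transactions.dll).
    using (TransactionScope scope = new TransactionScope())
    {
        SaveHeader(connection);    // placeholder operations; connections opened inside the scope enlist automatically
        SaveDetails(connection);

        scope.Complete();          // without this, everything rolls back when the scope is disposed
    }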
If the user may wish to abandon n number of previous save operations, it suggests that an item may exist in draft form. There might be one working draft or many. Subsequently, there must be a way to promote a draft to a final version, either implicitly or explicitly. Think of how an email program saves a draft. It doesn't actually send your message, you may abandon it at any time, and you may recall it at a later time. When you send the message, you have "committed the transaction".
You might also add a user interface to rollback to a specific version.
This will be a fair amount of work, but if you are willing to save and manage multiple copies of the same item it can be accomplished.
You may save a copy of the same data in the same schema using a status flag to indicate that it is a draft, or you might store the data in an intermediate format in separate table(s). I would prefer the first approach, in that it allows the same structures to be used.
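A hedged sketch of the status-flag variant (the Record entity, MyDbContext and the enum are made-up names):

    public enum RecordStatus { Draft = 0, Final = 1 }

    // Promoting a draft is just a status change; abandoning it deletes the draft rows.
    public void PromoteDraft(int recordId, MyDbContext db)
    {
        Record record = db.Records.Find(recordId);
        record.Status = RecordStatus.Final;
        db.SaveChanges();
    }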
I have an application that once started will get some initial data from my database and after that some functions may update or insert data to it.
Since my database is not on the same computer of the one running the application and I would like to be able to freely move the application server around, I am looking for a more flexible way to insert/update/query data as needed.
I was thinking of using a web API on a separate thread in my application, with some kind of list where this thread will try to update the data every X minutes, and when a given entry is updated it will be removed from the list.
This way, instead of being held up by database queries and the like, the application would run freely, queuing whatever has to be updated or inserted.
The main point here is that I could run the functions without worrying about connectivity issues to the database, or related issues, since all the changes are queued to be applied to it.
Is this approach OK? Bad? Are there better recommendations for this scenario?
On "can access DB through some web server instead of talking directly to DB server": yes this is very common and recommended approach. It is much easier to limit set of operations exposed through custom API (web services, REST services, ...) than restrict direct communication with DB.
On "sync on separate thread..." - you need to figure out what are requirements of the synchronization. Delayed sync may be ok if you don't need to know latest data and not care if updates from client are commited to storage immediately.
In the service I am currently developing I need to provide a twofold operation:
The request being made should be registered in the database (using Register() method); and
The request should be sent to an external webservice for further processing (using Dispatch() method).
Considering that I can't switch the order of the operations, I would like to be able to "roll back" the first one if something goes wrong with the second, so that a then-invalid record does not get inserted into the DB. The problem here is that, of course, I am committing the transaction inside the Register method. Is there any way I can roll it back from inside the Dispatch method if anything goes wrong?
Edit: All transaction are being managed from the .NET-side.
The database won't help you in this case. You have to create compensating transactions, using pairs of operations that undo each other. Your services will effectively have to replace all the work and logic that has gone into relational databases for managing transactions.
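A hedged sketch of such a compensating pair; Register, Dispatch and Unregister stand in for your own methods:

    // Register() commits its own transaction, so the only way to undo it afterwards
    // is an explicit compensating action.
    int recordId = Register(request);      // first operation, committed immediately

    try
    {
        Dispatch(request);                 // second operation: call the external web service
    }
    catch (Exception)
    {
        Unregister(recordId);              // compensating transaction: delete or flag the record
        throw;
    }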
I have a requirement to monitor database rows continuously to check for changes (updates). If there are changes or updates from other sources, an event should be fired in my application (I am using WCF). Is there any way to listen to database rows continuously for changes?
I may have a larger number of events monitoring different rows in the same table. Is there any problem in terms of performance? I am using a C# web service to monitor the SQL Server back end.
You could use an AFTER UPDATE trigger on the respective tables to add an item to a SQL Server Service Broker queue. Then have the queued notifications sent to your web service.
Another poster mentioned SqlDependency, which I also thought of mentioning, but the MSDN documentation is a little strange in that it provides a Windows client example but also offers this advice:
SqlDependency was designed to be used in ASP.NET or middle-tier services where there is a relatively small number of servers having dependencies active against the database. It was not designed for use in client applications, where hundreds or thousands of client computers would have SqlDependency objects set up for a single database server.
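For reference, a minimal SqlDependency sketch; it requires Service Broker to be enabled on the database, and the query, table names and the HandleChange helper are illustrative:

    // Namespace: System.Data.SqlClient. Call SqlDependency.Start once at application start-up.
    SqlDependency.Start(connectionString);

    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT Id, Status FROM dbo.Orders", conn))   // must use a two-part table name, no SELECT *
    {
        SqlDependency dependency = new SqlDependency(cmd);
        dependency.OnChange += (sender, e) =>
        {
            // Fires once when the result set changes; re-run the query to subscribe again.
            HandleChange(e.Info);   // placeholder for your own handling
        };

        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { }   // the command must be executed for the subscription to register
        }
    }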
I had a very similar requirement some time ago, and I solved it using a CLR SP to push the data into a message queue.
To ease deployment, I created a CLR SP with a tiny little function called SendMessage that just pushed a message onto a message queue, and tied it to my tables using an AFTER INSERT trigger (a normal trigger, not a CLR trigger).
Performance was my main concern in this case, but I have stress tested it and it greatly exceeded my expectations. And compared to SQL Server Service Broker, it's a very easy-to-deploy solution. The code in the CLR SP is really trivial as well.
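A hedged reconstruction of what such a CLR SP can look like; the names and queue path are examples, and the assembly has to be deployed with sufficient permissions (EXTERNAL_ACCESS/UNSAFE) to reach MSMQ:

    // Deployed into SQL Server as a CLR stored procedure; references System.Messaging.
    using System.Data.SqlTypes;
    using System.Messaging;
    using Microsoft.SqlServer.Server;

    public static class Notifications
    {
        [SqlProcedure]
        public static void SendMessage(SqlString queuePath, SqlString body)
        {
            // e.g. queuePath = @".\private$\table-changes"
            using (MessageQueue queue = new MessageQueue(queuePath.Value))
            {
                queue.Send(body.Value);
            }
        }
    }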
Monitoring "continuously" could mean every few hours, minutes, seconds or even milliseconds. This solution might not work for millisecond updates: but if you only have to "monitor" a table a few times a minute you could simply have an external process check a table for updates. (If there is a DateTime column present.) You could then process the changed or newly added rows and perform whatever notification you need to. So you wouldn't be listening for changes, you'd be checking for them. One benefit of doing the checking in this manner would be that you wouldn't risk as much of a performance hit if a lot of rows were updated during a given quantum of time since you'd bulk them together (as opposed to responding to each and every change individually.)
I pondered the idea of a CLR function or something of the sort that calls the service after successfully inserting/updating/deleting data from the tables. Is that even good in this situation?
Probably it's not a good idea, but I guess it's still better than getting into table trigger hell.
I assume your problem is you want to do something after every data modification, let's say, recalculate some value or whatever. Letting the database be responsible for this is not a good idea because it can have severe impacts on performance.
You mentioned you want to detect inserts, updates and deletes on different tables. Doing it the way you are leaning towards would require you to set up three triggers/CLR functions per table and have them post an event to your WCF service (is that even supported in the subset of .NET available inside SQL Server?). The WCF service takes the appropriate actions based on the events received.
A better solution for the problem would be moving the responsibility for detecting data modification from your database to your application. This can actually be implemented very easily and efficiently.
Each table has a primary key (int, GUID or whatever) and a timestamp column, indicating when the entry was last updated. This is a setup you'll see very often in optimistic concurrency scenarios, so it may not even be necessary to update your schema definitions. Though, if you need to add this column and can't offload updating the timestamp to the application using the database, you just need to write a single update trigger per table, updating the timestamp after each update.
To detect modifications, your WCF service/monitoring application builds up a local dictionary (preferably a hashtable) of primary key/timestamp pairs at a given time interval. Using a covering index in the database, this operation should be really fast. The next step is to compare both dictionaries and voilà, there you go.
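A hedged sketch of the snapshot comparison, assuming an int primary key and a rowversion/timestamp column read into a long:

    // Compare the previous snapshot of (primary key -> row version) with a fresh one.
    // onInserted / onUpdated / onDeleted are whatever notification hooks you need.
    public static void DetectChanges(
        Dictionary<int, long> previous,
        Dictionary<int, long> current,
        Action<int> onInserted,
        Action<int> onUpdated,
        Action<int> onDeleted)
    {
        foreach (KeyValuePair<int, long> entry in current)
        {
            long oldVersion;
            if (!previous.TryGetValue(entry.Key, out oldVersion))
                onInserted(entry.Key);
            else if (oldVersion != entry.Value)
                onUpdated(entry.Key);
        }

        foreach (int key in previous.Keys)
        {
            if (!current.ContainsKey(key))
                onDeleted(key);
        }
    }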
There are some caveats to this approach, though. One is the number of records per table, another is the update frequency (if it is too low the approach becomes ineffective), and yet another pain point is whether you need access to the data as it was prior to the modification/insertion.
Hope this helps.
Why don't you use SQL Server Notification Services? I think that's exactly what you are looking for. Go through the documentation of Notification Services and see if it fits your requirements.
I think there are some great ideas here; from a scalability perspective I'd say that externalizing the check (e.g. Paul Sasik's answer) is probably the best one so far (+1 to him).
If, for some reason, you don't want to externalize the check, then another option would be to use the HttpCache to store a watcher and a callback.
In short, when you put the record in the DB that you want to watch, you also add it to the cache (using the .Add method) and set a SqlCacheDependency on it, and a callback to whatever logic you want to call when the dependency is invoked and the item is ejected from the cache.
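A hedged sketch of that cache-based watcher; the query, the watch key and the callback body are illustrative, and depending on your setup you may also need SqlDependency.Start(connectionString) at application start-up:

    // Watch a row via the ASP.NET cache: when the query result changes, the cached item is
    // ejected and the removal callback runs. Requires query notifications (Service Broker)
    // on the database. Namespaces: System.Web, System.Web.Caching, System.Data.SqlClient.
    public static void WatchRecord(int id, string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Id, Status FROM dbo.Orders WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            SqlCacheDependency dependency = new SqlCacheDependency(cmd);

            conn.Open();
            cmd.ExecuteScalar();   // execute the command so the notification is registered

            HttpRuntime.Cache.Add(
                "watch:" + id,
                id,
                dependency,
                Cache.NoAbsoluteExpiration,
                Cache.NoSlidingExpiration,
                CacheItemPriority.NotRemovable,
                (key, value, reason) =>
                {
                    // The row changed (or the item was evicted for another reason): react here.
                });
        }
    }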