How will the target application get the messages sent to it while it was unresponsive, stopped, or restarting? Will they be sent again automatically when it comes back online?
How would you implement this with EF and C#? Where are the tutorials!
Service Broker sends from SQL Server to SQL Server. The protocol used is fully resilient to crashes, messages stay in the sender's sys.transmission_queue until acknowledged by the target, and the target only acknowledges them after committing them into the destination service queue. SQL Server also handles everything related to transient failures: unresponsive destination, network partitioning, servicing/patching outages. All this is handled by SQL Server itself, as it guarantees Exactly Once In Order delivery.
Now, what happens if your application crashes, e.g. while processing a RECEIVE statement, is very simple: you interact with Service Broker through T-SQL, in a database transaction context. If the application crashes, the normal behavior of ACID database transactions kicks in: since the transaction did not commit, it will be rolled back, and the application will have a chance to process the message again after restart.
So, from your application's point of view, you only interact with a database (queues, tables and all) within a database transaction context. Your questions are the same as 'what happens to an INSERT if the application crashes?'
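For illustration, here is a minimal sketch of what 'interacting with Service Broker through T-SQL in a transaction' looks like from C#. The queue name TargetQueue and the connectionString variable are placeholders for your own setup, and conversation/error handling is omitted:

```csharp
using System;
using System.Data.SqlClient;

// Receive one Service Broker message inside an ACID transaction.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        var cmd = new SqlCommand(
            @"WAITFOR (RECEIVE TOP (1)
                   conversation_handle, message_type_name, message_body
               FROM TargetQueue), TIMEOUT 5000;", conn, tx);

        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // Process the message body here. If the process crashes
                // before Commit(), the transaction rolls back and the
                // message becomes available again on the next RECEIVE.
                var body = reader.IsDBNull(2) ? null : (byte[])reader.GetValue(2);
            }
        }

        // Only now is the message permanently removed from the queue.
        tx.Commit();
    }
}
```

If the application dies anywhere before `tx.Commit()`, the message simply reappears in the queue, which is exactly the 'process it again after restart' behavior described above.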
I have a software operating via WLAN mounted on a moving device.
At the moment, transactions are opened and closed in code, and the business logic runs in between.
Now I'm suffering from lost connections and transactions that stay open on the database (MSSQL 2012).
My solution was to move all transactions/logic into a stored procedure.
So the client only calls the stored procedure and the transactions are handled inside it.
My question here is:
What happens to a stored procedure when the connection is lost? Does it run to the end?
This is covered in the documentation Controlling Transactions (Database Engine), specifically in the Errors During Transaction Processing section:
If an error prevents the successful completion of a transaction, SQL Server automatically rolls back the transaction and frees all resources held by the transaction. If the client's network connection to an instance of the Database Engine is broken, any outstanding transactions for the connection are rolled back when the network notifies the instance of the break. If the client application fails or if the client computer goes down or is restarted, this also breaks the connection, and the instance of the Database Engine rolls back any outstanding connections when the network notifies it of the break. If the client logs off the application, any outstanding transactions are rolled back.
The relevant part is the sentence about the client's network connection being broken: any outstanding transactions for that connection are rolled back.
So moving the transactions into the stored procedure won't stop the transaction being rolled back if your connection drops. I would suggest finding out why your connection is unstable and fixing that. Otherwise, you'll need to work out a way to run the query locally on the instance (perhaps using SQL Agent).
I need to implement a persistent queue in my C# service. It polls data from an external source. After it receives a unit of data, it shall send it to a server. If the sending fails, it shall write the data to disk in a queue-like manner and try to resend it at an interval, while continuing to poll and thus filling up the queue. I need to save it to disk because the network can fail and, in the meantime, the server can be shut down, resulting in a restart of the service and the loss of any in-memory queue. (Of course no polling happens during the reboot, but the data received during the preceding network failure would be lost.)
I have now solved this problem by implementing a queue in SQL CE. After the service polls the data, it writes it directly to the SQL CE database; another thread then reads (peeks) the database and tries to send the data. If it manages to send it, the message gets dequeued. I feel this solution is quite heavy and not very efficient.
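The peek/send/dequeue loop described above boils down to something like the following sketch. The table name OutboundQueue, the TrySendToServer method, and the interval values are placeholders for your own code, and any lightweight embedded database (SQL CE, SQLite, etc.) would work the same way:

```csharp
using System.Data.SqlServerCe;
using System.Threading;

// Sender loop: read the oldest queued row, try to send it, and delete it
// only after the send succeeds, so a crash never loses data.
while (!cancellation.IsCancellationRequested)
{
    long? id = null;
    byte[] payload = null;

    using (var conn = new SqlCeConnection(connectionString))
    {
        conn.Open();

        var peek = new SqlCeCommand(
            "SELECT TOP (1) Id, Payload FROM OutboundQueue ORDER BY Id", conn);
        using (var reader = peek.ExecuteReader())
        {
            if (reader.Read())
            {
                id = reader.GetInt64(0);
                payload = (byte[])reader.GetValue(1);
            }
        }

        if (id == null)
        {
            Thread.Sleep(1000); // queue empty; wait before peeking again
            continue;
        }

        if (TrySendToServer(payload)) // your own network call
        {
            var dequeue = new SqlCeCommand(
                "DELETE FROM OutboundQueue WHERE Id = @id", conn);
            dequeue.Parameters.AddWithValue("@id", id.Value);
            dequeue.ExecuteNonQuery();
        }
        else
        {
            Thread.Sleep(retryInterval); // back off and retry the same row
        }
    }
}
```

The key property is that the row is deleted only after a confirmed send, so a crash at any point just means the same row is retried after restart.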
Does anyone have experience with a similar scenario and tips on how to implement it in a better way?
I was wondering if it's possible to do a 'soft shutdown' or 'soft reboot' of a cloud service. In other words the server would refuse new incoming http requests (which come in through ASP.net controller actions), but would finish all existing requests that are in progress. After this happens the server would then shutdown or stop as normal.
Server Version
Azure OS Family 3 Release
Windows Server 2012
.NET 4.5
IIS 8.0
ASP.NET 4.0
Usage Scenario
I need to ensure that any actions responding to remote http requests currently in progress finish before a server begins the process of shutting down or becoming unresponsive because of a staging to production swap.
I've done some research, but don't know if this is possible.
A hacky workaround might be using a CloudConfigurationManager variable to signal that a 503 error code should be returned for any incoming HTTP actions, but then I'd have to sit around and wait for a while without any way to verify that the in-flight requests had finished. At that point I could stop the service or perform a swap.
See http://azure.microsoft.com/blog/2013/01/14/the-right-way-to-handle-azure-onstop-events/ for information on how to drain HTTP requests when a role is stopping (the source presents the code as an image rather than text):
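Since the linked post embeds its code as an image, here is the gist of the drain pattern it describes (a sketch; the exact performance counter name may vary by ASP.NET version). While OnStop runs, the instance has already been taken out of the load balancer rotation, so no new requests arrive; the role just waits for in-flight requests to finish:

```csharp
using System.Diagnostics;
using System.Threading;

public override void OnStop()
{
    Trace.TraceInformation("OnStop called from WebRole");

    // Wait until IIS reports no requests currently executing before
    // letting the role actually stop.
    var requestsCurrent = new PerformanceCounter("ASP.NET", "Requests Current", "");
    while (requestsCurrent.NextValue() > 0)
    {
        Trace.TraceInformation("Requests still in flight, delaying shutdown...");
        Thread.Sleep(1000);
    }
}
```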
Also note that doing a VIP swap won't affect the role instances themselves or any TCP connections to them, so nothing should become unresponsive just because you do a VIP swap. It is once you begin shutting down the staging deployment after a VIP swap that the drain technique from that post will help finish requests before the instance actually shuts down.
I have a REST web service that is used for communication with multiple clients, some sort of chat, but I have to write all the changes to the database as soon as the clients communicate something, and then inform all the other clients when a change is made.
I basically get a POST request and have to reply as soon as an entry is modified.
Right now I make my thread sleep for 1 second and keep recreating the context every second for each request; if there are changes in the database, I send the response.
This looks ugly to me, and I wonder if there is some event or async method that can notify me when a specific entry in the database is modified?
Thanks in advance.
If you're using MS SQL Server, you might have success with using Sql Dependencies. Here's a link to a brief tutorial: C# & SqlDependency - Monitoring your database for data changes.
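A sketch of what SqlDependency usage looks like (assuming Service Broker is enabled on the database; the connection string and the dbo.Messages table are placeholders):

```csharp
using System;
using System.Data.SqlClient;

// Start the dependency listener once per app domain.
SqlDependency.Start(connectionString);

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // The query must follow the query-notification rules: an explicit
    // column list and a two-part table name, no SELECT *.
    var cmd = new SqlCommand("SELECT Id, Text FROM dbo.Messages", conn);

    var dependency = new SqlDependency(cmd);
    dependency.OnChange += (sender, e) =>
    {
        // Fires once when the result set changes; to keep listening,
        // create a new SqlDependency and re-execute the command.
        Console.WriteLine("Change detected: " + e.Info);
    };

    // Executing the command registers the notification subscription.
    using (var reader = cmd.ExecuteReader()) { /* consume initial results */ }
}
```

Note that each notification is one-shot: the OnChange handler must re-subscribe, which is the usual pattern for replacing a polling loop like the one in the question.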
Microsoft's successor to Notification Services, SQL Server 2008 R2 Complex Event Processing (CEP) technology, might also serve your purposes, but I know nothing about it beyond what's on the web page.
I have a company network under my control and a couple of closed customer networks. I want to communicate from a web application in my network to a database inside a customer network. My first idea was:
The web application stores a query in a database in the company network and waits for an answer.
A Windows service inside the customer network polls our database a couple of times every second through a (WCF) web service, also in our company network.
If a query is available, the Windows service executes it against its local database and stores the answer in the company database.
I've been thinking about removing the polling idea and instead using a persistent connection between a client in the customer network and a server in our company network. The client initiates the connection and then waits for queries from the server. What would be better or worse compared to polling through a web service? Would WCF be the right thing to use here?
You have a few approaches:
WCF duplex: once the web application stores a query in the database, you initiate a callback to the client (in this case the Windows service) instead of making the Windows service poll every few seconds. net.tcp would be a good choice, but you can still use HTTP.
Long polling: instead of letting your Windows service client send a request every few seconds, have it send one request and leave the channel open. Set the timeout on both the client and the WCF service to a longer value, and let the server method loop, checking the database for new notifications; once new notifications are found, the method returns with them. On the client side, once you get a response, process the data and send another request; if a timeout error occurs, just send another request. Google 'long polling' and you will find a lot.
Regarding querying the database every few seconds: a better approach would be a dedicated notifications table. Instead of querying a large table with a complex SQL string every few seconds, let the writers add a row to a separate notifications table (after they are done adding to the main table), so your query is much simpler and takes fewer resources. You can add direct pointers (like IDs) in the notifications table to save time, and clean up the notifications table later.
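The server side of the long-polling approach might look roughly like this sketch. The Notification type, GetNewNotifications method, and timeout values are placeholders; the only important idea is that the method holds the request open until there is something to return or a deadline passes:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// WCF service method: block until notifications exist or a deadline
// slightly below the channel timeout is reached.
public List<Notification> WaitForNotifications(long lastSeenId)
{
    var deadline = DateTime.UtcNow.AddSeconds(55); // keep below channel timeout

    while (DateTime.UtcNow < deadline)
    {
        // Query the small notifications table, not the large main table.
        var pending = GetNewNotifications(lastSeenId);
        if (pending.Count > 0)
            return pending; // client processes these, then calls again

        Thread.Sleep(500); // small pause between checks
    }

    return new List<Notification>(); // empty result: client simply reconnects
}
```

Returning an empty list on timeout (rather than faulting the channel) makes the client loop trivial: call, process, call again.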