Our company needs to write a web service that will be used to submit data from a remote site to the HQ servers. This web service will update more than 10 tables on both sides (remote and HQ).
We use SQL Server 2008 on both sides, and a single batch will insert data into the HQ server and at the same time needs to do some updates on the remote side as well.
The remote site will call the web service on HQ with the required data.
Simply put, I need to commit or roll back on both servers when the web method executes.
What is the best way to handle the transaction block in this situation?
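For reference, the usual .NET way to get an atomic commit/rollback across two SQL Server instances is a distributed transaction via TransactionScope, which requires MSDTC to be enabled and reachable on both machines. A minimal sketch (connection strings and the table work are placeholders):

```csharp
using System.Data.SqlClient;
using System.Transactions;

public void SubmitBatch(string hqConnectionString, string remoteConnectionString)
{
    using (var scope = new TransactionScope())
    {
        using (var hq = new SqlConnection(hqConnectionString))
        {
            hq.Open(); // enlists in the ambient transaction
            // ... INSERT into the HQ tables here ...
        }
        using (var remote = new SqlConnection(remoteConnectionString))
        {
            remote.Open(); // second server: escalates to a distributed (MSDTC) transaction
            // ... UPDATE the remote tables here ...
        }
        scope.Complete(); // both servers commit; if an exception escapes first, both roll back
    }
}
```

If Complete() is never called (for example because an exception is thrown), disposing the scope rolls back the work on both servers.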
Related
What are cons and pros of web service vs direct client-sql server communication?
From my understanding:
Pros:
If the web service is installed on the same computer as the remote DB, there is no need to open the SQL Server port for the client to reach the remote DB. If the web service is on another computer, then a port needs to be open only for the web service to access the remote DB.
If someone manages to get hold of the user/password, they can only perform the operations the web service exposes, not every operation on the entire DB.
Cons:
More work for the programmer
Slower
The main difference is that if you go with a web service/REST API you are centralizing the business layer, so if there is a bug or a change you can fix it in one place with no need for client upgrades.
I would only recommend going with direct DB access if you have a small number of clients over a local network with very few updates to the business logic (i.e. a simple app).
Due to some constraints with a client's system and setup, I am wondering if it is possible for SQL Server to fire off a web request.
I need to be able to send emails after a SQL Server column gets updated. I had a trigger with Database Mail, but the database server on the client won't allow that. The alternative would be to have that trigger fire off a web request when ready. I could do it via a web service on the web server and have it just poll the table, but that seems like a lot of wasted cycles.
Any ideas?
You can execute managed code from SQL Server. Create a Windows service, let it do whatever you want in C# to fulfill your need, and have SQL Server call it.
Check out this link for a sample program.
Do not call web services from SQLCLR, and especially do not call web services (or send email) via SQLCLR from triggers. There are many reasons why using SQLCLR for external HTTP or SMTP access is a bad idea, but doing it from a trigger is horrendous. Just think how your application will behave on the day the web service responds with a 10-second latency...
You must do the mail interaction from an external application, and you should decouple the interaction between your application and the emailer application via a queue. See Using Tables as Queues for how to use a SQL Server table as a queue. You can enqueue the mailing request directly from your application, or use a stored proc that both inserts the data and enqueues the request, or, as a last resort, use a trigger on the table to enqueue the mailing request. Your emailer application (a separate service) should monitor this queue and service it by doing the actual email send (or HTTP web request, for that matter).
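To illustrate the table-as-queue pattern (table and column names here are made up for the sketch), the enqueue is a plain INSERT and the dequeue is an atomic DELETE with an OUTPUT clause; the READPAST hint lets several consumers skip rows that another consumer has already locked:

```sql
-- Hypothetical queue table for pending mail requests.
CREATE TABLE MailQueue (
    Id         BIGINT IDENTITY PRIMARY KEY,
    Payload    NVARCHAR(MAX) NOT NULL,
    EnqueuedAt DATETIME NOT NULL DEFAULT GETDATE());

-- Enqueue (from your application, proc, or trigger):
INSERT INTO MailQueue (Payload) VALUES (@payload);

-- Dequeue in the emailer service: delete one row and return it
-- in a single atomic statement; READPAST skips locked rows so
-- concurrent consumers do not block each other.
DELETE TOP (1) FROM MailQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.Id, deleted.Payload;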
I am trying to write a monitoring tool to monitor some information.
It will normally run on Azure, so I am going to host the database on Azure, and the web service will be hosted on Azure as well.
On the clients I read from the config file how many times they need to upload the information to the Azure database (via the web service on Azure).
Now I also want to send some commands to the client itself, like start a service, etc. What is the best way to do that?
How can I send them from a website that is hosted on the Azure platform?
I think you should consider implementing a WCF service at the client as well. The Azure side of your software could call operations from this service when it needs to instruct the client to do something.
The WCF service at the client should be something simple, hosted in a Windows service or in your actual client (whatever it is... WinForms, console, etc.).
Since you have no VPN, it sounds like you may have a problem with hosting a WCF service on the client. If the client is behind a firewall, you would have to modify the firewall configuration to allow your server to connect to this service.
Last time I had to do a service like this, I used Comet. The server maintains a queue of messages to be sent to the client. Your client connects to the web service and requests any available messages. If messages are available, the server returns them. If not, the server leaves the request open for some time. As soon as a message arrives, the server sends it down the already-open connection. The client will either periodically time out/reconnect or send a keep-alive message (perhaps once per minute) in order to keep the connection alive in the intervening firewalls.
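The client side of that Comet-style loop can be quite small. A sketch of the receive loop, assuming a hypothetical URL and a server that holds the request open for up to a minute or so (the endpoint and timings are placeholders):

```csharp
using System;
using System.Net;
using System.Threading;

public static void ReceiveLoop()
{
    while (true)
    {
        try
        {
            var request = (HttpWebRequest)WebRequest.Create("http://server/messages");
            request.Timeout = 90 * 1000; // longer than the server's hold time
            using (var response = request.GetResponse())
            {
                // read and process any messages returned, then loop to
                // reconnect immediately so a connection is always open
            }
        }
        catch (WebException)
        {
            // timed out or connection dropped by an intermediary:
            // pause briefly, then reconnect
            Thread.Sleep(1000);
        }
    }
}
```

The outbound-only connection is what makes this work through the client's firewall: the client always initiates, so no inbound port needs to be opened.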
I have a desktop application developed in C# which will access a MySQL database on an online server. Since it is a shared database, the client IP can't access the database directly, so I need to implement a proxy server which can accept requests from the client, process the business logic against the database, and return the result to the client. The proxy will act as an intermediary. I haven't been able to think of a way to implement the proxy server to handle the requests. Leaving the proxy aside, everything else is in place and running fine. Any alternative is most welcome!
Wouldn't it be better if your application just made an HTTP request to a page on your server, and the page itself executed the query and returned the result as JSON, XML, CSV or what have you? This way you avoid problems with users behind a firewall who can only use port 80, and it's easier for you to filter out malicious queries in your script before they reach the database.
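On the desktop side that approach is a couple of lines; a sketch with a made-up URL and query parameter:

```csharp
using System.Net;

public static string FetchCustomers()
{
    using (var client = new WebClient())
    {
        // Plain HTTP GET over port 80; the server-side page runs the
        // query and returns the result serialized as JSON.
        return client.DownloadString(
            "http://example.com/api/customers?updatedSince=2013-01-01");
    }
}
```

The returned string can then be deserialized with whatever JSON library the application already uses, and the server-side page is the single choke point where you validate parameters before anything touches the database.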
I have a company network under my control and a couple of closed customer networks. I want to communicate from a web application in my network to a database inside a customer network. My first idea was:
Web application stores query in a database in the company network and waits for answer.
Windows service inside client network polls our database a couple of times every second through a (WCF) web service also in our company network.
If a query is available, the Windows service executes it against its local database and stores the answer in the company database.
I've been thinking about removing the polling idea and instead using persistent connection between a client in the customer network and a server in our company network. The client initiates the connection and then waits for queries from the server. What would be better or worse compared to polling through a web service? Would WCF be the right thing to use here?
You have a few approaches:
WCF duplex: once the web application stores a query in the database, you initiate a call to the client (in this case the Windows service) instead of making the Windows service poll every few seconds. net.tcp would be a good choice, but you can still use HTTP.
Long polling: instead of letting your Windows service client send a request every few seconds, let it send one request and keep the channel open; set the timeout on both the client and the WCF service to a longer value, and let the server method loop, checking the database for new notifications. Once new notifications are found, the method returns with them. On the client side, once you get a response, send another request to the server and then process the data. If a timeout error occurs, just send another request. Google "long polling" and you will find a lot.
Regarding querying the database every few seconds, a better approach would be a dedicated notifications table. Instead of querying a large table with a complex SQL string every few seconds, let writers add a row to a separate notifications table (after they are done inserting into the main table), so your polling query is much simpler and takes fewer resources. You can add direct pointers (like IDs) in the notifications table to save time, and clean up the notifications table later.
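The notifications-table idea can be sketched like this (table and column names are invented for the example):

```sql
-- Hypothetical lightweight notifications table; RecordId points
-- back into the main table so the poller never scans it.
CREATE TABLE Notifications (
    Id       BIGINT IDENTITY PRIMARY KEY,
    RecordId BIGINT NOT NULL);

-- The writer inserts here right after writing to the main table:
INSERT INTO Notifications (RecordId) VALUES (@mainTableId);

-- The poller's cheap query:
SELECT Id, RecordId FROM Notifications;
-- ...then fetch the full rows from the main table by RecordId,
-- and finally clean up what was processed:
DELETE FROM Notifications WHERE Id <= @lastProcessedId;
```

Because the poller only ever touches this small table, the expensive query against the main table runs once per batch of actual changes rather than on every polling tick.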