I've been asked to create a Service for our Parent Company. They don't care how I do it, as long as the data is sent to them.
We have a SQL Server 2000 instance that receives machine data via Data Transformation Services (DTS).
Our Parent Company wants me to create a Service that runs every 5 minutes or so to collect new data, summarize it, and forward it to them.
With my background in Windows Forms development, I naturally think that I should poll the database every 5 minutes using some type of Windows Service, then send that data over to our Parent Company.
The machine housing this data is an old Windows 2000 machine, and our Network Administrator has recommended that I write this as a Web Service on our newer Web Server.
I created a Web Service a few months back for the Web Server to pull work order information from our Parent Company, but I do not know how to make a service execute a process every 5 minutes.
Yesterday, I learned how to create an AFTER INSERT trigger that fires when records are added to the table. Unfortunately, the triggers are not fired, because this old server loads the data through DTS jobs. I read about Controlling Trigger Execution When Bulk Importing Data, but there does not seem to be a way to modify our old DTS jobs to use the BULK INSERT command, and it may not work on SQL Server 2000 anyway.
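For reference, the bulk-import option I read about looks like this (the table and file names here are just placeholders, not our real ones); I found no way to apply it from our DTS packages:

```sql
-- FIRE_TRIGGERS makes a bulk load fire INSERT triggers; by default it does not.
BULK INSERT dbo.MachineData
FROM 'C:\imports\machinedata.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRE_TRIGGERS)
```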
So, with this background, should I create a Windows Service or a Web Service?
How should I proceed?
I would not make a web service for a recurring task. A web service only does work in response to incoming requests; it has no built-in way to run on a schedule, so it is not really comparable to a Windows service here.
btw: A simpler alternative might be to create a command-line app that runs periodically via a scheduled task (read about the AT scheduler in Windows 2000). It is just easier to install and update, because deploying a new version doesn't require stopping and restarting a service.
If the web service has a method you can call that executes the data importing/converting exactly once per call, you can use a Windows scheduled task or cron job to make a request to that method. You can add this task on the server hosting the service, or on any other machine, as long as it can reach the web service.
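A minimal sketch of the idea: a tiny console app that triggers the one-shot import method over plain HTTP, registered with the Task Scheduler to run every 5 minutes. The URL and method name are placeholders for your own service.

```csharp
using System;
using System.Net;

class TriggerImport
{
    static void Main()
    {
        // Call an .asmx method that takes no arguments via HTTP GET.
        var request = WebRequest.Create(
            "http://yourwebserver/DataService.asmx/RunImportOnce");
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Import triggered: " + response.StatusCode);
        }
    }
}
// Register it with something like:
//   schtasks /create /sc minute /mo 5 /tn "RunImport" /tr "C:\tools\TriggerImport.exe"
```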
Related
I have to create an external service for an existing database that works with an ERP system.
Edit: the service will be running on the same machine where SQL Server is running.
The service has to listen for a trigger on a table of documents.
Scenario:
User creates a document
Trigger calls service method
The method queries the database to get data, creates the document, and sends it to an external API.
Is it possible to catch a trigger like that from a C# Worker Service?
It's technically possible to run arbitrary SQL CLR code in a trigger and make remote web service or RPC calls to code running in a separate service. But since the trigger runs inside the transaction, and any code you run can delay or undo the database change, it's not recommended, and it probably would not be supported by the ERP system.
So the best pattern here is to have the trigger write a row into a local table, and then have an external process poll that table and do the rest of the work, or to configure Change Data Capture or Change Tracking and have the external program query that.
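The trigger half of that pattern can be sketched like this, assuming a `Documents` table with an `int` key `DocumentId` (both names are made up for illustration). The trigger only records which rows appeared; it does no external work, so the ERP's transaction is never delayed. An external worker then polls `DocumentQueue`, processes each row, and deletes it.

```sql
CREATE TABLE dbo.DocumentQueue (
    QueueId     int IDENTITY(1,1) PRIMARY KEY,
    DocumentId  int NOT NULL,
    EnqueuedAt  datetime NOT NULL DEFAULT (GETDATE())
);
GO
CREATE TRIGGER trg_Documents_Enqueue
ON dbo.Documents
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- "inserted" can hold many rows, so enqueue them all in one statement.
    INSERT INTO dbo.DocumentQueue (DocumentId)
    SELECT DocumentId FROM inserted;
END
```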
I need some architectural advice on how to build a background service application.
Background:
I have 2 websites, and I need to transfer some data from website A to website B. The service would have to run in the background (as a Windows service) and should connect (every 5 minutes) directly to website A's database (MSSQL), grab some data, and insert this data through website B's API (the API is built on MVC Web API). Both websites are hosted on the same virtual machine (Windows Server 2008 R2 Datacenter), but this might change (website B can be switched to another virtual server or to cloud hosting such as Windows Azure or Amazon AWS).
Question:
What do you suggest (best practices), and what guidelines can you give me? I want this to be as scalable and fast as possible, since the service will receive multiple requests.
Thank you,
Jani
If it is important to know what data was transferred, then:
Add logging (log4net, for instance)
Issue a ticket if the process stops, and close the ticket when it restarts; this way you will know if a process fails. Depending on the volume of data, you could use Redis or Riak for this.
Put monitoring on each service, A and B, and you might also consider restarting the service via IIS on failure.
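Putting the pieces together, the service's polling loop might look like the sketch below. The connection string, queue table, and API route are placeholders, and the `Console.Error` write marks where a log4net call would go.

```csharp
using System;
using System.Data.SqlClient;
using System.Net.Http;
using System.Text;
using System.Threading;

class TransferService
{
    static void Main()
    {
        var http = new HttpClient { BaseAddress = new Uri("http://website-b/api/") };
        while (true)
        {
            try
            {
                using (var conn = new SqlConnection(
                    "Server=.;Database=SiteA;Integrated Security=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "SELECT Id, Payload FROM dbo.OutboundQueue", conn);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var body = new StringContent(
                                reader.GetString(1), Encoding.UTF8, "application/json");
                            // Push one item into website B through its Web API.
                            http.PostAsync("transfers", body).Wait();
                            // TODO: mark the row as sent so it isn't re-sent next cycle.
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                // Log (e.g. via log4net) and keep running; one failed
                // cycle shouldn't kill the service.
                Console.Error.WriteLine(ex);
            }
            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }
}
```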
I am looking for some advice. We have multiple systems that sync from Microsoft Dynamics CRM 4 on a nightly basis. However, I am often asked to manually execute the SP during the day, as the sync is sometimes required immediately. This SP lives on a different server from CRM.
I want to add a custom button to CRM that allows users to sync CRM to the other systems themselves if they need it immediately. I do not want this to be a workflow that runs every time a record is updated; it will be a user decision.
I have done some Googling and I'm at a crossroads as to how best to approach this: 1) executing a service from the button's JavaScript, or 2) using a workflow to execute an SSIS package and calling this from the button.
If I use Option 1, what would be the best approach to creating the web service? Bearing in mind that the sync procedure can take a fair few minutes to run depending on the internet speed between sites, I have to be careful of timeouts. If I use a .NET 3.5 ASMX web service, is there any way to respond immediately and run the SP in the background? I know I can call this from JavaScript. I believe the alternative is to use a WCF Workflow Service (which is completely new to me), which would allow for background processing - but can this be called from JavaScript?
If I use Option 2, how do I execute a local SSIS package from CRM? I've read it's possible, but I was unable to find an example.
Sorry for the wall of text. Any guidance or recommendations on approach would be greatly appreciated. Many Thanks.
TL;DR: I am looking for some advice on how to execute stored procedures on a different database from Dynamics CRM 4 using a custom button.
You would have to create a web service. It can be a WCF service or a classic ASMX service. Deploying it on the CRM server will be slightly complex, but it can be done (deploy the application within the ISV folder and then modify your application's web.config file so that it restores all the custom settings CRM sets in its own configuration). If you have a separate web server (or at least a separate web site on the CRM server), it will be much easier.
The service has to be marked as OneWay - so that it returns to the caller as soon as it is invoked. Unfortunately it also means that you won't be able to return error messages - those will have to be written in a log.
Now create a button on the CRM form and call the service using JavaScript. You should be able to configure your service so that it can be invoked with any arguments passed as URL parameters.
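A sketch of what the one-way contract might look like in WCF; the `entity` parameter and type names are placeholders. Because the operation is one-way, no fault can reach the JavaScript caller, so the catch block writes to a log instead.

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ISyncService
{
    // IsOneWay = true: the call returns to the caller as soon as the
    // message is received; the long-running work continues afterwards.
    [OperationContract(IsOneWay = true)]
    void StartSync(string entity);
}

public class SyncService : ISyncService
{
    public void StartSync(string entity)
    {
        try
        {
            // Execute the long-running stored procedure here; the CRM
            // button's JavaScript has already received its response.
        }
        catch (System.Exception ex)
        {
            // One-way operations cannot return errors, so log them.
            System.Diagnostics.Trace.TraceError(ex.ToString());
        }
    }
}
```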
I am writing an application in C# that needs to do the following:
Without connecting to the database, I need to check whether there are new logs in the database. If there are, I am allowed to open the connection and retrieve them.
So I just need to know whether there are new logs (elements) in the database WITHOUT opening a connection to it.
The server can send mail to an administrator, and I could monitor the mailbox for changes, but that solution is unacceptable.
Could the server, on inserting new rows, create a *.txt file on disk indicating the new rows, which I could check and then delete/edit after downloading the changes?
(The database is SQL Server 2008 R2.)
Is it even possible? Any other options to do this are welcome.
Thank you all very much in advance.
Based on the following clarifying comments from the OP under the question:
There is a web application that checks for changes every 30 seconds and shows the latest authorizations. The database tracks employee authorizations and has frequent updates. Now I'm building a desktop application, which has a local connection to the server and can update more frequently, but the client does not want the application to open a connection every second, even though the connection is only open for several milliseconds.
I think that the appropriate solution is a business layer.
If you build a business layer hosted in IIS that performs the database access on behalf of the users using a single database user for access (the application pool user or an impersonated user within the web application), then connection pooling will reduce the number of connections made to the database significantly.
Here is an MSDN article that describes the mechanics and benefits of connection pooling in great detail.
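The key point is that as long as the business layer always uses the same connection string, ADO.NET pooling reuses the physical connections, so an "open / query / close" per request is cheap. A sketch (server, database, table, and column names are placeholders):

```csharp
using System;
using System.Data.SqlClient;

public static class AuthorizationData
{
    // Pooling is on by default; the pool-size keys are shown only to
    // make the behavior explicit.
    const string Cs = "Server=dbserver;Database=Auth;Integrated Security=true;" +
                      "Min Pool Size=1;Max Pool Size=20";

    public static int CountSince(DateTime since)
    {
        using (var conn = new SqlConnection(Cs))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM dbo.Authorizations WHERE CreatedAt > @since", conn))
        {
            cmd.Parameters.AddWithValue("@since", since);
            conn.Open();                       // taken from the pool, not a new socket
            return (int)cmd.ExecuteScalar();   // Dispose() returns it to the pool
        }
    }
}
```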
All of the clients, including the web layer, would connect to the business layer using WCF or .Net Remoting (depending on your .Net version), and the business layer would be the only application performing database access.
The added benefit of this approach is that you can move all database access (including from the web client) inside the DMZ so that there is no direct database access from the DMZ outward. This could be a good selling point for your customer.
We use this mechanism extensively for very large, very security and performance conscious customers.
Update
As an alternative, you could have the business layer query the database every 30 seconds, extract the necessary information, and store it locally to the business layer in a database of some sort (Access, SQL Server Express, etc.). When requests from clients are received, they will be served from the local data store instead of the database.
You could do this by kicking off a background thread in global.asax's Application_Start event or by adding a cache entry that expires every 30 seconds and performing the work in the cache timeout event.
This will reduce the number of connections to 1 (or 2, if the web layer isn't modified) every 30 seconds (or whatever the interval is).
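The cache-expiry variant mentioned above can be sketched like this; `RefreshSnapshot` is a placeholder for the single database query, and the entry's removal callback re-arms the 30-second timer.

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class SnapshotRefresher
{
    // Call this once from Application_Start in global.asax.
    public static void Start()
    {
        ScheduleNext();
    }

    static void ScheduleNext()
    {
        HttpRuntime.Cache.Insert(
            "refresh-timer", DateTime.UtcNow, null,
            DateTime.UtcNow.AddSeconds(30), Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            (key, value, reason) =>
            {
                RefreshSnapshot();   // the one query every 30 seconds
                ScheduleNext();      // re-insert the entry to re-arm the timer
            });
    }

    static void RefreshSnapshot()
    {
        // Query the database and store results in the local data store.
    }
}
```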
You could try monitoring the change dates of the files inside the database folder.
If the client desktop application is not going to be deployed massively, you could use SqlDependency. Then you wouldn't have to poll the database on a frequent basis; instead, the database will notify you if something changes.
You could also deploy a service on the server which uses SqlDependency and then connect to this service from your desktop applications.
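A sketch of SqlDependency in such a service, assuming Service Broker is enabled on the database and the query meets the notification requirements (schema-qualified table, explicit column list). Note that notifications are one-shot, so the handler re-subscribes; the connection string and table are placeholders.

```csharp
using System;
using System.Data.SqlClient;

class LogWatcher
{
    const string Cs = "Server=.;Database=Logs;Integrated Security=true";

    static void Main()
    {
        SqlDependency.Start(Cs);   // sets up the notification infrastructure
        Subscribe();
        Console.ReadLine();        // keep the process alive
        SqlDependency.Stop(Cs);
    }

    static void Subscribe()
    {
        using (var conn = new SqlConnection(Cs))
        using (var cmd = new SqlCommand(
            "SELECT LogId, Message FROM dbo.Log", conn))
        {
            var dep = new SqlDependency(cmd);
            dep.OnChange += (s, e) =>
            {
                Console.WriteLine("Change detected: " + e.Info);
                Subscribe();       // notifications fire once; re-subscribe
            };
            conn.Open();
            cmd.ExecuteReader().Dispose();   // executing registers the subscription
        }
    }
}
```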
If that's not an option this document mentions some other options.
These two could be applied to your situation:
Create an AFTER UPDATE trigger on the table being monitored, whose action uses SQL Server Service Broker to send a message to the entity needing the notification.
Use Windows Server App Fabric Cache, which supports a change notifications mechanism, based on an in-memory object cache and callback functions you register with the objects.
I am new to WCF, and I am designing a project in which I want to run a crawler program (coded in C#) that crawls some websites and stores the crawled data in tables of a database (SQL Server). I want the crawler to run repeatedly every 30 minutes and update the database.
I then want to use the service on my hosted platform so that I can use the data from the tables in a web form (i.e., an .aspx page).
Is it okay to use WCF for this purpose?
Please suggest how I should proceed.
Thanks
Windows Communication Foundation (WCF) handles communication between two endpoints over various channel technologies. You would use WCF if you want to send/receive data between two points regardless of the channel technology (TCP/UDP/named pipes/MSMQ, ...).
But you first need to design your crawler application, configured to fetch data from your target web sites; then you need to design a scheduler application using
http://quartznet.sourceforge.net/
to run your crawlers.
After crawling and storing your web pages, you can use WCF if you need to do replication or synchronization with a central server, but that is optional.
You could use a WCF service to do this, but I would go for another setup:
I'd build a Windows application that is scheduled to run every 30 minutes by the Windows Task Scheduler. A simple console application might be fine.
I'd use a web application (possibly ASP.NET MVC) to query the database.
As you can see there is no need to use WCF at all.
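The scheduled-console approach can be sketched roughly like this; the target URL is a placeholder, and the storage step is left as a comment:

```csharp
using System;
using System.Net;

class CrawlOnce
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            foreach (var url in new[] { "http://example.com/" })  // your target sites
            {
                string html = client.DownloadString(url);
                // TODO: parse the page and store results in SQL Server here.
                Console.WriteLine(url + ": " + html.Length + " chars");
            }
        }
    }
}
// The app does one pass and exits; the Task Scheduler re-runs it, e.g.:
//   schtasks /create /sc minute /mo 30 /tn "CrawlSites" /tr "C:\apps\CrawlOnce.exe"
```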
An exception can be made when the server is not yours and you are using a hosting provider that doesn't allow you to schedule a Windows task. In that case you might want to start the crawling process by hand through the web application and have it repeat itself every 30 minutes.
Some hosting providers do allow scheduling tasks, though in a different way, so it might be worth investigating.