I want to write a Windows service in C# that reads data from table t1 in database1 and inserts it into another table in another database (for example, t2 in database2). I am using this tutorial series: Windows Services in C#. Briefly, a timer fires on a fixed interval (for example, every 1000 ms), and on each tick the data is read from database1 and written to database2. This works fine and there is no problem.
I want to provide a GUI where the user can enter these items:
connection string for database 1
connection string for database 2
interval for timer
Because I am new to Windows services, I can't work out how to do this. I searched the net and SO and found that IPC (Inter-Process Communication) is usually suggested for this problem, but I don't understand IPC or how I could solve my problem with it.
If you do the real work only every second or so, you could easily read the settings from the registry or a configuration file just before you do your work cycle.
Your GUI application would then be completely independent: it would just write the registry settings or config file, which could be easier than IPC.
If you do want to use IPC, you first need to decide on an IPC mechanism (I recommend sockets for portability and possible remote configuration, but this is a matter of personal taste), then have your service listen and your GUI app connect and write. Make sure you don't forget proper authentication.
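For instance, here is a minimal sketch of the config-file approach, assuming a small XML settings file that the GUI writes and the service re-reads at the start of every cycle; the file path, element names, and class names are only placeholders:

    using System;
    using System.ServiceProcess;
    using System.Timers;
    using System.Xml.Linq;

    public class DataCopyService : ServiceBase
    {
        // Placeholder path; the GUI writes this file, the service only reads it.
        private const string SettingsPath = @"C:\ProgramData\DataCopyService\settings.xml";
        private Timer _timer;

        protected override void OnStart(string[] args)
        {
            _timer = new Timer(1000);
            _timer.Elapsed += OnTimerElapsed;
            _timer.Start();
        }

        protected override void OnStop()
        {
            _timer.Stop();
        }

        private void OnTimerElapsed(object sender, ElapsedEventArgs e)
        {
            // Re-read the settings on every cycle so changes made by the GUI
            // take effect without restarting the service.
            XElement root = XElement.Load(SettingsPath);
            string db1 = (string)root.Element("Database1ConnectionString");
            string db2 = (string)root.Element("Database2ConnectionString");
            _timer.Interval = (double)root.Element("IntervalMilliseconds");

            CopyRows(db1, db2);   // the existing t1 -> t2 copy logic
        }

        private void CopyRows(string sourceConnectionString, string targetConnectionString)
        {
            // read from database1 and write to database2 here
        }
    }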
Related
I am creating a task scheduler that needs to be running even when the user is logged off. From what I've read, a Windows service is best for this.
Obviously it also needs a GUI for the user to actually schedule tasks.
I don't think Windows services can have a GUI, so what is the best way to accomplish this?
You need to write two programs.
The first is the UI for managing the data - adding & removing schedules etc.
The second is the Windows service that just reads the data and acts on it.
You'll need a database or configuration file that both programs can read and write to, so you'll need to have some form of data locking to prevent issues if both the service and GUI are trying to update the data at the same time.
The "database" doesn't have to be a full blown relational database like SQL or oracle. It would probably be enough to be a configuration file that the GUI writes and the service reads. The storage method you choose depends on the volume and complexity of the data you need to store.
You can create scheduled tasks using the "Task Scheduler" utility in Windows. You can specify the frequency, executable, and other details there.
I'm curious about the best way for a C# GUI to access the functions of a Windows Service, be it WCF or ServiceController or some other option.
I'll explain what I'm doing: on a regular time interval the service zips one hour's worth of data files from location A and sends the zipped file to location B. This is done in the background 24/7, or until the service is stopped by the user, and it runs even when no one is logged in (hence the need for a service). I would like the user to be able to pull up a GUI program that gives them several options:
1) change the location to zip from
2) change the location to zip to
3) manually start the zipping process for a DateTime range specified
Now, most of the functions for zipping and the timers are stored within the service. So I'm wondering whether a ServiceController in the GUI program would allow me to send variables to/from the service (i.e. folder path names as strings, and various other data), or whether I'll need to spend the time building a WCF service and treat the GUI as the client and the Windows service as the server.
It should be noted the GUI will likely receive data from the service, but only to indicate whether it is currently busy zipping.
One option is to have a WCF service embedded in your Windows service. With this WCF endpoint you can control the behaviour without restarting the service.
But the best option IMO is to keep these settings in a config file. You can add some keys, but you would have to restart the service when you update the config.
In this case you can try a workaround, as in this thread.
The config is a good place for this kind of detail, because it can be easily modified and, unlike a database, it will always be available.
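If you do go the config route, one common workaround along those lines is to force the appSettings section to be re-read before each use, so edits to the .config file are picked up without a restart. This is only a sketch (and my guess at what that thread describes); the key names are made up:

    using System.Configuration;

    public static class ZipSettings
    {
        public static string GetSourceFolder()
        {
            // Drop the cached copy of appSettings so the next read hits the file again.
            ConfigurationManager.RefreshSection("appSettings");
            return ConfigurationManager.AppSettings["ZipSourceFolder"];
        }

        public static string GetTargetFolder()
        {
            ConfigurationManager.RefreshSection("appSettings");
            return ConfigurationManager.AppSettings["ZipTargetFolder"];
        }
    }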
I don't fully understand what you're trying to say, but you define what the interface to your service is when you make it. If the operations you define take in variables, then you can pass data from your application to the service.
From what you described, just create operations in the service to do those three things you listed, then call them from a button click in your UI code.
WCF would be fine for this; here's a basic introduction to it: http://msdn.microsoft.com/en-us/library/bb332338.aspx.
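To make that concrete, here is a rough sketch of what the contract, the service-side host, and the GUI-side call could look like. All of the names, the net.pipe address, and the operations are illustrative, not a prescribed design:

    using System;
    using System.ServiceModel;
    using System.ServiceProcess;

    // Contract shared between the service and the GUI.
    [ServiceContract]
    public interface IZipControl
    {
        [OperationContract] void SetSourceFolder(string path);
        [OperationContract] void SetTargetFolder(string path);
        [OperationContract] void StartZip(DateTime from, DateTime to);
        [OperationContract] bool IsBusy();
    }

    // Service-side implementation; the bodies would update the zipping component.
    public class ZipControl : IZipControl
    {
        public void SetSourceFolder(string path) { /* update source path */ }
        public void SetTargetFolder(string path) { /* update target path */ }
        public void StartZip(DateTime from, DateTime to) { /* queue a manual run */ }
        public bool IsBusy() { return false; /* report whether a run is in progress */ }
    }

    // Inside the Windows service: host the endpoint over named pipes
    // (a good fit when the GUI runs on the same machine).
    public partial class ZipWindowsService : ServiceBase
    {
        private ServiceHost _host;

        protected override void OnStart(string[] args)
        {
            _host = new ServiceHost(typeof(ZipControl), new Uri("net.pipe://localhost/ZipService"));
            _host.AddServiceEndpoint(typeof(IZipControl), new NetNamedPipeBinding(), "control");
            _host.Open();
        }

        protected override void OnStop()
        {
            if (_host != null) _host.Close();
        }
    }

    // GUI side, e.g. in a button click handler.
    public static class ZipServiceProxy
    {
        public static void ChangeSourceFolder(string newPath)
        {
            var factory = new ChannelFactory<IZipControl>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/ZipService/control"));
            IZipControl proxy = factory.CreateChannel();
            try
            {
                proxy.SetSourceFolder(newPath);
            }
            finally
            {
                ((IClientChannel)proxy).Close();
                factory.Close();
            }
        }
    }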
Let me give some background for everybody before I get to my problem. My company hosts websites for many clients, and we also contract some of the work to another company.
So when we first set up a website with all the information for one of our clients, we pass that information to the company we contracted, so all three of us have the same data. The problem is that once the site is up and running, our clients will change some data, and whenever they do, we should be able to update our contracted company.
The way we transfer data to the contracted company is by using a web service (HTTP POST, XML data). Now my question is: what is the best way to write a program which sends updated data to the contracted company every time our clients change some data?
1) Write a Windows service with a timer in my code that, every 30 minutes or so, connects to the database, finds all changes, and sends them to the contracted company
2) Write the same code as #1 (without the timer) but make it a simple program and let the Windows scheduler wake it every 30 minutes
3) Any other suggestion you may have
Technologies available to me are VS 2008 and SQL Server 2005.
A scheduled task is the way to go. Jon wrote up a good summary of why services are not well suited for this sort of thing: http://weblogs.asp.net/jgalloway/archive/2005/10/24/428303.aspx
A service is easy to create and install and is more "professional" feeling so why not go that way? Using a non-service EXE would also work of course and would be slightly easier to get running (permissions, etc.) but I think the difference in setup between the two is nearly negligible.
One possible solution would be to add a timestamp column to your data tables.
Once this is done, you can keep one entry per table recording the last time your contracted company collected data. They can pull all records modified since that time and update their records accordingly.
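Roughly, the query would look like the following; the table and column names (SyncState, SiteData, LastModifiedUtc) are made up for illustration:

    using System;
    using System.Data.SqlClient;

    public static class ChangeReader
    {
        public static void SendChangesSince(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // Read the last time the contracted company collected data for this table.
                DateTime lastCollected;
                using (var cmd = new SqlCommand(
                    "SELECT LastCollectedUtc FROM SyncState WHERE TableName = 'SiteData'", conn))
                {
                    lastCollected = (DateTime)cmd.ExecuteScalar();
                }

                // Pull only the rows modified since then.
                using (var cmd = new SqlCommand(
                    "SELECT * FROM SiteData WHERE LastModifiedUtc > @since", conn))
                {
                    cmd.Parameters.AddWithValue("@since", lastCollected);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Build the XML payload and POST it to the contracted company here.
                        }
                    }
                }
            }
        }
    }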
A Windows Service is more self-contained, and you can easily configure it to start automatically when the OS starts up. You might also need to create additional configuration options, as well as some way to trigger the synchronization immediately.
It will also give you more room to grow your functionality for the service in the future.
A standalone app should be easier to develop, though you are then relying on the Windows scheduler to always execute the task. My experience has been that it is easier to mess things up with the Windows scheduler and have it not run, for example when you reboot the OS but no user has logged in.
If you want a more professional approach go with the service, even though it might mean a little bit more work.
A Windows service makes more sense in this case. Think about what happens after your server is restarted:
With a Windows Application you need to have someone restart the application, or manually copy a shortcut to the startup folder to make sure the application gets launched
OR,
With a Windows Service you set it to start automatically and forget about it. When the machine reboots your service starts up and continues processing.
One more consideration: what happens when there is an error? A Windows application would likely show an error dialog and wait for input before continuing, whereas a service would log the error in the event log and carry on.
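For example, the service's work cycle might just catch, log to the event log, and move on; this is only an illustrative sketch with placeholder names:

    using System;
    using System.Diagnostics;
    using System.ServiceProcess;

    public partial class SyncService : ServiceBase
    {
        private void DoWorkCycle()
        {
            try
            {
                // ... the actual work goes here ...
            }
            catch (Exception ex)
            {
                // ServiceBase.EventLog writes under the service's own event source.
                EventLog.WriteEntry(ex.ToString(), EventLogEntryType.Error);
                // Swallow the error and let the next cycle try again.
            }
        }
    }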
I have a requirement to read data from a table (SQL 2005) and send that data to another application every 5 seconds. I am looking for the best approach to do this.
Right now I am planning to write a console application (.NET and C#) which will read the data from SQL Server 2005 (a QUEUE table which is filled by different applications) and send it to the other application through TCP/IP (a central server), and to run that console application as a scheduled task every 5 seconds. I am assuming the scheduled task will discard a new run event if the task is already running (to avoid concurrent executions).
Has anybody come across a similar situation? Please share your experience and advise me on the best approach.
Thanks in advance for your valuable time spent on my request.
-Por-hills-
We have done similar work. If you are going to query a SQL database every 5 seconds, be sure to use a stored procedure that is optimized to be very fast. It should not update data unless absolutely necessary. This approach is typically called 'polling', and I've found that it is acceptable if your SQL Server is not otherwise bogged down with too many other calls.
In approaches we've used, a Windows Service that does the polling works well.
How you communicate results to another app depends on what that app is doing, what type of interface you can make into it, and how quickly you need the results. The WCF class libraries from Microsoft provide many workable approaches for real-time communication. My preference is to write to the application's database and then have the application read the data (if that works for the app). If you need something real-time, WCF is the way to go, and I'd suggest using a stateless protocol like HTTP (with standard HTTP POSTs) if a response time under 5 seconds is required, or TCP/IP if sub-second response time is required.
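A sketch of the polling call itself might look like the following; the stored procedure name is made up:

    using System.Data;
    using System.Data.SqlClient;

    public static class QueuePoller
    {
        public static DataTable ReadPendingItems(string connectionString)
        {
            var table = new DataTable();
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.GetPendingQueueItems", conn))
            {
                // Keep the per-poll query as cheap as possible; a stored procedure
                // that only reads (no updates) is ideal here.
                cmd.CommandType = CommandType.StoredProcedure;
                using (var adapter = new SqlDataAdapter(cmd))
                {
                    adapter.Fill(table);
                }
            }
            return table;
        }
    }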
Since I assume your central storage is also SQL 2005, have you considered using what SQL Server 2005 offers out of the box to achieve your requirements? Rather than poll every 5 seconds, marshal and unmarshal TCP/IP, implement authentication and authorization for the TCP/IP pipe, scale TCP transmission with boxcarring, manage message acknowledgments and retries, deal with central site availability, fragment large messages, implement fairness in transmission, and so on, why not simply use Service Broker? It does all you need and more, out of the box, already tested and already tuned for performance and scalability.
Getting reliable messaging right is not trivial, and you should focus your efforts on meeting your business specifics, not reinventing the wheel.
I would recommend writing a Windows Service (since you are using C#) that has a timer which fires every 5 seconds. That way you won't be starting and stopping an application all the time, it can run even when no one is logged into the machine, and it will start automatically when the machine is restarted.
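Something along these lines, as a sketch only (class and method names are placeholders):

    using System.ServiceProcess;
    using System.Timers;

    public class QueueForwarderService : ServiceBase
    {
        private Timer _timer;

        protected override void OnStart(string[] args)
        {
            _timer = new Timer(5000);          // fire every 5 seconds
            _timer.AutoReset = true;
            _timer.Elapsed += (s, e) => ForwardQueuedRows();
            _timer.Start();
        }

        protected override void OnStop()
        {
            _timer.Stop();
            _timer.Dispose();
        }

        private void ForwardQueuedRows()
        {
            // Read the QUEUE table and push the rows to the central server here.
            // If a cycle can take longer than 5 seconds, add a guard so runs don't overlap.
        }
    }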
For one of my projects, I needed to do something periodically. I opted for a service and set up a timer that takes care of reading the data. You might consider that solution. It has worked well for me.
I suggest creating a Windows service rather than an application, and performing the timing yourself: create a timer and execute one step on each timer event. For the communication you have many choices; I would consider using standard technologies like a web service or Windows Communication Foundation.
Besides this custom solution, I would evaluate whether the task can be solved using Microsoft Integration Services.
Finally, another question comes to mind: why do you need this application at all? Why don't the application(s) consuming the data query the database themselves? Is the expensive polling required? Is it possible for the data producers to signal the availability of new data directly to the data consumers?
I am not sure about the details of your project, specifically related to security, but maybe it would be better to create an SSIS package and schedule it as a job?
I need to process large image files into smaller image files. I would like to distribute the work to many "slave" servers, rather than tasking my main server with this. I am using Windows Server 2005/2008, C#, and ASP.NET. I have a lot of web application development experience but have not developed distributed systems. I had a notion that this could be designed as follows:
1) Files would be placed in a shared network drive
2) Slave servers would periodically poll the drive for new content
3) Slave servers would rename newly found files to something like UNPROCESSED_appIDXXXX_jidXXXXX_photoidXXXXX.tif and begin processing that file.
4) Other slave servers would avoid trying to process files that are already being worked on by examining the file name, i.e. if something has been renamed with the "UNPROCESSED" prefix they will not attempt to process it.
I am wondering a few things:
1) Will there be issues with two slave servers trying to "grab" and rename the file at once, or will Windows Server automatically lock the file? (See the sketch after these questions.)
2) What do you think the best mechanism for notification of new content to process should be? One simple idea is to write a basic ASPX page on each slave system and have it run on a timer. A better idea might be to write a Windows Service that uses FileSystemWatcher and have it run on each slave system. A third idea is to have a central server somehow dispatch instructions to a given slave server to attempt a processing job, but I do not know of ways of invoking that kind of communication beyond a very hackish approach of having the master server pass a message via HTTP.
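To make points 3 and 4 concrete, here is roughly what I have in mind for "claiming" a file by renaming it. The names and prefix are illustrative, and I'm assuming the rename succeeds for exactly one server while the others get an IOException:

    using System.IO;

    public static class FileClaimer
    {
        // Returns the claimed path, or null if another slave got there first.
        public static string TryClaim(string originalPath, string machineName)
        {
            string claimedPath = Path.Combine(
                Path.GetDirectoryName(originalPath),
                "INPROCESS_" + machineName + "_" + Path.GetFileName(originalPath));
            try
            {
                // File.Move either succeeds or throws; whichever slave renames first wins.
                File.Move(originalPath, claimedPath);
                return claimedPath;
            }
            catch (IOException)
            {
                // Someone else renamed (claimed) the file between our scan and the move.
                return null;
            }
        }
    }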
I'd much appreciate any guidance you have to offer.
Cheers,
-KF
If you don't want to go all the way with a compute-cluster-type solution, you should consider having a job manager running somewhere that will parcel out the work. That way, when a server becomes available to do work, it asks the job manager for a new bit of work to do. It can then tell the job manager when it has finished, and the job manager can inform your "client" when the work on the whole job is complete. That way, it's easy to register work and know when it's complete, and the job manager can parcel out the work without worrying about race conditions on file renames. :)
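A tiny in-memory sketch of that job manager idea (in practice you would expose it to the slave servers over something like WCF or HTTP; all names here are made up):

    using System.Collections.Generic;

    public class JobManager
    {
        private readonly Queue<string> _pending = new Queue<string>();      // image paths waiting
        private readonly HashSet<string> _inProgress = new HashSet<string>();
        private readonly object _gate = new object();

        public void AddJob(string imagePath)
        {
            lock (_gate) { _pending.Enqueue(imagePath); }
        }

        // A slave calls this when it is free; returns null when there is nothing to do.
        public string RequestWork()
        {
            lock (_gate)
            {
                if (_pending.Count == 0) return null;
                string job = _pending.Dequeue();
                _inProgress.Add(job);
                return job;
            }
        }

        // A slave calls this when it has finished a job.
        public void ReportDone(string imagePath)
        {
            lock (_gate) { _inProgress.Remove(imagePath); }
        }
    }

Because only the job manager hands out work, no two slaves can ever be given the same file, which sidesteps the rename race entirely.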