Suggestions for developing Windows Service with C#

I am developing a distributed application where each distributed site will run a two-tier WinForms-based client-server application (there are specific reasons for not going with a web architecture). The local application server will be responsible for communicating with a central server, and the local workstations will in turn communicate with the local server.
I am about to develop one or more Windows services to achieve the following objectives. I presume some of them will run on the local server while others will run on the local workstations. The central server is just a SQL Server database, so no services need to be developed on that end; the local server will simply fire DML statements against it.
Windows Service Objectives
Synchronise local data with data from central server (both end SQL Server) based on a schedule
Take automatic database backup based on a schedule
Automatically download software updates from vendor site based on a schedule
Enforce software license management by monitoring connected workstations
Intranet messaging between workstations and/or local server (like a little richer NET SEND utility)
Manage deployment of updates from local server to workstations
Though I have something in mind, I am a little unsure about these decisions and need to cross-check my thoughts with experts like you. My queries are as follows:
How many services should run on the local server?
Please specify individual service roles.
How many services should run on the workstations?
Please specify individual service roles.
Since some services will need to query the local SQL database, where should the services read the connection string from? registry? Config file? Any other location?
If one service is allotted multiple tasks, e.g. a scheduler service, what would be the best/optimal way, with respect to performance, of managing the scheduled tasks?
a. Build .NET class Library assemblies for the scheduled tasks and spin them up on separate threads
b. Build .NET executable assemblies for the scheduled tasks and fire the executables
Though it may seem like too many questions for a single post, they are all related. I will share my own views with the forum later, as I do not want to bias the forum's views/suggestions in any way.
Thanking all of you in advance.

How many services should run on the local server?
It depends on many factors, but I would generally go for as few services as possible, because maintaining and monitoring one service is less work for the admin than managing many.
How many services should run on the workstations?
I would use just one, because that makes it a single point of failure: the user will notice if the service on the workstation is down, and in that case only one service needs to be restarted.
Since some services will need to query the local SQL database, where should the services read the connection string from? registry? Config file? Any other location?
I would generally put the connection string in the app.config. The .NET Framework also offers facilities to encrypt the connection string.
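For example, a minimal sketch (the connection string name "LocalDb" and its value are placeholders; the encryption uses the DPAPI provider built into the .NET Framework):

    // app.config:
    // <connectionStrings>
    //   <add name="LocalDb" connectionString="Data Source=.;Initial Catalog=SiteDb;Integrated Security=True" />
    // </connectionStrings>
    using System.Configuration;   // reference System.Configuration.dll

    string connStr = ConfigurationManager.ConnectionStrings["LocalDb"].ConnectionString;

    // Encrypt the connectionStrings section in place the first time the service runs:
    Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
    ConfigurationSection section = config.GetSection("connectionStrings");
    if (!section.SectionInformation.IsProtected)
    {
        section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
        config.Save();
    }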
If one service is allotted multiple tasks, e.g. a scheduler service, what would be the best/optimal way, with respect to performance, of managing the scheduled tasks? a. Build .NET class library assemblies for the scheduled tasks and spin them up on separate threads. b. Build .NET executable assemblies for the scheduled tasks and fire the executables.
b. is easier to design and implement, and it gives you the possibility of using the Windows scheduler. In this case you will need to think about the problem of the Windows scheduler starting the executable when the previous run has not yet finished. This results in two processes which may do the same work. If this is not a problem, then stay with that design. If it is a problem, consider solution a.
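A common guard against overlapping runs is a named mutex at the top of the executable; a minimal sketch (the mutex name and DoScheduledWork are placeholders):

    using System.Threading;

    static class Program
    {
        static void Main()
        {
            bool createdNew;
            using (var mutex = new Mutex(true, @"Global\MyScheduledTask", out createdNew))
            {
                if (!createdNew)
                    return;            // a previous run is still active -- exit quietly
                DoScheduledWork();     // hypothetical: the actual task
                mutex.ReleaseMutex();
            }
        }

        static void DoScheduledWork() { /* ... */ }
    }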
For solution a., have a look at Quartz.NET, which offers a lot of advanced scheduling capabilities. Also consider using application domains instead of threads to make the service more robust.
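A minimal Quartz.NET sketch, assuming the Quartz.NET 2.x API (the job class, identities and cron expression are placeholders):

    using Quartz;
    using Quartz.Impl;

    public class BackupJob : IJob
    {
        public void Execute(IJobExecutionContext context)
        {
            // perform the scheduled backup here
        }
    }

    // In the service's OnStart:
    IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
    scheduler.Start();

    IJobDetail job = JobBuilder.Create<BackupJob>()
        .WithIdentity("backupJob")
        .Build();

    ITrigger trigger = TriggerBuilder.Create()
        .WithIdentity("nightlyTrigger")
        .WithCronSchedule("0 0 2 * * ?")   // every day at 02:00
        .Build();

    scheduler.ScheduleJob(job, trigger);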
If you don't get admin rights on the local server, think about means to restart the service without the service control manager. Give some privileged user the possibility to re-initialize the service from a client machine.
Also think about ways to restart just one part of a service if one service is doing multiple tasks. For instance, the service is behaving strangely because the update task is running wrong. If you need to restart the whole service to repair this, all users may become aware of it. Provide some means to re-initialize only the update task.
Most important: don't follow any of my advice if you find an easier way to achieve your goals. Start with a simple design. Don't overengineer! Solutions with (multiple) services and scheduling tend to explode in complexity with each added feature, especially when you need to let the services talk to each other.

I don't think there is one answer to this; some would probably use just one service, while others would modularize every domain into a service and add enterprise transaction services, etc. The question is more of an SOA one than a C# one, and you might consider reading up on some SOA books to find your pattern.

This does not answer your questions specifically, but one thing to consider is that you can run multiple services in the same process. The ServiceBase.Run method accepts an array of ServiceBase instances to start. They are technically different services and appear as such in the service control manager, but they are executed inside the same process. Again, just something to consider in the broader context of your questions.
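For example (the three service classes here are hypothetical ServiceBase subclasses):

    using System.ServiceProcess;

    static class Program
    {
        static void Main()
        {
            // Three logically separate services, one hosting process:
            ServiceBase[] servicesToRun =
            {
                new SyncService(),
                new BackupService(),
                new MessagingService()
            };
            ServiceBase.Run(servicesToRun);
        }
    }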

Related

C# automation over RDP

I have a Windows network (not connected to a domain) and I need to provide some automation on each PC at a certain time of the day. There are several tasks: launching executables, managing the file system, transferring files. All these actions must be implemented via RDP, using C#. What is the common approach to achieve this? I don't have experience using RDP within software, so are there .NET classes or free libraries I can use to get RDP functionality in my software? Thank you!
All the tasks you have listed depend much more on the security settings of the machines within your network and the logged-in user's privileges than on the use of RDP.
Within a Windows domain, tasks like yours are usually delegated to Active Directory administration and policies.
In the case of a non-domain Windows network, you will need a mechanism with the following configuration:
A client installed on each particular machine under proper permissions. The client should implement a subscriber pattern.
A server installed on a "commander" machine. The server should implement a publisher pattern.
There are plenty of ready-made solutions that implement this concept of content distribution and the triggering of specific scripts. I think that your investment in researching and evaluating such tools will be much more time- and cost-effective than writing an app that "uses RDP functionality".
But if there is a reason that prevents the usage of third parties, I would go for the implementation of a WCF service that is installed on all clients. This service should be "trained" to do all your stuff on the client. On the server side you will need an application or a service that publishes events for the clients or triggers known client methods.
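A minimal sketch of what such a client-side WCF contract might look like (all names, operations and addresses here are hypothetical):

    using System.ServiceModel;

    [ServiceContract]
    public interface IAutomationClient
    {
        [OperationContract]
        void LaunchExecutable(string path, string arguments);

        [OperationContract]
        void TransferFile(string sourceUrl, string destinationPath);
    }

    public class AutomationClient : IAutomationClient
    {
        public void LaunchExecutable(string path, string arguments)
        {
            System.Diagnostics.Process.Start(path, arguments);
        }

        public void TransferFile(string sourceUrl, string destinationPath)
        {
            using (var web = new System.Net.WebClient())
                web.DownloadFile(sourceUrl, destinationPath);
        }
    }

    // Hosted on each client machine, e.g. inside a Windows service:
    // var host = new ServiceHost(typeof(AutomationClient),
    //     new Uri("net.tcp://localhost:8000/automation"));
    // host.AddServiceEndpoint(typeof(IAutomationClient), new NetTcpBinding(), "");
    // host.Open();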

Single application instance per network

We have developed some long running C# console applications which will be run by Windows Scheduled tasks.
These applications might be run on many different server machines on intranet/extranet.
We cannot ensure that they run on one machine because each application might need access to some resources, which are available only on a certain machine.
Still, all these applications are using a common WCF service to access the database.
We need to ensure that there is only one instance of each of our applications running at any moment.
As the apps might be on different extranet computers, we cannot use per-machine mutexes or MSMQ.
I have thought about the following solution: a WCF mutex service with a timeout. When one app runs, it checks whether it has already been launched (maybe on another machine) and then (in a dedicated thread) periodically pings the WCF mutex service to update the timestamp (if the ping fails, the app exits immediately). If the timestamp expires, it means the application has crashed, so it can be run again.
I would like to know if this "WCF mutex" is an optimal solution for my problem. Maybe there are already some third-party libraries which have implemented such functionality?
Your mutex solution has a race condition.
If an app on a different server checks the timestamp in the window after the timestamp has expired, but before the current holder has updated it, you will have two instances running.
I'd probably go the opposite route. I'd have a central monitoring service. This service would continually monitor the health of the system. If it detects a service went down, it would restart it on either that machine or a different one.
You may want to bite the bullet and go with a full Enterprise Service Bus. Check the Wikipedia article for ESBs. It lists over a dozen commercial and open source systems.
How about a file lock on a network location?
If you can create/open the file with exclusive read/write access, then yours is the only app running. If this running app subsequently crashes, the lock is released automatically by the OS.
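A minimal sketch of that approach (the UNC path is a placeholder):

    using System.IO;

    static class Program
    {
        private static FileStream lockFile;   // held open for the process lifetime

        static void Main()
        {
            try
            {
                // Exclusive open: fails if another instance already holds the file.
                lockFile = new FileStream(@"\\server\share\app.lock",
                    FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None);
            }
            catch (IOException)
            {
                return;   // another instance is running -- exit
            }
            // ... do the long-running work; the OS releases the lock
            // automatically if this process crashes ...
        }
    }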
Tim
Oops, just re-read the question and saw "extranet", ignore me!

What is the possibility of making an ASP.NET web application update itself?

I have a web application that I would like to check for updates, then download and install them.
I know there are already some updater frameworks that work for Windows applications, but is it possible for web applications?
The first things that came to my mind when thinking about this are:
File permissions (I might not be able to replace all my application files due to file permissions)
Also, touching the web.config or the bin folder will cause the application to restart.
I also thought about executing an exe from my web application that does the job, but I don't know if it could get shut down by a restart of the web application.
I would appreciate any ideas or solution to that case.
Thanks
Take a look at WebDeploy
It is meant to ease exactly such tasks, where you want to deploy a published build to a production server.
Rather than having your server check for updates and update itself, it would be simpler to just push updates to it when you have them.
Web Deploy allows you to efficiently synchronize sites, applications or servers across your IIS 7.0 server farm by detecting differences between the source and destination content and transferring only those changes which need synchronization. The tool simplifies the synchronization process by automatically determining the configuration, content and certificates to be synchronized for a specific site. In addition to the default behavior, you still have the option to specify additional providers for the synchronization, including databases, COM objects, GAC assemblies and registry settings.
Administrative privileges are not required in order to deploy Web applications.
Server administrators have granular control over the operations that can be performed and can delegate tasks to non-administrators.
This requires you to be running IIS7, though.
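For reference, a typical one-way sync from the command line looks something like this (the site and server names are placeholders):

    msdeploy.exe -verb:sync ^
        -source:iisApp="Default Web Site/MyApp" ^
        -dest:iisApp="Default Web Site/MyApp",computerName=ProductionServer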

How do I host an Azure worker role locally/on premise?

Some background....
We're venturing into Azure for the first time and are trying to do it in baby steps. For now, our first apps are going to be worker roles monitoring queues to process requests (such as sending an email or performing some screen scraping) and we'll just insert into the queues from our on-premise MVC app and WCF services. We'll later move the MVC app and WCF services to Azure.
Our development workflow essentially goes like this (somewhat modified in unimportant ways):
Develop locally and on some shared servers. Check into source control.
Every 15 minutes, build server builds and deploys to a "development" environment (doubles as CI and covers in case we need to hit a "development" environment other than local)
Technical testers manually trigger build server to deploy the last-successful "Dev" build to the Test environment (or alternately, the previously-deployed Test build, including/excluding database).
Technical testers and business testers manually trigger build server to deploy the last-successful "Test" build (or alternately, the previously-deployed Test build, including/excluding database) to the QA environment.
At some point we merge the changeset that QA approves for deployment.
Later our production build server deploys this version to staging and then later to our production environments (we host it N times in parallel/independent environments).
As you can tell, we have a number of internally-hosted versions of our app for internal support people to hit against prior to reaching production. I would like for these to have a reasonably low level of dependence upon Azure. I don't need to completely sever the dependence, so we'll continue to use Azure Queues and perhaps some other mechanisms just because they're easy to continue using, but we don't want our build server to have to deploy to Azure for every one of these environments (and alternatively pay for all that hosting).
So how can we reasonably host our Worker Roles on-premise in a way where we're actually testing the code that gets deployed to Azure?
One option that's been suggested is that we create the worker role as a wrapper/facade and do all the real work inside a class library, which was our plan. However, the follow-up that would allow us to "host" this would be to create a second wrapper/facade application that performs the same work as the worker role, just in a way where we can run it as a scheduled task or a Windows service. Ultimately, I don't like this option because the entire worker role project is never tested until it hits staging.
Is it possible to do something similar, where we create a second wrapper/facade application that, instead of calling the class library directly, actually references the worker role and calls its Run() function?
Do you reckon the Azure emulator might help you? These are the differences between the real Azure provider and the emulator.
Having a facade for your worker role seems reasonable. And use adaptors to adapt any possible cloud (or other hosting) technology to that facade? Just trying to throw in some ideas. I actually used this approach before, but it was a "personal" project.
Use PowerShell to configure your roles and whatnot.
Configure your Azure emulator like this.
The facade approach is the best one to adopt, to be honest.
When you have deployments that ultimately have dependencies on the supporting infrastructure, it's exceptionally difficult to fully test until you deploy to an identical, or comparable, infrastructure. That's surely the case with an Azure Worker Role.
By decoupling the functional aspects of your application from the infrastructure touch-points, you can spend your effort ensuring that your code behaves as it should, prove that the facade behaves as it should, then confidence test the final combination.
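To illustrate the decoupling, a minimal sketch (all class names are hypothetical; RoleEntryPoint comes from the Azure SDK's Microsoft.WindowsAzure.ServiceRuntime):

    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // All the real work lives in a plain class library:
    public class QueueProcessor
    {
        public void ProcessForever(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                // read a queue message, send the email, do the screen scraping, ...
            }
        }
    }

    // The worker role is only a thin wrapper around it:
    public class WorkerRole : RoleEntryPoint
    {
        private readonly CancellationTokenSource cts = new CancellationTokenSource();

        public override void Run()
        {
            new QueueProcessor().ProcessForever(cts.Token);
        }

        public override void OnStop()
        {
            cts.Cancel();
            base.OnStop();
        }
    }

    // An on-premise host (console app or Windows service) reuses the same library:
    // new QueueProcessor().ProcessForever(CancellationToken.None);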
There's always some element of compromise to this effect unless your test environments are identical to your production environments.
And it's what the Azure staging deployment is for; that last level of confidence testing before you switch to production.
You can create an extra-small deployment, purely for your later stages of testing. You pay for the time that the role is deployed, so if you delete your deployment once your testing is complete, you can minimise the cost.
Lastly (and the facade pattern is an example of this), design for testability. Factor your code to maximise the amount that can be tested before deployment and you minimise your risk in the later stages of testing.

Multiple instances of an ASP.net application from a single web site configuration in IIS

As the title suggests I'd like to create a single Web Site in IIS that creates multiple instances of an ASP.net application based on the requested Host.
So that all instances are running the same codebase, but each instance has its own Application object, Session collection, etc.
For example :
host1.domain.tld/default.aspx -> this.Application["foo"] = "host1"
host2.domain.tld/default.aspx -> this.Application["foo"] = "host2"
host3.domain.tld/default.aspx -> this.Application["foo"] = "host3"
I know I can configure IIS to listen to a specific IP address, and set the DNS for host(1|2|3).domain.tld to point at this IP address. Then use Global.asax to check the requested host to setup host specific settings. But the application will still be running as a single instance on the server.
I'd rather have multiple instances of the application running for each host so that their runtimes are fully separated. It would also be nice if I could have them in separate Application Pools too, but that's not so important.
Of course, I could add the sites individually into the IIS on the servers, but there are some 1600 instances that will need to be configured and this will be very time consuming to do and difficult to manage.
Being able to set up a single instance on a number of servers, and then control the load balancing via DNS configuration or filtering on the firewalls, would be preferable, as both of these can be easily controlled programmatically.
FYI - the ASP.NET version in use is 4.0 and IIS is running on Windows Server 2008.
Any suggestions would be great.
Many thanks
The simplest and most robust way to do this would be to set up individual IIS sites. I know you don't want to do this because it would be very time consuming and definitely difficult to manage.
However, you've already created a website so now perhaps it's time to create a management tool for instances of that website. As there are 1600 instances that you want to configure, there's a fairly good chance that you already have details of those 1600 instances stored somewhere, such as a database or a spreadsheet.
So:
Get the data about the 1600 instances into a usable format; a SQL Server database (Express, or paid-for!) would probably be ideal.
Investigate the IIS7 provisioning APIs
Put together a tool that allows you to create all 1600 instances from the data you have about them, automatically / in batches, via the IIS7 API (a minimal sketch follows below).
Maintain the tool and expand it, ready for the inevitable changes that will be required when you need to add or remove instances.
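A minimal sketch of the provisioning step, using the managed IIS7 API in Microsoft.Web.Administration (the host names, bindings and physical path are placeholders):

    using Microsoft.Web.Administration;   // reference Microsoft.Web.Administration.dll

    using (var serverManager = new ServerManager())
    {
        foreach (var host in hostNames)   // hypothetical: loaded from your instance database
        {
            // One application pool per instance keeps the runtimes separated:
            serverManager.ApplicationPools.Add(host);

            // Every site points at the same physical codebase:
            Site site = serverManager.Sites.Add(
                host, "http", "*:80:" + host, @"C:\inetpub\sharedcodebase");
            site.ApplicationDefaults.ApplicationPoolName = host;
        }
        serverManager.CommitChanges();
    }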
Don't forget that putting your own tool together for a task such as this gives you a lot of flexibility, although there may be tools out there for this purpose that are worthy of investigation. For that (i.e. a non-programmatic solution), I'd suggest asking at http://www.serverfault.com
