How do I host an Azure worker role locally/on-premises? - C#

Some background....
We're venturing into Azure for the first time and are trying to do it in baby steps. For now, our first apps are going to be worker roles monitoring queues to process requests (such as sending an email or performing some screen scraping) and we'll just insert into the queues from our on-premise MVC app and WCF services. We'll later move the MVC app and WCF services to Azure.
Our development workflow essentially goes like this (somewhat modified in unimportant ways):
Develop locally and on some shared servers. Check into source control.
Every 15 minutes, the build server builds and deploys to a "development" environment (this doubles as CI and covers the case where we need to hit a "development" environment other than local)
Technical testers manually trigger build server to deploy the last-successful "Dev" build to the Test environment (or alternately, the previously-deployed Test build, including/excluding database).
Technical testers and business testers manually trigger build server to deploy the last-successful "Test" build (or alternately, the previously-deployed Test build, including/excluding database) to the QA environment.
At some point we merge the changeset that QA approves for deployment.
Later our production build server deploys this version to staging and then later to our production environments (we host it N times in parallel/independent environments).
As you can tell, we have a number of internally-hosted versions of our app for internal support people to hit against prior to reaching production. I would like for these to have a reasonably low level of dependence upon Azure. I don't need to completely sever the dependence, so we'll continue to use Azure Queues and perhaps some other mechanisms just because they're easy to continue using, but we don't want our build server to have to deploy to Azure for every one of these environments (and pay for all that hosting).
So how can we reasonably host our Worker Roles on-premise in a way where we're actually testing the code that gets deployed to Azure?
One option that's been suggested is that we create the worker role as a wrapper/facade and do all the real work inside a class library, which was our plan. However, the follow-up to allow us to "host" this would be to create a second wrapper/facade application that performs the same work as the worker role, just in a way where we can run it as a scheduled task or a Windows service. Ultimately, I don't like this option because an entire project is never tested until it hits staging.
Is it possible to do something similar where, instead of calling the class library, the second wrapper/facade application actually references the worker role and calls its Run() function?
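For what it's worth, a RoleEntryPoint is just a class, so a console host can new it up and drive the same OnStart()/Run()/OnStop() lifecycle the Azure fabric uses. A minimal sketch, assuming your role's Run() loop only touches things that also work outside the Azure runtime (e.g. storage accessed via connection strings rather than RoleEnvironment):

```csharp
using System;

class Program
{
    static void Main()
    {
        // WorkerRole is the same RoleEntryPoint subclass that gets deployed to Azure.
        var role = new WorkerRole();

        if (!role.OnStart())
            throw new InvalidOperationException("Worker role failed to start.");

        try
        {
            // Blocks and processes queue messages, exactly as it does in the cloud.
            role.Run();
        }
        finally
        {
            role.OnStop();
        }
    }
}
```

Anything in the role that calls RoleEnvironment (configuration, local resources) would throw outside the emulator or the cloud, which is another argument for pushing those touch-points behind a facade.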

Do you reckon the Azure emulator might help you? There are documented differences between the real Azure environment and the emulator.
Having a facade for your worker role seems reasonable. And use adapters to adapt any possible cloud (or other hosting) technology to that facade? Just trying to throw in some ideas. I actually used this approach before, but it was a "personal" project.
Use PowerShell to configure your roles and whatnot.
Configure your Azure emulator like this.

The facade approach is the best one to adopt, to be honest.
When you have deployments that ultimately have dependencies on the supporting infrastructure, it's exceptionally difficult to fully test until you deploy to an identical, or comparable, infrastructure. That's surely the case with an Azure Worker Role.
By decoupling the functional aspects of your application from the infrastructure touch-points, you can spend your effort ensuring that your code behaves as it should, prove that the facade behaves as it should, then confidence test the final combination.
There's always some element of compromise to this effect unless your test environments are identical to your production environments.
And it's what the Azure staging deployment is for; that last level of confidence testing before you switch to production.
You can create an extra-small deployment, purely for your later stages of testing. You pay for the time that the role is deployed, so if you delete your deployment once your testing is complete, you can minimise the cost.
Lastly, design for testability; the facade pattern is one example of this. Factor your code to maximise the amount that can be tested before deployment and you minimise your risk in the later stages of testing.
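To make that concrete, here is a minimal sketch (the names are hypothetical) of what the decoupling might look like: the queue-processing logic in the class library depends only on a small interface, the worker role supplies an Azure-backed adapter, and an on-premise host or a unit test supplies a fake.

```csharp
// Hypothetical facade over the infrastructure touch-point.
public interface IWorkQueue
{
    string TryDequeue();            // returns null when the queue is empty
    void Complete(string message);  // acknowledge/delete the message
}

// Lives in the class library and is fully testable without Azure.
public class EmailProcessor
{
    private readonly IWorkQueue _queue;

    public EmailProcessor(IWorkQueue queue)
    {
        _queue = queue;
    }

    public bool ProcessNext()
    {
        var message = _queue.TryDequeue();
        if (message == null)
            return false;

        // ... send the email, do the screen scraping, etc. ...

        _queue.Complete(message);
        return true;
    }
}
```

The worker role's Run() loop then collapses to "resolve an IWorkQueue and call ProcessNext() in a loop", which is a small enough surface that staging-time confidence testing covers it.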

Related

What is the ideal method for creating a Windows application and service package?

I have a project I am working on where I need to create an app and service package for Windows. I would like the service process to run as SYSTEM or LOCALSYSTEM so that credentials are irrelevant. The application frontend will be installed and executable by any user on the machine. Data from the frontend application will be passed to the service - most likely paths to directories selected by users. Once started the service will listen for a command to do some action while accepting the aforementioned paths.
I'm using C# on the .NET platform and I've looked into creating a standalone service and a standalone application separately as well as creating a WCF service library and host application - that's as far as I've gotten.
All of these methods seem overly complex for what I am trying to achieve. What is the modern convention when attempting something like this? I'm willing and able to learn the best method for moving forward.
Edit: This was flagged duplicate. I'm not looking for information on HOW to communicate with a Windows service. That's remedial and not at all what I'm asking. I'm looking for validation that I'm on the right track and if I'm not, I'm looking for suggestions. I've been told that I'm on the right track and pointed towards named pipe binding.
A Windows Service is certainly an option for hosting WCF, although it can be a bit of a deployment nightmare. It really depends on your environment and on the capability and support of your system admins; I've had many clients where deploying a Windows service, since you need admin rights to install and update it, was simply not practical.
Console applications may sound like a terrible idea, but the practicality of being able to drop them on a share and run a PowerShell script to start them is very compelling.
But frankly, IIS hosting has the most advantages in my mind, as the product is designed for ease of deployment and uptime. And you can use any transport binding in IIS that you can use in a Windows Service or a console app.
As for the binding itself, named pipes are not really a popular option in many enterprise scenarios, as they are incompatible with anything but .NET. The same can be said for binary, which is one of the more performant bindings. The WSHttpBinding is probably the most popular binding in scenarios that involve unknown callers. WebHttpBinding is an interesting option as it's HTTP/REST based, although that requires further decoration of your operations, and honestly, if you're going that route you should really be using Web API.
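As a rough illustration of how little the hosting choice affects the service code, here is a minimal self-hosted sketch (the contract, address and names are made up for the example). The same ServiceHost code works in a console app or inside a Windows Service's OnStart, and the WSHttpBinding endpoint could equally be declared in IIS configuration:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IJobService
{
    [OperationContract]
    void QueuePath(string path);   // e.g. a directory selected in the frontend
}

public class JobService : IJobService
{
    public void QueuePath(string path)
    {
        // Hand the path over to the service's worker logic here.
    }
}

class Program
{
    static void Main()
    {
        // Note: listening on HTTP requires a URL ACL or admin rights.
        var host = new ServiceHost(typeof(JobService),
            new Uri("http://localhost:8733/JobService"));

        host.AddServiceEndpoint(typeof(IJobService), new WSHttpBinding(), "");

        host.Open();
        Console.WriteLine("Service listening. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```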

Unit Testing Web API using HttpServer or HttpSelfHostServer

I am trying to do some unit testing for a Web API project. I am going to simulate the Web API hosting environment. It seems like I could use either the in-memory host (HttpServer) or the self host (HttpSelfHostServer).
Just wondering what the differences are, which technology is good for what, and whether there are any limitations to those options.
You should use in memory host for end-to-end tests and then test the network connectivity of your environment separately.
For a number of reasons:
The in-memory host, as the name suggests, runs entirely in memory, so it will be much faster.
Self host needs to be run with elevated privileges, so your tests will need to be executed within the context of an "admin" identity. This is far from desirable. It is especially troublesome if you want to execute tests from, say, build scripts or from PowerShell, since, as a result, those processes would also have to be started with elevated privileges. Moreover, this will have to happen on every server you test on.
In self host you end up testing the given operating system's networking stack, which really is something that shouldn't be tested, since it might differ across different environments (development, staging, QA, production and so on). For example, a given port might not be available. As a result you might get dragged into unnecessary debugging efforts across different machines just to get the tests running.
Finally, testing using self-hosting still doesn't guarantee that the service will run correctly when web-hosted, and vice versa, so you might as well just test in memory.
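A minimal sketch of an in-memory test, assuming a standard ValuesController and whatever test framework you already use (NUnit attributes shown here). HttpServer derives from HttpMessageHandler, so HttpClient can call it directly with no sockets, ports or elevated privileges involved:

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;
using NUnit.Framework;

[TestFixture]
public class ValuesApiTests
{
    [Test]
    public void Get_returns_200()
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        // The request never leaves the process: HttpClient dispatches
        // straight into the Web API pipeline hosted by HttpServer.
        using (var server = new HttpServer(config))
        using (var client = new HttpClient(server))
        {
            var response = client.GetAsync("http://localhost/api/values").Result;
            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
        }
    }
}
```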

What is the possibility of making an ASP.NET web application update itself?

I have a web application that I would like to check for updates, then download and install them.
I know there are already some updater frameworks that work for Windows applications, but is it possible for web applications?
The first things that come to my mind when thinking about this are:
File permissions (I might not be able to replace all my application files due to file permissions)
Also, touching the web.config or the bin folder will cause the application to restart.
I also thought about executing an exe from my web application that does the job, but I don't know if it could get shut down because of a restart of the web application.
I would appreciate any ideas or solution to that case.
Thanks
Take a look at WebDeploy
It is meant to ease exactly this kind of task, where you want to push a publish to a production server.
Rather than having your server check for updates and update itself, it would be simpler to just push updates to it when you have them.
Web Deploy allows you to efficiently synchronize sites, applications or servers across your IIS 7.0 server farm by detecting differences between the source and destination content and transferring only those changes which need synchronization. The tool simplifies the synchronization process by automatically determining the configuration, content and certificates to be synchronized for a specific site. In addition to the default behavior, you still have the option to specify additional providers for the synchronization, including databases, COM objects, GAC assemblies and registry settings.
Administrative privileges are not required in order to deploy Web applications.
Server administrators have granular control over the operations that can be performed and can delegate tasks to non-administrators.
This requires you to be running IIS 7, though.

Approaches for (re)deploying code/bin files to (multiple) Windows Azure Virtual Machines

This question may not relate specifically to Azure Virtual Machines, but I'm hoping maybe Azure provides an easier way of doing this than Amazon EC2.
I have long-running apps running on multiple Azure Virtual Machines (i.e. not Azure Web Sites or [PaaS] Roles). They are simple console apps/Windows Services. Occasionally, I will do a code refresh and need to stop these processes, update the code/binaries, then restart these processes.
In the past, I have attempted to use PSTools (psexec) to remotely do this, but it seems like such a hack. Is there a better way to remotely kill the app, refresh the deployment, and restart the app?
Ideally, there would be a "Publish Console App" equivalent from within Visual Studio that would allow me to deploy the code as if it were an Azure Web Site, but I'm guessing that's not possible.
Many thanks for any suggestions!
There are a number of "correct" ways to perform your task.
If you are running a Windows Azure application, there is a simple guide on MSDN.
But if you have to do this with a regular console app, you have a problem.
The Microsoft way is to use WMI, a good technology for any kind of management of remote Windows servers. I suppose WMI should be OK for your purposes.
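For illustration, a rough WMI sketch (the machine name, credentials and process name are made up) that kills the old process on a remote VM and starts the updated binary via Win32_Process; this is roughly what psexec was doing for you, but over the standard management stack:

```csharp
using System.Management;   // add a reference to System.Management

class RemoteRestart
{
    static void Main()
    {
        var options = new ConnectionOptions { Username = "admin", Password = "..." };
        var scope = new ManagementScope(@"\\myvm.cloudapp.net\root\cimv2", options);
        scope.Connect();

        // Terminate any running instance of the app.
        var query = new ObjectQuery(
            "SELECT * FROM Win32_Process WHERE Name = 'MyWorker.exe'");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject process in searcher.Get())
                process.InvokeMethod("Terminate", null);
        }

        // ... copy the refreshed binaries here (file share, robocopy, etc.) ...

        // Start the updated binary.
        using (var processClass = new ManagementClass(
                   scope, new ManagementPath("Win32_Process"), null))
        {
            var inParams = processClass.GetMethodParameters("Create");
            inParams["CommandLine"] = @"C:\apps\MyWorker.exe";
            processClass.InvokeMethod("Create", inParams, null);
        }
    }
}
```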
And the last way: install Git on every Azure VM and write a simple server-side script, scheduled to run every 5 minutes, that updates the code from the repository, builds it, kills the old process and starts the new one. Push your update to the repository, and that's all.
Definitely a hack, but it works even for non-Windows machines.
One common pattern is to store items, such as command-line apps, in Windows Azure Blob storage. I do this frequently (for instance: I store all MongoDB binaries in a blob, zip'd, with one zip per version #). Upon VM startup, I have a task that downloads the zip from blob to local disk, unzips it to a local folder, and starts the mongod.exe process (this applies equally well to other console apps; see the sketch after this list). If you have a more complex install, you'd need to grab an MSI or other type of automated installer. Two nice things about storing these apps in blob storage:
Reduced deployment package size
No more need to redeploy entire cloud app just to change one component of it
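A minimal sketch of the startup-download step, assuming the classic Microsoft.WindowsAzure.Storage client library and .NET 4.5's ZipFile; the container name, blob name and paths are made up for the example:

```csharp
using System.IO;
using System.IO.Compression;                  // reference System.IO.Compression.FileSystem
using Microsoft.WindowsAzure.Storage;         // classic storage client library
using Microsoft.WindowsAzure.Storage.Blob;

class StartupDownloader
{
    static void Main()
    {
        // Normally read from configuration rather than hard-coded.
        string connectionString = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...";

        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("deployments");
        var blob = container.GetBlockBlobReference("mytool-1.2.zip");

        // Pull the zip down and unpack it next to where the app will run.
        string zipPath = Path.Combine(Path.GetTempPath(), "mytool-1.2.zip");
        blob.DownloadToFile(zipPath, FileMode.Create);
        ZipFile.ExtractToDirectory(zipPath, @"C:\apps\mytool-1.2");

        // Finally, start the console app from the unpacked folder.
        System.Diagnostics.Process.Start(@"C:\apps\mytool-1.2\mytool.exe");
    }
}
```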
When updating the console app: you can upload a new version to blob storage. Now there are a few ways to signal the VMs to update. For example:
Modify my configuration file (maybe I have a key/value pair referring to my app name + version number). When this changes, I can handle the event in my web/worker role, allowing my code to take appropriate action. This action could be to stop the exe, grab the new one from blob, and restart. Or, if it's more complex than that, I could even let the VM instance simply restart itself, clearing memory/temp files/etc. and starting everything cleanly.
Send myself some type of command to update the app. I'd likely use a Service Bus queue to do this, since I can have multiple subscribers on my "software update" topic (see the subscription sketch after this list). Each instance could subscribe to it and, when an update message shows up, handle it accordingly (maybe the message contains the app name and version number, like our key/value pair in the config). I could also use a Windows Azure Storage queue for this, but then I'd probably need one queue per instance (I'm not a fan of this).
Create some type of WCF service that my role instances listen to for an update command. Same problem as with Windows Azure Storage queues: it requires me to find a way to push the same message to every instance of my web/worker role.
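A rough sketch of the Service Bus option, assuming the classic WindowsAzure.ServiceBus package (Microsoft.ServiceBus.Messaging); the topic name, subscription name and message properties are made up for the example. Each instance owns its own subscription, so one published message fans out to every VM:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;   // classic WindowsAzure.ServiceBus package

class UpdateListener
{
    static void Main()
    {
        string connectionString = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";
        string subscriptionName = Environment.MachineName;   // one subscription per instance

        var client = SubscriptionClient.CreateFromConnectionString(
            connectionString, "software-update", subscriptionName);

        // The message pump auto-completes messages once the callback returns.
        client.OnMessage(message =>
        {
            var app = (string)message.Properties["app"];          // hypothetical property
            var version = (string)message.Properties["version"];  // hypothetical property

            // Stop the running exe, pull the new build from blob storage, restart it.
            Console.WriteLine("Update requested: {0} {1}", app, version);
        });

        Console.ReadLine();   // keep the listener alive
    }
}
```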
These all apply well to standalone exe's (or xcopy-deployable exe's). For MSI's that require admin-level permissions, these need to run via startup script. In this case, you could have a configuration change event, which would be handled by your role instances (as described above), but you'd have the instances simply restart, allowing them to run the MSI via startup script.
You could
1. build your sources and stash the package contents in a packaging folder
2. generate a package from the binaries in the packaging folder and upload it into Blob storage
3. use PowerShell Remoting to the host to pull down (and unpack) the package into a remote folder
4. use PowerShell Remoting to the host to run an install.ps1 from the package contents (i.e. download and configure) as desired.
This same approach can be used with Enter-PSSession -ComputerName $env:COMPUTERNAME to get a quick local-build deploy strategy, which means you're using an identical strategy for dev, production and test, a la Continuous Delivery.
A potential optimization you can do later (if necessary) is (for a local build) to cut out steps 2 and 3, i.e. pretend you've packed, uploaded, downloaded and unpacked and just supply the packaging folder to your install.ps1 as the remote folder and run your install.ps1 interactively in a non-remoted session.
A common variation on the above theme is to use an efficient file transfer and versioning mechanism such as git (or (shudder) TFS!) to achieve the 'push somewhere at end of build' and 'pull at start of deploy' portions of the exercise (Azure Web Sites offers a built in TFS or git endpoint which makes each 'push' implicitly include a 'pull' on the far end).
If your code is xcopy deployable (and shadow copied), you could even have a full app image in git and simply do a git pull to update your site (with or without a step 4 comprised of a PowerShell Remoting execute of an install.ps1).

Suggestions for developing Windows Service with C#

I am developing a distributed application where each distributed site will run a 2-tier winform based client-server application (there are specific reasons for not going into web architecture). The local application server will be responsible for communicating with a central server and the local workstations will in turn communicate with the local server.
I am about to develop one or more windows services to achieve the following objectives. I presume some of them will run on the local server while some on the local workstations. The central server is just a SQL Server database so no services to be developed on that end, just DML statements will be fired from local server.
Windows Service Objectives
Synchronise local data with data from central server (both end SQL Server) based on a schedule
Take automatic database backup based on a schedule
Automatically download software updates from vendor site based on a schedule
Enforce software license management by monitoring connected workstations
Intranet messaging between workstations and/or the local server (like a slightly richer NET SEND utility)
Manage deployment of updates from local server to workstations
Though I have something on my mind, I am a little confused in deciding and I need to cross-check my thoughts with experts like you. Following are my queries -
How many services should run on the local server?
Please specify individual service roles.
How many services should run on the workstations?
Please specify individual service roles.
Since some services will need to query the local SQL database, where should the services read the connection string from? registry? Config file? Any other location?
If one service is allotted to do multiple tasks, e.g. a scheduler service, what would be the best/optimal way, with respect to performance, to manage the scheduled tasks:
a. Build .NET class Library assemblies for the scheduled tasks and spin them up on separate threads
b. Build .NET executable assemblies for the scheduled tasks and fire the executables
Though it may seem like too many questions in a single post, they are all related. I will share my views with the forum later as I do not want to bias the forum's views/suggestions in any way.
Thanking all of you in advance.
How many services should run on the local server?
It depends on many factors, but I would generally go for as few services as possible, because maintaining and monitoring one service is less work for the admin than having many services.
How many services should run on the workstations?
I would use just one, because this makes it a single point of failure. The user will notice if the service on the workstation is down. If that is the case, only one service needs to be started.
Since some services will need to query the local SQL database, where should the services read the connection string from? registry? Config file? Any other location?
I would generally put the connection string in the app.config. The .NET framework also offers facilities to encrypt the connection string.
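A small sketch of both halves, assuming a connection string named "LocalDb" in the service's app.config: reading it via ConfigurationManager, and encrypting the section with the built-in DPAPI provider so it isn't stored in plain text on disk.

```csharp
using System.Configuration;   // add a reference to System.Configuration

class ConnectionStringDemo
{
    static void Main()
    {
        // Read the connection string from app.config.
        var settings = ConfigurationManager.ConnectionStrings["LocalDb"];
        if (settings == null)
            throw new ConfigurationErrorsException("Connection string 'LocalDb' is missing.");
        string connectionString = settings.ConnectionString;

        // One-time step: protect the connectionStrings section using DPAPI.
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        ConnectionStringsSection section = config.ConnectionStrings;
        if (!section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
            config.Save();
        }
    }
}
```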
If one service is allotted to do multiple tasks, e.g. a scheduler service, what would be the best/optimal way with respect to performance for managing the scheduled tasks: a. build .NET class library assemblies for the scheduled tasks and spin them up on separate threads, or b. build .NET executable assemblies for the scheduled tasks and fire the executables?
b. is easier to design and implement. It gives you the possibility of using the Windows Task Scheduler. In this case you will need to think about the problem of the scheduler starting the executable while the previous run has not finished yet. This results in two processes which may do the same work (the mutex guard sketched below is one simple way to prevent that). If this is not a problem, then stay with that design. If it is a problem, consider solution a.
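As a minimal sketch of that overlap guard (the mutex name is made up): each scheduled run tries to acquire a named system-wide mutex and exits quietly if a previous run still holds it.

```csharp
using System;
using System.Threading;

class ScheduledTask
{
    static void Main()
    {
        bool createdNew;

        // "Global\" makes the mutex visible across all sessions on the machine.
        using (var mutex = new Mutex(true, @"Global\NightlySyncTask", out createdNew))
        {
            if (!createdNew)
            {
                // A previous run is still going; skip this invocation.
                return;
            }

            try
            {
                // ... do the actual scheduled work here ...
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}
```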
For solution a., have a look at Quartz.NET, which offers a lot of advanced scheduling capabilities. Also consider using application domains instead of threads to make the service more robust.
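For a flavour of what that looks like, here is a minimal sketch assuming the Quartz.NET 2.x synchronous API (the job and trigger names are made up); the [DisallowConcurrentExecution] attribute also takes care of the overlapping-run problem inside the scheduler itself:

```csharp
using Quartz;
using Quartz.Impl;

[DisallowConcurrentExecution]
public class SyncJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Synchronise local data with the central server here.
    }
}

class SchedulerHost
{
    static void Main()
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<SyncJob>()
            .WithIdentity("syncJob")
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("syncTrigger")
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInMinutes(15).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
        // In a Windows Service you would call scheduler.Shutdown() from OnStop().
    }
}
```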
If you don't get admin rights on the local server, think about means to restart the service without the service control manager. Give some privileged user the possibility to re-initialize the service from a client machine.
Also think about ways to restart just one part of a service, if one service is doing multiple tasks. For instance, the service is behaving strangely because the update task is running wrong. If you need to restart the whole service to repair this, all users may become aware of it. Provide some means to re-initialize only the update task.
Most important: don't follow any of my advice if you find an easier way to achieve your goals. Start with a simple design. Don't overengineer! Solutions with (multiple) services and scheduling tend to explode in complexity with each added feature, especially when you need to let the services talk to each other.
I don't think there is one answer to this; some would probably use just one service, while others would modularize every domain into a service and add enterprise transaction services, etc. The question is more of an SOA one than a C# one, and you might consider reading up on some SOA books to find your pattern.
This does not answer your questions specifically, but one thing to consider is that you can run multiple services in the same process. The ServiceBase.Run method accepts an array of ServiceBase instances to start. They are technically different services and appear as such in the service control manager, but they are executed inside the same process. Again, just something to consider in the broader context of your questions.
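A minimal sketch of that (the service names are hypothetical): one executable, one process, several entries in the Service Control Manager.

```csharp
using System.ServiceProcess;

static class Program
{
    static void Main()
    {
        // Each ServiceBase subclass is installed and controlled as its own
        // service, but all of them share this single host process.
        ServiceBase.Run(new ServiceBase[]
        {
            new SyncService(),      // hypothetical: scheduled data synchronisation
            new LicenseService(),   // hypothetical: license/workstation monitoring
            new MessagingService()  // hypothetical: intranet messaging
        });
    }
}
```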
