I would like to be able to start a number of concurrent processes (command line programs) on Azure remotely. I need to be able to make a call from a C# program to start these processes off. What would be the best approach to this?
Set up an Azure VM with IIS.
Install the various .exes that I want to run.
Write and install a web service on the box that can take parameters and start the processes on the server.
Is there an easier way to do this using Azure though? I am not familiar with Azure VMs or worker roles etc.
Thanks!
You could make use of Azure Worker Roles and Windows Azure Storage for this purpose. You could either package the executables with the worker role package so that they are always available in your role, or save them in blob storage so that you can download them on demand into your worker role. From your calling application, you put messages in a storage queue that is constantly polled by your worker role. Each message contains enough information to do the work (for example, which exe to invoke and with what arguments). Once a message is received, the worker role executes the executable.
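To make this concrete, here is a minimal sketch of both sides, assuming the WindowsAzure.Storage client library from NuGet; the queue name, the pipe-delimited message format, and connectionString are illustrative choices, not anything prescribed:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Calling application: drop a message describing which exe to run.
var account = CloudStorageAccount.Parse(connectionString); // placeholder connection string
var queue = account.CreateCloudQueueClient().GetQueueReference("process-requests");
queue.CreateIfNotExists();
queue.AddMessage(new CloudQueueMessage(@"MyTool.exe|/input data.csv"));

// Worker role Run() loop: poll the queue, launch the exe, delete on success.
while (true)
{
    CloudQueueMessage msg = queue.GetMessage();
    if (msg == null) { Thread.Sleep(5000); continue; }

    var parts = msg.AsString.Split('|');
    using (var process = Process.Start(parts[0], parts.Length > 1 ? parts[1] : ""))
    {
        process.WaitForExit();
    }
    queue.DeleteMessage(msg); // only removed once the exe has finished
}
```

Because GetMessage hides the message rather than deleting it, a message reappears on the queue if the worker crashes mid-run, which gives you at-least-once processing for free.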
Hope this helps.
I have a web application which consists of 1 web role and 1 worker role instance. The web role basically aggregates some blobs in blob storage, and the worker comes along, sweeps the stored files, and processes them.
The problem is that if I were to use 2 worker role instances, there could be duplicate results, because the blobs only get removed on successful processing. In order to avoid this I decided to use Hangfire.
What I plan on doing is: when the web role gets the request, it will not only save the file to blob storage but also enqueue the processing task for that file. Then Hangfire worker threads will process it.
This raises another question: where should I deploy Hangfire?
If I deploy Hangfire onto the web role instances, I will be able to access the UI since they have IIS, but I won't be able to isolate the resources used for background processing from the web role itself. If I deploy it onto the worker role instances, it won't be able to serve the UI, since there's no IIS.
Is there a way to have the web role instances fire the tasks but have the worker instances look for and consume them? If so, how?
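For context, the split I have in mind looks roughly like this; a minimal sketch only, assuming Hangfire 1.4+ with the Hangfire.SqlServer package, a shared connection string named HangfireConnection, and a hypothetical FileProcessor class:

```csharp
using System;
using System.Threading;
using Hangfire;

// Web role: point Hangfire at the shared storage and enqueue the job.
GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");
BackgroundJob.Enqueue(() => FileProcessor.Process("uploads/myfile.csv"));

// Worker role Run(): host a BackgroundJobServer against the same storage,
// so jobs enqueued by the web role get picked up here.
GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");
using (var server = new BackgroundJobServer())
{
    while (true) Thread.Sleep(TimeSpan.FromMinutes(1)); // keep the role alive
}
```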
Is it possible to start Worker Role instances dynamically from a C# application running on an Azure Windows VM?
In Azure I have a Medium virtual machine. On it there is a C# console application that runs automatically at 11:00 PM daily and keeps processing data until about 7:00 AM. My data is getting bigger and thus needs more time to be processed, and I need to finish processing all data before 5:00 AM.
Is it possible to use a worker role to run an instance of the application and pass it a part of the data to process?
Note that my process makes HTTP requests to external websites, and the processed data gets written to a MongoDB database.
I am not sure where to start, and I am not sure if using worker roles is better than creating a couple of VMs.
In general, how would you solve this problem with the tools available on Azure?
Is it possible to start Worker Role instances dynamically from a C# application running on an Azure Windows VM?
Absolutely, yes. In order to do so, you would need to consume the Service Management API. You could either write code against this API yourself, or use the Windows Azure Management Libraries, which you can install from NuGet. To learn more about this API, you may find this blog post useful: http://www.bradygaster.com/post/getting-started-with-the-windows-azure-management-libraries.
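By way of illustration, here is a sketch of bumping a worker role's instance count with those management libraries; the subscription ID, certificate path, service name, and role name are all placeholders, and the exact API surface may vary between library versions:

```csharp
using System.Linq;
using System.Security.Cryptography.X509Certificates;
using System.Xml.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Management.Compute;
using Microsoft.WindowsAzure.Management.Compute.Models;

// Authenticate with a management certificate uploaded to the subscription.
var credentials = new CertificateCloudCredentials("your-subscription-id",
    new X509Certificate2(@"C:\certs\management.pfx", "password"));

using (var compute = new ComputeManagementClient(credentials))
{
    // Read the current .cscfg for the production deployment.
    var deployment = compute.Deployments.GetBySlot("mycloudservice", DeploymentSlot.Production);
    var config = XDocument.Parse(deployment.Configuration);

    // Bump the instance count for the worker role.
    XNamespace ns = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration";
    config.Descendants(ns + "Role")
          .First(r => (string)r.Attribute("name") == "WorkerRole1")
          .Element(ns + "Instances")
          .SetAttributeValue("count", 4);

    compute.Deployments.ChangeConfigurationBySlot("mycloudservice", DeploymentSlot.Production,
        new DeploymentChangeConfigurationParameters { Configuration = config.ToString() });
}
```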
Generally speaking, Worker Roles are equivalent to Windows Services in the sense that both are used to perform background tasks. Since you're already performing background tasks on your VM, I can't see any reason why you couldn't do the same through a Worker Role instance. My recommendation would be to go through the tutorials available online or the Windows Azure Platform Training Kit to become familiar with Worker Role concepts and how you could make use of them in your project.
For your specific scenario you may want to look at the auto-scale rules that are now available. In the configuration for the worker role, in the Azure Management Portal, you can specify, for example, that you want at least two workers running between certain times each day.
The Service Management API gives you a lot more control, but auto-scale is quick and easy to start with.
Incidentally, if the work your worker has to do can be divided into atomic chunks, you may want to write all the tasks to a storage queue and have the worker role pull tasks off that queue. You can then configure auto-scale to monitor the length of the queue and start and stop workers as required.
Previously there was a worker role in Azure; now I can't see one. So what should I use for background/scheduled tasks, like maintenance, email sending, etc.? Should I create a virtual machine and run Windows services there, or is there an easier way?
The definitive (current) guide to this FAQ is Gaurav's Building a Simple Task Scheduler in Windows Azure, which, as it turns out, is not that simple, and is not really suited to Azure Web Sites (but rather to roles).
The simplest solution is to create RESTful(ish) routes (controllers, etc.) using something like ASP.NET Web API and get a cron-style scheduler to kick them off. Recently I have been using the Aditi cloud scheduler, which kicks off those jobs for you and is free (5,000 calls per month) in the marketplace.
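For instance, an endpoint for the scheduler to hit can be as small as this; a sketch assuming ASP.NET Web API 2 with default routing, where EmailSender is a hypothetical helper standing in for your real job:

```csharp
using System.Web.Http;

// POST api/sendemails, invoked by the external cron scheduler.
public class SendEmailsController : ApiController
{
    public IHttpActionResult Post()
    {
        EmailSender.SendPendingEmails(); // hypothetical: the actual scheduled work
        return Ok();
    }
}
```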
There is also this new scheduler: http://www.windowsazure.com/en-us/services/scheduler/
Windows Azure Scheduler allows you to invoke actions—such as calling HTTP/S endpoints or posting a message to a storage queue—on any schedule. With Scheduler, you create jobs in the cloud that reliably call services both inside and outside of Windows Azure and run those jobs on demand, on a regularly recurring schedule, or designate them for a future date. This service is currently available as a standalone API.
If you are using Azure Web Sites (and they are very good), there is the new WebJobs feature that lets you poke an HTTP(S) endpoint or run scripts on a schedule.
Web and Worker Roles are part of the Cloud Service model, and both exist and haven't gone anywhere.
As stated in the comments to your question, the portal does not facilitate construction of these roles; this is something you'd create, either through Visual Studio, Eclipse (worker role), or PowerShell.
And you don't need a worker role for background tasks. As mentioned in dozens of other answers, worker and web roles are templates for Windows Server virtual machines. Since the VMs are stateless and restart each time from the same baseline, the template shapes what gets installed at startup.
You can run background tasks as a thread in either a web role or worker role. So if you wanted to, you could run all your background tasks within the same web role instances as your web site.
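A minimal sketch of that pattern, assuming the standard Microsoft.WindowsAzure.ServiceRuntime assembly; the loop body is a placeholder for whatever background work you need:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Spin up a background thread that runs alongside IIS in the same instance.
        var background = new Thread(ProcessLoop) { IsBackground = true };
        background.Start();
        return base.OnStart();
    }

    private static void ProcessLoop()
    {
        while (true)
        {
            // Poll a queue, send email, run maintenance, etc. (placeholder).
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}
```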
I recommend working through some of the basic examples in the Azure Training Kit, which walk through creating different roles from Visual Studio.
This question may not relate specifically to Azure Virtual Machines, but I'm hoping maybe Azure provides an easier way of doing this than Amazon EC2.
I have long-running apps running on multiple Azure Virtual Machines (i.e. not Azure Web Sites or [PaaS] roles). They are simple console apps/Windows services. Occasionally, I will do a code refresh and need to stop these processes, update the code/binaries, then restart these processes.
In the past, I have attempted to use PSTools (psexec) to remotely do this, but it seems like such a hack. Is there a better way to remotely kill the app, refresh the deployment, and restart the app?
Ideally, there would be a "Publish Console App" equivalent from within Visual Studio that would allow me to deploy the code as if it were an Azure Web Site, but I'm guessing that's not possible.
Many thanks for any suggestions!
There are a number of "correct" ways to perform your task.
If you are running a Windows Azure application, there is a simple guide on MSDN.
But if you have to do this with a regular console app, you have a problem.
The Microsoft way is to use WMI, a good technology for any kind of management of remote Windows servers. I suppose WMI should be OK for your purposes.
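If you go the WMI route from C#, the kill/restart part might look something like this; a sketch only, with placeholder host name, credentials, and paths, and assuming the VM's firewall allows remote WMI:

```csharp
using System.Management; // reference System.Management.dll

// Connect to the remote VM's WMI namespace.
var options = new ConnectionOptions { Username = @"myvm\admin", Password = "..." };
var scope = new ManagementScope(@"\\myvm.cloudapp.net\root\cimv2", options);
scope.Connect();

// Terminate the running copy of the app.
var query = new SelectQuery("SELECT * FROM Win32_Process WHERE Name = 'MyApp.exe'");
using (var searcher = new ManagementObjectSearcher(scope, query))
{
    foreach (ManagementObject process in searcher.Get())
        process.InvokeMethod("Terminate", null);
}

// ...copy the updated binaries across, then start the new version.
var processClass = new ManagementClass(scope, new ManagementPath("Win32_Process"), null);
var inParams = processClass.GetMethodParameters("Create");
inParams["CommandLine"] = @"C:\apps\MyApp.exe";
processClass.InvokeMethod("Create", inParams, null);
```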
And the last way: install Git on every Azure VM and write a simple server-side script, scheduled to run every 5 minutes, that pulls the code from the repository, builds it, kills the old process, and starts the new one. Publish your update to the repository, and that's all.
Definitely a hack, but it works even for non-Windows machines.
One common pattern is to store items, such as command-line apps, in Windows Azure blob storage. I do this frequently (for instance, I store all MongoDB binaries in blob storage, zip'd, with one zip per version number). Upon VM startup, I have a task that downloads the zip from blob to local disk, unzips it to a local folder, and starts the mongod.exe process (this applies equally well to other console apps; see the sketch after the list below). If you have a more complex install, you'd need to grab an MSI or another type of automated installer. Two nice things about storing these apps in blob storage:
Reduced deployment package size
No more need to redeploy entire cloud app just to change one component of it
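The startup task itself is only a few lines; a sketch assuming the WindowsAzure.Storage library and .NET 4.5's ZipFile, where the container, blob, paths, and connectionString are illustrative:

```csharp
using System.Diagnostics;
using System.IO;
using System.IO.Compression; // reference System.IO.Compression.FileSystem
using Microsoft.WindowsAzure.Storage;

// On startup: pull the zip'd binaries from blob storage, unpack, and launch.
var account = CloudStorageAccount.Parse(connectionString); // placeholder connection string
var blob = account.CreateCloudBlobClient()
                  .GetContainerReference("binaries")
                  .GetBlockBlobReference("mongodb-2.4.zip");

const string zipPath = @"C:\local\mongodb.zip";
blob.DownloadToFile(zipPath, FileMode.Create);
ZipFile.ExtractToDirectory(zipPath, @"C:\local\mongodb");
Process.Start(@"C:\local\mongodb\bin\mongod.exe", @"--dbpath C:\data");
```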
When updating the console app, you upload a new version to blob storage. Now you have a few ways to signal the VMs to update. For example:
Modify my configuration file (maybe I have a key/value pair referring to my app name + version number). When this changes, I can handle the event in my web/worker role, allowing my code to take appropriate action. This action could be to stop the exe, grab the new one from blob storage, and restart. Or, if it's more complex than that, I could even let the VM instance simply restart itself, clearing memory/temp files/etc. and starting everything cleanly.
Send myself some type of command to update the app. I'd likely use a Service Bus topic to do this, since I can have multiple subscribers on my "software update" topic; each instance could subscribe and, when an update message shows up, handle it accordingly (maybe the message contains the app name and version number, like our key/value pair in the config; see the sketch after this list). I could also use a Windows Azure Storage queue for this, but then I'd probably need one queue per instance (I'm not a fan of this).
Create some type of WCF service that my role instances listen to for an update command. Same problem as with Windows Azure Storage queues: it requires me to find a way to push the same message to every instance of my web/worker role.
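The Service Bus option (the second one above) might look roughly like this; a sketch assuming the Microsoft.ServiceBus NuGet package, a placeholder connectionString, and that a subscription per instance has already been created (for example via NamespaceManager.CreateSubscription):

```csharp
using Microsoft.ServiceBus.Messaging;

// Publisher side: announce the new version (app name + version, like the
// key/value pair mentioned above).
var topic = TopicClient.CreateFromConnectionString(connectionString, "software-update");
topic.Send(new BrokeredMessage("MyApp.exe|2.1.0"));

// Subscriber side, in each role instance (one subscription name per instance).
var subscription = SubscriptionClient.CreateFromConnectionString(
    connectionString, "software-update", "instance-0");
subscription.OnMessage(msg =>
{
    var parts = msg.GetBody<string>().Split('|');
    // Stop the exe, pull the new version from blob storage, restart (omitted).
    msg.Complete();
}, new OnMessageOptions { AutoComplete = false });
```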
These all apply well to standalone exes (or xcopy-deployable exes). MSIs that require admin-level permissions need to run via a startup script. In that case, you could still have a configuration-change event handled by your role instances (as described above), but you'd have the instances simply restart, allowing them to run the MSI via the startup script.
You could:
1. build your sources and stash the package contents in a packaging folder
2. generate a package from the binaries in the packaging folder and upload it into blob storage
3. use PowerShell Remoting to the host to pull down (and unpack) the package into a remote folder
4. use PowerShell Remoting to the host to run an install.ps1 from the package contents (i.e. download and configure) as desired.
This same approach can be used with Enter-PSSession -ComputerName $env:COMPUTERNAME for a quick deploy-the-local-build workflow, which means you're using an identical strategy for dev, test, and production, a la Continuous Delivery.
A potential optimization you can do later (if necessary) for a local build is to cut out steps 2 and 3: pretend you've packed, uploaded, downloaded, and unpacked, supply the packaging folder to your install.ps1 as the remote folder, and run your install.ps1 interactively in a non-remoted session.
A common variation on the above theme is to use an efficient file-transfer and versioning mechanism such as git (or (shudder) TFS!) to achieve the 'push somewhere at end of build' and 'pull at start of deploy' portions of the exercise (Azure Web Sites offers a built-in TFS or git endpoint, which makes each 'push' implicitly include a 'pull' on the far end).
If your code is xcopy-deployable (and shadow-copied), you could even keep a full app image in git and simply do a git pull to update your site (with or without a step 4 comprising a PowerShell Remoting execution of an install.ps1).
The google has really failed me on this one. I am new to Azure and only intermediate at .NET.
I have an Azure solution going and I've written some code in a Web Role which runs great. What I would like to do now is move some of this code into an Azure Worker, which will be initialized by a controller function in the Web Role.
What on earth do I need to do to get this going locally? I have created the Worker project within the SLN. I just need to know how to fire it up and run it.
I think part of my problem is I am assuming these workers behave like Heroku workers... is this the case? Because what I need is something like a queue system (a bunch of "worker tasks" in one big queue).
A lot of the links I've found for tutorials seem to tap dance around how to actually initialize the process from a Web Role.
Workers in Windows Azure are not tasks; they're entire VMs. To make your life easier, memorize this little detail: Web Role instances are Windows Server 2008 with IIS running, and Worker Roles are the same thing but with IIS disabled.
When you added that worker role to your project, you actually now have a new set of virtual machines running (at least one, depending on the instance count you set). These VMs have their own OnStart() and Run() methods you can put code into, for bootstrapping purposes.
If you grab the Windows Azure training kit, you'll see a few labs that show how to communicate between your various role instances (a common pattern being the use of Windows Azure queues). There's a good example of background processes with the Guestbook hands-on lab (the very first lab).
More info on this, as I've gotten it going now:
If you're coming from a Heroku background, then an Azure Worker is more or less like the function in Rails that you'd actually execute via the queue. Unlike Heroku's queued operations, though, an Azure Worker just runs endlessly and keeps polling for new stuff to do, hence the templated Thread.Sleep(10000) in the Run() method.
The most conventional way I've found to make a web and worker role talk to each other is by queueing messages via Azure Service Bus, which is currently NOT emulated, meaning you need a functioning Azure account to make this work. It will work even if you are running locally; you just need internet access.
A Service Bus message can carry an entire object over to the Worker (so long as the Worker project has the right dependencies in it), so that's kind of nice.
I think you're just having trouble starting the Azure emulator along with your worker/web roles? Just set the Azure configuration project as the startup project and run that. It'll boot up the emulator along with all your roles.