I am using the MongoDB worker role project to run MongoDB on Azure. I have two separate cloud services; in one of them everything works fine, but in the other the MongoDB worker role is stuck in a Busy (Waiting for role to start... Calling OnRoleStart) state.
I connected to one of the MongoDB worker roles and accessed the MongoDB log file and found the following error:
[rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
There are threads on how to fix this in a normal MongoDB setup, but not on Windows Azure. I did not configure anything for the MongoDB worker role (apart from the Azure storage connection strings), and it works in the other service, so I don't know why it isn't working for this one. Any ideas?
Some time ago I was trying to host RavenDB in Azure as a Worker Role and had lots of issues with it as well.
Today, I believe it's better to run the database the "suggested" way on the target platform, which is as a Windows Service according to the "Install MongoDB on Windows" guide. This way you won't have to deal with Azure-specific issues. To achieve this you can:
Use the Azure CmdLets along with CsPack.exe to create the package for MongoDB.
Use a solution similar to the RavenDB Master-Slave reads on Azure one which I posted on GitHub.
Sign up for Virtual Machines (beta) on Azure, spin up a machine and install MongoDB on it manually.
But I guess the most important question when hosting DB is: where do you plan to store the actual DB?
Azure's CloudDrive, which is a VHD stored in Cloud Storage, has the worst IO performance possible. Not sufficient for normal DB usage I'd say.
Ephemeral storage, a Cloud Service's local disk space, has perfect IO, but you lose all data once the VM is deleted. This means you usually want to make continuous, or at least regular, backups to Cloud Storage, maybe through CloudDrive.
An Azure VM attached disk has better IO than CloudDrive, but still not as good as ephemeral storage.
As for actually troubleshooting your problem: I'd suggest wrapping OnRoleStart in a try-catch, writing the exception to the log, enabling RDP on the box, and then connecting and looking into the actual issue in place. Another alternative is IntelliTrace, but you need VS Ultimate for that. Also, don't forget that Azure requires the use of Local Resources if your app needs to write to the disk.
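A rough sketch of that wrapper (assuming the standard RoleEntryPoint.OnStart override and System.Diagnostics tracing, not the MongoDB worker role's actual code):

    using System;
    using System.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            try
            {
                // The existing MongoDB / replica-set startup logic would go here.
                return base.OnStart();
            }
            catch (Exception ex)
            {
                // Log the full exception so it ends up in the diagnostics logs,
                // then rethrow so the role still reports the failure.
                Trace.TraceError("OnStart failed: {0}", ex);
                throw;
            }
        }
    }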
I am trying to implement Azure Redis Cache in my app. The documentation says I have to create a cache in the Azure portal. I am wondering, is there a way to skip that step and use Redis for development without using the actual hosted service?
You can install Redis locally and use localhost. That might be one of your options even though I don't think it's faster.
You can download it and install it from here.
You can run a Redis server locally and start experimenting. But if you have decided to use Azure Redis, you should develop against the real thing as early as possible (see the connection sketch after this list). Several reasons:
Azure Redis supports SSL, and the SSL endpoint is the default; you should use it.
Azure Redis has high availability support through master/slave replication.
Azure Redis provides cluster support.
It might experience an unexpected patching process, causing temporary data loss.
These things are not easy to set up and test locally.
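As a quick sketch of the connection difference (the cache name and access key below are placeholders; this uses the StackExchange.Redis client), switching from a local Redis to Azure Redis is mostly a connection-string change, with SSL on port 6380:

    using StackExchange.Redis;

    // Local Redis for quick experiments: no SSL, default port 6379.
    ConnectionMultiplexer local = ConnectionMultiplexer.Connect("localhost:6379");

    // Azure Redis Cache: the SSL endpoint on port 6380 is the default and recommended one.
    // Replace the host name and access key with your own cache's values.
    ConnectionMultiplexer azure = ConnectionMultiplexer.Connect(
        "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

    IDatabase db = azure.GetDatabase();
    db.StringSet("greeting", "hello");
    string value = db.StringGet("greeting");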
To start off with, I am aware of this question that seems to ask the same thing. However I'm going to ask it again with a slight modification.
I'm working on a project to print PDFs. We have a library for PDF manipulation, and we can use it to render a PDF to a file that the System.Drawing.Printing.PrintDocument object can use and print in C#. We are going to have an Azure Worker Role that takes many one-page PDFs and turns them into one large PDF, and I would like to have another Azure Worker Role that then spools that large PDF to a Windows Print Server here locally.
Since the printing part is so much slower (compared to the PDF creation/aggregation piece), I would like to be able to host it in Azure to scale it easily.
My initial thought was, "I don't think this is even possible. How would Azure know anything about my local print server?" That is the basic answer from the similar question above. But after some searching, I found some results that seem to indicate that setting up a Site-to-Site VPN tunnel or an ExpressRoute connection would let me do what I want. However, I'm relatively new to Azure, and the results I found are short on actual, useful, helpful details.
If it can't be done, fine, I can set up an application server locally to do the printing. But if someone has ideas or more insight on how to accomplish this I would love to hear it.
Basically, you could store the PDFs in Azure Blob Storage, like:
http://azureaccount.blob.core.windows.net/PDF/pdf1.pdf
Then you define an Azure Queue message like:
MyQueue(Pdf_Id, Pdf_Blob_Url)
MyQueue(1, http://azureaccount.blob.core.windows.net/PDF/pdf1.pdf)
MyQueue(2, http://azureaccount.blob.core.windows.net/PDF/pdf2.pdf)
and submit it to the Azure Queue.
Then, on your printing server, just set up an application that checks the Azure Queue and processes the PDFs. At that point, just fetch the PDFs directly from their Azure Blob Storage URLs and do whatever you want: merging, printing, etc.
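A rough sketch of both ends (using the classic Windows Azure Storage client library; the connection string, queue name and blob URL are placeholders):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    string connectionString = "<storage-connection-string>";
    CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
    CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("pdf-jobs");
    queue.CreateIfNotExists();

    // Producer (Azure worker role): enqueue the blob URL of the merged PDF.
    queue.AddMessage(new CloudQueueMessage(
        "http://azureaccount.blob.core.windows.net/PDF/pdf1.pdf"));

    // Consumer (on-premises print server): poll for work and process it.
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null)
    {
        string pdfUrl = msg.AsString;
        // Download pdfUrl, merge/print it via PrintDocument, then remove the message.
        queue.DeleteMessage(msg);
    }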
Without getting into the VPN / Site-to-Site setups, here is an idea:
You could have a small application hosted on your network that uses Service Bus Relay to expose a WCF service (this allows incoming connections to the service from the role); a minimal sketch follows the quoted description below.
The worker role can consume this service and send the PDF to it for printing.
Your app would then send the PDF to the printer via the PrintDocument object you mentioned.
See:
https://azure.microsoft.com/en-gb/documentation/articles/service-bus-dotnet-how-to-use-relay/
What is the Service Bus relay? The Service Bus relay service enables you to build hybrid applications that run in both an Azure datacenter and your own on-premises enterprise environment. The Service Bus relay facilitates this by enabling you to securely expose Windows Communication Foundation (WCF) services that reside within a corporate enterprise network to the public cloud, without having to open a firewall connection, or require intrusive changes to a corporate network infrastructure.
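To make the idea concrete, here's a minimal sketch of the on-premises side (the namespace, path, key and service contract are hypothetical; it uses the WindowsAzure.ServiceBus WCF relay bindings):

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IPrintService
    {
        [OperationContract]
        void PrintPdf(byte[] pdfBytes);
    }

    public class PrintService : IPrintService
    {
        public void PrintPdf(byte[] pdfBytes)
        {
            // Save to a temp file and print via PrintDocument here.
        }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(PrintService));

            // Listen on the Service Bus relay instead of a local endpoint.
            var endpoint = host.AddServiceEndpoint(
                typeof(IPrintService),
                new NetTcpRelayBinding(),
                ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "print"));

            endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
                    "RootManageSharedAccessKey", "<key>")
            });

            host.Open();
            Console.WriteLine("Listening on the relay; press Enter to exit.");
            Console.ReadLine();
            host.Close();
        }
    }

The worker role side would use a ChannelFactory<IPrintService> with the same binding, URI and token provider to call PrintPdf.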
I want to send Trace output to Azure blob storage, but the application is not deployable to Azure; it must run as a standalone application. Is there a relatively easy way to do this? Everything I've seen talks about logging in Azure when you're running as some sort of deployed Azure role. Ideally, I want a Trace.Listeners.Add call and/or something in app.config that solves this.
You can certainly create a trace listener that writes to Windows Azure Storage and use it from any application, as long as that application can reach the REST endpoints of the Windows Azure Storage service. In fact, Steve Marx posted something on his blog that could give you a head start.
For something like a trace writer, I would recommend writing the messages to Table storage (like Steve's post) rather than blob storage, unless you want to batch up a lot of messages locally in the application and then write them all as a file to blob storage.
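A rough sketch of such a listener (the table name and entity shape here are made up; it uses the classic Windows Azure Storage client library):

    using System;
    using System.Diagnostics;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    public class LogEntity : TableEntity
    {
        public LogEntity() { }

        public LogEntity(string message)
        {
            PartitionKey = DateTime.UtcNow.ToString("yyyyMMdd");
            RowKey = Guid.NewGuid().ToString();
            Message = message;
        }

        public string Message { get; set; }
    }

    public class TableStorageTraceListener : TraceListener
    {
        private readonly CloudTable _table;

        public TableStorageTraceListener(string connectionString)
        {
            var client = CloudStorageAccount.Parse(connectionString).CreateCloudTableClient();
            _table = client.GetTableReference("TraceLog");
            _table.CreateIfNotExists();
        }

        public override void Write(string message)
        {
            WriteLine(message);
        }

        public override void WriteLine(string message)
        {
            // One table entity per trace message, partitioned by day.
            _table.Execute(TableOperation.Insert(new LogEntity(message)));
        }
    }

The standalone app can then register it with Trace.Listeners.Add(new TableStorageTraceListener(connectionString)), or via a listeners entry in app.config.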
This question may not relate specifically to Azure Virtual Machines, but I'm hoping maybe Azure provides an easier way of doing this than Amazon EC2.
I have long-running apps running on multiple Azure Virtual Machines (i.e. not Azure Web Sites or [Paas] Roles). They are simple Console apps/Windows Services. Occasionally, I will do a code refresh and need to stop these processes, update the code/binaries, then restart these processes.
In the past, I have attempted to use PSTools (psexec) to remotely do this, but it seems like such a hack. Is there a better way to remotely kill the app, refresh the deployment, and restart the app?
Ideally, there would be a "Publish Console App" equivalent from within Visual Studio that would allow me to deploy the code as if it were an Azure Web Site, but I'm guessing that's not possible.
Many thanks for any suggestions!
There are a number of "correct" ways to perform your task.
If you are running a Windows Azure application, there is a simple guide on MSDN.
But if you have to do this with a regular console app, you have a problem.
The Microsoft way is to use WMI, a good technology for any kind of management of remote Windows servers. I suppose WMI should be fine for your purposes.
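For example, a sketch of killing the remote process via WMI from C# (the server name, credentials and process name are placeholders, and this assumes WMI/DCOM traffic to the VM is allowed, e.g. over a VPN):

    using System.Management; // reference System.Management.dll

    // Connect to the remote VM's WMI provider.
    var options = new ConnectionOptions { Username = "admin", Password = "<password>" };
    var scope = new ManagementScope(@"\\myazurevm\root\cimv2", options);
    scope.Connect();

    // Find and terminate the running console app.
    var query = new ObjectQuery("SELECT * FROM Win32_Process WHERE Name = 'MyWorker.exe'");
    using (var searcher = new ManagementObjectSearcher(scope, query))
    {
        foreach (ManagementObject process in searcher.Get())
        {
            process.InvokeMethod("Terminate", null);
        }
    }

    // After copying the new binaries over, the process can be started again,
    // e.g. with the Win32_Process.Create method.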
And the last way: install Git on every Azure VM and write a simple server-side script, scheduled to run every 5 minutes, that updates the code from the repository, builds it, kills the old process and starts the new one. Push your update to the repository, and that's all.
Definitely a hack, but it works even for non-Windows machines.
One common pattern is to store items, such as command-line apps, in Windows Azure blob storage. I do this frequently (for instance: I store all MongoDB binaries in blob storage, zipped, with one zip per version number). Upon VM startup, I have a task that downloads the zip from blob to local disk, unzips it to a local folder, and starts the mongod.exe process (this applies equally well to other console apps); a sketch of that startup task follows the list below. If you have a more complex install, you'd need to grab an MSI or another type of automated installer. Two nice things about storing these apps in blob storage:
Reduced deployment package size
No more need to redeploy entire cloud app just to change one component of it
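A sketch of such a startup task (the container, blob names and paths are made up; it uses the classic storage client and .NET 4.5's ZipFile):

    using System.Diagnostics;
    using System.IO;
    using System.IO.Compression; // reference System.IO.Compression.FileSystem.dll
    using Microsoft.WindowsAzure.Storage;

    string connectionString = "<storage-connection-string>";
    var account = CloudStorageAccount.Parse(connectionString);
    var blob = account.CreateCloudBlobClient()
        .GetContainerReference("binaries")
        .GetBlockBlobReference("mongodb-2.4.zip");

    // Pull the zip from blob storage onto the VM's local disk and unpack it.
    blob.DownloadToFile(@"C:\apps\mongodb.zip", FileMode.Create);
    ZipFile.ExtractToDirectory(@"C:\apps\mongodb.zip", @"C:\apps\mongodb");

    // Start the console app that was unpacked.
    Process.Start(@"C:\apps\mongodb\bin\mongod.exe", @"--dbpath C:\data");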
When updating the console app, you can upload a new version to blob storage. Now you have a few ways to signal the VMs to update. For example:
Modify my configuration file (maybe I have a key/value pair referring to my app name + version number). When this changes, I can handle the event in my web/worker role, allowing my code to take appropriate action. This action could be to stop exe, grab new one from blob, and restart. Or... if it's more complex than that, I could even let the VM instance simply restart itself, clearing memory/temp files/etc. and starting everything cleanly.
Send myself some type of command to update the app. I'd likely use a Service Bus topic to do this, since I can have multiple subscribers on my "software update" topic (a sketch follows this list). Each instance could subscribe and, when an update message shows up, handle it accordingly (maybe the message contains the app name and version number, like our key/value pair in the config). I could also use a Windows Azure Storage queue for this, but then I'd probably need one queue per instance (I'm not a fan of that).
Create some type of WCF service that my role instances listen to for an update command. Same problem as Windows Azure Storage queues: it requires me to find a way to push the same message to every instance of my web/worker role.
These all apply well to standalone exes (or xcopy-deployable exes). MSIs that require admin-level permissions need to run via a startup script. In that case, you could still have a configuration change event, handled by your role instances (as described above), but you'd have the instances simply restart, allowing them to run the MSI via the startup script.
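A sketch of the topic-based option (the topic name, connection string and message properties are hypothetical; it uses the Microsoft.ServiceBus.Messaging client and assumes each instance already has its own subscription on the topic):

    using System;
    using Microsoft.ServiceBus.Messaging;

    string connectionString = "<service-bus-connection-string>";

    // Publisher: announce that a new version of the app is available.
    var topic = TopicClient.CreateFromConnectionString(connectionString, "software-update");
    var update = new BrokeredMessage();
    update.Properties["AppName"] = "MyConsoleApp";
    update.Properties["Version"] = "1.2.0";
    topic.Send(update);

    // Subscriber (one per VM/role instance, each with its own subscription):
    var subscription = SubscriptionClient.CreateFromConnectionString(
        connectionString, "software-update", Environment.MachineName);
    subscription.OnMessage(message =>
    {
        string app = (string)message.Properties["AppName"];
        string version = (string)message.Properties["Version"];
        // Stop the exe, pull the new version from blob storage, restart.
        // OnMessage completes the message automatically when the callback returns.
    });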
You could
build your sources and stash the package contents in a packaging folder
generate a package from the binaries in the packaging folder and upload into Blob storage
use PowerShell Remoting to the host to pull down (and unpack) the package into a remote folder
use PowerShell Remoting to the host to run an install.ps1 from the package contents (i.e. download and configure) as desired.
This same approach can be used with Enter-PSSession -ComputerName $env:COMPUTERNAME to get a quick local-build deployment strategy, which means you're using an identical strategy for dev, test and production, a la Continuous Delivery.
A potential optimization you can make later (if necessary) is, for a local build, to cut out steps 2 and 3: pretend you've packed, uploaded, downloaded and unpacked, supply the packaging folder to your install.ps1 as the remote folder, and run your install.ps1 interactively in a non-remoted session.
A common variation on the above theme is to use an efficient file transfer and versioning mechanism such as git (or (shudder) TFS!) to achieve the 'push somewhere at end of build' and 'pull at start of deploy' portions of the exercise (Azure Web Sites offers a built in TFS or git endpoint which makes each 'push' implicitly include a 'pull' on the far end).
If your code is xcopy deployable (and shadow copied), you could even have a full app image in git and simply do a git pull to update your site (with or without a step 4 comprised of a PowerShell Remoting execute of an install.ps1).
I'm developing an application to run in Azure.
I'm making use of the Azure cache; however, when I run this locally I don't want to connect to Azure to use the cache, because it's a bit slow and tedious.
Can you run the cache locally?
[EDIT]
This is .NET C#.
Unfortunately, you do need to connect to Azure to test the Windows Azure Caching service. Read this for more info: http://msdn.microsoft.com/en-us/library/windowsazure/gg278342.aspx
You can use Windows Server AppFabric Cache for local debugging. It uses a very similar configuration and programming model, which means almost all you need to change is the cache server IP and access token.
But I'd prefer to create a separate cache layer to isolate the cache operations. For example, introduce an ICache interface with Add, Get, Remove, etc. methods; then you can implement Azure Cache, Memcached, in-proc cache, etc. for the various cases (a sketch appears after the link below).
There's a good cache layer you might be interested in; check the ServiceStack project on GitHub: https://github.com/ServiceStack/ServiceStack/tree/master/src/ServiceStack.Interfaces/CacheAccess
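For example, a minimal sketch of such an abstraction (the interface and in-proc implementation here are made up for illustration, not ServiceStack's actual API):

    using System;
    using System.Runtime.Caching; // reference System.Runtime.Caching.dll

    // A thin cache abstraction so the backing store can be swapped per environment.
    public interface ICache
    {
        void Add(string key, object value, TimeSpan expiry);
        T Get<T>(string key);
        void Remove(string key);
    }

    // In-process implementation for local development: no Azure round trip at all.
    public class InProcCache : ICache
    {
        private readonly MemoryCache _cache = MemoryCache.Default;

        public void Add(string key, object value, TimeSpan expiry)
        {
            _cache.Set(key, value, DateTimeOffset.UtcNow.Add(expiry));
        }

        public T Get<T>(string key)
        {
            object value = _cache.Get(key);
            return value == null ? default(T) : (T)value;
        }

        public void Remove(string key)
        {
            _cache.Remove(key);
        }
    }

An AzureCache (or AppFabricCache) implementation would wrap DataCache behind the same interface, and the concrete type gets picked from configuration at startup.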
It's not possible. To use the Windows Azure Caching service locally, you'll always have to route your requests to Azure, which adds a serious delay on top of each request.
To properly test your cache, you need to deploy your service to Azure.
As others said, you can use Windows Server AppFabric caching locally, but be warned: there are some differences between Windows Server AppFabric caching and the Windows Azure Caching service. For example, notification-based invalidation of local cache items is not supported in Azure. Make sure not to use any of these features while developing locally, or you might get surprised when deploying your service to the cloud.
Only timeout-based invalidation of the local cache is supported by the Windows Azure Caching service. The Windows Azure Caching service is designed to be used from your cloud services, so it makes sense that it's kinda crappy to use from an on-premise application.
Azure AppFabric caching uses a subset of the functionality of Windows Server AppFabric caching. If you're willing to set up a server in-house with the cache installed, you could probably get something comparable to using the Azure cache. I haven't tried this myself, so while I know that the code you'd need to write is more or less the same between the two, I'm not sure how different the configs need to be.
Chances are though that it's going to be a lot less time and effort to just use the Azure cache.
This article specifically talks about what you are trying to do: creating a caching "infrastructure" that switches between local and distributed cache based on configuration:
http://msdn.microsoft.com/en-us/magazine/hh708748.aspx
Now you can use Azure In-Role Cache and try it locally using the compute emulator.