We have a .NET Core application hosted in an Azure App Service (on a Windows machine) in our production environment. It consists of two components -
Email Service
Business Rules Engine
The Email Service first downloads all emails to an Attachments folder in the same directory where the application is hosted (D:\home\wwwroot\). For each email, a separate directory (named with a GUID) is created under the Attachments directory.
The Business Rules Engine accesses that folder and uses the email and its attachments. Once done, we clear out all contents of the Attachments directory.
The problem we're seeing is that after a certain number of emails are processed, all of a sudden our application is unable to create directories under the Attachments folder. The statement
Directory.CreateDirectory({path})
throws an error saying the specified path could not be found.
The only way we've been able to resolve this is to restart the App Service, after which it happily goes on its way creating directories and processing emails until it fails again in a day or so 8-|
What we've tried -
Ours was a multithreaded app, so on the assumption that a thread might be holding a lock on the filesystem due to incorrect or incomplete disposal of resources, we changed it to single-threaded processing
Where the directories were being created we used DirectoryInfo, so we tried calling DirectoryInfo.Refresh() after every directory deletion, creation, etc.
Wherever FileStream was being used, we've added explicit .Dispose() statements to dispose of the FileStream
Called GC.Collect() at the end of each run of our service
I suspect this issue is due to the Azure environment, but we've not been able to identify what is causing it. Has anybody had similar issues, and if so, how were they resolved?
I made some changes to my code based on what I read in the links below, which give a good summary of the storage system in Azure App Service -
https://www.thebestcsharpprogrammerintheworld.com/2017/12/13/how-to-manually-create-a-directory-on-your-azure-app-service/
https://github.com/projectkudu/kudu/wiki/Understanding-the-Azure-App-Service-file-system
The D:\local directory points to storage that is accessible only to that instance of the service, unlike what I was using earlier, D:\home, which is shared among instances.
So I changed the code to resolve the %Temp% environment variable, which resolves to D:\local\Temp, and used that location to store the downloaded emails.
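For reference, a minimal sketch of that change (the Attachments/GUID layout is simply carried over from the original setup; on a Windows App Service instance TEMP resolves to the per-instance D:\local\Temp folder):

using System;
using System.IO;

// TEMP resolves to the per-instance D:\local\Temp folder on Windows App Service.
string tempRoot = Environment.GetEnvironmentVariable("TEMP") ?? Path.GetTempPath();

// Same per-email layout as before, just rooted under the instance-local folder.
string attachmentsRoot = Path.Combine(tempRoot, "Attachments");
string emailDirectory = Path.Combine(attachmentsRoot, Guid.NewGuid().ToString());

Directory.CreateDirectory(emailDirectory);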
So far multiple testing runs have been executed without any exceptions related to the file system.
Yes, based on your issue description, it does look to be a sandbox restriction. To provide more context on this: standard/native Azure Web Apps run in a secure environment called a sandbox. Each app runs inside its own sandbox, isolating its execution from other instances on the same machine as well as providing an additional degree of security and privacy which would otherwise not be available.
Azure App Service provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. The preconfigured Windows environment locks down the operating system from administrative access, software installations, changes to the global assembly cache, and so on (see Operating system functionality on Azure App Service). If your application requires more access than the preconfigured environment allows, you can deploy a custom Windows container instead.
Symbolic link creation: While sandboxed applications can follow/open existing symbolic links, they cannot create symbolic links (or any other reparse point) anywhere.
Additionally, you can check whether the files have the read-only attribute. To check this, go to the Kudu console ({yoursite}.scm.azurewebsites.net), run attrib somefile.txt, and see whether the output includes the R (read-only) attribute.
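If a stray read-only flag does turn out to be the culprit, it can also be checked and cleared from code before the folder contents are deleted or recreated; a small sketch (the path is only an example):

using System.IO;

string path = @"D:\home\wwwroot\Attachments\somefile.txt"; // example path only

FileAttributes attributes = File.GetAttributes(path);
if (attributes.HasFlag(FileAttributes.ReadOnly))
{
    // Clear only the read-only bit, leaving the remaining attributes intact.
    File.SetAttributes(path, attributes & ~FileAttributes.ReadOnly);
}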
Related
The Scenario
I'm using msdeploy to deploy files to Web Server A (let's call it WebA) and Web Farm Framework's Application Provisioning feature to synchronise to Web Server B (let's be imaginative and call it WebB).
The Problem
For just one specific WCF .NET web service, the msdeploy to WebA works okay, but the sync fails, reporting that a .NET assembly file is locked by the w3wp.exe process.
What have I tried?
Of course restarting IIS etc. will unlock it and allow the sync, but I'm struggling to work out why it's locked in the first place. I believe IIS doesn't use the deployed files directly, instead copying them to the Temporary ASP.NET Files directory and JIT-compiling the .svc file etc. in there, as it does with regular ASP.NET.
The Question
Where can I begin to work out why the file would be locked by w3wp.exe? I don't think it'll be the service itself because the msdeploy.exe to WebA works okay and it's only the sync to WebB that fails. Could it be the Application Provisioning "service" on WebB that's locking the file? Why might it do that?
I have a web application that I would like to check for updates, then download and install them.
I know there are already some updater frameworks that work for Windows applications, but is it possible for web applications?
The first things that came to my mind when thinking about this are:
File permissions (I might not be able to replace all my application files due to file permissions)
Also touching the web.config or the bin folder will cause the application to restart.
I also thought about executing an exe from my web application that does the job, but I don't know whether it would get shut down when the web application restarts.
I would appreciate any ideas or solutions for this case.
Thanks
Take a look at WebDeploy
It is meant to ease exactly such tasks, where you want to push a publish out to a production server.
Rather than having your server check for updates and update itself, it would be simpler to just push updates to it when you have them.
Web Deploy allows you to efficiently synchronize sites, applications or servers across your IIS 7.0 server farm by detecting differences between the source and destination content and transferring only those changes which need synchronization. The tool simplifies the synchronization process by automatically determining the configuration, content and certificates to be synchronized for a specific site. In addition to the default behavior, you still have the option to specify additional providers for the synchronization, including databases, COM objects, GAC assemblies and registry settings.
Administrative privileges are not required in order to deploy Web applications.
Server administrators have granular control over the operations that can be performed and can delegate tasks to non-administrators.
This requires you to be running IIS 7, though.
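For illustration, a typical one-way push from a build machine to the server might look something like this (the site name, server URL and credentials are placeholders; the -whatif flag previews the changes without applying them):

msdeploy.exe -verb:sync ^
    -source:iisApp="MyWebApp" ^
    -dest:iisApp="MyWebApp",computerName="https://yourserver:8172/msdeploy.axd",userName="deployUser",password="secret",authType="Basic" ^
    -whatif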
This question may not relate specifically to Azure Virtual Machines, but I'm hoping maybe Azure provides an easier way of doing this than Amazon EC2.
I have long-running apps running on multiple Azure Virtual Machines (i.e. not Azure Web Sites or [PaaS] Roles). They are simple Console apps/Windows Services. Occasionally, I will do a code refresh and need to stop these processes, update the code/binaries, then restart these processes.
In the past, I have attempted to use PSTools (psexec) to remotely do this, but it seems like such a hack. Is there a better way to remotely kill the app, refresh the deployment, and restart the app?
Ideally, there would be a "Publish Console App" equivalent from within Visual Studio that would allow me to deploy the code as if it were an Azure Web Site, but I'm guessing that's not possible.
Many thanks for any suggestions!
There are a number of "correct" ways to perform your task.
If you are running a Windows Azure application, there is a simple guide on MSDN.
But if you have to do this with a regular console app, you have a problem.
The Microsoft way is to use WMI, a good technology for any kind of management of remote Windows servers. I suppose WMI should be OK for your purposes.
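As a rough sketch of the WMI route from managed code (the machine name, credentials, process name and path below are placeholders, and error handling is omitted):

using System.Management;

var options = new ConnectionOptions { Username = "adminUser", Password = "secret" };
var scope = new ManagementScope(@"\\my-azure-vm\root\cimv2", options);
scope.Connect();

// Kill the running instance of the console app.
var query = new ObjectQuery("SELECT * FROM Win32_Process WHERE Name = 'MyWorker.exe'");
using (var searcher = new ManagementObjectSearcher(scope, query))
{
    foreach (ManagementObject process in searcher.Get())
    {
        process.InvokeMethod("Terminate", null);
    }
}

// After copying the new binaries across, start the refreshed build.
var processClass = new ManagementClass(scope, new ManagementPath("Win32_Process"), null);
processClass.InvokeMethod("Create", new object[] { @"D:\apps\MyWorker\MyWorker.exe" });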
And the last way: install Git on every Azure VM and write a simple server-side script, scheduled to run every 5 minutes, that updates the code from the repository, builds it, kills the old process and starts the new one. Publish your update to the repository, and that's all.
Definitely a hack, but it works even for non-Windows machines.
One common pattern is to store items, such as command-line apps, in Windows Azure Blob storage. I do this frequently (for instance: I store all MongoDB binaries in a blob, zip'd, with one zip per version #). Upon VM startup, I have a task that downloads the zip from blob to local disk, unzips it to a local folder, and starts the mongod.exe process (this applies equally well to other console apps; a rough sketch follows the list below). If you have a more complex install, you'd need to grab an MSI or other type of automated installer. Two nice things about storing these apps in blob storage:
Reduced deployment package size
No more need to redeploy entire cloud app just to change one component of it
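A rough sketch of such a startup task, assuming the older Microsoft.WindowsAzure.Storage client library (the container name, blob name, paths and connection string below are invented for the example):

using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using Microsoft.WindowsAzure.Storage;

string connectionString = "<your storage connection string>";
var account = CloudStorageAccount.Parse(connectionString);
var blob = account.CreateCloudBlobClient()
                  .GetContainerReference("tools")
                  .GetBlockBlobReference("mongodb.zip");

string zipPath = Path.Combine(Path.GetTempPath(), "mongodb.zip");
string installPath = @"C:\apps\mongodb";

blob.DownloadToFile(zipPath, FileMode.Create);            // pull the zip from blob storage
ZipFile.ExtractToDirectory(zipPath, installPath);         // unzip to a local folder
Process.Start(Path.Combine(installPath, "mongod.exe"));   // start the console process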
When updating the console app: you can upload a new version to blob storage. Now you have a few ways to signal your VMs to update. For example:
Modify my configuration file (maybe I have a key/value pair referring to my app name + version number). When this changes, I can handle the event in my web/worker role, allowing my code to take appropriate action. This action could be to stop the exe, grab the new one from blob, and restart. Or... if it's more complex than that, I could even let the VM instance simply restart itself, clearing memory/temp files/etc. and starting everything cleanly.
Send myself some type of command to update the app. I'd likely use a Service Bus queue to do this, since I can have multiple subscribers on my "software update" topic. Each instance could subscribe to the queue and, when an update message shows up, handle it accordingly (maybe the message contains the app name and version number, like our key/value pair in the config; see the sketch after this list). I could also use a Windows Azure Storage queue for this, but then I'd probably need one queue per instance (I'm not a fan of this).
Create some type of WCF service that my role instances listen to for a command to update. Same problem as with Windows Azure Storage queues: it requires me to find a way to push the same message to every instance of my web/worker role.
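A hedged sketch of the Service Bus option referenced above, using the Microsoft.ServiceBus.Messaging client (the topic name, subscription name and message format are invented for the example):

using Microsoft.ServiceBus.Messaging;

string serviceBusConnectionString = "<your Service Bus connection string>";

// Each role instance gets its own subscription on the shared "software-update" topic.
var client = SubscriptionClient.CreateFromConnectionString(
    serviceBusConnectionString, "software-update", "instance-01");

var messageOptions = new OnMessageOptions { AutoComplete = false };
client.OnMessage(message =>
{
    // e.g. "MyWorker;1.2.0" - app name plus the version to pull from blob storage
    string payload = message.GetBody<string>();

    // Stop the running exe, download the new blob, restart (as described above),
    // then mark the message as handled.
    message.Complete();
}, messageOptions);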
These all apply well to standalone exe's (or xcopy-deployable exe's). For MSI's that require admin-level permissions, these need to run via startup script. In this case, you could have a configuration change event, which would be handled by your role instances (as described above), but you'd have the instances simply restart, allowing them to run the MSI via startup script.
You could
1. build your sources and stash the package contents in a packaging folder
2. generate a package from the binaries in the packaging folder and upload it into Blob storage
3. use PowerShell Remoting to the host to pull down (and unpack) the package into a remote folder
4. use PowerShell Remoting to the host to run an install.ps1 from the package contents (i.e. download and configure) as desired.
The same approach can be used with Enter-PSSession -ComputerName $env:COMPUTERNAME to get a quick-deploy local-build strategy, which means you're using an identical strategy for dev, test and production, a la Continuous Delivery.
A potential optimization you can do later (if necessary) is (for a local build) to cut out steps 2 and 3, i.e. pretend you've packed, uploaded, downloaded and unpacked and just supply the packaging folder to your install.ps1 as the remote folder and run your install.ps1 interactively in a non-remoted session.
A common variation on the above theme is to use an efficient file transfer and versioning mechanism such as git (or (shudder) TFS!) to achieve the 'push somewhere at end of build' and 'pull at start of deploy' portions of the exercise (Azure Web Sites offers a built in TFS or git endpoint which makes each 'push' implicitly include a 'pull' on the far end).
If your code is xcopy deployable (and shadow copied), you could even have a full app image in git and simply do a git pull to update your site (with or without a step 4 comprised of a PowerShell Remoting execute of an install.ps1).
There is a web service installed on an Amazon server. An exposed WebMethod should start an executable. But it seems that the process (executable) started by the WebMethod does not have permission to finish its job. If the WebMethod is called locally (using IE on the Amazon VM), I can trace some events into a log file placed at the path C:\LogFiles. But if it is called from a remote machine, there are not even any log files. Locally, on my machine, everything works fine.
The question: is there any way or setting in IIS 7 to allow my web service to create a process that can do everything I want it to do? In the web.config file I added the line:
<identity impersonate="true" userName="USERNAME" password="password"/>
(userName and password are, of course, written correctly in the file).
Also, I tried to use impersonation as explained here, but with no result. My process can't do its job; it cannot even trace actions into a log file. Locally, on my machine, everything works fine.
Any idea how to change the settings (or whatever else) in IIS 7?
EDIT: In addition to the main question: my web service is not even able to create log files at the path C:\LogFiles. It can when started locally, but remotely there is not even a simple log file containing some string. How can I allow it to create simple text files?
If all else fails, you may start such a process separately and make it wait for a signal. You can supply a signal in many ways — via an IP socket, via a mailslot, via a named pipe. The web service will post requests to the command pipe (or queue), and the 'executor' process will pop commands, execute them, and wait for more commands.
You should avoid trying to start external processes from ASP.NET, not least because your application will then be running under the context of the ASP.NET account. (Yes, you could use impersonation to launch into another account, but let's not go there.)
Instead, install a Windows Service which can receive a signal* to launch the executable you wish.
This has the advantage that you can customise what account the service runs under, without putting passwords inside your code.
(*) Signalling could be achieved through a number of means:
WCF Service Call (using a WCF Service being hosted by the Windows service)
Monitoring for a filesystem change to a known directory (see the sketch below).
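A bare-bones sketch of the second signalling option, assuming the web application drops a small trigger file into a known folder and the Windows service launches a fixed executable (the folder, file pattern and executable path are placeholders):

using System.Diagnostics;
using System.IO;
using System.ServiceProcess;

public class LauncherService : ServiceBase
{
    private FileSystemWatcher _watcher;

    protected override void OnStart(string[] args)
    {
        // Watch a known drop folder for trigger files created by the web application.
        _watcher = new FileSystemWatcher(@"C:\LauncherTriggers", "*.trigger");
        _watcher.Created += (sender, e) =>
        {
            // Runs under the service account, not the ASP.NET worker process identity.
            Process.Start(@"C:\Tools\MyJob.exe");
            File.Delete(e.FullPath);
        };
        _watcher.EnableRaisingEvents = true;
    }

    protected override void OnStop()
    {
        _watcher?.Dispose();
    }
}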
If you were using Linux, I would have given you the smartest solution ever: setting the SUID flag, which is not possible on Windows.
The problem with impersonation is that it only works once you have some degree of control over the server machine, rather than merely having the appropriate credentials.
You mentioned an Amazon VM: I'm pretty certain that, for security reasons, they won't allow impersonation to be performed. [Add] Or, more likely, they won't allow anybody to write to C:\
Option 1
Switch to Mono/Linux, set the SUID bit using chmod from the console, and rock!!
Option 2
If you can run the executable some way other than via ASP.NET (i.e. you have Remote Desktop or SSH*) as a privileged account (note: privileged doesn't mean Administrator), then you can redesign your application to have ASP.NET invoke services from your daemon process using WCF, Web Services or Remoting. But in this case you have to redesign your executable to be a stand-alone server.
[Add] None of these solutions help if your hosting provider doesn't allow you to write to paths such as C:\ and only allows you to write under your home directory.
*It works on Windows too!!!! And I mean the server!!!
I've got a piece of code that calls the DeleteFile method in the Microsoft.VisualBasic.FileIO.FileSystem class (in the Microsoft.VisualBasic assembly) in order to send the file to the Recycle Bin instead of permanently deleting it. This code is in a managed Windows Service and runs on a Win Server 2k8 machine (32-bit).
The relevant line:
FileSystem.DeleteFile(file.FullName, UIOption.OnlyErrorDialogs, RecycleOption.SendToRecycleBin, UICancelOption.DoNothing);
Of course, I've got "using Microsoft.VisualBasic.FileIO;" at the top of the class, and I verified that the method being called really is the one on the FileSystem class in that namespace. In the above line I refer to a local variable "file" - that's a FileInfo for a local file (say, C:\path\to\file.txt) which I'm certain exists. The application has Full Control over both the file and the directory it's in.
This appears to work just fine as the file disappears from the directory it was in. However, the file doesn't show up in the Recycle Bin. I tried inspecting the C:\$Recycle.Bin folders manually as I suspected the Windows Service running in session 0 would make it end up in a different Recycle Bin, but all the Recycle Bins appear empty.
Does anybody have a clue as to what causes this behavior?
By the way - the machine is definitely not out of free space on the drive in question (or any other drive for that matter), and the file is very small (a couple of kilobytes, so it doesn't exceed the Recycle Bin threshold).
I assume your service is running under a different user account than your own (or one of the special service accounts).
I don't believe it's possible for one user to view the contents of another user's recycle bin - even though you can see some evidence of their existence within the C:\$Recycle.Bin folder.
If it's running under another user account, try logging into the machine using that account, and then check the recycle bin. If it's running under a service account (e.g. Local Service, Network Service, or Local System) it's going to be trickier.
Given that the recycle bins are separate, how are you planning to make use of the fact that the file is in the recycle bin anyway?
The problem could come from the user account executing your service; could you try altering that user's policy, or changing the executing user?
Anyway, it could also come from the service being executed without a shell, as the Recycle Bin depends on the shell API. This post seems to confirm this issue. So you would need to take another approach to access the shell API from your service.