I am creating and using a mutex in a Windows service:
using (var m = new Mutex(false, "mymutex"))
{
    m.WaitOne();
    // do my things for a long time
    m.ReleaseMutex();
}
In another program, running with Administrator rights, I do
Mutex.OpenExisting("mymutex")
and it throws an exception saying the mutex does not exist. I can see in Resource Monitor that the Windows service holds a reference to the mutex.
What is wrong?
Operating system objects like Mutex have session scope. Your service runs in session 0, so its mutex is not visible to processes that run in the desktop session. The workaround is simple: prefix the mutex name with "Global\".
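A minimal sketch of that fix, reusing the names from the question, would look like this on both sides:

// In the service:
using (var m = new Mutex(false, @"Global\mymutex"))
{
    m.WaitOne();
    try
    {
        // do the long-running work here
    }
    finally
    {
        m.ReleaseMutex();
    }
}

// In the other program, running in the interactive session:
Mutex other = Mutex.OpenExisting(@"Global\mymutex");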
Ignorance is not bliss. EventWaitHandle.OpenExisting throws WaitHandleCannotBeOpenedException
It looks like the default behaviour changes between a LocalSystem account and a user account.
As I understand it, Azure Worker Roles are run by a host application called WaWorkerHost.exe, and there is another application called WaHostBootstrapper.exe which checks whether WaWorkerHost.exe is running and, if not, starts it.
How often does this worker role status check occur?
How can I quickly restart the worker role myself? I can either reboot the machine the worker role is running on and wait a few minutes, or choose the traditional method:
taskkill /f /im WaWorkerHost.exe
and wait a few minutes for WaHostBootstrapper.exe to kick in, but this is very inefficient and slow.
Is there any (instant) method of restarting the worker role?
Can I run something like the following and expect similar results to WaHostBootstrapper.exe, or are there other considerations?
WaWorkerHost.exe {MyAzureWorkerRole.dll}
The bootstrapper checks the WaWorkerHost status every 1 second. You can see this in the bootstrapper log (c:\resources\WaHostBootstrapper.txt) by looking at the interval between traces like:
"Getting status from client WaWorkerHost.exe"
You can use AzureTools, a utility used by the Azure support team.
One of its features is to gracefully recycle the role instance.
Alternatively, you can restart the instance programmatically:
1. Upload a management certificate to your subscription.
2. Use the following code (with the Microsoft Azure Compute Management library) to restart the instance:
// Load the management certificate that was uploaded to the subscription.
X509Certificate2 cert = new X509Certificate2("");
var credentials = new CertificateCloudCredentials("your_subscription_id", cert);

using (var managementClient = new ComputeManagementClient(credentials))
{
    // Note: await requires this to run inside an async method.
    OperationStatusResponse response =
        await managementClient.Deployments.RebootRoleInstanceByDeploymentSlotAsync(
            "cloud_service_name",
            DeploymentSlot.Production, // or DeploymentSlot.Staging
            "instance_name");
}
This is not recommended, for three reasons:
1. The bootstrapper checks every second, which should be enough for most cases.
2. It could lead to weird issues. For example: you kill the worker, the bootstrapper identifies that the worker is down, you manually start the worker, and the bootstrapper also tries to start the worker and fails (will it crash? will it enter a zombie state?). This can leave the bootstrapper unhealthy, meaning nothing is taking care of the worker process.
3. It depends, of course, on what the bootstrapper does other than starting the worker. But even if it currently does nothing else, you cannot know for sure whether the Azure team will add more responsibilities/actions to it tomorrow.
If the role itself is aware that it needs to restart, it can call RoleEnvironment.RequestRecycle to cause the role instance to be restarted.
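A minimal sketch of that call (assuming a reference to the Microsoft.WindowsAzure.ServiceRuntime assembly):

using Microsoft.WindowsAzure.ServiceRuntime;

// Ask the fabric to gracefully shut down and restart this role instance.
if (RoleEnvironment.IsAvailable)
{
    RoleEnvironment.RequestRecycle();
}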
I am developing a Windows Service in C# to centrally manage some application connectivity. It's a sleeper service in general, which performs some actions when awoken by an external executable. To this end I'm using named events, specifically the .NET EventWaitHandle. My code boils down to, at the service end:
EventWaitHandleSecurity sec = new EventWaitHandleSecurity();
sec.AddAccessRule(new EventWaitHandleAccessRule(
    new SecurityIdentifier(WellKnownSidType.WorldSid, null),
    EventWaitHandleRights.FullControl,
    AccessControlType.Allow));
evh = new EventWaitHandle(false, EventResetMode.AutoReset, EVENT_NAME,
    out created, sec);
Log(created ? "Event created" : "Event already existed?");
As it's an internal application on trusted servers, I don't mind granting 'Full Control' to 'World', even though in general that wouldn't be smart.
At the client end I have:
EventWaitHandle.TryOpenExisting(EVENT_NAME, EventWaitHandleRights.Modify, out evh)
The code above works perfectly when I run my service in console-based interactive mode. The event is found on both ends, the client can set, and the service kicks to work. Everybody's happy.
When I install the service, however, it doesn't work. The logging still reports that the event was created anew, but the client cannot find it. As I thought it was security-related I added the World/Full Control/Allow access rule, but it didn't change anything. I changed the service to run as Local Admin, even as my own user account, but still nothing: the client cannot find the event even though the logs show the service is happily polling away on it. If I change TryOpenExisting to OpenExisting I get an explicit exception:
System.Threading.WaitHandleCannotBeOpenedException: No handle of the given name exists.
What am I missing?
Starting with Windows Vista, services are isolated and run in Session 0 (see Service Changes for Windows Vista). When calling CreateEvent (which EventWaitHandle does), the event object is created in the local namespace by default, also called session namespace. An event object created by a service in session 0 with a name in the session namespace is visible in session 0 only. It is invisible to applications running in an interactive user session.
To create an event object by a service (running in session 0) that can be discovered by application code (running in an interactive user session), you have to create it into the global namespace. This is done by prefixing the event name with "Global\", as documented under CreateEvent.
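A minimal sketch of that change, based on the code in the question (EVENT_NAME is assumed to be a plain string constant without a prefix):

// Service side: create the event in the global namespace so user sessions can see it.
evh = new EventWaitHandle(false, EventResetMode.AutoReset,
    @"Global\" + EVENT_NAME, out created, sec);

// Client side: open it under the same prefixed name.
EventWaitHandle.TryOpenExisting(@"Global\" + EVENT_NAME,
    EventWaitHandleRights.Modify, out evh);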
A helpful tool to track down kernel object-related bugs is Sysinternals' WinObj.
How can I notify another application, which is in a different domain, that the currently running application has crashed?
In other words, is it possible for two applications in separate domains to communicate?
Thanks in advance.
You can use named pipes for this sort of IPC. For this, look into the System.IO.Pipes namespace and the excellent NamedPipeServerStream and NamedPipeClientStream classes.
Note that anonymous pipes can only be used for inter-process communication on the same machine, while named pipes can be used for IPC across machines in separate domains (i.e. across PCs on the same intranet).
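A minimal sketch of the idea (the pipe name "crash_notify" and the message text are arbitrary placeholders; for a remote server, replace "." with the machine name):

using System.IO;
using System.IO.Pipes;

// Notifying side: open a pipe and send a message when the crash is detected.
using (var server = new NamedPipeServerStream("crash_notify", PipeDirection.Out))
using (var writer = new StreamWriter(server))
{
    server.WaitForConnection();          // blocks until a client connects
    writer.WriteLine("crashed");         // send a simple notification
    writer.Flush();
}

// Listening side: connect and wait for the notification.
using (var client = new NamedPipeClientStream(".", "crash_notify", PipeDirection.In))
using (var reader = new StreamReader(client))
{
    client.Connect();                    // optionally pass a timeout in milliseconds
    string message = reader.ReadLine();  // "crashed"
}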
Yes, it is possible. How well this is supported by .NET types will vary depending on how you are going to make the determination of "has crashed".
Basically the monitoring application needs to supply credentials suitable to access the system that should be running the monitored application. This is exactly what one would do to copy a file to/from another domain, starting with something like:
net use \\Fileserver1.domain2.com\IPC$ /user:DOMAIN\USER PASSWORD
or its API equivalent.
If you use WMI (the obvious approach; it is easy to list the processes on a remote system with a query for Win32_Process) you can supply credentials (e.g. with the scripting interface or in .NET).
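A minimal sketch of the WMI approach (assumes a reference to System.Management; the machine name, credentials, and process name are placeholders):

using System.Management;

var options = new ConnectionOptions
{
    Username = @"DOMAIN\USER",   // credentials valid in the remote domain
    Password = "PASSWORD"
};
var scope = new ManagementScope(@"\\remotemachine\root\cimv2", options);
scope.Connect();

// Check whether the monitored process is still running.
var query = new SelectQuery("SELECT * FROM Win32_Process WHERE Name = 'MonitoredApp.exe'");
using (var searcher = new ManagementObjectSearcher(scope, query))
{
    bool running = searcher.Get().Count > 0;
}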
You can use the AppDomain.UnhandledException event to signal the other AppDomain, possibly through a named Mutex. Since they're system-wide, you could create one called "MyAppHasCrashed" and lock it immediately. When you hit an unhandled exception, you release the mutex. On the other side, have a thread that waits on the mutex. Since it's initially blocked, the thread will sit waiting. When an exception occurs, the thread resumes and you can handle the crash.
Mutex crashed = new Mutex(true, "AppDomain1_Crashed");
...
private void AppDomain_UnhandledException(...)
{
    // do whatever you want to log / alert the user
    // then unlock the mutex
    crashed.ReleaseMutex();
}
Then, on the other side:
void CrashWaitThread()
{
    Mutex crashed;
    try
    {
        crashed = Mutex.OpenExisting("AppDomain1_Crashed");
    }
    catch (WaitHandleCannotBeOpenedException)
    {
        // couldn't open the mutex - the monitored app isn't running (yet)
        return;
    }

    crashed.WaitOne();
    // code to handle the crash here.
}
It's a bit of a hack, but it works nicely for both inter-domain and inter-process cases.
I've written a C# Windows Service application that reads a file via a timer delegate every 20 minutes or so, deserializes the content, and then clears the file. The file is written to by one or more client applications running on the same machine and I have chosen to use a Mutex to more or less "lock" the file while it is being deserialized and written to by the service.
I am doing this to avoid Exceptions in the rare occurrence that the client application and the service try to write to the file at the same time.
I create the Mutex inside of the Windows service via the following C# code (this is run every 20 minutes):
public void MyServiceFunction() {
    Mutex sessMutex = new Mutex(false, "sessMutex");
    sessMutex.WaitOne();
    // Write to the file ......
    sessMutex.ReleaseMutex();
}
In the client application I run the following:
public void MyClientFunction() {
    Mutex mutex = Mutex.OpenExisting("sessMutex");
    mutex.WaitOne();
    // Write to the file ......
    mutex.ReleaseMutex();
}
Now, if I start the Windows service and run the client application within a few minutes, everything works fine. However, after a few hours when attempting to execute the client application I get the following error:
No handle of the given name exists.
My question is: how do I prevent this error from occurring and "persist" the Mutex?
Would storing the Mutex as property of the Windows service class work? Is using a Mutex the proper way to achieve the functionality I am looking for?
Thanks in advance for your help!
I don't understand why you treat the mutex differently in your client.
It should be the same code in both programs:
Mutex sessMutex = new Mutex(false, "sessMutex");
sessMutex.WaitOne();
// Write to the file ......
sessMutex.ReleaseMutex();
1. Create the mutex - if the mutex already exists in the system, this will return the existing mutex!
2. Try to lock it.
3. Write to the file.
4. Release the lock.
Apparently the service creates the mutex, locks it, starts the file operations, and then releases it; as soon as that method loses scope, the mutex is eligible for garbage collection. Subsequently, in the client, you seem to assume the mutex is still there. Yes, storing it in the service scope may work, but then the client could still throw that exception if the service exited for whatever reason. Your client will need to check whether the mutex is still there. P.S.: if it isn't, your service probably isn't running either.
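A minimal sketch of such a check on the client side (assumes .NET 4.5 or later for Mutex.TryOpenExisting; whether to fall back to creating the mutex or to report an error instead is up to you):

Mutex mutex;
if (!Mutex.TryOpenExisting("sessMutex", out mutex))
{
    // The service probably hasn't created the mutex (or isn't running);
    // either fall back to creating it here, or report the problem and bail out.
    mutex = new Mutex(false, "sessMutex");
}

mutex.WaitOne();
try
{
    // Write to the file ......
}
finally
{
    mutex.ReleaseMutex();
}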
I am using the code below to disable Task Manager for a kiosk application, and it works perfectly:
public void DisableTaskManager()
{
    RegistryKey regkey;
    string keyValueInt = "1";
    string subKey = "Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\System";

    try
    {
        regkey = Registry.CurrentUser.CreateSubKey(subKey);
        regkey.SetValue("DisableTaskMgr", keyValueInt);
        regkey.Close();
    }
    catch (Exception ex)
    {
        MessageBox.Show("DisableTaskManager" + ex.ToString());
    }
}
But when I run this on an OS-hardened machine I get the following error:
DisableTaskManagerSystem.UnauthorizedAccessException:
Access to the registry key 'HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System' is denied.
   at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)
How can I overcome this? I need to do this for a kiosk application.
Take a look at this. I'm not yet a good enough C# developer to comment, but I know that during my work with other developers they came across UAC in Windows 7, if that's what we're talking about here.
http://www.aneef.net/2009/06/29/request-uac-elevation-for-net-application-managed-code/
Well, the guy that set up that machine basically asked the reverse: "How do I prevent a non-administrator from messing with group policy?" So rather than engaging in a group policy arms race, you can either do it at install time when running as an admin, or just skip that part of the code when not running as a user that has permission to do so.
Don't have your application disable Task Manager; instead use a Windows service or scheduled task. Your application runs in the context of the current user and won't have rights to the registry key. A Windows service or scheduled task, on the other hand, can run as a user with higher privileges and can write to the registry.
With a Windows service you can communicate with it through any IPC mechanism, such as custom service messages, sockets, .NET Remoting, WCF, or whatever, to tell it to turn Task Manager on/off.
The code requires elevated privileges to access the registry. However, only a fragment of the code needs these extra permissions. To handle such scenarios, impersonation is used: you run the application as a normal user, but that particular piece of code executes as if you were an Administrator.
http://msdn.microsoft.com/en-us/library/system.security.principal.windowsimpersonationcontext.aspx
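A minimal sketch of the impersonation pattern described on that page (assumes .NET Framework; the account name and password are placeholders, and LogonUser is a Win32 API you have to P/Invoke):

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

[DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
static extern bool LogonUser(string user, string domain, string password,
    int logonType, int logonProvider, out IntPtr token);

const int LOGON32_LOGON_INTERACTIVE = 2;
const int LOGON32_PROVIDER_DEFAULT = 0;

IntPtr token;
if (LogonUser("AdminUser", "DOMAIN", "password",
    LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
{
    using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
    {
        // Code placed here runs under the impersonated (admin) identity,
        // e.g. the privileged part of the kiosk setup.
    }   // Dispose() reverts to the original identity
}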