Remote monitor directory via FileSystemWatcher [duplicate] - c#

Suppose a Windows service uses code that requires mapped network drives and cannot handle UNC paths. How can I make the drive mapping available to the service's session when the service is started? Logging in as the service user and creating a persistent mapping will not establish the mapping in the context of the actual service.

Use this at your own risk. (I have tested it on XP and Server 2008 x64 R2)
For this hack you will need SysinternalsSuite by Mark Russinovich:
Step one:
Open an elevated cmd.exe prompt (Run as administrator)
Step two:
Elevate again to root using PSExec.exe:
Navigate to the folder containing SysinternalsSuite and execute the following command
psexec -i -s cmd.exe
You are now inside a prompt running as NT AUTHORITY\SYSTEM, which you can verify by typing whoami. The -i is needed because drive mappings need to interact with the user's session.
Step three:
Create the persistent mapped drive as the SYSTEM account with the following command
net use z: \\servername\sharedfolder /persistent:yes
It's that easy!
WARNING: You can only remove this mapping the same way you created it, from the SYSTEM account. If you need to remove it, follow steps 1 and 2 but change the command on step 3 to net use z: /delete.
NOTE: The newly created mapped drive will now appear for ALL users of this system but they will see it displayed as "Disconnected Network Drive (Z:)". Do not let the name fool you. It may claim to be disconnected but it will work for everyone. That's how you can tell this hack is not supported by M$.

I found a solution that is similar to the one with psexec but works without additional tools and survives a reboot.
Just add a scheduled task, insert "SYSTEM" in the "run as" field and point the task to a batch file with the simple command
net use z: \\servername\sharedfolder /persistent:yes
Then select "run at system startup" (or similar, I do not have an English version) and you are done.

You'll either need to modify the service, or wrap it inside a helper process: apart from session/drive access issues, persistent drive mappings are only restored on an interactive logon, which services typically don't perform.
The helper process approach can be pretty simple: just create a new service that maps the drive and starts the 'real' service. The only things that are not entirely trivial about this are:
The helper service will need to pass on all appropriate SCM commands (start/stop, etc.) to the real service. If the real service accepts custom SCM commands, remember to pass those on as well (I don't expect a service that considers UNC paths exotic to use such commands, though...)
Things may get a bit tricky credential-wise. If the real service runs under a normal user account, you can run the helper service under that account as well, and all should be OK as long as the account has appropriate access to the network share. If the real service will only work when run as LOCALSYSTEM or somesuch, things get more interesting, as it either won't be able to 'see' the network drive at all, or will require some credential juggling to get things to work.
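A minimal sketch of such a helper service, assuming the real service is called "RealService" and the server/share names are placeholders; it maps the drive and then starts (and later stops) the real service:
using System.Diagnostics;
using System.ServiceProcess;

public class DriveMappingHelperService : ServiceBase
{
    const string RealServiceName = "RealService"; // placeholder name

    protected override void OnStart(string[] args)
    {
        // Map the drive in this service's session (server/share are placeholders).
        Process.Start("net.exe", @"use Z: \\servername\sharedfolder").WaitForExit();

        // Start the real service once the mapping exists.
        using (var real = new ServiceController(RealServiceName))
        {
            if (real.Status != ServiceControllerStatus.Running)
                real.Start();
        }
    }

    protected override void OnStop()
    {
        // Pass the stop on to the real service, then drop the mapping.
        using (var real = new ServiceController(RealServiceName))
        {
            if (real.Status == ServiceControllerStatus.Running)
                real.Stop();
        }
        Process.Start("net.exe", "use Z: /delete").WaitForExit();
    }
}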

A better way would be to create a symbolic link using mklink.exe. You can just create a link in the file system that any app can use. See http://en.wikipedia.org/wiki/NTFS_symbolic_link.

There is a good answer here:
https://superuser.com/a/651015/299678
I.e. You can use a symbolic link, e.g.
mklink /D C:\myLink \\127.0.0.1\c$

You could use the 'net use' command:
var p = System.Diagnostics.Process.Start("net.exe", "use K: \\\\Server\\path");
var isCompleted = p.WaitForExit(5000);
If that does not work in a service, try the Winapi and PInvoke WNetAddConnection2
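A minimal P/Invoke sketch of WNetAddConnection2; the drive letter, share and credentials below are placeholders, not anything from the original post:
using System;
using System.Runtime.InteropServices;

public static class NetworkDrive
{
    [StructLayout(LayoutKind.Sequential)]
    class NETRESOURCE
    {
        public int dwScope;
        public int dwType;
        public int dwDisplayType;
        public int dwUsage;
        public string lpLocalName;
        public string lpRemoteName;
        public string lpComment;
        public string lpProvider;
    }

    [DllImport("mpr.dll")]
    static extern int WNetAddConnection2(NETRESOURCE netResource, string password, string username, int flags);

    const int RESOURCETYPE_DISK = 1;

    public static void Map(string driveLetter, string uncPath, string user, string password)
    {
        var resource = new NETRESOURCE
        {
            dwType = RESOURCETYPE_DISK,
            lpLocalName = driveLetter,  // e.g. "K:"
            lpRemoteName = uncPath      // e.g. @"\\Server\path"
        };
        int result = WNetAddConnection2(resource, password, user, 0);
        if (result != 0)
            throw new System.ComponentModel.Win32Exception(result);
    }
}

// Usage (all values are placeholders):
// NetworkDrive.Map("K:", @"\\Server\path", @"DOMAIN\user", "password");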
Edit: Obviously I misunderstood you - you cannot change the source code of the service, right? In that case I would follow the suggestion by mdb, but with a little twist: create your own service (let's call it the mapping service) that maps the drive, and add this mapping service to the dependencies of the first (the actual working) service. That way the working service will not start before the mapping service has started (and mapped the drive).

ForcePush,
NOTE: The newly created mapped drive will now appear for ALL users of this system but they will see it displayed as "Disconnected Network Drive (Z:)". Do not let the name fool you. It may claim to be disconnected but it will work for everyone. That's how you can tell this hack is not supported by M$...
It all depends on the share permissions. If you have Everyone in the share permissions, this mapped drive will be accessible by other users. But if you have only some particular user whose credentials you used in your batch script, and this batch script was added to the startup scripts, only the System account will have access to that share - not even Administrator.
So if you use, for example, a scheduled ntbackup job, the System account must be used in 'Run as'.
If your service is set to 'Log on as: Local System account', it should work.
What I did: I didn't map any drive letter in my startup script, just used net use \\server\share ... and used the UNC path in my scheduled jobs. Then I added a logon script (or just add a batch file to the Startup folder) that maps the same share to some drive letter: net use Z: \\... with the same credentials. Now the logged-on user can see and access that mapped drive. There are two connections to the same share, and in this case the user doesn't see that annoying "Disconnected network drive ...". But if you really need access to that share by drive letter, not just UNC, map the share with different drive letters, e.g. Y: for System and Z: for users.
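A sketch of that two-connection setup; the server, share and credentials are placeholders:
rem Startup script (runs as SYSTEM): connect by UNC only, no drive letter
net use \\servername\share <password> /user:<domain>\<user>
rem Logon script (runs as the logged-on user): map the same share to a letter
net use Z: \\servername\share <password> /user:<domain>\<user>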

Found a way to grant a Windows service access to a network drive.
Take Windows Server 2012 with an NFS disk for example:
Step 1: Write a Batch File to Mount.
Write a batch file, ex: C:\mount_nfs.bat
echo %time% >> c:\mount_nfs_log.txt
net use Z: \\{your ip}\{netdisk folder}\ >> C:\mount_nfs_log.txt 2>&1
Step 2: Mount the disk as NT AUTHORITY\SYSTEM.
Open "Task Scheduler", create a new task:
Run as "SYSTEM", at "System Startup".
Create action: Run "C:\mount_nfs.bat".
After these two simple steps, my Windows ActiveMQ service runs under the "Local System" privilege and works perfectly without anyone logging in.

The reason why you are able to access the drive when you normally run the executable from the command prompt is that you are then running the application under the user account you logged on with, and that user has the privileges to access the network. But when you install the executable as a service, by default (as you can see in Task Manager) it runs under the 'SYSTEM' account, and as you may know, 'SYSTEM' doesn't have rights to access network resources.
There can be two solutions to this problem.
Map the drive as persistent, as already pointed out above.
There is one more approach that can be followed. If you open the service manager by typing 'services.msc', you can go to your service, and in its properties there is a Log On tab where you can specify an account other than 'System': you can either start the service under your own logged-on user account or under 'Network Service'. When you do this, the service can access any network component and drive, even if they are not persistent.
To achieve this programmatically you can look into the 'CreateService' function at
http://msdn.microsoft.com/en-us/library/ms682450(v=vs.85).aspx and set the parameter 'lpServiceStartName' to 'NT AUTHORITY\NetworkService'. This will start your service under the 'Network Service' account and then you are done.
You can also try making the service interactive by specifying SERVICE_INTERACTIVE_PROCESS in the service type parameter of your CreateService() function, but this is limited to XP, as Vista and 7 do not support this feature.
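A rough P/Invoke sketch of creating a service that runs as Network Service; the service name and binary path are made-up placeholders:
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

class InstallServiceSketch
{
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern IntPtr OpenSCManager(string machineName, string databaseName, uint desiredAccess);

    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern IntPtr CreateService(
        IntPtr hSCManager, string serviceName, string displayName,
        uint desiredAccess, uint serviceType, uint startType, uint errorControl,
        string binaryPathName, string loadOrderGroup, IntPtr tagId,
        string dependencies, string serviceStartName, string password);

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool CloseServiceHandle(IntPtr handle);

    const uint SC_MANAGER_CREATE_SERVICE = 0x0002;
    const uint SERVICE_WIN32_OWN_PROCESS = 0x00000010;
    const uint SERVICE_AUTO_START = 0x00000002;
    const uint SERVICE_ERROR_NORMAL = 0x00000001;
    const uint SERVICE_ALL_ACCESS = 0xF01FF;

    static void Main()
    {
        IntPtr scm = OpenSCManager(null, null, SC_MANAGER_CREATE_SERVICE);
        if (scm == IntPtr.Zero)
            throw new Win32Exception(Marshal.GetLastWin32Error());

        // lpServiceStartName is set to the Network Service account; it needs no password.
        IntPtr svc = CreateService(
            scm, "MyNetworkService", "My Network Service",
            SERVICE_ALL_ACCESS, SERVICE_WIN32_OWN_PROCESS, SERVICE_AUTO_START,
            SERVICE_ERROR_NORMAL, @"C:\Services\MyNetworkService.exe",
            null, IntPtr.Zero, null,
            @"NT AUTHORITY\NetworkService", null);

        if (svc == IntPtr.Zero)
            throw new Win32Exception(Marshal.GetLastWin32Error());

        CloseServiceHandle(svc);
        CloseServiceHandle(scm);
    }
}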
Hope the solutions help you. Let me know if this worked for you.

I found a very simple method: the PowerShell command "New-SmbGlobalMapping", which mounts the drive globally:
$User = "username"
$PWord = ConvertTo-SecureString -String "password" -AsPlainText -Force
$creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $User, $PWord
New-SmbGlobalMapping -RemotePath \\192.168.88.11\shares -Credential $creds -LocalPath S:

You want to either change the user that the service runs under from "System", or find a sneaky way to run your mapping as System.
The funny thing is that this is possible by using the "at" command: simply schedule your drive mapping one minute into the future and it will be run under the System account, making the drive visible to your service.
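For example (assuming the current time is 16:29, so the job fires a minute later; the server and share are placeholders, and the at command is only available on older Windows versions):
at 16:30 /interactive "cmd /c net use Z: \\servername\sharedfolder /persistent:yes"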

I can't comment yet (working on reputation) but created an account just to answer @Tech Jerk, @spankmaster79 (nice name lol) and @NMC about the issues they reported in reply to the "I found a solution that is similar to the one with psexec but works without additional tools and survives a reboot" post @Larry had made.
The solution to this is to just browse to that folder from within the logged-in account, i.e.:
\\servername\share
and let it prompt for a login, entering the same credentials you used for the UNC in psexec. After that it starts working. In my case, I think this is because the server with the service isn't a member of the same domain as the server I'm mapping to. I'm thinking that if the UNC and the scheduled task both refer to the IP instead of the hostname
\\123.456.789.012\share
it may avoid the problem altogether.
If I ever get enough rep points on here, I'll add this as a reply instead.

Instead of relying on a persistent drive, you could set the script to map/unmap the drive each time you use it:
net use Q: \\share.domain.com\share
forfiles /p Q:\myfolder /s /m *.txt /d -0 /c "cmd /c del @path"
net use Q: /delete
This works for me.

Related

File.Exists() always returns false on IIS

The file path that I'm checking with File.Exists() resides on a mapped drive (Z:\hello.txt). The code runs fine in the debug environment; however, in IIS it always returns false.
var fullFileName = string.Format("{0}\\{1}", ConfigurationManager.AppSettings["FileName"], fileName);
if (System.IO.File.Exists(fullFileName))
Why is this so, and how can I workaround this?
I have granted everyone full read/write permissions in that mapped drive
EDIT:
I tried deleting the file via \\192.168.1.12\Examples\Files\2.xml and I get the same result. It doesn't detect the file on IIS, but works fine in debug.
I think your application does not have permission on "Z:".
Is "Z:" a network disk?
I have had similar issues using network mapped drives: when running debug code the application works perfectly, and when running the release version the application cannot find the file.
If the files are stored on the same server the application is deployed to, we found a solution by using the local drive location behind the mapped drive; for example Z:\files\ could really be E:\folder\folder1\.
If the application is deployed on a separate server, we found using the full network name works, for example \\server1\folder\.
I hope this proves helpful to you.
Your web application is running under a certain security context and you need to find out what context this is. If it's a normal user, open a command prompt as that user (using the runas tool) and map the required drive from that prompt (be sure to use the /persistent:yes flag).
Alternatively why can't you just use a UNC path (\\serverName\shareName) and avoid all this nonsense?
EDIT: 2013-05-27
To troubleshoot this, create a new application pool, based on whatever app pool you want. Then set the identity that this pool runs under as shown in the attached screenshot.
Make sure that this user has the correct privileges on the file share and then retest it.
Maybe you should use Path.DirectorySeparatorChar.
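For example, a small sketch using Path.Combine instead of string.Format, so the separator is handled for you (the "FileName" app setting is taken from the question's snippet):
var fullFileName = System.IO.Path.Combine(ConfigurationManager.AppSettings["FileName"], fileName);
if (System.IO.File.Exists(fullFileName))
{
    // file was found
}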

Why does System.IO.FileSystemWatcher crash my service when I try to monitor a folder in a network share? [duplicate]

I am trying to run a file watcher over some server path using a Windows service.
I am using my Windows login credentials to run the service, and am able to access this "someServerPath" from my login.
But when I do that from the FileSystemWatcher it throws a
"The directory name \\someServerPath is invalid" exception.
var fileWatcher = new FileSystemWatcher(GetServerPath())
{
    NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName,
    EnableRaisingEvents = true,
    IncludeSubdirectories = true
};
public static string GetServerPath()
{
    return string.Format(@"\\{0}", FileServer1);
}
Can anyone please help me with this?
I have projects using the FileSystemWatcher object monitoring UNC paths without any issues.
My guess from looking at your code example is that you are pointing the watcher at the root share of the server (\\servername\), which may not be a valid file system share? I know it returns things like printers, scheduled tasks, etc. in Windows Explorer.
Try pointing the watcher to a share beneath the root - something like \\servername\c$\ would be a good test example if you have remote administrative rights on the server.
With regards to the updated question, I agree that you probably need to specify a valid share, rather than just the remote server name.
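A small sketch of that suggestion, with the server and share names as placeholders:
using System;
using System.IO;

var fileWatcher = new FileSystemWatcher(@"\\FileServer1\SomeShare")
{
    NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName,
    IncludeSubdirectories = true,
    EnableRaisingEvents = true
};
fileWatcher.Changed += (sender, e) => Console.WriteLine("{0}: {1}", e.ChangeType, e.FullPath);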
[Update] Fixed the previous question about the exception with this:
specify the name as @"\\someServerPath"
The \\ was being escaped to a single \.
When you prefix the string with an @ symbol, it doesn't process the escape sequences.
I was just asked this question in regards to FileSystemWatcher code running as a service, and the issue is permissions. I searched and found this question and answer but unfortunately none of the answers here solved the problem. Anyway, I just solved it, so I thought I would throw in the solution here for the next person who searches and finds this question.
The drive was mapped as a logged in user but the service was running as LocalSystem. LocalSystem is a different account and does not have access to drives mapped by a user.
The fix is to either:
Authenticate first (I use a C# Class to establish a network connection with credentials)
Run your service as a user that has access to the share.
You can test LocalSystem authentication by using a LocalSystem command prompt, see How to open a command prompt running as Local System?
Even though this is already answered, I thought I would put in my two cents' worth, because you can see this same error even if you supply valid paths.
You will get the same error when the process running the watcher does not have access to the remote share. This will happen if the watcher is in a service running under the System account and the share was created by a user. System does not have access to that share and won't recognize it; you will need to impersonate the user to get access to it.
Although you can use a FileSystemWatcher over the network, you will have to account for other factors, like disconnection of the network share. If your connection to the share is terminated (maintenance, lag, equipment reset, etc.) you will no longer have a valid handle on the share in your watcher.
You can't use directory watches over network shares; this is a limitation of the OS, not of .NET.

Problem when trying to use EventLog.SourceExists method in .NET

I am trying to use event logs in my application using C#, so I added the following code:
if (!EventLog.SourceExists("SomeName"))
EventLog.CreateEventSource("SomeName", "Application");
The EventLog.SourceExists causes SecurityException that says
"The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security."
I am running as administrator in Windows 7.
Any help would be appreciated.
This is a permissions problem - you should give the running user permission to read the following registry key:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog
Alternatively you can bypass CreateEventSource, removing the need to access this registry key.
Both solutions are explained in more detail in the following thread - How do I create an Event Log source under Vista?.
Yes, it's a permissions issue, but it's actually worse than indicated by the currently accepted answer. There are actually 2 parts.
Part 1
In order to use SourceExists(), the account that your code is running under must have "Read" permission for the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog key and it must also have "Read" permissions on each of the descendant-keys. The problem is that some of the children of that key don't inherit permissions, and only allow a subset of accounts to read them. E.g. some that I know about:
Security
State
Virtual Server
So you have to also manually change those when they exist.
FYI, for those keys (e.g. "State") where even the Administrator account doesn't have "Full Access" permission, you'll have to use PsExec/PsExec64 to "fix" things. As indicated in this StackOverflow answer, download PsTools. Run this from an elevated command prompt: PsExec64 -i -s regedit.exe and you'll then be able to add the permissions you need to that key.
Part 2
In order to successfully use CreateEventSource(), the account that your code is running under must have "Full Control" permissions on HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog as well as have "Full Control" permissions on the log you're adding the new source to.
But wait, there's more...
It is also important to know that both CreateEventSource() and WriteEntry() call SourceExists() "under the hood". So ultimately, if you want to use the EventLog class in .Net, you have to change permissions in the registry. The account needs "Full Control" on the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog key and "Read" for all children.
Commentary: And I believe all of this mess is because when Microsoft originally designed the EventLog, they decided it was critical that people would be able to log something by "Source" without needing to know what log that "Source" went with.
Short tip:
An event source is registered during service installation (if the application is a Windows Service), and can then be used without a SecurityException by a low-privilege process owner (not Administrator).
I perform the service installation and run it with C# code in the typical way described on SO/MSDN.
The important part is the ServiceName property of the System.ServiceProcess.ServiceBase class.
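A minimal installer sketch along those lines; installing it (e.g. with installutil, run elevated) registers an event source named after the service, and "SomeName" is simply the name from the question:
using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class MyServiceInstaller : Installer
{
    public MyServiceInstaller()
    {
        var processInstaller = new ServiceProcessInstaller
        {
            Account = ServiceAccount.LocalService   // low-privilege account
        };
        var serviceInstaller = new ServiceInstaller
        {
            // Must match ServiceBase.ServiceName; the matching event source
            // is created during installation, while running elevated.
            ServiceName = "SomeName",
            StartType = ServiceStartMode.Automatic
        };
        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}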
Good afternoon,
The simplest option is to run VS2019 as an administrator; then, when debugging or executing the service, it will run correctly without generating the exception.

Connecting to a network drive programmatically and caching credentials

I'm finally set up to be able to work from home via VPN (using Shrew as a client), and I only have one annoyance. We use some batch files to upload config files to a network drive. Works fine from work, and from my team lead's laptop, but both of those machines are on the domain. My home system is not, and won't be, so when I run the batch file, I get a ton of "invalid drive" errors because I'm not a domain user.
The solution I've found so far is to make a batch file with the following:
explorer \\MACHINE1
explorer \\MACHINE2
explorer \\MACHINE3
Then manually login to each machine using my domain credentials as they pop up. Unfortunately, there are around 10 machines I may need to use, and it's a pain to keep entering the password if I missed one that a batch file requires.
I'm looking into using the answer to this question to make a little C# app that'll take the login info once and login programmatically. Will the authentication be shared automatically with Explorer, or is there anything special I need to do? If it does work, how long are the credentials cached?
Is there an app that does something like this automatically?
Unfortunately, domain authentication via the VPN isn't an option, according to our admin.
EDIT: If there's a way to pass login info to Explorer via the command line, that would be even easier using Ruby and highline.
EDIT: In case anyone else has the same problem, here's the solution I wound up using. It requires Ruby and the Highline gem.
require "highline/import"
domain = ask("Domain: ")
username = ask("Username: ")
password = ask("Password: ") { |q| q.echo = false }
machines = [
'\\MACHINE1\SHARE',
'\\MACHINE2\SHARE',
'\\MACHINE3\SHARE',
'\\MACHINE4\SHARE',
'\\MACHINE5\SHARE'
]
drives = ('f'..'z').to_a[-machines.length..-1]
drives.each{|d| system("net use #{d}: /delete >nul 2>nul"); }
machines.zip(drives).each{|machine, drive| system("net use #{drive}: #{machine} #{password} /user:#{domain}\\#{username} >nul 2>nul")}
It'll figure out how many mapped drives I need, then start mapping them to the requested shares. In this case, it maps them from V: to Z:, and assumes I don't have anything shared with those drive letters.
If you already have an Explorer window open to one of the shares, it may give an error, so before I ran the Ruby script, I ran:
net use * /delete
That cleared up the "multiple connections to a share not permitted" error, and allowed me to connect with no problems.
You could create a batch file that uses "NET USE" to connect to your shares. You'd need to use a drive letter for each share, but it'd be super simple to implement.
Your batch file would look like this:
net use h: \\MACHINE1 <password> /user:<domain>\<user>
net use i: \\MACHINE2 <password> /user:<domain>\<user>
net use j: \\MACHINE3 <password> /user:<domain>\<user>
UPDATE
Whether the connection remains or not depends upon what you specified for the /persistent switch. If you specified yes, then it will attempt to reconnect at your next logon. If you specified no, then it won't. The worrying thing is that the documentation says it defaults to the value you used last!
If you specified no, the connection will remain until you next reboot. If you drop your VPN connection the drive would be unavailable (but if you reconnect to the VPN the drive should be available as long as you haven't removed it).
I don't know of a way to use it without mapping to a drive letter, the documentation would lead you to believe that it isn't possible.
I understand your problem: you're just trying to give Explorer the correct credentials so it stops nagging you with login boxes. Using mapped drives, though not perfect, will at least alleviate your pain.
To pass credentials to Explorer from the command line, you should take a look at the net use command.
Use API WNetAddConnection2() via P/Invoke.

