I'm trying to create a Windows service in C# that will copy all the files from a network drive and paste them into a local drive (let's say the C drive). When I run the test case, the program runs successfully, but when I install and run the Windows service, an 'Access is denied' error appears in the log file.
I tried the Map Network Drive (API) solution, but that didn't work either.
Here's the sample code that I've used to get all the files from the network drive and paste them into the local drive folder:
Service1.cs
public partial class Service1 : ServiceBase
{
    private Timer _timer;

    public Service1()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        try
        {
            DoWork();
        }
        catch (Exception e)
        {
            WriteErrorLog(e);
        }
    }

    private void DoWork()
    {
        _timer = new Timer();
        _timer.Interval = 5000;
        _timer.Elapsed += _timer_Elapsed;
        _timer.Enabled = true;
        Update();
    }

    private void Update()
    {
        RevidAddinController.Update_AutodeskAddinFolder_With_ArchcorpUpdatedAddinFiles(Configuration.AutodeskVersion, Configuration.AutodeskRevitAddinFolderPath);
    }

    private void _timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        Update();
    }

    private void WriteErrorLog(Exception ex)
    {
        try
        {
            // using ensures the writer is closed even if WriteLine throws
            using (var sw = new StreamWriter(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Logfile.txt"), true))
            {
                sw.WriteLine(DateTime.Now + " ; " + ex.Source.Trim() + "; " + ex.Message.Trim());
            }
        }
        catch
        {
            // swallow logging failures so the service keeps running
        }
    }

    protected override void OnStop()
    {
    }
}
RevidAddinController.cs
public static class RevidAddinController
{
    public static IEnumerable<AddinStatus> Update_AutodeskAddinFolder_With_ArchcorpUpdatedAddinFiles(List<string> autoDeskVersion, string addinInstallationPath)
    {
        var networkDrive = ActivateNetworkDrive();
        var allAutodeskVersionPath = Util.GetAllAutodeskAddinLibraryFolderPaths(autoDeskVersion, addinInstallationPath);
        List<FileData> latestArchcorpAddins = new List<FileData>();
        foreach (var autodeskAddinFolder in allAutodeskVersionPath)
        {
            var archorpAddinFiles = Util.GetAllExternalRevitAddinFilesFromArchcorpAddinFolderPath(Configuration.ArchcorpAddinFolderPath);
            var autodeskAddinFiles = Util.GetAllExternalRevitAddinFilesLocationFromAutodeskAddinFolderPath(autodeskAddinFolder);
            var latestAddins = Util.GetUpdatedRevitAddinFromArchcorpFolderPath(autodeskAddinFolder, archorpAddinFiles, autodeskAddinFiles)
                .Where(addin => !addin.FileName.Contains(Configuration.DeleteAddinNamePrefix));
            latestArchcorpAddins.AddRange(latestAddins);
        }
        List<AddinStatus> addinCopyStatus = new List<AddinStatus>();
        foreach (var autodeskAddinPath in allAutodeskVersionPath)
        {
            foreach (var newArchcorpAddin in latestArchcorpAddins)
            {
                addinCopyStatus.Add(Util.InstallNewAddinFile(newArchcorpAddin, autodeskAddinPath));
            }
        }
        return addinCopyStatus;
    }

    /// <summary>
    /// Map the network drive path
    /// </summary>
    /// <returns></returns>
    public static NetworkDrive ActivateNetworkDrive()
    {
        NetworkDrive oNetDrive = new aejw.Network.NetworkDrive();
        try
        {
            oNetDrive.LocalDrive = "O:";
            oNetDrive.ShareName = @"\\acdxbfs1\Organisation";
            oNetDrive.Force = true;
            oNetDrive.Persistent = true;
            oNetDrive.MapDrive();
        }
        catch
        {
            // rethrow without resetting the stack trace
            throw;
        }
        return oNetDrive;
    }
}
The complete code can be found on the gist here. I'd really appreciate it if someone could review the code and provide feedback or a solution to this problem.
A service running under the default Local System account has no concept of mapped shares; those are set up per user account.
Your two options:
Run your service under a User Account which has those shares mapped
Access your share via a UNC path or IP address instead of the drive letter. However, you will need to set the file/folder permissions accordingly.
The service does run as Local System (as mentioned above). If you have a network drive mapped to a local drive letter, the service cannot use it, because a mapped network drive exists only in the user context that mapped it, not for the whole computer/system. However, the service can access the share by its UNC path, \\server\share. If you only have a mapped drive letter, you can see the underlying UNC path by typing 'net use' at a command prompt.
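Once the permissions are in place, the copy itself is ordinary File/Directory API work against the UNC path. A minimal sketch, using the share name from the question; the subfolder and local target below are placeholders:
using System.IO;

static void CopyFromShare()
{
    // \\acdxbfs1\Organisation is the share from the question;
    // the local target folder is a placeholder.
    const string source = @"\\acdxbfs1\Organisation";
    const string target = @"C:\ArchcorpAddins";

    Directory.CreateDirectory(target);
    foreach (string file in Directory.GetFiles(source))
    {
        // true = overwrite any stale local copy
        File.Copy(file, Path.Combine(target, Path.GetFileName(file)), true);
    }
}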
When you run a program as a user, Windows automatically authenticates you against the remote share (if a mapped network drive hasn't already done so). Local System, by contrast, authenticates as the computer account, so you need to grant the target share access to the computer name, e.g. workstation1$ (this only works inside a domain, because a workgroup does not know the other computers). This has to be done for both the NTFS file permissions and the share permissions, because the two are independent and either can block access.
As an alternative, you can authenticate against the remote network share with a username and password; there is an excellent thread on Stack Overflow, which you can find here, that shows how to achieve this.
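The approach in that thread boils down to a P/Invoke of the Win32 WNetAddConnection2 function before touching the share. A rough sketch of that shape; the credentials are obviously placeholders:
using System;
using System.Runtime.InteropServices;

public static class ShareConnection
{
    [StructLayout(LayoutKind.Sequential)]
    private class NetResource
    {
        public int Scope;
        public int ResourceType = 1;   // RESOURCETYPE_DISK
        public int DisplayType;
        public int Usage;
        public string LocalName;       // null = no drive letter, UNC access only
        public string RemoteName;
        public string Comment;
        public string Provider;
    }

    [DllImport("mpr.dll", CharSet = CharSet.Unicode)]
    private static extern int WNetAddConnection2(NetResource netResource,
        string password, string username, int flags);

    public static void Connect(string uncPath, string user, string password)
    {
        var resource = new NetResource { RemoteName = uncPath };
        int result = WNetAddConnection2(resource, password, user, 0);
        if (result != 0)
            throw new System.ComponentModel.Win32Exception(result);
    }
}
After a successful call, the process can read \\server\share with the normal File/Directory APIs for the lifetime of the connection.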
Naturally, you can also set the service to run under a user who has access to the share (in the services manager, services.msc, double-click your service and go to the Log On tab). Doing this grants the user the 'Log on as a service' right, which is required for this.
If the network share is accessible to the Local System account, you can run the service as "Local System account".
The advantage of running your service as the Local System account is that the service has complete, unrestricted access to local resources.
But there are disadvantages too: be careful not to install unauthorized services, since a service gets full, unrestricted access, and if the service has bugs it may lead to performance issues.
Related
I need to copy and/or move files across servers to the other side of my firewall. I was wondering if anyone can tell me what port(s) I will need to open to run these methods in my C# program?
using System;
using System.IO;

class MoveIt
{
    public static void Main()
    {
        var localPath = @"c:\temp\";
        var remotePath = @"\\MyRemoteServer\MyShare\MyPath\";
        try
        {
            if (File.Exists(localPath + "MyTestFile.txt") &&
                Directory.Exists(remotePath))
            {
                File.Move(localPath + "MyTestFile.txt", remotePath + "MyTestFile.txt");
            }
        }
        catch (Exception e)
        {
            Console.WriteLine("The process failed: {0}", e.ToString());
        }
    }
}
You need at least TCP 445, and to be safe you may also want TCP 137-139, though this latter group is only needed if you're still stuck using NetBIOS for SMB name resolution.
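If you want to verify the firewall rule before running the copy, a quick reachability check against the SMB port will do; the host name below is the one from the example above:
using System;
using System.Net.Sockets;

class PortCheck
{
    static void Main()
    {
        // Quick check that TCP 445 (SMB) is reachable through the firewall.
        using (var client = new TcpClient())
        {
            try
            {
                client.Connect("MyRemoteServer", 445);
                Console.WriteLine("Port 445 is open.");
            }
            catch (SocketException)
            {
                Console.WriteLine("Port 445 is blocked or the host is unreachable.");
            }
        }
    }
}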
EDIT: The issue here wasn't the fact that the machine was locked via GP; it was that the application was being run as a service under a service account and didn't have access to the interactive desktop.
I have a C# application that needs to check for when a user's session is locked, I'm using Microsoft.Win32.SystemEvents.SessionSwitch and this works fine when a user manually locks the machine.
The problem is that when the machine is locked via a group policy (User Configuration > Policies > Administrative Templates > Personalization > Screen saver timeout) the application doesn't pick up the switch.
Is there another way to check for a machine being locked? Or is there another way to lock machines via group policy that will be picked up by the application?
N.B. The application is running on Windows 7 as a service with full admin rights.
Here's my code, Thanks in advance!!! :)
public void OnStart(string[] args)
{
    Microsoft.Win32.SystemEvents.SessionSwitch += new Microsoft.Win32.SessionSwitchEventHandler(SystemEvents_SessionSwitch);
}

void SystemEvents_SessionSwitch(object sender, Microsoft.Win32.SessionSwitchEventArgs e)
{
    if (e.Reason == SessionSwitchReason.SessionLock)
    {
        //DO STUFF
    }
}
I managed to resolve this by enabling 'Other Logon/Logoff Events' auditing in Windows and searching the Security event log for the lock and unlock events.
//Define strings for searching the event log.
string lockEvent = "4800";
string unlockEvent = "4801";
//Define the event log source you want (in this case it's Security).
string LogSource = @"Security";
//Add these together to make the full query for lock and unlock.
string LockQuery = "*[System/EventID=" + lockEvent + "]";
string UnlockQuery = "*[System/EventID=" + unlockEvent + "]";

//Return true if any lock events are found.
private bool CheckForLock()
{
    //Create the event log reader and query.
    var elQuery = new EventLogQuery(LogSource, PathType.LogName, LockQuery);
    var elReader = new System.Diagnostics.Eventing.Reader.EventLogReader(elQuery);

    //Create a list of event log records and add the found records to it.
    List<EventRecord> eventList = new List<EventRecord>();
    for (EventRecord eventInstance = elReader.ReadEvent();
         eventInstance != null; eventInstance = elReader.ReadEvent())
    {
        eventList.Add(eventInstance);
    }
    return eventList.Count > 0;
}
N.B. This will search the whole event log, so you need to put a qualifier on how far into the past you want to be looking.
If you check for lock/unlock events every ten seconds, you only want to deal with an EventRecord if it's from the same time period.
You can use the EventRecord.TimeCreated value to do something like...
if (eventInstance.TimeCreated > DateTime.Now - TimeSpan.FromSeconds(10))
{
    eventList.Add(eventInstance);
}
Is it elegant? No. Does it work? Yes.
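A somewhat tidier option is to push the time window into the XPath query itself, so the reader never returns old records. A sketch, assuming the same Security log and a ten-second window; timediff(@SystemTime) is the event's age in milliseconds:
using System.Diagnostics.Eventing.Reader;

//Only return lock events (4800) raised in the last 10 seconds.
private bool CheckForRecentLock()
{
    string query = "*[System[EventID=4800 and TimeCreated[timediff(@SystemTime) <= 10000]]]";
    var elQuery = new EventLogQuery("Security", PathType.LogName, query);
    using (var elReader = new EventLogReader(elQuery))
    {
        //Any hit means the session was locked within the window.
        return elReader.ReadEvent() != null;
    }
}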
I am developing an application which tracks directory-related changes on the available drives of a PC and saves those changes in a SQLite DB. Everything in the code below works as expected, but when I tested it by copying roughly 100 directories and their nested directories in one go, it froze the application and also froze the Windows copy process.
I'm not sure how to manage this, because during bulk file copies this code consumes most of the PC's resources. Is there any way I can optimize it, other than setting IncludeSubdirectories = false?
class DirWatcher
{
    private FileSystemWatcher dirWatcher = null;

    public void StartCapture()
    {
        string[] drives = Environment.GetLogicalDrives();
        foreach (string drive in drives)
        {
            DriveInfo driveInfo = new DriveInfo(drive);
            if (driveInfo.DriveType == DriveType.Fixed)
            {
                //Directory watcher
                dirWatcher = new FileSystemWatcher(drive);
                dirWatcher.NotifyFilter = NotifyFilters.DirectoryName;
                dirWatcher.Created += dirWatcher_Created;
                dirWatcher.Deleted += dirWatcher_Created;
                dirWatcher.Renamed += dirWatcher_Renamed;
                dirWatcher.IncludeSubdirectories = true;
                dirWatcher.EnableRaisingEvents = true;
            }
        }
    }

    void dirWatcher_Renamed(object sender, RenamedEventArgs e)
    {
        try
        {
            Task.Factory.StartNew(() =>
            {
                saveToDB("Folder",
                    e.ChangeType.ToString(),
                    e.FullPath,
                    Utility.UnixDTstamp(DateTime.Now).ToString(),
                    Environment.UserName);
            });
        }
        finally
        {
            dirWatcher.Renamed -= dirWatcher_Renamed;
        }
    }

    void dirWatcher_Created(object sender, FileSystemEventArgs e)
    {
        try
        {
            Task.Factory.StartNew(() =>
            {
                saveToDB("Folder",
                    e.ChangeType.ToString(),
                    e.FullPath,
                    Utility.UnixDTstamp(DateTime.Now).ToString(),
                    Environment.UserName);
            });
        }
        finally
        {
            dirWatcher.Created -= dirWatcher_Created;
        }
    }

    public void StopCapture()
    {
        dirWatcher.IncludeSubdirectories = false;
        dirWatcher.EnableRaisingEvents = false;
        dirWatcher.Dispose();
    }

    public void saveToDB(string DirOrFile, string action, string path, string time, string userName)
    {
        //SavetoDB code will be here.
    }
}
One thing that may be slowing down (or even defeating) your program is attached network drives. I don't believe you can successfully set a FileSystemWatcher on these, especially if the server is Linux-based, and even if you could, performance would be glacial.
I have also found FileSystemWatcher to miss events when watching folder hierarchies. So in general, I'd say you need to find or write a replacement for functionality so flawed it should be deprecated. Perhaps a hook that lets you watch the Windows message loop?
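Before going that far, though, it may be worth getting the events off the watcher's callback as quickly as possible and enlarging the internal buffer, which is what overflows during bulk copies. A sketch of that producer/consumer shape; the buffer bump and the single consumer task are the point, the names are illustrative:
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class BufferedDirWatcher
{
    // Events are queued as fast as they arrive; the slow DB work happens on
    // one consumer task so the watcher's callback never blocks.
    private readonly BlockingCollection<FileSystemEventArgs> queue =
        new BlockingCollection<FileSystemEventArgs>();

    public void Start(string drive)  // e.g. @"C:\"
    {
        var watcher = new FileSystemWatcher(drive)
        {
            NotifyFilter = NotifyFilters.DirectoryName,
            IncludeSubdirectories = true,
            InternalBufferSize = 64 * 1024   // 64 KB is the maximum; default is 8 KB
        };
        watcher.Created += (s, e) => queue.Add(e);
        watcher.Deleted += (s, e) => queue.Add(e);
        watcher.EnableRaisingEvents = true;

        Task.Run(() =>
        {
            foreach (var e in queue.GetConsumingEnumerable())
            {
                // saveToDB(...) goes here, one event at a time.
            }
        });
    }
}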
All,
I have a ~/temp dir in my web root on the web server. Let's say:
C:\inetpub\myapp\temp
and I have a method in the code-behind class of a page that deletes all the files there. Everything works well.
protected void CleanTemp() {
    // clean all the files in tmp dir
    Array.ForEach...
}
The problem is that if I create a timer in my Global.asax that tries to execute the same cleanup, the system lists the files in the temp dir, but for each of them it returns an exception saying the user does not have the proper access rights:
[Global.asax]
void Application_Start(object sender, EventArgs e) {
    PhisicalTmpFolder = Server.MapPath("~/temp");
    // create new timer
    System.Timers.Timer timScheduledTask = new System.Timers.Timer();
    timScheduledTask.Interval = 20 * 60 * 1000.0;
    timScheduledTask.Elapsed += new System.Timers.ElapsedEventHandler(timScheduledTask_Elapsed);
    timScheduledTask.Enabled = true;
}

void timScheduledTask_Elapsed(object sender, System.Timers.ElapsedEventArgs e) {
    // clean all the files in tmp dir
    Array.ForEach(
        Directory.GetFiles(PhisicalTmpFolder),
        delegate(string path) {
            try {
                File.Delete(path);
            } catch (Exception ex) {
                if (_logger != null) _logger.Error(ex.Message, ex);
            }
        }
    );
}
How can I tell the timer that it must run with the same access rights as the application?
Thank you in advance,
Gianpiero
I think the problem here is that the ASP.NET process account does not have the appropriate rights, and you have user impersonation enabled, so when the operation is initiated during a page request it runs under the requesting user's account, which does have rights.
Try granting the ASP.NET process account rights to that directory.
Can you use impersonation inside your timScheduledTask_Elapsed method to switch to an identity that has the necessary elevated permissions?
There is an excellent post here that describes how you can easily create an impersonation context in a try/finally block.
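For reference, the usual shape of that impersonation context on .NET Framework is LogonUser plus WindowsIdentity.Impersonate inside a try/finally. A sketch, with placeholder credentials:
using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Security.Principal;

class Impersonator
{
    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll")]
    static extern bool CloseHandle(IntPtr handle);

    // Deletes a file as the given user; DOMAIN/user/password are placeholders.
    public static void DeleteAs(string path)
    {
        const int LOGON32_LOGON_INTERACTIVE = 2;
        const int LOGON32_PROVIDER_DEFAULT = 0;

        IntPtr token;
        if (!LogonUser("user", "DOMAIN", "password",
                       LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
            throw new System.ComponentModel.Win32Exception();

        WindowsImpersonationContext context = null;
        try
        {
            context = new WindowsIdentity(token).Impersonate();
            File.Delete(path);   // runs under the impersonated identity
        }
        finally
        {
            if (context != null) context.Undo();
            CloseHandle(token);
        }
    }
}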
Check if the user configured in the Application Pool (usually NETWORK SERVICE) has permission to delete them.
I'm building a small web application with ASP.NET MVC 2, using db4o as a datastore.
I have added an HttpModule—as per the example here—to give the application access to the db4o database, and everything is working perfectly on my development machine under the VS2008 ASP.NET Development Server.
However, when I deploy the app to my web host and try to access it, I get a DatabaseFileLockedException at the line where the HttpModule tries to open the database file. But there should be nothing else accessing the file; indeed on first run of the app it will only just have been created when this exception gets thrown.
The web host's servers are running IIS 7 on Windows Server 2008, and the application is running under Full Trust. It is a sub-application, in case that makes any difference.
I can't work out why this error is occurring on the live server, but not locally on my development server. Can anyone help me out or suggest what I should do next?
That's a mistake in the example code. It assumes that HttpModule.Init is only called once, which isn't necessarily true; depending on how your application is configured, it can be called multiple times. To fix this, check in Init whether the instance already exists:
using System;
using System.Configuration;
using System.Web;
using Db4objects.Db4o;

namespace Db4oDoc.WebApp.Infrastructure
{
    public class Db4oProvider : IHttpModule
    {
        private const string DataBaseInstance = "db4o-database-instance";
        private const string SessionKey = "db4o-session";

        // #example: open database when the application starts
        public void Init(HttpApplication context)
        {
            if (HttpContext.Current.Application[DataBaseInstance] == null)
            {
                HttpContext.Current.Application[DataBaseInstance] = OpenDatabase();
            }
            RegisterSessionCreation(context);
        }

        private IEmbeddedObjectContainer OpenDatabase()
        {
            string relativePath = ConfigurationManager.AppSettings["DatabaseFileName"];
            string filePath = HttpContext.Current.Server.MapPath(relativePath);
            return Db4oEmbedded.OpenFile(filePath);
        }
        // #end example

        // #example: close the database when the application shuts down
        public void Dispose()
        {
            IDisposable toDispose = HttpContext.Current.Application[DataBaseInstance] as IDisposable;
            if (toDispose != null)
            {
                toDispose.Dispose();
            }
        }
        // #end example

        // #example: provide access to the database
        public static IObjectContainer Database
        {
            get { return (IObjectContainer)HttpContext.Current.Items[SessionKey]; }
        }
        // #end example

        // #example: an object container per request
        private void RegisterSessionCreation(HttpApplication httpApplication)
        {
            httpApplication.BeginRequest += OpenSession;
            httpApplication.EndRequest += CloseSession;
        }

        private void OpenSession(object sender, EventArgs e)
        {
            IEmbeddedObjectContainer container =
                (IEmbeddedObjectContainer)HttpContext.Current.Application[DataBaseInstance];
            IObjectContainer session = container.OpenSession();
            HttpContext.Current.Items[SessionKey] = session;
        }

        private void CloseSession(object sender, EventArgs e)
        {
            if (HttpContext.Current.Items[SessionKey] != null)
            {
                IObjectContainer session = (IObjectContainer)HttpContext.Current.Items[SessionKey];
                session.Dispose();
            }
        }
        // #end example
    }
}
As an alternative, you could use Application_Start in Global.asax, which is guaranteed to be called only once.
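A sketch of that alternative, reusing the same application key and "DatabaseFileName" setting as the module above:
// Global.asax: open the container once at startup instead of in the module.
void Application_Start(object sender, EventArgs e)
{
    string relativePath = System.Configuration.ConfigurationManager.AppSettings["DatabaseFileName"];
    string filePath = Server.MapPath(relativePath);
    Application["db4o-database-instance"] = Db4oEmbedded.OpenFile(filePath);
}

void Application_End(object sender, EventArgs e)
{
    var container = Application["db4o-database-instance"] as IDisposable;
    if (container != null)
    {
        container.Dispose();
    }
}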
You have another problem here.
When AppPools restart, there can be an overlap where the old AppPool is finishing requests while the new AppPool is servicing new ones.
During this time you will have two processes trying to access the same db4o file.
To get around this you can use something like the hack below.
Note the use of Db4oFactory.OpenServer instead of Db4oEmbedded.OpenFile. This also allows the use of transactions on a more fine-grained basis.
public IObjectServer OpenServer()
{
    Logger.Debug("Waiting to open db4o server.");
    var attempts = 0;
    do
    {
        try
        {
            return Db4oFactory.OpenServer(fileName, 0);
        }
        catch (DatabaseFileLockedException ex)
        {
            attempts++;
            if (attempts > 10)
            {
                throw new Exception("Couldn't open db4o server. Giving up!", ex);
            }
            Logger.Warn("Couldn't open db4o server. Trying again in 5sec.");
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    } while (true);
}
Hope this helps
Sounds like a permissions issue if it works on dev. Put a text file in the same directory and try to open it with some bare-bones file code. I bet you'll have the same issue.
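Something as bare-bones as this will tell you whether it's permissions; the path is a placeholder:
using System;
using System.IO;

class PermissionProbe
{
    static void Main()
    {
        // If this throws UnauthorizedAccessException under the web server's
        // identity, the problem is permissions rather than db4o.
        try
        {
            Console.WriteLine(File.ReadAllText(@"C:\inetpub\myapp\test.txt"));
        }
        catch (UnauthorizedAccessException ex)
        {
            Console.WriteLine("No rights: " + ex.Message);
        }
    }
}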