Can't access an XML file at random from a C# console application - c#

I have a C# console application which creates, parses and deletes multiple XML files at runtime. The application used to run fine on Windows Server 2003 with .NET 2.0.
Recently, the application's framework was upgraded to .NET 4.0 and the Windows Server OS to Windows 2008 64-bit.
Since then, the application encounters the following exception at random:
Access to the path 'D:\Content\iSDC\GDCOasis\GATE_DATA\LOG\635125008068192773\635125008074911566\SOD\AllRespId.xml' is denied.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.File.Delete(String path)
at ProcessGateFile.SOD.saveFile(String psFile, String psXMLString, Boolean isNonAscii)
The code for the creation, parsing and deletion is as follows:
saveFile(tmpPath + "\\SOD\\AllRespId.xml", "<?xml version= \"1.0\" ?><XML>" + sbldDistinctResp.ToString() + "</XML>", isChinese);
//Save list of Distinct responsibilities for User
sbldDistinctResp.Remove(0, sbldDistinctResp.Length);
xmlCase.Load(tmpPath + "\\SOD\\AllRespId.xml");
arrResps.Clear();
//Start preparing Responsibility selection criteria
RespNodes = xmlCase.SelectNodes("//row");
sRespCriteria = "";
if (RespNodes.Count > 0)
{
    foreach (XmlNode RespNode in RespNodes)
    {
        string RespName = RespNode.Attributes.GetNamedItem("RespId").Value.ToString();
        if (!arrResps.Contains(RespName))
        {
            arrResps.Add(RespName);
        }
    }
    for (int i = 0; i < arrResps.Count; i++)
    {
        sbldDistinctResp.Append("(#RespId = '" + arrResps[i].ToString() + "') or ");
    }
    sbldDistinctResp.Remove(sbldDistinctResp.Length - 4, 4);
    sRespCriteria = sbldDistinctResp.ToString();
    if (!sRespCriteria.Equals(""))
    {
        sRespCriteria = "(" + sRespCriteria + ")";
    }
}
File.Delete(tmpPath + "\\SOD\\AllRespId.xml");
I repeat, the error is happening at random, i.e. it works at times and does not at other times during the same process.
Any idea what might be causing this and how to resolve it?

Just a couple of observations:
Why are you saving and then immediately loading the file again? In fact, why do you even need to save this file - you already have all the information you need in the sbldDistinctResp variable to generate the XML you need to work with (as evidenced by the saveFile call at the start of the code) - couldn't you just make a copy of it, surround it with the same XML as you did during saveFile, and work with that?
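For instance, a minimal sketch of that idea, assuming sbldDistinctResp is the StringBuilder and xmlCase the XmlDocument from the code above:
// Build the same XML in memory and parse it directly, without touching the disk
string xmlText = "<?xml version=\"1.0\" ?><XML>" + sbldDistinctResp.ToString() + "</XML>";
xmlCase.LoadXml(xmlText); // XmlDocument.LoadXml parses a string; no file round-trip needed
// ...the rest of the processing (SelectNodes("//row"), etc.) stays the same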
"It happens randomly" is a very subjective observation :). You should profile this (run it 10,000 times in a loop for example) and record the pattern of errors. You may well be surprised that what seems random at first actually shows a clear pattern over a large number of runs. This may help you to make a connection between the problem and some other apparently unrelated event on the server; or it may confirm that it truly is random and therefore outside of your control.
If you really can't find the problem and you go with the idea of anti-virus, etc, then you could wrap the loading code in a try/catch and re-try a couple of times if you get the error. It's hacky but it would work, assuming you have accepted that the initial error is beyond your control.
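If you do go the retry route, here is a minimal sketch; the retry count and delay are arbitrary choices and the path is the one from the question:
// Retry the delete a few times in case an external process (anti-virus, indexer)
// briefly holds the file open.
string path = tmpPath + "\\SOD\\AllRespId.xml";
const int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        File.Delete(path);
        break;
    }
    catch (Exception ex)
    {
        if (attempt == maxAttempts || !(ex is IOException || ex is UnauthorizedAccessException))
            throw; // give up and surface the original error
        System.Threading.Thread.Sleep(500); // arbitrary back-off before retrying
    }
}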

Related

How to prevent some parts of a script from being run by different users at the exact same moment?

Hi everyone!
I'm working on a small project for my company in C#, and I have a script for it. Until today, my colleagues and I assumed the script would be used by one user at a time: for example, with users A and B, user B would run the script and only then could user A run it. Today the decision was made to let users run the script freely, without a predetermined order, and that has raised some concerns. Here is the relevant part of the script:
if (Directory.Exists(@"H:\" + doc_number + @"\detached") == false)
{
    Directory.CreateDirectory(@"H:\" + doc_number + @"\detached");
    File.WriteAllBytes(@"H:\" + doc_number + @"\detached\1.cms", signature_bytes);
}
else
{
    string[] files = Directory.GetFiles(@"H:\" + doc_number + @"\detached");
    int files_number = files.Length;
    File.WriteAllBytes(@"H:\" + doc_number + @"\detached\" + Convert.ToString(files_number + 1) + ".cms", signature_bytes);
}
First, the code checks whether the directory exists. If it doesn't, the directory is created and the first file is written there. Otherwise, it counts the files in the directory and creates a new file named with that count plus one.
However, I'm worried about the situation where user A and user B reach this part of the script at the same time: the condition would be true for both, and the code would not execute correctly. Or one of them might start slightly earlier on a slower PC, so that while the directory is being created the other user passes the check, counts the files, and starts creating a file first, which would also be incorrect.
How likely are these situations, and if they can happen, how can I prevent them?
Indeed, you can run into concurrency issues. And you are correct that you can't rely on the existence of a directory to decide what branch to take in your if statement because you might have operations execute in this order:
User A: Checks for directory. Does not exist.
User B: Checks for directory. Does not exist.
User A: Creates directory, enters if branch.
User B: Creates directory, enters if branch.
If the code was running in one process on one machine but in multiple threads, you could use a lock statement.
If the code was running on different processes on the same machine, you could use a cross-process coordination method such as a Mutex.
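For that same-machine case, a minimal sketch with a named, system-wide Mutex (the mutex name is an arbitrary choice; every process just has to use the same one):
using (var mutex = new System.Threading.Mutex(false, @"Global\DetachedFolderLock"))
{
    mutex.WaitOne(); // blocks until the other process releases the mutex
    try
    {
        // create the directory and write the file here
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}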
The question implies that the code runs on different computers but accesses the same file system. In this case, a lock file is a common mechanism to coordinate access to a shared resource. In this approach, you would attempt to create a file and lock it. If that file already exists and is locked by another process, you know someone else got there first. Depending on your needs, a common scenario is to wait for the lock on the file to go away then acquire the lock yourself and continue.
This strategy also works for the other two cases above, though it is less efficient.
For information about how to create a file with a lock, see
How to lock a file with C#?
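A rough sketch of that lock-file idea, using the directory and variable names from the question (the ".lock" file name and the retry delay are arbitrary):
string dir = Path.Combine(@"H:\", doc_number, "detached");
Directory.CreateDirectory(dir); // does nothing if the directory already exists

// Try to take the lock: FileShare.None keeps every other process out while the
// stream is open, so latecomers get an IOException and wait.
FileStream lockStream = null;
while (lockStream == null)
{
    try
    {
        lockStream = new FileStream(Path.Combine(dir, ".lock"),
            FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None);
    }
    catch (IOException)
    {
        System.Threading.Thread.Sleep(200); // another user holds the lock; retry
    }
}

using (lockStream)
{
    // Safe to count and write now; nobody else can get past the lock file
    int next = Directory.GetFiles(dir, "*.cms").Length + 1;
    File.WriteAllBytes(Path.Combine(dir, next + ".cms"), signature_bytes);
}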
There are some issues with your code. For example, what would happen if a file is deleted? The number of files in the directory would then differ from the number of the last file, and you could end up trying to write a file that already exists. Also, please use Path.Combine to create paths; it is safer. You also don't need to check whether the directory exists, since Directory.CreateDirectory does nothing if it already exists.
Common to all the solutions below:
string baseDir = Path.Combine(@"H:\", doc_number, "detached");
Directory.CreateDirectory(baseDir);
If you just want any number of users to create files in the same directory, here are some safer options:
Use a GUID:
var guid = Guid.NewGuid();
var file = Path.Combine(baseDir, $"{guid}.cms");
File.WriteAllBytes(file, signature_bytes);
Iterate, trying to create a new file:
bool created = false;
int index = 1;
while (!created)
{
    // Check first if the file exists, and get the next available index
    var file = Path.Combine(baseDir, $"{index}.cms");
    while (File.Exists(file))
    {
        file = Path.Combine(baseDir, $"{++index}.cms");
    }
    // Handle race conditions, in case the file was created after we checked
    try
    {
        // Try to create the file, not allowing others to access it while open
        using var stream = File.Open(file, FileMode.CreateNew, FileAccess.Write, FileShare.None);
        stream.Write(signature_bytes);
        created = true;
    }
    catch (IOException) // If the file already exists, try the next index
    {
        ++index;
    }
}

Debugging a live ASP.net website

I have a C# ASP.NET website. Locally I can run it in debug mode and step through the code to see why things aren't working, but I cannot do this when it is hosted on my live site.
What is the best way to debug what is going on with my website?
Should I add debug/output/trace statements?
If so, which ones, and how do I view their output? Can I view them in Chrome's Developer Tools somehow?
For example, right now I can register a user on my site, so I know the database connection is good, but I cannot log in a registered user and want to figure out why.
Thanks
You can add trace and debug logging to your app. To make this easier, you can use a logging framework such as:
http://nlog-project.org/
https://serilog.net/
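For example, a minimal Serilog setup might look like this (a sketch only; it assumes the Serilog and Serilog.Sinks.File NuGet packages, and the paths and message templates are illustrative):
using System;
using Serilog;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Configure the logger once at application start
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Debug()
            .WriteTo.File(Server.MapPath("~/App_Data/logs/log-.txt"),
                          rollingInterval: RollingInterval.Day)
            .CreateLogger();
    }
}

// Then, in the login code you are trying to diagnose:
// Log.Information("Login attempt for {User}", userName);
// Log.Error(ex, "Login failed for {User}", userName);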
You can also write your own logging mechanism: create a log class with a few methods in it, e.g.
// Requires: System, System.Collections.Generic, System.Diagnostics, System.IO, System.Reflection
public class Log
{
    internal static bool RecordLog(string strSource, string strMethodName, string strStatement) // add any parameters you think appropriate for your logs
    {
        List<string> lstInfo = new List<string>();
        string strProductName = FileVersionInfo.GetVersionInfo(Assembly.GetExecutingAssembly().Location).ProductName;
        string strProductVersion = FileVersionInfo.GetVersionInfo(Assembly.GetExecutingAssembly().Location).ProductVersion;
        try
        {
            // Prefer the calling assembly's details when they are available
            strProductName = FileVersionInfo.GetVersionInfo(Assembly.GetCallingAssembly().Location).ProductName;
            strProductVersion = FileVersionInfo.GetVersionInfo(Assembly.GetCallingAssembly().Location).ProductVersion;
        }
        catch
        {
            // Fall back to the executing assembly's details
        }
        try
        {
            lstInfo.Add("** Date=" + DateTime.Now.ToString("d MMM yy, H:mm:ss") + ", " + strProductName + " v" + strProductVersion);
            lstInfo.Add("Source=" + strSource + ", Method=" + strMethodName + ", Server=" + Environment.MachineName); // add more info to the list as per requirement
            lstInfo.Add(strStatement);
            bool flag = blnWriteLog("LogFilename", lstInfo);
        }
        catch (Exception objEx)
        {
            // exception handling
        }
        return true;
    }

    private static bool blnWriteLog(string strFilePrefix, List<string> lstInfo)
    {
        string strPath = strGetLogFileName(strFilePrefix);
        File.AppendAllLines(strPath, lstInfo); // append the entries to the log file
        return true;
    }

    private static string strGetLogFileName(string strFilePrefix)
    {
        // Build (and if necessary create) a per-day log file path
        string strDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Logs");
        Directory.CreateDirectory(strDir);
        return Path.Combine(strDir, strFilePrefix + "_" + DateTime.Now.ToString("yyyyMMdd") + ".log");
    }
}
and then you can call it from your code:
Log.RecordLog("Login.aspx", "btnLogin_Click", "Login failed for user X"); // values as per your code and requirement
Note: the above is just one suggested way to do it; there are many other, possibly more efficient, approaches.
You can use the built-in Microsoft IntelliTrace feature to step through code from the generated IntelliTrace logs. This link https://msdn.microsoft.com/en-us/library/dn449058.aspx gives instructions on how to achieve the following:
"If you are using Microsoft Monitoring Agent to control IntelliTrace, you also need to set up application performance monitoring on your web server. This records diagnostic events while your app runs and saves the events to an IntelliTrace log file. You can then look at the events in Visual Studio Enterprise (but not Professional or Community editions), go to the code where an event happened, look at the recorded values at that point in time, and move forwards or backwards through the code that ran. After you find and fix the problem, repeat the cycle to build, release, and monitor your release so you can resolve future potential problems earlier and faster."

Can Directory.Move fail because I am browsing the folder?

I have two folders on a remote file share and I am trying to move the first one inside the second. To do this, I wrote a CLR procedure that pretty much does the following:
if (Directory.Exists(destinationFolder))
{
    Directory.Delete(destinationFolder, true);
}
if (Directory.Exists(sourceFolder))
{
    Directory.Move(sourceFolder, destinationFolder);
}
This works as expected, but in some cases I am getting the following error:
System.IO.IOException: Cannot create a file when that file already exists.
System.IO.IOException: at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.Directory.InternalMove(String sourceDirName, String destDirName, Boolean checkHost)
I was not able to narrow it down. It seems random to me since I cannot reproduce it. I ran this code block over 50 times and could not get the same error (or any error, to tell the truth) as before.
- Do you see anything wrong with the code?
- Do you have any "guesses" about what may have caused this error?
The only thing I can think of is that even though Directory.Delete(destinationFolder, true); returns, the system does not delete the directory immediately, and thus when Directory.Move(sourceFolder, destinationFolder); runs, the destinationFolder still exists.
(29/12/2016) This is not a duplicate of Cannot create a file when that file already exists when using Directory.Move. There, the user has a 'mistake' in her code and creates (Directory.CreateDirectory(destinationdirectory);) the destination folder. I am not creating the destination folder; nevertheless, I am deleting it if it exists. I looked at the comments and the answers but none of them gave a solution to my issue.
(30/12/2016) I have tried all the suggestions from the comments and answers but still nothing strange happens. No errors and no unexpected behavior.
The only thing I can think of is that even though the Directory.Delete(destinationFolder, true); returns, the system does not delete the directory immediately and thus when Directory.Move(sourceFolder, destinationFolder); runs, the destinationFolder still exists.
I would highly doubt that this is the cause of any issue. I suppose it is not impossible, especially since this is a folder on another system (remote file share) and not local, but I would still expect any write-behind caching being done on the remote system to be completely transparent to any file system requests, not just some of them.
I think it is more likely, given the code shown in the question, that somehow you initiated two threads at nearly the exact same time and hit a race condition wherein both threads were attempting to process the move operation at the same time. You can both detect such a condition and avoid any errors by making the following changes to your code:
string _LogFile = String.Concat(@"C:\TEMP\SQLCLR_", Guid.NewGuid(), ".log");

File.AppendAllText(_LogFile, @"Starting operation for: " + sourceFolder +
                             @" --> " + destinationFolder);

if (Directory.Exists(destinationFolder))
{
    File.AppendAllText(_LogFile, @"Deleting: " + destinationFolder);
    Directory.Delete(destinationFolder, true);
}

if (Directory.Exists(sourceFolder))
{
    if (!Directory.Exists(destinationFolder))
    {
        File.AppendAllText(_LogFile, @"Moving: " + sourceFolder);
        Directory.Move(sourceFolder, destinationFolder);
    }
    else
    {
        File.AppendAllText(_LogFile, @"Oops. " + destinationFolder +
                                     @" already exists. How odd indeed!");
    }
}
This will log the operation to a text file. It will indicate exactly which steps are being taken. It will also check for the existence of the destination before calling "move", something which is not currently being checked.
If there are two competing threads, you will get 2 log files since they are named using a GUID.
If, somehow, it actually is a delayed delete issue on the remote OS, that would be indicated by a single log file containing a line for the "Deleting.." and then one for the "Moving...". OR, if the "exists" check sees the not-yet-deleted destination, then you will see a line for "Oops".
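If the logs do point at a delayed delete, one hedged workaround is to poll until the directory actually disappears before attempting the move (the timeout below is an arbitrary choice):
if (Directory.Exists(destinationFolder))
{
    Directory.Delete(destinationFolder, true);

    // Wait (up to ~5 seconds) for the remote share to actually drop the directory
    int waitedMs = 0;
    while (Directory.Exists(destinationFolder) && waitedMs < 5000)
    {
        System.Threading.Thread.Sleep(100);
        waitedMs += 100;
    }
}

if (Directory.Exists(sourceFolder) && !Directory.Exists(destinationFolder))
{
    Directory.Move(sourceFolder, destinationFolder);
}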
According to MSDN, you'll get that exception in the following cases:
An attempt was made to move a directory to a different volume.
destDirName already exists.
The sourceDirName and destDirName parameters refer to the same file or directory.
The directory or a file within it is being used by another process.
You'll have to check your situation against the cases above; the solution will be among them.
https://msdn.microsoft.com/en-us/library/system.io.directory.move(v=vs.110).aspx

How come my program takes so long to run?

I have written some short C# code to take the text output generated by the LyX software and modify it so I can post it on Math.SE.
The code assumes I have a file on my desktop called answer.txt; it reads the file, modifies the text, and then saves the output as answer2.txt on my desktop. For example, the text generated by LyX has "% Preview body" as its first line, which I remove. Another example is replacing
\textbf{some text that should be bold}
with
**some text that should be bold**
This is the code I have written:
class Program
{
    static void Main(string[] args)
    {
        using (StreamReader sr = new StreamReader(@"C:\Users\BelgiAmir\Desktop\answer.txt"))
        {
            string text = sr.ReadToEnd();
            string AfterFirstReplacement = text.Replace("\\[", "$$");
            string AfterSecondReplacement = AfterFirstReplacement.Replace("\\]", "$$");
            string RemovedPreviewHeader = AfterSecondReplacement.Replace("% Preview body", "");

            int indexOfBold;
            while ((indexOfBold = RemovedPreviewHeader.IndexOf("\\textbf")) != -1)
            {
                indexOfBold = RemovedPreviewHeader.IndexOf("\\textbf");
                int endIndex = RemovedPreviewHeader.IndexOf("}", indexOfBold);
                string boldedText = RemovedPreviewHeader.Substring(indexOfBold + 8, endIndex - indexOfBold - 8);
                RemovedPreviewHeader = RemovedPreviewHeader.Replace("\\textbf{" + boldedText + "}",
                    "**" + boldedText + "**");
            }
            File.WriteAllText(@"C:\Users\BelgiAmir\Desktop\answer2.txt", RemovedPreviewHeader);
        }
    }
}
But my code seems to be slow; at least, I don't think this is a reasonable running time.
I tested the code in the following manner: I have two empty files, answer.txt and answer2.txt, on my desktop. The .exe file is on the same hard drive (and on the same partition). I launched the .exe and used an ordinary stopwatch to measure a running time of
15.5 seconds
The files (the .exe and the .txt files) are located on an SSD, my computer has 8 GB of RAM and an i5 Haswell processor (i5-4570), and the OS is Windows 7 Professional.
This seems like a very long running time: 15.5 seconds to open an empty .txt file, when all the string operations should be very fast (they are done on an empty string), the while loop should not execute even once, and then I save an empty file.
This is code I wrote a while ago, and I get about the same running time every time (even though I have restarted the computer a couple of times since, and no "heavy" software is running on it).
Note: the code was written and compiled using VS 2013 Express. I have tried running this test with both the Debug and the Release builds of the .exe, and the running time was about the same.
Can someone suggest a reason for this long running time and how to fix it?
ADDED: When running in Debug in VS and hitting F5, the code runs in less than two seconds! I don't know why it is so much faster when I run it through VS than by opening the .exe file. Can someone explain this?
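One way to narrow this down would be to time only the processing inside the program itself, so that process start-up costs (JIT compilation, anti-virus scanning of a freshly built .exe, and so on) are excluded from the measurement. A sketch, wrapping the existing body of Main:
var sw = System.Diagnostics.Stopwatch.StartNew();

// ... the existing read / replace / write code from Main goes here ...

sw.Stop();
Console.WriteLine("Processing took {0} ms", sw.ElapsedMilliseconds);
Console.ReadKey(); // keep the console window open when launched from Explorer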

ToLower() in Global.asax Application_BeginRequest spiking CPU and bombing app

Hoping someone can shed some light on this issue we are having because I'm at a loss here.
First, a little background:
I rewrote the URL rewriting for our application and implemented it a couple of weeks ago. I did this using Application_BeginRequest() in the global.asax file, and everything was fine with our application except for a small oversight I had made. When I'm rewriting the URLs I'm simply checking for the existence of certain keywords in the path that the user requests and then rewriting the path accordingly. Pretty straightforward stuff, not reinventing the wheel here. Dry code, really. However, the text I'm checking for is all lowercase while the path may come in with different cases.
For instance:
string sPath = Request.Url.ToString();
sPath = sPath.Replace(Request.Url.Scheme + "://", "")
             .Replace(Request.Url.Host, "");

if (sPath.TrimStart('/').TrimEnd('/').Split('/')[0].Contains("reports") && sPath.TrimStart('/').TrimEnd('/').Split('/').Length > 2) {
    string[] aVariables = sPath.TrimStart('/').TrimEnd('/').Split('/');
    Context.RewritePath("/app/reports/report-logon.aspx?iLanguageID=" + aVariables[1] + "&sEventCode=" + aVariables[2]);
}
...if someone requests the page as /Reports/, the rule will not match and they will receive a 404 error as a result.
Simple to fix, though, I thought. One only needs to force the requested path string to lowercase so that anything I attempt to match against it will be looking at a lowercase version of the requested path, and match successfully in cases such as the above. So I adjusted the code to read:
string sPath = Request.Url.ToString();
sPath = sPath.Replace(Request.Url.Scheme + "://", "")
             .Replace(Request.Url.Host, "");
sPath = sPath.ToLower(); // <--- New line

if (sPath.TrimStart('/').TrimEnd('/').Split('/')[0].Contains("reports") && sPath.TrimStart('/').TrimEnd('/').Split('/').Length > 2) {
    string[] aVariables = sPath.TrimStart('/').TrimEnd('/').Split('/');
    Context.RewritePath("/app/reports/report-logon.aspx?iLanguageID=" + aVariables[1] + "&sEventCode=" + aVariables[2]);
}
With this fix in place, however, when I request any URL that matches the URL rewriting, the CPU on the server spikes to 100% and my entire application crashes. I take out .ToLower(), kill the app pool, and the application is perfectly fine again.
Am I missing something here!?!? What gives? Why does such a simple method cause my application to explode? .ToLower() works everywhere else in our application, and although I'm not using it extensively, I am using it quite successfully in other places around the application.
Not sure exactly why ToLower would cause this (the only thing I can think of is that it is modifying Request.Url, which sends ASP.NET into a frenzy), but there is an easy fix: use an ignore-case comparison rather than converting everything to lowercase.
Change:
sPath.TrimStart('/').TrimEnd('/').Split('/')[0].Contains("reports")
to:
sPath.TrimStart('/').TrimEnd('/').Split('/')[0].IndexOf("reports", StringComparison.InvariantCultureIgnoreCase) != -1
and remove your ToLower logic.
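A slightly tidier variant of the same idea, splitting the path only once (sPath and Context.RewritePath are from the question; segments is just a local name for the split result, and OrdinalIgnoreCase or InvariantCultureIgnoreCase both work here):
string[] segments = sPath.Trim('/').Split('/');

if (segments.Length > 2 &&
    segments[0].IndexOf("reports", StringComparison.OrdinalIgnoreCase) >= 0)
{
    Context.RewritePath("/app/reports/report-logon.aspx?iLanguageID=" + segments[1] +
                        "&sEventCode=" + segments[2]);
}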
Though I can't say why .ToLower() is bringing your server down, why don't you try it with IndexOf?
if (sPath.TrimStart('/').TrimEnd('/').Split('/')[0].IndexOf("reports", StringComparison.InvariantCultureIgnoreCase) >= 0 && sPath.TrimStart('/').TrimEnd('/').Split('/').Length > 2)
{
    string[] aVariables = sPath.TrimStart('/').TrimEnd('/').Split('/');
    Context.RewritePath("/app/reports/report-logon.aspx?iLanguageID=" + aVariables[1] + "&sEventCode=" + aVariables[2]);
}
