Creating StreamWriter instances crashes the application with RPC error - c#

I've stumbled upon an extremely weird error. When using FileStream in the first using block, the application iterates through the loop and prints out "Done", but then exits with error code 5. Try/catch doesn't help either.
This seems to be an extremely fragile error state, because if I fiddle with the file names (for example, C:\TFS\file1.xml.No.xml -> C:\TFS\file1.xml.N.xml) then it works fine.
If I use var tw = File.CreateText in the first using, the application exits with code 1073741845. I've managed to reduce the problem significantly, to just the few lines of code below as a reproducible example.
Perhaps someone can explain why in the world this behaves so weirdly? I'm also interested in why I am not able to recover from this error state; I've tried [HandleProcessCorruptedStateExceptions] and [SecurityCritical] with no effect.
static void Main(string[] args)
{
    var ds = new DataSet();
    for (int i = 0; i <= 2; i++)
    {
        using (var fs = new FileStream(@"C:\TFS\file1.xml.No.xml", FileMode.Create))
        {
        }
        using (var tw = File.CreateText(@"C:\TFS\file1.xml"))
        {
            ds.WriteXml(tw);
        }
        Console.WriteLine($"Pass {i} done.");
    }
    Console.WriteLine("Done");
    Console.ReadLine();
}
Using .NET Framework 4.7 Console Application project.
EDIT:
If I put Thread.Sleep(2000) in each using statement, I then encounter the error after the 2nd pass: it prints Pass 1 done. and Pass 2 done. before exiting with code 5 (0x5), so the frequency of writing does not seem to be responsible for this behaviour.
Upon further tinkering with this small sample, I can reproduce the issue without using DataSet at all, just by creating StreamWriter instances. The example below should produce the following output before exiting abruptly:
TW1 created.
TW1 flushed.
TW1 created.
TW2 flushed.
Pass 0 done.
static void Main(string[] args)
{
    for (int i = 0; i <= 2; i++)
    {
        var tw1 = File.CreateText(@"C:\TFS\file1.xml.No.xml");
        Console.WriteLine("TW1 created.");
        tw1.Flush();
        Console.WriteLine("TW1 flushed.");
        Thread.Sleep(2000);
        var tw2 = File.CreateText(@"C:\TFS\file1.xml");
        Console.WriteLine("TW1 created.");
        tw2.Flush();
        Console.WriteLine("TW2 flushed.");
        Thread.Sleep(2000);
        Console.WriteLine($"Pass {i} done.");
    }
    Console.WriteLine("Done");
    Console.ReadLine();
}
EDIT 2:
So it appears for us this issue was caused by Kaspersky Endpoint Security for Windows v11.

The process exit code does not mean that much; you'd favor seeing the debugger stop to tell you about an unhandled exception. But sure, this isn't healthy. This is an anti-malware-induced problem; they don't like XML files. This is often a problem on a programmer's machine: they also don't like executable files appearing from seemingly nowhere, created by a process that uses process interop like the IDE does to run MSBuild. Strong malware signals. So the first thing you want to do is temporarily turn it off to see if that solves the problem.
It surely will. The next thing you'd do is switch to something a bit less aggressive. The anti-malware solution provided by the OS never gets in the way like that. If you use Avast or anything else that has a "deep scan" mode, then uninstall it asap.
And worry a bit about what your users might use: getting an IOException from FileStream is quite normal, so a try/catch is pretty much required. In general you don't want to overwrite a file or delete a directory that you created milliseconds ago; luckily it is never a sensible thing to do.
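If anti-malware interference is suspected, one defensive pattern is to wrap the file creation in a retry loop that tolerates a transient sharing violation. This is only a sketch under that assumption; the helper name and retry defaults below are made up for illustration, and a retry will not help against the hard process kill (exit code 5) described in the question, only against an ordinary IOException:

```csharp
using System;
using System.IO;
using System.Threading;

static class FileRetry
{
    // Hypothetical helper: tries to create the file, retrying on IOException
    // in case a scanner briefly holds a handle on it.
    public static FileStream CreateWithRetry(string path, int attempts = 3, int delayMs = 250)
    {
        for (int i = 0; i < attempts - 1; i++)
        {
            try
            {
                return new FileStream(path, FileMode.Create);
            }
            catch (IOException)
            {
                Thread.Sleep(delayMs); // give the other process time to release the file
            }
        }
        return new FileStream(path, FileMode.Create); // last attempt: let it throw
    }
}
```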

System.IO.Compression.ZipArchive keeps file locked after dispose?

I have a class that takes data from several sources and writes them to a ZIP file. I've benchmarked the class to check whether using CompressionLevel.Optimal would be much slower than CompressionLevel.Fastest. But the benchmark throws an exception on different iterations and with different CompressionLevel values each time I run it.
I started removing the methods that add the file-content step by step until I ended up with the code below (inside the for-loop) which does basically nothing besides creating an empty zip-file and deleting it.
Simplified code:
var o = @"e:\test.zip";
var result = new FileInfo(o);
for (var i = 0; i < 1_000_000; i++)
{
    // Alternate approach
    // using (var archive = ZipFile.Open(o, ZipArchiveMode.Create))
    using (var archive = new ZipArchive(result.OpenWrite(), ZipArchiveMode.Create, false, Encoding.UTF8))
    {
    }
    result.Delete();
}
The loop runs about 100 to 15k iterations on my PC and then throws an IOException when trying to delete the file saying that the file (result) is locked.
So... did I miss something about how to use System.IO.Compression.ZipArchive? There is no Close method for ZipArchive, and using should dispose/close the archive... I've tried different .NET versions: 4.6, 4.6.1, 4.7 and 4.7.2.
EDIT 1:
The result.Delete() is not part of the code that is benchmarked
EDIT 2:
Also tried playing around with Thread.Sleep(5/10/20) after the using block (hence the result.Delete(), to check whether the lock persists), but up to 20 ms the file is still locked at some point. I didn't try values higher than 20 ms.
EDIT 3:
Can't reproduce the problem at home. Tried a dozen times at work and the loop never hit 20k iterations. Tried once here and it completed.
EDIT 4:
jdweng (see comments) was right. Thanks! It's somehow related to my "e:" partition on a local HDD. The same code runs fine on my "c:" partition on a local SSD and also on a network share.
In my experience, files may not be consistently unlocked when the dispose method for the stream returns. My best guess is that this is due to the file system performing some operation asynchronously. The best solution I have found is to retry the delete operation multiple times, i.e. something like this:
public static void DeleteRetrying(this FileInfo self, int delayMs = 100, int numberOfAttempts = 3)
{
    for (int i = 0; i < numberOfAttempts - 1; i++)
    {
        try
        {
            self.Delete();
            return; // deleted successfully, no need for further attempts
        }
        catch (IOException)
        {
            // Consider making the method async and
            // replacing this with Task.Delay
            Thread.Sleep(delayMs);
        }
    }
    // Final attempt, let the exception propagate
    self.Delete();
}
This is not an ideal solution, and I would love it if someone could provide a better one. But it might be good enough for testing, where the impact of a non-deleted file is manageable.
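Following the hint in the catch block above, an async variant of the same retry could look like this. This is a sketch of my own, not something tested against the E:-drive issue from the question:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class FileInfoExtensions
{
    public static async Task DeleteRetryingAsync(this FileInfo self, int delayMs = 100, int numberOfAttempts = 3)
    {
        for (int i = 0; i < numberOfAttempts - 1; i++)
        {
            try
            {
                self.Delete();
                return; // success, stop retrying
            }
            catch (IOException)
            {
                await Task.Delay(delayMs); // yield instead of blocking the thread
            }
        }
        self.Delete(); // final attempt, let the exception propagate
    }
}
```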

C# Console application works when started from Visual Studio, but after publishing it only runs for 3 cycles. Is this normal behaviour?

I made a C# console application (.NET CORE 5.0) to check every minute for changes in a MySQL database and send e-mails with the changes.
If I run this application directly from visual studio 2019, it works fine without any problems.
If I run it after I publish it, it only does 3 cycles and the console window stays open. No errors or anything else.
This first screenshot is from running via Visual Studio 2019.
This screenshot is from running directly from the desktop after publishing.
static void Main(string[] args)
{
    TimerCallback callback = new TimerCallback(DoStuff);
    Timer stateTimer = new Timer(callback, null, 0, 1000);
    for (; ; )
    {
        Thread.Sleep(100);
    }
}
static public void DoStuff(Object stateInfo)
{
    DataTable DtblEmployee = DatabaseClass.GetEmployeeList();
    foreach (DataRow row in DtblEmployee.Rows)
    {
        foreach (var item in row.ItemArray)
        {
            string Str = RandomStringGenerator.GetRandomAlphanumericString(8);
            DatabaseClass.EmployeeUpdate(item.ToString(), Str);
            EmailClass.SendEmail(item.ToString(), Str);
        }
    }
    Console.WriteLine("Last check was @ {0}", DateTime.Now.ToString("h:mm:ss"));
    Console.WriteLine("{0} e-mail(s) were sent.", DtblEmployee.Rows.Count);
}
The program is supposed to run forever.
Would love to hear from you guys, if more information is needed please ask.
EDIT: Added extra code to show Database
public static void EmployeeUpdate(string Emailadress, string Password)
{
    string connectionstring = "server=1.1.1.1;user id=User;password=Pass;port=3306;persistsecurityinfo=True;database=Test";
    connection = new MySqlConnection(connectionstring);
    try
    {
        connection.Open();
        var cmd = new MySqlCommand("UPDATE Users SET Password=@param_val_1, GeneratePassword=@param_val_3 where Username=@param_val_2", connection);
        cmd.Parameters.AddWithValue("@param_val_1", Password);
        cmd.Parameters.AddWithValue("@param_val_2", Emailadress);
        cmd.Parameters.AddWithValue("@param_val_3", 0);
        cmd.ExecuteScalar();
        cmd.Dispose();
    }
    catch (MySqlException ex)
    {
        switch (ex.Number)
        {
            case 0:
                Console.WriteLine("Cannot connect to server. Contact administrator");
                break;
            case 1045:
                Console.WriteLine("Invalid username/password, please try again");
                break;
        }
    }
    finally
    {
        connection.Close();
    }
}
I strongly expect that the problem is that your Timer is being garbage collected and finalized, and that's stopping the callback from being executed.
When you run your code from Visual Studio in the debugger, the JIT is less aggressive about garbage collection, which is why it's working in that scenario.
The smallest change to fix that would be to add this line at the end of your Main method:
GC.KeepAlive(stateTimer);
Alternatively, as per quetzalcoatl's answer, you could use a using statement for the timer. Either option will have the desired effect of keeping the timer alive for the duration of the method.
An alternative I think you should explore would be to not use a timer at all, instead just loop within the Main method and call your DoStuff method directly. You'd still call Sleep within that loop, which would handle the timing aspect. Obviously that will affect the precise timing of how the code runs, but it's likely to end up being simpler to understand and simpler to debug.
Additionally, I'd suggest being a lot more intentional about your exception handling. Work out whether you want the code to stop looping if any one iteration throws an exception, and make that explicit in the code.
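The timer-free loop suggested above could be sketched like this, factored into a small helper so the loop body and exit condition are explicit. RunLoop and its parameters are my naming, not from the question; the real program would pass DoStuff and a one-minute interval:

```csharp
using System;
using System.Threading;

static class Poller
{
    // Runs `work` repeatedly until keepRunning() returns false,
    // sleeping `interval` between iterations. Exceptions from one
    // iteration are logged and the loop continues -- an explicit choice.
    public static void RunLoop(Action work, TimeSpan interval, Func<bool> keepRunning)
    {
        while (keepRunning())
        {
            try
            {
                work(); // e.g. () => DoStuff(null) from the question
            }
            catch (Exception ex)
            {
                Console.WriteLine("Iteration failed: {0}", ex.Message);
            }
            Thread.Sleep(interval); // replaces the Timer's period
        }
    }
}
```

For the program in the question, `Poller.RunLoop(() => DoStuff(null), TimeSpan.FromMinutes(1), () => true)` would run forever without any Timer object to be collected.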
I 100% agree with JonSkeet about the reason.
It's the GC that cleans up the stateTimer variable. Right after that line, the variable becomes unused, the compiler is free to get rid of it from the stack, and then the GC is free to get rid of the timer.
When you run your application in different environments, the GC may use a different set of rules. A debugging session, a console app, an IIS application, a module for SQL Server, etc. all have different rules for when and how aggressively the GC runs. Under a debugging session it also MAY clean up this variable, but it MAY just as well do it hours or days later, perhaps to give you more time to inspect things. In a free-running console app, it simply occurred sooner.
The GC also has hard rules it must always abide by: if a variable is used, it cannot be purged.
JonSkeet suggested pinning the stateTimer, I disagree. It's a last-resort option.
Much better: just use a using statement, as the Timer is IDisposable:
TimerCallback callback = new TimerCallback(DoStuff);
using (Timer stateTimer = new Timer(callback, null, 0, 1000))
    for (; ; )
    {
        Thread.Sleep(100);
    }
The variable is still unused; you can even get rid of it and write
using (new Timer(callback, null, 0, 1000))
but even now, the using() statement will remember that Timer object and prevent the GC from cleaning it up too soon. (It has to remember the object to be able to call Dispose() when the loop ends.)

Check If File Is In Use By Other Instances of Executable Run

Before I go into too much detail: my program is written in Visual Studio 2010 using C# .NET 4.0.
I wrote a program that generates a separate log file for each run. The log file is named after the time, accurate to the millisecond (for example, 20130726103042375.log). The program also generates a master log file for the day if it does not already exist (for example, 20130726_Master.log).
At the end of each run, I want to append the log file to the master log file. Is there a way to check if I can append successfully? And retry after sleeping for a second or so?
Basically, I have 1 executable, and multiple users (let's say there are 5 users).
All 5 users will access and run this executable at the same time. Since it's nearly impossible for all users to start at the exact same time (down to the millisecond), there is no problem generating the individual log files.
However, the issue comes in when I attempt to merge those log files into the master log file. Though it is unlikely, I think the program will crash if multiple users append to the same master log file at once.
The method I use is
File.AppendAllText(masterLogFile, File.ReadAllText(individualLogFile));
I have looked into the lock statement, but I think it doesn't work in my case, as there are multiple instances running instead of multiple threads in one instance.
Another way I looked into is try/catch, something like this:
try
{
    stream = file.Open(FileMode.Open, FileAccess.ReadWrite, FileShare.None);
}
catch { }
But I don't think this solves the problem, because the status of the masterLogFile can change in that brief millisecond.
So my overall question is: is there a way to append to masterLogFile if it's not in use, and retry after a short timeout if it is? Or is there an alternative way to create the masterLogFile?
Thank you in advance, and sorry for the long message. I want to make sure I get my message across and explain what I've tried or looked into, so we are not wasting anyone's time.
Please let me know if there's anymore information I can provide to help you help me.
Your try/catch is the way to do things. If the call to File.Open succeeds, then you can write to the file. The idea is to keep the file open. I would suggest something like:
bool openSuccessful = false;
while (!openSuccessful)
{
    try
    {
        using (var writer = new StreamWriter(masterlog, true)) // append
        {
            // successfully opened file
            openSuccessful = true;
            try
            {
                foreach (var line in File.ReadLines(individualLogFile))
                {
                    writer.WriteLine(line);
                }
            }
            catch (IOException) // an exception that occurs while writing
            {
                // something unexpected happened.
                // handle the error and exit the loop.
                break;
            }
        }
    }
    catch (IOException) // an exception that occurs when trying to open the file
    {
        // couldn't open the file.
        // If the exception is because it's opened in another process,
        // then delay and retry.
        // Otherwise exit.
        Thread.Sleep(1000);
    }
}
if (!openSuccessful)
{
    // notify of error
}
So if you fail to open the file, you sleep and try again.
See my blog post, File.Exists is only a snapshot, for a little more detail.
I would do something along the lines of this, as I think it incurs the least overhead. Try/catch is going to generate a stack trace (which could take a whole second) if an exception is thrown. There has to be a better way to do this atomically; if I find one I'll post it.
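For the cross-process case, a named Mutex is one alternative to a retry loop: every instance of the executable blocks on the same OS-level lock while appending, so the appends serialize instead of colliding. This is a sketch of the idea, not the answerer's code; the mutex name is arbitrary, and production code should also handle AbandonedMutexException in case another instance dies while holding the lock:

```csharp
using System.IO;
using System.Threading;

static class MasterLog
{
    public static void Append(string masterLogFile, string individualLogFile)
    {
        // A named mutex is visible to every process on the machine,
        // so concurrent instances of the executable serialize here.
        using (var mutex = new Mutex(false, "MyApp_MasterLog_Mutex"))
        {
            mutex.WaitOne();
            try
            {
                File.AppendAllText(masterLogFile, File.ReadAllText(individualLogFile));
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}
```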

What is the difference between running in VS 2010 and running a built EXE?

As a school project we've created a C# XNA 4.0 Game that runs perfectly when run (in either Release or Debug) from Visual Studio 2010 itself. However, when it's built, the game inexplicably crashes at a certain point.
The responsible code portion seems to be this:
while( true )
{
    if( Client.readInfo )
    {
        t.Stop();
        t.Dispose();
        // Save last good IP to file
        string toWrite = IPinput.filledIn;
        using( StreamWriter file = new StreamWriter( "multiplayer.dat", false ) )
        {
            file.WriteLine( toWrite );
        }
        ExitScreen();
        using( LobbyScreen screen = new LobbyScreen( c ) )
        {
            screenManager.AddScreen( screen );
        }
        break;
    }
    else if( itTookToLong )
    {
        Client.killMe = true;
        IPinput.Text = "Server IP: ";
        IPinput.filledIn = "";
        break;
    }
}
This waits until the started Client thread sets the public static field readInfo to true (which happens, because the server accepts the client) or a timer runs out after 10 seconds. Both work perfectly when running through VS, but the game just stops responding when run as a built EXE.
What could this be?!
Edit: I seem to have found the fix already, FINALLY. Adding Thread.Sleep(1); at the bottom of the above while(true) loop seems to fix all problems! I don't know why this differs between running an EXE and running from Visual Studio, but hey, it works.
Edit2: This site explains my whole problem: http://igoro.com/archive/volatile-keyword-in-c-memory-model-explained/
Thread.Sleep just accidentally fixed everything ;)
Hard to tell from the code you have posted, but I believe you need to make the Client.readInfo field volatile. The reason Thread.Sleep fixed your problem is that it introduces a memory barrier as a side effect.
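A minimal sketch of that suggestion, with the class and field names adapted from the question (the rest of the Client class is omitted):

```csharp
using System.Threading;

class Client
{
    // volatile forbids the JIT from caching this field in a register,
    // so a write from the network thread becomes visible to the game's
    // busy-wait loop without relying on Thread.Sleep's accidental barrier.
    public static volatile bool readInfo;
}
```

With the field volatile, a loop like `while (!Client.readInfo) { }` is guaranteed to observe the other thread's write, even in a Release build without a debugger attached.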
I suspect that there is a problem with permissions to write the file to disk. The .exe has different permissions when it is running than VS does.
Here is where I would start: add some try-catch blocks in your code so you can figure out where the exception is occurring. While troubleshooting, display the exception details on the screen.
As a temporary debugging strategy, you could also try this: add some kind of logging and record every step along the way here. Right after the line if( Client.readInfo ), test that you have arrived at that line. Right after the line string toWrite = IPinput.filledIn; look at the contents of the string toWrite, and so on.
If you'd rather, just for debugging purposes, you could throw those messages to yourself on the screen. I'm not familiar with XNA, but in any old web app you can usually do this with Response.Write.
Once you narrow down where the exact problem is, you can test further for what it is. And when you've fixed it, of course, you remove all this code.

New FileStream already closed/disposed?

I open a FileStream with FileMode.Open and FileAccess.Read. Shortly after that, I call a function to handle the file's contents. I use Invoke to make the call because it comes from a thread and the function has to put the results on a Form. The function accepts any kind of Stream (I call it with MemoryStreams too without a problem) and uses XmlTextReader to read the XML in the FileStream, but on rare occasions, for unknown reasons, even the first Read() throws an ObjectDisposedException, and the stream's CanRead property returns false, as if the stream were already closed.
In the thread, the FileStream is a local using variable, so I don't think other threads should be able to close it, and I don't close it until the Invoke has returned. No other exceptions are thrown, so the file is definitely there (since there is no FileNotFoundException) and should be accessible (since there is no UnauthorizedAccessException or IOException).
How could my FileStream sometimes look closed just after being opened?
(It might matter that I'm running my code on a Windows CE 5 device with Compact Framework 3.5 and I wasn't able to reproduce the same behaviour on my desktop PC with XP yet.)
EDIT:
I know that this Invoke is ugly, but that alone can't be a reason to fail, can it? (And in most cases it doesn't fail at all.)
// the code in the thread
// ...
using (FileStream fs = File.Open(assemblyPath + "\\white.xml", FileMode.Open, FileAccess.Read))
{
    mainForm.Instance.Invoke(new DataHandler(mainForm.Instance.handleData), new object[] { fs });
}
// ...
// and the handler
public void handleData(Stream stream)
{
    infoPanel.SuspendLayout();
    try
    {
        using (XmlTextReader xml = new XmlTextReader(stream))
        {
            // it doesn't matter what is here
        }
    }
    catch { }
}
There's one reason I can think of: the worker thread got aborted. This will run the finally block generated by the using statement and close the file. How it could be aborted is a secondary question. Is the thread's IsBackground property set to true? Is the program bombing on an unhandled exception elsewhere and shutting down? Just guesses of course.
Sure, this is expected behavior. You call Invoke, which marshals the call to another thread. The calling thread then continues to run and the using block exits, calling Dispose on the stream. This Dispose is happening before you are done (and maybe before you start) using the stream in the UI thread. The exact timing of these actions is going to depend on processor load and some other factors, but it's certainly unsafe.
Either don't put the stream in a using block or better yet have the thread do the read and pass the results to the UI via Invoke.
EDIT
As Hans points out in the comment, the above explanation applies to a BeginInvoke call, which underneath calls PostMessage. Invoke, on the other hand, uses SendMessage. Both probably use some WM_COPYDATA shenanigans (I've not looked to see) to marshal the data.
The Invoke call should be executing the entire handler you have posted, though the behavior you see indicates otherwise. From the code you posted there's no real way for us to determine what is closing the stream.
I would still refactor what you've done here because right now you're tying up both the UI and worker threads with the reader operation. I'd do the read work in the worker thread and then pass the results to the UI. This would decrease the odds of the reader work causing UI choppiness and would eliminate the possibility of the stream getting closed while you're reading from it.
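That refactoring could be sketched like this: do all stream work on the worker thread and marshal only the finished result to the UI, so the stream is opened, read and disposed on a single thread. The helper name is mine, and handleData would then take a string instead of a Stream:

```csharp
using System.IO;

static class XmlLoader
{
    // Reads the whole file on the calling (worker) thread; no stream
    // object ever crosses a thread boundary.
    public static string ReadAllXml(string path)
    {
        using (var fs = File.Open(path, FileMode.Open, FileAccess.Read))
        using (var reader = new StreamReader(fs))
        {
            return reader.ReadToEnd();
        }
    }
}

// On the worker thread, something like (adapting the question's code):
// string xml = XmlLoader.ReadAllXml(assemblyPath + "\\white.xml");
// mainForm.Instance.Invoke(new Action<string>(mainForm.Instance.handleData), xml);
```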
I saw the same issue on some embedded board (ARM) I'm working on. Then I created a little test.
The following code (not involving any Threads!) crashes:
using (var w = new StreamWriter(File.Create("file.txt"), System.Text.Encoding.UTF8))
{
    for (int i = 0; i < 1000; i++)
    {
        w.WriteLine("Test");
    }
}
This code however does not crash:
using (var w = File.CreateText("file.txt"))
{
    for (int i = 0; i < 1000; i++)
    {
        w.WriteLine("Test");
    }
}
So, my only guess is that the underlying native code treats text files differently from files opened with File.Create(). Both files are written in UTF-8, so there is no difference in encoding.
BTW: sorry I'm a year late with the answer, but I hope it'll help somebody.
