Redundant set to null / setting to null in called function - c#

I created a function to make sure an object is disposed of properly. This function includes setting the object to null. I am wondering if the line that sets the object to null is useless (and hence I should remove it and instead add a line that sets the object to null in the calling function). My example uses a StreamWriter, but (I think) any other disposable object can take its place. I know I can trace the execution of the program and see what is happening; however, I would like to know more about the inner mechanisms (garbage collection?), whether this works for any object, etc.
//Called function:
public static void DiscardFile(System.IO.StreamWriter file)
{
    file.Flush();
    file.Close();
    file.Dispose();
    //Does this work?
    //When the function returns, is the file object really set to null?
    file = null;
}
//Calling function:
public static void WriteStringToFile(string s, string fileName)
{
    System.IO.StreamWriter file = new System.IO.StreamWriter(fileName);
    file.Write(s);
    DiscardFile(file);
    //Is this redundant?
    //Or is the line in the called function the redundant line?
    file = null;
}
Thanks!
I have a loop that writes a thousand strings to files within 30 seconds. (The program will have written 400K+ strings when it completes its execution.) I see that the loop waits (every so often) at the file.Write(s) line, and that the memory footprint of the app increases. That is a topic for another question, but I wanted to know the behavior of the above code.
Thanks!

Sorry, but your implementation is dangerous
public static void WriteStringToFile(string s, string fileName)
{
    System.IO.StreamWriter file = new System.IO.StreamWriter(fileName);
    file.Write(s); // <- the danger is here
    DiscardFile(file);
    //Is this redundant? Yes, it's redundant
    //Or is the line in the called function the redundant line?
    file = null;
}
Suppose an exception is thrown on file.Write(s); it means that DiscardFile(file); will never be executed, and you have a resource leak (HFILE - an open file handle).
Why not stick to the standard using pattern:
public static void WriteStringToFile(string s, string fileName)
{
    // Let the system release all the resources acquired
    using (var file = new System.IO.StreamWriter(fileName))
    {
        file.Write(s);
    } // <- here the resources will be released
}
With C# 8.0 you can get rid of the pesky {...} and let the system release resources on leaving the method's scope (see https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-8#using-declarations):
public static void WriteStringToFile(string s, string fileName)
{
    // Let the system release all the resources acquired
    using var file = new System.IO.StreamWriter(fileName);
    file.Write(s);
} // <- here the resources will be released
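As for why the file = null; inside DiscardFile has no effect in the caller: C# passes reference-type arguments by value, i.e. the parameter is a copy of the caller's reference. A minimal sketch (names are illustrative, not from the original code):

```csharp
using System;

class NullParamDemo
{
    // Assigning null to the parameter only clears the local copy
    // of the reference; the caller's variable is untouched.
    static void SetToNull(object obj)
    {
        obj = null;
    }

    static void Main()
    {
        object caller = new object();
        SetToNull(caller);
        Console.WriteLine(caller == null); // prints "False"
    }
}
```

Setting locals to null also rarely helps the garbage collector: the JIT tracks the last use of each variable, so the object becomes collectible without any explicit null assignment.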

Related

If I use a static method to create an object and add it to a disposable instance of another object, is the object created in the static method static?

I am seeing some strange, buggy behavior in some .NET code, and I'm wondering if it has to do with how I've set things up.
I have a non-static class with a method. In this method, I create a new instance of a disposable class. I then use a static helper method to add something to that disposable instance.
Sometimes the code throws an exception at a certain point (by timing out). I don't know why, and this question isn't about why that happens. What happens next is the problem: if an exception was thrown, then the next time the code runs, which should create a new instance of my main class and a new instance of the disposable object within it, a different bit of code involving the disposable object also causes a timeout exception, much earlier in the process.
Here is a simplified example of what's happening:
public sealed class MyClass : OtherClass
{
    protected override void MyMethod(ContextInfo context)
    {
        using (DisposableClass disposableInstance = new DisposableClass(context.URL))
        {
            Helper.ConditionallyAddThingy(disposableInstance, context.thingInfo, context.URL);
            foreach (var foo in fooCollection)
            {
                // 1. initially I can make as many of these calls as I want,
                //    and they all finish successfully. if there were no
                //    timeout issues in section 2 below, the next time this
                //    runs, supposedly in a new instance of MyClass, and with
                //    a new instance of "disposableInstance", it again runs perfectly fine,
                //    no matter how many "foo"s there are.
                // 2. see below
                // 3. _if_ I had a timeout exception previously in section 2 below,
                //    the next time this runs, supposedly in a _new_ instance of MyClass,
                //    and with a _new_ instance of "disposableInstance",
                //    I get a timeout exception _here_ on the first "foo", and don't even get to section 2.

                // make a call that does _not_ have to do with file streams
                SomeResponse response = disposableInstance.AskForSomething(foo);
                disposableInstance.ExecuteQuery();
            }
            foreach (var fileInfo in fileInfoCollection)
            {
                // 2. if there is only one file, the call succeeds, however
                //    if there is more than one file, the request for the
                //    second file causes a System.Net.WebException: The operation has timed out
                var fileUrl = fileInfo["URL"];
                FileIshThing fileIsh = File.OpenBinaryDirect(disposableInstance, fileUrl);
                disposableInstance.ExecuteQuery();
            }
        }
    }
}
internal static class Helper
{
    internal static void ConditionallyAddThingy(DisposableClass disposableInst, string thingInfo, string contextUrl)
    {
        if (!string.IsNullOrWhiteSpace(thingInfo))
        {
            Thing thing = new Thing(thingInfo);
            Uri uri = new Uri(contextUrl);
            thing.Uri = uri;
            ThingCollection collection = new ThingCollection();
            collection.Add(thing);
            disposableInst.ExecuteWebRequestEventHandler += delegate (object sender, EventArgs eventArgs)
            {
                eventArgs.ThingCollection = collection;
            };
        }
    }
}
Is there something about creating the Thing or ThingCollection, or adding the event receiver in the static method that makes those things static themselves, so that the next time through, I'm not really creating new ones but reusing the ones from the last time the method executed?
I don't understand how an error condition in a previous (and disposed of) instance of an object can affect a new instance of the object that may not necessarily meet the conditions for causing the error.
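For what it's worth, nothing about a static helper method makes the objects it creates static: locals in a static method are ordinary heap instances, created anew on every call. A minimal check (names are illustrative):

```csharp
using System;

static class StaticMethodDemo
{
    // A static method's locals are ordinary objects:
    // each call returns a fresh instance.
    public static object MakeThing()
    {
        return new object();
    }

    static void Main()
    {
        object first = MakeThing();
        object second = MakeThing();
        Console.WriteLine(ReferenceEquals(first, second)); // prints "False"
    }
}
```

So the carried-over state, if any, more likely lives somewhere shared (a static field elsewhere, a connection pool, or the server side), not in the static helper itself.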

Lock text file during read and write or alternative

I have an application where I need to create files with a unique and sequential number as part of the file name. My first thought (since this application does not have any other data storage) was to use a text file that would contain a number; I would increment this number so my application would always create a file with a unique id.
Then I realized that when more than one user submits to this application at the same time, one process might read the txt file before it has been written by the previous process. So I am looking for a way to read and write to a file in the same 'process' without unlocking the file in between (with try/catch, so I can tell when it's being used by another process and then wait and retry the read a few times).
If what I am describing sounds like a bad option, could you please give me an alternative? How would you keep track of unique identification numbers for an application like mine?
Thanks.
If it's a single application then you can store the current number in your application settings. Load that number at startup. Then with each request you can safely increment it and use the result. Save the sequential number when the program shuts down. For example:
private int _fileNumber;

// at application startup
_fileNumber = LoadFileNumberFromSettings();

// to increment
public int GetNextFile()
{
    return Interlocked.Increment(ref _fileNumber);
}

// at application shutdown
SaveFileNumberToSettings(_fileNumber);
Or, you might want to make sure that the file number is saved whenever it's incremented. If so, change your GetNextFile method:
private readonly object _fileLock = new object();

public int GetNextFile()
{
    lock (_fileLock)
    {
        int result = ++_fileNumber;
        SaveFileNumberToSettings(_fileNumber);
        return result;
    }
}
Note also that it might be reasonable to use the registry for this, rather than a file.
Edit: As Alireza pointed out in the comments, this is not a valid way to lock between multiple applications.
You can always lock the access to the file (so you won't need to rely on exceptions).
e.g:
// Create a lock in your class
private static object LockObject = new object();

// and then lock on this object when you access the file like this:
lock (LockObject)
{
    ... access to the file
}
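As a sketch of what that locked file access might look like for the sequential-number scenario (the file name and one-number-per-file layout are assumptions, and, per the edit above, this only guards threads within a single process):

```csharp
using System;
using System.IO;

static class SequentialNumber
{
    private static readonly object LockObject = new object();
    private const string CounterFile = "counter.txt"; // hypothetical path

    public static int GetNext()
    {
        // Read, increment and rewrite the counter under one lock so no
        // other thread can interleave between the read and the write.
        lock (LockObject)
        {
            int current = 0;
            if (File.Exists(CounterFile))
                current = int.Parse(File.ReadAllText(CounterFile));
            current++;
            File.WriteAllText(CounterFile, current.ToString());
            return current;
        }
    }
}
```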
Edit2: It seems that you can use Mutex to perform inter-application signalling.
private static System.Threading.Mutex m = new System.Threading.Mutex(false, "LockMutex");

void AccessMethod()
{
    try
    {
        m.WaitOne();
        // Access the file
    }
    finally
    {
        m.ReleaseMutex();
    }
}
But it's not the best pattern to generate unique ids. Maybe a sequence in a database would be better? If you don't have a database, you can use Guids or a local database (even Access would be better, I think).
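For the Guid alternative, a unique file name needs no shared state at all (the "export_" prefix is just an illustration):

```csharp
using System;

class GuidFileNameDemo
{
    static void Main()
    {
        // Guid.NewGuid() yields a globally unique value on each call;
        // the "N" format is 32 hex digits with no dashes.
        string fileName = "export_" + Guid.NewGuid().ToString("N") + ".txt";
        Console.WriteLine(fileName);
    }
}
```

The trade-off is that Guid names are unique but not sequential, so this only fits if the numbering order does not matter.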
I would prefer a complete and universal solution with a global mutex. It uses a mutex whose name is prefixed with "Global\", which makes it system-wide, i.e. one mutex instance is shared across all processes. If your program runs in a friendly environment, or you can specify strict permissions limited to a user account you can trust, then it works well.
Keep in mind that this solution is not transactional and is not protected against thread-abortion/process-termination.
Not transactional means that if your process/thread is caught in the middle of storage file modification and is terminated/aborted then the storage file will be left in unknown state. For instance it can be left empty. You can protect yourself against loss of data (loss of last used index) by writing the new value first, saving the file and only then removing the previous value. Reading procedure should expect a file with multiple numbers and should take the greatest.
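The append-first recovery scheme described above could be sketched like this (the one-number-per-line file layout is an assumption):

```csharp
using System;
using System.IO;
using System.Linq;

static class CrashSafeCounter
{
    // Read all numbers and take the greatest: this survives a crash
    // that left both the old and the new value in the file.
    public static int ReadCurrent(string path)
    {
        if (!File.Exists(path)) return 0;
        return File.ReadAllLines(path)
                   .Where(line => line.Trim().Length > 0)
                   .Select(int.Parse)
                   .DefaultIfEmpty(0)
                   .Max();
    }

    public static int Increment(string path)
    {
        int next = ReadCurrent(path) + 1;
        // Write the new value first (appended after the old one)...
        File.AppendAllText(path, next + Environment.NewLine);
        // ...and only then remove the previous value by rewriting the file.
        File.WriteAllText(path, next + Environment.NewLine);
        return next;
    }
}
```

A crash between the two writes leaves two numbers in the file; ReadCurrent's "take the greatest" rule makes the next run pick up correctly.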
Not protected against thread-abortion means that if a thread which obtained the mutex is aborted unexpectedly, and/or you do not have proper exception handling, then the mutex could stay locked for the life of the process that created that thread. In order to make the solution abort-protected, you will have to implement timeouts on obtaining the lock, i.e. replace the following line, which waits forever,
blnResult = iLock.Mutex.WaitOne();
with something with timeout.
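A sketch of that replacement (an unnamed mutex is used here so the sample runs anywhere; the original code uses a named @"Global\..." mutex, and the 30-second timeout is an arbitrary choice):

```csharp
using System;
using System.Threading;

class MutexTimeoutDemo
{
    static readonly Mutex IoMutex = new Mutex(false);

    static bool TryAccess()
    {
        // Wait at most 30 seconds instead of forever;
        // WaitOne returns false once the timeout elapses.
        if (!IoMutex.WaitOne(TimeSpan.FromSeconds(30)))
            return false;
        try
        {
            // ...read/update the storage file here...
            return true;
        }
        finally
        {
            IoMutex.ReleaseMutex();
        }
    }

    static void Main()
    {
        Console.WriteLine(TryAccess()); // prints "True" (no contention)
    }
}
```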
Summing this up: if you are looking for a really robust solution, you will end up utilizing some kind of transactional database, or writing such a database yourself :)
Here is the working code without timeout handling (I do not need it in my solution). It is robust enough to begin with.
using System;
using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;
using System.Threading;

namespace ConsoleApplication31
{
    class Program
    {
        //You only need one instance of that Mutex for each application domain (commonly each process).
        private static SMutex mclsIOLock;

        static void Main(string[] args)
        {
            //Initialize the mutex. Here you need to know the path to the file you use to store application data.
            string strEnumStorageFilePath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
                "MyAppEnumStorage.txt");
            mclsIOLock = IOMutexGet(strEnumStorageFilePath);
        }

        //Template for the main processing routine.
        public static void RequestProcess()
        {
            //This flag is used to protect against unwanted lock releases in case of recursive routines.
            bool blnLockIsSet = false;
            try
            {
                //Obtain the lock.
                blnLockIsSet = IOLockSet(mclsIOLock);
                //Read file data, update file data. Do not put much long-running code here.
                //Other processes may be waiting for the lock release.
            }
            finally
            {
                //Release the lock if it was obtained in this particular call stack frame.
                IOLockRelease(mclsIOLock, blnLockIsSet);
            }
            //Put your long-running code here.
        }

        private static SMutex IOMutexGet(string iMutexNameBase)
        {
            SMutex clsResult = new SMutex();
            string strSystemObjectName = @"Global\" + iMutexNameBase.Replace('\\', '_');

            //Give permissions to all authenticated users.
            SecurityIdentifier clsAuthenticatedUsers = new SecurityIdentifier(WellKnownSidType.AuthenticatedUserSid, null);
            MutexSecurity clsMutexSecurity = new MutexSecurity();
            MutexAccessRule clsMutexAccessRule = new MutexAccessRule(
                clsAuthenticatedUsers,
                MutexRights.FullControl,
                AccessControlType.Allow);
            clsMutexSecurity.AddAccessRule(clsMutexAccessRule);

            //Create the mutex or open an existing one.
            bool blnCreatedNew;
            clsResult.Mutex = new Mutex(
                false,
                strSystemObjectName,
                out blnCreatedNew,
                clsMutexSecurity);
            clsResult.IsMutexHeldByCurrentAppDomain = false;

            return clsResult;
        }

        //Release IO lock.
        private static void IOLockRelease(
            SMutex iLock,
            bool? iLockIsSetInCurrentStackFrame = null)
        {
            if (iLock != null)
            {
                lock (iLock)
                {
                    if (iLock.IsMutexHeldByCurrentAppDomain &&
                        (!iLockIsSetInCurrentStackFrame.HasValue || iLockIsSetInCurrentStackFrame.Value))
                    {
                        iLock.MutexOwnerThread = null;
                        iLock.IsMutexHeldByCurrentAppDomain = false;
                        iLock.Mutex.ReleaseMutex();
                    }
                }
            }
        }

        //Set the IO lock.
        private static bool IOLockSet(SMutex iLock)
        {
            bool blnResult = false;
            try
            {
                if (iLock != null)
                {
                    if (iLock.MutexOwnerThread != Thread.CurrentThread)
                    {
                        blnResult = iLock.Mutex.WaitOne();
                        iLock.IsMutexHeldByCurrentAppDomain = blnResult;
                        if (blnResult)
                        {
                            iLock.MutexOwnerThread = Thread.CurrentThread;
                        }
                        else
                        {
                            throw new ApplicationException("Failed to obtain the IO lock.");
                        }
                    }
                }
            }
            catch (AbandonedMutexException)
            {
                //The previous owner died while holding the mutex; we now own it.
                blnResult = true;
                iLock.IsMutexHeldByCurrentAppDomain = true;
                iLock.MutexOwnerThread = Thread.CurrentThread;
            }
            return blnResult;
        }
    }

    internal class SMutex
    {
        public Mutex Mutex;
        public bool IsMutexHeldByCurrentAppDomain;
        public Thread MutexOwnerThread;
    }
}

IOException while writing to text file despite locking the block

I know the answer must be out there somewhere, I applied suggestions both from many other questions and from MSDN itself but I'm probably overlooking something here.
This is my method, I use it to dump output to file. lock object declaration attached for clarity.
private static Object fileLock = new Object();

private static void WriteToFile(string msg, bool WriteLine)
{
    lock (fileLock)
    {
        msg = DateTime.Now.ToShortTimeString() + " - " + msg;
        FileInfo F = new FileInfo("dump.txt");
        using (StreamWriter writer = F.Exists ? F.AppendText() : F.CreateText()) //<--THIS LINE THROWS
        {
            if (WriteLine)
                writer.WriteLine(msg);
            else
                writer.Write(msg);
        }
    }
}
Question is: why does the using line above throw an IOException, complaining another process is using the file, the second time I call the method?
I'm calling it like this around my code:
Console.WriteLine(something);
#if DEBUG
Extensions.WriteToFile(something, true);
#endif
Again, I'm sure this is a trivial issue and someone else asked something like this getting the right answer, but I'm unable to dig it up.
UPDATE
Refactoring out the FileInfo object and switching to the File.XXX methods made the code work fine. I still wonder what the issue was; anyway, it looks solved.
#Guffa: declaration has to be private static object fileLock = new object();
#alex: Your code works just fine on my machine although it's a bit too complicated for the task imo.
static void Write(string text, string file)
{
    using (StreamWriter sw = File.AppendText(file)) // Creates or opens and appends
    {
        sw.WriteLine(text);
    }
}
Maybe some antivirus or indexer locks your dump file.

How to stop a "black box" operation?

I am using an asynchronous delegate that invokes a method which loads an XML file into an XPathDocument. If the XML is too big to fit into memory, it never finishes loading. The code below works if the XML file is successfully loaded into the XPathDocument. I have been able to use a timer event that executes the asyncXpath.EndInvoke(result) statement, and that works to end the CreateDocument method, but it does not stop the XPathDocument from loading. My conclusion is that the only thing I can do is to issue an Application.End statement to kill the application. Does anyone know how to stop a black-box operation such as loading an XPathDocument?
delegate bool AsyncXpathQueryCaller(string xmlfile);

bool found = false;
AsyncXpathQueryCaller asyncXpath = new AsyncXpathQueryCaller(CreateDocument);
IAsyncResult result = asyncXpath.BeginInvoke(xmlfile, null, null);
while (!result.IsCompleted)
{
    result.AsyncWaitHandle.WaitOne(100, false);
}
found = asyncXpath.EndInvoke(result);

private bool CreateDocument(string xmlfile)
{
    XPathDocument doc = new XPathDocument(xmlfile);
    return doc != null;
}
What about using FileInfo before you try to load it and checking the size? If it's too big just skip it.
Something like this:
FileInfo fi = new FileInfo(xmlfile);
if (fi.Length < /*some huge number*/)
{
    //load the file
}
You could declare a FileStream and give it to the constructor, but before you do, look at its Length property, and if it's too long, just return an error.
As proposed by Abe Miessler it is sensible to check the file size even before attempting to load it into an XPathDocument.
How would one decide what should be the limit?
There is no exact rule, but I have heard people say that you should multiply the file size by 5 and then the result is close to the memory that the XmlDocument will require in order for the text to be loaded/parsed.
EDIT: I just realized that KeithS has come close to a good answer. The basic idea is that you call the XPathDocument constructor that accepts a Stream which wraps a FileStream. The object you pass it should implement the Read(byte[], int, int) function to call the wrapped FileStream's Read function, or throw an exception if the operation has timed out. Here's a code sample:
class XmlStream : FileStream
{
    DateTime deadline;

    public XmlStream(string filename, TimeSpan timeout)
        : base(filename, FileMode.Open)
    {
        deadline = DateTime.UtcNow + timeout;
    }

    public override int Read(byte[] array, int offset, int count)
    {
        if (DateTime.UtcNow > deadline)
            throw new TimeoutException();
        return base.Read(array, offset, count);
    }
}
Here's some code that reads in document, but times out after 1 second:
bool found = true;
using (var stream = new XmlStream(document, TimeSpan.FromSeconds(1)))
{
    try
    {
        xpath = new XPathDocument(stream);
    }
    catch (TimeoutException)
    {
        found = false;
    }
}
If you create a separate thread instead of doing a BeginInvoke, you can just abort the thread when a timer ticks (or somebody clicks "Cancel"). While aborting threads is generally not advisable because it could be holding a lock or have global data in an inconsistent state, in this case it should be fine because your thread would not be holding a lock or accessing global data.
Here's the code for this method that does the same as the previous sample:
bool found = false;
thread = new Thread(() =>
{
    xpath = new XPathDocument(document);
    found = true;
});
thread.Start();
thread.Join(TimeSpan.FromSeconds(1));
thread.Abort();
If you're uncomfortable with aborting threads in your own app domain, you can create the document in another app domain and call AppDomain.Unload on it if it takes too long. That will require some marshalling, but probably won't have too much overhead.
The ultimate way to be able to kill the operation is to run it in a separate process and use some sort of remoting interface to access it. That's probably even messier than the other options, though, as you have to worry about finding the executable, passing parameters, the user terminating it, and so on.

What's wrong with my application ---- Size was 0, but I expected 46806 !

I'm a C# programmer.
Now, I'm using ICSharpCode.SharpZipLib.dll to create a zip file in my current project. But when I click the button a SECOND TIME to execute the function that creates the zip file, the application throws an exception that tells me, in a friendly yet serious manner, "Size was zero, but I expected 46086".
I'm confused and want to know why. When I click the button the first time, it succeeds without any error.
My related codes are as follows:
internal void ThreadProc()
{
    try
    {
        ZipHelper.CreateZip(backupZipFile, Constants.HomeConstant, true);
        // do other things
    }
    catch
    {
        // (exception handling elided in the original)
    }
}
The CreateZip() function's realization is as follows:
public static void CreateZip(string zipFileName, string sourceDirectory, bool recurse)
{
    FastZip zip = new FastZip();
    if (File.Exists(zipFileName))
    {
        File.Delete(zipFileName);
    }
    zip.CreateZip(zipFileName, sourceDirectory, recurse, "");
}
Now, I will show you the recursive calling process:
Call method "UpdateAppAsync" in "ActiveCheckManager" class
public void UpdateAppAsync(string masterConfig)
{
    this.masterConf = masterConfig;
    Thread actualThread = new Thread(new ThreadStart(UpdateApp));
    actualThread.IsBackground = true;
    actualThread.CurrentCulture = Thread.CurrentThread.CurrentCulture;
    actualThread.CurrentUICulture = Thread.CurrentThread.CurrentUICulture;
    actualThread.Start();
}
This calls the UpdateApp function asynchronously; the UpdateApp method simply calls the UpdateDetail function.
private void UpdateDetail(string masterConfig, string category)
{
    IUpdate worker = new HP.ActiveCheckLocalMode.UpdateEngine.UpdateManager();
    worker.UpdateApp(masterConf);
}
The worker.UpdateApp will call UpdateDetail(string, UpdateCategory) only.
private void UpdateDetail(string masterConfig, UpdateCategory cat)
{
    UpdateThread updateThread = new UpdateThread(this, cat);
    updateThread.MasterConfig = masterConfig;
    updateThread.ThreadProc();
}
That is the calling process. When I click the update button a second time, it throws the exception. Can you help me? Thank you very much.
Has the first task thread finished before you start the second one?
I would imagine that File.Delete() and some parts of SharpZipLib do not respond nicely to zipping the same folder simultaneously to the same file from multiple threads.
Promote that UpdateThread updateThread to a private member of the ActiveCheckManager class, then check whether it is already running from a previous click before creating a new thread.
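A sketch of that guard, keeping the worker thread as a field so later clicks can see it (class and method names follow the question; the body of UpdateApp is elided):

```csharp
using System.Threading;

public class ActiveCheckManager
{
    // Keep the worker thread as a field so a later click can inspect it.
    private Thread updateThread;

    public void UpdateAppAsync(string masterConfig)
    {
        // Refuse to start a second update while the first is still running.
        if (updateThread != null && updateThread.IsAlive)
            return;

        updateThread = new Thread(() => UpdateApp(masterConfig))
        {
            IsBackground = true
        };
        updateThread.Start();
    }

    private void UpdateApp(string masterConfig)
    {
        // ... existing update logic ...
    }
}
```

This way the second click is a no-op instead of racing the first zip operation for the same output file.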
