How would I restart a StreamWriter once it has been used? - c#

I am using a StreamWriter inside a button-click handler to run a test, but after the test is complete I want to run it again and record the results in the same text file.
I always receive an error that the file is still in use by another process, even though the previous test has completed. I have tried using .Close(), but then I get a "Cannot write to a closed TextWriter" error message.
StreamWriter ResultsFile = new StreamWriter(FileName, true);

public void ToFile()
{
    ResultsFile.AutoFlush = true;
    Console.SetOut(ResultsFile);
    Console.WriteLine("Regression Test Performed at {0}", thisDate);
    Console.WriteLine("-----");
}

[Test]
public void NewTextFile()
{
    ToFile();
    const int requiredNumber = 5;
    for (var i = 0; i < requiredNumber; i++)
    {
        Console.WriteLine("New ID assigned to ABC");
        Console.WriteLine("-----");
    }
}

You should not reuse the StreamWriter like that; try the following instead.
public void ToFile(StreamWriter sw)
{
    sw.AutoFlush = true;
    Console.SetOut(sw);
    Console.WriteLine("Regression Test Performed at {0}", thisDate);
    Console.WriteLine("-----");
}

[Test]
public void NewTextFile()
{
    using (var sw = new StreamWriter(FileName, true))
    {
        ToFile(sw);
        const int requiredNumber = 5;
        for (var i = 0; i < requiredNumber; i++)
        {
            Console.WriteLine("New ID assigned to ABC");
            Console.WriteLine("-----");
        }
    }
}
An explanation of the using statement can be found in the C# documentation. I would also advise you to read about the IDisposable interface.
Beware that after exiting the using statement, calling Console.WriteLine will throw an ObjectDisposedException, because you would be writing to a disposed stream. To prevent this, cache the return value of Console.Out in a local variable before entering the using statement, and after exiting it restore the original output by calling SetOut with the cached value, i.e.:
var original = Console.Out;
using (var sw = new StreamWriter(FileName, true))
{
    // ...
}
Console.SetOut(original);
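Putting it together, a minimal sketch of how the whole test might look with the original writer restored (FileName and thisDate are assumed to be defined elsewhere in your fixture):

[Test]
public void NewTextFile()
{
    var original = Console.Out;            // cache the current stdout writer
    using (var sw = new StreamWriter(FileName, true))
    {
        sw.AutoFlush = true;
        Console.SetOut(sw);                // redirect Console output to the file
        Console.WriteLine("Regression Test Performed at {0}", thisDate);
        Console.WriteLine("-----");
        for (var i = 0; i < 5; i++)
        {
            Console.WriteLine("New ID assigned to ABC");
            Console.WriteLine("-----");
        }
    }
    Console.SetOut(original);              // restore stdout so later writes don't hit a disposed stream
}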


IOException: The process cannot access the file 'fileName/textFile.txt' because it is being used by another process

I saw other threads about this problem, and none of them seems to solve my exact case.
static void RecordUpdater(string username, int points, string term) // Updates record file with new records.
{
    int minPoints = 0;
    StreamWriter streamWriter = new StreamWriter($@"Record\{term}");
    Player playersRecord = new Player(points, username);
    List<Player> allRecords = new List<Player>();
    StreamReader reader = new StreamReader($@"Record\{term}");
    while (!reader.EndOfStream)
    {
        string[] splitText = reader.ReadLine().Split(',');
        Player record = new Player(Convert.ToInt32(splitText[0]), splitText[1]);
        allRecords.Add(record);
    }
    reader.Close();
    foreach (var playerpoint in allRecords)
    {
        if (minPoints > playerpoint.points)
            minPoints = playerpoint.points;
    }
    if (points > minPoints)
    {
        allRecords.Add(playersRecord);
        allRecords.Remove(allRecords.Min());
    }
    allRecords.Sort();
    allRecords.Reverse();
    streamWriter.Flush();
    foreach (var player in allRecords)
    {
        streamWriter.WriteLine(player.points + "," + player.username);
    }
}
So after I run the program and get to that point in code I get an error message:
"The process cannot access the file 'fileName/textFile.txt' because it is being used by another process."
You should use the using statement around disposable objects like streams. This ensures that the objects release any unmanaged resources they hold. Also, don't open the writer until you need it; it makes no sense to open the writer when you first need to read the records:
static void RecordUpdater(string username, int points, string term)
{
    Player playersRecord = new Player(points, username);
    List<Player> allRecords = new List<Player>();
    int minPoints = 0;
    try
    {
        // Read all existing records first; the reader is closed when the using block ends.
        using (StreamReader reader = new StreamReader($@"Record\{term}"))
        {
            while (!reader.EndOfStream)
            {
                string[] splitText = reader.ReadLine().Split(',');
                allRecords.Add(new Player(Convert.ToInt32(splitText[0]), splitText[1]));
            }
        }

        // Process the data: find the minimum score and insert the new record if it qualifies.
        foreach (var playerpoint in allRecords)
        {
            if (minPoints > playerpoint.points)
                minPoints = playerpoint.points;
        }
        if (points > minPoints)
        {
            allRecords.Add(playersRecord);
            allRecords.Remove(allRecords.Min());
        }
        allRecords.Sort();
        allRecords.Reverse();

        // Only now open the writer; the reader has already been disposed, so the file is free.
        using (StreamWriter streamWriter = new StreamWriter($@"Record\{term}"))
        {
            foreach (var player in allRecords)
                streamWriter.WriteLine(player.points + "," + player.username);
        }
    }
    catch (Exception ex)
    {
        // Show a message about ex.Message, or just log everything to a file for later analysis.
        Console.WriteLine(ex.Message);
    }
}
Also, you should consider that working with files is one of the contexts in which you are most likely to receive an exception due to external events over which your program has no control.
It is better to enclose everything in a try/catch block with proper handling of the exception.
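For instance, catching the more specific exception types first usually gives you more useful handling than a blanket catch; a sketch, independent of the code above:

try
{
    // file work here
}
catch (IOException ioEx)
{
    // Sharing violations, disk full, missing files, etc.
    Console.WriteLine("I/O error: " + ioEx.Message);
}
catch (UnauthorizedAccessException accessEx)
{
    // The file is read-only or the user lacks permission.
    Console.WriteLine("Access denied: " + accessEx.Message);
}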

Sudden memory consumption jump resulting in out of memory exception while processing huge text file

I need to process a very large text file (6-8 GB). I wrote the code attached below. Unfortunately, every time the output file (created next to the source file) reaches ~2 GB, I observe a sudden jump in memory consumption (from ~100 MB to a few GBs) and, as a result, an out-of-memory exception.
The debugger indicates that the OOM occurs at while ((tempLine = streamReader.ReadLine()) != null).
I am targeting .NET 4.7 and x64 architecture only.
A single line is at most 50 characters long.
I could work around this by splitting the original file into smaller parts, processing them separately, and merging the results back into one file at the end, but I would prefer not to.
Code:
public async Task PerformDecodeAsync(string sourcePath, string targetPath)
{
    var allLines = CountLines(sourcePath);
    long processedlines = default;
    using (File.Create(targetPath));
    var streamWriter = File.AppendText(targetPath);
    var decoderBlockingCollection = new BlockingCollection<string>(1000);
    var writerBlockingCollection = new BlockingCollection<string>(1000);

    var producer = Task.Factory.StartNew(() =>
    {
        using (var streamReader = new StreamReader(File.OpenRead(sourcePath), Encoding.Default, true))
        {
            string tempLine;
            while ((tempLine = streamReader.ReadLine()) != null)
            {
                decoderBlockingCollection.Add(tempLine);
            }
            decoderBlockingCollection.CompleteAdding();
        }
    });

    var consumer1 = Task.Factory.StartNew(() =>
    {
        foreach (var line in decoderBlockingCollection.GetConsumingEnumerable())
        {
            short decodeCounter = 0;
            StringBuilder builder = new StringBuilder();
            foreach (var singleChar in line)
            {
                var positionInDecodeKey = decodingKeysList[decodeCounter].IndexOf(singleChar);
                if (positionInDecodeKey > 0)
                    builder.Append(model.Substring(positionInDecodeKey, 1));
                else
                    builder.Append(singleChar);

                if (decodeCounter > 18)
                    decodeCounter = 0;
                else ++decodeCounter;
            }
            writerBlockingCollection.TryAdd(builder.ToString());
            Interlocked.Increment(ref processedlines);
            if (processedlines == (long)allLines)
                writerBlockingCollection.CompleteAdding();
        }
    });

    var writer = Task.Factory.StartNew(() =>
    {
        foreach (var line in writerBlockingCollection.GetConsumingEnumerable())
        {
            streamWriter.WriteLine(line);
        }
    });

    Task.WaitAll(producer, consumer1, writer);
}
Solutions, as well as advice on how to optimize it a little more, are greatly appreciated.
Like I said, I'd probably go for something simpler first, unless or until it's demonstrated that it's not performing well. As Adi said in their answer, this work appears to be I/O bound - so there seems little benefit in creating multiple tasks for it.
public void PerformDecode(string sourcePath, string targetPath)
{
    File.WriteAllLines(targetPath, File.ReadLines(sourcePath).Select(line =>
    {
        short decodeCounter = 0;
        StringBuilder builder = new StringBuilder();
        foreach (var singleChar in line)
        {
            var positionInDecodeKey = decodingKeysList[decodeCounter].IndexOf(singleChar);
            if (positionInDecodeKey > 0)
                builder.Append(model.Substring(positionInDecodeKey, 1));
            else
                builder.Append(singleChar);

            if (decodeCounter > 18)
                decodeCounter = 0;
            else ++decodeCounter;
        }
        return builder.ToString();
    }));
}
Now, of course, this code actually blocks until it's done, which is why I've not marked it async. But then, so did yours, and it should have been warning about that already.
(You could try using PLINQ instead of LINQ for the Select portion, but honestly, the amount of processing we're doing here looks trivial; profile first before applying any such change.)
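If you do want to experiment with that, a minimal sketch of the PLINQ variant might look like this (DecodeLine is a hypothetical method wrapping the per-line transform shown above; AsOrdered() keeps the output lines in source order):

public void PerformDecodeParallel(string sourcePath, string targetPath)
{
    File.WriteAllLines(targetPath,
        File.ReadLines(sourcePath)
            .AsParallel()          // spread the per-line transform across cores
            .AsOrdered()           // preserve the original line order in the output
            .Select(DecodeLine));  // same transform as above, extracted into a method
}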
As the work you are doing is mostly IO-bound, you aren't really gaining anything from parallelization. It also looks to me (correct me if I'm wrong) like your transformation algorithm doesn't depend on reading the file line by line, so I would recommend doing something like this instead:
void Main()
{
    // Set up streams for testing
    using (var inputStream = new MemoryStream())
    using (var outputStream = new MemoryStream())
    using (var inputWriter = new StreamWriter(inputStream))
    using (var outputReader = new StreamReader(outputStream))
    {
        // Write a test string and rewind the stream
        inputWriter.Write("abcdefghijklmnop");
        inputWriter.Flush();
        inputStream.Seek(0, SeekOrigin.Begin);

        var inputBuffer = new byte[5];
        var outputBuffer = new byte[5];
        int inputLength;
        while ((inputLength = inputStream.Read(inputBuffer, 0, inputBuffer.Length)) > 0)
        {
            for (var i = 0; i < inputLength; i++)
            {
                // Transform each character
                outputBuffer[i] = ++inputBuffer[i];
            }
            // Write to output
            outputStream.Write(outputBuffer, 0, inputLength);
        }

        // Read back for testing
        outputStream.Seek(0, SeekOrigin.Begin);
        var output = outputReader.ReadToEnd();
        Console.WriteLine(output);
        // Outputs: "bcdefghijklmnopq"
    }
}
Obviously, you would be using FileStreams instead of MemoryStreams, and you can increase the buffer length to something much larger (this was just a demonstrative example). Also, since your original method is async, you can use the async variants of Stream.Write and Stream.Read.
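A minimal sketch of that combination (the 80 KB buffer size and the Transform method are placeholders standing in for the real decoding, not part of the original code):

public async Task TransformFileAsync(string sourcePath, string targetPath)
{
    var buffer = new byte[81920]; // much larger buffer than the 5-byte demo above
    using (var input = File.OpenRead(sourcePath))
    using (var output = File.Create(targetPath))
    {
        int read;
        while ((read = await input.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            for (var i = 0; i < read; i++)
                buffer[i] = Transform(buffer[i]); // placeholder per-byte transform
            await output.WriteAsync(buffer, 0, read);
        }
    }
}

static byte Transform(byte b) => (byte)(b + 1); // stand-in matching the demo's ++ transform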

Parallel read of SQL Server FileStream not working

I am trying to read the same SQL Server file stream in parallel threads, but am having no success.
Mostly I get the following exception (although from time to time I get other errors):
System.InvalidOperationException: "The process cannot access the file specified because it has been opened in another transaction."
I have searched the internet and found just a few posts, but as I understand it, this is supposed to work. I'm using SQL Server 2008 R2.
I've simplified the code to the following: the main code opens a transaction and then runs 2 threads in parallel, each using a DependentTransaction to copy the SQL Server file stream to a temporary file on disk.
If I change threadCount to 1, then the code works.
Any idea why this fails?
The code:
class Program
{
    private static void Main(string[] args)
    {
        string path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(path);
        try
        {
            using (var transactionScope = new TransactionScope(TransactionScopeOption.Required))
            {
                TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);
                const int threadCount = 2;
                var transaction = Transaction.Current;

                // Create dependent transactions, one for each thread
                var dependentTransactions = Enumerable
                    .Repeat(transaction.DependentClone(DependentCloneOption.BlockCommitUntilComplete), threadCount)
                    .ToList();

                // Copy the file from the DB to temporary files, in parallel (each thread uses a different temporary file).
                Parallel.For(0, threadCount, i =>
                {
                    using (dependentTransactions[i])
                    {
                        CopyFile(path, dependentTransactions[i]);
                        dependentTransactions[i].Complete();
                    }
                });

                transactionScope.Complete();
            }
        }
        finally
        {
            if (Directory.Exists(path))
                Directory.Delete(path, true);
        }
    }

    private static void CopyFile(string path, DependentTransaction dependentTransaction)
    {
        string tempFilePath = Path.Combine(path, Path.GetRandomFileName());

        // Open a transaction scope for the dependent transaction
        using (var transactionScope = new TransactionScope(dependentTransaction, TransactionScopeAsyncFlowOption.Enabled))
        {
            using (Stream stream = GetStream())
            {
                // Copy the SQL stream to a temporary file
                using (var tempFileStream = File.OpenWrite(tempFilePath))
                    stream.CopyTo(tempFileStream);
            }
            transactionScope.Complete();
        }
    }

    // Gets a SQL file stream from the DB
    private static Stream GetStream()
    {
        var sqlConnection = new SqlConnection("Integrated Security=true;server=(local);initial catalog=DBName");
        var sqlCommand = new SqlCommand { Connection = sqlConnection };
        sqlConnection.Open();
        sqlCommand.CommandText = "SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()";
        Object obj = sqlCommand.ExecuteScalar();
        byte[] txContext = (byte[])obj;
        const string path = "\\\\MyMachineName\\MSSQLSERVER\\v1\\DBName\\dbo\\TableName\\TableName\\FF1444E6-6CD3-4AFF-82BE-9B5FCEB5FC96";
        var sqlFileStream = new SqlFileStream(path, txContext, FileAccess.Read, FileOptions.SequentialScan, 0);
        return sqlFileStream;
    }
}

IsolatedStorageException sometimes following a disk-full condition?

I'm designing an API. Currently I'm trying to safely handle the condition where we run out of disk space. Basically, we have a series of files holding some data. When the disk is full and we go to write another data file, it will of course throw an error. At that point, we delete a single file (looping through the file list from oldest to newest, retrying after we successfully delete a file) and then retry writing the file. That process repeats until the file is written without error.
Now the fun part: all of this happens concurrently. At some point there are 8 threads doing this at once. This makes things extra interesting, and has led to an odd error.
Here is the code:
public void Save(string text, string id)
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        var existing = store.GetFileNames(string.Format(Prefix + "/*-{0}.dat", id));
        if (existing.Any()) return; // it is already saved

        string name = string.Format(Prefix + "/{0}-{1}.dat", DateTime.UtcNow.ToString("yyyyMMddHHmmssfffffff"), id);
    tryagain:
        bool doover = false;
        try
        {
            AttemptFileWrite(store, name, text);
        }
        catch (IOException)
        {
            doover = true;
        }
        catch (IsolatedStorageException) // THIS LINE
        {
            doover = true;
        }
        if (doover)
        {
            Attempt(() => store.DeleteFile(name)); // because apparently this can also fail.
            var files = store.GetFileNames(Path.Combine(Prefix, "*.dat"));
            foreach (var file in files.OrderBy(x => x))
            {
                try
                {
                    store.DeleteFile(Path.Combine(Prefix, file));
                }
                catch
                {
                    continue;
                }
                break;
            }
            goto tryagain; // prepare the velociraptor shield!
        }
    }
}
void AttemptFileWrite(IsolatedStorageFile store, string name, string text)
{
    using (var file = store.OpenFile(
        name,
        FileMode.Create,
        FileAccess.ReadWrite,
        FileShare.None | FileShare.Delete
    ))
    {
        using (var writer = new StreamWriter(file))
        {
            writer.Write(text);
            writer.Flush();
            writer.Close();
        }
        file.Close();
    }
}

static void Attempt(Action func)
{
    try
    {
        func();
    }
    catch
    {
    }
}

static T Attempt<T>(Func<T> func)
{
    try
    {
        return func();
    }
    catch
    {
    }
    return default(T);
}
public string GetSaved()
{
    string content = null;
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        var files = store.GetFileNames(Path.Combine(Prefix, "*.dat")).OrderBy(x => x);
        if (!files.Any()) return new MessageBatch();
        foreach (var filename in files)
        {
            IsolatedStorageFileStream file = null;
            try
            {
                file = Attempt(() =>
                    store.OpenFile(Path.Combine(Prefix, filename), FileMode.Open, FileAccess.ReadWrite, FileShare.None | FileShare.Delete));
                if (file == null)
                {
                    continue; // couldn't open; assume locked or some such
                }
                file.Seek(0L, SeekOrigin.Begin);
                using (var reader = new StreamReader(file))
                {
                    content = reader.ReadToEnd();
                }
                // Take note here: we delete the file while we still have it open!
                // This is done because having the file open prevents other readers, but if we closed it first,
                // there would be a race condition: right after closing the stream, another reader could pick it up
                // and open it exclusively. It looks weird, but it's right. Trust me.
                store.DeleteFile(Path.Combine(Prefix, filename));
                if (!string.IsNullOrEmpty(content))
                {
                    break;
                }
            }
            finally
            {
                if (file != null) file.Close();
            }
        }
    }
    return content;
}
The line marked THIS LINE is what I'm talking about. During AttemptFileWrite, I can look at store.AvailableSpace and see that there is enough room for the data, yet upon trying to open the file, it throws this IsolatedStorageException with the description "Operation Not Permitted". Aside from this weird case, in all other cases it's just an IOException with a message about the disk being full.
I'm trying to figure out whether I have some odd race condition, or whether this is an error I just have to deal with.
Why does this error occur?

C# expression, equivalent to ruby's sandwich block code

I am a .NET developer who recently started learning Ruby with ruby_koans. Some of Ruby's syntax is amazing, and one example is the way it handles "sandwich" code.
The following is ruby sandwich code.
def file_sandwich(file_name)
  file = open(file_name)
  yield(file)
ensure
  file.close if file
end

def count_lines2(file_name)
  file_sandwich(file_name) do |file|
    count = 0
    while line = file.gets
      count += 1
    end
    count
  end
end

def test_counting_lines2
  assert_equal 4, count_lines2("example_file.txt")
end
I am fascinated that I can get rid of the cumbersome file open/close code each time I access a file, but I cannot think of any equivalent C# code. Maybe I could use an IoC container's dynamic proxy to do the same thing, but is there any way to do it purely with C#?
Many thanks in advance.
You certainly don't need anything IoC-related here. How about:
public T ActOnFile<T>(string filename, Func<Stream, T> func)
{
    using (Stream stream = File.OpenRead(filename))
    {
        return func(stream);
    }
}
public int CountLines(string filename)
{
    return ActOnFile(filename, stream =>
    {
        using (StreamReader reader = new StreamReader(stream))
        {
            int count = 0;
            while (reader.ReadLine() != null)
            {
                count++;
            }
            return count;
        }
    });
}
In this case it doesn't help very much, as the using statement already does most of what you want... but the general principle holds. Indeed, that's how LINQ is so flexible. If you haven't looked at LINQ yet, I strongly recommend that you do.
Here's the actual CountLines method I'd use:
public int CountLines(string filename)
{
    return File.ReadLines(filename).Count();
}
Note that this will still only read a line at a time... but the Count extension method acts on the returned sequence.
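In other words, File.ReadLines streams the file lazily, whereas File.ReadAllLines materializes the whole file up front; a quick illustration (the file name is just an example):

// Streams one line at a time; memory use stays flat even for huge files.
int lazyCount = File.ReadLines("huge.txt").Count();

// Loads the entire file into a string[] before you can count anything.
int eagerCount = File.ReadAllLines("huge.txt").Length;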
In .NET 3.5 it would be:
public int CountLines(string filename)
{
    using (var reader = File.OpenText(filename))
    {
        int count = 0;
        while (reader.ReadLine() != null)
        {
            count++;
        }
        return count;
    }
}
... still pretty simple.
Are you just looking for something that opens and closes the stream for you?
public IEnumerable<string> GetFileLines(string path)
{
    // The using() statement will open, close, and dispose your stream for you:
    using (FileStream fs = new FileStream(path, FileMode.Open))
    {
        // do stuff here, e.g. read lines and yield return them
        yield break;
    }
}
Is yield return what you're looking for?
using will call Dispose() (which in turn closes the stream) when it reaches the closing brace, but I think the question is how to achieve this particular structure of code.
Edit: Just realized that this isn't exactly what you're looking for, but I'll leave this answer here since a lot of people aren't aware of this technique.
static IEnumerable<string> GetLines(string filename)
{
    using (var r = new StreamReader(filename))
    {
        string line;
        while ((line = r.ReadLine()) != null)
            yield return line;
    }
}
static void Main(string[] args)
{
    Console.WriteLine(GetLines("file.txt").Count());

    // Or, similarly:
    int count = 0;
    foreach (var l in GetLines("file.txt"))
        count++;
    Console.WriteLine(count);
}
