Hello
I've been working on a terminal-like application to get better at programming in C#, just something to help me learn. I decided to add a feature that copies a file exactly as it is to a new file... It seems to work almost perfectly. When opened in Notepad++, the files are only a few lines apart in length, and very, very close to the same actual file size. However, the duplicated copy of the file never runs. It says the file is corrupt. I have a feeling the problem is in the methods I created for reading and rewriting binary data to files. The code is as follows; thanks for the help. Sorry for the spaghetti code too, I get a bit sloppy when I'm messing around with new ideas.
Class that handles the file copying/writing
using System;
using System.IO;
//using System.Collections.Generic;

namespace ConsoleFileExplorer
{
    class FileTransfer
    {
        private BinaryWriter writer;
        private BinaryReader reader;
        private FileStream fsc; // file to be duplicated
        private FileStream fsn; // new location of file
        int[] fileData;
        private string _file;

        public FileTransfer(String file)
        {
            _file = file;
            fsc = new FileStream(file, FileMode.Open);
            reader = new BinaryReader(fsc);
        }

        // Reads all the original file's data to an array of bytes
        public byte[] ReadAllDataToArray()
        {
            byte[] bytes = reader.ReadBytes((int)fsc.Length); // reading bytes from the original file
            return bytes;
        }

        // Writes the array of original byte data to a new file
        public void WriteDataFromArray(byte[] fileData, string path) // got a feeling this is the problem :p
        {
            fsn = new FileStream(path, FileMode.Create);
            writer = new BinaryWriter(fsn);
            int i = 0;
            while (i < fileData.Length)
            {
                writer.Write(fileData[i]);
                i++;
            }
        }
    }
}
Code that interacts with this class. (The Sleep(5000) calls are because I was expecting an error on the first attempt...)
case '3':
    Console.Write("Enter source file: ");
    string sourceFile = Console.ReadLine();
    if (sourceFile == "")
    {
        Console.Clear();
        Console.ForegroundColor = ConsoleColor.DarkRed;
        Console.Error.WriteLine("Must input a proper file path.\n");
        Console.ForegroundColor = ConsoleColor.White;
        Menu();
    }
    else
    {
        Console.WriteLine("Copying Data"); System.Threading.Thread.Sleep(5000);
        FileTransfer trans = new FileTransfer(sourceFile);
        // copying the original file's data
        byte[] data = trans.ReadAllDataToArray();
        Console.Write("Enter Location to store data: ");
        string newPath = Console.ReadLine();
        // Just for me to make sure it doesn't exit if I forget
        if (newPath == "")
        {
            Console.Clear();
            Console.ForegroundColor = ConsoleColor.DarkRed;
            Console.Error.WriteLine("Cannot have empty path.");
            Console.ForegroundColor = ConsoleColor.White;
            Menu();
        }
        else
        {
            Console.WriteLine("Writing data to file"); System.Threading.Thread.Sleep(5000);
            trans.WriteDataFromArray(data, newPath);
            Console.WriteLine("File stored.");
            Console.ReadLine();
            Console.Clear();
            Menu();
        }
    }
    break;
Original file compared to new file (screenshots omitted).
You're not properly disposing the file streams and the binary writer. Both tend to buffer data (which is a good thing, especially when you're writing one byte at a time). Use using, and your problem should disappear. Unless somebody is editing the file while you're reading it, of course.

BinaryReader and BinaryWriter do not just read and write "raw data". They also add metadata as needed - they're designed for serialization and deserialization, rather than reading and writing bytes. Now, in the particular case of ReadBytes and Write(byte[]), those really are just raw bytes; but there's not much point in using these classes just for that. Reading and writing bytes is something every Stream gives you - and that includes FileStream. There's no reason to use BinaryReader/BinaryWriter here whatsoever - the file streams give you everything you need.
A better approach would be to simply use
using (var fsn = ...)
{
fsn.Write(fileData, 0, fileData.Length);
}
or even just
File.WriteAllBytes(fileName, fileData);
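Fleshed out, a minimal corrected version of the question's WriteDataFromArray might look like this (same names as the original code; just a sketch):

public void WriteDataFromArray(byte[] fileData, string path)
{
    using (var fsn = new FileStream(path, FileMode.Create))
    {
        fsn.Write(fileData, 0, fileData.Length); // one buffered write; Dispose flushes and closes the stream
    }
}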
Maybe you're thinking that writing a byte at a time is closer to "the metal", but that simply isn't the case. At no point during this does the CPU pass a byte at a time to the hard drive. Instead, the hard drive copies data directly from RAM, with no intervention from the CPU. And most hard drives still can't write (or read) arbitrary amounts of data from the physical media - instead, you're reading and writing whole sectors. If the system really did write a byte at a time, you'd just keep rewriting the same sector over and over again, just to write one more byte.
An even better approach would be to use the fact that you've got file streams open, and stream the files from source to destination rather than first reading everything into memory, and then writing it back to disk.
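For example, a sketch of that streaming approach (variable names borrowed from the question's menu code):

using (var source = new FileStream(sourceFile, FileMode.Open, FileAccess.Read))
using (var destination = new FileStream(newPath, FileMode.Create, FileAccess.Write))
{
    source.CopyTo(destination); // .NET 4+; copies in chunks without holding the whole file in memory
}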
There is a File.Copy() method in C#; you can see it here: https://msdn.microsoft.com/ru-ru/library/c6cfw35a(v=vs.110).aspx

If you want to implement it yourself, try placing a breakpoint inside your methods and using the debugger. It's like the story about the fisherman and the god who gave the fisherman a rod to catch fish with, rather than the fish itself.

Also, look at your int[] fileData field and the byte[] fileData parameter in the last method; maybe that is the problem.
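For reference, the File.Copy call mentioned above is a one-liner (a sketch; sourceFile and newPath as in the question's code):

File.Copy(sourceFile, newPath); // throws if the destination exists; File.Copy(sourceFile, newPath, true) overwrites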
Related
I have an SSIS script task, written in C#, that zips files. I have a problem when zipping an approximately 1 GB file. I tried to implement this code and still get the error 'System.OutOfMemoryException':
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at ST_4cb59661fb81431abcf503766697a1db.ScriptMain.AddFileToZipUsingStream(String sZipFile, String sFilePath, String sFileName, String sBackupFolder, String sPrefixFolder) in c:\Users\dtmp857\AppData\Local\Temp\vsta\84bef43d323b439ba25df47c365b5a29\ScriptMain.cs:line 333
at ST_4cb59661fb81431abcf503766697a1db.ScriptMain.Main() in c:\Users\dtmp857\AppData\Local\Temp\vsta\84bef43d323b439ba25df47c365b5a29\ScriptMain.cs:line 131
This is the snippet of code that zips the file:
protected bool AddFileToZipUsingStream(string sZipFile, string sFilePath, string sFileName, string sBackupFolder, string sPrefixFolder)
{
    bool bIsSuccess = false;
    try
    {
        if (File.Exists(sZipFile))
        {
            using (ZipArchive addFile = ZipFile.Open(sZipFile, ZipArchiveMode.Update))
            {
                addFile.CreateEntryFromFile(sFilePath, sFileName);
                // Move file after zipping it
                BackupFile(sFilePath, sBackupFolder, sPrefixFolder);
            }
        }
        else
        {
            // from https://stackoverflow.com/questions/28360775/adding-large-files-to-io-compression-ziparchiveentry-throws-outofmemoryexception
            using (var zipFile = ZipFile.Open(sZipFile, ZipArchiveMode.Update))
            {
                var zipEntry = zipFile.CreateEntry(sFileName);
                using (var writer = new BinaryWriter(zipEntry.Open()))
                using (FileStream fs = File.Open(sFilePath, FileMode.Open))
                {
                    var buffer = new byte[16 * 1024];
                    using (var data = new BinaryReader(fs))
                    {
                        int read;
                        while ((read = data.Read(buffer, 0, buffer.Length)) > 0)
                            writer.Write(buffer, 0, read);
                    }
                }
            }
            // Move file after zipping it
            BackupFile(sFilePath, sBackupFolder, sPrefixFolder);
        }
        bIsSuccess = true;
    }
    catch (Exception ex)
    {
        throw ex;
    }
    return bIsSuccess;
}
What am I missing? Please give me suggestions, maybe a tutorial or best practices for handling this problem.
I know this is an old post but what can I say, it helped me sort out some stuff and still comes up as a top hit on Google.
So there is definitely something wrong with the System.IO.Compression library!
First and Foremost...
You must make sure to turn off the "Prefer 32-bit" build option. Having it set (in my case with a build for "AnyCPU") causes so many inconsistent issues.
Now with that said, I took some demo files (several less than 500MB, one at 500MB, and one at 1GB), and created a sample program with 3 buttons that made use of the 3 methods.
Button 1 - ZipFile.CreateFromDirectory(AbsolutePath, TargetFile);
Button 2 - ZipArchive.CreateEntryFromFile(AbsolutePath, RelativePath);
Button 3 - Using the [16 * 1024] byte buffer method from above
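For context, Method 1 is the single-call approach (a sketch with illustrative paths; it needs a reference to System.IO.Compression.FileSystem):

using System.IO.Compression;

// Zip an entire folder in one call.
ZipFile.CreateFromDirectory(@"C:\data\toZip", @"C:\data\archive.zip");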
Now here is where it gets interesting. Assuming the program is built as "AnyCPU" with NO "Prefer 32-bit" check... all 3 methods worked on a 64-bit Windows OS, regardless of how much memory it had.

However, as soon as I ran the same test on a 32-bit OS, regardless of how much memory it had, ONLY Method 1 worked!

Methods 2 and 3 blew up with the OutOfMemoryException, AND to add salt to the wound, Method 3 (the preferred method of chunking) actually corrupted more files than Method 2!

By corrupted, I mean that of my files, the 500 MB and the 1 GB files ended up in the zipped archive, but at a size smaller than the original (they were basically truncated).

So I dunno... since there aren't many 32-bit OSes around anymore, I guess maybe it's a moot point.

But it seems like there are some bugs in the System.IO.Compression framework!
I'm attempting to use StreamReader and StreamWriter to grab a temporary output log (.txt format) from another application.
The output log is always open and constantly written to.
Unhelpfully if the application closes or crashes, the log file ends up deleted - hence the need for a tool that can grab the information from this log and save it.
What my program currently does is:

1. Creates a new .txt file, and stores the path of that file as the string "destinationFile".
2. Finds the .txt log file to read, and stores the path of that file as the string "sourceFile".
3. Passes those two strings to the method below.
Essentially I'm trying to read the sourceFile one line at a time.
Each time one line is read, it is appended to destinationFile.
This keeps looping until the sourceFile no longer exists (i.e. the application has closed or crashed and deleted its log).
In addition, the sourceFile can get quite big (sometimes 100 MB+), and this program may be handling more than one log at a time.
Reading the whole log rather than line by line would most likely start consuming a fair bit of memory.
private void logCopier(string sourceFile, string destinationFile)
{
    while (File.Exists(sourceFile))
    {
        string textLine;
        using (var readerStream = File.Open(sourceFile,
                                            FileMode.Open,
                                            FileAccess.Read,
                                            FileShare.ReadWrite))
        using (var reader = new StreamReader(readerStream))
        {
            while ((textLine = reader.ReadLine()) != null)
            {
                using (FileStream writerStream = new FileStream(destinationFile,
                                                                FileMode.Append,
                                                                FileAccess.Write))
                using (StreamWriter writer = new StreamWriter(writerStream))
                {
                    writer.WriteLine(textLine);
                }
            }
        }
    }
}
The problem is that my WPF application locks up and ceases to respond when it reaches this code.
To track down where, I put a MessageBox just before the writerStream line of the code to output what the reader was picking up.
It was certainly reading the log file just fine, but there appears to be a problem with writing it to the file.
As soon as it reaches the using (FileStream writerStream = new FileStream part of the code, it stops responding.
Is using the StreamWriter in this manner not valid, or have I just gone and done something silly in the code?
I'm also open to a better solution than what I'm trying to do here.
Simply, what I understand is that you need to copy a file from a source, which may be deleted at any time, to a destination.

I suggest you use FileSystemWatcher to watch for the source file's Changed event, then simply copy the whole file from source to destination using File.Copy, as in the sketch below.
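A minimal sketch of that idea (the paths and file name are placeholders):

using System;
using System.IO;

var watcher = new FileSystemWatcher(@"C:\logs", "output.txt");
watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size;
// Copy the whole log to a safe location every time it changes (true = overwrite).
watcher.Changed += (s, e) => File.Copy(e.FullPath, @"C:\backup\output.txt", true);
watcher.EnableRaisingEvents = true;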
I've just solved the problem, and the issue was indeed something silly!
When creating the text file for the StreamWriter, I had forgotten to use .Dispose(). I had File.Create(filename); instead of File.Create(filename).Dispose();. This meant the text file was still open, and the StreamWriter was attempting to write to a file that was locked / in use.
The UI still locks up (as expected), as I've yet to implement this on a new thread as SteenT mentioned. However the program no longer crashes and the code correctly reads the log and outputs to a text file.
Also after a bit of refinement, my log reader/writer code now looks like this:
private void logCopier(string sourceFile, string destinationFile)
{
    int num = 1;
    string textLine = String.Empty;
    long offset = 0L;
    while (num == 1)
    {
        if (File.Exists(sourceFile))
        {
            FileStream stream = new FileStream(sourceFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
            using (new StreamReader(stream))
            {
                stream.Seek(offset, SeekOrigin.Begin);
                TextReader reader2 = new StreamReader(stream);
                while ((textLine = reader2.ReadLine()) != null)
                {
                    Thread.Sleep(1);
                    StreamWriter writer = new StreamWriter(destinationFile, true);
                    writer.WriteLine(textLine);
                    writer.Flush();
                    writer.Close();
                    offset = stream.Position;
                }
                continue;
            }
        }
        else
        {
            num = 0;
        }
    }
}
Just putting this code up here in case anyone else is looking for something like this. :)
I'm trying to get my program to read text from a .txt file and then read it back to me, but for some reason the program crashes when I run it. Could someone let me know what I'm doing wrong? Thanks! :)
using System;
using System.IO;

public class Hello1
{
    public static void Main()
    {
        string winDir = System.Environment.GetEnvironmentVariable("windir");
        StreamReader reader = new StreamReader(winDir + "\\Name.txt");
        try
        {
            do
            {
                Console.WriteLine(reader.ReadLine());
            }
            while (reader.Peek() != -1);
        }
        catch
        {
            Console.WriteLine("File is empty");
        }
        finally
        {
            reader.Close();
        }
        Console.ReadLine();
    }
}
I don't like your solution, for two simple reasons:

1) I don't like "gotta catch 'em all" (try/catch). To avoid it, check whether the file exists using System.IO.File.Exists("YourPath").

2) With this code you never dispose the StreamReader. To avoid that, it's better to use a using block, like this: using (StreamReader sr = new StreamReader(path)) { /* your code */ }
Usage example:
string path = "filePath";
if (System.IO.File.Exists(path))
    using (System.IO.StreamReader sr = new System.IO.StreamReader(path))
    {
        while (sr.Peek() > -1)
            Console.WriteLine(sr.ReadLine());
    }
else
    Console.WriteLine("The file does not exist!");
If your file is located in the same folder as the .exe, all you need to do is StreamReader reader = new StreamReader("File.txt");
Otherwise, where File.txt is, put the full path to the file. Personally, I think it's easier if they are in the same location.
From there, it's as simple as Console.WriteLine(reader.ReadLine());
If you want to read all lines and display all at once, you could do a for loop:
for (int i = 0; i < lineAmount; i++)
{
    Console.WriteLine(reader.ReadLine());
}
Use the code below if you want the result as a string instead of an array.
File.ReadAllText(Path.Combine(winDir, "Name.txt"));
Why not use System.IO.File.ReadAllLines(winDir + "\\Name.txt")?
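For example, a sketch reusing winDir from the question:

// Read every line into memory, then print them.
string[] lines = System.IO.File.ReadAllLines(winDir + "\\Name.txt");
foreach (string line in lines)
    Console.WriteLine(line);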
If all you're trying to do is display this as output in the console, you could do that pretty compactly:
private static string winDir = Environment.GetEnvironmentVariable("windir");

static void Main(string[] args)
{
    Console.Write(File.ReadAllText(Path.Combine(winDir, "Name.txt")));
    Console.Read();
}
using (var fs = new FileStream(winDir + "\\Name.txt", FileMode.Open, FileAccess.Read))
{
    using (var reader = new StreamReader(fs))
    {
        // your code
    }
}
The .NET Framework has a variety of ways to read a text file. Each has pros and cons... let's go through two.

The first is the one that many of the other answers recommend:
String allTxt = File.ReadAllText(Path.Combine(winDir, "Name.txt"));
This will read the entire file into a single String. It will be quick and painless. It comes with a risk though... If the file is large enough, you may run out of memory. Even if you can store the entire thing into memory, it may be large enough that you will have paging, and will make your software run quite slowly. The next option addresses this.
The second solution allows you to work with one line at a time and not load the entire file into memory:
foreach (String line in File.ReadLines(Path.Combine(winDir, "Name.txt")))
    // Do work with the single line.
    Console.WriteLine(line);
This solution may take a little longer because it does work MORE OFTEN with the contents of the file... however, it will prevent awkward memory errors.
I tend to go with the second solution, but only because I'm paranoid about loading huge Strings into memory.
Here is my code:
public static TextWriter twLog = null;
private int fileNo = 1;
private string line = null;

TextReader tr = new StreamReader("file_no.txt");
TextWriter tw = new StreamWriter("file_no.txt");
line = tr.ReadLine();
if (line != null)
{
    fileNo = int.Parse(line);
    twLog = new StreamWriter("log_" + line + ".txt");
}
else
{
    twLog = new StreamWriter("log_" + fileNo.ToString() + ".txt");
}
System.IO.File.WriteAllText("file_no.txt", string.Empty);
tw.WriteLine((fileNo++).ToString());
tr.Close();
tw.Close();
twLog.Close();
It throws this error:
IOException: Sharing violation on path C:\Users\Water Simulation\file_no.txt
What I'm trying to do is open a log file named log_x.txt, taking the "x" from the file_no.txt file. If file_no.txt is empty, the log file's name should be log_1.txt, and "fileNo + 1" is written to file_no.txt. After the program starts again, the new log file name should be log_2.txt. But I'm getting this error and I can't understand what I'm doing wrong. Thanks for the help.
Well, you're trying to open the file file_no.txt for reading and for writing using separate streams. This may not work as the file will be locked by the reading stream, so the writing stream can't be created and you get the exception.
One solution would be to read the file first, close the stream and then write the file after increasing the fileNo. That way the file is only opened once at a time.
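A minimal sketch of that first approach, using the question's file names (File.ReadAllText/File.WriteAllText open and close the file in a single call):

int fileNo = 1;
if (File.Exists("file_no.txt"))
{
    string line = File.ReadAllText("file_no.txt").Trim(); // the file is closed again when this returns
    if (line.Length > 0)
        fileNo = int.Parse(line);
}
TextWriter twLog = new StreamWriter("log_" + fileNo + ".txt");
File.WriteAllText("file_no.txt", (fileNo + 1).ToString()); // safe now: no other stream holds the file
// ... write log entries, then twLog.Close();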
Another way would be to create a file stream for both read and write access, like this:

FileStream fileStream = new FileStream(@"file_no.txt",
                                       FileMode.OpenOrCreate,
                                       FileAccess.ReadWrite,
                                       FileShare.None);
The accepted answer to this question seems to have a good solution also, even though I assume you do not want to allow shared reads.
Possible alternate solution
I understand you want to create unique log files when your program starts. Another way to do so would be this:
int logFileNo = 1;
string fileName = String.Format("log_{0}.txt", logFileNo);
while (File.Exists(fileName))
{
logFileNo++;
fileName = String.Format("log_{0}.txt", logFileNo);
}
This increases the number until it finds a file number where the log file doesn't exist. Drawback: If you have log_1.txt and log_5.txt, the next file won't be log_6.txt but log_2.txt.
To overcome this, you could enumerate all the files in your directory with mask log_*.txt and find the greatest number by performing some string manipulation.
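A sketch of that enumeration (assuming all log files are named log_<number>.txt in the current directory):

// Find the highest existing log number, then use the next one.
int maxNo = 0;
foreach (string file in Directory.GetFiles(".", "log_*.txt"))
{
    string name = Path.GetFileNameWithoutExtension(file); // e.g. "log_12"
    int n;
    if (int.TryParse(name.Substring("log_".Length), out n) && n > maxNo)
        maxNo = n;
}
string nextFileName = String.Format("log_{0}.txt", maxNo + 1);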
The possibilities are endless :-D
Well, this may be old, but the accepted answer didn't work for me. This error is caused when you try to read or write a file you just created from a separate stream. Solving it is very simple: just dispose the FileStream you used to create the file, and then you can access it freely.

if (!File.Exists(myfile))
{
    var fs = new FileStream(myfile, FileMode.Create);
    fs.Dispose();
    string text = File.ReadAllText(myfile);
}
var stream = new System.IO.FileStream(filePath, System.IO.FileMode.Create);
resizedBitmap.Compress(Bitmap.CompressFormat.Png, 200, stream); //problem here
stream.Close();
return resizedBitmap;
In the Compress method, I was passing the quality parameter as 200, but it sadly doesn't allow values outside the range 0-100. I changed the value of quality back to 100 and the issue was fixed.
None of the proposed options helped me, but I found a solution: in my case the problem was the anti-virus. With intensive writing to a file, the anti-virus would start scanning the file, and at that moment writing to it failed.
There is probably no other way to do this, but is there a way to append the contents of one text file into another text file, while clearing the first after the move?
The only way I know is to just use a reader and writer, which seems inefficient for large files...
Thanks!
No, I don't think there's anything which does this.
If the two files use the same encoding and you don't need to verify that they're valid, you can treat them as binary files, e.g.
using (Stream input = File.OpenRead("file1.txt"))
using (Stream output = new FileStream("file2.txt", FileMode.Append,
                                      FileAccess.Write, FileShare.None))
{
    input.CopyTo(output); // Using .NET 4
}
File.Delete("file1.txt");
Note that if file1.txt contains a byte order mark, you should skip past this first to avoid having it in the middle of file2.txt.
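A sketch of that BOM check for UTF-8 input (other encodings use different BOM bytes; this is just an illustration):

// Skip a UTF-8 BOM (EF BB BF) at the start of the input before copying.
using (Stream input = File.OpenRead("file1.txt"))
using (Stream output = new FileStream("file2.txt", FileMode.Append,
                                      FileAccess.Write, FileShare.None))
{
    byte[] bom = new byte[3];
    int read = input.Read(bom, 0, 3);
    if (!(read == 3 && bom[0] == 0xEF && bom[1] == 0xBB && bom[2] == 0xBF))
        input.Seek(0, SeekOrigin.Begin); // no UTF-8 BOM: rewind and copy everything
    input.CopyTo(output);
}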
If you're not using .NET 4 you can write your own equivalent of Stream.CopyTo... even with an extension method to make the hand-over seamless:
public static class StreamExtensions
{
    public static void CopyTo(this Stream input, Stream output)
    {
        if (input == null)
        {
            throw new ArgumentNullException("input");
        }
        if (output == null)
        {
            throw new ArgumentNullException("output");
        }
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, bytesRead);
        }
    }
}
Ignoring error handling, encodings, and efficiency for the moment, something like this would probably work (but I haven't tested it)
File.AppendAllText("path/to/destination/file", File.ReadAllText("path/to/source/file"));
Then you just have to delete or clear out the first file once this step is complete.
The cmd.exe version of this is
type fileone.txt >>filetwo.txt
del fileone.txt
You could spawn a system shell to do this; it should be pretty efficient.
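A sketch of shelling that out from C# (assumes cmd.exe is available; file names from the snippet above):

using System.Diagnostics;

var psi = new ProcessStartInfo("cmd.exe", "/c type fileone.txt >> filetwo.txt && del fileone.txt")
{
    UseShellExecute = false,
    CreateNoWindow = true
};
Process.Start(psi).WaitForExit(); // append, then delete the source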
I'm not sure what you mean by "inefficient". Jon's answer is probably enough for most cases.
However, if you are concerned about extremely large source files, Memory-Mapped Files could be your friend. See this link for more info.
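A sketch of that idea, reusing the file names from Jon's answer (a minimal illustration, not tuned for production):

using System.IO;
using System.IO.MemoryMappedFiles;

using (var mmf = MemoryMappedFile.CreateFromFile("file1.txt", FileMode.Open))
using (var view = mmf.CreateViewStream()) // the OS pages the source in as needed
using (var output = new FileStream("file2.txt", FileMode.Append, FileAccess.Write))
{
    view.CopyTo(output);
}
File.Delete("file1.txt");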