What is the best way to add text to the beginning of a file using C#?
I couldn't find a straightforward way to do this, but came up with a couple of workarounds.
Open a new file, write the text I want to add, then append the text from the old file to the end of the new file.
Since the text I want to add should be less than 200 characters, I was also thinking that I could add whitespace characters to the beginning of the file and then overwrite that whitespace with the text I wanted to add.
Has anyone else come across this problem, and if so, what did you do?
This works for me, but only for small files; it's probably not a very good solution otherwise.
string currentContent = String.Empty;
if (File.Exists(filePath))
{
currentContent = File.ReadAllText(filePath);
}
File.WriteAllText(filePath, newContent + currentContent );
Adding to the beginning of a file (prepending, as opposed to appending) is generally not a supported operation. Your #1 option is fine. If you can't write a temp file, you can pull the entire file into memory, prepend your data to the byte array, and then overwrite it back out. This is only really feasible if your files are small and you don't have to have a bunch in memory at once, because prepending to the array is not necessarily easy without a copy either.
Yeah, basically you can use something like this:
public static void PrependString(string value, FileStream file)
{
if (!file.CanWrite)
throw new ArgumentException("The specified file cannot be written.", "file");
// Read the existing contents into memory, tracking the offset so that
// partial reads don't overwrite data that was already read.
var buffer = new byte[file.Length];
int offset = 0;
int read;
while (offset < buffer.Length && (read = file.Read(buffer, offset, buffer.Length - offset)) != 0)
{
offset += read;
}
file.Position = 0;
var data = Encoding.Unicode.GetBytes(value);
file.SetLength(buffer.Length + data.Length);
file.Write(data, 0, data.Length);
file.Write(buffer, 0, buffer.Length);
}
public static void Prepend(this FileStream file, string value)
{
PrependString(value, file);
}
Then
using(var file = File.Open("yourtext.txt", FileMode.Open, FileAccess.ReadWrite))
{
file.Prepend("Text you want to write.");
}
Not really efficient for huge files, though.
Using two streams, you can do it in place, but keep in mind that this will still loop over the whole file on every addition.
using System;
using System.IO;
using System.Text;
namespace FilePrepender
{
public class FilePrepender
{
private string file=null;
public FilePrepender(string filePath)
{
file = filePath;
}
public void prependline(string line)
{
prepend(line + Environment.NewLine);
}
private void shiftSection(byte[] chunk,FileStream readStream,FileStream writeStream)
{
long initialOffsetRead = readStream.Position;
long initialOffsetWrite= writeStream.Position;
int offset = 0;
int remaining = chunk.Length;
do//ensure that the entire chunk length gets read and shifted
{
int read = readStream.Read(chunk, offset, remaining);
if (read == 0)
break;//end of file reached; only shift the bytes actually read
offset += read;
remaining -= read;
} while (remaining > 0);
writeStream.Write(chunk, 0, offset);
writeStream.Seek(initialOffsetWrite, SeekOrigin.Begin);
readStream.Seek(initialOffsetRead, SeekOrigin.Begin);
}
public void prepend(string text)
{
byte[] bytes = Encoding.Default.GetBytes(text);
byte[] chunk = new byte[bytes.Length];
using (FileStream readStream = File.Open(file, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
using(FileStream writeStream = File.Open(file, FileMode.OpenOrCreate, FileAccess.Write, FileShare.ReadWrite))
{
readStream.Seek(0, SeekOrigin.End);//seek chunk.Length past the end of the file
writeStream.Seek(chunk.Length, SeekOrigin.End);//which lets the loop run without special cases
long size = readStream.Position;
//while there's a whole chunks worth above the read head, shift the file contents down from the end
while(readStream.Position - chunk.Length >= 0)
{
readStream.Seek(-chunk.Length, SeekOrigin.Current);
writeStream.Seek(-chunk.Length, SeekOrigin.Current);
shiftSection(chunk, readStream, writeStream);
}
//clean up the remaining shift for the bytes that don't fit in size%chunk.Length
readStream.Seek(0, SeekOrigin.Begin);
writeStream.Seek(chunk.Length, SeekOrigin.Begin);
shiftSection(chunk, readStream, writeStream);
//finally, write the text you want to prepend
writeStream.Seek(0,SeekOrigin.Begin);
writeStream.Write(bytes, 0, bytes.Length);
}
}
}
}
}
I think the best way is to create a temp file. Write your text to it, then read the contents of the original file and append them to the temp file. Then you can overwrite the original with the temp file.
prepend:
private const string tempDirPath = @"c:\temp\log.log", tempDirNewPath = @"c:\temp\log.new";
StringBuilder sb = new StringBuilder();
...
File.WriteAllText(tempDirNewPath, sb.ToString());
File.AppendAllText(tempDirNewPath, File.ReadAllText(tempDirPath));
File.Delete(tempDirPath);
File.Move(tempDirNewPath, tempDirPath);
using (FileStream fs = File.OpenWrite(tempDirPath))
{ //truncate to a reasonable length
if (16384 < fs.Length) fs.SetLength(16384);
fs.Close();
}
// The file we'll prepend to
string filePath = path + "\\log.log";
// A temp file we'll write to
string tempFilePath = path + "\\temp.log";
// 1) Write your prepended contents to a temp file.
using (var writer = new StreamWriter(tempFilePath, false))
{
// Write whatever you want to prepend
writer.WriteLine("Hi");
}
// 2) Use stream lib methods to append the original contents to the Temp
// file.
using (var oldFile = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.Read, FileShare.Read))
{
using (var tempFile = new FileStream(tempFilePath, FileMode.Append, FileAccess.Write, FileShare.Read))
{
oldFile.CopyTo(tempFile);
}
}
// 3) Finally, dump the Temp file back to the original, keeping all its
// original permissions etc.
File.Replace(tempFilePath, filePath, null);
Even if what you're writing is small, the Temp file gets the entire original file appended to it before the .Replace(), so it does need to be on disk.
Note that this code is not thread-safe; if more than one thread accesses it, you can lose writes in the file swapping going on here. That said, it's also pretty expensive, so you'd want to gate access to it anyway: have multiple producers write to a shared buffer, which a single consumer thread periodically empties out via this prepend method, as in the sketch below.
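For illustration, here's a minimal sketch of that gating idea: producers enqueue lines into a thread-safe queue, and a single consumer flushes them with the same temp-file prepend shown above. The class name, method names, and ".new" suffix are all made up for the example.
using System.Collections.Concurrent;
using System.IO;
using System.Text;

public static class PrependBuffer
{
    private static readonly ConcurrentQueue<string> Pending = new ConcurrentQueue<string>();

    // Producers (any thread) just enqueue; nothing touches the file here.
    public static void Add(string line)
    {
        Pending.Enqueue(line);
    }

    // Called periodically from a single consumer thread (e.g. a timer).
    public static void Flush(string filePath)
    {
        var sb = new StringBuilder();
        string line;
        while (Pending.TryDequeue(out line))
            sb.AppendLine(line);
        if (sb.Length == 0)
            return;

        // The expensive prepend runs once per flush, on one thread only.
        var temp = filePath + ".new";
        File.WriteAllText(temp, sb.ToString());
        if (File.Exists(filePath))
            File.AppendAllText(temp, File.ReadAllText(filePath));
        File.Delete(filePath);
        File.Move(temp, filePath);
    }
}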
You should be able to do this without opening a new file. Use the following File method:
public static FileStream Open(
string path,
FileMode mode,
FileAccess access
)
Making sure to specify FileAccess.ReadWrite.
Using the FileStream returned from File.Open, read all of the existing data into memory. Then reset the pointer to the beginning of the file, write your new data, then write the existing data.
(If the file is big and/or you're suspicious of using too much memory, you can do this without having to read the whole file into memory, but implementing that is left as an exercise to the reader.)
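For example, a minimal sketch of the simpler read-everything-first approach (assuming the whole file fits in memory; the path and the prepended text are just placeholders, and System.IO and System.Text are imported):
using (var fs = File.Open(@"C:\temp\log.txt", FileMode.Open, FileAccess.ReadWrite))
{
    // Read all of the existing data into memory.
    var existing = new byte[fs.Length];
    int offset = 0;
    while (offset < existing.Length)
        offset += fs.Read(existing, offset, existing.Length - offset);

    // Reset to the beginning, write the new data, then write the existing data.
    fs.Position = 0;
    byte[] newData = Encoding.UTF8.GetBytes("text to prepend" + Environment.NewLine);
    fs.Write(newData, 0, newData.Length);
    fs.Write(existing, 0, existing.Length);
}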
The following approach solves the problem pretty easily, but since it reads the whole file into memory it is best suited to small and medium-sized text files:
string outPutFile = @"C:\Output.txt";
string result = "Some new string" + DateTime.Now.ToString() + Environment.NewLine;
StringBuilder currentContent = new StringBuilder();
List<string> rawList = File.ReadAllLines(outPutFile).ToList();
foreach (var item in rawList) {
currentContent.Append(item + Environment.NewLine);
}
File.WriteAllText(outPutFile, result + currentContent.ToString());
Use this class:
public static class File2
{
private static readonly Encoding _defaultEncoding = new UTF8Encoding(false, true); // encoding used in File.ReadAll*()
private static object _bufferSizeLock = new Object();
private static int _bufferSize = 1024 * 1024; // 1mb
public static int BufferSize
{
get
{
lock (_bufferSizeLock)
{
return _bufferSize;
}
}
set
{
lock (_bufferSizeLock)
{
_bufferSize = value;
}
}
}
public static void PrependAllLines(string path, IEnumerable<string> contents)
{
PrependAllLines(path, contents, _defaultEncoding);
}
public static void PrependAllLines(string path, IEnumerable<string> contents, Encoding encoding)
{
var temp = Path.GetTempFileName();
File.WriteAllLines(temp, contents, encoding);
AppendToTemp(path, temp, encoding);
File.Replace(temp, path, null);
}
public static void PrependAllText(string path, string contents)
{
PrependAllText(path, contents, _defaultEncoding);
}
public static void PrependAllText(string path, string contents, Encoding encoding)
{
var temp = Path.GetTempFileName();
File.WriteAllText(temp, contents, encoding);
AppendToTemp(path, temp, encoding);
File.Replace(temp, path, null);
}
private static void AppendToTemp(string path, string temp, Encoding encoding)
{
var bufferSize = BufferSize;
char[] buffer = new char[bufferSize];
using (var writer = new StreamWriter(temp, true, encoding))
{
using (var reader = new StreamReader(path, encoding))
{
int charsRead; // ReadBlock returns the number of characters read, not bytes
while ((charsRead = reader.ReadBlock(buffer, 0, bufferSize)) != 0)
{
writer.Write(buffer, 0, charsRead);
}
}
}
}
}
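Usage is then analogous to the built-in File methods, for example (the path is just an example):
File2.PrependAllText(@"C:\temp\log.txt", "=== new entries ===" + Environment.NewLine);
File2.PrependAllLines(@"C:\temp\log.txt", new[] { "first line", "second line" });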
Put the file's contents in a string, prepend the new data to that string (string = newData + string), then move the seek position of the file to 0 and write the string back into the file.
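A minimal sketch of that, using File.ReadAllText/File.WriteAllText rather than seeking by hand (the path and the new text are placeholders):
string path = @"C:\temp\log.txt";
string contents = File.ReadAllText(path);                    // the file's contents in a string
contents = "new data" + Environment.NewLine + contents;      // newData + string
File.WriteAllText(path, contents);                           // writes back starting at position 0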
Related
I want to read file continuously like GNU tail with "-f" param. I need it to live-read log file.
What is the right way to do it?
A more natural approach is to use FileSystemWatcher:
var wh = new AutoResetEvent(false);
var fsw = new FileSystemWatcher(".");
fsw.Filter = "file-to-read";
fsw.EnableRaisingEvents = true;
fsw.Changed += (s,e) => wh.Set();
var fs = new FileStream("file-to-read", FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
using (var sr = new StreamReader(fs))
{
var s = "";
while (true)
{
s = sr.ReadLine();
if (s != null)
Console.WriteLine(s);
else
wh.WaitOne(1000);
}
}
wh.Close();
Here the main reading cycle stops to wait for incoming data and FileSystemWatcher is used just to awake the main reading cycle.
You want to open a FileStream in binary mode. Periodically, seek to the end of the file minus 1024 bytes (or whatever), then read to the end and output. That's how tail -f works.
Answers to your questions:
Binary because it's difficult to randomly access the file if you're reading it as text. You have to do the binary-to-text conversion yourself, but it's not difficult. (See below)
1024 bytes because it's a nice convenient number, and should handle 10 or 15 lines of text. Usually.
Here's an example of opening the file, reading the last 1024 bytes, and converting it to text:
static void ReadTail(string filename)
{
using (FileStream fs = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
// Seek up to 1024 bytes back from the end of the file (less if the file is shorter)
long tailLength = Math.Min(1024, fs.Length);
fs.Seek(-tailLength, SeekOrigin.End);
// read the tail
byte[] bytes = new byte[tailLength];
int read = fs.Read(bytes, 0, bytes.Length);
// Convert bytes to string
string s = Encoding.Default.GetString(bytes, 0, read);
// or string s = Encoding.UTF8.GetString(bytes);
// and output to console
Console.WriteLine(s);
}
}
Note that you must open with FileShare.ReadWrite, since you're trying to read a file that's currently open for writing by another process.
Also note that I used Encoding.Default, which in US/English and for most Western European languages will be an 8-bit character encoding. If the file is written in some other encoding (like UTF-8 or another Unicode encoding), it's possible that the bytes won't convert correctly to characters. You'll have to handle that by determining the encoding if you think this will be a problem. Search Stack Overflow for info about determining a file's text encoding.
If you want to do this periodically (every 15 seconds, for example), you can set up a timer that calls the ReadTail method as often as you want. You could optimize things a bit by opening the file only once at the start of the program. That's up to you.
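For example, a minimal sketch using System.Threading.Timer to call ReadTail every 15 seconds (the interval and file name are arbitrary):
using System.Threading;

// Call ReadTail("app.log") every 15 seconds until the program exits.
var timer = new Timer(_ => ReadTail("app.log"), null,
                      TimeSpan.Zero, TimeSpan.FromSeconds(15));
Console.ReadLine();   // keep the process alive; the timer fires on a thread-pool thread
timer.Dispose();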
To continuously monitor the tail of the file, you just need to remember the length of the file before.
public static void MonitorTailOfFile(string filePath)
{
var initialFileSize = new FileInfo(filePath).Length;
var lastReadLength = initialFileSize - 1024;
if (lastReadLength < 0) lastReadLength = 0;
while (true)
{
try
{
var fileSize = new FileInfo(filePath).Length;
if (fileSize > lastReadLength)
{
using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
fs.Seek(lastReadLength, SeekOrigin.Begin);
var buffer = new byte[1024];
while (true)
{
var bytesRead = fs.Read(buffer, 0, buffer.Length);
lastReadLength += bytesRead;
if (bytesRead == 0)
break;
var text = ASCIIEncoding.ASCII.GetString(buffer, 0, bytesRead);
Console.Write(text);
}
}
}
}
catch { }
Thread.Sleep(1000);
}
}
I had to use ASCIIEncoding, because this code isn't smart enough to cater for variable character lengths of UTF8 on buffer boundaries.
Note: You can change the Thread.Sleep part to be different timings, and you can also link it with a filewatcher and blocking pattern - Monitor.Enter/Wait/Pulse. For me the timer is enough, and at most it only checks the file length every second, if the file hasn't changed.
This is my solution
static IEnumerable<string> TailFrom(string file)
{
using (var reader = File.OpenText(file))
{
while (true)
{
string line = reader.ReadLine();
if (reader.BaseStream.Length < reader.BaseStream.Position)
reader.BaseStream.Seek(0, SeekOrigin.Begin);
if (line != null) yield return line;
else Thread.Sleep(500);
}
}
}
So, in your code you can do:
foreach (string line in TailFrom(file))
{
Console.WriteLine($"line read= {line}");
}
You could use the FileSystemWatcher class which can send notifications for different events happening on the file system like file changed.
private void button1_Click(object sender, EventArgs e)
{
if (folderBrowserDialog.ShowDialog() == DialogResult.OK)
{
path = folderBrowserDialog.SelectedPath;
fileSystemWatcher.Path = path;
string[] str = Directory.GetFiles(path);
string line;
fs = new FileStream(str[0], FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
tr = new StreamReader(fs);
while ((line = tr.ReadLine()) != null)
{
listBox.Items.Add(line);
}
}
}
private void fileSystemWatcher_Changed(object sender, FileSystemEventArgs e)
{
string line;
line = tr.ReadLine();
listBox.Items.Add(line);
}
If you are just looking for a tool to do this, then check out the free version of BareTail.
I have a StreamReader object that I initialized with a stream, now I want to save this stream to disk (the stream may be a .gif or .jpg or .pdf).
Existing Code:
StreamReader sr = new StreamReader(myOtherObject.InputStream);
I need to save this to disk (I have the filename).
In the future I may want to store this to SQL Server.
I have the encoding type also, which I will need if I store it to SQL Server, correct?
As highlighted by Tilendor in Jon Skeet's answer, streams have a CopyTo method since .NET 4.
var fileStream = File.Create("C:\\Path\\To\\File");
myOtherObject.InputStream.Seek(0, SeekOrigin.Begin);
myOtherObject.InputStream.CopyTo(fileStream);
fileStream.Close();
Or with the using syntax:
using (var fileStream = File.Create("C:\\Path\\To\\File"))
{
myOtherObject.InputStream.Seek(0, SeekOrigin.Begin);
myOtherObject.InputStream.CopyTo(fileStream);
}
You have to call Seek if you're not already at the beginning or you won't copy the entire stream.
You must not use StreamReader for binary files (like gifs or jpgs). StreamReader is for text data. You will almost certainly lose data if you use it for arbitrary binary data. (If you use Encoding.GetEncoding(28591) you will probably be okay, but what's the point?)
Why do you need to use a StreamReader at all? Why not just keep the binary data as binary data and write it back to disk (or SQL) as binary data?
EDIT: As this seems to be something people want to see... if you do just want to copy one stream to another (e.g. to a file) use something like this:
/// <summary>
/// Copies the contents of input to output. Doesn't close either stream.
/// </summary>
public static void CopyStream(Stream input, Stream output)
{
byte[] buffer = new byte[8 * 1024];
int len;
while ( (len = input.Read(buffer, 0, buffer.Length)) > 0)
{
output.Write(buffer, 0, len);
}
}
To use it to dump a stream to a file, for example:
using (Stream file = File.Create(filename))
{
CopyStream(input, file);
}
Note that Stream.CopyTo was introduced in .NET 4, serving basically the same purpose.
public void CopyStream(Stream stream, string destPath)
{
using (var fileStream = new FileStream(destPath, FileMode.Create, FileAccess.Write))
{
stream.CopyTo(fileStream);
}
}
private void SaveFileStream(String path, Stream stream)
{
using (var fileStream = new FileStream(path, FileMode.Create, FileAccess.Write))
{
stream.CopyTo(fileStream);
}
}
I don't get all of the answers using CopyTo, where maybe the systems using the app might not have been upgraded to .NET 4.0+. I know some would like to force people to upgrade, but compatibility is also nice, too.
Another thing: I don't get using a stream to copy from another stream in the first place. Why not just do the following (this assumes InputStream is a MemoryStream or another type that exposes a ToArray() method; the base Stream class doesn't have one):
byte[] bytes = myOtherObject.InputStream.ToArray();
Once you have the bytes, you can easily write them to a file:
public static void WriteFile(string fileName, byte[] bytes)
{
string path = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
if (!path.EndsWith(@"\")) path += @"\";
if (File.Exists(Path.Combine(path, fileName)))
File.Delete(Path.Combine(path, fileName));
using (FileStream fs = new FileStream(Path.Combine(path, fileName), FileMode.CreateNew, FileAccess.Write))
{
fs.Write(bytes, 0, (int)bytes.Length);
//fs.Close();
}
}
This code works as I've tested it with a .jpg file, though I admit I have only used it with small files (less than 1 MB). One stream, no copying between streams, no encoding needed, just write the bytes! No need to over-complicate things with StreamReader if you already have a stream you can convert to bytes directly with .ToArray()!
The only potential downside I can see in doing it this way is with large files: keeping it as a stream and using .CopyTo() (or equivalent) lets FileStream stream the data instead of holding it all in a byte array. Doing it with a byte array might be slower as a result, and you have to have enough memory to hold the whole stream as a byte[] object, but it shouldn't choke, since the FileStream's .Write() method handles writing the bytes out. In my situation, where I was getting an OracleBlob, I had to go to a byte[] anyway; it was small enough, and besides, there was no streaming available to me, so I just sent my bytes to my function above.
Another option, using a stream, would be to use it with Jon Skeet's CopyStream function that was in another post - this just uses FileStream to take the input stream and create the file from it directly. It does not use File.Create, like he did (which initially seemed to be problematic for me, but later found it was likely just a VS bug...).
/// <summary>
/// Copies the contents of input to output. Doesn't close either stream.
/// </summary>
public static void CopyStream(Stream input, Stream output)
{
byte[] buffer = new byte[8 * 1024];
int len;
while ( (len = input.Read(buffer, 0, buffer.Length)) > 0)
{
output.Write(buffer, 0, len);
}
}
public static void WriteFile(string fileName, Stream inputStream)
{
string path = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
if (!path.EndsWith(@"\")) path += @"\";
if (File.Exists(Path.Combine(path, fileName)))
File.Delete(Path.Combine(path, fileName));
using (FileStream fs = new FileStream(Path.Combine(path, fileName), FileMode.CreateNew, FileAccess.Write))
{
CopyStream(inputStream, fs);
}
inputStream.Close();
}
Here's an example with proper using blocks around the IDisposable streams:
static void WriteToFile(string sourceFile, string destinationfile, bool append = true, int bufferSize = 4096)
{
using (var sourceFileStream = new FileStream(sourceFile, FileMode.OpenOrCreate))
{
using (var destinationFileStream = new FileStream(destinationfile, FileMode.OpenOrCreate))
{
while (sourceFileStream.Position < sourceFileStream.Length)
{
destinationFileStream.WriteByte((byte)sourceFileStream.ReadByte());
}
}
}
}
...and there's also this
public static void WriteToFile(Stream stream, string destinationFile, int bufferSize = 4096, FileMode mode = FileMode.OpenOrCreate, FileAccess access = FileAccess.ReadWrite, FileShare share = FileShare.ReadWrite)
{
using (var destinationFileStream = new FileStream(destinationFile, mode, access, share))
{
while (stream.Position < stream.Length)
{
destinationFileStream.WriteByte((byte)stream.ReadByte());
}
}
}
The key is understanding the proper use of using (which should wrap the instantiation of any object that implements IDisposable, as shown above) and having a good idea of how the properties of streams work. Position is literally the index within the stream (starting at 0) that advances as each byte is read with the ReadByte method. In this case I am essentially using it in place of a for-loop variable and simply letting it follow through all the way up to Length, which is literally the end of the entire stream in bytes. The result is something simple and elegant that resolves everything cleanly.
Keep in mind, too, that the ReadByte method simply casts the byte to an int in the process, and it can simply be converted back.
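Usage of the second overload might look like this (the source path and destination are just examples):
using (var source = File.OpenRead(@"C:\temp\input.bin"))
{
    WriteToFile(source, @"C:\temp\output.bin");
}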
I'll add another implementation I recently wrote, which creates a dynamic buffer of sorts to ensure sequential data writes and prevent massive overload.
private void StreamBuffer(Stream stream, int buffer)
{
using (var memoryStream = new MemoryStream())
{
stream.CopyTo(memoryStream);
var memoryBuffer = memoryStream.GetBuffer();
for (int i = 0; i < memoryStream.Length;)
{
var networkBuffer = new byte[buffer];
int j = 0;
for (; j < networkBuffer.Length && i < memoryStream.Length; j++)
{
networkBuffer[j] = memoryBuffer[i];
i++;
}
// Assuming destinationFileStream is a FileStream for the destination file, opened elsewhere;
// write only the bytes that were actually copied into this chunk
destinationFileStream.Write(networkBuffer, 0, j);
}
}
}
The explanation is fairly simple: we need to keep the entire set of data we wish to write in mind, but we only want to write a certain amount at a time, so the outer loop has an empty final parameter (the same as a while). Next, we initialize a byte-array buffer set to the size that was passed in, and in the inner loop we compare j to the size of that buffer and i to the size of the original data; once we pass the end of the original byte array, the run ends.
Why not use a FileStream object?
public void SaveStreamToFile(string fileFullPath, Stream stream)
{
if (stream.Length == 0) return;
// Create a FileStream object to write a stream to a file
using (FileStream fileStream = System.IO.File.Create(fileFullPath, (int)stream.Length))
{
// Fill the byte[] array with the stream data; Read may return fewer bytes
// than requested, so loop until the whole stream has been consumed
byte[] bytesInStream = new byte[stream.Length];
int totalRead = 0;
while (totalRead < bytesInStream.Length)
{
int read = stream.Read(bytesInStream, totalRead, bytesInStream.Length - totalRead);
if (read == 0) break; // end of stream
totalRead += read;
}
// Use the FileStream object to write to the specified file
fileStream.Write(bytesInStream, 0, totalRead);
}
}
//If you don't have .Net 4.0 :)
public void SaveStreamToFile(Stream stream, string filename)
{
using(Stream destination = File.Create(filename))
Write(stream, destination);
}
//Typically I implement this Write method as a Stream extension method.
//The framework handles buffering.
public void Write(Stream from, Stream to)
{
for(int a = from.ReadByte(); a != -1; a = from.ReadByte())
to.WriteByte( (byte) a );
}
/*
Note, StreamReader deals in Chars while Stream deals in bytes.
The distinction is significant with multiple-byte character encodings
like the Unicode encodings used in .NET, where a Char is one or more bytes (byte[n]). Also, the
resulting translation from bytes to Chars can lose bytes
or insert them (for example, "\n" vs. "\r\n") depending on the StreamReader instance's
CurrentEncoding.
*/
Another option is to get the stream to a byte[] and use File.WriteAllBytes. This should do:
using (var stream = new MemoryStream())
{
input.CopyTo(stream);
File.WriteAllBytes(file, stream.ToArray());
}
Wrapping it in an extension method (which must live in a static class) gives it better naming:
public static void WriteTo(this Stream input, string file)
{
//your fav write method:
using (var stream = File.Create(file))
{
input.CopyTo(stream);
}
//or
using (var stream = new MemoryStream())
{
input.CopyTo(stream);
File.WriteAllBytes(file, stream.ToArray());
}
//whatever that fits.
}
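With the extension method in place, usage is just (the path is an example):
myOtherObject.InputStream.WriteTo(@"C:\temp\picture.jpg");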
public void TestDownload(Stream input)
{
byte[] buffer = new byte[16345];
using (FileStream fs = new FileStream(this.FullLocalFilePath,
FileMode.Create, FileAccess.Write, FileShare.None))
{
int read;
while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
{
fs.Write(buffer, 0, read);
}
}
}
When deleting a file, I want to fill the clusters on which the file was written with zeros.
Can this be done by means of C#?
P.S. Without a full format.
Here is some code along the lines of BugFinder's comment. The method overwrites the file data with 4 KB blocks of zero bytes.
public static class FileHelper
{
public static void EraseFile(string fileName)
{
var fileInfo = new FileInfo(fileName);
var zeroBuffer = new byte[4096];
using (var file = new FileStream(fileName, FileMode.Open, FileAccess.Write))
{
while (file.Position < fileInfo.Length)
{
var bytesToWrite = (int)Math.Min(4096, fileInfo.Length - file.Position);
file.Write(zeroBuffer, 0, bytesToWrite);
}
file.Flush(true);
}
File.Delete(fileName);
}
}
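Usage, with an example path:
FileHelper.EraseFile(@"C:\temp\secret.txt");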
I'm creating simple self-extracting archive using magic number to mark the beginning of the content.
For now it is a textfile:
MAGICNUMBER .... content of the text file
Next, the text file is copied to the end of the executable:
copy programm.exe/b+textfile.txt/b sfx.exe
I'm trying to find the second occurrence of the magic number (the first one would be a hardcoded constant obviously) using the following code:
string my_filename = System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName;
StreamReader file = new StreamReader(my_filename);
const int block_size = 1024;
const string magic = "MAGICNUMBER";
char[] buffer = new Char[block_size];
Int64 count = 0;
Int64 glob_pos = 0;
bool flag = false;
while (file.ReadBlock(buffer, 0, block_size) > 0)
{
var rel_pos = buffer.ToString().IndexOf(magic);
if ((rel_pos > -1) & (!flag))
{
flag = true;
continue;
}
if ((rel_pos > -1) & (flag == true))
{
glob_pos = block_size * count + rel_pos;
break;
}
count++;
}
using (FileStream fs = new FileStream(my_filename, FileMode.Open, FileAccess.Read))
{
byte[] b = new byte[fs.Length - glob_pos];
fs.Seek(glob_pos, SeekOrigin.Begin);
fs.Read(b, 0, (int)(fs.Length - glob_pos));
File.WriteAllBytes("c:/output.txt", b);
}
but for some reason I'm copying almost the entire file, not just the last few kilobytes. Is it because of compiler optimization, inlining the magic constant in the while loop, or something similar?
How should I do self-extraction archive properly?
I guessed I should read the file backwards to avoid the problem of the compiler inlining the magic constant multiple times.
So I've modified my code in the following way:
string my_filename = System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName;
StreamReader file = new StreamReader(my_filename);
const int block_size = 1024;
const string magic = "MAGIC";
char[] buffer = new Char[block_size];
Int64 count = 0;
Int64 glob_pos = 0;
while (file.ReadBlock(buffer, 0, block_size) > 0)
{
var rel_pos = buffer.ToString().IndexOf(magic);
if (rel_pos > -1)
{
glob_pos = block_size * count + rel_pos;
}
count++;
}
using (FileStream fs = new FileStream(my_filename, FileMode.Open, FileAccess.Read))
{
byte[] b = new byte[fs.Length - glob_pos];
fs.Seek(glob_pos, SeekOrigin.Begin);
fs.Read(b, 0, (int)(fs.Length - glob_pos));
File.WriteAllBytes("c:/output.txt", b);
}
So I've scanned the whole file once, found what I thought would be the last occurrence of the magic number, and copied from there to the end. While the file created by this procedure seems smaller than in the previous attempt, it is in no way the same file I attached to my "self-extracting" archive. Why?
My guess is that the position calculation for the beginning of the attached file is wrong because of the conversion from binary to string. If so, how should I modify my position calculation to make it correct?
Also, how should I choose a magic number when working with real files, PDFs for example? I won't be able to modify PDFs easily to include a predefined magic number.
Try this out. Some C# Stream IO 101:
public static void Main()
{
String path = @"c:\here is your path";
// Method A: Read all information into a Byte Stream
Byte[] data = System.IO.File.ReadAllBytes(path);
String[] lines = System.IO.File.ReadAllLines(path);
// Method B: Use a stream to do essentially the same thing. (More powerful)
// Using block essentially means 'close when we're done'. See 'using block' or 'IDisposable'.
using (FileStream stream = File.OpenRead(path))
using (StreamReader reader = new StreamReader(stream))
{
// This will read all the data as a single string
String allData = reader.ReadToEnd();
}
String outputPath = @"C:\where I'm writing to";
// Copy from one file-stream to another
using (FileStream inputStream = File.OpenRead(path))
using (FileStream outputStream = File.Create(outputPath))
{
inputStream.CopyTo(outputStream);
// Again, this will close both streams when done.
}
// Copy to an in-memory stream
using (FileStream inputStream = File.OpenRead(path))
using (MemoryStream outputStream = new MemoryStream())
{
inputStream.CopyTo(outputStream);
// Again, this will close both streams when done.
// If you want to hold the data in memory, just don't wrap your
// memory stream in a using block.
}
// Use serialization to store data.
var serializer = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
// We'll serialize a person to the memory stream.
MemoryStream memoryStream = new MemoryStream();
serializer.Serialize(memoryStream, new Person() { Name = "Sam", Age = 20 });
// Now the person is stored in the memory stream (just as easy to write to disk using a
// file stream as well.)
// Now lets reset the stream to the beginning:
memoryStream.Seek(0, SeekOrigin.Begin);
// And deserialize the person
Person deserializedPerson = (Person)serializer.Deserialize(memoryStream);
Console.WriteLine(deserializedPerson.Name); // Should print Sam
}
// Mark Serializable stuff as serializable.
// This means that C# will automatically format this to be put in a stream
[Serializable]
class Person
{
public String Name { get; set; }
public Int32 Age { get; set; }
}
The easiest solution is to replace
const string magic = "MAGICNUMBER";
with
static string magic = "magicnumber".ToUpper();
But there are more problems with the whole magic string approach. What if the file contains the magic string? I think that the best solution is to put the file size after the file. The extraction is much easier that way: read the length from the last bytes and read the required amount of bytes from the end of the file.
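A minimal sketch of that length-at-the-end idea, reusing the file names from the question; the payload length is stored as the last 8 bytes, and for brevity the reads assume the buffer is filled in one call:
// Packing: append the payload to a copy of the exe, then append the payload length.
byte[] payload = File.ReadAllBytes("textfile.txt");
File.Copy("programm.exe", "sfx.exe", true);
using (var output = new FileStream("sfx.exe", FileMode.Append))
{
    output.Write(payload, 0, payload.Length);
    output.Write(BitConverter.GetBytes((long)payload.Length), 0, 8);
}

// Extraction (inside sfx.exe): read the length from the last 8 bytes,
// then read that many bytes immediately before it.
string self = System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName;
using (var input = File.OpenRead(self))
{
    input.Seek(-8, SeekOrigin.End);
    byte[] lengthBytes = new byte[8];
    input.Read(lengthBytes, 0, 8);
    long length = BitConverter.ToInt64(lengthBytes, 0);

    input.Seek(-8 - length, SeekOrigin.End);
    byte[] data = new byte[length];
    input.Read(data, 0, (int)length);
    File.WriteAllBytes("output.txt", data);
}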
Update: This should work unless your files are very big. (You'd need to use a revolving pair of buffers in that case (to read the file in small blocks)):
string inputFilename = System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName;
string outputFilename = inputFilename + ".secret";
string magic = "magic".ToUpper();
byte[] data = File.ReadAllBytes(inputFilename);
byte[] magicData = Encoding.ASCII.GetBytes(magic);
for (int idx = magicData.Length - 1; idx < data.Length; idx++) {
bool found = true;
for (int magicIdx = 0; magicIdx < magicData.Length; magicIdx++) {
if (data[idx - magicData.Length + 1 + magicIdx] != magicData[magicIdx]) {
found = false;
break;
}
}
if (found) {
using (FileStream output = new FileStream(outputFilename, FileMode.Create)) {
output.Write(data, idx + 1, data.Length - idx - 1);
}
}
}
Update 2: This should be much faster, use little memory, and work on files of any size, but your program must be a proper executable (with its size being a multiple of 512 bytes):
string inputFilename = System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName;
string outputFilename = inputFilename + ".secret";
string marker = "magic".ToUpper();
byte[] data = File.ReadAllBytes(inputFilename);
byte[] markerData = Encoding.ASCII.GetBytes(marker);
int markerLength = markerData.Length;
const int blockSize = 512; //important!
using(FileStream input = File.OpenRead(inputFilename)) {
long lastPosition = 0;
byte[] buffer = new byte[blockSize];
while (input.Read(buffer, 0, blockSize) >= markerLength) {
bool found = true;
for (int idx = 0; idx < markerLength; idx++) {
if (buffer[idx] != markerData[idx]) {
found = false;
break;
}
}
if (found) {
input.Position = lastPosition + markerLength;
using (FileStream output = File.OpenWrite(outputFilename)) {
input.CopyTo(output);
}
}
lastPosition = input.Position;
}
}
Read about some approaches here: http://www.strchr.com/creating_self-extracting_executables
You can add the compressed file as a resource to the project itself:
Project > Properties
Set the property of this resource to Binary.
You can then retrieve the resource with
byte[] resource = Properties.Resources.NameOfYourResource;
Search backwards rather than forwards (assuming your file won't contain said magic number).
Or append your (text) file and then lastly its length (or the length of the original exe), so you only need to read the last DWORD / few bytes to see how long the file is - then no magic number is required.
More robustly, store the file as an additional data section within the executable file. This is more fiddly without external tools as it requires knowledge of the PE file format used for NT executables, q.v. http://msdn.microsoft.com/en-us/library/ms809762.aspx