I want to read a file continuously, like GNU tail with the "-f" param. I need it to live-read a log file.
What is the right way to do it?
A more natural approach, using FileSystemWatcher:
var wh = new AutoResetEvent(false);
var fsw = new FileSystemWatcher(".");
fsw.Filter = "file-to-read";
fsw.EnableRaisingEvents = true;
fsw.Changed += (s,e) => wh.Set();
var fs = new FileStream("file-to-read", FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
using (var sr = new StreamReader(fs))
{
var s = "";
while (true)
{
s = sr.ReadLine();
if (s != null)
Console.WriteLine(s);
else
wh.WaitOne(1000);
}
}
wh.Close();
Here the main reading loop stops to wait for incoming data, and FileSystemWatcher is used only to wake the reading loop up.
You want to open a FileStream in binary mode. Periodically, seek to the end of the file minus 1024 bytes (or whatever), then read to the end and output. That's how tail -f works.
Answers to your questions:
Binary because it's difficult to randomly access the file if you're reading it as text. You have to do the binary-to-text conversion yourself, but it's not difficult. (See below)
1024 bytes because it's a nice convenient number, and should handle 10 or 15 lines of text. Usually.
Here's an example of opening the file, reading the last 1024 bytes, and converting it to text:
static void ReadTail(string filename)
{
using (FileStream fs = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
// Seek 1024 bytes back from the end of the file
// (or to the start, if the file is shorter than that)
long seekBack = Math.Min(1024, fs.Length);
fs.Seek(-seekBack, SeekOrigin.End);
// read up to 1024 bytes, keeping track of how many were actually read
byte[] bytes = new byte[seekBack];
int bytesRead = fs.Read(bytes, 0, bytes.Length);
// Convert the bytes that were read to a string
string s = Encoding.Default.GetString(bytes, 0, bytesRead);
// or: string s = Encoding.UTF8.GetString(bytes, 0, bytesRead);
// and output to console
Console.WriteLine(s);
}
}
Note that you must open with FileShare.ReadWrite, since you're trying to read a file that's currently open for writing by another process.
Also note that I used Encoding.Default, which in US/English and for most Western European languages will be an 8-bit character encoding. If the file is written in some other encoding (UTF-8 or another Unicode encoding), it's possible that the bytes won't convert correctly to characters. You'll have to handle that by determining the encoding if you think this will be a problem. Search Stack Overflow for info about determining a file's text encoding.
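If a byte-order mark is all you need to detect, StreamReader can sniff it for you; a minimal sketch (the method name is just an example, and BOM-less UTF-8 still needs other heuristics):
static Encoding DetectEncoding(string filename)
{
    // Falls back to Encoding.Default when no BOM is present.
    using (var reader = new StreamReader(filename, Encoding.Default, detectEncodingFromByteOrderMarks: true))
    {
        reader.Peek(); // force the reader to inspect the first bytes
        return reader.CurrentEncoding;
    }
}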
If you want to do this periodically (every 15 seconds, for example), you can set up a timer that calls the ReadTail method as often as you want. You could optimize things a bit by opening the file only once at the start of the program. That's up to you.
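For example, a rough sketch of the periodic version using System.Threading.Timer and the ReadTail method above (the file name is an assumption):
static void Main()
{
    string filename = "file-to-read"; // assumed log file path
    // call ReadTail immediately, then every 15 seconds, until a key is pressed
    using (var timer = new System.Threading.Timer(_ => ReadTail(filename), null,
        TimeSpan.Zero, TimeSpan.FromSeconds(15)))
    {
        Console.ReadKey();
    }
}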
To continuously monitor the tail of the file, you just need to remember the length of the file from the previous read.
public static void MonitorTailOfFile(string filePath)
{
var initialFileSize = new FileInfo(filePath).Length;
var lastReadLength = initialFileSize - 1024;
if (lastReadLength < 0) lastReadLength = 0;
while (true)
{
try
{
var fileSize = new FileInfo(filePath).Length;
if (fileSize > lastReadLength)
{
using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
fs.Seek(lastReadLength, SeekOrigin.Begin);
var buffer = new byte[1024];
while (true)
{
var bytesRead = fs.Read(buffer, 0, buffer.Length);
lastReadLength += bytesRead;
if (bytesRead == 0)
break;
var text = Encoding.ASCII.GetString(buffer, 0, bytesRead);
Console.Write(text);
}
}
}
}
catch { }
Thread.Sleep(1000);
}
}
I had to use ASCII encoding because this code isn't smart enough to cater for the variable character lengths of UTF-8 on buffer boundaries.
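If you do need UTF-8, a stateful System.Text.Decoder keeps partial multi-byte sequences between reads. Here's a sketch of what the inner read loop of MonitorTailOfFile could look like under that assumption:
// Sketch: a Decoder carries state across calls, so a character split
// across two buffers is still decoded correctly.
var decoder = Encoding.UTF8.GetDecoder();
var buffer = new byte[1024];
var chars = new char[Encoding.UTF8.GetMaxCharCount(buffer.Length)];
int bytesRead;
while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
{
    lastReadLength += bytesRead;
    int charCount = decoder.GetChars(buffer, 0, bytesRead, chars, 0);
    Console.Write(chars, 0, charCount);
}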
Note: You can change the Thread.Sleep part to use different timings, and you can also combine it with a FileSystemWatcher and a blocking pattern such as Monitor.Enter/Wait/Pulse. For me the timer is enough; at most it checks the file length once a second if the file hasn't changed.
This is my solution
static IEnumerable<string> TailFrom(string file)
{
using (var reader = File.OpenText(file))
{
while (true)
{
string line = reader.ReadLine();
// if the file was truncated, start over from the beginning
if (reader.BaseStream.Length < reader.BaseStream.Position)
{
reader.BaseStream.Seek(0, SeekOrigin.Begin);
reader.DiscardBufferedData(); // drop the reader's stale buffered data after seeking
}
if (line != null) yield return line;
else Thread.Sleep(500);
}
}
}
So, in your code you can do:
foreach (string line in TailFrom(file))
{
Console.WriteLine($"line read= {line}");
}
You could use the FileSystemWatcher class, which can send notifications for different events happening on the file system, such as a file being changed.
private void button1_Click(object sender, EventArgs e)
{
if (folderBrowserDialog.ShowDialog() == DialogResult.OK)
{
path = folderBrowserDialog.SelectedPath;
fileSystemWatcher.Path = path;
string[] str = Directory.GetFiles(path);
string line;
fs = new FileStream(str[0], FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
tr = new StreamReader(fs);
while ((line = tr.ReadLine()) != null)
{
listBox.Items.Add(line);
}
}
}
private void fileSystemWatcher_Changed(object sender, FileSystemEventArgs e)
{
string line;
// read every line appended since the last event, not just one
while ((line = tr.ReadLine()) != null)
{
listBox.Items.Add(line);
}
}
If you are just looking for a tool to do this, check out the free version of BareTail.
One way or another, all digital data is stored as 0s and 1s. That's the principle of binary data, I guess.
Is there a method or package that can show you the binary code of a file or single-exe program, i.e. how it is actually stored in 0/1 format?
I would see it like:
- import a certain, random file
- convert it to its 0/1 format
- store the 1/0 data in a txt file (StreamWriter/BinaryWriter)
If yes, is this available in any .NET language (pref. C#)?
Essentially you just need to break this into two steps:
Convert a file into bytes
Convert a byte into a binary string
The first step is easy:
var fileBytes = File.ReadAllBytes(someFileName);
The second step is less straightforward, but still pretty easy:
var byteString = string.Concat(fileBytes.Select(x => Convert.ToString(x, 2).PadLeft(8, '0')));
The idea here is that you select each byte individually, converting each one to a binary string (pad left so each one is 8 characters, since many bytes have leading zeroes), and concatenate all of those into a single string. (Courtesy in part of #xanatos' comment below.)
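If the end goal is the text file mentioned in the question, writing the result out is one more call (the output path here is just an example):
File.WriteAllText(@"c:\temp\bits.txt", byteString); // hypothetical output path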
I think this is what you are looking for:
byte [] contents = File.ReadAllBytes(filePath);
StringBuilder builder = new StringBuilder();
for (int i = 0; i < contents.Length; i++)
{
builder.Append(Convert.ToString(contents[i], 2).PadLeft(8, '0'));
}
Now you can, for example, write the builder contents to a text file.
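For example, with a StreamWriter as the question suggested (the output path is an assumption):
using (var writer = new StreamWriter(@"c:\temp\binary.txt")) // hypothetical path
{
    writer.Write(builder.ToString());
}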
This will stream the conversion, which is useful if you have a huge file.
using System;
using System.IO;
using System.Linq;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
var buffer = new byte[1024];
int pos = 0;
using (var fileIn = new FileStream(@"c:\test.txt", FileMode.Open, FileAccess.Read))
using (var fileOut = new FileStream(@"c:\test.txt.binary", FileMode.Create, FileAccess.Write))
while((pos = fileIn.Read(buffer,0,buffer.Length)) > 0)
foreach (var value in buffer.Take(pos).Select(x => Convert.ToString(x, 2).PadLeft(8, '0')))
fileOut.Write(value.Select(x => (byte)x).ToArray(), 0, 8);
}
}
}
You can open the file in binary mode. I didn't test it, but it should work:
BitArray GetBits(string fileSrc)
{
byte[] bytesFile;
using (FileStream file = new FileStream(fileSrc, FileMode.Open, FileAccess.Read))
{
bytesFile = new byte[file.Length];
file.Read(bytesFile, 0, (int)file.Length);
}
return new BitArray(bytesFile);
}
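One possible way to turn that BitArray into the 0/1 text file the question asks for (paths are examples). Note that BitArray(byte[]) exposes each byte least-significant bit first, so the output is bit-reversed within each byte compared to Convert.ToString(b, 2):
BitArray bits = GetBits(@"c:\temp\input.bin");   // hypothetical input path
var sb = new StringBuilder(bits.Length);
foreach (bool bit in bits)
    sb.Append(bit ? '1' : '0');
File.WriteAllText(@"c:\temp\output.txt", sb.ToString()); // hypothetical output path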
A solution using FileStream, StreamWriter, StringBuilder and Convert
static void Main(string[] args)
{
StringBuilder sb = new StringBuilder();
using (FileStream fs = new FileStream(InputFILEPATH, FileMode.Open))
{
while (fs.Position != fs.Length)
{
sb.Append(Convert.ToString(fs.ReadByte(), 2).PadLeft(8, '0')); // pad to 8 bits so leading zeros aren't lost
}
}
using (StreamWriter stw = new StreamWriter(File.Open(OutputFILEPATH,FileMode.OpenOrCreate)))
{
stw.WriteLine(sb.ToString());
}
Console.ReadKey();
}
I've tried a lot to write a file from a collection of bytes, but the file always gets corrupted. I'm not sure why this is happening. If somebody knows about it, that would help me a lot.
Note: It always works when I uncomment this line under the while loop: //AppendAllBytes(pathSource, bytes);
But I need the bytes from the object; later on I will use this concept for p2p.
namespace Sender
{
static class Program
{
static void Main(string[] args)
{
string pathSource = "../../Ok&SkipButtonForWelcomeToJakayaWindow.jpg";
using (FileStream fsSource = new FileStream(pathSource,
FileMode.Open, FileAccess.Read))
{
// Read the source file into a byte array.
const int numBytesToRead = 100000; // Your amount to read at a time
byte[] bytes = new byte[numBytesToRead];
int numBytesRead = 0;
if (File.Exists(pathSource))
{
Console.WriteLine("File of this name already exist, you want to continue?");
System.IO.FileInfo obj = new System.IO.FileInfo(pathSource);
pathSource = "../../Files/" + Guid.NewGuid() + obj.Extension;
}
int i = 0;
byte[] objBytes = new byte[numBytesRead];
List<FileInfo> objFileInfo = new List<FileInfo>();
Guid fileID = Guid.NewGuid();
FileInfo fileInfo = null;
while (numBytesToRead > 0)
{
// Read may return anything from 0 to numBytesToRead.
int n = fsSource.Read(bytes, numBytesRead, numBytesToRead);
i++;
//AppendAllBytes(pathSource, bytes);
fileInfo = new FileInfo { FileID = fileID, FileBytes = bytes, FileByteID = i };
objFileInfo.Add(fileInfo);
// Break when the end of the file is reached.
if (n == 0)
{
break;
}
// Do here what you want to do with the bytes read (convert to string using Encoding.YourEncoding.GetString())
}
//foreach (var b in objFileInfo.OrderBy(m => m.FileByteID))
//{
// AppendAllBytes(pathSource, b.FileBytes);
//}
foreach (var item in objFileInfo)
{
AppendAllBytes(pathSource, item.FileBytes);
}
fileInfo = null;
}
}
static void AppendAllBytes(string path, byte[] bytes)
{
using (var stream = new FileStream(path, FileMode.Append))
{
stream.Write(bytes, 0, bytes.Length);
}
}
}
class FileInfo
{
public Guid FileID { get; set; }
public int FileByteID { get; set; }
public byte[] FileBytes { get; set; }
}
}
You don't increase numBytesRead and don't decrease numBytesToRead.
objFileInfo contains a List of FileInfo which contains a reference type byte[].
You copy the reference to the bytes when you create a new FileInfo and then repeatedly overwrite those bytes until you reach the end of the file.
byte[] bytes = new byte[numBytesToRead];
//...
List<FileInfo> objFileInfo = new List<FileInfo>();
//...
//...
while (numBytesToRead > 0)
{
int n = fsSource.Read(bytes, numBytesRead, numBytesToRead);
//First time here bytes[0] == the first byte of the file
//Second time here bytes[0] == the 100000th byte of the file
//...
//The following line should copy the bytes into file info instead of the reference to the existing byte array
fileInfo = new FileInfo { ..., FileBytes = bytes, ... };
objFileInfo.Add(fileInfo);
//First time here objFileInfo[0].FileBytes[0] == first byte of file
//Second time here objFileInfo[0].FileBytes[0] == the 100000th byte of the file, because objFileInfo[All].FileBytes == bytes
//...
}
You can test this by looking at the FileBytes variable for multiple FileInfo instances. I'd bet the contents look identical.
There are two problems in your code:
1. The blocks of data are all of size 100000, which cannot work most of the time unless the file size is exactly a multiple of that; the last block of data will contain trailing 0s.
2. FileInfo.FileBytes will change whenever the buffer is overwritten, causing every single block of data to end up identical to the last block read.
using System;
using System.Collections.Generic;
using System.IO;
static class Program
{
static void Main(string[] args)
{
string pathSource = "test.jpg";
using (FileStream fsSource = new FileStream(pathSource, FileMode.Open, FileAccess.Read))
{
// Read the source file into a byte array.
const int BufferSize = 100000; // Your amount to read at a time
byte[] buffer = new byte[BufferSize];
if (File.Exists(pathSource))
{
Console.WriteLine("File of this name already exist, you want to continue?");
System.IO.FileInfo obj = new System.IO.FileInfo(pathSource);
pathSource = "Files/" + Guid.NewGuid() + obj.Extension;
}
int i = 0, offset = 0, bytesRead;
List<FileInfo> objFileInfo = new List<FileInfo>();
Guid fileID = Guid.NewGuid();
while (0 != (bytesRead = fsSource.Read(buffer, offset, BufferSize)))
{
var data = new byte[bytesRead];
Array.Copy(buffer, data, bytesRead);
objFileInfo.Add(new FileInfo { FileID = fileID, FileBytes = data, FileByteID = ++i });
}
foreach (var item in objFileInfo)
{
AppendAllBytes(pathSource, item.FileBytes);
}
}
}
static void AppendAllBytes(string path, byte[] bytes)
{
using (var stream = new FileStream(path, FileMode.Append))
{
stream.Write(bytes, 0, bytes.Length);
}
}
}
class FileInfo
{
public Guid FileID { get; set; }
public int FileByteID { get; set; }
public byte[] FileBytes { get; set; }
}
I am using ICSharpCode.SharpZipLib.Zip.FastZip to zip files but I'm stuck on a problem:
When I try to zip a file with special characters in its file name, it does not work. It works when there are no special characters in the file name.
I think you cannot use FastZip. You need to iterate the files and add the entries yourself specifying:
entry.IsUnicodeText = true;
to tell SharpZipLib that the entry name is Unicode.
string[] filenames = Directory.GetFiles(sTargetFolderPath);
// Zip up the files - From SharpZipLib Demo Code
using (ZipOutputStream s = new ZipOutputStream(File.Create("MyZipFile.zip")))
{
s.SetLevel(9); // 0-9, 9 being the highest compression
byte[] buffer = new byte[4096];
foreach (string file in filenames)
{
ZipEntry entry = new ZipEntry(Path.GetFileName(file));
entry.DateTime = DateTime.Now;
entry.IsUnicodeText = true;
s.PutNextEntry(entry);
using (FileStream fs = File.OpenRead(file))
{
int sourceBytes;
do
{
sourceBytes = fs.Read(buffer, 0, buffer.Length);
s.Write(buffer, 0, sourceBytes);
} while (sourceBytes > 0);
}
}
s.Finish();
s.Close();
}
You can continue using FastZip if you would like, but you need to give it a ZipEntryFactory that creates ZipEntrys with IsUnicodeText = true.
var zfe = new ZipEntryFactory { IsUnicodeText = true };
var fz = new FastZip { EntryFactory = zfe };
fz.CreateZip("out.zip", "C:\in", true, null);
You have to download and compile the latest version of the SharpZipLib library so you can use
entry.IsUnicodeText = true;
Here is your snippet (slightly modified):
FileInfo file = new FileInfo("input.ext");
using(var sw = new FileStream("output.zip", FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
using(var zipStream = new ZipOutputStream(sw))
{
var entry = new ZipEntry(file.Name);
entry.IsUnicodeText = true;
zipStream.PutNextEntry(entry);
using (var reader = new FileStream(file.FullName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = reader.Read(buffer, 0, buffer.Length)) > 0)
{
byte[] actual = new byte[bytesRead];
Buffer.BlockCopy(buffer, 0, actual, 0, bytesRead);
zipStream.Write(actual, 0, actual.Length);
}
}
}
}
Possibility 1: you are passing a filename to the regex file filter.
Possibility 2: those characters are not allowed in zip files (or at least SharpZipLib thinks so)
Try to take out the special character from the file name, i.e. replace it:
your Filename.Replace("&", "&");
What is the best way to add text to the beginning of a file using C#?
I couldn't find a straightforward way to do this, but came up with a couple work-arounds.
1. Open up a new file, write the text that I want to add, then append the text from the old file to the end of the new file.
2. Since the text I want to add should be less than 200 characters, I was thinking that I could add whitespace characters to the beginning of the file, and then overwrite the whitespace with the text I wanted to add.
Has anyone else come across this problem, and if so, what did you do?
This works for me, but only for small files. It's probably not a very good solution otherwise.
string currentContent = String.Empty;
if (File.Exists(filePath))
{
currentContent = File.ReadAllText(filePath);
}
File.WriteAllText(filePath, newContent + currentContent);
Adding to the beginning of a file (prepending, as opposed to appending) is generally not a supported operation. Your #1 option is fine. If you can't write a temp file, you can pull the entire file into memory, prepend your data to the byte array, and then overwrite it back out (this is only really feasible if your files are small and you don't have to have a bunch in memory at once, because prepending to the array is not necessarily easy without a copy either).
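A minimal sketch of that in-memory variant, assuming the file is small enough to hold in a byte array:
static void PrependBytes(string path, byte[] newData)
{
    byte[] existing = File.ReadAllBytes(path);
    byte[] combined = new byte[newData.Length + existing.Length];
    Buffer.BlockCopy(newData, 0, combined, 0, newData.Length);
    Buffer.BlockCopy(existing, 0, combined, newData.Length, existing.Length);
    File.WriteAllBytes(path, combined); // rewrites the whole file with the data prepended
}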
Yeah, basically you can use something like this:
public static void PrependString(string value, FileStream file)
{
// read the whole existing file into the buffer
var buffer = new byte[file.Length];
int offset = 0, read;
while ((read = file.Read(buffer, offset, buffer.Length - offset)) != 0)
offset += read;
if(!file.CanWrite)
throw new ArgumentException("The specified file cannot be written.", "file");
file.Position = 0;
var data = Encoding.Unicode.GetBytes(value);
file.SetLength(buffer.Length + data.Length);
file.Write(data, 0, data.Length);
file.Write(buffer, 0, buffer.Length);
}
public static void Prepend(this FileStream file, string value)
{
PrependString(value, file);
}
Then
using(var file = File.Open("yourtext.txt", FileMode.Open, FileAccess.ReadWrite))
{
file.Prepend("Text you want to write.");
}
Not really efficient, though, in the case of huge files.
Using two streams, you can do it in place, but keep in mind that this will still loop over the whole file on every addition.
using System;
using System.IO;
using System.Text;
namespace FilePrepender
{
public class FilePrepender
{
private string file=null;
public FilePrepender(string filePath)
{
file = filePath;
}
public void prependline(string line)
{
prepend(line + Environment.NewLine);
}
private void shiftSection(byte[] chunk,FileStream readStream,FileStream writeStream)
{
long initialOffsetRead = readStream.Position;
long initialOffsetWrite= writeStream.Position;
int offset = 0;
int remaining = chunk.Length;
do//ensure that the entire chunk length gets read and shifted
{
int read = readStream.Read(chunk, offset, remaining);
offset += read;
remaining -= read;
} while (remaining > 0);
writeStream.Write(chunk, 0, chunk.Length);
writeStream.Seek(initialOffsetWrite, SeekOrigin.Begin);
readStream.Seek(initialOffsetRead, SeekOrigin.Begin);
}
public void prepend(string text)
{
byte[] bytes = Encoding.Default.GetBytes(text);
byte[] chunk = new byte[bytes.Length];
using (FileStream readStream = File.Open(file, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
using(FileStream writeStream = File.Open(file, FileMode.OpenOrCreate, FileAccess.Write, FileShare.ReadWrite))
{
readStream.Seek(0, SeekOrigin.End);//seek to the end of the file
writeStream.Seek(chunk.Length, SeekOrigin.End);//seek chunk.Length past the end, which lets the loop run without special cases
long size = readStream.Position;
//while there's a whole chunks worth above the read head, shift the file contents down from the end
while(readStream.Position - chunk.Length >= 0)
{
readStream.Seek(-chunk.Length, SeekOrigin.Current);
writeStream.Seek(-chunk.Length, SeekOrigin.Current);
shiftSection(chunk, readStream, writeStream);
}
//clean up the remaining shift for the bytes that don't fit in size%chunk.Length
readStream.Seek(0, SeekOrigin.Begin);
writeStream.Seek(Math.Min(size, chunk.Length), SeekOrigin.Begin);
shiftSection(chunk, readStream, writeStream);
//finally, write the text you want to prepend
writeStream.Seek(0,SeekOrigin.Begin);
writeStream.Write(bytes, 0, bytes.Length);
}
}
}
}
}
I think the best way is to create a temp file. Add your text, then read the contents of the original file and append them to the temp file. Then you can overwrite the original with the temp file.
prepend:
private const string tempDirPath = @"c:\temp\log.log", tempDirNewPath = @"c:\temp\log.new";
StringBuilder sb = new StringBuilder();
...
File.WriteAllText(tempDirNewPath, sb.ToString());
File.AppendAllText(tempDirNewPath, File.ReadAllText(tempDirPath));
File.Delete(tempDirPath);
File.Move(tempDirNewPath, tempDirPath);
using (FileStream fs = File.OpenWrite(tempDirPath))
{ //truncate to a reasonable length
if (16384 < fs.Length) fs.SetLength(16384);
fs.Close();
}
// The file we'll prepend to
string filePath = path + "\\log.log";
// A temp file we'll write to
string tempFilePath = path + "\\temp.log";
// 1) Write your prepended contents to a temp file.
using (var writer = new StreamWriter(tempFilePath, false))
{
// Write whatever you want to prepend
writer.WriteLine("Hi");
}
// 2) Use stream lib methods to append the original contents to the Temp
// file.
using (var oldFile = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.Read, FileShare.Read))
{
using (var tempFile = new FileStream(tempFilePath, FileMode.Append, FileAccess.Write, FileShare.Read))
{
oldFile.CopyTo(tempFile);
}
}
// 3) Finally, dump the Temp file back to the original, keeping all its
// original permissions etc.
File.Replace(tempFilePath, filePath, null);
Even if what you're writing is small, the Temp file gets the entire original file appended to it before the .Replace(), so it does need to be on disk.
Note that this code is not thread-safe; if more than one thread accesses this code, you can lose writes in the file swapping going on here. That said, it's also pretty expensive, so you'd want to gate access to it anyway - pass writes from multiple producer threads to a buffer, which periodically empties out via this prepend method on a single consumer thread.
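One way to gate it, sketched with a BlockingCollection and a single consumer thread (the names, and the PrependToFile wrapper around the snippet above, are assumptions):
// Hypothetical gating: producers call Enqueue from any thread; one consumer
// thread drains the queue and prepends each batch in a single pass.
static readonly BlockingCollection<string> Pending = new BlockingCollection<string>();

static void Enqueue(string line) => Pending.Add(line);

static void ConsumeLoop(string filePath, string tempFilePath)
{
    foreach (var first in Pending.GetConsumingEnumerable())
    {
        var batch = new StringBuilder().AppendLine(first);
        while (Pending.TryTake(out var next))   // grab anything else already queued
            batch.AppendLine(next);
        PrependToFile(filePath, tempFilePath, batch.ToString()); // assumed wrapper around the code above
    }
}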
You should be able to do this without opening a new file. Use the following File method:
public static FileStream Open(
string path,
FileMode mode,
FileAccess access
)
Making sure to specify FileAccess.ReadWrite.
Using the FileStream returned from File.Open, read all of the existing data into memory. Then reset the pointer to the beginning of the file, write your new data, then write the existing data.
(If the file is big and/or you're suspicious of using too much memory, you can do this without having to read the whole file into memory, but implementing that is left as an exercise to the reader.)
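A minimal sketch of that approach, assuming the whole file fits in memory and the text is UTF-8:
static void Prepend(string path, string text)
{
    using (FileStream fs = File.Open(path, FileMode.Open, FileAccess.ReadWrite))
    {
        // read all of the existing data into memory
        var existing = new byte[fs.Length];
        int offset = 0;
        while (offset < existing.Length)
            offset += fs.Read(existing, offset, existing.Length - offset);

        // reset the pointer, write the new data, then write the existing data
        fs.Position = 0;
        byte[] newData = Encoding.UTF8.GetBytes(text); // the encoding is an assumption
        fs.Write(newData, 0, newData.Length);
        fs.Write(existing, 0, existing.Length);
    }
}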
The following approach solves the problem pretty easily, but note that it reads the entire file into memory, so it is best suited to small and medium-sized text files:
string outPutFile = #"C:\Output.txt";
string result = "Some new string" + DateTime.Now.ToString() + Environment.NewLine;
StringBuilder currentContent = new StringBuilder();
List<string> rawList = File.ReadAllLines(outPutFile).ToList();
foreach (var item in rawList) {
currentContent.Append(item + Environment.NewLine);
}
File.WriteAllText(outPutFile, result + currentContent.ToString());
Use this class:
public static class File2
{
private static readonly Encoding _defaultEncoding = new UTF8Encoding(false, true); // encoding used in File.ReadAll*()
private static object _bufferSizeLock = new Object();
private static int _bufferSize = 1024 * 1024; // 1mb
public static int BufferSize
{
get
{
lock (_bufferSizeLock)
{
return _bufferSize;
}
}
set
{
lock (_bufferSizeLock)
{
_bufferSize = value;
}
}
}
public static void PrependAllLines(string path, IEnumerable<string> contents)
{
PrependAllLines(path, contents, _defaultEncoding);
}
public static void PrependAllLines(string path, IEnumerable<string> contents, Encoding encoding)
{
var temp = Path.GetTempFileName();
File.WriteAllLines(temp, contents, encoding);
AppendToTemp(path, temp, encoding);
File.Replace(temp, path, null);
}
public static void PrependAllText(string path, string contents)
{
PrependAllText(path, contents, _defaultEncoding);
}
public static void PrependAllText(string path, string contents, Encoding encoding)
{
var temp = Path.GetTempFileName();
File.WriteAllText(temp, contents, encoding);
AppendToTemp(path, temp, encoding);
File.Replace(temp, path, null);
}
private static void AppendToTemp(string path, string temp, Encoding encoding)
{
var bufferSize = BufferSize;
char[] buffer = new char[bufferSize];
using (var writer = new StreamWriter(temp, true, encoding))
{
using (var reader = new StreamReader(path, encoding))
{
int bytesRead;
while ((bytesRead = reader.ReadBlock(buffer,0,bufferSize)) != 0)
{
writer.Write(buffer,0,bytesRead);
}
}
}
}
}
Put the file's contents in a string. Prepend the new data you want to add to the top of the file to that string -- contents = newData + contents. Then move the seek position of the file to 0 and write the string into the file.