C# Insert text from one txt file to another txt file

I would like to insert text from one text file to another.
So, for example, I have a text file at C:\Users\Public\Test1.txt:
first
second
third
forth
And I have a second text file at C:\Users\Public\Test2.txt:
1
2
3
4
I want to insert Test2.txt into Test1.txt
The end result should be:
first
second
1
2
3
4
third
forth
It should be inserted at the third line.
So far I have this:
string strTextFileName = @"C:\Users\Public\test1.txt";
int iInsertAtLineNumber = 2;
string strTextToInsert = @"C:\Users\Public\test2.txt";

ArrayList lines = new ArrayList();
StreamReader rdr = new StreamReader(strTextFileName);
string line;
while ((line = rdr.ReadLine()) != null)
    lines.Add(line);
rdr.Close();

if (lines.Count > iInsertAtLineNumber)
    lines.Insert(iInsertAtLineNumber, strTextToInsert);
else
    lines.Add(strTextToInsert);

StreamWriter wrtr = new StreamWriter(strTextFileName);
foreach (string strNewLine in lines)
    wrtr.WriteLine(strNewLine);
wrtr.Close();
However, I get this when I run it:
first
second
C:\Users\Public\test2.txt
third
forth
Thanks in advance!

Instead of using StreamReaders/Writers, you can use methods from the File helper class.
const string textFileName = @"C:\Users\Public\test1.txt";
const string textToInsertFileName = @"C:\Users\Public\test2.txt";
const int insertAtLineNumber = 2;

List<string> fileContent = File.ReadAllLines(textFileName).ToList();
fileContent.InsertRange(insertAtLineNumber, File.ReadAllLines(textToInsertFileName));
File.WriteAllLines(textFileName, fileContent);
A List<string> is far more convenient than an ArrayList. I also renamed a couple of your variables (most notably textToInsertFileName), removed the prefixes cluttering your declarations (any modern IDE will tell you the datatype if you hover for half a second), and declared your constants with const.
Your original problem is that you never read from strTextToInsert; it looks like you thought it already held the text to insert, when it actually holds the filename.
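If you would rather keep your original ArrayList-based code, a minimal sketch of the fix is to read the second file and insert its lines instead of its path (ArrayList.InsertRange and AddRange accept the string[] returned by File.ReadAllLines):
// Read the lines of the second file instead of inserting its path.
string[] linesToInsert = File.ReadAllLines(strTextToInsert);

if (lines.Count > iInsertAtLineNumber)
    lines.InsertRange(iInsertAtLineNumber, linesToInsert);
else
    lines.AddRange(linesToInsert);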

Without changing your structure or types around too much, you could create a method to read the lines:
public ArrayList GetFileLines(string fileName)
{
    var lines = new ArrayList();
    using (var rdr = new StreamReader(fileName))
    {
        string line;
        while ((line = rdr.ReadLine()) != null)
            lines.Add(line);
    }
    return lines;
}
In the initial question you were not reading the second file. In the following example it is a little easier to see when each file is read, and that both actually are read:
string strTextFileName = @"C:\Users\Public\test1.txt";
int iInsertAtLineNumber = 2;
string strTextToInsert = @"C:\Users\Public\test2.txt";

ArrayList lines = new ArrayList();
lines.AddRange(GetFileLines(strTextFileName));
lines.InsertRange(iInsertAtLineNumber, GetFileLines(strTextToInsert));

using (var wrtr = new StreamWriter(strTextFileName))
{
    foreach (string strNewLine in lines)
        wrtr.WriteLine(strNewLine);
}
NOTE: if you wrap a reader or writer in a using statement, it will be closed automatically.
I haven't tested this and it could be done better, but hopefully this gets you pointed in the right direction. Note that this solution completely rewrites the first file.

Related

How to increment the value of a variable while writing to a txt file in C#

I have the following program: a database (if you can call it that) built on text files.
When writing to the text file, I need to increase the record id by one.
I can't work out which method I should use to implement a loop that increments the id. Can anyone tell me?
Here is the method I use to write the data from my WPF text boxes to the text file:
using (StreamReader sr = new StreamReader("D:\\123.txt", true))
{
    while (sr.ReadLine() != null)
    {
        id++;
        using (StreamWriter txt = new StreamWriter("D:\\123.txt", true))
        {
            txt.WriteLine(string.Format("{0} {1} {2} {3} {4} {5} {6}\n", id, TBName, TBLName, TBMName, TBInfo, TBMat, TBFiz));
        }
        MessageBox.Show("Data saved successfully!");
    }
}
How can I make the id increase by 1 with each new entry in the text file?
The code that outputs the data to the DataGrid looks like this:
private void Work()
{
    try
    {
        List<Student> list = new List<Student>();
        using (StreamReader sr = new StreamReader(fileName, true))
        {
            string line;
            while ((line = sr.ReadLine()) != null)
            {
                var parsed = line.Split(' ');
                list.Add(new Student
                (
                    Convert.ToInt32(parsed[0]),
                    parsed[1],
                    parsed[2],
                    parsed[3],
                    Convert.ToInt32(parsed[4]),
                    Convert.ToInt32(parsed[5]),
                    Convert.ToInt32(parsed[6])
                ));
            }
        }
        DGridStudents.ItemsSource = list;
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
The code shown has some problems. First, you cannot write to a file that you have opened for reading with the StreamReader/StreamWriter classes. But even if you could, look closely at how it works: first you open the file, then you start a loop that reads a line, writes a new line to the same file, and then reads the next line (and that next line could be the very one you have just written).
In the best case, your file will simply keep growing until it fills your disk.
To increment the id value stored in the last line, you could take this approach:
// First read the whole file and get the last line from it
int id = 0;
string lastLine = File.ReadLines("D:\\123.txt").LastOrDefault();
if (!string.IsNullOrEmpty(lastLine))
{
    // Now split and convert the value of the first part
    var parts = lastLine.Split();
    if (parts.Length > 0)
    {
        Int32.TryParse(parts[0], out id);
    }
}

// You can now increment and write the new line
id++;
using (StreamWriter txt = new StreamWriter("D:\\123.txt", true))
{
    txt.WriteLine($"{id} {TBName} {TBLName} {TBMName} {TBInfo} {TBMat} {TBFiz}");
}
This approach forces you to read the whole file just to find the last line. However, you could pair your txt with a second file (an index file) with the same name but an .idx extension. This file contains only the last id written:
int id = 0;
string firstLine = File.ReadLines("D:\\123.idx").FirstOrDefault();
if (!string.IsNullOrEmpty(firstLine))
{
    // Now split and convert the value of the first part
    var parts = firstLine.Split();
    if (parts.Length > 0)
    {
        Int32.TryParse(parts[0], out id);
    }
}

id++;
using (StreamWriter txt = new StreamWriter("D:\\123.txt", true))
{
    txt.WriteLine($"{id} {TBName} {TBLName} {TBMName} {TBInfo} {TBMat} {TBFiz}");
}
File.WriteAllText("D:\\123.idx", id.ToString());
This second approach is probably better if the txt file is big, because it doesn't require reading the whole txt file, but there are more points of possible failure: you now have two files to handle, which doubles the chances of IO errors, and we are not even considering the multiuser scenario.
A database, even a file-based one like SQLite or Access, is better suited for these tasks.
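For comparison, here is a rough sketch of the same insert against SQLite, assuming the Microsoft.Data.Sqlite NuGet package and a hypothetical students.db file with a hypothetical Students table; the auto-increment primary key removes the need to track the id yourself:
using Microsoft.Data.Sqlite;

// Hypothetical schema; TBName etc. are the same values used in the question's code.
using (var connection = new SqliteConnection("Data Source=students.db"))
{
    connection.Open();

    var create = connection.CreateCommand();
    create.CommandText =
        "CREATE TABLE IF NOT EXISTS Students (" +
        "Id INTEGER PRIMARY KEY AUTOINCREMENT, " +
        "Name TEXT, LastName TEXT, MiddleName TEXT, Info TEXT, Mat INTEGER, Fiz INTEGER)";
    create.ExecuteNonQuery();

    var insert = connection.CreateCommand();
    insert.CommandText =
        "INSERT INTO Students (Name, LastName, MiddleName, Info, Mat, Fiz) " +
        "VALUES ($name, $lastName, $middleName, $info, $mat, $fiz)";
    insert.Parameters.AddWithValue("$name", TBName);
    insert.Parameters.AddWithValue("$lastName", TBLName);
    insert.Parameters.AddWithValue("$middleName", TBMName);
    insert.Parameters.AddWithValue("$info", TBInfo);
    insert.Parameters.AddWithValue("$mat", TBMat);
    insert.Parameters.AddWithValue("$fiz", TBFiz);
    insert.ExecuteNonQuery(); // SQLite assigns the next Id automatically
}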

C# Parsing a text file misses the first line of every text file

I am making a number-to-music-note-value converter that reads from text files, and I am having trouble parsing the text files in my C# code. The StreamReader (or parser) seems to ignore the first line of every text file. I learned by parsing CSV and assumed that I could parse text files the same way I parse CSV files. Here is my code:
static List<NoteColumns> ReadNoteValues(string fileName)
{
    var allNotes = new List<NoteColumns>();
    using (var reader = new StreamReader(fileName))
    {
        string line = "";
        reader.ReadLine();
        while ((line = reader.ReadLine()) != null)
        {
            string[] cols = line.Split('|');
            var noteColumn = new NoteColumns();
            if (line.StartsWith("-"))
            {
                continue;
            }
            double col;
            if (double.TryParse(cols[0], out col))
            {
                noteColumn.QCol = col;
            }
            if (double.TryParse(cols[1], out col))
            {
                noteColumn.WCol = col;
            }
        }
    }
    return allNotes;
}
Here are the first 4 lines of one of my text files:
0.1|0.0|
0.0|0.1|
0.3|0.0|
0.1|0.0|
So every time I produce output, it skips the first line and moves on to the next line. After it misses the first line, it converts all the other values perfectly.
I have tried using a foreach loop, but I end up getting an 'Index Out Of Range' exception. Am I missing something/being stupid? Thanks for your time!
You have a call to reader.ReadLine() before your loop whose result you don't assign to anything; that call consumes (and discards) the first line.
It's because you have already called ReadLine() before the while loop, which skips the first line.
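In other words, a minimal sketch of the fix is simply to drop that extra call. Note, too, that the posted method never adds noteColumn to allNotes, so it always returns an empty list:
string line;
// no stray reader.ReadLine() here, so the first line is not skipped
while ((line = reader.ReadLine()) != null)
{
    if (line.StartsWith("-"))
        continue;

    string[] cols = line.Split('|');
    var noteColumn = new NoteColumns();

    double col;
    if (double.TryParse(cols[0], out col))
        noteColumn.QCol = col;
    if (double.TryParse(cols[1], out col))
        noteColumn.WCol = col;

    allNotes.Add(noteColumn); // the original code never kept the parsed values
}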
To avoid this kind of error (the stray reader.ReadLine() at the beginning), you can get rid of the readers entirely and query the file with LINQ (let .NET open and close the file, read it line after line, and do all the low-level work for you):
using System.IO;
using System.Linq;

...

static List<NoteColumns> ReadNoteValues(string fileName) {
    return File
        .ReadLines(fileName)
        .Where(line => !line.StartsWith('-'))
        .Select(line => line.Split('|'))
        .Select(cols => {
            var noteColumn = new NoteColumns();
            if (double.TryParse(cols[0], out var qCol))
                noteColumn.QCol = qCol;
            if (double.TryParse(cols[1], out var wCol))
                noteColumn.WCol = wCol;
            return noteColumn;
        })
        .ToList();
}

read only given last x lines in txt file [duplicate]

This question already has answers here: Get last 10 lines of very large text file > 10GB (21 answers)
Closed 9 years ago.
Currently I'm reading the file content using File.ReadAllText(), but now I need to read the last x lines of my txt file. How can I do that?
content of myfile.txt
line1content
line2content
line3content
line4content
string contentOfLastTwoLines = ...
What about this:
List<string> text = File.ReadLines("file.txt").Reverse().Take(2).ToList();
Use a Queue<string> to store the last X lines, dequeuing the oldest line each time a new one is read:
int x = 4; // number of lines you want to get
var buffer = new Queue<string>(x);

using (var file = new StreamReader("Input.txt"))
{
    while (!file.EndOfStream)
    {
        string line = file.ReadLine();
        if (buffer.Count >= x)
            buffer.Dequeue();
        buffer.Enqueue(line);
    }
}

string[] lastLines = buffer.ToArray();
string contentOfLastLines = String.Join(Environment.NewLine, lastLines);
You can use ReadLines to avoid reading the entire file into memory, like this:
const int neededLines = 5;
var lines = new List<String>();
foreach (var s in File.ReadLines("c:\\myfile.txt")) {
    lines.Add(s);
    if (lines.Count > neededLines) {
        lines.RemoveAt(0);
    }
}
Once the foreach loop is finished, the lines list contains up to the last neededLines of text from the file. Of course, if the file does not contain as many lines as required, fewer lines will be placed in the lines list.
Read the lines into an array, then extract the last two:
string[] lines = File.ReadAllLines("myfile.txt");
string last2 = lines[lines.Length - 2] + Environment.NewLine + lines[lines.Length - 1];
Assuming your file is reasonably small, it's easier to just read the whole thing and throw away what you don't need.
Reading a file is done linearly, usually line by line. Simply read line by line and remember the last two lines (you can use a queue or something similar if you want, or just two string variables). When you reach EOF, you'll have your last two lines.
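A minimal sketch of the two-variable version (using the myfile.txt name from the question):
string previous = null, last = null;

using (var reader = new StreamReader("myfile.txt"))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // shift the window: keep only the two most recently read lines
        previous = last;
        last = line;
    }
}

string contentOfLastTwoLines = previous + Environment.NewLine + last;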
You want to read the file backwards using ReverseLineReader:
How to read a text file reversely with iterator in C#
Then run .Take(2) on it.
var lines = new ReverseLineReader(filename);
var last = lines.Take(2);
OR
Use a System.IO.StreamReader and keep the two most recently read lines:
string line1 = null, line2 = null;
using (StreamReader reader = new StreamReader("myFile.txt")) {
    string line;
    while ((line = reader.ReadLine()) != null) {
        line1 = line2; // line1 ends up holding the second-to-last line
        line2 = line;  // line2 ends up holding the last line
    }
}

How to read a csv file one line at a time and replace/edit certain lines as you go?

I have a 60 GB csv file I need to make some modifications to. The customer wants some changes to the file's data, but I don't want to regenerate the data in that file because it took 4 days to produce.
How can I read the file, line by line (not loading it all into memory!), and make edits to those lines as I go, replacing certain values etc.?
The process would be something like this:
1. Open a StreamWriter to a temporary file.
2. Open a StreamReader to the target file.
3. For each line:
   3.1. Split the text into columns based on a delimiter.
   3.2. Check the columns for the values you want to replace, and replace them.
   3.3. Join the column values back together using your delimiter.
   3.4. Write the line to the temporary file.
4. When you are finished, delete the target file, and move the temporary file to the target file path.
Note regarding Steps 2 and 3.1: If you are confident in the structure of your file and it is simple enough, you can do all this out of the box as described (I'll include a sample in a moment). However, there are factors in a CSV file that may need attention (such as recognizing when a delimiter is being used literally in a column value). You can drudge through this yourself, or try an existing solution (see the TextFieldParser sketch after the example below).
Basic example just using StreamReader and StreamWriter:
var sourcePath = @"C:\data.csv";
var delimiter = ",";
var firstLineContainsHeaders = true;
var tempPath = Path.GetTempFileName();
var lineNumber = 0;

var splitExpression = new Regex(@"(" + delimiter + @")(?=(?:[^""]|""[^""]*"")*$)");

using (var writer = new StreamWriter(tempPath))
using (var reader = new StreamReader(sourcePath))
{
    string line = null;
    string[] headers = null;

    if (firstLineContainsHeaders)
    {
        line = reader.ReadLine();
        lineNumber++;
        if (string.IsNullOrEmpty(line)) return; // file is empty

        headers = splitExpression.Split(line).Where(s => s != delimiter).ToArray();
        writer.WriteLine(line); // write the original header to the temp file
    }

    while ((line = reader.ReadLine()) != null)
    {
        lineNumber++;
        var columns = splitExpression.Split(line).Where(s => s != delimiter).ToArray();

        // if there are no headers, do a simple sanity check to make sure you always have the same number of columns in a line
        if (headers == null) headers = new string[columns.Length];
        if (columns.Length != headers.Length) throw new InvalidOperationException(string.Format("Line {0} is missing one or more columns.", lineNumber));

        // TODO: search and replace in columns
        // example: replace 'v' in the first column with '\/':
        // if (columns[0].Contains("v")) columns[0] = columns[0].Replace("v", @"\/");

        writer.WriteLine(string.Join(delimiter, columns));
    }
}
File.Delete(sourcePath);
File.Move(tempPath, sourcePath);
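As a sketch of the "existing solution" route mentioned in the note above, the built-in TextFieldParser (in the Microsoft.VisualBasic assembly) handles quoted fields and embedded delimiters for you; the replacement values below are hypothetical:
using System;
using System.IO;
using System.Linq;
using Microsoft.VisualBasic.FileIO; // reference the Microsoft.VisualBasic assembly

var sourcePath = @"C:\data.csv";
var tempPath = Path.GetTempFileName();

// Re-quote a field only when it needs it (contains a comma, quote, or newline).
Func<string, string> quote = field =>
    field.IndexOfAny(new[] { ',', '"', '\r', '\n' }) >= 0
        ? "\"" + field.Replace("\"", "\"\"") + "\""
        : field;

using (var parser = new TextFieldParser(sourcePath))
using (var writer = new StreamWriter(tempPath))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    parser.HasFieldsEnclosedInQuotes = true;

    while (!parser.EndOfData)
    {
        string[] columns = parser.ReadFields(); // quoting and embedded commas are handled for you

        // hypothetical edit: replace a value in the first column
        if (columns[0] == "oldValue") columns[0] = "newValue";

        writer.WriteLine(string.Join(",", columns.Select(quote)));
    }
}

File.Delete(sourcePath);
File.Move(tempPath, sourcePath);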
Memory-mapped files are a feature introduced in .NET Framework 4 that can be used to edit large files.
Read here: http://msdn.microsoft.com/en-us/library/dd997372.aspx
or google "memory-mapped files".
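A minimal sketch of opening a file as a memory-mapped view and patching bytes in place (this only works for same-length, in-place edits; the path and offset below are hypothetical):
using System.IO.MemoryMappedFiles;
using System.Text;

// Map the existing file into memory (hypothetical path).
using (var mmf = MemoryMappedFile.CreateFromFile(@"C:\data.csv"))
using (var accessor = mmf.CreateViewAccessor())
{
    // Read a few bytes at a hypothetical offset...
    var bytes = new byte[5];
    accessor.ReadArray(0, bytes, 0, bytes.Length);

    // ...and overwrite them with a replacement of the same length.
    var replacement = Encoding.ASCII.GetBytes("AAAAA");
    accessor.WriteArray(0, replacement, 0, replacement.Length);
}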
Just read the file, line by line, with a StreamReader, and then use regex! The most amazing tool in the world.
using (var sr = new StreamReader(new FileStream(@"C:\temp\file.csv", FileMode.Open)))
{
    string line;
    while ((line = sr.ReadLine()) != null)
    {
        // do stuff with line here, e.g. apply your regex to it
    }
}

How to obtain certain lines from a text file in C#?

I'm working in C# and I have a large text file (75 MB).
I want to save the lines that match a regular expression.
I tried reading the file with a StreamReader and ReadToEnd, but it takes 400 MB of RAM
and, when used again, throws an out-of-memory exception.
I then tried using File.ReadAllLines():
string[] lines = File.ReadAllLines("file");
StringBuilder specialLines = new StringBuilder();

foreach (string line in lines)
    if (/* line matches the regular expression */)
        specialLines.Append(line);
This is all great, but when my function ends, the memory taken doesn't clear and I'm
left with 300 MB of used memory. Only when calling the function again and executing the line
string[] lines = File.ReadAllLines("file");
do I see the memory drop to 50 MB, give or take, and then reallocate back to 200 MB.
How can I clear this memory, or get the lines I need in a different way?
using (var file = File.OpenRead("myfile.txt"))
using (var reader = new StreamReader(file))
{
    while (!reader.EndOfStream)
    {
        string line = reader.ReadLine();
        // evaluate the line here
    }
}
You need to stream the text instead of loading the whole file into memory. Here's a way to do it, using an extension method and LINQ:
static class ExtensionMethods
{
    public static IEnumerable<string> EnumerateLines(this TextReader reader)
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            yield return line;
        }
    }
}

...

var regex = new Regex(..., RegexOptions.Compiled);
using (var reader = new StreamReader(fileName))
{
    var specialLines =
        reader.EnumerateLines()
              .Where(line => regex.IsMatch(line))
              .Aggregate(new StringBuilder(),
                         (sb, line) => sb.AppendLine(line));
}
You can use StreamReader.ReadLine to read the file line by line and keep only the lines that you need.
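A minimal sketch of that approach (the file name and pattern here are hypothetical):
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

var regex = new Regex(@"\d+", RegexOptions.Compiled); // hypothetical pattern
var specialLines = new StringBuilder();

using (var reader = new StreamReader("file.txt")) // hypothetical file name
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // keep only the matching lines; everything else is discarded immediately
        if (regex.IsMatch(line))
            specialLines.AppendLine(line);
    }
}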
You should use the enumerator (iterator) pattern to keep your memory footprint low in case the file is huge.
