I have inherited an application from someone who is long gone. The gist of the application is that it reads in a .csv file with about 5,800 lines in it and copies it over to another .csv, which it creates fresh each time, after stripping out a few characters: #, ', &. Everything worked great until about a month ago. I started checking into it, and what I have found so far is that about 131 items are missing from the spreadsheet. Now, I read someplace that the maximum amount of data a string can hold is over 1,000,000,000 characters, and my spreadsheet is way under that, at around 800,000 characters, but the only thing I can think of that could be doing this is the string object.
So anyway, here is the code in question; this piece both reads in from the existing file and outputs to the new file:
StreamReader s = new StreamReader(File);
//Read the rest of the data in the file.
string AllData = s.ReadToEnd();
//Split off each row at the Carriage Return/Line Feed.
//This is the default line ending in most Windows exports
//(Excel, Access, etc.); you may have to edit this to match
//your particular file.
string[] rows = AllData.Split("\r\n".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
//Now add each row to the DataSet.
foreach (string r in rows)
{
    //Split the row at the delimiter.
    string[] items = r.Split(delimiter.ToCharArray());
    //Add the item.
    result.Rows.Add(items);
}
If anyone can help me I would really appreciate it. I either need to figure out how to split the data better, or I need to figure out why the last 131 lines are being dropped on the way from the existing Excel file to the new one.
One easier way to do this, since you're using "\r\n" for line endings, would be to just use the built-in line-reading method: File.ReadLines(path). Note that passing "\r\n".ToCharArray() to Split treats \r and \n as two separate single-character delimiters rather than one two-character line terminator; combined with RemoveEmptyEntries, that silently drops every empty entry, so genuinely blank lines in the source can disappear from the output.
foreach (var line in File.ReadLines(path))
{
    var items = line.Split(',');
    result.Rows.Add(items);
}
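If you also need the original behavior of stripping out #, ', and & before writing, here is a minimal sketch of the whole copy loop (newPath is an assumed name for the output file, not from the original code):

using (var writer = new StreamWriter(newPath)) //newPath: assumed output .csv path
{
    foreach (var line in File.ReadLines(path))
    {
        //Strip the characters the original application removed.
        var cleaned = line.Replace("#", "").Replace("'", "").Replace("&", "");
        writer.WriteLine(cleaned);
    }
}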
You may want to check out the TextFieldParser class, which is part of the Microsoft.VisualBasic.FileIO namespace (yes, you can use it from C# code).
Something along the lines of:
using (var reader = new TextFieldParser("c:\\path\\to\\file"))
{
    //Configure for a delimited file.
    reader.TextFieldType = FieldType.Delimited;
    //Configure the delimiter character (comma).
    reader.Delimiters = new[] { "," };
    while (!reader.EndOfData)
    {
        string[] row = reader.ReadFields();
        //do stuff
    }
}
This class can help with some of the issues of splitting a line into its fields when a field may itself contain the delimiter.
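For instance, here is a minimal sketch (the sample line is made up) of how ReadFields handles a quoted field with an embedded comma, which a plain String.Split would cut in half:

//TextFieldParser can also read from any TextReader, so we can demo in memory.
using (var parser = new TextFieldParser(new StringReader("\"Smith, John\",42")))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.Delimiters = new[] { "," };
    parser.HasFieldsEnclosedInQuotes = true; //honor quoted fields (the default)

    string[] fields = parser.ReadFields();
    //fields[0] == "Smith, John"  (comma preserved, quotes removed)
    //fields[1] == "42"
}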
I want to get the line containing a certain unique word, like a profile ID, without looping through each line of the file separately. If the word I am looking for is in the last line of the text file, reading line by line will take a long time to reach it, and if I am searching for more than one word and extracting the line that contains each one, I think it will take even longer.
Example of a line in the text file:
name,id,image,age,place,link
string word = "13215646";
string output = string.Empty;

using (var fileStream = File.OpenRead(FileName))
using (var streamReader = new StreamReader(fileStream, Encoding.UTF8))
{
    string line;
    while ((line = streamReader.ReadLine()) != null)
    {
        string[] strList = line.Split(',');
        if (word == strList[1]) //check if word == id
        {
            output = line;
            break;
        }
    }
}
You can use this to search the file:
var output = File.ReadLines(FileName)
    .Where(line => line.Split(',')[1] == word)
    .FirstOrDefault();
But it won't solve this:
If the word I am looking for is in the last line of the text file, reading line by line will take a long time to reach it, and if I am searching for more than one word and extracting the line that contains each one, I think it will take even longer.
There's not a practical way to avoid this for a basic file.
The only ways around actually reading through the file are maintaining an index, which requires absolute control over everything that might write into the file, or guaranteeing the file is already sorted by the column that matters, in which case you can do something like a binary search.
But neither is likely for a random CSV file. This is one of the reasons people use databases.
However, we also need to stop and check whether this is really a problem for you. I'd expect the code above to handle files up to a couple hundred MB in around 1 to 2 seconds on modern hardware, even if you need to look through the whole file.
You can optimise the code. Here are a few ideas:
var ids = new[] { "13215646", "113" };

foreach (var line in File.ReadLines(FileName))
{
    var id = line.Split(',', count: 3)[1]; //Optimization 1: stop splitting after 3 parts
    if (ids.Contains(id))                  //Optimization 2: search for multiple ids in one pass
    {
        //Do what you need with the line
    }
}
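If you will run many lookups against the same file, another option (a rough sketch; the id value is made up) is to pay for one full read up front and build an in-memory index, so every later lookup is a dictionary hit instead of a file scan:

//Build the index once: id -> full line. Assumes ids are unique, as stated
//in the question, and that every line has at least two fields.
var index = new Dictionary<string, string>();
foreach (var line in File.ReadLines(FileName))
{
    index[line.Split(',', count: 3)[1]] = line;
}

//Each lookup is now O(1), no matter where the line sits in the file.
if (index.TryGetValue("13215646", out var match))
{
    //Do what you need with the line
}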
Hi there. I have a text file like this, many lines with 2 columns:
CF7CED1BF035345269118A15EF2D45A06, product1
CF7CED1BF035345269118A15EF2D45A09, product2
...
I need to split this and access each field; more precisely, I need to make a loop that creates many files, like product1.txt, product2.txt, etc., each enclosing the code on its left.
So I need to create files named after column [2] of each line, with that line's column [1] as the contents.
I know how to do basic stuff with arrays, like reading all lines and storing them, but I don't know how to make a loop that reads fields 1 and 2 of line 1, creates the file and stores the value (I know how to read and save to a file), then moves on to line 2, fields 1 and 2 again, and so on.
Someone suggested that using jagged arrays would be faster than 2D arrays.
"When you use ReadLines, you can start enumerating the collection of
strings before the whole collection is returned; when you use
ReadAllLines, you must wait for the whole array of strings be
returned before you can access the array. Therefore, when you are
working with very large files, ReadLines can be more efficient."
string path_read = @"c:\read\file.txt";
//Path to save the resulting files.
string path = @"c:\temp\";
char[] comma = new char[1] { ',' };

//ASSUMPTION: every row has 2 comma-separated values.
//Loop over the file using File.ReadLines.
foreach (var currentLine in File.ReadLines(path_read))
{
    string[] itemArray = currentLine.Split(comma, StringSplitOptions.RemoveEmptyEntries);
    //itemArray now has the 2 values from the 2 columns of the current row.
    //Trim() removes the space after the comma seen in the sample data.
    File.WriteAllText(path + itemArray[1].Trim() + ".txt", itemArray[0].Trim(), Encoding.UTF8);
}
Do you need to keep the contents for any further use? If the intent is just to read the contents and save them to separate files, then there is no need for a separate array.
using (var reader = new StreamReader(@"input.txt"))
{
    while (!reader.EndOfStream)
    {
        var inputText = reader.ReadLine();
        var splitText = inputText.Split(',');
        //Trim() strips the space after the comma in the sample data.
        File.AppendAllLines(splitText[1].Trim() + ".txt", new List<string> { splitText[0].Trim() });
    }
}
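Note that AppendAllLines reopens and closes the output file on every iteration. If the same product name can occur on many lines, a variation worth considering (just a sketch, assuming the same two-column layout as above) is to group the codes by product first and write each file once:

//Group all codes by product name (column 2), then write each product file in one go.
var groups = File.ReadLines(@"input.txt")
    .Select(line => line.Split(','))
    .GroupBy(parts => parts[1].Trim(),  //key: product name
             parts => parts[0].Trim()); //element: code

foreach (var group in groups)
{
    File.WriteAllLines(group.Key + ".txt", group);
}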
I am trying to read a CSV file and store all the values in a single list. The CSV file contains credentials, a uid (user id) and a pass (password), separated by ','. I have successfully read all the lines and written them to a file, but when it writes to the file, it writes the values wrapped in double quotes, like ("abcdefgh3 12345678"). What I actually want is to remove the double quotes when I write to the file. I am pasting my code here:
static void Main(string[] args)
{
    var reader = new StreamReader(File.OpenRead(@"C:\Desktop\userid1.csv"));
    List<string> listA = new List<string>();
    while (!reader.EndOfStream)
    {
        var line = reader.ReadLine();
        var values = line.Split(',');
        listA.Add(values[0]);
        listA.Add(values[1]);
    }
    foreach (string a in listA)
    {
        TextWriter tr = new StreamWriter(@"E:\newfiless", true);
        tr.Write(a);
        tr.Write(tr.NewLine);
        tr.Close();
    }
}
and the resulting output is like this:
"uid
pass"
"Martin123
123456789"
"Damian
91644"
but I want it in this form:
uid
pass
Martin123
123456789
Damian
91644
Thanking you all in advance.
The original file clearly has quotes, which makes it a CSV file with only one column, and in that column there are two values. Not usual, but it happens.
To actually remove quotes you can use Trim, TrimEnd or TrimStart.
You can remove the quotes while reading, or while writing, in this case it doesn't really matter.
var line = reader.ReadLine().Trim('"');
This will remove the quotes while reading. Note that this assumes the CSV is of this "broken" variant.
tr.WriteLine(a.Trim('"'));
This will handle it on write. This will work even if the file is "correct" CSV having two columns and values in quotes.
Note that you can use WriteLine to add the newline, no need for two Write calls.
Also as others have commented, don't create a TextWriter in the loop for every value, create it once.
using (TextWriter tr = new StreamWriter(@"E:\newfiless"))
{
    foreach (string a in listA)
    {
        tr.WriteLine(a.Trim('"'));
    }
}
The using will take care of closing the file and other possible resources even if there is an exception.
I assume that all you need is to read the input file, strip all starting/ending quotation marks, then split by comma and write everything to another file. You can actually accomplish this in a one-liner using SelectMany, which produces a "flat" collection:
File.WriteAllLines(
    @"c:\temp\output.txt",
    File
        .ReadAllLines(@"c:\temp\input.csv")
        .SelectMany(line => line.Trim('"').Split(','))
);
It's not quite clear from your example where quotation marks are located in the file. For a typical .CSV file some comma-separated field might be wrapped in quotation marks to allow commas to be a part of the content. If it's the case, then parsing will be more complex.
You can use
tr.Write(a.Substring(1, a.Length - 2));
Edit: better, use Trim:
tr.Write(a.TrimEnd('"').TrimStart('"'));
I have five lines of text in a text file that I want to read and write in the following way:
read 1st line and copy it to new text file 1.
read 1st and 2nd line and copy them to new text file 2.
read 1st, 2nd and 3rd line and copy them to new text file 3.
read 1st, 2nd, 3rd and 4th line and copy them to new text file 4.
read all lines and copy them to new text file 5.
I have tried something with loops, but I just get confused. Or maybe I should use recursion...?
Something like this (just LINQ with Take):
// ...or ReadAllLines to cache the file lines
var source = File.ReadLines(@"C:\MyText.txt");
File.WriteAllLines(@"C:\target1.txt", source.Take(1));
File.WriteAllLines(@"C:\target2.txt", source.Take(2));
File.WriteAllLines(@"C:\target3.txt", source.Take(3));
File.WriteAllLines(@"C:\target4.txt", source.Take(4));
// not 5 lines, but the entire file
File.WriteAllLines(@"C:\target5.txt", source);
I created a basic solution for you. This is just to help you out; please check and adapt the rest.
List<string> lines = File.ReadLines(@"C:\Users\m\Desktop\te\source.txt").ToList();
string basicPath = @"C:\Users\m\Desktop\te\";

for (int i = 1; i <= lines.Count; i++)
{
    File.WriteAllLines(basicPath + i + ".txt", lines.GetRange(0, i));
}
As part of a recent project I had to read from and write to a CSV file and put the contents in a grid view in C#. In the end I decided to use a ready-built parser to do the work for me.
Because I like to do that kind of stuff, I wondered how to go about writing my own.
So far all I've managed to do is this:
//Read the header.
StreamReader reader = new StreamReader(dialog.FileName);
string row = reader.ReadLine();
string[] cells = row.Split(',');

//Create the columns of the dataGridView.
for (int i = 0; i < cells.Length; i++)
{
    DataGridViewTextBoxColumn column = new DataGridViewTextBoxColumn();
    column.Name = cells[i];
    column.HeaderText = cells[i];
    dataGridView1.Columns.Add(column);
}

//Display the contents of the file.
while (reader.Peek() != -1)
{
    row = reader.ReadLine();
    cells = row.Split(',');
    dataGridView1.Rows.Add(cells);
}
My question: is carrying on like this a wise idea, and if it is (or isn't) how would I test it properly?
As a programming exercise (for learning and gaining experience) it is probably a very reasonable thing to do. For production code, it may be better to use an existing library mainly because the work is already done. There are quite a few things to address with a CSV parser. For example (randomly off the top of my head):
Quoted values (strings)
Embedded quotes in quoted strings
Empty values (NULL ... or maybe even NULL vs. empty).
Lines without the correct number of entries
Headers vs. no headers.
Recognizing different data types (e.g., different date formats).
If you have a very specific input format in a very controlled environment, though, you may not need to deal with all of those.
... is carrying on like this a wise idea ...?
Since you're doing this as a learning exercise, you may want to dig deeper into lexing and parsing theory. Your current approach will show its shortcomings fairly quickly as described in Stop Rolling Your Own CSV Parser!. It's not that parsing CSV data is difficult. (It's not.) It's just that most CSV parser projects treat the problem as a text splitting problem versus a parsing problem. If you take the time to define the CSV "language", the parser almost writes itself.
RFC 4180 defines a grammar for CSV data in ABNF form:
file = [header CRLF] record *(CRLF record) [CRLF]
header = name *(COMMA name)
record = field *(COMMA field)
name = field
field = (escaped / non-escaped)
escaped = DQUOTE *(TEXTDATA / COMMA / CR / LF / 2DQUOTE) DQUOTE
non-escaped = *TEXTDATA
COMMA = %x2C
CR = %x0D ;as per section 6.1 of RFC 2234
DQUOTE = %x22 ;as per section 6.1 of RFC 2234
LF = %x0A ;as per section 6.1 of RFC 2234
CRLF = CR LF ;as per section 6.1 of RFC 2234
TEXTDATA = %x20-21 / %x23-2B / %x2D-7E
This grammar shows how single characters are built up to create more and more complex language elements. (As written, definitions go the opposite direction from complex to simple.)
If you start with a grammar, you can write parsing functions that mirror non-terminal grammar elements (the lowercase items). Julian M Bucknall describes the process in Writing a parser for CSV data. Take a look at Test-Driven Development with ANTLR for an example of the same process using a parser generator.
Keep in mind, there is no one accepted CSV definition. CSV data in the wild is not guaranteed to implement all of the RFC 4180 suggestions.
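To make that concrete, below is a minimal sketch of a parser whose functions mirror the grammar's non-terminals (record, field, escaped, non-escaped). It is only an illustration of the grammar-driven approach, not a full RFC 4180 implementation; in particular it parses a single record, so it ignores CRLF inside quoted fields:

//record = field *(COMMA field)
static List<string> ParseRecord(string record)
{
    var fields = new List<string>();
    int pos = 0;
    fields.Add(ParseField(record, ref pos));
    while (pos < record.Length && record[pos] == ',')
    {
        pos++; //consume COMMA
        fields.Add(ParseField(record, ref pos));
    }
    return fields;
}

//field = (escaped / non-escaped)
static string ParseField(string record, ref int pos)
{
    //escaped = DQUOTE *(TEXTDATA / COMMA / 2DQUOTE) DQUOTE
    if (pos < record.Length && record[pos] == '"')
    {
        pos++; //consume opening DQUOTE
        var sb = new StringBuilder();
        while (pos < record.Length)
        {
            if (record[pos] == '"')
            {
                if (pos + 1 < record.Length && record[pos + 1] == '"')
                {
                    sb.Append('"'); //2DQUOTE stands for a literal quote
                    pos += 2;
                }
                else
                {
                    pos++; //consume closing DQUOTE
                    break;
                }
            }
            else
            {
                sb.Append(record[pos++]);
            }
        }
        return sb.ToString();
    }
    //non-escaped = *TEXTDATA
    int start = pos;
    while (pos < record.Length && record[pos] != ',')
    {
        pos++;
    }
    return record.Substring(start, pos - start);
}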
Get (or make) some CSV data and write Unit Tests using NUnit or Visual Studio Testing Tools.
Be sure to test edge cases like
"csv","Data","with","a","trailing","comma",
and
"csv","Data","with,","commas","and","""quotes""","in","it"
This comes from
http://www.gigawebsolution.com/Posts/Details/61/Building-a-Simple-CSV-Parser-in-C#
public interface ICsvReaderWriter
{
    List<string[]> Read(string filePath, char delimiter);
    void Write(string filePath, List<string[]> lines, char delimiter);
}

public class CsvReaderWriter : ICsvReaderWriter
{
    public List<string[]> Read(string filePath, char delimiter)
    {
        var fileContent = new List<string[]>();
        using (var reader = new StreamReader(filePath, Encoding.Unicode))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (!string.IsNullOrEmpty(line))
                {
                    fileContent.Add(line.Split(delimiter));
                }
            }
        }
        return fileContent;
    }

    public void Write(string filePath, List<string[]> lines, char delimiter)
    {
        using (var writer = new StreamWriter(filePath, true, Encoding.Unicode))
        {
            foreach (var line in lines)
            {
                var data = line.Aggregate(string.Empty,
                        (current, column) => current + string.Format("{0}{1}", column, delimiter))
                    .TrimEnd(delimiter);
                writer.WriteLine(data);
            }
        }
    }
}
Parsing a CSV file isn't difficult, but it involves more than simply calling String.Split().
You are breaking the lines at each comma, but it's possible for fields to contain embedded commas. In those cases, CSV wraps the field in double quotes, so you must also look for double quotes and ignore commas within them. Beyond that, it's even possible for fields to contain embedded double quotes: an embedded quote must appear inside a quoted field and be "doubled up" to indicate that it is a literal character.
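As a concrete illustration of both rules (the sample line is invented), the built-in Microsoft.VisualBasic.FileIO.TextFieldParser applies them correctly:

//Input line contains one field with an embedded comma and one with embedded quotes:
//  "Boston, MA","He said ""hi""",42
var line = "\"Boston, MA\",\"He said \"\"hi\"\"\",42";
using (var parser = new TextFieldParser(new StringReader(line)))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.Delimiters = new[] { "," };

    string[] fields = parser.ReadFields();
    //fields[0] == "Boston, MA"        (comma kept, wrapping quotes removed)
    //fields[1] == "He said \"hi\""    (doubled quotes collapsed to single literal quotes)
    //fields[2] == "42"
}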
If you'd like to see how I did it, you can check out this article.