C# file input from text file

I have a function like this:
List<float> myList = new List<float>();

public void numbers(string filename)
{
    string input;
    float number;
    if (System.IO.File.Exists(filename) == true)
    {
        System.IO.StreamReader objectReader;
        objectReader = new System.IO.StreamReader(filename);
        while ((input = objectReader.ReadLine()) != null)
        {
            number = Convert.ToSingle(input);
            myList.Add(number);
        }
        objectReader.Close();
    }
    else
    {
        MessageBox.Show("No Such File" + filename);
    }
}
Where I'm trying to add numbers (floats) from a text file into a List, but I keep getting errors saying the format is wrong. The numbers in the text file are one per line... any help?

I would suggest you do a Trim call, like this:
number = Convert.ToSingle(input.Trim());
However, better code would use a TryParse call:
float tmp;
if (float.TryParse(input.Trim(), out tmp))
{
    myList.Add(tmp);
}

Your code worked fine for me except in the case of a newline (and of course for entries that were not numbers at all).
Here is a version that should work for you, using TryParse to check whether each line can be converted to a Single:
public void Numbers(string filename)
{
    List<float> myList = new List<float>();
    string input;
    if (System.IO.File.Exists(filename) == true)
    {
        System.IO.StreamReader objectReader;
        objectReader = new System.IO.StreamReader(filename);
        while ((input = objectReader.ReadLine()) != null)
        {
            Single output;
            if (Single.TryParse(input, out output))
            {
                myList.Add(output);
            }
            else
            {
                // Huh? Should this happen, maybe some logging can go here to track down why you couldn't just use the .Convert()
            }
        }
        objectReader.Close();
    }
    else
    {
        MessageBox.Show("No Such File" + filename);
    }
}
As Mike C rightly points out, this could be risky: it silently swallows good data that was corrupted by the output process. TryParse returns false when it fails, so you could add some logging in the else branch to check what is causing the failures and see whether there is another bug floating around that can be corrected.

Do you have any blank lines in the file, or failures to convert the number? My guess is that you have a line which is not castable to float from its current format. You should make sure you sanitize the lines before reading them in (strip off everything that is not a number using a regex) and throw the line out if it fails the check.
One thing you might do is use double instead and do a Convert.ToDouble().
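As a rough illustration of the sanitizing idea above (a sketch only, not the original poster's code), you could strip out everything that can't appear in a simple float literal before parsing:
// requires using System.Text.RegularExpressions;
string cleaned = Regex.Replace(input, @"[^0-9eE+\-.]", "");
float value;
if (float.TryParse(cleaned, out value))
{
    myList.Add(value); // anything that still fails TryParse gets thrown out
}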

Are there spaces or commas or anything? The best thing to do would be to set a breakpoint on
number = Convert.ToSingle(input);
to see what input is actually before you try to convert it.

There's a wonderful free package called FileHelpers which helps with importing data from all sorts of text files. The advantage with this is that a lot of the deeper error handling is already in place.

By the way,
if (System.IO.File.Exists(filename) == true)
can be shortened to
if (System.IO.File.Exists(filename))

Related

Search String Pattern in Large Text Files C#

I have been trying to search for string patterns in a large text file. I am reading it line by line and checking each line, which takes a lot of time. I did try a HashSet and ReadAllLines.
HashSet<string> strings = new HashSet<string>(File.ReadAllLines(@"D:\Doc\Tst.txt"));
Now when I try to search for the string, it doesn't match, because it is looking for a match against the entire row. I just want to check whether the string appears anywhere in the row.
I have also tried this:
using (System.IO.StreamReader file = new System.IO.StreamReader(@"D:\Doc\Tst.txt"))
{
    while ((CurrentLine = file.ReadLine()) != null)
    {
        vals = chk_log(CurrentLine, date_Format, (range.Cells[i][counter]).Value2, vals);
        if (vals == true)
            break;
    }
}

bool chk_log(string LineText, string date_to_chk, string publisher, bool tvals)
{
    if (LineText.Contains(date_to_chk))
        if (LineText.Contains(publisher))
        {
            tvals = true;
        }
        else
            tvals = false;
    else
        tvals = false;
    return tvals;
}
But this is consuming too much time. Any help on this would be good.
Reading into a HashSet doesn't make sense to me (unless there are a lot of duplicated lines) since you aren't testing for membership of the set.
Taking a really naive approach you could just do this.
var isItThere = File.ReadAllLines(@"d:\docs\st.txt").Any(x =>
    x.Contains(date_to_chk) && x.Contains(publisher));
65K lines at (say) 1K a line isn't a lot of memory to worry about, and I personally wouldn't bother with Parallel since it sounds like it would be superfast to do anyway.
You could replace Any with First to find the first matching line, or with Where to get an IEnumerable<string> containing all matching lines.
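For illustration, with the same variables as above (FirstOrDefault is shown instead of First so nothing throws when there is no match):
// first matching line, or null if there is none
var firstMatch = File.ReadAllLines(@"d:\docs\st.txt")
    .FirstOrDefault(x => x.Contains(date_to_chk) && x.Contains(publisher));

// all matching lines
var allMatches = File.ReadAllLines(@"d:\docs\st.txt")
    .Where(x => x.Contains(date_to_chk) && x.Contains(publisher));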
You can use a compiled regular expression instead of String.Contains (compile once before looping over the lines). This typically gives better performance.
var regex = new Regex($"{date}|{publisher}", RegexOptions.Compiled);
foreach (string line in File.ReadLines(#"D:\Doc\Tst.txt"))
{
if (regex.IsMatch(line)) break;
}
This also shows a convenient standard library function for reading a file line by line.
Or, depending on what you want to do...
var isItThere = File.ReadLines(@"D:\Doc\Tst.txt").Any(regex.IsMatch);

C# CSV Imports Into Listbox With Quotations

See the below code. I'm opening a .CSV file and reading it into a listbox, but rather than coming across as -
X
Y
Z
it is
"X"
"Y"
"Z"
Relevant code is:
if (ofdCSV.ShowDialog() == DialogResult.OK)
{
    list.Visible = true;
    StreamReader sr = new StreamReader(ofdCSV.FileName);
    string currentLine;
    while ((currentLine = sr.ReadLine()) != null)
    {
        list.Items.Add(currentLine);
    }
}
Any ideas? I've looked around for a while, but I'm still a novice with this, so I'm not entirely sure what to even look for.
CSV files are a common example of something that is invariably more complex than you think it is at first glance. The process usually starts with you thinking that you know how CSV files work and writing some simple code. You then gradually accumulate more and more code as you discover more and more edge cases. The final stage is when all your code is eventually discarded in favor of a 3rd party CSV parser, such as FileHelpers.
The .NET Framework has TextFieldParser (in the Microsoft.VisualBasic.FileIO namespace) which does a pretty good job if you set
Delimiters = new string[] { "," };
HasFieldsEnclosedInQuotes = true;
A more detailed explanation can be found here.
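For reference, a minimal sketch of wiring that up for the listbox scenario above (ofdCSV and list are the names from the question; how you join the fields back together is up to you):
using (var parser = new Microsoft.VisualBasic.FileIO.TextFieldParser(ofdCSV.FileName))
{
    parser.Delimiters = new string[] { "," };
    parser.HasFieldsEnclosedInQuotes = true;
    while (!parser.EndOfData)
    {
        string[] fields = parser.ReadFields(); // quotes are already stripped from each field
        list.Items.Add(string.Join(" ", fields));
    }
}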
It's less overhead than using a whole other library, but it may fall over if you feed it a line like
caltrops,"10' pole, 1" diameter",lunch
But you probably don't want to regex that either.
You can try using Replace to remove the quotes:
list.Items.Add(currentLine.Replace("\"", "").Trim());
or
list.Items.Add(currentLine.Replace("\"", string.Empty));
I have written the following method, which to my knowledge properly escapes values when creating CSV data. The major point here is that CSV data can span multiple lines. Hope this helps.
public static string Escape(string val)
{
    if (val == null)
        return "";
    while (val.EndsWith("\r\n"))
        val = val.Remove(val.Length - 2, 2);
    if (val.Contains("\t"))
        val = val.Replace("\t", " ");
    if (val.Contains("\r\n"))
        val = val.Replace("\r\n", "\n");
    if (val.Contains("\r"))
        val = val.Replace("\r", "\n");
    if (val.Contains("\""))
        val = val.Replace("\"", "\"\"");
    if (val.Contains(",") ||
        val.Contains("\"") ||
        val.Contains("\n") ||
        val.StartsWith(" ") ||
        val.EndsWith(" "))
    {
        val = "\"" + val + "\"";
    }
    return val;
}

Read Specific Strings from Text File

I'm trying to get certain strings out of a text file and put it in a variable.
This is what the structure of the text file looks like (keep in mind this is just one line; each line looks like this and is separated by a blank line):
Date: 8/12/2013 12:00:00 AM Source Path: \\build\PM\11.0.64.1\build.11.0.64.1.FileServerOutput.zip Destination Path: C:\Users\Documents\.NET Development\testing\11.0.64.1\build.11.0.55.5.FileServerOutput.zip Folder Updated: 11.0.64.1 File Copied: build.11.0.55.5.FileServerOutput.zip
I wasn't entirely sure what to use as a delimiter for this text file, or even whether I should use a delimiter at all, so that could be subject to change.
So, just a quick example of what I want to happen with this: I want to go through and grab the Destination Path and store it in a variable such as strDestPath.
Overall the code I came up with so far is this:
//find the variables from the text file
string[] lines = File.ReadAllLines(GlobalVars.strLogPath);
Yeah, not much, but I thought perhaps I could just read one line at a time and search through that line for what I'm looking for; honestly, though, I'm not 100% sure whether I should stick with that approach or not...
If you are unsure how large your file is, you should use ReadLines, which uses deferred execution, instead of ReadAllLines:
var lines = File.ReadLines(GlobalVars.strLogPath);
The ReadLines and ReadAllLines methods differ as follows:
When you use ReadLines, you can start enumerating the collection of strings before the whole collection is returned; when you use ReadAllLines, you must wait for the whole array of strings to be returned before you can access the array. Therefore, when you are working with very large files, ReadLines can be more efficient.
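To tie that back to the question, here is a rough sketch of pulling out the destination path with ReadLines (assuming every line really contains a "Destination Path:" label followed later by "Folder Updated:", and reusing the strDestPath name from the question):
// requires using System.Text.RegularExpressions;
string strDestPath = null;
foreach (var line in File.ReadLines(GlobalVars.strLogPath))
{
    // grab everything between "Destination Path:" and the next label
    var m = Regex.Match(line, @"Destination Path:\s*(.+?)\s*Folder Updated:");
    if (m.Success)
    {
        strDestPath = m.Groups[1].Value;
        break; // or collect the paths into a list, one per line
    }
}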
As weird as it might sound, you should take a look at Log Parser. If you are free to choose the file format, you could use one that fits Log Parser and, believe me, it will make your life a lot easier.
Once you load the file with Log Parser you can use queries to get the information you want. If you don't mind using interop in your project, you can even add a COM reference and use it from any .NET project.
This sample reads a HUGE CSV file and does a bulk copy to the DB to perform the final steps there. This is not really your case, but it shows you how easy it is to do this with Log Parser:
COMTSVInputContextClass logParserTsv = new COMTSVInputContextClass();
COMSQLOutputContextClass logParserSql = new COMSQLOutputContextClass();
logParserTsv.separator = ";";
logParserTsv.fixedSep = true;
logParserSql.database = _sqlDatabaseName;
logParserSql.server = _sqlServerName;
logParserSql.username = _sqlUser;
logParserSql.password = _sqlPass;
logParserSql.createTable = false;
logParserSql.ignoreIdCols = true;
// query shortened for clarity purposes
string SelectPattern = @"Select TO_STRING(UserName),TO_STRING(UserID) INTO {0} From {1}";
string query = string.Format(SelectPattern, _sqlTable, _csvPath);
logParser.ExecuteBatch(query, logParserTsv, logParserSql);
Log Parser is one of those hidden gems Microsoft has that most people don't know about. I have used it to read IIS logs, CSV files, txt files, etc. You can even generate graphics!
Just check it here http://support.microsoft.com/kb/910447/en
Looks like you need to create a Tokenizer. Try something like this:
Define a list of token values:
List<string> gTkList = new List<string>() {"Date:","Source Path:" }; //...etc.
Create a Token class:
public class Token
{
    private readonly string _tokenText;
    private string _val;
    private int _begin, _end;

    public Token(string tk, int beg, int end)
    {
        this._tokenText = tk;
        this._begin = beg;
        this._end = end;
        this._val = String.Empty;
    }

    public string TokenText
    {
        get { return _tokenText; }
    }

    public string Value
    {
        get { return _val; }
        set { _val = value; }
    }

    public int IdxBegin
    {
        get { return _begin; }
    }

    public int IdxEnd
    {
        get { return _end; }
    }
}
Create a method to Find your Tokens:
List<Token> FindTokens(string str)
{
    List<Token> retVal = new List<Token>();
    if (!String.IsNullOrWhiteSpace(str))
    {
        foreach (string cd in gTkList)
        {
            int fIdx = str.IndexOf(cd);
            if (fIdx > -1)
                retVal.Add(new Token(cd, fIdx, fIdx + cd.Length));
        }
    }
    return retVal;
}
Then just do something like this:
foreach (string ln in lines)
{
    // returns the list of tokens found in the line, in the order they appear in gTkList
    var tkns = FindTokens(ln);
    for (int i = 0; i < tkns.Count; i++)
    {
        int len = (i == tkns.Count - 1) ? ln.Length - tkns[i].IdxEnd : tkns[i + 1].IdxBegin - tkns[i].IdxEnd;
        tkns[i].Value = ln.Substring(tkns[i].IdxEnd, len).Trim();
    }
    // Do something with the gathered values
    foreach (Token tk in tkns)
    {
        // stuff
    }
}

Search for a keyword in C# and output the line as a string

How would it be possible to search for a string (e.g. #Test1) in a text file and then output the line below it as a string? For example:
Test.txt
#Test1
86/100
#Test2
99/100
#Test3
13/100
so if #Test2 was the search keyword, "99/100" would be returned as a string.
Parse the file once, store the results in a dictionary, then look them up in the dictionary.
var dictionary = new Dictionary<string, string>();
var lines = File.ReadLines("testScores.txt");
var e = lines.GetEnumerator();
while (e.MoveNext())
{
    if (e.Current.StartsWith("#Test"))
    {
        string test = e.Current;
        if (e.MoveNext())
        {
            dictionary.Add(test, e.Current);
        }
        else
        {
            throw new Exception("File not in expected format.");
        }
    }
}
Now you can just say
Console.WriteLine(dictionary["#Test1"]);
etc.
Also, long-term, I recommend moving to a database.
Use ReadLine and search for the string (e.g. #Test1), then use the next line as the result.
If the file format is exactly as shown above, then you can do this:
1. Read all the lines of the file into an array.
2. Loop over the array and keep only the non-empty entries in another array or list.
Now the items come in pairs, so when you loop and use [i] and [i+1], [i] gives you the test number and [i+1] gives you the score (see the sketch below).
Hope this might help.
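A minimal sketch of that idea, assuming the file is called testScores.txt and every test name line is immediately followed by its score line:
// Where/ToArray need using System.Linq;
var items = File.ReadAllLines("testScores.txt")
    .Where(l => !string.IsNullOrWhiteSpace(l))
    .ToArray();

string result = null;
for (int i = 0; i + 1 < items.Length; i += 2)
{
    if (items[i] == "#Test2")
    {
        result = items[i + 1]; // "99/100"
        break;
    }
}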
How about regular expressions? Here's a good example.
This should do it for you:
int lineCounter = 0;
StreamReader strReader = new StreamReader(path);
while (!strReader.EndOfStream)
{
    string fileLine = strReader.ReadLine();
    if (Regex.IsMatch(fileLine, pattern))
    {
        Console.WriteLine(pattern + " found in line " + lineCounter.ToString());
    }
    lineCounter++;
}

How do I handle line breaks in a CSV file using C#?

I have an Excel spreadsheet being converted into a CSV file in C#, but am having a problem dealing with line breaks. For instance:
"John","23","555-5555"
"Peter","24","555-5
555"
"Mary,"21","555-5555"
When I read the CSV file, if the record does not start with a double quote (") then a line break is there by mistake and I have to remove it. I have some CSV reader classes from the internet but I am concerned that they will fail on the line breaks.
How should I handle these line breaks?
Thanks everybody very much for your help.
Here is what I've done so far. My records have a fixed format and all start with
JTW;...;....;...;
JTW;...;...;....
JTW;....;...;..
..;...;... (wrong record, line break inserted)
JTW;...;...
So I check for the ; in position [3] of each line. If it's there, I write the line; if not, I append it to the previous one (removing the line break).
I'm having problems now because I'm saving the file as a txt.
By the way, I am converting the Excel spreadsheet to csv by saving as csv in Excel. But I'm not sure if the client is doing that.
So the file as a TXT is perfect. I've checked the records and totals. But now I have to convert it back to csv, and I would really like to do it in the program. Does anybody know how?
Here is my code:
namespace EditorCSV
{
    class Program
    {
        static void Main(string[] args)
        {
            ReadFromFile("c:\\source.csv");
        }

        static void ReadFromFile(string filename)
        {
            StreamReader SR;
            StreamWriter SW;
            SW = File.CreateText("c:\\target.csv");
            string S;
            char C = 'a';
            int i = 0;
            SR = File.OpenText(filename);
            S = SR.ReadLine();
            SW.Write(S);
            S = SR.ReadLine();
            while (S != null)
            {
                try { C = S[3]; }
                catch (IndexOutOfRangeException exception)
                {
                    bool t = false;
                    while (t == false)
                    {
                        t = true;
                        S = SR.ReadLine();
                        try { C = S[3]; }
                        catch (IndexOutOfRangeException ex) { S = SR.ReadLine(); t = false; }
                    }
                }
                if (C.Equals(';'))
                {
                    SW.Write("\r\n" + S);
                    i = i + 1;
                }
                else
                {
                    SW.Write(S);
                }
                S = SR.ReadLine();
            }
            SR.Close();
            SW.Close();
            Console.WriteLine("Records Processed: " + i.ToString() + ".");
            Console.WriteLine("File Created Successfully");
            Console.ReadKey();
        }
    }
}
CSV has predefined ways of handling that. This site provides an easy to read explanation of the standard way to handle all the caveats of CSV.
Nevertheless, there is really no reason to not use a solid, open source library for reading and writing CSV files to avoid making non-standard mistakes. LINQtoCSV is my favorite library for this. It supports reading and writing in a clean and simple way.
Alternatively, this SO question on CSV libraries will give you the list of the most popular choices.
Rather than check if the current line is missing the (") as the first character, check instead to see if the last character is a ("). If it is not, you know you have a line break, and you can read the next line and merge it together.
I am assuming your example data was accurate - fields were wrapped in quotes. If quotes might not delimit a text field (or new-lines are somehow found in non-text data), then all bets are off!
There is a built-in method for reading CSV files in .NET (requires Microsoft.VisualBasic assembly reference added):
public static IEnumerable<string[]> ReadSV(TextReader reader, params string[] separators)
{
    var parser = new Microsoft.VisualBasic.FileIO.TextFieldParser(reader);
    parser.SetDelimiters(separators);
    while (!parser.EndOfData)
        yield return parser.ReadFields();
}
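For example, to use it on the question's file (a sketch only; the semicolon delimiter comes from the sample records above):
using (var reader = new StreamReader("c:\\source.csv"))
{
    foreach (string[] fields in ReadSV(reader, ";"))
    {
        // TextFieldParser takes care of quoted fields and embedded line breaks
        Console.WriteLine(string.Join(" | ", fields));
    }
}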
If you're dealing with really large files this CSV reader claims to be the fastest one you'll find: http://www.codeproject.com/Articles/9258/A-Fast-CSV-Reader
I've used this piece of code recently to parse rows from a CSV file (this is a simplified version):
private void Parse(TextReader reader)
{
    var row = new List<string>();
    var isStringBlock = false;
    var sb = new StringBuilder();
    long charIndex = 0;
    int currentLineCount = 0;

    while (reader.Peek() != -1)
    {
        charIndex++;
        char c = (char)reader.Read();

        if (c == '"')
            isStringBlock = !isStringBlock;

        if (c == separator && !isStringBlock) // end of word
        {
            row.Add(sb.ToString().Trim()); // add word
            sb.Length = 0;
        }
        else if (c == '\n' && !isStringBlock) // end of line
        {
            row.Add(sb.ToString().Trim()); // add last word in line
            sb.Length = 0;
            // DO SOMETHING WITH row HERE!
            currentLineCount++;
            row = new List<string>();
        }
        else
        {
            if (c != '"' && c != '\r') sb.Append(c == '\n' ? ' ' : c);
        }
    }
    row.Add(sb.ToString().Trim()); // add last word
    // DO SOMETHING WITH LAST row HERE!
}
Try CsvHelper (a library I maintain). It ignores empty rows. I believe there is a flag you can set in FastCsvReader to have it handle empty rows also.
Heed the advice from the experts and don't roll your own CSV parser.
Your first thought is, "How do I handle new line breaks?"
Your next thought is, "I need to handle commas inside of quotes."
Your next thought will be, "Oh, crap, I need to handle quotes inside of quotes. Escaped quotes. Double quotes. Single quotes..."
It's a road to madness. Don't write your own. Find a library with an extensive unit test coverage that hits all the hard parts and has gone through hell for you. For .NET, use the free CsvHelper library.
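For example, here is a rough sketch of reading the sample data above with CsvHelper (recent versions take a CultureInfo in the constructor; older versions take only the reader):
using (var reader = new StreamReader("c:\\source.csv"))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
    while (csv.Read())
    {
        // quoted fields that contain line breaks are handled for you
        string name = csv.GetField(0);
        string age = csv.GetField(1);
        string phone = csv.GetField(2);
    }
}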
Maybe you could count for (") during the ReadLine(). If they are odd, that will raise the flag. You could either ignore those lines, or get the next two and eliminate the first "\n" occurrence of the merge lines.
What I usually do is read the text in character by character opposed to line by line, due to this very problem.
As you're reading each character, you should be able to figure out where each cell starts and stops, but also the difference between a linebreak in a row and in a cell: If I remember correctly, for Excel generated files anyway, rows start with \r\n, and newlines in cells are only \r.
There is an example parser in C# that seems to handle your case correctly. Then you can read your data in and purge the line breaks out of it after reading.
Part 2 is the parser, and there is a Part 1 that covers the writer portion.
Read the line.
Split it into columns (fields).
If you have the number of columns expected for each line, then process it.
If not, read the next line, and capture the remaining columns until you get what you need.
Repeat. (A rough sketch of this loop is below.)
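A minimal sketch of that loop, assuming a semicolon-separated file with a known column count (expectedColumns and ProcessRecord are made-up names):
const int expectedColumns = 4; // assumption: you know how many columns a complete record has
string pending = null;
foreach (var rawLine in File.ReadLines("c:\\source.csv"))
{
    var line = pending == null ? rawLine : pending + rawLine;
    if (line.Split(';').Length >= expectedColumns)
    {
        ProcessRecord(line); // hypothetical handler for a complete record
        pending = null;
    }
    else
    {
        pending = line; // incomplete record: keep accumulating until it has enough columns
    }
}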
A somewhat simple regular expression could be used on each line. When it matches, you process each field from the match. When it doesn't find a match, you skip that line.
The regular expression could look something like this.
Match match = Regex.Match(line, @"^(?:,?(?<q>['""])(?<field>.*?)\k<q>|,?(?<field>[^,]*))+$");
if (match.Success)
{
    foreach (Capture capture in match.Groups["field"].Captures)
    {
        string fieldValue = capture.Value;
        // Use the value.
    }
}
Have a look at FileHelpers Library
It supports reading/writing CSV with line breaks, as well as reading/writing Excel.
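A rough sketch of what that looks like with FileHelpers (the record class and field names are made up to match the sample data in the question; check the library docs for the exact attribute options, e.g. for multi-line quoted values):
[DelimitedRecord(",")]
public class PersonRecord
{
    [FieldQuoted('"', QuoteMode.OptionalForBoth)] // the attribute also has options for multi-line quoted values
    public string Name;
    public string Age;
    public string Phone;
}

// reading the file:
var engine = new FileHelperEngine<PersonRecord>();
PersonRecord[] records = engine.ReadFile("c:\\source.csv");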
The LINQy solution:
string csvText = File.ReadAllText("C:\\Test.txt");
var query = csvText
    .Replace(Environment.NewLine, string.Empty)
    .Replace("\"\"", "\",\"").Split(',')
    .Select((i, n) => new { i, n }).GroupBy(a => a.n / 3);
You might also check out my CSV parser SoftCircuits.CsvParser on NuGet. It will not only parse a CSV file but--if wanted--can also automatically map column values to your class properties. And it runs nearly four times faster than CsvHelper.
For a line break to exist in a CSV, there must be an open double quote that's not closed.
Assuming that all CSV cells open and close with a double quote, just check whether there's an odd number of quotation marks
my_string.Count(c => c == '"') % 2 == 1
and if that's the case, continue reading until you have the even number.
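A small sketch of that approach (assumption: quotes only ever appear as field delimiters, so the parity check is reliable; Count here is the LINQ extension, as in the snippet above):
var records = new List<string>();
string buffer = null;
foreach (var line in File.ReadLines("c:\\source.csv"))
{
    buffer = buffer == null ? line : buffer + "\n" + line;
    // an even number of quotes means every opened quote was closed, so the record is complete
    if (buffer.Count(c => c == '"') % 2 == 0)
    {
        records.Add(buffer);
        buffer = null;
    }
}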
