CSV string handling - C#

Typical way of creating a CSV string (pseudocode):
Create a CSV container object (like a StringBuilder in C#).
Loop through the strings you want to add, appending a comma after each one.
After the loop, remove that last superfluous comma.
Code sample:
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
foreach (Contact c in contactList)
{
sb.Append(c.Name + ",");
}
sb.Remove(sb.Length - 1, 1);
//sb.Replace(",", "", sb.Length - 1, 1)
return sb.ToString();
}
I like the idea of adding the comma by checking if the container is empty, but doesn't that mean more processing as it needs to check the length of the string on each occurrence?
I feel that there should be an easier/cleaner/more efficient way of removing that last comma. Any ideas?

You could use LINQ to Objects:
string [] strings = contactList.Select(c => c.Name).ToArray();
string csv = string.Join(",", strings);
Obviously that could all be done in one line, but it's a bit clearer on two.
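For reference, the one-line version might look like this (on .NET 4 and later, string.Join accepts an IEnumerable<string>, so the ToArray call isn't even needed):
string csv = string.Join(",", contactList.Select(c => c.Name));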

Your code is not really compliant with the full CSV format. If you are just generating CSV from data that has no commas, leading/trailing spaces, tabs, newlines or quotes, it should be fine. However, in most real-world data-exchange scenarios, you do need the full implementation.
To generate proper CSV, you can use this:
// Field delimiter used by the encoder below
const char DelimiterChar = ',';
public static String EncodeCsvLine(params String[] fields)
{
StringBuilder line = new StringBuilder();
for (int i = 0; i < fields.Length; i++)
{
if (i > 0)
{
line.Append(DelimiterChar);
}
String csvField = EncodeCsvField(fields[i]);
line.Append(csvField);
}
return line.ToString();
}
static String EncodeCsvField(String field)
{
StringBuilder sb = new StringBuilder();
sb.Append(field);
// Some fields with special characters must be embedded in double quotes
bool embedInQuotes = false;
// Embed in quotes to preserve leading/trailing whitespace
if (sb.Length > 0 &&
(sb[0] == ' ' ||
sb[0] == '\t' ||
sb[sb.Length-1] == ' ' ||
sb[sb.Length-1] == '\t' ))
{
embedInQuotes = true;
}
for (int i = 0; i < sb.Length; i++)
{
// Embed in quotes to preserve: commas, line-breaks etc.
if (sb[i] == DelimiterChar ||
sb[i]=='\r' ||
sb[i]=='\n' ||
sb[i] == '"')
{
embedInQuotes = true;
break;
}
}
// If the field itself has quotes, they must each be represented
// by a pair of consecutive quotes.
sb.Replace("\"", "\"\"");
String rv = sb.ToString();
if (embedInQuotes)
{
rv = "\"" + rv + "\"";
}
return rv;
}
Might not be the world's most efficient code, but it has been tested. The real world sucks compared to quick sample code :)
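For illustration, a quick usage sketch of the encoder above (assuming DelimiterChar is ','): fields containing commas or quotes get wrapped in double quotes, and embedded quotes are doubled.
string line = EncodeCsvLine("Smith, John", "He said \"hi\"", "plain");
// line is: "Smith, John","He said ""hi""",plain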

Why not use one of the open source CSV libraries out there?
I know it sounds like overkill for something that appears so simple, but as you can tell by the comments and code snippets, there's more than meets the eye. In addition to handling full CSV compliance, you'll eventually want to handle both reading and writing CSVs... and you may want file manipulation.
I've used Open CSV on one of my projects before (but there are plenty of others to choose from). It certainly made my life easier. ;)

Don't forget our old friend "for". It's not as nice-looking as foreach but it has the advantage of being able to start at the second element.
public string ReturnAsCSV(ContactList contactList)
{
if (contactList == null || contactList.Count == 0)
return string.Empty;
StringBuilder sb = new StringBuilder(contactList[0].Name);
for (int i = 1; i < contactList.Count; i++)
{
sb.Append(",");
sb.Append(contactList[i].Name);
}
return sb.ToString();
}
You could also wrap the second Append in an "if" that tests whether the Name property contains a double-quote or a comma, and if so, escape them appropriately.
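For example, a minimal escaping sketch along those lines (used in place of the second Append; it mirrors the quoting rules shown in EncodeCsvField above):
string name = contactList[i].Name;
if (name.Contains(",") || name.Contains("\""))
{
    // wrap the field in quotes and double any embedded quotes
    name = "\"" + name.Replace("\"", "\"\"") + "\"";
}
sb.Append(name);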

You could instead add the comma as the first thing inside your foreach.
if (sb.Length > 0) sb.Append(",");
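Put together with the original method, that looks roughly like this:
public string ReturnAsCSV(ContactList contactList)
{
    StringBuilder sb = new StringBuilder();
    foreach (Contact c in contactList)
    {
        // append the separator only once something has already been written
        if (sb.Length > 0) sb.Append(",");
        sb.Append(c.Name);
    }
    return sb.ToString();
}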

You could also make an array of c.Name data and use String.Join method to create your line.
public string ReturnAsCSV(ContactList contactList)
{
List<String> tmpList = new List<string>();
foreach (Contact c in contactList)
{
tmpList.Add(c.Name);
}
return String.Join(",", tmpList.ToArray());
}
This might not be as performant as the StringBuilder approach, but it definitely looks cleaner.
Also, you might want to consider using .CurrentCulture.TextInfo.ListSeparator instead of a hard-coded comma: if your output is going to be imported into other applications, you might have problems with it. ListSeparator may differ across cultures, and MS Excel, at the very least, honors this setting. So:
return String.Join(
System.Globalization.CultureInfo.CurrentCulture.TextInfo.ListSeparator,
tmpList.ToArray());

I like the idea of adding the comma by checking if the container is empty, but doesn't that mean more processing as it needs to check the length of the string on each occurrence?
You're prematurely optimizing; the performance hit would be negligible.

Just a thought, but remember to handle commas and quotation marks (") in the field values, otherwise your CSV file may break the consumer's reader.

I wrote a small class for this in case someone else finds it useful...
public class clsCSVBuilder
{
protected int _CurrentIndex = -1;
protected List<string> _Headers = new List<string>();
protected List<List<string>> _Records = new List<List<string>>();
protected const string SEPERATOR = ",";
public clsCSVBuilder() { }
public void CreateRow()
{
_Records.Add(new List<string>());
_CurrentIndex++;
}
protected string _EscapeString(string str)
{
return string.Format("\"{0}\"", str.Replace("\"", "\"\"")
.Replace("\r\n", " ")
.Replace("\n", " ")
.Replace("\r", " "));
}
protected void _AddRawString(string item)
{
_Records[_CurrentIndex].Add(item);
}
public void AddHeader(string name)
{
_Headers.Add(_EscapeString(name));
}
public void AddRowItem(string item)
{
_AddRawString(_EscapeString(item));
}
public void AddRowItem(int item)
{
_AddRawString(item.ToString());
}
public void AddRowItem(double item)
{
_AddRawString(item.ToString());
}
public void AddRowItem(DateTime date)
{
AddRowItem(date.ToShortDateString());
}
public static string GenerateTempCSVPath()
{
return Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString().ToLower().Replace("-", "") + ".csv");
}
protected string _GenerateCSV()
{
StringBuilder sb = new StringBuilder();
if (_Headers.Count > 0)
{
sb.AppendLine(string.Join(SEPERATOR, _Headers.ToArray()));
}
foreach (List<string> row in _Records)
{
sb.AppendLine(string.Join(SEPERATOR, row.ToArray()));
}
return sb.ToString();
}
public void SaveAs(string path)
{
using (StreamWriter sw = new StreamWriter(path))
{
sw.Write(_GenerateCSV());
}
}
}

I've used this method before. The Length property of StringBuilder is NOT read-only, so decrementing it by one truncates the last character. But you have to make sure the length is not zero to start with (which would happen if your list is empty), because setting the length to less than zero is an error.
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
foreach (Contact c in contactList)
{
sb.Append(c.Name + ",");
}
if (sb.Length > 0)
sb.Length -= 1;
return sb.ToString();
}

I use CsvHelper - it's a great open-source library that lets you generate compliant CSV streams one element at a time or custom-map your classes:
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
using (StringWriter stringWriter = new StringWriter(sb))
{
using (var csvWriter = new CsvHelper.CsvWriter(stringWriter))
{
csvWriter.Configuration.HasHeaderRecord = false;
foreach (Contact c in contactList)
{
csvWriter.WriteField(c.Name);
}
}
}
return sb.ToString();
}
Or, if you map your class, something like this: csvWriter.WriteRecords(contactList);
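As a rough sketch of that mapping route (the ContactMap class and "ContactName" header are just examples, and the exact registration API differs between CsvHelper versions):
public sealed class ContactMap : CsvClassMap<Contact>
{
    public ContactMap()
    {
        // write the Name property under a custom column header
        Map(m => m.Name).Name("ContactName");
    }
}
// then, before writing:
csvWriter.Configuration.RegisterClassMap<ContactMap>();
csvWriter.WriteRecords(contactList);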

How about some trimming?
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
foreach (Contact c in contactList)
{
sb.Append(c.Name + ",");
}
return sb.ToString().Trim(',');
}

How about tracking whether you are on the first item, and only adding a comma before the item if it is not the first one?
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
bool isFirst = true;
foreach (Contact c in contactList) {
if (!isFirst) {
// Only add comma before item if it is not the first item
sb.Append(",");
} else {
isFirst = false;
}
sb.Append(c.Name);
}
return sb.ToString();
}

Related

C# string.split() separate string by uppercase

I've been using the Split() method to split strings, but that only works when you specify a character to split on in string.Split(). Is there any way to split a string wherever it sees an uppercase character?
Is it possible to get separate words out of an unseparated string like:
DeleteSensorFromTemplate
And the result string is to be like:
Delete Sensor From Template
Use Regex.Split:
string[] split = Regex.Split(str, @"(?<!^)(?=[A-Z])");
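Applied to the example from the question, the pattern splits before every uppercase letter except the first:
string str = "DeleteSensorFromTemplate";
string[] split = Regex.Split(str, @"(?<!^)(?=[A-Z])");
string result = string.Join(" ", split); // "Delete Sensor From Template"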
Another way with regex:
public static string SplitCamelCase(string input)
{
return System.Text.RegularExpressions.Regex.Replace(input, "([A-Z])", " $1", System.Text.RegularExpressions.RegexOptions.Compiled).Trim();
}
If you do not like RegEx and you really just want to insert the missing spaces, this will do the job too:
public static string InsertSpaceBeforeUpperCase(this string str)
{
var sb = new StringBuilder();
char previousChar = char.MinValue; // Unicode '\0'
foreach (char c in str)
{
if (char.IsUpper(c))
{
// If not the first character and previous character is not a space, insert a space before uppercase
if (sb.Length != 0 && previousChar != ' ')
{
sb.Append(' ');
}
}
sb.Append(c);
previousChar = c;
}
return sb.ToString();
}
I had some fun with this one and came up with a function that splits by case, as well as groups together caps (it assumes title case for whatever follows) and digits.
Examples:
Input -> "TodayIUpdated32UPCCodes"
Output -> "Today I Updated 32 UPC Codes"
Code (please excuse the funky symbols I use)...
public string[] SplitByCase(this string s) {
var ʀ = new List<string>();
var ᴛ = new StringBuilder();
var previous = SplitByCaseModes.None;
foreach(var ɪ in s) {
SplitByCaseModes mode_ɪ;
if(string.IsNullOrWhiteSpace(ɪ.ToString())) {
mode_ɪ = SplitByCaseModes.WhiteSpace;
} else if("0123456789".Contains(ɪ)) {
mode_ɪ = SplitByCaseModes.Digit;
} else if(ɪ == ɪ.ToString().ToUpper()[0]) {
mode_ɪ = SplitByCaseModes.UpperCase;
} else {
mode_ɪ = SplitByCaseModes.LowerCase;
}
if((previous == SplitByCaseModes.None) || (previous == mode_ɪ)) {
ᴛ.Append(ɪ);
} else if((previous == SplitByCaseModes.UpperCase) && (mode_ɪ == SplitByCaseModes.LowerCase)) {
if(ᴛ.Length > 1) {
ʀ.Add(ᴛ.ToString().Substring(0, ᴛ.Length - 1));
ᴛ.Remove(0, ᴛ.Length - 1);
}
ᴛ.Append(ɪ);
} else {
ʀ.Add(ᴛ.ToString());
ᴛ.Clear();
ᴛ.Append(ɪ);
}
previous = mode_ɪ;
}
if(ᴛ.Length != 0) ʀ.Add(ᴛ.ToString());
return ʀ.ToArray();
}
private enum SplitByCaseModes { None, WhiteSpace, Digit, UpperCase, LowerCase }
Here's another way if you don't want to use string builders or RegEx, which are totally acceptable answers; I just want to offer a different solution:
string Split(string input)
{
string result = "";
for (int i = 0; i < input.Length; i++)
{
if (char.IsUpper(input[i]))
{
result += ' ';
}
result += input[i];
}
return result.Trim();
}

Parsing CSV File with double quotes [duplicate]

Is there a default/official/recommended way to parse CSV files in C#? I don't want to roll my own parser.
Also, I've seen instances of people using ODBC/OLE DB to read CSV via the Text driver, and a lot of people discourage this due to its "drawbacks." What are these drawbacks?
Ideally, I'm looking for a way through which I can read the CSV by column name, using the first record as the header / field names. Some of the answers given are correct but work to basically deserialize the file into classes.
A CSV parser is now a part of .NET Framework.
Add a reference to Microsoft.VisualBasic.dll (works fine in C#, don't mind the name)
using (TextFieldParser parser = new TextFieldParser(@"c:\temp\test.csv"))
{
parser.TextFieldType = FieldType.Delimited;
parser.SetDelimiters(",");
while (!parser.EndOfData)
{
//Process row
string[] fields = parser.ReadFields();
foreach (string field in fields)
{
//TODO: Process field
}
}
}
The docs are here - TextFieldParser Class
P.S. If you need a CSV exporter, try CsvExport (discl: I'm one of the contributors)
CsvHelper (a library I maintain) will read a CSV file into custom objects.
using (var reader = new StreamReader("path\\to\\file.csv"))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
var records = csv.GetRecords<Foo>();
}
Sometimes you don't own the objects you're trying to read into. In this case, you can use fluent mapping because you can't put attributes on the class.
public sealed class MyCustomObjectMap : CsvClassMap<MyCustomObject>
{
public MyCustomObjectMap()
{
Map( m => m.Property1 ).Name( "Column Name" );
Map( m => m.Property2 ).Index( 4 );
Map( m => m.Property3 ).Ignore();
Map( m => m.Property4 ).TypeConverter<MySpecialTypeConverter>();
}
}
Let a library handle all the nitty-gritty details for you! :-)
Check out FileHelpers and stay DRY - Don't Repeat Yourself - no need to re-invent the wheel a gazillionth time....
You basically just need to define the shape of your data - the fields in an individual line of the CSV - by means of a public class (with well-thought-out attributes like default values, replacements for NULL values and so forth), point the FileHelpers engine at a file, and bingo - you get back all the entries from that file. One simple operation - great performance!
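As a rough sketch of that idea (the record class and file name here are only examples): decorate a class with the DelimitedRecord attribute and hand it to a FileHelperEngine.
using FileHelpers;

[DelimitedRecord(",")]
public class ContactRecord
{
    public string Name;
    public string Email;
}

// read every record in the file into strongly-typed objects
var engine = new FileHelperEngine<ContactRecord>();
ContactRecord[] records = engine.ReadFile("contacts.csv");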
In a business application, I use the open source project on codeproject.com, CSVReader.
It works well and has good performance. There is some benchmarking at the link I provided.
A simple example, copied from the project page:
using (CsvReader csv = new CsvReader(new StreamReader("data.csv"), true))
{
int fieldCount = csv.FieldCount;
string[] headers = csv.GetFieldHeaders();
while (csv.ReadNextRecord())
{
for (int i = 0; i < fieldCount; i++)
Console.Write(string.Format("{0} = {1};", headers[i], csv[i]));
Console.WriteLine();
}
}
As you can see, it's very easy to work with.
I know it's a bit late, but I just found the Microsoft.VisualBasic.FileIO library, which has a TextFieldParser class for processing CSV files.
Here is a helper class I use often, in case anyone ever comes back to this thread (I wanted to share it).
I use it because it is simple to port into projects, ready to use:
public class CSVHelper : List<string[]>
{
protected string csv = string.Empty;
protected string separator = ",";
public CSVHelper(string csv, string separator = "\",\"")
{
this.csv = csv;
this.separator = separator;
foreach (string line in Regex.Split(csv, System.Environment.NewLine).ToList().Where(s => !string.IsNullOrEmpty(s)))
{
string[] values = Regex.Split(line, separator);
for (int i = 0; i < values.Length; i++)
{
//Trim values
values[i] = values[i].Trim('\"');
}
this.Add(values);
}
}
}
And use it like:
public List<Person> GetPeople(string csvContent)
{
List<Person> people = new List<Person>();
CSVHelper csv = new CSVHelper(csvContent);
foreach(string[] line in csv)
{
Person person = new Person();
person.Name = line[0];
person.TelephoneNo = line[1];
people.Add(person);
}
return people;
}
[Updated csv helper: bug fixed where the last new line character created a new line]
If you only need to read CSV files, then I recommend this library: A Fast CSV Reader
If you also need to generate CSV files, then use this one: FileHelpers
Both of them are free and open source.
This solution uses the official Microsoft.VisualBasic assembly to parse CSV.
Advantages:
delimiter escaping
skips the header
trims spaces
ignores comments
Code:
using Microsoft.VisualBasic.FileIO;
public static List<List<string>> ParseCSV (string csv)
{
List<List<string>> result = new List<List<string>>();
// To use the TextFieldParser a reference to the Microsoft.VisualBasic assembly has to be added to the project.
using (TextFieldParser parser = new TextFieldParser(new StringReader(csv)))
{
parser.CommentTokens = new string[] { "#" };
parser.SetDelimiters(new string[] { ";" });
parser.HasFieldsEnclosedInQuotes = true;
// Skip over header line.
//parser.ReadLine();
while (!parser.EndOfData)
{
var values = new List<string>();
var readFields = parser.ReadFields();
if (readFields != null)
values.AddRange(readFields);
result.Add(values);
}
}
return result;
}
I have written TinyCsvParser for .NET, which is one of the fastest .NET parsers around and highly configurable to parse almost any CSV format.
It is released under MIT License:
https://github.com/bytefish/TinyCsvParser
You can use NuGet to install it. Run the following command in the Package Manager Console.
PM> Install-Package TinyCsvParser
Usage
Imagine we have a list of persons in a CSV file persons.csv with their first name, last name and birthdate.
FirstName;LastName;BirthDate
Philipp;Wagner;1986/05/12
Max;Mustermann;2014/01/01
The corresponding domain model in our system might look like this.
private class Person
{
public string FirstName { get; set; }
public string LastName { get; set; }
public DateTime BirthDate { get; set; }
}
When using TinyCsvParser you have to define the mapping between the columns in the CSV data and the properties in your domain model.
private class CsvPersonMapping : CsvMapping<Person>
{
public CsvPersonMapping()
: base()
{
MapProperty(0, x => x.FirstName);
MapProperty(1, x => x.LastName);
MapProperty(2, x => x.BirthDate);
}
}
And then we can use the mapping to parse the CSV data with a CsvParser.
namespace TinyCsvParser.Test
{
[TestFixture]
public class TinyCsvParserTest
{
[Test]
public void TinyCsvTest()
{
CsvParserOptions csvParserOptions = new CsvParserOptions(true, new[] { ';' });
CsvPersonMapping csvMapper = new CsvPersonMapping();
CsvParser<Person> csvParser = new CsvParser<Person>(csvParserOptions, csvMapper);
var result = csvParser
.ReadFromFile(@"persons.csv", Encoding.ASCII)
.ToList();
Assert.AreEqual(2, result.Count);
Assert.IsTrue(result.All(x => x.IsValid));
Assert.AreEqual("Philipp", result[0].Result.FirstName);
Assert.AreEqual("Wagner", result[0].Result.LastName);
Assert.AreEqual(1986, result[0].Result.BirthDate.Year);
Assert.AreEqual(5, result[0].Result.BirthDate.Month);
Assert.AreEqual(12, result[0].Result.BirthDate.Day);
Assert.AreEqual("Max", result[1].Result.FirstName);
Assert.AreEqual("Mustermann", result[1].Result.LastName);
Assert.AreEqual(2014, result[1].Result.BirthDate.Year);
Assert.AreEqual(1, result[1].Result.BirthDate.Month);
Assert.AreEqual(1, result[1].Result.BirthDate.Day);
}
}
}
User Guide
A full User Guide is available at:
http://bytefish.github.io/TinyCsvParser/
Here is a short and simple solution.
using (TextFieldParser parser = new TextFieldParser(outputLocation))
{
parser.TextFieldType = FieldType.Delimited;
parser.SetDelimiters(",");
string[] headers = parser.ReadLine().Split(',');
foreach (string header in headers)
{
dataTable.Columns.Add(header);
}
while (!parser.EndOfData)
{
string[] fields = parser.ReadFields();
dataTable.Rows.Add(fields);
}
}
Here is my KISS implementation...
using System;
using System.Collections.Generic;
using System.Text;
class CsvParser
{
public static List<string> Parse(string line)
{
const char escapeChar = '"';
const char splitChar = ',';
bool inEscape = false;
bool priorEscape = false;
List<string> result = new List<string>();
StringBuilder sb = new StringBuilder();
for (int i = 0; i < line.Length; i++)
{
char c = line[i];
switch (c)
{
case escapeChar:
if (!inEscape)
inEscape = true;
else
{
if (!priorEscape)
{
if (i + 1 < line.Length && line[i + 1] == escapeChar)
priorEscape = true;
else
inEscape = false;
}
else
{
sb.Append(c);
priorEscape = false;
}
}
break;
case splitChar:
if (inEscape) //if in escape
sb.Append(c);
else
{
result.Add(sb.ToString());
sb.Length = 0;
}
break;
default:
sb.Append(c);
break;
}
}
if (sb.Length > 0)
result.Add(sb.ToString());
return result;
}
}
Some time ago I wrote a simple class for CSV read/write based on the Microsoft.VisualBasic library. Using this simple class you can work with CSV like with a two-dimensional array. You can find my class at the following link: https://github.com/ukushu/DataExporter
Simple example of usage:
Csv csv = new Csv("\t");//delimiter symbol
csv.FileOpen("c:\\file1.csv");
var row1Cell6Value = csv.Rows[0][5];
csv.AddRow("asdf","asdffffff","5");
csv.FileSave("c:\\file2.csv");
To read the header only, all you need is to read the csv.Rows[0] cells :)
This code reads a CSV into a DataTable:
public static DataTable ReadCsv(string path)
{
DataTable result = new DataTable("SomeData");
using (TextFieldParser parser = new TextFieldParser(path))
{
parser.TextFieldType = FieldType.Delimited;
parser.SetDelimiters(",");
bool isFirstRow = true;
//IList<string> headers = new List<string>();
while (!parser.EndOfData)
{
string[] fields = parser.ReadFields();
if (isFirstRow)
{
foreach (string field in fields)
{
result.Columns.Add(new DataColumn(field, typeof(string)));
}
isFirstRow = false;
}
else
{
int i = 0;
DataRow row = result.NewRow();
foreach (string field in fields)
{
row[i++] = field;
}
result.Rows.Add(row);
}
}
}
return result;
}
A single-source-file solution for straightforward parsing needs. It deals with all the nasty edge cases, such as newline normalization and handling newlines inside quoted string literals. You're welcome!
If your CSV file has a header, you just read out the column names (and compute column indexes) from the first row - simple as that (see the sketch after the class below).
Note that Dump is a LINQPad method; you might want to remove it if you are not using LINQPad.
void Main()
{
var file1 = "a,b,c\r\nx,y,z";
CSV.ParseText(file1).Dump();
var file2 = "a,\"b\",c\r\nx,\"y,z\"";
CSV.ParseText(file2).Dump();
var file3 = "a,\"b\",c\r\nx,\"y\r\nz\"";
CSV.ParseText(file3).Dump();
var file4 = "\"\"\"\"";
CSV.ParseText(file4).Dump();
}
static class CSV
{
public struct Record
{
public readonly string[] Row;
public string this[int index] => Row[index];
public Record(string[] row)
{
Row = row;
}
}
public static List<Record> ParseText(string text)
{
return Parse(new StringReader(text));
}
public static List<Record> ParseFile(string fn)
{
using (var reader = File.OpenText(fn))
{
return Parse(reader);
}
}
public static List<Record> Parse(TextReader reader)
{
var data = new List<Record>();
var col = new StringBuilder();
var row = new List<string>();
for (; ; )
{
var ln = reader.ReadLine();
if (ln == null) break;
if (Tokenize(ln, col, row))
{
data.Add(new Record(row.ToArray()));
row.Clear();
}
}
return data;
}
public static bool Tokenize(string s, StringBuilder col, List<string> row)
{
int i = 0;
if (col.Length > 0)
{
col.AppendLine(); // continuation
if (!TokenizeQuote(s, ref i, col, row))
{
return false;
}
}
while (i < s.Length)
{
var ch = s[i];
if (ch == ',')
{
row.Add(col.ToString().Trim());
col.Length = 0;
i++;
}
else if (ch == '"')
{
i++;
if (!TokenizeQuote(s, ref i, col, row))
{
return false;
}
}
else
{
col.Append(ch);
i++;
}
}
if (col.Length > 0)
{
row.Add(col.ToString().Trim());
col.Length = 0;
}
return true;
}
public static bool TokenizeQuote(string s, ref int i, StringBuilder col, List<string> row)
{
while (i < s.Length)
{
var ch = s[i];
if (ch == '"')
{
// escape sequence
if (i + 1 < s.Length && s[i + 1] == '"')
{
col.Append('"');
i++;
i++;
continue;
}
i++;
return true;
}
else
{
col.Append(ch);
i++;
}
}
return false;
}
}
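Picking up the header note from above, here is a minimal sketch of reading a column by name with this class (the "Name" column is just an example, and System.Linq is assumed for Skip):
var records = CSV.ParseFile("data.csv");
string[] header = records[0].Row;
int nameIndex = Array.IndexOf(header, "Name"); // compute the column index once from the header row
foreach (var record in records.Skip(1)) // the remaining records are data rows
{
    Console.WriteLine(record[nameIndex]);
}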
Another one for this list: Cinchoo ETL - an open source library to read and write multiple file formats (CSV, flat file, Xml, JSON etc).
The sample below shows how to read a CSV file quickly (no POCO object required):
string csv = @"Id, Name
1, Carl
2, Tom
3, Mark";
using (var p = ChoCSVReader.LoadText(csv)
.WithFirstLineHeader()
)
{
foreach (var rec in p)
{
Console.WriteLine($"Id: {rec.Id}");
Console.WriteLine($"Name: {rec.Name}");
}
}
The sample below shows how to read a CSV file using a POCO object:
public partial class EmployeeRec
{
public int Id { get; set; }
public string Name { get; set; }
}
static void CSVTest()
{
string csv = @"Id, Name
1, Carl
2, Tom
3, Mark";
using (var p = ChoCSVReader<EmployeeRec>.LoadText(csv)
.WithFirstLineHeader()
)
{
foreach (var rec in p)
{
Console.WriteLine($"Id: {rec.Id}");
Console.WriteLine($"Name: {rec.Name}");
}
}
}
Please check out articles at CodeProject on how to use it.
This parser supports nested commas and quotes in a column:
static class CSVParser
{
public static string[] ParseLine(string line)
{
List<string> cols = new List<string>();
string value = null;
for(int i = 0; i < line.Length; i++)
{
switch(line[i])
{
case ',':
cols.Add(value);
value = null;
if(i == line.Length - 1)
{// It ends with comma
cols.Add(null);
}
break;
case '"':
cols.Add(ParseEnclosedColumn(line, ref i));
i++;
break;
default:
value += line[i];
if (i == line.Length - 1)
{// Last character
cols.Add(value);
}
break;
}
}
return cols.ToArray();
}//ParseLine
static string ParseEnclosedColumn(string line, ref int index)
{// Example: "b"",bb"
string value = null;
int numberQuotes = 1;
int index2 = index;
for (int i = index + 1; i < line.Length; i++)
{
index2 = i;
switch (line[i])
{
case '"':
numberQuotes++;
if (numberQuotes % 2 == 0)
{
if (i < line.Length - 1 && line[i + 1] == ',')
{
index = i;
return value;
}
}
else if (i > index + 1 && line[i - 1] == '"')
{
value += '"';
}
break;
default:
value += line[i];
break;
}
}
index = index2;
return value;
}//ParseEnclosedColumn
}//class CSVParser
Based on unlimit's post on How to properly split a CSV using C# split() function? :
string[] tokens = System.Text.RegularExpressions.Regex.Split(paramString, ",");
NOTE: this doesn't handle escaped / nested commas, etc., and therefore is only suitable for certain simple CSV lists.
If anyone wants a snippet they can plop into their code without having to bind a library or download a package, here is a version I wrote:
public static string FormatCSV(List<string> parts)
{
string result = "";
foreach (string s in parts)
{
if (result.Length > 0)
{
result += ",";
if (s.Length == 0)
continue;
}
if (s.Length > 0)
{
result += "\"" + s.Replace("\"", "\"\"") + "\"";
}
else
{
// cannot output double quotes since it's considered an escape for a quote
result += ",";
}
}
return result;
}
enum CSVMode
{
CLOSED = 0,
OPENED_RAW = 1,
OPENED_QUOTE = 2
}
public static List<string> ParseCSV(string input)
{
List<string> results;
CSVMode mode;
char[] letters;
string content;
mode = CSVMode.CLOSED;
content = "";
results = new List<string>();
letters = input.ToCharArray();
for (int i = 0; i < letters.Length; i++)
{
char letter = letters[i];
char nextLetter = '\0';
if (i < letters.Length - 1)
nextLetter = letters[i + 1];
// If its a quote character
if (letter == '"')
{
// If that next letter is a quote
if (nextLetter == '"' && mode == CSVMode.OPENED_QUOTE)
{
// Then this quote is escaped and should be added to the content
content += letter;
// Skip the escape character
i++;
continue;
}
else
{
// otherwise it's not an escaped quote and is an opening or closing one
// Character is skipped
// If it was open, then close it
if (mode == CSVMode.OPENED_QUOTE)
{
results.Add(content);
// reset the content
content = "";
mode = CSVMode.CLOSED;
// If there is a next letter available
if (nextLetter != '\0')
{
// If it is a comma
if (nextLetter == ',')
{
i++;
continue;
}
else
{
throw new Exception("Expected comma. Found: " + nextLetter);
}
}
}
else if (mode == CSVMode.OPENED_RAW)
{
// If it was opened raw, then just add the quote
content += letter;
}
else if (mode == CSVMode.CLOSED)
{
// Otherwise open it as a quote
mode = CSVMode.OPENED_QUOTE;
}
}
}
// If it's a comma separator
else if (letter == ',')
{
// If in quote mode
if (mode == CSVMode.OPENED_QUOTE)
{
// Just read it
content += letter;
}
// If raw, then close the content
else if (mode == CSVMode.OPENED_RAW)
{
results.Add(content);
content = "";
mode = CSVMode.CLOSED;
}
// If it was closed, then open it raw
else if (mode == CSVMode.CLOSED)
{
mode = CSVMode.OPENED_RAW;
results.Add(content);
content = "";
}
}
else
{
// If opened quote, just read it
if (mode == CSVMode.OPENED_QUOTE)
{
content += letter;
}
// If opened raw, then read it
else if (mode == CSVMode.OPENED_RAW)
{
content += letter;
}
// It closed, then open raw
else if (mode == CSVMode.CLOSED)
{
mode = CSVMode.OPENED_RAW;
content += letter;
}
}
}
// If it was still reading when the buffer finished
if (mode != CSVMode.CLOSED)
{
results.Add(content);
}
return results;
}
For smaller CSV inputs, LINQ is entirely sufficient.
For example, for the following CSV file content:
schema_name,description,utype
"IX_HE","High-Energy data","x"
"III_spectro","Spectrosopic data","d"
"VI_misc","Miscellaneous","f"
"vcds1","Catalogs only available in CDS","d"
"J_other","Publications from other journals","b"
when we read the whole content into a single string called data, then:
using System;
using System.IO;
using System.Linq;
var data = File.ReadAllText(Path2CSV);
// helper split characters
var newline = Environment.NewLine.ToCharArray();
var comma = ",".ToCharArray();
var quote = "\"".ToCharArray();
// split input string data to lines
var lines = data.Split(newline);
// first line is header, take the header fields
foreach (var col in lines.First().Split(comma)) {
// do something with "col"
}
// we skip the first line, all the rest are real data lines/fields
foreach (var line in lines.Skip(1)) {
// first we split the data line by comma character
// next we remove double quotes from each split element using Trim()
// finally we make an array
var fields = line.Split(comma)
.Select(_ => { _ = _.Trim(quote); return _; })
.ToArray();
// do something with the "fields" array
}

How to remove some special words from string content?

I have some strings containing code for emoji icons, like :grinning:, :kissing_heart:, or :bouquet:. I'd like to process them to remove the emoji codes.
For example, given:
Hello:grinning: , how are you?:kissing_heart: Are you fine?:bouquet:
I want to get this:
Hello , how are you? Are you fine?
I know I can use this code:
richTextBox2.Text = richTextBox1.Text.Replace(":kissing_heart:", "").Replace(":bouquet:", "").Replace(":grinning:", "").ToString();
However, there are 856 different emoji icons I have to remove (which, using this method, would take 856 calls to Replace()). Is there any other way to accomplish this?
You can use Regex to match the words between colons, i.e. :anything:. Using the Replace overload that takes a function, you can add other validation.
string pattern = @":(.*?):";
string input = "Hello:grinning: , how are you?:kissing_heart: Are you fine?:bouquet: Are you super fan, for example. :words not to replace:";
string output = Regex.Replace(input, pattern, (m) =>
{
if (m.ToString().Split(' ').Count() > 1) // more than 1 word - such validations help avoid replacing the user's own text
{
return m.ToString();
}
return String.Empty;
}); // "Hello , how are you? Are you fine? Are you super fan, for example. :words not to replace:"
If you don't want to use the Replace overload that takes a lambda expression, you can use \w, as #yorye-nathan mentioned, to match only words.
string pattern = @":(\w*):";
string input = "Hello:grinning: , how are you?:kissing_heart: Are you fine?:bouquet: Are you super fan, for example. :words not to replace:";
string output = Regex.Replace(input, pattern, String.Empty); // "Hello , how are you? Are you fine? Are you super fan, for example. :words not to replace:"
string Text = "Hello:grinning: , how are you?:kissing_heart: Are you fine?:bouquet:";
i would solve it that way
List<string> Emoj = new List<string>() { ":kissing_heart:", ":bouquet:", ":grinning:" };
Emoj.ForEach(x => Text = Text.Replace(x, string.Empty));
UPDATE - referring to Detail's comment
Another approach: replace only the emojis that are actually present
List<string> Emoj = new List<string>() { ":kissing_heart:", ":bouquet:", ":grinning:" };
var Matches = Regex.Matches(Text, @":(\w*):").Cast<Match>().Select(x => x.Value);
Emoj.Intersect(Matches).ToList().ForEach(x => Text = Text.Replace(x, string.Empty));
But I'm not sure it makes that big a difference for such short chat strings, and it's more important to have code that's easy to read/maintain. The OP's question was about reducing the redundancy of chained Text.Replace().Replace() calls, not about the most efficient solution.
I would use a combination of some of the techniques already suggested. Firstly, I'd store the 800+ emoji strings in a database and then load them up at runtime. Use a HashSet to store these in memory, so that we have an O(1) lookup time (very fast). Use Regex to pull out all potential pattern matches from the input and then compare each to our hashed emoji, removing the valid ones and leaving any non-emoji patterns the user has entered themselves...
public class Program
{
//hashset for in memory representation of emoji,
//lookups are O(1), so very fast
private HashSet<string> _emoji = null;
public Program(IEnumerable<string> emojiFromDb)
{
//load emoji from datastore (db/file,etc)
//into memory at startup
_emoji = new HashSet<string>(emojiFromDb);
}
public string RemoveEmoji(string input)
{
//pattern to search for
string pattern = @":(\w*):";
string output = input;
//use regex to find all potential patterns in the input
MatchCollection matches = Regex.Matches(input, pattern);
//only do this if we actually find the
//pattern in the input string...
if (matches.Count > 0)
{
//refine this to a distinct list of unique patterns
IEnumerable<string> distinct =
matches.Cast<Match>().Select(m => m.Value).Distinct();
//then check each one against the hashset, only removing
//registered emoji. This allows non-emoji versions
//of the pattern to survive...
foreach (string match in distinct)
if (_emoji.Contains(match))
output = output.Replace(match, string.Empty);
}
return output;
}
}
public class MainClass
{
static void Main(string[] args)
{
var program = new Program(new string[] { ":grinning:", ":kissing_heart:", ":bouquet:" });
string output = program.RemoveEmoji("Hello:grinning: :imadethis:, how are you?:kissing_heart: Are you fine?:bouquet: This is:a:strange:thing :to type:, but valid :nonetheless:");
Console.WriteLine(output);
}
}
Which results in:
Hello :imadethis:, how are you? Are you fine? This is:a:strange:thing :to type:,
but valid :nonetheless:
You do not have to replace all 856 emojis. You only have to replace those that appear in the string. So have a look at:
Finding a substring using C# with a twist
Basically you extract all the tokens, i.e. the strings between : and :, and then replace those with string.Empty.
If you are concerned that the search will return strings that are not emojis, such as :some other text:, then you could use a hash table lookup to make sure that replacing the found token is appropriate.
Finally got around to writing something up. I'm combining a couple of previously mentioned ideas with the fact that we should only loop over the string once. Based on those requirements, this sounds like the perfect job for LINQ.
You should probably cache the HashSet. Other than that, this has O(n) performance and only goes over the list once. Would be interesting to benchmark, but this could very well be the most efficient solution.
The approach is pretty straightforward.
First load all Emoij in a HashSet so we can quickly look them up.
Split the string with input.Split(':') at the :.
Decide if we keep the current element.
If the last element was a match, keep the current element.
If the last element was no match, check if the current element matches.
If it does, ignore it. (This effectively removes the substring from the output).
If it doesn't, append : back and keep it.
Rebuild our string with a StringBuilder.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication1
{
static class Program
{
static void Main(string[] args)
{
ISet<string> emojiList = new HashSet<string>(new[] { "kissing_heart", "bouquet", "grinning" });
Console.WriteLine("Hello:grinning: , ho:w: a::re you?:kissing_heart:kissing_heart: Are you fine?:bouquet:".RemoveEmoji(':', emojiList));
Console.ReadLine();
}
public static string RemoveEmoji(this string input, char delimiter, ISet<string> emojiList)
{
StringBuilder sb = new StringBuilder();
input.Split(delimiter).Aggregate(true, (prev, curr) =>
{
if (prev)
{
sb.Append(curr);
return false;
}
if (emojiList.Contains(curr))
{
return true;
}
sb.Append(delimiter);
sb.Append(curr);
return false;
});
return sb.ToString();
}
}
}
Edit: I did something cool using the Rx library, but then realized Aggregate is the IEnumerable counterpart of Scan in Rx, thus simplifying the code even more.
If efficiency is a concern and to avoid processing "false positives", consider rewriting the string using a StringBuilder while skipping the special emoji tokens:
static HashSet<string> emojis = new HashSet<string>()
{
"grinning",
"kissing_heart",
"bouquet"
};
static string RemoveEmojis(string input)
{
StringBuilder sb = new StringBuilder();
int length = input.Length;
int startIndex = 0;
int colonIndex = input.IndexOf(':');
while (colonIndex >= 0 && startIndex < length)
{
//Keep normal text
int substringLength = colonIndex - startIndex;
if (substringLength > 0)
sb.Append(input.Substring(startIndex, substringLength));
//Advance the feed and get the next colon
startIndex = colonIndex + 1;
colonIndex = input.IndexOf(':', startIndex);
if (colonIndex < 0) //No more colons, so no more emojis
{
//Don't forget that first colon we found
sb.Append(':');
//Add the rest of the text
sb.Append(input.Substring(startIndex));
break;
}
else //Possible emoji, let's check
{
string token = input.Substring(startIndex, colonIndex - startIndex);
if (emojis.Contains(token)) //It's a match, so we skip this text
{
//Advance the feed
startIndex = colonIndex + 1;
colonIndex = input.IndexOf(':', startIndex);
}
else //No match, so we keep the normal text
{
//Don't forget the colon
sb.Append(':');
//Instead of doing another substring next loop, let's just use the one we already have
sb.Append(token);
startIndex = colonIndex;
}
}
}
return sb.ToString();
}
static void Main(string[] args)
{
List<string> inputs = new List<string>()
{
"Hello:grinning: , how are you?:kissing_heart: Are you fine?:bouquet:",
"Tricky test:123:grinning:",
"Hello:grinning: :imadethis:, how are you?:kissing_heart: Are you fine?:bouquet: This is:a:strange:thing :to type:, but valid :nonetheless:"
};
foreach (string input in inputs)
{
Console.WriteLine("In <- " + input);
Console.WriteLine("Out -> " + RemoveEmojis(input));
Console.WriteLine();
}
Console.WriteLine("\r\n\r\nPress enter to exit...");
Console.ReadLine();
}
Outputs:
In <- Hello:grinning: , how are you?:kissing_heart: Are you fine?:bouquet:
Out -> Hello , how are you? Are you fine?
In <- Tricky test:123:grinning:
Out -> Tricky test:123
In <- Hello:grinning: :imadethis:, how are you?:kissing_heart: Are you fine?:bouquet: This is:a:strange:thing :to type:, but valid :nonetheless:
Out -> Hello :imadethis:, how are you? Are you fine? This is:a:strange:thing :to type:, but valid :nonetheless:
Try the code I put below; I think using this function your problem will be solved.
string s = "Hello:grinning: , how are you?:kissing_heart: Are you fine?:bouquet:";
string rmv = ""; string remove = "";
int i = 0; int k = 0;
A:
rmv = "";
for (i = k; i < s.Length; i++)
{
if (Convert.ToString(s[i]) == ":")
{
for (int j = i + 1; j < s.Length; j++)
{
if (Convert.ToString(s[j]) != ":")
{
rmv += s[j];
}
else
{
remove += rmv + ",";
i = j;
k = j + 1;
goto A;
}
}
}
}
string[] str = remove.Split(',');
for (int x = 0; x < str.Length-1; x++)
{
s = s.Replace(Convert.ToString(":" + str[x] + ":"), "");
}
Console.WriteLine(s);
Console.ReadKey();
I'd use an extension method like this:
public static class Helper
{
public static string MyReplace(this string dirty, char separator)
{
string newText = "";
bool replace = false;
for (int i = 0; i < dirty.Length; i++)
{
if(dirty[i] == separator) { replace = !replace ; continue;}
if(replace ) continue;
newText += dirty[i];
}
return newText;
}
}
Usage:
richTextBox2.Text = richTextBox2.Text.MyReplace(':');
This method should be better in terms of performance compared to the one with Regex.
I would split the text with the ':' and then build the string excluding the found emoji names.
const char marker = ':';
var textSections = text.Split(marker);
var emojiRemovedText = string.Empty;
var notMatchedCount = 0;
textSections.ToList().ForEach(section =>
{
if (emojiNames.Contains(section))
{
notMatchedCount = 0;
}
else
{
if (notMatchedCount++ > 0)
{
emojiRemovedText += marker.ToString();
}
emojiRemovedText += section;
}
});

How to delete last character in a string in C#?

I'm building a string for a POST request in the following way:
var itemsToAdd = sl.SelProds.ToList();
if (sl.SelProds.Count() != 0)
{
foreach (var item in itemsToAdd)
{
paramstr = paramstr + string.Format("productID={0}&", item.prodID.ToString());
}
}
After I get the resulting paramstr, I need to delete the last character & in it.
How do I delete the last character in a string using C#?
Personally I would go with Rob's suggestion, but if you want to remove one (or more) specific trailing character(s) you can use TrimEnd. E.g.
paramstr = paramstr.TrimEnd('&');
Build it with string.Join instead:
var parameters = sl.SelProds.Select(x=>"productID="+x.prodID).ToArray();
paramstr = string.Join("&", parameters);
string.Join takes a separator ("&") and an array of strings (parameters), and inserts the separator between each element of the array.
string source;
// source gets initialized
string dest;
if (source.Length > 0)
{
dest = source.Substring(0, source.Length - 1);
}
Try this (note that Remove returns a new string, since strings are immutable, so the result has to be assigned back):
paramstr = paramstr.Remove(paramstr.Length - 1, 1);
I would just not add it in the first place:
var sb = new StringBuilder();
bool first = true;
foreach (var foo in items) {
if (first)
first = false;
else
sb.Append('&');
// for example:
var escapedValue = System.Web.HttpUtility.UrlEncode(foo);
sb.Append(key).Append('=').Append(escapedValue);
}
var s = sb.ToString();
string str="This is test string.";
str=str.Remove(str.Length-1);
It's better if you use string.Join.
class Product
{
public int ProductID { get; set; }
}
static void Main(string[] args)
{
List<Product> products = new List<Product>()
{
new Product { ProductID = 1 },
new Product { ProductID = 2 },
new Product { ProductID = 3 }
};
string theURL = string.Join("&", products.Select(p => string.Format("productID={0}", p.ProductID)));
Console.WriteLine(theURL);
}
It's good practice to use a StringBuilder when concatenating a lot of strings and you can then use the Remove method to get rid of the final character.
StringBuilder paramBuilder = new StringBuilder();
foreach (var item in itemsToAdd)
{
paramBuilder.AppendFormat("productID={0}&", item.prodID.ToString());
}
if (paramBuilder.Length > 1)
paramBuilder.Remove(paramBuilder.Length-1, 1);
string s = paramBuilder.ToString();
paramstr.Remove((paramstr.Length-1),1);
This does work for removing a single character from the end of a string. But it doesn't work for removing, say, 4 characters, because the second argument is the count of characters to remove, so the following only deletes one character at position Length-4:
paramstr.Remove((paramstr.Length-4),1);
As an alternative, I have used this approach instead:
DateFrom = DateFrom.Substring(0, DateFrom.Length-4);
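As a side note, string.Remove also has a single-argument overload that removes everything from the given index to the end, so this sketch is equivalent to the Substring call above:
DateFrom = DateFrom.Remove(DateFrom.Length - 4);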
Add a StringBuilder extension method.
public static StringBuilder RemoveLast(this StringBuilder sb, string value)
{
if(sb.Length < 1) return sb;
sb.Remove(sb.ToString().LastIndexOf(value), value.Length);
return sb;
}
then use:
yourStringBuilder.RemoveLast(",");
