I successfully imported the following file into my database, but my import method strips double quotes while saving. I want to export this file exactly as it was, i.e. add quotes around any string that contains the delimiter. How can I achieve this?
Here is my CSV file with headers and one record.
PTNAME,REGNO/ID,BLOOD GRP,WARD NAME,DOC NAME,XRAY,PATHO,MEDICATION,BLOOD GIVEN
Mr. GHULAVE VASANTRAO PANDURANG,SH1503/00847,,RECOVERY,SHELKE SAMEER,"X RAY PBH RT IT FEMUR FRACTURE POST OP XRAY -ACCEPTABLE WITH IMPLANT IN SITU 2D ECHO MILD CONC LVH GOOD LV SYSTOLIC FUN, ALTERED LV DIASTOLIC FUN.", HB-11.9gm% TLC-8700 PLT COUNT-195000 BSL-173 UREA -23 CREATININE -1.2 SR.ELECTROLYTES-WNR BLD GROUP-B + HIV-NEGATIVE HBsAG-NEGATIVE PT INR -15/15/1.0. ECG SINUS TACHYCARDIA ,IV TAXIMAX 1.5 GM 1-0-1 IV TRAMADOL DRIP 1-0-1 TAB NUSAID SP 1-0-1 TAB ARCOPAN D 1-0-1 CAP BONE C PLUS 1 -0-1 TAB ANXIT 0.5 MG 0-0-1 ANKLE TRACTION 3 KG RT LL ,NOT GIVEN
Here is my export method:
public void DataExport(string SelectQuery, string fileName)
{
    try
    {
        DataTable dt = new DataTable();
        SqlDataAdapter da = new SqlDataAdapter(SelectQuery, con);
        da.Fill(dt);
        // Sets file path and prints headers
        // string filepath = txtreceive.Text + "\\" + fileName;
        string filepath = @"C:\Users\Priya\Desktop\R\z.csv";
        StreamWriter sw = new StreamWriter(filepath);
        int iColCount = dt.Columns.Count;
        // First we will write the headers if IsFirstRowColumnNames is true:
        for (int i = 0; i < iColCount; i++)
        {
            sw.Write(dt.Columns[i]);
            if (i < iColCount - 1)
            {
                sw.Write(',');
            }
        }
        sw.Write(sw.NewLine);
        foreach (DataRow dr in dt.Rows) // Now write all the rows.
        {
            for (int i = 0; i < iColCount; i++)
            {
                if (!Convert.IsDBNull(dr[i]))
                {
                    sw.Write(dr[i].ToString());
                }
                if (i < iColCount - 1)
                {
                    sw.Write(',');
                }
            }
            sw.Write(sw.NewLine);
        }
        sw.Close();
    }
    catch { }
}
if (myString.Contains(","))
{
    myWriter.Write("\"{0}\"", myString);
}
else
{
    myWriter.Write(myString);
}
The very simplest thing that you can do is replace the line sw.Write(dr[i].ToString()); with this:
var text = dr[i].ToString();
text = text.Contains(",") ? String.Format("\"{0}\"", text) : text;
sw.Write(text);
However, there are quite a few other issues with your code - most importantly you are opening a lot of disposable resources without disposing them properly.
I'd suggest a bit of a rewrite, like this:
public void DataExport(string SelectQuery, string fileName)
{
    using (var dt = new DataTable())
    {
        using (var da = new SqlDataAdapter(SelectQuery, con))
        {
            da.Fill(dt);
            var header = String.Join(
                ",",
                dt.Columns.Cast<DataColumn>().Select(dc => dc.ColumnName));
            var rows =
                from dr in dt.Rows.Cast<DataRow>()
                select String.Join(
                    ",",
                    from dc in dt.Columns.Cast<DataColumn>()
                    let t1 = Convert.IsDBNull(dr[dc]) ? "" : dr[dc].ToString()
                    let t2 = t1.Contains(",") ? String.Format("\"{0}\"", t1) : t1
                    select t2);
            using (var sw = new StreamWriter(fileName))
            {
                sw.WriteLine(header);
                foreach (var row in rows)
                {
                    sw.WriteLine(row);
                }
                sw.Close();
            }
        }
    }
}
I've also separated querying the data through the data adapter from writing the data to the stream writer.
And, of course, I'm adding double-quotes to text that contains commas.
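Note that if a field can also contain embedded double quotes or line breaks, RFC 4180 expects those quotes doubled before the field is wrapped. A minimal sketch of a fuller escape (EscapeCsvField is a hypothetical helper, not part of the code above):
// Hypothetical helper: quotes a field per RFC 4180 conventions.
static string EscapeCsvField(string text)
{
    // Double any embedded quotes, then wrap the field if it
    // contains a comma, a quote, or a line break.
    if (text.Contains(",") || text.Contains("\"") || text.Contains("\n"))
    {
        return "\"" + text.Replace("\"", "\"\"") + "\"";
    }
    return text;
}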
The only other thing I was concerned about was the fact that con is clearly a class-level variable and it is being left open. That's bad. Connections should be opened and closed each time they are used. You should probably consider making that change too.
You could also remove the stream write entirely by replacing that block with this:
File.WriteAllLines(fileName, new [] { header }.Concat(rows));
And, finally, wrapping your code in a try { ... } catch { } is just a bad practice. It's like saying "I'm writing some code that could fail, but I don't care and I don't want to be told if it does fail". You should only ever catch specific exceptions that you can actually deal with. In this code you should consider catching file exceptions like running out of hard drive space, or writing to a read-only file, etc.
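For example, a rough sketch of catching specific exceptions here (the messages are placeholders, not part of the original code):
try
{
    // ... fill the DataTable and write the CSV ...
}
catch (IOException ex)
{
    // Disk full, file locked, etc.
    Console.WriteLine("Could not write the export file: " + ex.Message);
}
catch (UnauthorizedAccessException ex)
{
    // Read-only file or insufficient permissions.
    Console.WriteLine("Access denied: " + ex.Message);
}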
I'm trying to process a set of files. I have a given number of txt files, which I'm currently joining into one txt file so I can apply filters to it. The creation of the one file from multiple works great, but I have two questions and one error I can't seem to get around.
1 - I'm getting an error when I try to read the newly created file so I can apply the filters: "The process cannot access the file because it is being used by another process."
2 - Am I approaching this in the correct or most efficient way? By that I mean, can the reading and filtering be applied before creating the concatenated file? I still need to create a new file, but it would be nice to apply everything before creating it, so that the file is already cleaned and ready for use outside the application.
Here is the current code that is having the issue, with the one commented line that was my other attempt at releasing the file:
private DataTable processFileData(string fname, string locs2 = "0", string effDate = "0", string items = "0")
{
    DataTable dt = new DataTable();
    string fullPath = fname;
    try
    {
        using (StreamReader sr = new StreamReader(File.OpenRead(fullPath)))
        //using (StreamReader sr = new StreamReader(File.Open(fullPath, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            while (!sr.EndOfStream)
            {
                string line = sr.ReadLine();
                if (!String.IsNullOrWhiteSpace(line))
                {
                    string[] headers = line.ToUpper().Split('|');
                    while (dt.Columns.Count < headers.Length)
                    {
                        dt.Columns.Add();
                    }
                    string[] rows = line.ToUpper().Split('|');
                    DataRow dr = dt.NewRow();
                    for (int i = 0; i < rows.Count(); i++)
                    {
                        dr[i] = rows[i];
                    }
                    dt.Rows.Add(dr);
                }
            }
            //sr.Close();
            sr.Dispose();
        }
        string cls = String.Format("Column6 NOT LIKE ('{0}')", String.Join("','", returnClass()));
        dt.DefaultView.RowFilter = cls;
        return dt;
    }
    catch (IOException ex)
    {
        Console.WriteLine(ex.Message);
        return dt;
    }
}
Here is the concatenation method:
private void Consolidate(string fileType)
{
    string sourceFolder = @"H:\Merchant\Strategy\Signs\BACKUP TAG DATA\Wave 6\" + sfld;
    string destinationFile = @"H:\Merchant\Strategy\Signs\BACKUP TAG DATA\Wave 6\" + sfld + @"\" + sfld + "_consolidation.txt";
    // Specify wildcard search to match TXT files that will be combined
    string[] filePaths = Directory.GetFiles(sourceFolder, fileType);
    StreamWriter fileDest = new StreamWriter(destinationFile, true);
    int i;
    for (i = 0; i < filePaths.Length; i++)
    {
        string file = filePaths[i];
        string[] lines = File.ReadAllLines(file);
        if (i > 0)
        {
            lines = lines.Skip(1).ToArray(); // Skip header row for all but first file
        }
        foreach (string line in lines)
        {
            fileDest.WriteLine(line);
        }
    }
    if (sfld == "CLR")
    {
        clrFilter(destinationFile);
    }
    if (sfld == "UPL")
    {
        uplFilter(destinationFile);
    }
    if (sfld == "HD")
    {
        hdFilter(destinationFile);
    }
    if (sfld == "PD")
    {
        pdFilter(destinationFile);
    }
    fileDest.Close();
    fileDest.Dispose();
}
What I'm trying to accomplish is reading a minimum of 2 or 3 txt files (and as many as 13) and applying some filtering. But I'm getting this error:
"The process cannot access the file because it is being used by another process."
You're disposing the stream reader with the following line:
sr.Dispose();
A using statement will dispose the stream when it goes out of scope, so remove the Dispose line.
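For clarity, here is a trimmed sketch of the same read loop; the using block alone takes care of disposal:
using (StreamReader sr = new StreamReader(File.OpenRead(fullPath)))
{
    while (!sr.EndOfStream)
    {
        string line = sr.ReadLine();
        // ... build the DataTable from the line ...
    }
} // sr is disposed here automatically, even if an exception is thrown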
I am trying to use the DotNetZip open source library for creating large zip files.
I need to be able to write part of each data row of a DataTable to each stream writer (see the code below). Another limitation is that I can't do this in memory, as the contents are large (several gigabytes per entry).
The problem I have is that despite writing to each stream separately, all the output is written to the last entry only. The first entry is blank. Does anybody have any idea how to fix this issue?
static void Main(string fileName)
{
    var dt = CreateDataTable();
    var streamWriters = new StreamWriter[2];
    using (var zipOutputStream = new ZipOutputStream(File.Create(fileName)))
    {
        for (var i = 0; i < 2; i++)
        {
            var entryName = "file" + i + ".txt";
            zipOutputStream.PutNextEntry(entryName);
            streamWriters[i] = new StreamWriter(zipOutputStream, Encoding.UTF8);
        }
        WriteContents(streamWriters[0], streamWriters[1], dt);
        zipOutputStream.Close();
    }
}
private DataTable CreateDataTable()
{
    var dt = new DataTable();
    dt.Columns.AddRange(new DataColumn[] { new DataColumn("col1"), new DataColumn("col2"), new DataColumn("col3"), new DataColumn("col4") });
    for (int i = 0; i < 100000; i++)
    {
        var row = dt.NewRow();
        for (int j = 0; j < 4; j++)
        {
            row[j] = j * 1;
        }
        dt.Rows.Add(row);
    }
    return dt;
}
private void WriteContents(StreamWriter writer1, StreamWriter writer2, DataTable dt)
{
    foreach (DataRow dataRow in dt.Rows)
    {
        writer1.WriteLine(dataRow[0] + ", " + dataRow[1]);
        writer2.WriteLine(dataRow[2] + ", " + dataRow[3]);
    }
}
Expected results:
Both file0.txt and file1.txt should be written.
Actual results:
Only file1.txt is written, with all of the content. file0.txt is blank.
It seems to be the expected behaviour according to the docs:
If you don't call Write() between two calls to PutNextEntry(), the first entry is inserted into the zip file as a file of zero size. This may be what you want.
So it seems to me that it is not possible to do what you want through the current API.
Also, as a zip file is a continuous sequence of zip entries, it is probably physically impossible to create entries in parallel, as you would have to know the size of each entry before starting a new one.
Perhaps you could just create separate archives and then combine them (if I am not mistaken, there is a simple API to do that).
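If iterating the DataTable more than once is acceptable, another workaround consistent with that behaviour is to write each entry completely before starting the next. A rough sketch under that assumption (not the original single-pass design):
using (var zipOutputStream = new ZipOutputStream(File.Create(fileName)))
{
    for (var i = 0; i < 2; i++)
    {
        zipOutputStream.PutNextEntry("file" + i + ".txt");
        // Do not dispose this writer: that would close the underlying zip stream.
        var writer = new StreamWriter(zipOutputStream, Encoding.UTF8);
        foreach (DataRow dataRow in dt.Rows)
        {
            // Entry 0 gets col1/col2, entry 1 gets col3/col4.
            writer.WriteLine(dataRow[2 * i] + ", " + dataRow[2 * i + 1]);
        }
        writer.Flush(); // flush before PutNextEntry starts the next entry
    }
}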
I have a remote SQL connection in C# that needs to execute a query and save its results to the user's local hard disk. There is a fairly large amount of data this thing can return, so I need to think of an efficient way of storing it. I've read before that first putting the whole result into memory and then writing it is not a good idea, so if someone could help, that would be great!
I am currently storing the SQL result data in a DataTable, although I am thinking it could be better to do something in while (myReader.Read()) { ... }.
Below is the code that gets the results:
DataTable t = new DataTable();
string myQuery = QueryLoader.ReadQueryFromFileWithBdateEdate(@"Resources\qrs\qryssysblo.q", newdate, newdate);
using (SqlDataAdapter a = new SqlDataAdapter(myQuery, sqlconn.myConnection))
{
    a.Fill(t);
}
var result = string.Empty;
for (int i = 0; i < t.Rows.Count; i++)
{
    for (int j = 0; j < t.Columns.Count; j++)
    {
        result += t.Rows[i][j] + ",";
    }
    result += "\r\n";
}
So now I have this huge result string. And I have the datatable. There has to be a much better way of doing it?
Thanks.
You are on the right track yourself. Use a loop with while (myReader.Read()) { ... } and write each record to the text file inside the loop. The .NET framework and operating system will take care of flushing the buffers to disk in an efficient way.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = QueryLoader.ReadQueryFromFileWithBdateEdate(
        @"Resources\qrs\qryssysblo.q", newdate, newdate);
    using (SqlDataReader reader = cmd.ExecuteReader())
    using (StreamWriter writer = new StreamWriter(@"c:\temp\file.txt"))
    {
        while (reader.Read())
        {
            // Using Name and Phone as example columns.
            writer.WriteLine("Name: {0}, Phone : {1}",
                reader["Name"], reader["Phone"]);
        }
    }
}
I came up with this, it's a better CSV writer than the other answers:
public static class DataReaderExtension
{
    public static void ToCsv(this IDataReader dataReader, string fileName, bool includeHeaderAsFirstRow)
    {
        const string Separator = ",";
        StreamWriter streamWriter = new StreamWriter(fileName);
        StringBuilder sb = null;
        if (includeHeaderAsFirstRow)
        {
            sb = new StringBuilder();
            for (int index = 0; index < dataReader.FieldCount; index++)
            {
                if (dataReader.GetName(index) != null)
                    sb.Append(dataReader.GetName(index));
                if (index < dataReader.FieldCount - 1)
                    sb.Append(Separator);
            }
            streamWriter.WriteLine(sb.ToString());
        }
        while (dataReader.Read())
        {
            sb = new StringBuilder();
            for (int index = 0; index < dataReader.FieldCount; index++)
            {
                if (!dataReader.IsDBNull(index))
                {
                    string value = dataReader.GetValue(index).ToString();
                    if (dataReader.GetFieldType(index) == typeof(String))
                    {
                        // Double embedded quotes, then wrap fields that
                        // contain the separator in quotes.
                        if (value.IndexOf("\"") >= 0)
                            value = value.Replace("\"", "\"\"");
                        if (value.IndexOf(Separator) >= 0)
                            value = "\"" + value + "\"";
                    }
                    sb.Append(value);
                }
                if (index < dataReader.FieldCount - 1)
                    sb.Append(Separator);
            }
            streamWriter.WriteLine(sb.ToString());
        }
        dataReader.Close();
        streamWriter.Close();
    }
}
Usage: mydataReader.ToCsv("myfile.csv", true);
Rob Sedgwick's answer is more like it, but it can be improved and simplified. This is how I did it:
string separator = ";";
string fieldDelimiter = "";
bool useHeaders = true;
bool first;
string line;
string connectionString = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
using (SqlConnection conn = new SqlConnection(connectionString))
{
    using (SqlCommand cmd = conn.CreateCommand())
    {
        conn.Open();
        string query = @"SELECT whatever";
        cmd.CommandText = query;
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (!reader.Read())
            {
                return;
            }
            List<string> columnNames = GetColumnNames(reader);
            // Write headers if required
            if (useHeaders)
            {
                first = true;
                foreach (string columnName in columnNames)
                {
                    response.Write(first ? string.Empty : separator);
                    line = string.Format("{0}{1}{2}", fieldDelimiter, columnName, fieldDelimiter);
                    response.Write(line);
                    first = false;
                }
                response.Write("\n");
            }
            // Write all records
            do
            {
                first = true;
                foreach (string columnName in columnNames)
                {
                    response.Write(first ? string.Empty : separator);
                    string value = reader[columnName] == null ? string.Empty : reader[columnName].ToString();
                    line = string.Format("{0}{1}{2}", fieldDelimiter, value, fieldDelimiter);
                    response.Write(line);
                    first = false;
                }
                response.Write("\n");
            }
            while (reader.Read());
        }
    }
}
And you need to have a function GetColumnNames:
List<string> GetColumnNames(IDataReader reader)
{
List<string> columnNames = new List<string>();
for (int i = 0; i < reader.FieldCount; i++)
{
columnNames.Add(reader.GetName(i));
}
return columnNames;
}
I agree that your best bet here would be to use a SqlDataReader. Something like this:
StreamWriter YourWriter = new StreamWriter(@"c:\testfile.txt");
SqlCommand YourCommand = new SqlCommand();
SqlConnection YourConnection = new SqlConnection(YourConnectionString);
YourCommand.Connection = YourConnection;
YourCommand.CommandText = myQuery;
YourConnection.Open();
using (YourConnection)
{
    using (SqlDataReader sdr = YourCommand.ExecuteReader())
    using (YourWriter)
    {
        while (sdr.Read())
            YourWriter.WriteLine(sdr[0].ToString() + sdr[1].ToString() + ",");
    }
}
Mind you, in the while loop, you can write that line to the text file in any format you see fit with the column data from the SqlDataReader.
Keeping your original approach, here is a quick win:
Instead of using a string as a temporary buffer, use a StringBuilder. That allows you to use the .Append(string) method for concatenations instead of the += operator.
The += operator is especially inefficient; if you place it in a loop where it is repeated (potentially) millions of times, performance will suffer.
The .Append(string) method won't destroy the original object, so it's faster.
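Applied to the loop from the question, a minimal sketch might look like this:
var sb = new StringBuilder();
for (int i = 0; i < t.Rows.Count; i++)
{
    for (int j = 0; j < t.Columns.Count; j++)
    {
        // Append mutates the builder in place instead of allocating a new string.
        sb.Append(t.Rows[i][j]).Append(',');
    }
    sb.Append("\r\n");
}
string result = sb.ToString();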
Using the response object without calling Response.Close() causes, at least in some instances, the HTML of the page writing out the data to be written to the file. If you use Response.Close(), the connection can be closed prematurely and cause an error while producing the file.
It is recommended to use HttpApplication.CompleteRequest(); however, this appears to always cause the HTML to be written to the end of the file.
I have tried the stream in conjunction with the response object and have had success in the development environment. I have not tried it in production yet.
I used .CSV to export data from the database with a DataReader. In my project, I read the DataReader and create the .CSV file manually. In a loop, I read the DataReader and, for every row, append the cell values to the result string. To separate columns I use "," and to separate rows I use "\n". Finally, I save the result string as result.csv.
I suggest this high-performance extension. I tested it, and it quickly exported 600,000 rows as .CSV.
I use:
private void SaveData(string path)
{
    DataTable tblResult = new DataTable();
    using (SqlCommand cm = new SqlCommand("select something", objConnect))
    {
        tblResult.Load(cm.ExecuteReader());
    }
    if (tblResult != null)
    {
        using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            BinaryFormatter bin = new BinaryFormatter();
            bin.Serialize(fs, tblResult);
        }
    }
}
It's easy to use, and easy to load, with:
private DataTable LoadData(string path)
{
    DataTable t = new DataTable();
    using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    {
        BinaryFormatter bin = new BinaryFormatter();
        t = (DataTable)bin.Deserialize(fs);
    }
    return t;
}
You can also use this method to save a DataSet.
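A minimal sketch of that DataSet variant, under the same assumptions as the code above:
private void SaveDataSet(DataSet ds, string path)
{
    // Same BinaryFormatter approach; DataSet is serializable too.
    using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.Write))
    {
        BinaryFormatter bin = new BinaryFormatter();
        bin.Serialize(fs, ds);
    }
}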
I have a text file that looks like this:
1 \t a
2 \t b
3 \t c
4 \t d
I have a dataset: DataSet ZX = new DataSet();
Is there any way to insert the text file values into this dataset?
Thanks in advance.
You will have to parse the file manually. Maybe like this:
string data = System.IO.File.ReadAllText("myfile.txt");
DataRow row = null;
DataSet ds = new DataSet();
DataTable tab = new DataTable();
tab.Columns.Add("First");
tab.Columns.Add("Second");
string[] rows = data.Split(new char[] { '\n' }, StringSplitOptions.RemoveEmptyEntries);
foreach (string r in rows)
{
    string[] columns = r.Split(new char[] { '\t' }, StringSplitOptions.RemoveEmptyEntries);
    if (columns.Length <= tab.Columns.Count)
    {
        row = tab.NewRow();
        for (int i = 0; i < columns.Length; i++)
            row[i] = columns[i];
        tab.Rows.Add(row);
    }
}
ds.Tables.Add(tab);
UPDATE
If you don't know how many columns are in the text file, you can modify my original example as follows (assuming that the number of columns is constant for all rows):
// ...
string[] columns = r.Split(new char[] { '\t' }, StringSplitOptions.RemoveEmptyEntries);
if (tab.Columns.Count == 0)
{
    for (int i = 0; i < columns.Length; i++)
        tab.Columns.Add("Column" + (i + 1));
}
if (columns.Length <= tab.Columns.Count)
{
    // ...
Also remove the initial creation of table columns:
// tab.Columns.Add("First");
// tab.Columns.Add("Second")
Sure there is.
Define a DataTable and add DataColumns with the data types that you want.
ReadLine the file, split the values by tab, and add each value to a new DataRow obtained by calling NewRow, then add the row to the DataTable.
There is a nice code sample at MSDN; take a look and follow the steps.
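A minimal sketch of those steps, assuming a two-column tab-separated file like the one in the question (the column names are made up, and ZX is the DataSet from the question):
DataTable table = new DataTable("Items");
table.Columns.Add("Number", typeof(int));
table.Columns.Add("Letter", typeof(string));
using (StreamReader reader = new StreamReader("myfile.txt"))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // Split each line on the tab character and fill a new row.
        string[] parts = line.Split('\t');
        DataRow row = table.NewRow();
        row["Number"] = int.Parse(parts[0].Trim());
        row["Letter"] = parts[1].Trim();
        table.Rows.Add(row);
    }
}
ZX.Tables.Add(table);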
Yes, create the data table on the fly; refer to this article for how to do it.
Read your file line by line and add those values to your data table; refer to this article for how to read a text file.
Try this
private DataTable GetTextToTable(string path)
{
    try
    {
        DataTable dataTable = new DataTable
        {
            Columns = {
                { "MyID", typeof(int) },
                "MyData"
            },
            TableName = "MyTable"
        };
        // Create an instance of StreamReader to read from a file.
        // The using statement also closes the StreamReader.
        using (StreamReader sr = new StreamReader(path))
        {
            String line;
            // Read and display lines from the file until the end of
            // the file is reached.
            while ((line = sr.ReadLine()) != null)
            {
                string[] words = line.Split(new string[] { "\\t" }, StringSplitOptions.RemoveEmptyEntries);
                dataTable.Rows.Add(words[0], words[1]);
            }
        }
        return dataTable;
    }
    catch (Exception e)
    {
        // Let the user know what went wrong.
        throw new Exception(e.Message);
    }
}
Call it like
GetTextToTable(Path.Combine(Server.MapPath("."), "TextFile.txt"));
You could also check out CSV File Imports in .NET
I'd also like to add the following to the "volpan" code:
String _source = System.IO.File.ReadAllText(FilePath, Encoding.GetEncoding(1253));
It's good to specify the encoding of your text file, so you can read the data correctly and, in my case, export it after modification to another file.
I need to read from a CSV/tab-delimited file and write to such a file as well from .NET.
The difficulty is that I don't know the structure of each file and need to write the CSV/tab file to a DataTable, which the FileHelpers library doesn't seem to support.
I've already written it for Excel using OLEDB, but can't really see a way to write a tab file for this, so I will go back to a library.
Can anyone help with suggestions?
.NET comes with a CSV/tab-delimited file parser called the TextFieldParser class.
http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.fileio.textfieldparser.aspx
It supports the full RFC for CSV files and has really good error reporting.
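A short sketch of how it might be used to fill a DataTable (ParseToDataTable is a made-up helper name; it assumes the first row holds the column names):
using Microsoft.VisualBasic.FileIO; // add a reference to Microsoft.VisualBasic
static DataTable ParseToDataTable(string path, string delimiter)
{
    var table = new DataTable();
    using (var parser = new TextFieldParser(path))
    {
        parser.TextFieldType = FieldType.Delimited;
        parser.SetDelimiters(delimiter); // "," or "\t"
        parser.HasFieldsEnclosedInQuotes = true;
        bool isHeader = true;
        while (!parser.EndOfData)
        {
            string[] fields = parser.ReadFields();
            if (isHeader)
            {
                // First row: build the columns.
                foreach (string name in fields)
                    table.Columns.Add(name);
                isHeader = false;
            }
            else
            {
                table.Rows.Add(fields);
            }
        }
    }
    return table;
}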
I used this CsvReader; it is really great and well configurable. It behaves well with all kinds of escaping for strings and separators. The escaping in other quick-and-dirty implementations was poor, but this lib is really great at reading. With a few additional lines of code you can also add a cache if you need to.
Writing is not supported, but it is rather trivial to implement yourself. Or take inspiration from this code.
Simple example with CsvHelper:
using (TextWriter writer = new StreamWriter(filePath))
{
    var csvWriter = new CsvWriter(writer);
    csvWriter.Configuration.Delimiter = "\t";
    csvWriter.Configuration.Encoding = Encoding.UTF8;
    csvWriter.WriteRecords(exportRecords);
}
Here are a couple of CSV reader implementations:
http://www.codeproject.com/KB/database/CsvReader.aspx
http://www.heikniemi.fi/jhlib/ (just one part of the library; includes a CSV writer too)
I doubt there is a standard way to convert CSV to a DataTable or database 'automatically'; you'll have to write code to do that. How to do that is a separate question.
You'll create your datatable in code, and (presuming a header row) can create columns based on your first line in the file. After that, it will simply be a matter of reading the file and creating new rows based on the data therein.
You could use something like this:
DataTable Tbl = new DataTable();
using (StreamReader sr = new StreamReader(path))
{
    // Build columns from the header row.
    string headerRow = sr.ReadLine();
    string[] headers = headerRow.Split('\t'); // Or ','
    foreach (string h in headers)
    {
        Tbl.Columns.Add(new DataColumn(h));
    }
    // Add one row per remaining line.
    while (!sr.EndOfStream)
    {
        string data = sr.ReadLine();
        string[] cells = data.Split('\t');
        DataRow row = Tbl.NewRow();
        for (int i = 0; i < cells.Length; i++)
        {
            row[i] = cells[i];
        }
        Tbl.Rows.Add(row);
    }
}
The above code has not been compiled, so it may have some errors, but it should get you on the right track.
You can read and write CSV files with this.
It may be helpful for you.
Pass the split character to the "serparationChar" parameter.
Example:
private DataTable dataTable = null;
private bool IsHeader = true;
private string headerLine = string.Empty;
private List<string> AllLines = new List<string>();
private StringBuilder sb = new StringBuilder();
private char seprateChar = ',';

public DataTable ReadCSV(string path, bool IsReadHeader, char serparationChar)
{
    seprateChar = serparationChar;
    IsHeader = IsReadHeader;
    using (StreamReader sr = new StreamReader(path, Encoding.Default))
    {
        while (!sr.EndOfStream)
        {
            AllLines.Add(sr.ReadLine());
        }
        createTemplate(AllLines);
    }
    return dataTable;
}

public void WriteCSV(string path, DataTable dtable, char serparationChar)
{
    AllLines = new List<string>();
    seprateChar = serparationChar;
    List<string> StableHeadrs = new List<string>();
    int colCount = 0;
    using (StreamWriter sw = new StreamWriter(path))
    {
        sb.Clear();
        foreach (DataColumn col in dtable.Columns)
        {
            sb.Append(col.ColumnName);
            if (dtable.Columns.Count - 1 > colCount)
                sb.Append(seprateChar);
            colCount++;
        }
        AllLines.Add(sb.ToString());
        for (int i = 0; i < dtable.Rows.Count; i++)
        {
            sb.Clear();
            for (int j = 0; j < dtable.Columns.Count; j++)
            {
                sb.Append(Convert.ToString(dtable.Rows[i][j]));
                if (dtable.Columns.Count - 1 > j)
                    sb.Append(seprateChar);
            }
            AllLines.Add(sb.ToString());
        }
        foreach (string dataline in AllLines)
        {
            sw.WriteLine(dataline);
        }
    }
}
private DataTable createTemplate(List<string> lines)
{
    List<string> headers = new List<string>();
    dataTable = new DataTable();
    if (lines.Count > 0)
    {
        string[] argHeaders = null;
        for (int i = 0; i < lines.Count; i++)
        {
            if (i > 0)
            {
                // Other lines: add as rows.
                DataRow newRow = dataTable.NewRow();
                string[] argLines = lines[i].Split(seprateChar);
                for (int b = 0; b < argLines.Length; b++)
                {
                    newRow[b] = argLines[b];
                }
                dataTable.Rows.Add(newRow);
            }
            else
            {
                // Header line: add as columns.
                argHeaders = lines[0].Split(seprateChar);
                foreach (string c in argHeaders)
                {
                    DataColumn column = new DataColumn(c, typeof(string));
                    dataTable.Columns.Add(column);
                }
            }
        }
    }
    return dataTable;
}
I have found the best solution:
http://www.codeproject.com/Articles/415732/Reading-and-Writing-CSV-Files-in-Csharp
I just had to rewrite it slightly:
void ReadTest()
{
    // Read sample data from CSV file
    using (CsvFileReader reader = new CsvFileReader("ReadTest.csv"))
    {
        CsvRow row = new CsvRow();
        while (reader.ReadRow(row))
        {
            foreach (string s in row)
            {
                Console.Write(s);
                Console.Write(" ");
            }
            Console.WriteLine();
            row = new CsvRow(); // this line added
        }
    }
}
Well, there is another library, Cinchoo ETL, an open source one for reading and writing CSV files.
Here are a couple of ways you can read a CSV file like this one:
Id, Name
1, Tom
2, Mark
This is how you can use this library to read it:
using (var reader = new ChoCSVReader("emp.csv").WithFirstLineHeader())
{
    foreach (dynamic item in reader)
    {
        Console.WriteLine(item.Id);
        Console.WriteLine(item.Name);
    }
}
If you have a POCO object defined to match up with the CSV file, like below:
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}
you can parse the same file using this POCO class as below:
using (var reader = new ChoCSVReader<Employee>("emp.csv").WithFirstLineHeader())
{
    foreach (var item in reader)
    {
        Console.WriteLine(item.Id);
        Console.WriteLine(item.Name);
    }
}
Please check out articles at CodeProject on how to use it.
Disclaimer: I'm the author of this library