I have a text file that looks like this:
1 \t a
2 \t b
3 \t c
4 \t d
I have a dataset: DataSet ZX = new DataSet();
Is there any way to insert the text file values into this dataset?
Thanks in advance.
You will have to parse the file manually. Maybe like this:
string data = System.IO.File.ReadAllText("myfile.txt");
DataRow row = null;
DataSet ds = new DataSet();
DataTable tab = new DataTable();
tab.Columns.Add("First");
tab.Columns.Add("Second");
string[] rows = data.Split(new char[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries); // split on CR and LF so Windows line endings don't leave '\r' in the last column
foreach (string r in rows)
{
string[] columns = r.Split(new char[] { '\t' }, StringSplitOptions.RemoveEmptyEntries);
if (columns.Length <= tab.Columns.Count)
{
row = tab.NewRow();
for (int i = 0; i < columns.Length; i++)
row[i] = columns[i];
tab.Rows.Add(row);
}
}
ds.Tables.Add(tab);
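To sanity-check the result, here is a quick sketch that dumps the parsed table back out (it assumes the code above has already run):

// Print each parsed row to verify the load.
foreach (DataRow r in ds.Tables[0].Rows)
{
    Console.WriteLine("{0} -> {1}", r["First"], r["Second"]);
}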
UPDATE
If you don't know how many columns are in the text file, you can modify my original example as follows (assuming that the number of columns is constant for all rows):
// ...
string[] columns = r.Split(new char[] { '\t' }, StringSplitOptions.RemoveEmptyEntries);
if (tab.Columns.Count == 0)
{
for(int i = 0; i < columns.Length; i++)
tab.Columns.Add("Column" + (i + 1));
}
if (columns.Length <= tab.Columns.Count)
{
// ...
Also remove the initial creation of table columns:
// tab.Columns.Add("First");
// tab.Columns.Add("Second")
-- Pavel
Sure there is.
Define a DataTable and add DataColumns with the data types that you want.
ReadLine the file, split the values by tab, and add each line to the DataTable as a DataRow created by calling NewRow.
There is a nice code sample on MSDN; take a look and follow the steps.
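For illustration, a minimal sketch of those steps might look like this (the file name, column names, and types are assumptions, not from the MSDN sample):

// Requires System.Data and System.IO.
DataTable table = new DataTable("Items");
table.Columns.Add("Id", typeof(int));       // assumed column name and type
table.Columns.Add("Value", typeof(string)); // assumed column name and type

using (StreamReader reader = new StreamReader("myfile.txt")) // assumed file name
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        string[] parts = line.Split('\t');  // split the values by tab
        DataRow row = table.NewRow();       // one DataRow per line
        row["Id"] = int.Parse(parts[0].Trim());
        row["Value"] = parts[1].Trim();
        table.Rows.Add(row);
    }
}

DataSet zx = new DataSet();
zx.Tables.Add(table);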
Yes, create the data table on the fly; refer to this article for how to do it.
Read your file line by line and add those values to your data table; refer to this article for how to read a text file.
Try this
private DataTable GetTextToTable(string path)
{
try
{
DataTable dataTable = new DataTable
{
Columns = {
{"MyID", typeof(int)},
"MyData"
},
TableName="MyTable"
};
// Create an instance of StreamReader to read from a file.
// The using statement also closes the StreamReader.
using (StreamReader sr = new StreamReader(path))
{
String line;
// Read and display lines from the file until the end of
// the file is reached.
while ((line = sr.ReadLine()) != null)
{
// Split on a real tab character; the original "\\t" would split on a literal backslash followed by 't'.
string[] words = line.Split(new char[] { '\t' }, StringSplitOptions.RemoveEmptyEntries);
if (words.Length >= 2)
    dataTable.Rows.Add(words[0], words[1]);
}
}
return dataTable;
}
catch (Exception)
{
    // Rethrow as-is; wrapping the exception in 'new Exception(e.Message)'
    // would lose the original type and stack trace.
    throw;
}
}
Call it like:
DataTable table = GetTextToTable(Path.Combine(Server.MapPath("."), "TextFile.txt"));
You could also check out CSV File Imports in .NET
I'd also like to add the following to volpan's code:
String _source = System.IO.File.ReadAllText(FilePath, Encoding.GetEncoding(1253));
It's good to specify the encoding of your text file so the data is read correctly; in my case I also needed to export the modified data to another file.
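For example, a sketch of reading and then writing back with the same encoding (the paths and the modification step are placeholders):

// Requires System.IO and System.Text.
Encoding enc = Encoding.GetEncoding(1253); // Greek (Windows-1253), as in the line above
string source = File.ReadAllText("input.txt", enc);

// ... modify 'source' as needed ...

// Write with the same encoding so no characters are mangled on export.
File.WriteAllText("output.txt", source, enc);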
I'm having an issue loading just 3 records from a text file into a DataGridView.
private void button1_Click(object sender, EventArgs e)
{
using (OpenFileDialog ofd = new OpenFileDialog())
{
if (ofd.ShowDialog() == DialogResult.OK && radioButton1.Checked)
{
System.IO.StreamReader file = new System.IO.StreamReader(ofd.FileName);
string[] columnnames = file.ReadLine().Split('|');
List<string> list = new List<string>();
DataTable dt = new DataTable();
foreach (string c in columnnames)
{
dt.Columns.Add(c);
}
string newline;
while ((newline = file.ReadLine()) != null)
{
DataRow dr = dt.NewRow();
string[] values = newline.Split('|');
for (int i = 0; i < values.Length; i++)
{
dr[i] = values[i];
}
dt.Rows.Add(dr);
}
file.Close();
dataGridView1.DataSource = dt;
}
}
}
I'm trying to have someone select a radio button such as "show 3 records" and open a text file; it would then list only 3 records in a DataGridView. I can get the file to load, but I can't figure out how to make it show only 3 records from the text file. Could someone help me, please?
Use File.ReadLines and Take
var records = File.ReadLines(ofd.FileName).Take(3);
foreach(var record in records)
{
// do stuff
}
The advantage of this approach is that, under the hood, ReadLines creates an iterator and calls the same plumbing as StreamReader, reading each line individually. Combined with Take, it only reads and loads what is actually iterated (in this case, the first 3 lines).
You can find (and follow) the source code here
https://referencesource.microsoft.com/mscorlib/R/d989485a49fbbfd2.html
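As a sketch, the questioner's grid-loading code could use this approach like so (dataGridView1, the '|' delimiter, and the header line come from the question; the rest is an assumption):

// Requires System.Linq. Reads the header plus 3 records, and nothing more.
var lines = File.ReadLines(ofd.FileName).Take(4).ToList();

DataTable dt = new DataTable();
foreach (string c in lines[0].Split('|'))   // first line holds the column names
    dt.Columns.Add(c);

foreach (string record in lines.Skip(1))    // the three data records
    dt.Rows.Add(record.Split('|'));

dataGridView1.DataSource = dt;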
Additional Resources
File.ReadLines Method
Reads the lines of a file.
Enumerable.Take<TSource>(IEnumerable<TSource>, Int32) Method
Returns a specified number of contiguous elements from the start of a sequence.
You need to count the number of lines read and exit the read loop once it reaches 3 lines.
int maxLines = 3;
string newline;
while ((newline = file.ReadLine()) != null && --maxLines >= 0)
{
....
}
I am trying to develop a tool that will take a CSV file and import it into a datatable with the first column in the datatable being a row counter.
The CSV files are from different customers and so have different structures. Some have a header line; some have several header lines; some have no header line. They also have varying numbers of columns.
So far, I have the code below.
public void Import_CSV()
{
OpenFileDialog dialog = new OpenFileDialog();
dialog.Filter = "CSV Files (*.csv)|*.csv";
bool? result = dialog.ShowDialog();
if (result ?? false)
{
string[] headers;
string CSVFilePathName = dialog.FileName;
string delimSelect = cboDelimiter.Items.GetItemAt(cboDelimiter.SelectedIndex).ToString();
// If user hasn't selected a delimiter, assume comma
if (delimSelect == "")
{
delimSelect = ",";
}
string[] delimiterType = new string[] { delimSelect }; // use the defaulted delimiter, not the raw combo-box value
DataTable dt = new DataTable();
// Read first line of file to get number of fields and create columns and column numbers in data table
using (StreamReader sr1 = new StreamReader(CSVFilePathName))
{
headers = sr1.ReadLine().Split(delimiterType, StringSplitOptions.None);
//dt.Columns.Add("ROW", typeof(int));
//dt.Columns["ROW"].AutoIncrement = true;
//dt.Columns["ROW"].AutoIncrementSeed = 1;
//dt.Columns["ROW"].AutoIncrementStep = 1;
int colCount = 1;
foreach (string header in headers)
{
dt.Columns.Add("C" + colCount.ToString());
colCount++;
}
}
using (StreamReader sr = new StreamReader(CSVFilePathName))
{
while (!sr.EndOfStream)
{
string[] rows = sr.ReadLine().Split(delimiterType, StringSplitOptions.None);
DataRow dr = dt.NewRow();
for (int i = 0; i < headers.Length; i++)
{
dr[i] = rows[i];
}
dt.Rows.Add(dr);
}
}
dtGrid.ItemsSource = dt.DefaultView;
txtColCount.Text = dtGrid.Columns.Count.ToString();
txtRowCount.Text = dtGrid.Items.Count.ToString();
}
}
This works, inasmuch as it creates column headers (C1, C2, ... according to how many there are in the CSV file) and the rows are then written in, but I want to add a column at the far left with a row number as the rows are added. In the code, you can see I've got a section commented out that creates an auto-number column, but I'm totally stuck on how the rows are written into the datatable. If I uncomment that section, I get errors as the first column in the CSV file tries to write into an int field. I know you can specify which field in each row goes in which column, but that won't help here as the columns are unknown at this point. I just need it to be able to read ANY file in, regardless of structure, but with the row counter.
Hope that makes sense.
You write in your question that uncommenting the code that adds the first column leads to errors. This is because of your loop: it starts at 0, but the 0th column is the one you added manually. So you just need to skip it in your loop by starting at 1. The source array, however, still has to be processed from its 0th element.
So the solution is:
First, uncomment the row adding code.
Then, in your loop, introduce an offset to leave the first column untouched:
for (int i = 0; i < headers.Length; i++)
{
dr[i + 1] = rows[i];
}
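Putting it together with the commented-out section from the question, the relevant parts would look roughly like this:

// Auto-incrementing counter as the first column (uncommented from the question).
dt.Columns.Add("ROW", typeof(int));
dt.Columns["ROW"].AutoIncrement = true;
dt.Columns["ROW"].AutoIncrementSeed = 1;
dt.Columns["ROW"].AutoIncrementStep = 1;

// ... create the C1..Cn columns as before ...

while (!sr.EndOfStream)
{
    string[] rows = sr.ReadLine().Split(delimiterType, StringSplitOptions.None);
    DataRow dr = dt.NewRow();      // "ROW" fills itself via AutoIncrement
    for (int i = 0; i < headers.Length; i++)
    {
        dr[i + 1] = rows[i];       // offset by 1 so the counter column stays untouched
    }
    dt.Rows.Add(dr);
}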
I'm trying to import a CSV file into my C# site and save it in the database. While doing research I learned about CSV parsing; I've tried to implement it, but I've run into some trouble. Here is a portion of my code so far:
string fileext = Path.GetExtension(fupcsv.PostedFile.FileName);
if (fileext == ".csv")
{
string csvPath = Server.MapPath("~/CSVFiles/") + Path.GetFileName(fupcsv.PostedFile.FileName);
fupcsv.SaveAs(csvPath);
// Add Columns to Datatable to bind data
DataTable dtCSV = new DataTable();
dtCSV.Columns.AddRange(new DataColumn[2] { new DataColumn("ModuleId", typeof(int)), new DataColumn("CourseId", typeof(int))});
// Read all the lines of the text file and close it.
string[] csvData = File.ReadAllLines(csvPath);
// iterate over each row and Split it to New line.
foreach (string row in csvData)
{
// Check for is null or empty row record
if (!string.IsNullOrEmpty(row))
{
using (TextFieldParser parser = new TextFieldParser(csvPath))
{
parser.TextFieldType = FieldType.Delimited;
parser.SetDelimiters(",");
while (!parser.EndOfData)
{
//Process row
string[] fields = parser.ReadFields();
int i = 1;
foreach (char cell in row)
{
dtCSV.Rows[dtCSV.Rows.Count - 1][i] = cell;
i++;
}
}
}
}
}
}
I keep getting the error "There is no row at position -1" at "dtCSV.Rows[dtCSV.Rows.Count - 1][i] = cell;".
Any help would be greatly appreciated, thanks.
You are trying to index rows that you have not created. Instead of
dtCSV.Rows[dtCSV.Rows.Count - 1][i] = cell;
use
DataRow row = dtCSV.NewRow();
row[i] = cell;        // fill each cell of the detached row
dtCSV.Rows.Add(row);  // NewRow() alone does not attach the row to the table
I also suggest you start indexing i from 0 and not from 1.
All right, so it turns out there were a bunch of errors in your code, so I made some edits.
string fileext = Path.GetExtension(fupcsv.PostedFile.FileName);
if (fileext == ".csv")
{
string csvPath = Server.MapPath("~/CSVFiles/") + Path.GetFileName(fupcsv.PostedFile.FileName);
fupcsv.SaveAs(csvPath);
DataTable dtCSV = new DataTable();
dtCSV.Columns.AddRange(new DataColumn[2] { new DataColumn("ModuleId", typeof(int)), new DataColumn("CourseId", typeof(int))});
var csvData = File.ReadAllLines(csvPath);
bool headersSkipped = false;
foreach (string line in csvData)
{
if (!headersSkipped)
{
headersSkipped = true;
continue;
}
// Check for is null or empty row record
if (!string.IsNullOrEmpty(line))
{
//Process row
int i = 0;
var row = dtCSV.NewRow();
foreach (var cell in line.Split(','))
{
row[i] = Int32.Parse(cell);
i++;
}
dtCSV.Rows.Add(row);
dtCSV.AcceptChanges();
}
}
}
I ditched the TextFieldParser solution solely because I'm not familiar with it, but if you want to stick with it, it shouldn't be hard to reintegrate it.
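For completeness, here is a sketch of how a TextFieldParser version might look if you do want to reintegrate it (TextFieldParser lives in Microsoft.VisualBasic.FileIO, so you need a reference to the Microsoft.VisualBasic assembly; the header-skipping is carried over from above):

using (TextFieldParser parser = new TextFieldParser(csvPath))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    bool headersSkipped = false;
    while (!parser.EndOfData)
    {
        string[] fields = parser.ReadFields(); // handles quoted fields for you
        if (!headersSkipped) { headersSkipped = true; continue; }
        var row = dtCSV.NewRow();
        for (int i = 0; i < fields.Length; i++)
            row[i] = Int32.Parse(fields[i]);
        dtCSV.Rows.Add(row);
    }
}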
Here are some of the things you got wrong:
Not calling NewRow() to create a new row, or not adding it to the table with Rows.Add(row)
Iterating through the characters in row instead of the fields you parsed
Not parsing the value of cell - its type is string and you are trying to add it to an int column
Some other things worth noting (just to improve your code's performance and readability :))
Consider using var when declaring new variables, it takes a lot of the stress away from having to worry about exactly what type of variable you are creating
As others said in the comments, use ReadAllLines(); it splits your text file into lines neatly, making it easier to iterate through.
Most of the times when working with arrays or lists, you need to index from 0, not from 1
AcceptChanges() commits the pending changes and resets the row states; note that if you later plan to push these rows to a database with a data adapter, you should not call it first, since it marks the rows as Unchanged
I have a lot of different CSV files with data in them (including headers).
I can't figure out how to add a column in the first position and fill its cells with the filename value (one per row).
Can anybody help me?
Thanks in advance
In case your CSV files are small enough to load into memory:
// #1 Read CSV File
string[] CSVDump = File.ReadAllLines(@"c:\temp.csv");
// #2 Split Data
List<List<string>> CSV = CSVDump.Select(x => x.Split(';').ToList()).ToList();
//#3 Update Data
for (int i = 0; i < CSV.Count; i++)
{
CSV[i].Insert(0, i == 0 ? "Headername" : "Filename");
}
//#4 Write CSV File
File.WriteAllLines(@"c:\temp2.csv", CSV.Select(x => string.Join(";", x)));
Adding a column in the first position is a special case which can be implemented without CSV parsing. All you need to do is prepend every line with the desired value and a comma. The only exception is the very first line, which should be prepended with a header name:
string newColumnHeader = "FileName";
string textToPrepend = @"some\file\name";
long lineNumber = 0;
using (StreamWriter sw = File.CreateText("output.csv"))
foreach (var line in File.ReadAllLines("input.csv"))
sw.WriteLine(
(lineNumber++ == 0 ?
newColumnHeader :
textToPrepend) +
"," + line);
I would do a little bit more coding. First, read the CSV into a DataTable:
public static DataTable ConvertCSVtoDataTable(string strFilePath)
{
    DataTable dtCSV = new DataTable();
    // Dispose the reader when done (the original never closed it).
    using (StreamReader csv = new StreamReader(strFilePath))
    {
        string[] headers = csv.ReadLine().Split(',');
        foreach (string header in headers)
        {
            dtCSV.Columns.Add(header);
        }
        while (!csv.EndOfStream)
        {
            string[] rows = csv.ReadLine().Split(',');
            DataRow dr = dtCSV.NewRow();
            for (int i = 0; i < headers.Length; i++)
            {
                dr[i] = rows[i];
            }
            dtCSV.Rows.Add(dr);
        }
    }
    return dtCSV;
}
Then insert the column at the desired location:
DataColumn Col = dtCSV.Columns.Add("FileName", typeof(string));
Col.SetOrdinal(0);
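One step left implicit here is actually filling the new column with the file name, one value per row; a sketch (strFilePath is the same path passed to ConvertCSVtoDataTable):

string name = Path.GetFileName(strFilePath);
foreach (DataRow row in dtCSV.Rows)
{
    row["FileName"] = name;
}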
Finally, after filling in the column values, write everything back out to CSV:
var lines = new List<string>();
string[] columnNames = dtCSV.Columns.Cast<DataColumn>()
    .Select(column => column.ColumnName)
    .ToArray();
var header = string.Join(",", columnNames);
lines.Add(header);
// AsEnumerable() needs a reference to System.Data.DataSetExtensions.
var valueLines = dtCSV.AsEnumerable()
    .Select(row => string.Join(",", row.ItemArray));
lines.AddRange(valueLines);
File.WriteAllLines("File.csv", lines);
I successfully imported the following file into the database, but my import method removes the double quotes during the saving process. I want to export the file as it was, i.e. add quotes back to any field that contains the delimiter. How can I achieve this?
Here is my CSV file with headers and 1 record:
PTNAME,REGNO/ID,BLOOD GRP,WARD NAME,DOC NAME,XRAY,PATHO,MEDICATION,BLOOD GIVEN
Mr. GHULAVE VASANTRAO PANDURANG,SH1503/00847,,RECOVERY,SHELKE SAMEER,"X RAY PBH RT IT FEMUR FRACTURE POST OP XRAY -ACCEPTABLE WITH IMPLANT IN SITU 2D ECHO MILD CONC LVH GOOD LV SYSTOLIC FUN, ALTERED LV DIASTOLIC FUN.", HB-11.9gm% TLC-8700 PLT COUNT-195000 BSL-173 UREA -23 CREATININE -1.2 SR.ELECTROLYTES-WNR BLD GROUP-B + HIV-NEGATIVE HBsAG-NEGATIVE PT INR -15/15/1.0. ECG SINUS TACHYCARDIA ,IV TAXIMAX 1.5 GM 1-0-1 IV TRAMADOL DRIP 1-0-1 TAB NUSAID SP 1-0-1 TAB ARCOPAN D 1-0-1 CAP BONE C PLUS 1 -0-1 TAB ANXIT 0.5 MG 0-0-1 ANKLE TRACTION 3 KG RT LL ,NOT GIVEN
Here is my export method:
public void DataExport(string SelectQuery, string fileName)
{
try
{
DataTable dt = new DataTable();
SqlDataAdapter da = new SqlDataAdapter(SelectQuery, con);
da.Fill(dt);
//Sets file path and print Headers
// string filepath = txtreceive.Text + "\\" + fileName;
string filepath = #"C:\Users\Priya\Desktop\R\z.csv";
StreamWriter sw = new StreamWriter(filepath);
int iColCount = dt.Columns.Count;
// First we will write the headers if IsFirstRowColumnNames is true: //
for (int i = 0; i < iColCount; i++)
{
sw.Write(dt.Columns[i]);
if (i < iColCount - 1)
{
sw.Write(',');
}
}
sw.Write(sw.NewLine);
foreach (DataRow dr in dt.Rows) // Now write all the rows.
{
for (int i = 0; i < iColCount; i++)
{
if (!Convert.IsDBNull(dr[i]))
{
sw.Write(dr[i].ToString());
}
if (i < iColCount - 1)
{
sw.Write(',');
}
}
sw.Write(sw.NewLine);
}
sw.Close();
}
catch { }
}
Quote a value only when it contains the delimiter:
if (myString.Contains(","))
{
myWriter.Write("\"{0}\"", myString);
}
else
{
myWriter.Write(myString);
}
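Note that this only handles the delimiter. If a field can itself contain double quotes or line breaks, the CSV convention (RFC 4180) is to quote the field and double any embedded quotes; a sketch of a more complete escape:

static string CsvEscape(string field)
{
    // Quote if the field contains a delimiter, a quote, or a newline,
    // and escape embedded quotes by doubling them (RFC 4180).
    if (field.Contains(",") || field.Contains("\"") || field.Contains("\n"))
    {
        return "\"" + field.Replace("\"", "\"\"") + "\"";
    }
    return field;
}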
The very simplest thing that you can do is replace the line sw.Write(dr[i].ToString()); with this:
var text = dr[i].ToString();
text = text.Contains(",") ? String.Format("\"{0}\"", text) : text;
sw.Write(text);
However, there are quite a few other issues with your code - most importantly you are opening a lot of disposable resources without disposing them properly.
I'd suggest a bit of a rewrite, like this:
public void DataExport(string SelectQuery, string fileName)
{
using (var dt = new DataTable())
{
using (var da = new SqlDataAdapter(SelectQuery, con))
{
da.Fill(dt);
var header = String.Join(
",",
dt.Columns.Cast<DataColumn>().Select(dc => dc.ColumnName));
var rows =
from dr in dt.Rows.Cast<DataRow>()
select String.Join(
",",
from dc in dt.Columns.Cast<DataColumn>()
let t1 = Convert.IsDBNull(dr[dc]) ? "" : dr[dc].ToString()
let t2 = t1.Contains(",") ? String.Format("\"{0}\"", t1) : t1
select t2);
using (var sw = new StreamWriter(fileName))
{
sw.WriteLine(header);
foreach (var row in rows)
{
sw.WriteLine(row);
}
sw.Close();
}
}
}
}
I've also broken apart the querying of the data from the data adapter from the writing of the data to the stream writer.
And, of course, I'm adding double-quotes to text that contains commas.
The only other thing I was concerned about was the fact that con is clearly a class-level variable and it is being left open. That's bad. Connections should be opened and closed each time they are used. You should probably consider making that change too.
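A sketch of that change (connectionString is an assumed configuration value, not from the original code):

public void DataExport(string SelectQuery, string fileName)
{
    // Open a connection per call and let 'using' dispose of it,
    // instead of sharing a long-lived class-level 'con'.
    using (var localCon = new SqlConnection(connectionString))
    using (var da = new SqlDataAdapter(SelectQuery, localCon))
    using (var dt = new DataTable())
    {
        da.Fill(dt); // Fill opens and closes the connection itself if it was closed
        // ... build the header and rows and write the file as shown above ...
    }
}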
You could also remove the stream writer entirely by replacing that block with this:
File.WriteAllLines(fileName, new [] { header }.Concat(rows));
And, finally, wrapping your code in a try { ... } catch { } is just a bad practice. It's like saying "I'm writing some code that could fail, but I don't care and I don't want to be told if it does fail." You should only ever catch specific exceptions that you actually deal with. In this code you should consider catching file exceptions such as running out of hard drive space, or writing to a read-only file, etc.
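For instance, a sketch that catches only the file-related exceptions you can actually report on (the error reporting here is a placeholder):

try
{
    File.WriteAllLines(fileName, new[] { header }.Concat(rows));
}
catch (IOException ex) // e.g. disk full, or the file is locked by another process
{
    Console.Error.WriteLine("Export failed: " + ex.Message);
}
catch (UnauthorizedAccessException ex) // e.g. read-only file or missing permission
{
    Console.Error.WriteLine("Export failed: " + ex.Message);
}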