How to create a DataTable from CSV file content - C#

I have string data containing the contents of a *.csv file (read using the File.ReadAllText(filePath) method) and want to create a new DataTable object from that string.
I don't have the option of using the file path because the file is deleted from the drive as soon as it has been read.

You could consider replacing your File.ReadAllText call with File.ReadAllLines (or, since you already have the content as a string, split it on line breaks to get the same array). Then follow steps like these:
Create a DataTable and define its structure (columns and types).
Read all lines from the file as an array of strings.
In a loop, split each line by your separator character, then parse each portion of the string as its corresponding data type, e.g. int.TryParse() for numerical values.
Add a new row to the DataTable using DataTable.Rows.Add, supplying an object array with your parsed values.
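The steps above can be sketched like this, working directly from the in-memory string; the column names and types ("Id", "Name", "Score") are illustrative only, since the actual CSV layout isn't shown:

```csharp
using System;
using System.Data;

class CsvToDataTable
{
    // Minimal sketch: builds a DataTable from a CSV string already in memory,
    // assuming a simple comma-separated layout with no quoted fields.
    // The columns ("Id", "Name", "Score") are illustrative assumptions.
    public static DataTable Parse(string csvContent)
    {
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Score", typeof(double));

        // Split the string on line breaks to get the same array
        // File.ReadAllLines would have produced.
        var lines = csvContent.Split(new[] { "\r\n", "\n" },
                                     StringSplitOptions.RemoveEmptyEntries);
        foreach (var line in lines)
        {
            var parts = line.Split(',');
            int id;
            double score;
            int.TryParse(parts[0], out id);
            double.TryParse(parts[2], out score);
            table.Rows.Add(id, parts[1], score);
        }
        return table;
    }

    static void Main()
    {
        var table = Parse("1,Alice,9.5\n2,Bob,7.25");
        Console.WriteLine(table.Rows.Count);       // 2
        Console.WriteLine(table.Rows[1]["Name"]);  // Bob
    }
}
```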

Related

How to AppendAllText in excel file for multi-variables in multi-columns using c#?

The following code line allows me to append the variable w1 to the file, but the appending happens in the first column. How can I make it work for multiple variables {w1, w2, w3} in multiple columns?
File.AppendAllText(Path.Combine(docPath, "Groupbox1.CSV"), w1);
First, you're actually working with .CSV files, not Excel files. CSV stands for Comma-Separated Values, so each value in a line of text in these files is typically separated by a comma (or another delimiter such as a semicolon). The Excel application lets you open .CSV files, but they are different from Excel files, which have either .xls or .xlsx extensions. Excel will display values separated by commas in their own columns.
Now, say you have multiple variables. You can simply create one string with commas between the values and write it to the file.
var w1 = "Data1";
var w2 = "Data2";
var w3 = "Data3";
// This creates a string that, with the data above, looks like
// "Data1,Data2,Data3"
var lineToWrite = string.Format("{0},{1},{2}", w1, w2, w3);
// Append a newline so repeated calls don't run rows together
File.AppendAllText(Path.Combine(docPath, "Groupbox1.CSV"), lineToWrite + Environment.NewLine);
We write that line to the file with each data item separated by a comma; when you open the file with Excel, each data item will be displayed in its own column.
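If the number of values varies, string.Join scales better than a fixed format string. A small sketch (the file path mirrors the question's example and is illustrative):

```csharp
using System;
using System.IO;

class AppendCsvRows
{
    // Builds one comma-separated line from any number of values; the explicit
    // newline keeps each append on its own row.
    public static string BuildLine(params string[] values)
    {
        return string.Join(",", values) + Environment.NewLine;
    }

    static void Main()
    {
        // Illustrative path; in the question it is Path.Combine(docPath, ...).
        var file = Path.Combine(Path.GetTempPath(), "Groupbox1.CSV");
        File.AppendAllText(file, BuildLine("Data1", "Data2", "Data3"));
    }
}
```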

How to import text file with delimited white spaces to datagrid or datatable?

What about this: the file is whitespace-delimited, and all I want is to import it into a database or DataTable with its headers.
I'm having a hard time because the original file has no fixed spacing; other samples were separated by semicolons, but this one is different.
Is there a way to import this into a DataTable and assign those fields to their specific column headers? Please guide me; this is the first time I've encountered this.
Here's the sample text file:
From the looks of your sample file (original) it seems that you can treat it as space-delimited data, with the exception that the date-time value contains a space. So if you use a StreamReader with ReadLine() for each row, you can use var columns = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries). You can then allocate each column's value according to the header row, remembering to append the time part to the date.
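A rough sketch of that approach; since the sample file isn't shown here, the position of the date/time tokens (dateIndex) is an assumption you would adjust for the real layout:

```csharp
using System;
using System.Collections.Generic;

class WhitespaceDelimited
{
    // Sketch under assumptions: the file is space-delimited, and the date and
    // time occupy two adjacent tokens at a known position (dateIndex).
    public static List<string[]> ParseLines(IEnumerable<string> lines, int dateIndex)
    {
        var rows = new List<string[]>();
        foreach (var line in lines)
        {
            var tokens = line.Split(new[] { ' ' },
                                    StringSplitOptions.RemoveEmptyEntries);
            var row = new List<string>();
            for (int i = 0; i < tokens.Length; i++)
            {
                if (i == dateIndex && i + 1 < tokens.Length)
                {
                    // Re-join the date and time tokens into one logical column.
                    row.Add(tokens[i] + " " + tokens[i + 1]);
                    i++; // skip the time token we just consumed
                }
                else
                {
                    row.Add(tokens[i]);
                }
            }
            rows.Add(row.ToArray());
        }
        return rows;
    }

    static void Main()
    {
        var rows = ParseLines(new[] { "A001  2015-06-01 08:30  OK" }, 1);
        Console.WriteLine(rows[0].Length);  // 3
        Console.WriteLine(rows[0][1]);      // 2015-06-01 08:30
    }
}
```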

I have a csv file whose contents are like the following. How can I convert it to a collection of List<string> dynamically?

This csv file has lots of rows, but not all rows have the same number of values.
For dealing with complex CSV files it is better to use a reliable library; see CsvHelper: http://joshclose.github.io/CsvHelper/
It's simple to use:
var csv = new CsvReader(textReader); // newer CsvHelper versions also require a culture, e.g. new CsvReader(textReader, CultureInfo.InvariantCulture)
var records = csv.GetRecords<MyClass>().ToList();
I would use the String.Split method: https://msdn.microsoft.com/en-us/library/system.string.split(v=vs.110).aspx.
Suppose you read your csv file into a string. You can split that string on line breaks into multiple strings, create a new list for each line string, split each line on whatever you have as your delimiter, and then add those values to the list.
Edit: here is a similar question How to split() a delimited string to a List<String>
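A minimal sketch of the split-based approach, assuming no quoted fields containing commas or line breaks; rows with different value counts are handled naturally because each line gets its own list:

```csharp
using System;
using System.Collections.Generic;

class CsvToLists
{
    // Splits the CSV content on line breaks, then splits each line on commas.
    // Assumes a simple format with no quoted fields.
    public static List<List<string>> Parse(string csvContent)
    {
        var result = new List<List<string>>();
        var lines = csvContent.Split(new[] { "\r\n", "\n" },
                                     StringSplitOptions.RemoveEmptyEntries);
        foreach (var line in lines)
            result.Add(new List<string>(line.Split(',')));
        return result;
    }

    static void Main()
    {
        var rows = Parse("a,b,c\nd,e\nf");
        Console.WriteLine(rows.Count);     // 3
        Console.WriteLine(rows[1].Count);  // 2
    }
}
```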

creating a difference file from .csv files

I am creating an application which converts an MS Access table and an Excel sheet to .csv files and then diffs the Access table against the Excel sheet. The .csv files are fine, but the resulting difference file has errors in fields that contain HTML (the Access table has fields with HTML). I'm not sure if this is a special-character issue, since special characters were not a problem when creating the .csv files in the first place, or an issue with the way I am diffing the two files.
Part of the problem could be that in the Access .csv file, the fields containing HTML are formatted so that some of the information is on separate lines instead of all on one line, which could be throwing off the reader, but I don't know how to correct this.
This is the code for creating the difference file:
string destination = Form2.destination;
string path = Path.Combine(destination, "en-US-diff.csv");
string difFile = path;
if (File.Exists(difFile))
{
    File.Delete(difFile);
}
using (var wtr = new StreamWriter(difFile))
{
    // Create the IEnumerable data sources
    string[] access = System.IO.File.ReadAllLines(csvOutputFile);
    string[] excel = System.IO.File.ReadAllLines(csvOutputFile2);
    // Create the query
    IEnumerable<string> differenceQuery = access.Except(excel);
    // Execute the query
    foreach (string s in differenceQuery)
    {
        wtr.WriteLine(s);
    }
}
This is a physical-line versus logical-line problem. One solution is to use a sentinel: an arbitrary string token selected so as not to confound the parsing process, for example "##||##".
When the input files are created, add the sentinel to the end of each line...
1,1,1,1,1,1,##||##
Going back to your code, System.IO.File.ReadAllLines(csvOutputFile) uses the Environment.NewLine string as its sentinel. This means you need to replace that statement with the following (pseudo-code)...
const string sentinel = "##||##";
string myString = File.ReadAllText("myFileName.csv");
string[] access = myString.Split(new string[] { sentinel },
                                 StringSplitOptions.RemoveEmptyEntries);
At that point you will have the CSV lines in your 'access' array the way you wanted as a collection of 'logical' lines.
To make the lines fully conformant, you would also need to execute this statement on each line of your array...
line = line.Replace(Environment.NewLine, String.Empty).Trim();
That will remove the culprits and allow you to parse the CSV using the methods you have already developed. Of course this statement could be combined with the IO statements in a LINQ expression if desired.
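The sentinel split and the cleanup step can indeed be combined into one LINQ pipeline, roughly like this (the file names are illustrative):

```csharp
using System;
using System.IO;
using System.Linq;

class SentinelDiff
{
    const string Sentinel = "##||##";

    // Splits the raw file content on the sentinel to recover logical lines,
    // then strips the embedded physical line breaks in one LINQ pipeline.
    public static string[] ReadLogicalLines(string path)
    {
        return File.ReadAllText(path)
            .Split(new[] { Sentinel }, StringSplitOptions.RemoveEmptyEntries)
            .Select(l => l.Replace(Environment.NewLine, string.Empty).Trim())
            .Where(l => l.Length > 0)
            .ToArray();
    }

    static void Main()
    {
        // Guarded so the sketch runs even when the sample files are absent.
        if (File.Exists("access.csv") && File.Exists("excel.csv"))
        {
            var access = ReadLogicalLines("access.csv");
            var excel = ReadLogicalLines("excel.csv");
            File.WriteAllLines("en-US-diff.csv", access.Except(excel));
        }
    }
}
```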

Best way to handle long integers in a CSV file

I have a csv file with a column where the integers are over 16 digits long. The issue is that the integer gets changed to a short abbreviation, for example 8.71688E+17. Is there a way in C# to convert it back to a long integer without having to modify the csv file itself?
I currently convert the CSV file to a DataTable; could this be done at the conversion stage?
You can use the BigInteger class. Here's more info: http://www.codeproject.com/Articles/2728/C-BigInteger-Class
Prefix the value with '=' and enclose the large value in double quotes, so the final string to put into the Excel cell is ="<large-value>".
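A small sketch of both ideas: System.Numerics.BigInteger handles values too large for long, and the quoting helper shows the ="..." trick that stops Excel from collapsing the number to scientific notation. How the cell value arrives (with or without the wrapper) is an assumption:

```csharp
using System;
using System.Numerics;

class LargeCsvValues
{
    // Parses a CSV cell as an arbitrarily large integer, stripping the
    // ="..." Excel wrapper if present (an assumption about the input format).
    public static BigInteger ParseCell(string cell)
    {
        var raw = cell.Trim().TrimStart('=').Trim('"');
        return BigInteger.Parse(raw);
    }

    // Wraps a value so Excel displays it verbatim instead of abbreviating it.
    public static string QuoteForExcel(string value)
    {
        return "=\"" + value + "\"";
    }

    static void Main()
    {
        Console.WriteLine(ParseCell("=\"871688000000000123\""));  // 871688000000000123
        Console.WriteLine(QuoteForExcel("871688000000000123"));   // ="871688000000000123"
    }
}
```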
