I have sample JSON data and want to create an .xls file from it.
"{\"Id\":\"123\",\"Name\":\"DEMO\",\"Address\":\"US\",\"Team\":\"JK\"}"
I want to create an Excel file stream that I will upload to Azure Storage using the code below:
CloudFile cloudFile = cloudFileDirectory.GetFileReference("filename.xls");
cloudFile.UploadFromStream(fileStream);
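One common gotcha with UploadFromStream (an assumption about the surrounding code, not shown in the question): if you build the file in a MemoryStream, rewind it before uploading, or zero bytes get written. A minimal sketch with an illustrative helper:

```csharp
// Sketch (not from the question): build the file bytes in a MemoryStream
// and rewind it before handing it to UploadFromStream. If the position
// is left at the end, nothing gets uploaded.
using System.IO;
using System.Text;

public static class StreamDemo
{
    public static MemoryStream BuildStream(string content)
    {
        var ms = new MemoryStream();
        byte[] bytes = Encoding.UTF8.GetBytes(content);
        ms.Write(bytes, 0, bytes.Length);
        ms.Position = 0; // rewind so the reader starts at the beginning
        return ms;
    }
}
```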
Expected output:
I'm able to create a CSV with the code below:
var result = new StringBuilder();
string delimiter = ","; // was an undeclared "delimator" in the original
for (int i = 0; i < table.Columns.Count; i++)
{
    result.Append(table.Columns[i].ColumnName);
    result.Append(i == table.Columns.Count - 1 ? "\n" : delimiter);
}
foreach (DataRow row in table.Rows)
{
    for (int i = 0; i < table.Columns.Count; i++)
    {
        result.Append(row[i].ToString());
        result.Append(i == table.Columns.Count - 1 ? "\n" : delimiter);
    }
}
return result.ToString().TrimEnd(new char[] { '\r', '\n' });
To generate an .xls file from JSON you should follow these steps:
1. Read the file.
2. Parse the JSON (Json.NET works well) into a C# class object.
3. Choose an Excel framework (Open XML SDK or whichever you prefer).
4. Use the structure from step 2 to fill the columns and rows of the .xls file via the API of the framework from step 3.
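As a sketch of step 2, the sample JSON from the question could be deserialized like this. System.Text.Json (built into modern .NET) is used here instead of Json.NET, and the class name Employee is an assumption:

```csharp
// Sketch of step 2: map the sample JSON onto a C# class.
// "Employee" is a hypothetical name; pick whatever fits your domain.
using System.Text.Json;

public class Employee
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public string Team { get; set; }
}

public static class JsonDemo
{
    public static Employee Parse(string json) =>
        JsonSerializer.Deserialize<Employee>(json);
}
```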
If you already have a DataTable, then you can convert it to a spreadsheet as easy as pie using EPPlus. The code could be as simple as this:
FileInfo f = new FileInfo("filename.xlsx");
using (ExcelPackage package = new ExcelPackage(f))
{
    ExcelWorksheet ws = package.Workbook.Worksheets.Add("Data");
    ws.Cells["A1"].LoadFromDataTable(table, true);
    package.Save(); // without this the file is never written to disk
}
If you have some special handling, for date formats and such, it might be moderately more work, but not much.
From NuGet:
Install-Package EPPlus
If you've ever worked with Excel Interop, you will love how much easier EPPlus is to work with.
I'm currently building a local DBF converter in C#, using the FastDBF library. I have tested the code below, which dumps the DBF contents into a text file, but the total row count shown in MS Excel does not match the number of rows produced by the DBF library. See the photo below.
This is the piece of code I have to read and write the DBF file contents
var odbf = new DbfFile(Encoding.GetEncoding(1252));
odbf.Open(Path.Combine(this.pathWithFile), FileMode.Open);
var ofs = new FileStream(Path.Combine(this.currentDirectory, "filesource.txt"), FileMode.Create);
var osw = new StreamWriter(ofs, Encoding.Default);
var orec = new DbfRecord(odbf.Header);
for (int i = 0; i < odbf.Header.RecordCount; i++)
{
if (!odbf.Read(i, orec))
break;
osw.WriteLine("index: " + orec.RecordIndex + ": " + orec);
}
osw.Flush();
osw.Close();
The number of rows read differs from the generated result: 28057 vs 28307. Is there anything I'm missing here?
I need to export a ListView to CSV, and I found a solution in this answer: Here.
The problem is that I am getting the full line in a single column.
I got this :
I need this:
Code below:
public static void ListViewToCSV(ListView listView, string filePath, bool includeHidden)
{
//make header string
StringBuilder result = new StringBuilder();
WriteCSVRow(result, listView.Columns.Count, i => includeHidden || listView.Columns[i].Width > 0, i => listView.Columns[i].Text);
//export data rows
foreach (ListViewItem listItem in listView.Items)
WriteCSVRow(result, listView.Columns.Count, i => includeHidden || listView.Columns[i].Width > 0, i => listItem.SubItems[i].Text);
File.WriteAllText(filePath, result.ToString());
}
private static void WriteCSVRow(StringBuilder result, int itemsCount, Func<int, bool> isColumnNeeded, Func<int, string> columnValue)
{
bool isFirstTime = true;
for (int i = 0; i < itemsCount; i++)
{
if (!isColumnNeeded(i))
continue;
if (!isFirstTime)
result.Append(",");
isFirstTime = false;
result.Append(String.Format("\"{0}\"", columnValue(i).Replace("\"", "\"\""))); // double embedded quotes so the CSV stays valid
}
result.AppendLine();
}
Any ideas what I have to do?
This has nothing to do with CSVs. The first screenshot shows a perfectly good CSV. It also shows an attempt to open that file in Excel.
Excel will use the end user's locale to decide which column separators, decimal separators, and date formats to use. When you double-click a text file, Excel imports the data using the end user's locale settings as the default.
In most European countries the comma is the decimal separator, so it can't be used as the list separator too. Otherwise you wouldn't know what 100,00,234 means: is that 2 or 3 columns?
In such cases, the typical list separator is ;. This is specified in the end user's locale settings. In most European countries you'd see 100,00;234
If you want your end users to be able to open the generated files in Excel with double-clicking you'll have to use the list and decimal separators that match their locales.
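If you go this route, you can ask .NET for the locale's list separator instead of hard-coding one. A sketch (the helper name is illustrative):

```csharp
// Sketch: look up the list separator a given locale expects rather
// than assuming ','. On most systems "de-DE" yields ";" while
// "en-US" yields ",", which is exactly the difference described above.
using System.Globalization;

public static class SeparatorDemo
{
    public static string ListSeparatorFor(string cultureName) =>
        new CultureInfo(cultureName).TextInfo.ListSeparator;
}
```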
A better option would be to generate real Excel files with, e.g., EPPlus. It doesn't require installing Excel on the client or the server, and it generates real Excel (.xlsx) files.
Exporting your data can be as easy as a call to LoadFromCollection:
using (var pck = new ExcelPackage())
{
var sheet = pck.Workbook.Worksheets.Add("sheet");
sheet.Cells["C1"].LoadFromCollection(items);
pck.SaveAs(new FileInfo(@"c:\workbooks\myworkbook.xlsx"));
}
I am writing a program that scrapes data from a website via Selenium WebDriver. I am trying to build a football fixture list for our projects.
So far I have managed to capture the date and time, the team names, and the scores from the website. I am also writing them to a txt file, but the output gets a little messy.
How do I write to, and read from, an Excel file instead?
I want the output to look like this:
Date-Time First-Team Second-Team Score Statistics
28/07 19:00 AM AVB 2-1 Shot 13123 Pass 65465 ...
28/07 20:00 BM BVB 2-2 Shot 13123 Pass 65465 ...
28/07 20:00 CM CVB 2-3 Shot 13123 Pass 65465 ...
And this is the relevant part of my code:
StreamWriter file = new StreamWriter(Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "\\Test" + "\\" + pathName + "\\" + subFile + "\\" + pathName + ".txt", false);
for (int k = 2; k > 1; k--)
{
//doing some stuff
}
Writing part:
for (int x = 0; x <dT.Count; x++)
{
file.Write(dateTime[x] + " " + firstTeam[x] + " "
+ secondTeam[x] + " " + firstHalf[x] + " " + secondHalf[x] + " ");
for (int i = 0; i < total_FS.Count(); i++)
{
int index = total_FS[i].Length;
if (total_FS[i][index-1].ToString() != " " && total_FS[i] != "-")
{
file.Write(total_FS[i]);
}
else
{
SpaceC++;
if (total_FS[i][index - 1].ToString() == " ")
file.Write(total_FS[i]);
}
if (SpaceC == 9)
{
file.Write("\n");
SpaceC = 0;
break;
}
}
}
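Before reaching for a full Excel library, the desired layout above could also be produced as comma-separated rows that Excel opens directly. A sketch (an assumption, not the asker's code; names are illustrative):

```csharp
// Sketch: emit one comma-separated row per fixture so Excel can open
// the resulting .csv directly; a row is just a Join over the fields.
public static class FixtureCsv
{
    public static string BuildRow(string dateTime, string home, string away,
                                  string score, string stats) =>
        string.Join(",", dateTime, home, away, score, stats);
}
```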
There are a few good libraries you can use to read and write Excel files easily. You can reference them in your project and create or modify spreadsheets with little code.
EPPlus
Very developer friendly and easy to use.
EPPlus - codeplex source
Simple tutorial
NPOI
NPOI - codeplex source
DocumentFormat.OpenXml
It provides strongly typed classes for spreadsheet objects and seems to be fairly easy to work with.
DocumentFormat.OpenXml site
DocumentFormat.OpenXml tutorial
Open XML SDK 2.0 for Microsoft Office
Provides strongly typed classes / easy to work with.
Open XML SDK 2.0 - MSDN
ClosedXML - The easy way to OpenXML
ClosedXML makes it easier for developers to create (.xlsx, .xlsm, etc) files.
ClosedXML - repository hosted at GitHub
ClosedXML - codeplex source
SpreadsheetGear
*Paid - library to import / export Excel workbooks in ASP.NET
SpreadsheetGear site
Instead of creating an XLS file, make a CSV text file that can be opened with Excel. Fields are comma-separated and each line represents a record.
field11,field12
field21,field22
If a field contains inner commas, it needs to be wrapped in double quotation marks.
"field11(row1,column1)", field12
field21, field22
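The quoting rules above can be sketched as a small helper. This is hand-rolled for illustration only; a library such as CsvHelper implements this (and more) for you:

```csharp
// Sketch of RFC 4180-style field quoting: wrap a field in double
// quotes when it contains a delimiter, a quote, or a line break,
// and escape embedded quotes by doubling them.
public static class Csv
{
    public static string Quote(string field)
    {
        bool needsQuotes = field.Contains(",") || field.Contains("\"")
                        || field.Contains("\n") || field.Contains("\r");
        return needsQuotes ? "\"" + field.Replace("\"", "\"\"") + "\"" : field;
    }
}
```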
If a field contains double quotation marks, they need to be escaped by doubling them. Rather than hand-rolling all of this, you can use CsvHelper to do the job. Grab it from NuGet:
PM> Install-Package CsvHelper
An example on how to use it.
using (var textWriter = new StreamWriter(@"C:\mypath\myfile.csv"))
{
    var csv = new CsvWriter(textWriter);
    csv.Configuration.Delimiter = ",";
    foreach (var item in list)
    {
        csv.WriteField("field11");
        csv.WriteField("field12");
        csv.NextRecord();
    }
}
Full documentation can be found here.
I am trying to develop an application to read .tdms (National Instruments) files, for which I'm using the "TDMSReader" package (link to the package + usage). This works fine except for those files which use a set duration or time interval.
In the .tdms example file I'm providing, it can be seen that the file consists of five channels, each of which holds 174080 items.
(The content of the file can be viewed with this Excel add-in.)
However, the C# package I mentioned doesn't take this into account; it can only read the number of items given by the "wf_samples" field (10240), discarding the rest. Has anyone found a way to read the "Length" property of the channel and extract the remaining array values?
Example of my code to convert a .tdms file to .csv
//file.Fullname = full path to the .tdms file
using (var output = new StreamWriter(File.Create(file.FullName + ".csv")))
using (var tdms = new NationalInstruments.Tdms.File(file.FullName))
{
tdms.Open();
List<object[]> All_Values = new List<object[]>();
//Headers
string channels = "";
foreach (var group in tdms)
{
foreach (var channel in group)
{
channels = channels + channel.Name + ";";
All_Values.Add(channel.GetData<object>().ToArray());
}
}
output.WriteLine(channels);
//Values
long cnt = tdms.First().Channels.First().Value.DataCount;
for (int i = 0; i < cnt; i++)
{
string values = "";
foreach (object[] columnValues in All_Values)
{
values = values + columnValues[i] + ";";
}
output.WriteLine(values);
}
}
Any other alternative that provides a way to read .tdms files with C# is welcome.
EDIT: TDMS sample files:
NO Interval sample
This one works fine
Interval Sample
This one discards most of the array values
I have submitted a PR for a fix to https://github.com/mikeobrien/TDMSReader. Mike has made a new release on https://www.nuget.org/packages/TDMSReader/.
I have loads of .csv files I need to convert to .xlsx after applying some formatting.
A file containing approximately 20 000 rows and 7 columns takes 12 minutes to convert; if the file contains more than 100 000 rows, it runs for over an hour.
This is unfortunately not acceptable for me.
Code snippet:
var format = new ExcelTextFormat();
format.Delimiter = ';';
format.Encoding = new UTF7Encoding();
format.Culture = new CultureInfo(System.Threading.Thread.CurrentThread.CurrentCulture.ToString());
format.Culture.DateTimeFormat.ShortDatePattern = "dd.MM.yyyy"; // capital MM = months; lowercase mm means minutes
using (ExcelPackage package = new ExcelPackage(new FileInfo(file.Name)))
{
    ExcelWorksheet worksheet = package.Workbook.Worksheets.Add(Path.GetFileNameWithoutExtension(file.Name));
    worksheet.Cells["A1"].LoadFromText(new FileInfo(file.FullName), format);
    package.Save();
}
I have verified that it is the LoadFromText command that spends the time used.
Is there a way to speed things up?
I have tried without the "format" parameter, but the loadtime was the same.
What load times are you experiencing?
My suggestion here is to read the file yourself and then use the library only to create the output file.
The code to read the CSV could be as simple as:
List<String> lines = new List<String>();
using (StreamReader reader = new StreamReader("file.csv"))
{
    String line;
    while ((line = reader.ReadLine()) != null)
    {
        lines.Add(line);
    }
}
//Now you have all the lines of your CSV
//Create your file with EPPlus
foreach (String line in lines)
{
    var values = line.Split(';');
    foreach (String value in values)
    {
        //use the EPPlus library to fill your file
    }
}
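To avoid LoadFromText's generic overhead, one approach is to type each cell value yourself before assigning it (e.g. `ws.Cells[row, col].Value = ParseCell(...)` with EPPlus; the helper below is an illustrative sketch, not part of either library):

```csharp
// Sketch: convert a raw CSV cell into a typed value (number, date, or
// plain text) so it can be assigned directly to a spreadsheet cell,
// bypassing the library's generic text-loading path.
using System;
using System.Globalization;

public static class CellParser
{
    public static object ParseCell(string raw, CultureInfo culture)
    {
        if (double.TryParse(raw, NumberStyles.Any, culture, out double d))
            return d;
        if (DateTime.TryParse(raw, culture, DateTimeStyles.None, out DateTime dt))
            return dt;
        return raw; // fall back to plain text
    }
}
```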
I ran into a very similar problem with LoadFromCollection. EPPlus has to account for every situation when loading data generically like that, so there is a good deal of overhead. I narrowed the bottleneck down to that method and ended up manually converting the data from the collection into EPPlus cell objects. That saved several minutes in my exports.
There are plenty of examples of how to read CSV data:
C# Read a particular value from CSV file