Writing to the same CSV line twice? - C#

I am attempting to create code that will write values into a CSV file, run some other unrelated code, and then later add an extra value to that line.
static void Main(string[] args)
{
    int newNumber = 110;
    var records = new List<MovieInfo>
    {
        new MovieInfo { Rating = newNumber, Price = 44 },
    };
    var config = new CsvConfiguration(CultureInfo.InvariantCulture)
    {
        HasHeaderRecord = false,
    };
    string filePath = @"C:\file\path\go\brrrr";
    using (var stream = File.Open(filePath, FileMode.Append))
    using (var writer = new StreamWriter(stream))
    using (var csv = new CsvWriter(writer, config))
    {
        csv.WriteRecords(records);
    }
    //--------------------------------------
    //other unrelated code things happen here
    //--------------------------------------
    double value = 4;
    string thirdValue = "," + value;
    File.AppendAllText(filePath, thirdValue);
}
public class MovieInfo
{
    public int Rating { get; set; }
    public int Price { get; set; }
}
The output of this code is:
110,44
,4
Is it possible to adjust the code so that the output will be 110,44,4 instead? Additional info, if it is relevant: the line in question will always be the last in the file. Any help is appreciated.

When you call csv.WriteRecords(records);, all records are written to the output file and each record is terminated by a newline; even the last record has a newline appended.
Instead of writing every record in one go, you could write one record at a time with the WriteRecord method. This method doesn't write the newline.
A newline is added when you call the NextRecord method. So, by looping over all your records except the last one, you can control whether a newline is written on the last line:
void Main()
{
    int newNumber = 110;
    var records = new List<MovieInfo>
    {
        new MovieInfo { Rating = 200, Price = 30 },
        new MovieInfo { Rating = newNumber, Price = 44 },
    };
    var config = new CsvConfiguration(CultureInfo.InvariantCulture)
    {
        HasHeaderRecord = false,
    };
    string filePath = @"E:\temp\movieinfo.csv";
    using (var stream = File.Open(filePath, FileMode.Append))
    using (var writer = new StreamWriter(stream))
    using (var csv = new CsvWriter(writer, config))
    {
        // Write all records minus one.
        for (int x = 0; x < records.Count - 1; x++)
        {
            csv.WriteRecord(records[x]);
            csv.NextRecord();
        }
        // Write the last record without the newline.
        csv.WriteRecord(records[records.Count - 1]);
    }
    //--------------------------------------
    //other unrelated code things happen here
    //--------------------------------------
    double value = 4;
    string thirdValue = "," + value;
    File.AppendAllText(filePath, thirdValue);
}
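If you would rather keep the single WriteRecords call, here is a different sketch of my own (not part of the answer above), which relies on the question's statement that the record is always the last line of the file: let WriteRecords finish, then trim the trailing newline before appending the extra field.
// Sketch: strip a trailing "\r\n" or "\n" left by WriteRecords, then append.
using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite))
{
    if (fs.Length > 0)
    {
        fs.Seek(-1, SeekOrigin.End);
        long trim = fs.ReadByte() == '\n' ? 1 : 0;
        if (trim == 1 && fs.Length > 1)
        {
            fs.Seek(-2, SeekOrigin.End);
            if (fs.ReadByte() == '\r') trim = 2;
        }
        fs.SetLength(fs.Length - trim); // drop the newline bytes
    }
}
File.AppendAllText(filePath, "," + value);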

Related

Need to write all results to a txt file

I have the query below and it is working fine; I can see the results in the console. But I need to write all the results into a txt file to load them into a table. The query as is only writes one line to the txt file. How can I make it write all lines from the output into the txt file? I really appreciate any help on this.
using System;
using Microsoft.AnalysisServices.AdomdClient;
using System.IO;
using System.Diagnostics;
using System.Text;

namespace PowerQry
{
    class Program
    {
#pragma warning disable IDE0060 // Remove unused parameter
        static void Main(string[] args)
#pragma warning restore IDE0060 // Remove unused parameter
        {
            AdomdConnection adomdConnection = new AdomdConnection("Data Source=localhost:49971");
            String query = @"
                EVALUATE
                SUMMARIZECOLUMNS(
                    Customer[City],
                    Customer[Country-Region],
                    Customer[Customer],
                    Customer[Customer ID],
                    Customer[CustomerKey]
                )
            ";
            AdomdCommand adomdCommand = new AdomdCommand(query, adomdConnection);
            /*******************************************************
             connection
            *******************************************************/
            adomdConnection.Open();
            AdomdDataReader reader = adomdCommand.ExecuteReader();
            // Create a loop for every row in the resultset
            while (reader.Read())
            {
                String rowResults = "";
                // Create a loop for every column in the current row --need to add header
                for (
                    int columnNumber = 0;
                    columnNumber < reader.FieldCount;
                    columnNumber++
                )
                {
                    rowResults += $"\t{reader.GetValue(columnNumber)}";
                }
                //Console.WriteLine(rowResults);
                //--write all lines to txt
                {
                    string UserName = System.Environment.UserName;
                    string fileName = @"C:\Temp\nm.txt";
                    FileStream ostrm;
                    StreamWriter writer;
                    TextWriter oldOut = Console.Out;
                    try
                    {
                        ostrm = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.Write);
                        writer = new StreamWriter(ostrm);
                        for (int i = 0; i < 10; i++) ;
                    }
                    catch (Exception e)
                    {
                        Console.WriteLine("Cannot open Redirect.txt for writing");
                        Console.WriteLine(e.Message);
                        return;
                    }
                    Console.SetOut(writer);
                    Console.WriteLine(rowResults);
                    Console.SetOut(oldOut);
                    writer.Close();
                    ostrm.Close();
                    Console.WriteLine("Done");
                }
                //==
            }
            adomdConnection.Close();
        }
    }
}
You can use File.AppendAllLines.
Instead of appending to a single string instance, you can use a List<string>:
//String rowResults = "";
List<string> rows = new List<string>();
// Create a loop for every column in the current row --need to add header
for (int columnNumber = 0; columnNumber < reader.FieldCount; columnNumber++)
{
    // tabulator should be removed
    rows.Add($"\t{reader.GetValue(columnNumber)}");
    // rowResults += $"\t{reader.GetValue(columnNumber)}";
}
File.AppendAllLines(@"C:\output.txt", rows); // use your own path; C:\output.txt is just an example
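A variation of my own (not part of the answer above): if each result row should end up on one line of the text file, you can open the writer once before the reader loop and write rowResults per row, which also avoids reopening the file on every iteration. The path below is the one assumed in the question.
// Sketch: one writer for the whole result set; each row becomes one line.
using (StreamWriter writer = new StreamWriter(@"C:\Temp\nm.txt", false))
{
    while (reader.Read())
    {
        string rowResults = "";
        for (int columnNumber = 0; columnNumber < reader.FieldCount; columnNumber++)
        {
            rowResults += $"\t{reader.GetValue(columnNumber)}";
        }
        writer.WriteLine(rowResults);
    }
}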

Read the flat file, group and write to file (add special characters as '*' in empty space)

E2739158012008-10-01O9918107NPF7547379999010012008-10-0100125000000
E2739158PU0000-00-00 010012008-10-0100081625219
E3180826011985-01-14L9918007NPM4927359999010011985-01-1400005620000
E3180826PU0000-00-00 020011985-01-14000110443500021997-01-1400000518799
E3292015011985-01-16L9918007NPM4927349999010011985-01-1600003623300
I have this flat file and I need to group it based on the 2nd position to the 8th position, for example (2739158/3180826/3292015), and write it to another flat file.
So the data starting with 'E' should repeat on a single line along with that group field (2nd to 8th position at the start), and I should take from the 9th position after 'E'.
Also, I need to replace empty space with '*' (star).
For example
1st Line
2739158**E**012008-10-01O9918107NPF7547379999010012008-10-0100125000000*****E**012008-10-01O9918107NPF7547379999010012008-10-0100125000000
2nd Line
3180826**E**011985-01-14L9918007NPM4927359999010011985-01-1400005620000**E**011985-01-14L9918007NPM4927359999010011985-01-140000562000**E**011985-01-14L9918007NPM4927359999010011985-01-140000562000***
3rd Line
3292015**E**011985-01-16L9918007NPM4927349999010011985-01-1600003623300****
Can we do this with StreamReader in C#, please?
Any help would be highly appreciated. The file size is more than 285 MB, so is it good to read it through StreamReader?
Thanks
@jdweng: thanks very much for your input. I tried somehow without grouping and it works as expected. Thanks to everyone who tried to solve the issue.
// Note: sOneLine, sOtherLine, sOuterLine and bOtherLine were not declared in the
// original snippet; they are declared here so the code compiles.
string sOneLine = string.Empty, sOtherLine = string.Empty, sOuterLine = string.Empty;
bool bOtherLine = false;
string sTest = string.Empty;
List<SortLines> lines = new List<SortLines>();
List<String> FinalLines = new List<String>();
using (StreamReader sr = new StreamReader(@"C:\data\Input1.txt"))
{
    sr.ReadLine();
    string line = "";
    while (!sr.EndOfStream)
    {
        line = sr.ReadLine();
        //line = line.Trim();
        if (line.Length > 0)
        {
            line = line.Replace(" ", "*");
            SortLines newLine = new SortLines()
            {
                key = line.Substring(1, 7),
                line = line
            };
            if (sTest != newLine.key)
            {
                //Add the line items to the string list
                sOuterLine = sTest + sOneLine;
                FinalLines.Add(sOuterLine);
                string sFinalLine = newLine.line.Remove(1, 7);
                string snewLine = newLine.key + sFinalLine;
                sTest = snewLine.Substring(0, 7);
                //To hold the data for the 1st occurrence
                sOtherLine = snewLine.Remove(0, 7);
                bOtherLine = true;
                string sKey = newLine.key;
                lines.Add(newLine);
            }
            else if (sTest == newLine.key)
            {
                string sConcatLine = String.Empty;
                string sFinalLine = newLine.line.Remove(1, 7);
                //Check if 1st set
                if (bOtherLine == true)
                {
                    sOneLine = sOtherLine + sFinalLine;
                    bOtherLine = false;
                }
                //If not, add subsequent data
                else
                {
                    sOneLine = sOneLine + sFinalLine;
                }
                //Check for the last line in the flat file
                if (sr.Peek() == -1)
                {
                    sOuterLine = sTest + sOneLine;
                    FinalLines.Add(sOuterLine);
                }
            }
        }
    }
}
//Remove the empty entries
FinalLines.RemoveAll(x => x == "");
StreamWriter srWriter = new StreamWriter(@"C:\data\test.txt");
foreach (var group in FinalLines)
{
    srWriter.WriteLine(group);
}
srWriter.Flush();
srWriter.Close();
Try the code below:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace ConsoleApplication1
{
    class Program
    {
        const string INPUT_FILENAME = @"c:\temp\test.txt";
        const string OUTPUT_FILENAME = @"c:\temp\test1.txt";

        static void Main(string[] args)
        {
            List<SortLines> lines = new List<SortLines>();
            StreamReader reader = new StreamReader(INPUT_FILENAME);
            string line = "";
            while ((line = reader.ReadLine()) != null)
            {
                line = line.Trim();
                if (line.Length > 0)
                {
                    line = line.Replace(" ", "*");
                    SortLines newLine = new SortLines() { key = line.Substring(2, 7), line = line };
                    lines.Add(newLine);
                }
            }
            reader.Close();
            var groups = lines.GroupBy(x => x.key);
            StreamWriter writer = new StreamWriter(OUTPUT_FILENAME);
            foreach (var group in groups)
            {
                foreach (SortLines sortLine in group)
                {
                    writer.WriteLine(sortLine.line);
                }
            }
            writer.Flush();
            writer.Close();
        }
    }

    public class SortLines : IComparable<SortLines>
    {
        public string line { get; set; }
        public string key { get; set; }

        public int CompareTo(SortLines other)
        {
            // Compare by key (the original compared against the whole object).
            return key.CompareTo(other.key);
        }
    }
}
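For comparison, here is a minimal LINQ sketch of my own (not from either snippet above, with placeholder paths): group by the key at positions 2 to 8, strip the key from each line, and concatenate the remainders so each group becomes a single output line. Note that GroupBy buffers all lines in memory, which is something to weigh for a 285 MB file.
using System.IO;
using System.Linq;

var outputLines = File.ReadLines(@"c:\temp\test.txt")
    .Where(l => l.Length > 8)
    .Select(l => l.Replace(" ", "*"))
    .GroupBy(l => l.Substring(1, 7)) // key = 2nd to 8th character
    .Select(g => g.Key + string.Concat(g.Select(l => l.Remove(1, 7))));
File.WriteAllLines(@"c:\temp\grouped.txt", outputLines);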

Convert .XYZ to .csv using c#

Hi, I am using this method to replace " " with "," but it fails when I try to use it on data that has 32 million lines. Does anyone know how to modify it to make it run?
List<String> lines = new List<String>();
//loop through each line of the file and replace " " with ","
using (StreamReader sr = new StreamReader(inputfile))
{
    int id = 1;
    int i = File.ReadAllLines(inputfile).Count();
    while (sr.Peek() >= 0)
    {
        //Out of memory issue
        string fileLine = sr.ReadLine();
        //do something with line
        string ttt = fileLine.Replace(" ", ", ");
        //Debug.WriteLine(ttt);
        lines.Add(ttt);
        //lines.Add(id++, 'ID');
    }
    using (StreamWriter writer = new StreamWriter(outputfile, false))
    {
        foreach (String line in lines)
        {
            writer.WriteLine(line + "," + id);
            id++;
        }
    }
}
//change extension to .csv
FileInfo f = new FileInfo(outputfile);
f.MoveTo(Path.ChangeExtension(outputfile, ".csv"));
In general, I am trying to convert a big .XYZ file to .csv format and add an incremental field at the end. I am using C# for the first time in my life, to be honest :) Can you help me?
See my comment above - you could modify your reading/writing as follows:
using (StreamReader sr = new StreamReader(inputfile))
{
    using (StreamWriter writer = new StreamWriter(outputfile, false))
    {
        int id = 1;
        while (sr.Peek() >= 0)
        {
            string fileLine = sr.ReadLine();
            //do something with line
            string ttt = fileLine.Replace(" ", ", ");
            writer.WriteLine(ttt + "," + id);
            id++;
        }
    }
}
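An equivalent streaming sketch of my own, under the same inputfile/outputfile assumptions, using File.ReadLines, which enumerates the file lazily so only one line is held in memory at a time:
int id = 1;
using (var writer = new StreamWriter(outputfile, false))
{
    // File.ReadLines streams the file; nothing is read into a list up front.
    foreach (string fileLine in File.ReadLines(inputfile))
    {
        writer.WriteLine(fileLine.Replace(" ", ", ") + "," + id);
        id++;
    }
}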

Output not correctly being written to CSV

So I am having an issue with my output being written to a CSV file. The output of this code is in the correct format when being written to the CSV file, but it is only entering a single row in the file. There should be many more - around 150 lines. The current output is:
(859.85 7N830127 185)
Which is correct, but there should be more of these. It seems to me that it is only writing the first line of the parsed EDI file and then stopping. I need to find a way to make sure it writes all the data that is being parsed - can anyone help me?
static void Main(string[] args)
{
    StreamReader sr = new StreamReader("edifile.txt");
    string[] ediMapTemp = sr.ReadLine().Split('|');
    List<string[]> ediMap = new List<string[]>();
    List<object[]> outputMatrix = new List<object[]>();
    foreach (var line in ediMapTemp)
    {
        ediMap.Add(line.Split('~'));
    }
    DetailNode node = new DetailNode(0, null, 0);
    int hierarchicalDepth = 0;
    int hierarchicalIdNumber;
    int hierarchicalParentIdNumber;
    int hierarchicalLevelCode;
    int hierarchicalChildCode = 0;
    for (int i = 0; i < ediMap.Count; i++)
    {
        string segmentHeader = ediMap[i][0];
        if (segmentHeader == "HL")
        {
            hierarchicalIdNumber = Convert.ToInt32(ediMap[i][1]);
            hierarchicalParentIdNumber = Convert.ToInt32(ediMap[i][2]);
            hierarchicalLevelCode = Convert.ToInt32(ediMap[i][3]);
            hierarchicalChildCode = Convert.ToInt32(ediMap[i][4]);
            List<string[]> levelDetails = new List<string[]>();
            for (int v = i + 1; v < ediMap.Count; v++)
            {
                if (ediMap[v][0] == "HL") break;
                levelDetails.Add(ediMap[v]);
            }
            DetailNode getNode = node.Find(node, hierarchicalParentIdNumber);
            getNode.headList.Add(new DetailNode(hierarchicalIdNumber, levelDetails, getNode.depth + 1));
        }
    }
    node.Traversal(new VID(), node);
    foreach (var vid in VIDList.vidList)
        using (StreamWriter writer = new StreamWriter("Import.csv"))
        {
            //probably a loop here
            writer.WriteLine(String.Join(",", vid.totalCurrentCharges, vid.assetId, vid.componentName, vid.recurringCharge));
        }
}
Quick review - should the code be the following?
using (StreamWriter writer = new StreamWriter("Import.csv"))
{
    //a loop here
    foreach (var vid in VIDList.vidList)
    {
        writer.WriteLine(String.Join(",", vid.totalCurrentCharges, vid.assetId, vid.componentName, vid.recurringCharge));
    }
}
You would open the file once and then loop thru your collection, writing each one.
It looks to me like you're re-opening the output file for each line you're trying to write, so you're overwriting the output file with a new file for each line. That would mean only the last entry remains in the file. Try moving this line
using (StreamWriter writer = new StreamWriter("Import.csv"))
outside the foreach loop.
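As a side note, a sketch of my own (not from either answer): if the writer really had to stay inside the loop, the StreamWriter(path, append) overload would at least stop each iteration from truncating the file, though opening the file once as shown above is the cleaner fix.
foreach (var vid in VIDList.vidList)
{
    // true = append instead of overwriting the file on every open
    using (StreamWriter writer = new StreamWriter("Import.csv", true))
    {
        writer.WriteLine(String.Join(",", vid.totalCurrentCharges, vid.assetId, vid.componentName, vid.recurringCharge));
    }
}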

Parsing CSV File with \" in C#

I'm using VB's TextFieldParser in C# to parse a CSV file, but I am getting an error when it gets to \".
using (TextFieldParser csvReader = new TextFieldParser(csvFilePath))
{
    csvReader.SetDelimiters(new string[] { "," });
    csvReader.HasFieldsEnclosedInQuotes = true;
    string[] colFields = csvReader.ReadFields();
    foreach (string column in colFields)
    {
        DataColumn datacolumn = new DataColumn(column);
        datacolumn.AllowDBNull = true;
        csvData.Columns.Add(datacolumn);
    }
    while (!csvReader.EndOfData)
    {
        string[] fieldData = csvReader.ReadFields();
        for (int i = 0; i < fieldData.Length; i++)
        {
            if (fieldData[i] == "")
            {
                fieldData[i] = null;
            }
        }
        csvData.Rows.Add(fieldData);
    }
}
And this is the line in the csv that is causing the error:
"101","Brake System","Level should be between \"MIN\" and \"MAX\" marks."
I don't know how to deal with the \" in C# using TextFieldParser
If the csv file will fit into memory, you could read it in, replace each \" with "", and use a MemoryStream as the input to the TextFieldParser:
string data = File.ReadAllText(@"C:\temp\csvdata.txt").Replace("\\\"", "\"\"");

//TODO: Use the correct Encoding.
using (MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(data)))
{
    using (TextFieldParser csvReader = new TextFieldParser(ms))
    {
        csvReader.SetDelimiters(new string[] { "," });
        csvReader.HasFieldsEnclosedInQuotes = true;
        string[] colFields = csvReader.ReadFields();
        foreach (string s in colFields)
        {
            Console.WriteLine(s);
        }
    }
}
Which, for your example data, outputs
101
Brake System
Level should be between "MIN" and "MAX" marks.
If you don't mind using a different library, Ctl.Data has a mode (parseMidQuotes: true) specifically to allow parsing broken CSV like this.
using (StreamReader sr = new StreamReader("data.csv"))
{
    var reader = new CsvReader<Record>(sr, parseMidQuotes: true, readHeader: false);
    while (reader.Read())
    {
        Record rec = reader.CurrentObject.Value;
        rec.Description = rec.Description.Replace("\\\"", "\"");
        // use record...
    }
}
And define your Record object:
(Normally it would match the header of the file to the properties, but in your case, with a headerless file, you need to specify the order with the Column attribute.)
class Record
{
    [Column(Order = 0)]
    public int Id { get; set; }

    [Column(Order = 1)]
    public string Category { get; set; }

    [Column(Order = 2)]
    public string Description { get; set; }
}
(Disclaimer: I'm the author of said library)
Here's how to do it using two different methods without using TextFieldParser. TextFieldParser is very slow and not recommended for use in an actual production application.
Here's the simpler method using just String methods, and assuming that it's delimited with , without any quotes or any other special CSV formatting.
FileInfo file = new FileInfo("myfile.csv");
using (TextReader reader = file.OpenText())
{
    for (String line = reader.ReadLine(); line != null; line = reader.ReadLine())
    {
        string[] fields = line.Split(new[] { ',' });
        foreach (String f in fields)
        {
            //do whatever you need for each field
        }
    }
}
Now, if you want to use CsvHelper (available on NuGet) because you have a more complicated CSV file with things like quoted fields or headers, or if the rows of your CSV can map directly to an object that you have, then this library might help you.
Not Mapped Example:
FileInfo file = new FileInfo("myfile.csv");
using (TextReader reader = file.OpenText())
using (CsvReader csv = new CsvReader(reader))
{
    csv.Configuration.Delimiter = ",";
    csv.Configuration.HasHeaderRecord = false;
    csv.Configuration.IgnoreQuotes = true; //if you don't use field quoting
    csv.Configuration.TrimFields = true; //trim fields as you read them
    csv.Configuration.WillThrowOnMissingField = false; //otherwise null fields aren't allowed
    while (csv.Read())
    {
        myStringVar = csv.GetField<string>(0); //gets first field as string
        myIntVar = csv.GetField<int>(1); //gets second field as int
        ... //etc, you get the idea
    }
}
Mapped Example:
Mapping class - assumes you have a class named MyClass with the fields named field1, field2, field3:
public sealed class MyClassMap : CsvClassMap<MyClass>
{
    public MyClassMap()
    {
        Map(m => m.field1).Index(0);
        Map(m => m.field2).Index(1);
        Map(m => m.field3).Index(2);
    }
}
Parsing Code
FileInfo file = new FileInfo("myfile.csv");
using (TextReader reader = file.OpenText())
using (CsvReader csv = new CsvReader(reader))
{
    csv.Configuration.Delimiter = ",";
    csv.Configuration.HasHeaderRecord = false;
    csv.Configuration.IgnoreQuotes = true; //if you don't use field quoting
    csv.Configuration.TrimFields = true; //trim fields as you read them
    csv.Configuration.WillThrowOnMissingField = false; //otherwise null fields aren't allowed
    csv.Configuration.RegisterClassMap<MyClassMap>(); //adds our mapping class to the reader
    while (csv.Read())
    {
        myObject = csv.GetRecord<MyClass>();
        //do whatever here
    }
}
Neither of these methods will care that you have strange characters like \ in your csv file.
Disclaimer: I have no relation to CsvHelper, but I have had success with it in a few projects in the past, in which it has made my life much easier.
