I have text data that I would like to convert to JSON.
I have decided to split it on '|' and then serialize.
The example below gives every object the same value for each property. I realize the problem is in the iteration, but I can't think of a way to do it differently. Could you please share your thoughts?
var test = "Australia | dolar | 1 | AUD | 15,608 | Brasil | real | 1 | BRL 4,016 | Bulgaria | lev | 1 | BGN | 13,066 | China | žen-min-pi | 1 | CNY | 3,369";
var split = test.Split('|');
var list = new List<DailyCourse>();
foreach (var variable in split)
{
    var model = new DailyCourse()
    {
        Country = variable,
        Currency = variable,
        Amount = variable,
        Code = variable,
        Course = variable
    };
    list.Add(model);
}
var json = JsonSerializer.Serialize(list);
Console.WriteLine(json);
The output currently looks like this:
[{"Date":null,"Country":"Austr\u00E1lie ","Currency":"Austr\u00E1lie ","Amount":"Austr\u00E1lie ","Code":"Austr\u00E1lie ","Course":"Austr\u00E1lie "}, ...
Your data comes in groups of 5, but you are iterating over each individual element and creating an object for each one (hence the same value in every property).
You need to iterate in steps of 5 and grab the next five elements each time. Note that the sample string is missing the '|' between BRL and 4,016, which would throw the grouping off; it is added below:
var test = "Australia | dolar | 1 | AUD | 15,608 | Brasil | real | 1 | BRL 4,016 | Bulgaria | lev | 1 | BGN | 13,066 | China | žen-min-pi | 1 | CNY | 3,369";
var split = test.Split('|');
for (var i = 0; i < split.Length; i = i + 5)
{
var model = new DailyCourse()
{
Country = split[i];
Currency = split[i + 1];
Amount = split[i + 2];
Code = split[i + 3];
Course = split[i + 4];
}
}
list.Add(model);
var json = JsonSerializer.Serialize(list);
Console.WriteLine(json);
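If you are on .NET 6 or newer, the stepping-by-5 can also be written with Enumerable.Chunk. This is only a sketch, assuming the same string-typed DailyCourse properties the question's code assigns to and a using System.Linq; directive:
// Sketch: Chunk(5) yields the fields in groups of five (requires .NET 6+).
var courses = test.Split('|')
    .Chunk(5)
    .Select(f => new DailyCourse
    {
        Country = f[0],
        Currency = f[1],
        Amount = f[2],
        Code = f[3],
        Course = f[4]
    })
    .ToList();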
Try this
var test = "Australia | dolar | 1 | AUD | 15,608 | Brasil | real | 1 | BRL| 4,016 | Bulgaria | lev | 1 | BGN | 13,066 | China | žen-min-pi | 1 | CNY | 3,369";
var split = test.Split('|');
var list = new List<DailyCourse>();
var i=0;
while ( i < split.Count())
{
var model = new DailyCourse();
model.Country = split[i]; i++;
model.Currency = split[i]; i++;
model.Amount = Convert.ToInt32( split[i]); i++;
model.Code = split[i]; i++;
model.Course = Convert.ToDouble( split[i]); i++;
list.Add(model);
}
var json = JsonSerializer.Serialize(list);
output
[{"Country":"Australia ","Currency":" dolar ","Amount":1,"Code":" AUD ","Course":15608},
{"Country":" Brasil ","Currency":" real ","Amount":1,"Code":" BRL","Course":4016},
{"Country":" Bulgaria ","Currency":" lev ","Amount":1,"Code":" BGN ","Course":13066},
{"Country":" China ","Currency":" \u017Een-min-pi ","Amount":1,"Code":" CNY ","Course":3369}]
class
public class DailyCourse
{
public string Country { get; set; }
public string Currency { get; set; }
public int Amount { get; set; }
public string Code { get; set; }
public double Course { get; set; }
}
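Note that the values in the output above still carry the spaces around the '|' delimiters ("Australia ", " dolar "), and Convert.ToDouble("15,608") gives 15608 under an English culture even though the data looks like a Czech exchange-rate list where the comma is a decimal separator. A stricter sketch, assuming .NET 5+ for StringSplitOptions.TrimEntries and that the comma really is a decimal separator (adjust the culture if it is meant as a thousands separator):
// Sketch: trim fields while splitting and parse the numbers with an explicit culture.
// Requires: using System.Globalization; using System.Collections.Generic;
var czech = CultureInfo.GetCultureInfo("cs-CZ");
var fields = test.Split('|', StringSplitOptions.TrimEntries);
var list = new List<DailyCourse>();
for (var i = 0; i + 4 < fields.Length; i += 5)
{
    list.Add(new DailyCourse
    {
        Country = fields[i],
        Currency = fields[i + 1],
        Amount = int.Parse(fields[i + 2], czech),
        Code = fields[i + 3],
        Course = double.Parse(fields[i + 4], czech)
    });
}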
Related
How can I delete a row which contains null values from a comma-separated CSV file in C#?
Example:
| FirstName | LastName | Email | Address |
|---------------------|------------------|---------------|--------------|
| lmn | lmn |lmn#lmn.com |DemoAddress |
| xy | xy |xy#xy.com |DemoAddress |
| demo | demo | | |
| demo2 | demo2 |xy#xy.com |DemoAddress |
Outcome:
| FirstName | LastName | Email | Address |
|---------------------|------------------|---------------|--------------|
| lmn | lmn |lmn#lmn.com |DemoAddress |
| xy | xy |xy#xy.com |DemoAddress |
| demo2 | demo2 |xy#xy.com |DemoAddress |
I tried the following code, but it doesn't work as expected:
private string filterCSV(string strFilePath)
{
var columnIndex = 3;
var line = File.ReadAllLines(strFilePath);
var n = line.Where(x => x[columnIndex].Equals(""));
string result = string.Join("\r", n.ToArray());
return result;
}
Adding an answer for future reference:
private void RemoveBlanks(string datapath)
{
List<CSV> records;
using (var reader = new StreamReader(datapath))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
records = csv.GetRecords<CSV>().ToList();
for (int i = records.Count - 1; i >= 0; --i) // iterate backwards so RemoveAt doesn't skip the next record
{
if (records[i].Email== "" && records[i].Address == "")
{
records.RemoveAt(i);
}
}
}
using (var writer = new StreamWriter(datapath))
using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
{
csv.WriteRecords(records);
}
}
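Since the records are already materialized into a list, the same cleanup can also be done in a single call; a sketch, assuming the same CsvHelper record class (CSV) used above:
// Sketch: drop every record whose Email and Address are both empty or whitespace.
records.RemoveAll(r => string.IsNullOrWhiteSpace(r.Email) && string.IsNullOrWhiteSpace(r.Address));
List<T>.RemoveAll avoids the index bookkeeping that makes the removal loop easy to get wrong.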
I have an odd problem. I am trying to iterate over a DB table with 5 rows and return the table values plus an image as a string. When I step through the foreach loop, every item is processed and added to the object list, but when the values are returned I find 5 identical items in the list, all belonging to the last table row. Why?
public class TransferItem
{
public string foodType { set; get; }
public string foodName { set; get; }
public int foodPrice { set; get; }
public string foodDescription { set; get; }
public byte[] foodImage { set; get; }
}
var Transfer = new transferToFront();
var mylist = new List<object>();
foreach (var obj in FoodObjt)
{
String filePath = HostingEnvironment.MapPath(@"~/Images/");
Transfer.Id = obj.Id;
Transfer.foodName = obj.foodName;
Transfer.foodImage = obj.foodImage;
Transfer.foodPrice = obj.foodPrice;
Transfer.foodType = obj.foodType;
Transfer.foodDescription = obj.foodDescription;
var imageString = System.IO.File.ReadAllBytes(filePath + obj.foodImage);
Transfer.Image = imageString;
mylist.Add(Transfer);
}
return Ok(mylist);
result of iteration
Transfer is shared across iterations of the loop, and thus each time you append the same object to the list. The memory map looks like:
+--------------------------+
| Transfer |
+--+----+----+----+---+----+
| | | | |
+--+----+----+----+---+--+
| 0 1 2 3 4 |
| myList |
+------------------------+
So when you change Transfer, you change all of the items in the list, because they're all the same object.
The solution is to create a fresh instance on each iteration:
var mylist = new List<object>();
foreach (var obj in FoodObjt)
{
var Transfer = new transferToFront();
String filePath = HostingEnvironment.MapPath(@"~/Images/");
Transfer.Id = obj.Id;
Transfer.foodName = obj.foodName;
Transfer.foodImage = obj.foodImage;
Transfer.foodPrice = obj.foodPrice;
Transfer.foodType = obj.foodType;
Transfer.foodDescription = obj.foodDescription;
var imageString = System.IO.File.ReadAllBytes(filePath + obj.foodImage);
Transfer.Image = imageString;
mylist.Add(Transfer);
}
return Ok(mylist);
Now the memory map looks like:
+----------+ +----------+ +----------+ +----------+ +----------+
|Transfer 1| |Transfer 2| |Transfer 3| |Transfer 4| |Transfer 5|
+--+-------+ +-----+----+ +---+------+ +----+-----+ +---+------+
| | | | |
| +----------+ | | |
| | +----------------+ | |
| | | +-------------------------+ |
| | | | +---------------------------------+
| | | | |
+--+----+----+----+---+--+
| 0 1 2 3 4 |
| myList |
+------------------------+
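The same fix can also be written as a projection, which makes accidentally reusing one instance impossible; a sketch, assuming transferToFront exposes these members as settable properties (as the code above implies) and that System.Linq is in scope:
// Sketch: project each row into a fresh transferToFront instance.
string filePath = HostingEnvironment.MapPath(@"~/Images/");
var mylist = FoodObjt.Select(obj => new transferToFront
{
    Id = obj.Id,
    foodName = obj.foodName,
    foodImage = obj.foodImage,
    foodPrice = obj.foodPrice,
    foodType = obj.foodType,
    foodDescription = obj.foodDescription,
    Image = System.IO.File.ReadAllBytes(filePath + obj.foodImage)
}).ToList();
return Ok(mylist);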
I have a problem with getting grouped columns in LINQ.
My class:
public class DTO_CAORAS
{
public int? iORAS_KEY_CON { get; set; }
public int? iMERC_KEY {get;set;}
public double? decD_ORAS_QUA {get;set;}
}
LINQ query:
var results =
from oras in listCAORAS_Delivered
group oras by new
{
oras.iORAS_KEY_CON,
oras.iMERC_KEY
}
into orasGroup
select new
{
decD_ORAS_QUA = orasGroup.Sum(x => x.decD_ORAS_QUA)
};
The results list is filled with only one column, decD_ORAS_QUA. I don't know how to also get the columns the query is grouped by, iORAS_KEY_CON and iMERC_KEY. I would like the results to contain iORAS_KEY_CON, iMERC_KEY and decD_ORAS_QUA.
Input data:
+---------------+-----------+---------------+
| iORAS_KEY_CON | iMERC_KEY | decD_ORAS_QUA |
+---------------+-----------+---------------+
| 1 | 888 | 1 |
| 1 | 888 | 2 |
| 1 | 888 | 4 |
+---------------+-----------+---------------+
Desired output:
+---------------+-----------+---------------+
| iORAS_KEY_CON | iMERC_KEY | decD_ORAS_QUA |
+---------------+-----------+---------------+
| 1 | 888 | 7 |
+---------------+-----------+---------------+
To also show the keys:
var results = from oras in listCAORAS_Delivered
group oras by new { oras.iORAS_KEY_CON, oras.iMERC_KEY } into g
select new DTO_CAORAS {
iORAS_KEY_CON = g.Key.iORAS_KEY_CON,
iMERC_KEY = g.Key.iMERC_KEY,
decD_ORAS_QUA = g.Sum(x => x.decD_ORAS_QUA)
};
As you are only aggregating a single column, you can also group just that value:
var results = from oras in listCAORAS_Delivered
group oras.decD_ORAS_QUA by new { oras.iORAS_KEY_CON, oras.iMERC_KEY } into g
select new DTO_CAORAS {
iORAS_KEY_CON = g.Key.iORAS_KEY_CON,
iMERC_KEY = g.Key.iMERC_KEY,
decD_ORAS_QUA = g.Sum()
};
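For reference, the same query can be written in method syntax (a sketch equivalent to the first query above):
// Sketch: the grouping expressed with method syntax.
var results = listCAORAS_Delivered
    .GroupBy(oras => new { oras.iORAS_KEY_CON, oras.iMERC_KEY })
    .Select(g => new DTO_CAORAS
    {
        iORAS_KEY_CON = g.Key.iORAS_KEY_CON,
        iMERC_KEY = g.Key.iMERC_KEY,
        decD_ORAS_QUA = g.Sum(x => x.decD_ORAS_QUA)
    });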
I am working on a DataTable. I want to get the subtotal for each category and the grand total.
I want to add the subtotal row after the rows of each category. My problem is that I don't get the subtotal for the last category. I don't know where I went wrong.
Can anyone help me, please?
Required output:
+------------+------------+------------+-----------+----------+-------+-----------+
|CategoryID | Category | Orgname | Penalties | Interest | Admin |TotalAmount|
+------------+------------+------------+-----------+----------+-------+-----------+
| 1 | 1-Food | Spar | $10 | $0 | $15.3 | $20.3 |
+-------------------------+------------+-----------+----------+-------+-----------+
| 1 | 1-Food |Pick n’ Pay | $0 | $10 | $20 | $30 |
+------------+------------+------------+-----------+----------+-------+-----------+
| Subtotal | $10 | $10 | $30.3 | $50.3 |
+-------------------------+------------+-----------+----------+-------+-----------+
| 2 | 2-Auto | BMW | $0 | $30 | $55 | $85 |
| 2 | 2-Auto | Toyota | $9 | $0 | $14.5 | $23.5 |
| 2 | 2-Auto | Jaguar | $22.8 | $8.2 | $50 | $81 |
+-------------------------+------------+-----------+----------+-------+-----------+
| Subtotal | $31.8 | $38.2 | $119.5| $189.5 |
+-------------------------+------------+-----------+----------+-------+-----------+
| 3 | 3-Banking | Absa | $0 | $40 | $155 | $190 |
+-------------------------+------------+-----------+----------+-------+-----------+
| Subtotal | $0 | $40 | $155 | $190 |
+-------------------------+------------+-----------+----------+-------+-----------+
| Grand Total | $41.8 | $88.2 | $304.8| $429.8 |
+-------------------------+------------+-----------+----------+-------+-----------+
My code:
var query = (from _transaction in _entities.Transactions
join _cd in _entities.Organisations on _transaction.Refno equals _cd.Refno
join _category in _entities.Categorys on _transaction.CategoryID equals _category.CategoryID
group new { _trans = _transaction, cd = _cd, }
by new { _transaction.CategoryID, _transaction.Refno, _cd.Orgname, _category.Description }
into _group
orderby _group.Key.CategoryID
select new
{ CategoryID = _group.Key.CategoryID,
Category = _group.Key.CategoryID + " - " + _group.Key.Description,
Refno = _group.Key.Refno,
Orgname = _group.Key.Orgname,
Penalties = _group.Sum(x => x._trans.Penalties),
Interest = _group.Sum(x => x._trans.Interest),
Admin = _group.Sum(x => x._trans.Admin),
TotalAmount = _group.Sum(x => x._trans.TotalAmount),
});
DataTable dt = new DataTable();
DataSet ds = new DataSet();
ds.Tables.Add(query.CopyToDataTable()); ds.Tables[0].TableName = "Table1";
dt = ds.Tables[0];
//Get Subtotal
long _CategoryID = 0; double Admin = 0; double Interest = 0;
double Penalties = 0; double TotalAmount = 0; string Title = string.Empty;
for (int i = 0; i <= dt.Rows.Count - 1; i++)
{
if (i > 0)
{
if (dt.Rows[i]["Category"].ToString().ToLower() != dt.Rows[i - 1]["Category"].ToString().ToLower())
{
dt.Rows.InsertAt(dt.NewRow(), i);
dt.Rows[i]["CategoryID"] = _CategoryID;
_CategoryID = 0;
dt.Rows[i]["Category"] = Title;
Title = string.Empty;
dt.Rows[i]["Admin"] = Admin;
Admin = 0;
dt.Rows[i]["Interest"] = Interest;
Interest = 0;
dt.Rows[i]["Penalties"] = Penalties;
Penalties = 0;
dt.Rows[i]["TotalAmount"] = TotalAmount;
TotalAmount = 0;
i++;
}
}
Title = "Subtotal";
_CategoryID = Convert.ToInt64(dt.Rows[i]["CategoryID"]);
Admin += Convert.ToDouble(dt.Rows[i].IsNull("Admin") ? 0.0 : dt.Rows[i].Field<double>("Admin"));
Interest += Convert.ToDouble(dt.Rows[i].IsNull("Interest") ? 0.0 : dt.Rows[i].Field<double>("Interest"));
Penalties += Convert.ToDouble(dt.Rows[i].IsNull("Penalties") ? 0.0 : dt.Rows[i].Field<double>("Penalties"));
TotalAmount += Convert.ToDouble(dt.Rows[i].IsNull("TotalAmount") ? 0.0 : dt.Rows[i].Field<double>("TotalAmount"));
}
// Grand Total
var subtotal = query.GroupBy(x => x.CategoryID).Select(s => new
{
CategoryID = s.Key,
ISub = s.Sum(x => x.Interest),
ASub = s.Sum(x => x.Admin),
PSub = s.Sum(x => x.Penalties),
TASub = s.Sum(x => x.TotalAmount)
});
var GrandTotal = subtotal.Select(s => new
{
GI = subtotal.Sum(x => x.ISub),
GA = subtotal.Sum(x => x.ASub),
GP = subtotal.Sum(x => x.PSub),
GTA = subtotal.Sum(x => x.TASub)
}).Distinct();
//add grand total row to datatable
foreach (var a in GrandTotal)
{
var dr = dt.NewRow();
dr["Category"] = "Grand Total";
dr["Interest"] = a.GI;
dr["Admin"] = a.GA;
dr["Penalties"] = a.GP;
dr["TotalAmount"] = a.GTA;
dt.Rows.Add(dr);
}
This is a typical mistake when implementing grouping with a single loop: the last group is never processed, and handling it requires duplicating some code after the loop.
That's why I prefer a nested loop approach: get the key, initialize the aggregates, process rows until the key changes, then consume the group.
Applying it to your specific case would be something like this:
//Get Subtotal
for (int i = 0; i < dt.Rows.Count; i++)
{
// Initialize
long _CategoryID = Convert.ToInt64(dt.Rows[i]["CategoryID"]);
double Admin = 0; double Interest = 0;
double Penalties = 0; double TotalAmount = 0; string Title = string.Empty;
// Process
do
{
Admin += Convert.ToDouble(dt.Rows[i].IsNull("Admin") ? 0.0 : dt.Rows[i].Field<double>("Admin"));
Interest += Convert.ToDouble(dt.Rows[i].IsNull("Interest") ? 0.0 : dt.Rows[i].Field<double>("Interest"));
Penalties += Convert.ToDouble(dt.Rows[i].IsNull("Penalties") ? 0.0 : dt.Rows[i].Field<double>("Penalties"));
TotalAmount += Convert.ToDouble(dt.Rows[i].IsNull("TotalAmount") ? 0.0 : dt.Rows[i].Field<double>("TotalAmount"));
i++;
}
while (i < dt.Rows.Count && _CategoryID == Convert.ToInt64(dt.Rows[i]["CategoryID"]));
// Consume
dt.Rows.InsertAt(dt.NewRow(), i);
dt.Rows[i]["CategoryID"] = _CategoryID;
dt.Rows[i]["Category"] = "Subtotal";
dt.Rows[i]["Admin"] = Admin;
dt.Rows[i]["Interest"] = Interest;
dt.Rows[i]["Penalties"] = Penalties;
dt.Rows[i]["TotalAmount"] = TotalAmount;
}
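The question also asked for a grand total. One option (a sketch, not part of the original answer) is to sum the grouped query results once and append a single row after the subtotal pass; rows here is assumed to be a materialized copy of the question's query, e.g. var rows = query.ToList();:
// Sketch: append one grand-total row after all subtotal rows have been inserted.
var grand = dt.NewRow();
grand["Category"] = "Grand Total";
grand["Penalties"] = rows.Sum(x => x.Penalties);
grand["Interest"] = rows.Sum(x => x.Interest);
grand["Admin"] = rows.Sum(x => x.Admin);
grand["TotalAmount"] = rows.Sum(x => x.TotalAmount);
dt.Rows.Add(grand);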
I am reading 5000 rows of data from a stream, as shown below (top to bottom), and storing them in a new CSV file.
ProductCode |Name | Type | Size | Price
ABC | Shoe | Trainers | 3 | 3.99
ABC | Shoe | Trainers | 3 | 4.99
ABC | Shoe | Trainers | 4 | 5.99
ABC | Shoe | Heels | 4 | 3.99
ABC | Shoe | Heels | 5 | 4.99
ABC | Shoe | Heels | 3 | 5.99
...
Instead of repeating entries, I want the CSV to contain one row per group, with the Price summed.
E.g. if I want a CSV file with only ProductCode, Name and Type, ignoring Size, I want it to look like this:
ProductCode |Name | Type | Price
ABC | Shoe | Trainers | 14.97
ABC | Shoe | Heels | 14.97
Show only ProductCode, Name:
ProductCode |Name | Price
ABC | Shoe | 29.94
Show ProductCode, Name, Size, ignoring Type:
ProductCode |Name | Size | Price
ABC | Shoe | 3 | 14.97
ABC | Shoe | 4 | 9.98
ABC | Shoe | 5 | 4.99
I store each row with all fields as a Product and keep a list of all Products:
public class Product
{
public string ProductCode { get; set; }
public string Name { get; set; }
public string Type { get; set; }
public string Size { get; set; }
public decimal Price { get; set; } // numeric so it can be summed below
}
And then output the needed fields into the csv depending on the csvOutputType using ConvertToOutputFormat which is different for each Parser.
public class CodeNameParser : Parser {
public override string ConvertToOutputFormat(Product p) {
return string.Format("{0},{1},{2}", p.ProductCode, p.Name, p.Price);
}
}
My code is then:
string fileName = Path.Combine(directory, string.Format("{0}.csv", name));
switch (csvOutputType)
{
case (int)CodeName:
_parser = new CodeNameParser();
break;
case (int)CodeType:
_parser = new CodeTypeParser();
break;
case (int)CodeNameType:
_parser = new CodeNameTypeParser();
break;
}
var results = Parse(stream).ToList(); //Parse returns IEnumerable<Product>
if (results.Any())
{
using (var streamWriter = File.CreateText(fileName))
{
//writes the header line out
streamWriter.WriteLine("{0},{1}", header, name);
results.ForEach(p => { streamWriter.WriteLine(_parser.ConvertToOutputFormat(p)); });
streamWriter.Flush();
streamWriter.Close();
}
Optional<string> newFileName = Optional.Of(SharpZipWrapper.ZipFile(fileName, RepositoryDirectory));
//cleanup
File.Delete(fileName);
return newFileName;
}
I don't want to go through the 5000 rows again to remove the duplicates, but would rather check whether an entry already exists before adding it to the CSV file. I know that I can GroupBy the required fields, but since I have 3 different outputs, I would have to write the same code 3 times for the different keys I need to group by, e.g.:
results = results
.GroupBy(p => new { p.ProductCode, p.Name, p.Type })
.Select(g => new Product {
ProductCode = g.Key.ProductCode,
Name = g.Key.Name,
Type = g.Key.Type,
Price = g.Sum(p => p.Price)
})
.ToList();
Is there any other way to do this?
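One possible way to avoid repeating the grouping three times (a sketch, not from the original post; the helper name SumByKey and its parameters are made up for illustration, and Price is assumed to be numeric as in the GroupBy snippet above) is to pass the key selector and the result projection into one shared method:
// Sketch: one generic helper that groups by any key and sums the prices.
// Requires: using System; using System.Collections.Generic; using System.Linq;
static List<Product> SumByKey<TKey>(
    IEnumerable<Product> source,
    Func<Product, TKey> keySelector,
    Func<TKey, decimal, Product> resultSelector)
{
    return source
        .GroupBy(keySelector)
        .Select(g => resultSelector(g.Key, g.Sum(p => p.Price)))
        .ToList();
}

// Usage for the ProductCode/Name/Type output; the other outputs only change the two lambdas.
var byCodeNameType = SumByKey(
    results,
    p => new { p.ProductCode, p.Name, p.Type },
    (k, total) => new Product { ProductCode = k.ProductCode, Name = k.Name, Type = k.Type, Price = total });
Each parser could equally expose its own key selector and projection, so the csvOutputType switch would pick both the grouping and the output format in one place.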