Converting flat data from SQL to List<Item> - C#

First of all, I'm sorry if you feel this question has been raised before, but I can't seem to wrap my mind around this one, although it's really not the hardest thing to do.
Basically I have a query result from SQL which holds several rows, each consisting of:
id, parentid, name, description, level
level is the depth of the item when the data is viewed as a tree structure, represented as a positive integer.
Now I would like to parse/convert this flat data into a List<Item> mySqlData, where Item looks like the following class definition:
public class Item
{
public string Id { get; set; }
public string ParentId { get; set; }
public string Name { get; set; }
public string Description { get; set; }
public List<Item> Items { get; set; } = new List<Item>();
}
Can anybody give me some example code? It's probably going to be something along the lines of recursively iterating through the list while adding the items in their place.
Thanks in advance.

Assuming you want to build the tree, and don't get the data out of order, you should be able to maintain a lookup as you go, i.e.
var idLookup = new Dictionary<int, Item>();
var roots = new List<Item>();
foreach([row]) {
Item newRow = [read basic row];
int? parentId = [read parentid];
Item parent;
if(parentId != null && idLookup.TryGetValue(parentId.Value, out parent)) {
parent.Items.Add(newRow);
} else {
roots.Add(newRow);
}
idLookup.Add(newRow.Id, newRow);
}
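For completeness, here is a minimal runnable sketch of the same idea, assuming the rows have already been read into a flat list of Item objects (e.g. by your data-access code) and that Id/ParentId are the string properties from the class above; BuildTree is just an illustrative helper name. If you order the query by level ascending, a parent is always seen before its children.
// Requires: using System.Collections.Generic;
public static List<Item> BuildTree(IEnumerable<Item> flatItems)
{
    var idLookup = new Dictionary<string, Item>();
    var roots = new List<Item>();

    foreach (var item in flatItems)
    {
        idLookup[item.Id] = item;

        Item parent;
        if (!string.IsNullOrEmpty(item.ParentId) && idLookup.TryGetValue(item.ParentId, out parent))
        {
            // parent already seen: attach as a child
            parent.Items.Add(item);
        }
        else
        {
            // no parent (or parent not seen yet): treat as a root
            roots.Add(item);
        }
    }

    return roots;
}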

Related

C# Filter List of objects by string

I have a list of objects. Each object has n properties. I have another list with m (1..n) property names.
Two questions:
How do I filter the list of objects by checking if one of the properties in the other list contains a string?
How do I filter the list of objects by checking if ANY property contains a string?
Here is the object class:
public class MyModel
{
public string POne { get; set; }
public string PTwo { get; set; }
public string PThree { get; set; }
}
In pseudocode it would be something like:
New List filtered
Foreach object o in myObjectList
If o.POne, o.PTwo or o.PThree contains string "test"
filtered.Add(o)
Else
Next
I tried to adapt the code from this post, but could not get it working.
Another thought would be to create a nested list with all property values. In that scenario I get the filtering working with the following line:
List<List<string>> filtered = testList.Where(q => q.Any(a => a.Contains(testString))).ToList();
But isn't that creating a lot of overhead by creating all those extra lists?
Performance is a concern for me.
Make your class responsible for determining whether it matches or not via a method:
public class MyModel
{
public string POne { get; set; }
public string PTwo { get; set; }
public string PThree { get; set; }
public bool DoesMatch(string tomatch)
{
return POne.Equals(tomatch) ||
PTwo.Equals(tomatch) ||
PThree.Equals(tomatch);
}
}
Then you can do this
var filtered = testList.Where(q => q.DoesMatch("string to check"));
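If you also need the two variants from the question (filter by a given subset of property names, or by ANY property), here is a hedged reflection-based sketch, assuming the MyModel class above; the method names are illustrative, not from the post.
// Requires: using System.Collections.Generic; using System.Linq; using System.Reflection;
public static class MyModelFilters
{
    // 1) Filter by a specific set of property names (the "m of n" case).
    public static List<MyModel> FilterByProperties(
        IEnumerable<MyModel> source, ICollection<string> propertyNames, string term)
    {
        // Only consider the string properties whose names were requested.
        PropertyInfo[] props = typeof(MyModel).GetProperties()
            .Where(p => p.PropertyType == typeof(string) && propertyNames.Contains(p.Name))
            .ToArray();

        return source
            .Where(o => props.Any(p => ((string)p.GetValue(o, null) ?? "").Contains(term)))
            .ToList();
    }

    // 2) Filter by ANY string property containing the term.
    public static List<MyModel> FilterByAnyProperty(IEnumerable<MyModel> source, string term)
    {
        PropertyInfo[] props = typeof(MyModel).GetProperties()
            .Where(p => p.PropertyType == typeof(string))
            .ToArray();

        return source
            .Where(o => props.Any(p => ((string)p.GetValue(o, null) ?? "").Contains(term)))
            .ToList();
    }
}
Note that reflection adds overhead per element, so if performance is the main concern, the hand-written checks in the answer above will be faster.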

JSON (deserialized) sends null values to list

Firstly, thank you for taking the time to look at this. It's quite a lot.
Question:
I'm basically trying to download JSON as a string and then deserialize it to a list. The reason is so I can then read a specific property of that list (in my case 'ips', because it's all I actually need) and insert it into a table if requirements are met.
The problem is that it only puts null values into the list: 114 entries of null or empty arrays, and I can't figure out why.
I'll attach a link to the JSON because it's a massive file: https://endpoints.office.com/endpoints/Worldwide?clientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7
Here is my code:
Getters and setters for JSON
public class GetSetJsonIP {
[JsonProperty("id")]
public int id { get; set; }
[JsonProperty("serviceArea")]
public string ServiceArea { get; set; }
[JsonProperty("serviceAreaDisplayName")]
public string ServiceAreaDisplayName { get; set; }
[JsonProperty("urls")]
public IList<string> urls { get; set; }
[JsonProperty("ips")]
public IList<string> ips { get; set; }
[JsonProperty("tcpPorts")]
public string tcpPorts { get; set; }
[JsonProperty("expressRoute")]
public bool expressRoute { get; set; }
[JsonProperty("category")]
public string category { get; set; }
[JsonProperty("required")]
public bool required { get; set; }
[JsonProperty("notes")]
public string notes { get; set; }
[JsonProperty("udpPorts")]
public string udpPorts { get; set; }
}
List class
public class ConvertJsonIP{
public List<GetSetJsonIP> jsonIpConvert { get; set; }
public List<GetSetJsonIP> jsonIPConvert = new List<GetSetJsonIP>();
}
I download the JSON into a string variable called o365IP:
o365IP = wc.DownloadString(wc.BaseAddress + "/endpoints/Worldwide?clientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7");
I deserialize using my list class into a separate var:
var o365IpVerion = JsonConvert.DeserializeObject<List<ConvertJsonIP>>(o365IP);
This code shows no errors, so I can only assume it's a logical one on my part. It should be noted that I had to put the List<> in to stop an error stating that it couldn't convert an object to an array.
Seriously, I've been stuck on this for 3 days, so any help would be greatly appreciated! Thanks in advance!
The JSON you have is a list of objects, and each of these objects conforms to GetSetJsonIP. You should deserialize using List<GetSetJsonIP>:
var o365IpVerion = JsonConvert.DeserializeObject<List<GetSetJsonIP>>(o365IP);
Your public class GetSetJsonIP works fine.
The reason you must deserialize into a List<> is that the JSON starts with a bracket, making the entire object a list/array:
var O365IpVersion = JsonConvert.DeserializeObject<List<GetSetJsonIP>>(O365IP);
There are different ways to fetch the value of a certain property. If you just need ips and want to check the value and then update it, you could loop:
JArray arr = JArray.Parse(O365IP);
foreach (JObject obj in arr.Children<JObject>())
{
foreach (JProperty prop in obj.Properties().Where(x => x.Name == "ips"))
{
//use prop.Value and perform tasks
}
}
Or just simply loop like this:
for (int i = 0; i < O365IpVersion.Count; i++)
{
//use O365IpVersion.ElementAt(i).ips
}
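Putting the fix together, here is a minimal end-to-end sketch, assuming Newtonsoft.Json and the GetSetJsonIP class above; the variable names are illustrative.
// Requires: using System; using System.Collections.Generic; using System.Linq;
//           using System.Net; using Newtonsoft.Json;
using (var wc = new WebClient())
{
    string o365IP = wc.DownloadString(
        "https://endpoints.office.com/endpoints/Worldwide?clientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7");

    // The payload is a JSON array, so deserialize straight into a list of the row type.
    List<GetSetJsonIP> endpoints = JsonConvert.DeserializeObject<List<GetSetJsonIP>>(o365IP);

    // Flatten out just the ips property, skipping entries that have none.
    List<string> allIps = endpoints
        .Where(e => e.ips != null)
        .SelectMany(e => e.ips)
        .Distinct()
        .ToList();

    Console.WriteLine(allIps.Count + " distinct IP ranges");
}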

Preventing duplicate entry for object class to a listbox as item

I am trying to add an item from an object class to a list box and check whether the item already exists in the list box. If it does, I remove the existing entry and replace it with the current object (which will just increase the quantity and price). But for some reason the if statement is always true: I am checking whether the item name already exists, and it always comes up as if it doesn't (even though it does).
Here is my attempt below:
private void AddListItem(OrderItem item)
{
if (!lstOrderPad.Items.Contains(item.Name))
{
lstOrderPad.Items.Add(string.Format("{0,-5}{1,-15}{2,-10}",
item.Quantity, item.Name, item.ItemPrice));
}
else
{
int index = lstOrderPad.Items.IndexOf(item);
lstOrderPad.Items.RemoveAt(index);
lstOrderPad.Items.Add(string.Format("{0,-5}{1,-15}{2,-10}",
item.Quantity, item.Name, item.ItemPrice));
}
}
This is because you are adding a formatted string to the list, but comparing with only the Name property.
You are adding the following string to the list:
string.Format("{0,-5}{1,-15}{2,-10}", item.Quantity, item.Name, item.ItemPrice)
But comparing with
if (lstOrderPad.Items.Contains(item.Name))
You should instead do the following:
var itemFormat = string.Format("{0,-5}{1,-15}{2,-10}", item.Quantity, item.Name, item.ItemPrice);
if (!lstOrderPad.Items.Contains(itemFormat))
{
lstOrderPad.Items.Add(itemFormat);
}
if (!lstOrderPad.Items.Contains(item.Name))
This will always be true because none of the items is ever exactly item.Name. Every item of yours is a formatted "{quantity}{name}{price}" string, never JUST the name.
If you change the query as below, you will check every item in the list to see whether the name is part of it:
var itemWithSameName = lstOrderPad.Items.Cast<string>().Where(eachItem => eachItem.Contains(item.Name)).FirstOrDefault();
Now you can check whether itemWithSameName is null or not before replacing it. BUT you store each of the elements as a plain string, so it is not clear how you would determine which part of the string is the quantity you want to update. You should definitely look into a class that can help you store these values.
Something like...
public class Order
{
public Order ()
{
Items = new List<Item>();
}
public List<Item> Items { get; set; }
}
public class Item
{
public string Name { get; set; }
public int Quantity { get; set; }
public int Price { get; set; }
}
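For example, with a typed backing list the duplicate check becomes a lookup by Name, and the list box is just a rendering of that list. This is a hedged sketch assuming a WinForms ListBox (lstOrderPad), the OrderItem type from the question, and using System.Linq; the backing field and the RefreshListBox helper are illustrative names.
// Requires: using System.Collections.Generic; using System.Linq;
private readonly List<OrderItem> orderItems = new List<OrderItem>();

private void AddListItem(OrderItem item)
{
    // Look the item up by name in the typed backing list, not in the display strings.
    var existing = orderItems.FirstOrDefault(o => o.Name == item.Name);
    if (existing == null)
    {
        orderItems.Add(item);
    }
    else
    {
        // Merge into the existing entry instead of adding a duplicate row.
        existing.Quantity += item.Quantity;
        existing.ItemPrice += item.ItemPrice;
    }

    RefreshListBox();
}

private void RefreshListBox()
{
    // Rebuild the display strings from the backing list.
    lstOrderPad.Items.Clear();
    foreach (var o in orderItems)
    {
        lstOrderPad.Items.Add(string.Format("{0,-5}{1,-15}{2,-10}", o.Quantity, o.Name, o.ItemPrice));
    }
}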
This gives you more flexibility in queries and updates to items. Hope it helps.

Single Dictionary or multiple dictionaries or any other alternatives

This is the problem I am trying to solve:
Given a list of movies in a CSV file, extract the movie name and genre, and be able to return query results based on year and genre. Basically, parse a CSV input file and build an indexed data store that allows searching the data in a timely manner.
My Attempt:
Create a Dictionary with the genre as key and a List of the movies in that genre as value. This dictionary handles searches by genre.
To get results by year, I was thinking of creating another dictionary with the year as key and a list of the movies in that year as value. This dictionary handles searches by year.
Now, when there is really large data to read from the CSV, is it wise to create multiple dictionaries, one per lookup criterion, as I have done? Or should I just create a single List from the CSV data and filter it based on the criteria? That would slow down performance. Are there any better approaches to this problem?
Also, in my code I sort by MovieName before displaying the values. Should I sort the list and save it sorted in the dictionary itself?
Any feedback related to the code is also appreciated.
CSV file source : https://gist.github.com/tiangechen/b68782efa49a16edaf07dc2cdaa855ea
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Enter Genre");
string genre = Console.ReadLine();
CSVReader csv = new CSVReader(@"C:\Users\Downloads\movies.csv");
var result = csv.ReadCSVFile();
if (result != null)
{
List<MovieDetails> movieDetails;
result.TryGetValue(genre.ToUpper(), out movieDetails);
if (movieDetails != null)
{
movieDetails.Sort((x, y)=> x.MovieName.CompareTo(y.MovieName));
foreach (var item in movieDetails)
{
Console.WriteLine(item.MovieName +" "+item.Genre);
}
}
}
}
}
public class CSVReader
{
public string filePath { get; set; }
public CSVReader(string filepath)
{
this.filePath = filepath;
}
public Dictionary<string, List<MovieDetails>> ReadCSVFile()
{
try
{
string Line;
Dictionary<string, List<MovieDetails>> dictionary = new Dictionary<string, List<MovieDetails>>();
if (!string.IsNullOrEmpty(filePath))
{
using (StreamReader sdr = new StreamReader(filePath))
{
Line = sdr.ReadLine(); // skip the header line
while ((Line = sdr.ReadLine()) != null)
{
List<MovieDetails> movieDetailsList;
string[] input = Line.Split(',');
MovieDetails movieDetails = new MovieDetails()
{
MovieName = input[0],
Genre = input[1].ToUpper(),
LeadStudio = input[2],
Audience = int.Parse(input[3]),
Profitability = float.Parse(input[4]),
RottenTomatoesPercent = int.Parse(input[5]),
WorldwideGross = decimal.Parse(input[6], System.Globalization.NumberStyles.Currency),
Year = int.Parse(input[7])
};
if (dictionary.TryGetValue(movieDetails.Genre, out movieDetailsList))
{
movieDetailsList.Add(movieDetails);
}
else
{
dictionary.Add(movieDetails.Genre, new List<MovieDetails>() { movieDetails });
}
}
return dictionary;
}
}
else
{
return null;
}
}
catch (Exception)
{
throw;
}
}
}
public class MovieDetails
{
public string MovieName { get; set; }
public string Genre { get; set; }
public string LeadStudio { get; set; }
public int Audience { get; set; }
public float Profitability { get; set; }
public int RottenTomatoesPercent { get; set; }
public decimal WorldwideGross { get; set; }
public int Year { get; set; }
}
Index selectivity (the ratio of the average number of records in each "bucket" to the total number of records) is key to designing your program efficiently.
In your sample CSV genre has poor selectivity, because more than half of all movies in the sample are comedies; you would be better off walking the entire list under the circumstances. Same goes for the year: with only four distinct years available, you would be returning a quarter of all records in every search, so you might as well walk the list.
You can get better selectivity by introducing a composite index, i.e. an index with keys combining two or more columns (say, genre + year). This index has better selectivity: you would be placing ten or fewer movies in each "bucket", so when the query asks for a combination, you'd go for an index.
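As a hedged illustration of such a composite index, assuming the MovieDetails class above and a C# 7 value tuple of (genre, year) as the key (a Tuple<string, int> key works the same way on older compilers); BuildIndex is an illustrative helper name.
// Requires: using System.Collections.Generic;
public static Dictionary<(string Genre, int Year), List<MovieDetails>> BuildIndex(
    IEnumerable<MovieDetails> movies)
{
    var index = new Dictionary<(string Genre, int Year), List<MovieDetails>>();
    foreach (var movie in movies)
    {
        var key = (movie.Genre, movie.Year);
        List<MovieDetails> bucket;
        if (!index.TryGetValue(key, out bucket))
        {
            bucket = new List<MovieDetails>();
            index.Add(key, bucket);
        }
        bucket.Add(movie);
    }
    return index;
}

// Usage: index.TryGetValue(("COMEDY", 2010), out var comediesFrom2010);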
Note that another possibility to run queries faster is to keep your records in sorted order on one of the search keys. This would let you handle queries with a binary search and a forward scan, rather than a full scan of the data. Picking the sort column requires statistics on your search queries, because it may negatively impact queries that are based on other columns.
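And a hedged sketch of the sorted-order idea, assuming the list has been sorted by Year once up front (e.g. movies.Sort((a, b) => a.Year.CompareTo(b.Year))); FindByYear is an illustrative helper name.
// Requires: using System.Collections.Generic;
public static List<MovieDetails> FindByYear(List<MovieDetails> sortedByYear, int year)
{
    // Binary search needs a probe element and a comparer on the sort key.
    var probe = new MovieDetails { Year = year };
    int i = sortedByYear.BinarySearch(probe,
        Comparer<MovieDetails>.Create((a, b) => a.Year.CompareTo(b.Year)));

    var results = new List<MovieDetails>();
    if (i < 0)
        return results; // no movie with that year

    // BinarySearch may land anywhere inside a run of equal years,
    // so walk left to the first match, then scan forward.
    while (i > 0 && sortedByYear[i - 1].Year == year)
        i--;
    for (; i < sortedByYear.Count && sortedByYear[i].Year == year; i++)
        results.Add(sortedByYear[i]);

    return results;
}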

SQLite, Linq, Foreach: how to optimize performances?

I've developed a UWP app where I use a SQLite database to store data that is synced.
Among these data, there are a lot of tables that contain translated data. For example, in various cases, we have:
a "business" table, which contains the id that is really used in the database
a "translation" table, which contains translations for the business table
The models of the "business" tables are defined like this:
public class Failure_Type : BasePoco
{
[PrimaryKey, NotNull]
public int failure_id { get; set; }
public int? function_type_id { get; set; }
public int? component_category_id { get; set; }
[MaxLength(200), NotNull]
public string description { get; set; }
public DateTime? disable_date { get; set; }
[Ignore]
public string _descriptionTr { get; set; }
}
The field "description" stores the english/default description, and the "_descriptionTr" field will store the translated description.
The models of the "translation" tables are defined like this:
public class Failure_Type_Translation : BasePoco
{
[PrimaryKey, NotNull]
public int failure_type_translation_id { get; set; }
public int? failure_type_id { get; set; }
[MaxLength(2)]
public string language { get; set; }
[MaxLength(200), NotNull]
public string description { get; set; }
}
The field "failure_type_id" is related to the business table, the other fields store the language code and the related translation.
So, after syncing datas in the SQLite database, I refresh the "translated" datas in the app and this can take a long moment. The load of the the 2 tables from the SQLite is very quickly, but the update of the "_descriptionTr" field can be very slow:
var failureType = ServiceLocator.Current.GetInstance<IRepository>().GetAll<Failure_Type>();
var failureTypeTranslations = ServiceLocator.Current.GetInstance<IRepository>().GetAll<Failure_Type_Translation>();
FailureType = new ObservableCollection<Failure_Type>();
foreach (var ft in failureType)
{
var ftt = failureTypeTranslations.FirstOrDefault(i => i.failure_type_id == ft.failure_id && i.language.ToLower().Equals(lang));
if (ftt != null)
ft._descriptionTr = ftt.description;
else
ft._descriptionTr = ft.description;
FailureType.Add(ft);
}
Is there a better way for doing this?
How could I optimize it?
Edit:
the "business" table contains 550 rows
the "translation" table contains 3500 rows
the duration of the loop is nearly 1 minute
A couple of suggestions:
Create the observable collection at once ...
FailureType = new ObservableCollection<Failure_Type>(failureType);
... so the individual additions don't fire notifications. Now use FailureType in the loop.
Instead of fetching all translations, filter them by lang:
var failureTypeTranslations = ServiceLocator.Current.GetInstance<IRepository>()
.GetAll<Failure_Type_Translation>()
.Where(i => i.language.ToLower() == lang);
Create a dictionary for lookup of known translations:
var dict = failureTypeTranslations.ToDictionary(ftt => ftt.failure_type_id.Value);
foreach (var ft in FailureType)
{
Failure_Type_Translation ftt;
if (dict.TryGetValue(ft.failure_id, out ftt))
ft._descriptionTr = ftt.description;
else
ft._descriptionTr = ft.description;
}
I think that especially the failureTypeTranslations.FirstOrDefault part kills performance: that query is executed for every iteration of the loop.
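Putting the three suggestions together, here is a hedged sketch of the refresh, assuming the ServiceLocator/IRepository calls from the question and that lang is already lowercase; the other names are illustrative.
// Requires: using System.Linq; using System.Collections.Generic; using System.Collections.ObjectModel;
var failureTypes = ServiceLocator.Current.GetInstance<IRepository>()
    .GetAll<Failure_Type>();

// Only keep translations for the current language, then index them once by id,
// so the per-item lookup is O(1) instead of a scan of all 3500 rows.
var translationsByFailureId = ServiceLocator.Current.GetInstance<IRepository>()
    .GetAll<Failure_Type_Translation>()
    .Where(t => t.failure_type_id != null && t.language.ToLower() == lang)
    .ToDictionary(t => t.failure_type_id.Value);

// Build the observable collection in one go so each Add doesn't raise a notification.
FailureType = new ObservableCollection<Failure_Type>(failureTypes);

foreach (var ft in FailureType)
{
    Failure_Type_Translation ftt;
    ft._descriptionTr = translationsByFailureId.TryGetValue(ft.failure_id, out ftt)
        ? ftt.description
        : ft.description; // fall back to the default/English description
}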
