How to use foreach in this case - C#

Can you help me with this problem?
I'm using C#, and after getting data from the repository I have to show the results according to a rule.
I applied Distinct to a specific column called Level, and the results are the following:
foreach (var lev in Model.Commissions.Select(x => x.Level).Distinct())
And it prints:
"1P, "9C", "1T", "6C", "7B", "5C", "4C", "2T"
(There can be more or fewer distinct records, depending on the query's selection.)
How can I use a foreach over that column, with Distinct applied, to get the output in this order?
"5C", "1P", "1T", "2T", "4C", "6C", "7B", "9C".
The rule is that 5C always comes first, followed by the rest in ascending order, but I have no idea how to do it.

First pull the "5C" items from the list, then select everything else as you normally would, excluding "5C" from the second list.
The last step is to concatenate both lists together to get all the elements.
var fiveC = Model.Commissions.Where(x => x.Level == "5C").ToList();
var rest = Model.Commissions.Where(x => x.Level != "5C").OrderBy(x => x.Level).ToList();
var result = fiveC.Concat(rest).ToList();
Finally, you can run your foreach over the distinct levels, now in the desired order:
foreach (var lev in result.Select(x => x.Level).Distinct())
{
// Do whatever you need to do here
}
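Assuming the Level values are strings like those in the question, the two-step Concat approach can also be collapsed into a single ordering expression. This is a sketch with inline sample data standing in for Model.Commissions:

```csharp
using System;
using System.Linq;

// Stand-in for Model.Commissions.Select(x => x.Level).
var levels = new[] { "1P", "9C", "1T", "6C", "7B", "5C", "4C", "2T" };

// OrderBy a boolean key: false ("5C") sorts before true, then ThenBy sorts the rest ascending.
var ordered = levels
    .Distinct()
    .OrderBy(lev => lev != "5C")   // "5C" first
    .ThenBy(lev => lev)            // remaining levels in ascending order
    .ToList();

foreach (var lev in ordered)
    Console.WriteLine(lev);
// 5C, 1P, 1T, 2T, 4C, 6C, 7B, 9C
```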

foreach first list and see if it exists in the 2nd list and if it does then get other values from 2nd list

Loop over the first list using ID and ID_SCH, check whether each entry exists in the 2nd list, and if it does, get the other values from the 2nd list.
string getRecords = "SELECT .....";
List<Records> firstList = ReadAll(getRecords, reader => {
    return new Records(
        reader.ReadByName(NameRecord.ID, string.Empty),
        reader.ReadByName(NameRecord.ID_SCH, string.Empty)
    );
});
string getAllRecords = "SELECT .....";
List<Records> secondList = ReadAll(getAllRecords, reader => {
    return new Records(
        reader.ReadByName(NameRecord.ID, string.Empty),
        reader.ReadByName(NameRecord.ID_SCH, string.Empty),
        reader.ReadByName(NameRecord.BSID, string.Empty),
        reader.ReadByName(NameRecord.BSID_SCH, string.Empty)
    );
});
// currently I am able to use id only. But I would like to include `id` and `id_sch` as well in the below statement and then get the value of `BSID` and `BSID_SCH`.
var aa = data.Select(l1 => l1.Id).Intersect(secondList.Select(l2 => l2.Id)).ToList();
Acceptance criteria
1. foreach item in the first list, see whether it exists in the 2nd list. Somehow I managed to use `id` to get the result, but I would like to use `id_sch` as well.
2. If it does, get the fields that are exclusive to the 2nd list, like BSID and BSID_SCH.
3. After getting the BSID and BSID_SCH values from acceptance criterion 2, check whether these BSID and BSID_SCH values exist in the first list.
4. If they exist in the first list, how do I get the value of `id` and `id_sch` from the first list?
You can use tuples to combine the two values. As a first step, we add the values of the first list into a HashSet<T>, so that we can test quickly and easily whether an item exists.
var l1Exclude = data
.Select(l1 => (l1.Id, l1.id_sch))
.ToHashSet();
var l1Include = data
.Select(l1 => (l1.BSID, l1.BSID_SCH))
.ToHashSet();
Now you can use these sets to filter the second list on all of its properties:
IEnumerable<Records> result = secondList
.Where(l2 => l1Include.Contains((l2.BSID, l2.BSID_SCH)) &&
!l1Exclude.Contains((l2.Id, l2.id_sch)));
But a fundamental question is whether it would not be easier and faster to perform this logic directly in SQL, yielding the expected result. Something like this:
SELECT b.*
FROM
Table2 b
INNER JOIN Table1 a
ON b.BSID = a.BSID AND b.BSID_SCH = a.BSID_SCH
WHERE
NOT EXISTS (SELECT *
FROM Table1 aa
WHERE aa.Id = b.Id AND aa.IdSch = b.IdSch)
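For illustration, here is a self-contained sketch of the tuple/HashSet filter above, using named tuples as stand-ins for the question's Records type (the field names mirror the question's, the data is invented):

```csharp
using System;
using System.Linq;

// Stand-in rows: (Id, IdSch, Bsid, BsidSch).
var firstList = new[]
{
    (Id: "1", IdSch: "A", Bsid: "X", BsidSch: "P"),
    (Id: "2", IdSch: "B", Bsid: "Y", BsidSch: "Q"),
};
var secondList = new[]
{
    (Id: "1", IdSch: "A", Bsid: "X", BsidSch: "P"), // excluded: Id/IdSch pair exists in firstList
    (Id: "3", IdSch: "C", Bsid: "X", BsidSch: "P"), // kept: Bsid pair matches, Id pair does not
    (Id: "4", IdSch: "D", Bsid: "Z", BsidSch: "R"), // excluded: Bsid pair not in firstList
};

// Tuples have structural equality, so HashSet.Contains works as expected.
var l1Exclude = firstList.Select(r => (r.Id, r.IdSch)).ToHashSet();
var l1Include = firstList.Select(r => (r.Bsid, r.BsidSch)).ToHashSet();

var result = secondList
    .Where(r => l1Include.Contains((r.Bsid, r.BsidSch)) &&
                !l1Exclude.Contains((r.Id, r.IdSch)))
    .ToList();

Console.WriteLine(result.Count); // 1
```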

Linq value, use Where to Remove items

I am trying to remove items from an IQueryable list but the result is only pulling in those items:
public IQueryable<Biz.Data.AllItems> items_GetData()
{
var submissions = Biz.Data.AllItems.LoadNotDeleted().Where(x =>
// these items need to match to remove the item
x.itemOne != null &&
x.itemTwo != null &&
x.itemThree != null);
var filter = new Biz.Data.AllItemsFilter();
return submissions = Biz.Data.Registration.Load(filter).OrderBy(x => x.LastName).ThenBy(x => x.FirstName);
}
Currently, it's only pulling in items that match those conditions instead of removing them. I can't use RemoveAll because it's not a List, and I don't want to restructure this because it passes through a filter process after this code. Is there another way to remove the matching items before it reaches the filter?
As discussed in the comments, simply negate the condition in your predicate.
So if this is your original statement:
var itemsThatMatch = list.Where(x => /* some condition */);
This will give you the opposite:
var itemsThatDoNotMatch = list.Where(x => !(/* some condition */));
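Applied to the question's three conditions, the negated predicate looks like the sketch below (the anonymous type is a hypothetical stand-in for Biz.Data.AllItems):

```csharp
using System;
using System.Linq;

var items = new[]
{
    new { ItemOne = (string)null, ItemTwo = "b", ItemThree = "c" },  // does not match: kept
    new { ItemOne = "a", ItemTwo = "b", ItemThree = "c" },           // matches all three: removed
};

// Keep only the items that do NOT satisfy all three non-null conditions.
var kept = items
    .Where(x => !(x.ItemOne != null && x.ItemTwo != null && x.ItemThree != null))
    .ToList();

Console.WriteLine(kept.Count); // 1
```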

Filtering a list and removing unwanted data C#

I have a list of users for a timespan; for argument's sake, let's say a month. Certain users in that list do not meet the criteria for a certain objective, so I want to filter the list and display the filtered-out data in another list. I replicated the list as follows:
List<tblList1> List1 = new XPQuery<tblList1>(session)
.Where(w => w.UserCode != null).ToList();
I then use a foreach loop to go through this list, compare the data to my criteria, and add the matches to the new list, which works perfectly. The problem I now have is deleting that data from the first list. I tried the following in a new method:
public void DeleteData(Session session)
{
List<tblList1> List1= new XPQuery<tblList1>(session)
.Where(w => w.UserID != null).ToList();
List<tblList2> List2= new XPQuery<tblList2>(session)
.Where(w => w.UserID!= null).ToList();
List1.RemoveAll(w => w.UserID == List2.Any(e => e.UserID== w.UserID));
}
So in the end I want to remove all the data in list1 so that we can view the deleted data in list2. Any help would be appreciated if I can just get the RemoveAll LINQ statement correct as the current line does not work and I am unsure of how to handle this in LINQ.
As far as I can see from the comments:
I want to take List1 { 1, 2, 3 }, compare it to the criteria, see that { 2 } does not meet the criteria, add it to List2, and then delete { 2 } from List1.
I can't see any need for LINQ at all. Let's fill both lists in parallel:
List<tblList1> List1 = new List<tblList1>();
//TODO: please, check types; it seems that it should be List<tblList1> List2
List<tblList2> List2 = new List<tblList2>();
foreach (var item in new XPQuery<tblList1>(session).Where(w => w.UserID != null)) {
if (YourCriteriaHere)
List1.Add(item); // <- Criteria met: add to List1
else
List2.Add(item); // <- Doesn't meet: "delete" from (just not add to) List1 into List2
}
You can just use LINQ Where to filter your list, and then Except to get unfiltered values:
List<Item> all = ...; // your original list
List<Item> matching = all.Where(x => IsMatching(x)).ToList(); // IsMatching is any filtering logic
List<Item> notMatching = all.Except(matching).ToList();
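A small self-contained sketch of the Where/Except split, using ints in place of the Item type. Note that Except uses the default equality comparer; that works here because notMatching draws the very same values from the original list:

```csharp
using System;
using System.Linq;

var all = new[] { 1, 2, 3, 4, 5 }.ToList();   // stand-in for the original list

bool IsMatching(int x) => x % 2 == 0;          // stand-in for the real filtering logic

var matching = all.Where(IsMatching).ToList();     // { 2, 4 }
var notMatching = all.Except(matching).ToList();   // { 1, 3, 5 }

Console.WriteLine(string.Join(",", notMatching)); // 1,3,5
```

One caveat: Except also de-duplicates its output, so if the original list can contain duplicates you may prefer two complementary Where calls instead.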

Optimize LINQ to Objects query

I have around 200K records in a list; I'm looping through them and forming another collection. This works fine on my local 64-bit Win 7 machine, but when I move it to Windows Server 2008 R2 it takes a lot of time. There is a difference of almost an hour!
I tried looking at Compiled Queries and am still figuring it out.
For various reasons, we can't do a database join to retrieve the child values.
Here is the code:
//listOfDetails is another collection
List<SomeDetails> myDetails = null;
foreach (CustomerDetails myItem in customerDetails)
{
var myList = from ss in listOfDetails
where ss.CustomerNumber == myItem.CustomerNum
&& ss.ID == myItem.ID
select ss;
myDetails = (List<SomeDetails>)(myList.ToList());
myItem.SomeDetails = myDetails;
}
I would do this differently:
var lookup = listOfDetails.ToLookup(x => new { x.CustomerNumber, x.ID });
foreach(var item in customerDetails)
{
var key = new { CustomerNumber = item.CustomerNum, item.ID };
item.SomeDetails = lookup[key].ToList();
}
The big benefit of this code is that it only has to loop through the listOfDetails once to build the lookup - which is nothing more than a hash map. After that we just get the values using the key, which is very fast as that is what hash maps are built for.
I don't know why you have the difference in performance, but you should be able to make that code perform better.
//listOfDetails is another collection
List<SomeDetails> myDetails = ...;
var detailsGrouped = myDetails.ToLookup(x => new { x.CustomerNumber, x.ID });
foreach (CustomerDetails myItem in customerDetails)
{
var myList = detailsGrouped[new { CustomerNumber = myItem.CustomerNum, myItem.ID }];
myItem.SomeDetails = myList.ToList();
}
The idea here is to avoid the repeated looping on myDetails, and build a hash based lookup instead. Once that is built, it is very cheap to do a lookup.
The inner ToList() is forcing an evaluation on each loop, which has got to hurt. Projecting lazily with Select might let you avoid the ToList, something like this:
var details = customerDetails.Select( item => listOfDetails
    .Where( detail => detail.CustomerNumber == item.CustomerNum )
    .Where( detail => detail.ID == item.ID )
);
If you first get all the SomeDetails and then assign them to the items, it might speed up. Or it might not. You should really profile to see where the time is being taken.
I think you'd probably benefit from a join here, so:
var mods = customerDetails
.Join(
listOfDetails,
x => Tuple.Create(x.ID, x.CustomerNum),
x => Tuple.Create(x.ID, x.CustomerNumber),
(a, b) => new {custDet = a, listDet = b})
.GroupBy(x => x.custDet)
.Select(g => new { custDet = g.Key, items = g.Select(x => x.listDet).ToList() });
foreach(var mod in mods)
{
mod.custDet.SomeDetails = mod.items;
}
I didn't compile this code...
With a join the matching of items from one list against another is done by building a hashtable-like collection (Lookup) of the second list in O(n) time. Then it's a matter of iterating the first list and pulling items from the Lookup. As pulling data from a hashtable is O(1), the iterate/match phase also only takes O(n), as does the subsequent GroupBy. So in all the operation should take ~O(3n) which is equivalent to O(n), where n is the length of the longer list.
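To make the lookup mechanics concrete, here is a minimal runnable sketch with inline stand-in data (the property names mirror the question's, but the rows are invented for illustration):

```csharp
using System;
using System.Linq;

// Stand-in for listOfDetails.
var details = new[]
{
    new { CustomerNumber = 1, ID = 10, Payload = "a" },
    new { CustomerNumber = 1, ID = 10, Payload = "b" },
    new { CustomerNumber = 2, ID = 20, Payload = "c" },
};

// Stand-in for customerDetails; customer 3 has no matching detail rows.
var customers = new[]
{
    new { CustomerNum = 1, ID = 10 },
    new { CustomerNum = 2, ID = 20 },
    new { CustomerNum = 3, ID = 30 },
};

// One O(n) pass builds the hash map; each per-customer lookup afterwards is O(1).
var lookup = details.ToLookup(d => new { d.CustomerNumber, d.ID });

foreach (var c in customers)
{
    // Anonymous types with the same property names/types compare structurally,
    // so this key matches the one used when building the lookup.
    var rows = lookup[new { CustomerNumber = c.CustomerNum, c.ID }].ToList();
    Console.WriteLine($"{c.CustomerNum}: {rows.Count}");
}
// 1: 2, 2: 1, 3: 0
```

A missing key simply yields an empty sequence (customer 3 above), so no existence check is needed before calling ToList().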

most matched field value

I have a DataTable, and I can also use LINQ.
The DataTable has many columns and rows. One of the columns is called feedCode; its type is string, and in the database it is varchar(7), nullable.
feedCode may contain values such as 9051245, 9051246, 9051247, 9031454, 9021447.
The method must return the most-matched value, in this case 905 (the first 3 characters of the string, since most values start with 905).
Thanks.
Try to use this code:
var feedCodes = new string[] { "9051245", "9051246", "9051247", "9051245", "9031454", "9021447" };
var mostOccuring = feedCodes.Where(feedCode => feedCode != null)
.GroupBy(feedCode => feedCode.Length < 3 ? feedCode : feedCode.Substring(0, 3))
.OrderByDescending(group => group.Count())
.FirstOrDefault();
if(mostOccuring == null)
{
//some exception handling
}
else
{
//process mostOccuring.Key
}
This code also handles feed codes with a length of less than 3 (even empty strings). If you don't want to include them, just filter them out in the Where statement.
Maybe I didn't understand your question correctly, but perhaps this will be a starting point for you:
//The feedCodes (i put one in two times, to have one appearing most often)
var values = new string[] { "9051245", "9051246", "9051247", null, "", "9051245", "9031454", "9021447" };
//Just filter the list for filled up values
var query = values.Where(value => !String.IsNullOrEmpty(value))
//and group them by their starting text
.GroupBy(value => value.Substring(0, 3))
//order by the most occuring group first
.OrderByDescending(group => group.Count());
//Iterate over all groups or just take the first one with query.First() or query.FirstOrDefault()
foreach (var group in query)
{
Console.WriteLine(group.Key + " Count: " + group.Count());
}
