Possibilities and nested foreach loops in C#

I am writing a program that calculates all possible seating arrangements for people at an arbitrary number of tables (which can go up to 1000+).
As you see in the image, the black dots represent people (though in the program there are different types of people). There are two types of tables: blue ones and pink ones. A blue table seats 3 people and a pink table seats 2.
To get all possible places for any person to sit, I could use nested foreach loops (8 of them) and then run some extra code... But what happens if I add 200 tables? Do I then need 200 nested foreach loops?
Is there any way to write this faster and with less code?
What I tried:
switch (listoftables.Count)
{
    case 1:
        foreach (Table table in listoftables)
        {
            // code to add people to this table
        }
        break;
    case 2:
        foreach (Table table1 in listoftables)
        {
            foreach (Table table2 in listoftables)
            {
                // code to add people to this table
            }
        }
        break;
}
INPUT: an array of editable Table class objects (a class I created myself).
PROCESS: the above list is edited and added to another List object; after the whole foreach process has ended, the OUTPUT writes all possible configurations (which are in that other List object) to the screen.
Example part of the output:
// List<Table> listofalltables was processed
List<List<Table>> output
=> contains as [0] in the array: the first List
=> which contains as [0]: Table.attachedpeople (a list)

Try a recursive method. A small example:
public List<Table> AssignToTable(List<Person> invited, List<Table> tables)
{
    if (!tables.Any(t => t.HasRoom))
        return tables;

    Assign(tables, invited); // code to add a person to a table
    return AssignToTable(invited, tables);
}
If I were you, I'd create an object that represents your tables, with a property that tells you whether there is still some room available. This will assign a table to every person without any foreach.
Then in your main you could have a method that rearranges the tables in all the ways possible:
Table 1
Table 2
Table 3
Then
Table 1
Table 3
Table 2
...
Table 3
Table 2
Table 1
and call the recursive method on those lists, and you will have all the possibilities of where people can sit...

I've taken @Guigui's answer and changed it to match how I interpret the question. This will try to seat everyone everywhere (except when there are more people than chairs; is that a case? I assumed more chairs than people) by recursion and loops. As you can see, the complexity will be of the form O(nrSeats ^ nrPeople), which is a lot (if I'm not mistaken).
Person[] peopleInvited = ....; // Avoid copying this, we are not modifying it

public void AssignToTable(int invited, SeatingArrangements tables)
{
    if (!tables.HasRoom || invited == peopleInvited.Length)
    {
        // Do what? Print the seating?
    }
    else
    {
        var personToSeat = peopleInvited[invited];
        foreach (var possibleSeating in tables.GetEmptyChairs())
        {
            // Add one person to a table somewhere, but don't modify tables
            var newArrangements = Assign(possibleSeating, personToSeat, tables);
            AssignToTable(invited + 1, newArrangements);
        }
    }
}

Well, it's more a question about math than programming. What you are trying to do is create permutations of people. Basically you have N tables and 2N+2 seats, and you can assign a number to each seat. The result will then be the set of K-permutations of the 2N+2 seats, where K is the number of people invited and N is the number of tables.
You can do this using loops, but you can also do it recursively. There are ready-to-use algorithms out there. For example:
Algorithm to generate all possible permutations of a list?
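To sketch the recursive approach (the names and the seat-numbering scheme here are my own, not from the question): given the seat numbers and the number of invited people K, the generator below yields every K-permutation, i.e. every ordered assignment of people to seats, regardless of how many tables contributed those seats.

```csharp
using System;
using System.Collections.Generic;

static class SeatPermutations
{
    // Yields every K-permutation of the seat numbers: position i of each
    // result array is the seat assigned to invited person i.
    public static IEnumerable<int[]> KPermutations(int[] seats, int k)
    {
        return Recurse(seats, k, 0, new int[k], new bool[seats.Length]);
    }

    static IEnumerable<int[]> Recurse(int[] seats, int k, int depth,
                                      int[] current, bool[] used)
    {
        if (depth == k)
        {
            yield return (int[])current.Clone(); // one complete arrangement
            yield break;
        }
        for (int i = 0; i < seats.Length; i++)
        {
            if (used[i]) continue;   // seat already taken in this branch
            used[i] = true;
            current[depth] = seats[i];
            foreach (var p in Recurse(seats, k, depth + 1, current, used))
                yield return p;
            used[i] = false;         // backtrack
        }
    }
}
```

For 2 people and one blue plus one pink table (5 seats), `SeatPermutations.KPermutations(new[] { 1, 2, 3, 4, 5 }, 2)` yields 5 × 4 = 20 arrangements; note that no nesting of foreach loops depends on the table count.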


InvalidConstraintException when clearing parent table in C#

Maybe I just don't see the forest for the trees, but currently I'm faced with the following situation that puzzles me:
I have a typed data set (not connected to a database), which contains two tables: Orders and Lines. Each entry in the Orders table represents an order and, surprisingly, each entry in the Lines table represents one line for an order. There can be multiple lines for each order.
There is a relation between the Lines and the Orders table connecting them via the order ID. The relation is configured to be a relation as well as a foreign-key constraint, with both the update rule and the delete rule being set to "Cascade".
I'd have expected that clearing the Orders table via set.Orders.Rows.Clear(); would also remove the respective entries from the Lines table - however I'm getting an InvalidConstraintException saying I can't do that because there are entries in the Lines table associated with entries in the Orders table.
I can of course work around this by first clearing the Lines table before clearing the Orders table, but I'm still puzzled as to why the Cascade-rule is not applied in this case.
Not sure why the Clear() shows the error it does with those constraints, but the intended functionality happens when you try to Remove a row. With that in mind, an extension method like this does the trick:
public static void RemoveAll(this DataTable table)
{
    for (int index = table.Rows.Count - 1; index >= 0; index--)
    {
        table.Rows.RemoveAt(index);
    }
}
Use case:
Transactions dataset = new Transactions();
dataset.Orders.AddOrderRow("1");
dataset.Orders.AddOrderRow("2");
dataset.Lines.AddLineRow(dataset.Orders[0], 1);
dataset.Lines.AddLineRow(dataset.Orders[0], 2);
dataset.Lines.AddLineRow(dataset.Orders[0], 3);
dataset.Lines.AddLineRow(dataset.Orders[1], 1);
dataset.Lines.AddLineRow(dataset.Orders[1], 2);
dataset.Lines.AddLineRow(dataset.Orders[1], 3);
Console.WriteLine($"Total Number of Lines before delete is {dataset.Lines.Count}"); // Prints 6
//dataset.Orders.Rows.Clear();
dataset.Orders.RemoveAll();
Console.WriteLine($"Total Number of Lines after delete is {dataset.Lines.Count}"); // Prints 0
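For anyone who wants to reproduce this without a typed data set, here is a minimal untyped sketch (the column and relation names are made up for illustration). Adding the DataRelation with createConstraints set to true also creates the ForeignKeyConstraint, whose delete rule defaults to Cascade, so removing the order rows one by one cascades to the lines, just as the extension method above does:

```csharp
using System;
using System.Data;

static class CascadeDemo
{
    // Builds an Orders/Lines pair shaped like the question's data set.
    public static DataSet Build()
    {
        var set = new DataSet();

        var orders = set.Tables.Add("Orders");
        orders.Columns.Add("ID", typeof(int));
        orders.PrimaryKey = new[] { orders.Columns["ID"] };

        var lines = set.Tables.Add("Lines");
        lines.Columns.Add("OrderID", typeof(int));
        lines.Columns.Add("No", typeof(int));

        // true => also create the ForeignKeyConstraint (rules default to Cascade)
        set.Relations.Add("Order_Lines", orders.Columns["ID"], lines.Columns["OrderID"], true);

        orders.Rows.Add(1);
        orders.Rows.Add(2);
        lines.Rows.Add(1, 1);
        lines.Rows.Add(1, 2);
        lines.Rows.Add(2, 1);
        return set;
    }

    public static void Main()
    {
        var set = Build();

        // Remove (not Clear) each order row; the delete rule cascades to Lines.
        var orders = set.Tables["Orders"];
        for (int i = orders.Rows.Count - 1; i >= 0; i--)
            orders.Rows.RemoveAt(i);

        Console.WriteLine(set.Tables["Lines"].Rows.Count);
    }
}
```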

C# Matching items in different lists

I have two different lists of objects, one of them an IQueryable set (rolled up into an array) and the other a List set. Objects in both sets share a field called ID; each of the objects in the second set will match an object in the first set, but not necessarily vice versa. I need to be able to handle both groups (matched and unmatched). The size of both collections is between 300 and 350 objects in this case (for reference, the XML generated for the objects in the second set is usually no more than 7k, so think maybe half to two-thirds of that size for the actual memory used by each object in each set).
The way I have it currently set up is a for-loop that iterates through an array representation of the IQueryable set, using a LINQ statement to query the List set for the matching record. This takes too much time; I'm running a Core i7 with 10GB of RAM and it's taking anywhere from 10 seconds to 2.5 minutes to match and compare the objects. Task Manager doesn't show any huge memory usage--a shade under 25MB. None of my system threads are being taxed either.
Is there a method or algorithm that would allow me to pair up the objects in each set one time and thus iterate through the pairs and unmatched objects at a faster pace? This set of objects is just a small subset of the 8000+ this program will have to chew through each day once it goes live...
EDIT: Here's the code I'm actually running...
for (int i = 0; i < draftRecords.Count(); i++)
{
    sRecord record = (from r in sRecords
                      where r.id == draftRecords.ToArray()[i].ID
                      select r).FirstOrDefault();
    if (record != null)
    {
        // Do stuff with the draftRecords element based on the rest of the content of the sRecord object
    }
}
You should use a method such as Enumerable.Join or Enumerable.GroupJoin to match items from the two collections. This will be far faster than doing nested for loops.
Since you want to match a collection of keys to an item in the second list which may or may not exist, GroupJoin is likely more appropriate. This would look something like:
var results = firstSet.GroupJoin(secondSet,
                                 f => f.Id,
                                 s => s.Id,
                                 (f, sset) => new { First = f, Seconds = sset });
foreach (var match in results)
{
    Console.WriteLine("Item {0} matches:", match.First);
    foreach (var second in match.Seconds)
        Console.WriteLine("  {0}", second); // each second item matching, one at a time
}
Your question is lacking in sample code/information, but I would personally look at methods like Join, Intersect, or Contains. If necessary, use Select to do a projection of the fields you want to match, or define a custom IEqualityComparer.
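As a sketch of the IEqualityComparer suggestion (the Item type here is hypothetical, standing in for the poster's objects): comparing by ID alone lets Intersect and Except split the first set into matched and unmatched groups in a single hash-based pass.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical type standing in for the poster's objects.
class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Compares two Items by Id alone, ignoring every other field.
class ItemIdComparer : IEqualityComparer<Item>
{
    public bool Equals(Item a, Item b) => a?.Id == b?.Id;
    public int GetHashCode(Item item) => item.Id.GetHashCode();
}

static class MatchDemo
{
    public static void Main()
    {
        var first = new List<Item> { new Item { Id = 1 }, new Item { Id = 2 }, new Item { Id = 3 } };
        var second = new List<Item> { new Item { Id = 2 }, new Item { Id = 3 }, new Item { Id = 4 } };

        var comparer = new ItemIdComparer();
        var matched = first.Intersect(second, comparer).ToList();  // Ids 2, 3
        var unmatched = first.Except(second, comparer).ToList();   // Id 1

        Console.WriteLine(string.Join(",", matched.Select(i => i.Id)));
        Console.WriteLine(string.Join(",", unmatched.Select(i => i.Id)));
    }
}
```

Both Intersect and Except hash the second sequence once, so this avoids the quadratic scan of the original for-loop-plus-LINQ-query approach.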

Is there any way to loop through my sql results and store certain name/value pairs elsewhere in C#?

I have a large result set coming from a pretty complex SQL query. Among the values are a string which represents a location (and will later help me determine the page location the value came from), an int which is a priority number calculated for each row based on other values from the row, and another string which contains a value I must remember for display later.
The problem is that the SQL query is so complex (it has UNIONs, JOINs, and complex calculations with aliases) that I can't logically fit anything else into it without breaking the way it works.
Suffice it to say that after the query is done and the calculations performed, I need something that aggregate functions might solve, but that IS NOT an option, as not all the columns come from aggregate functions.
I have been racking my brain for days over how to iterate through the results and store a pair of values in a list (or two separate lists tied together somehow), where one value is the sum of all the priority values for each location and the other is a distinct location value (i.e., as the results are looped through, no second list item is created for a location value that has already been used; however, the sum does still need to include the priority values from all rows whose locations ARE identical). Also, the results need to be ordered by priority in descending order (hence the problem with using two lists).
EXAMPLE:
EDIT: I forgot, the preserved value should be the value from the row with the highest priority from the sql query.
If I had the following results:
location priority value
--------------------------------------------------------------------------------
page1 1 some text!
page2 3 more text!
page2 4 even more text!
page3 3 text again
page3 1 text
page3 1 still more text!
page4 6 text
If I was able to do what I wanted I would be able to achieve something like this after iteration (and in this order):
location priority value
--------------------------------------------------------------------------------
page2 7 even more text!
page4 6 text
page3 5 text again
page1 1 some text!
I have done research after research after research but absolutely nothing really even gets close to solving this dilemma.
Is what I'm asking too tough for even the powerful C# language?
THINGS I HAVE CONSIDERED:
Looping through the sql results and checking each location for repeats, adding together all priority values as I go, and storing these two plus value in two or three separate lists.
Why I still need help
I can't use a foreach because the logic didn't pan out, and I can't use a for loop because I can't access by index whatever IEnumerable type it is that stores what's returned from Database.Open.Query() (this makes sense, of course). Also, I need to sort on priority, but I can't let one list get out of sync with the others.
Using LINQ to select and store what I need
Why I still need help
I don't know LINQ (at all!) mainly because I don't understand lambda expressions (no matter HOW MUCH I read up about it).
Using an instantiated class to store the name/value pairs
Why I still need help
Not only do I expect sorting on this sort of thing to be impossible, but while I do know how to use .cs files in my C#.net webpages in the WebMatrix environment, I have mainly only ever used static classes and would also need a little refresher course on constructors and how to set this up appropriately.
Somehow fitting this functionality into the already sizeable and complex SQL query
Why I still need help
While this is probably where I would ideally like this functionality to be, I stress again that this IS NOT AN OPTION. I have tried using aggregate functions, but only get an error saying how not all the other columns come from aggregate functions.
Making another query based on values from the first query's result set
Why I still need help
I can't select distinct results based on only one column (i.e., location) alone.
Assuming I could get the loop logic correct, storing the values in a 3 dimensional array
Why I still need help
I can't declare the array, because I do not know all of its dimensions before I need to use it.
Your post has amazed me in a number of ways, like saying you 'mostly use static classes' and 'expect instantiating a class/object to be impossible'.. really strange things to say. I can only respond with a quote from Charles Babbage:
I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
Anyways.. As you say you find lambdas hard, let's trace the problem in the classic 'manual' way.
Let's assume you have a list of ROWS that contains LOCATIONS and PRIORITIES.
List<DataRow> rows = .... ; // datatable, sqldatareader, whatever
You say you need:
list of unique locations
a "list" of locations paired up with summed up priorites
Let's start with the first objective.
To gather a list of unique 'values', a HashSet is just perfect:
HashSet<string> locations = new HashSet<string>();
foreach (var row in rows)
    locations.Add((string)row["LOCATION"]);
Well, and that's all. After that, the locations hashset will remember only the unique locations. Add does not result in duplicate elements; the HashSet checks and "uniquifies" all values that are put inside it. One small tricky thing: the hashset does not have the [index] operator. You'll have to enumerate the hashset to get the values:
foreach(string loc in locations)
{
Console.WriteLine(loc);
}
or convert/rewrite it to a list:
List<string> locList = new List<string>(locations);
Console.WriteLine(locList[2]); // of course, assuming there were at least three..
Let's get to the second objective.
To gather a list of values related to something behaving like a "logical key", a Dictionary<Key,Val> may be useful. It allows you to store/associate a "value" with some "key", e.g.:
Dictionary<string, double> dict = new Dictionary<string, double>();
dict["mamma"] = 123.45;
double d = dict["mamma"]; // d == 123.45
dict["mamma"] += 101;     // possible!
double e = dict["mamma"]; // e == 224.45
However, it happily throws an exception when you try to read from an unknown key:
Dictionary<string, double> dict = new Dictionary<string, double>();
dict["mamma"] = 123.45;
double d = dict["daddy"]; // throws KeyNotFoundException
dict["daddy"] += 101;     // would throw too! += tries to read the old/current value first
So one has to be very careful with keys it does not yet know. Fortunately, you can always ask the dictionary whether it already knows a key:
Dictionary<string, double> dict = new Dictionary<string, double>();
dict["mamma"] = 123.45;
bool knowIt = dict.ContainsKey("daddy"); // == false
So you can easily check-and-initialize-when-unknown:
Dictionary<string, double> dict = new Dictionary<string, double>();
bool knowIt = dict.ContainsKey("daddy"); // == false
if( !knowIt )
dict["daddy"] = 5;
dict["daddy"] += 101; // now 106
So.. let's try summing up the priorities location-wise:
Dictionary<string, double> prioSums = new Dictionary<string, double>();
foreach (var row in rows)
{
    string location = (string)row["LOCATION"];
    double priority = (double)row["PRIORITY"];
    if (!prioSums.ContainsKey(location))
        prioSums[location] = 0.0; // make sure the dictionary knows the location
    prioSums[location] += priority;
}
And, really, that's all. Now the prioSums will know all locations and all sums of priorities:
var sss = prioSums["NewYork"]; // 9123, assuming NewYork was some location
However, it would be quite useless to have to hardcode all locations. Hence, you can also ask the dictionary which keys it currently knows:
foreach(string key in prioSums.Keys)
Console.WriteLine(key);
and you can immediately use them:
foreach(string key in prioSums.Keys)
{
Console.WriteLine(key);
Console.WriteLine(prioSums[key]);
}
that should print all locations with all their sums.
You might have already noticed an interesting thing: the dictionary can tell you which keys it has remembered. Hence, you do not need the HashSet from the first objective. Simply by summing up the priorities inside the Dictionary, you get the unique list of locations for free: just ask the dict for its keys.
EDIT:
I noticed you had a few more requirements (like sort-descending or find-highest-priority-value), but I think I'll leave them for now. If you understand how I used a dictionary to collect the priorities, then you will easily build a similar Dictionary<string,string> to collect the highest-ranking value for a location. And the 'descending order' is done very easily if you just take the values out of the dictionary and sort them as, e.g., a List. So I'll skip that for now.. This text got far too tl;dr already, I think :)
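For completeness, here is a hedged sketch of the two steps the answer skips, using in-memory tuples instead of data rows (the row values mirror the question's example; the method name is my own):

```csharp
using System;
using System.Collections.Generic;

static class LocationSummary
{
    // Collapses (location, priority, value) rows into one entry per location:
    // priorities are summed, and the value from the highest-priority row wins.
    // The result is sorted by summed priority, descending.
    public static List<(string Loc, double Prio, string Val)> Summarize(
        IEnumerable<(string Loc, double Prio, string Val)> rows)
    {
        var prioSums = new Dictionary<string, double>();
        var bestPrio = new Dictionary<string, double>();
        var bestValue = new Dictionary<string, string>();

        foreach (var row in rows)
        {
            if (!prioSums.ContainsKey(row.Loc))
                prioSums[row.Loc] = 0.0;
            prioSums[row.Loc] += row.Prio;

            if (!bestPrio.ContainsKey(row.Loc) || row.Prio > bestPrio[row.Loc])
            {
                bestPrio[row.Loc] = row.Prio;   // new highest priority for this location
                bestValue[row.Loc] = row.Val;   // remember its value
            }
        }

        var result = new List<(string Loc, double Prio, string Val)>();
        foreach (var loc in prioSums.Keys)
            result.Add((loc, prioSums[loc], bestValue[loc]));
        result.Sort((a, b) => b.Prio.CompareTo(a.Prio)); // descending by summed priority
        return result;
    }
}
```

Fed the question's example rows, this produces page2 (7, "even more text!"), page4 (6), page3 (5, "text again"), page1 (1), in that order, matching the desired output.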
LINQ is really the tool to use for this kind of problems.
Suppose you have a variable pages which is an IEnumerable<Page>, where Page is a class with properties location, priority and value you could do
var query = from page in pages
            group page by page.location into grp
            select new
            {
                location = grp.Key,
                priority = grp.Sum(page => page.priority),
                value = grp.OrderByDescending(page => page.priority)
                           .First().value
            };
You say you don't understand LINQ, so let me try to begin explain this statement.
The rows are grouped by location, which results in 4 groups of pages, of which page.location is the key:
location priority value
--------------------------------------
page1 1 some text!
page2 3 more text!
4 even more text!
page3 1 text
1 still more text!
3 text again
page4 6 text
The select loops through these 4 groups and for each group it creates an anonymous type with 3 properties:
location: the key of the group
priority: the sum of priorities in one group
value: the first value in one group when its pages are sorted by priority in descending order.
The lambda expressions are a way to express which property should be used by a LINQ function like Sum. In short they say "transform page into page.priority": page => page.priority.
You want these new rows in descending order of priority, so finally you can do
result = query.OrderByDescending(x => x.priority).ToList();
The x is just an arbitrary placeholder representing one item in the collection at hand, query (likewise, in the query above, page could have been any word or character).

How to skip one (maybe more) columns of data in a list in linq

I'm reading input from two excel worksheets (using Linq-To-Excel) into two lists. One of the worksheets has an unwanted column of data (column name is known). The other columns in both worksheets however contain exactly the same type of data.
First part of my question is:
How can I exclude only that unwanted column of data in the select statement (without having to write out the column names for the other 25 or so columns)? I intend to do this for the below purposes:
Make both the lists of the same type
Merge the two lists
Possibly move this block of code to a call procedure, as eventually I'll have to read from many more worksheets
ExcelQueryFactory excel = new ExcelQueryFactory(FilePath);
List<STC> stResults = (from s
in excel.Worksheet<STC>("StaticResults")
select s)
.ToList();
List<DYN> dynResults = (from s
in excel.Worksheet<DYN>("DynamicResults")
select s) //how can I EXCLUDE just one of the columns here??
.ToList();
I'm new to c# and linq. So please pardon my ignorance :-)
The second part of my question is:
The above data that I'm extracting is a bit on the fat side (varying from 100,000 to 300,000 rows). I have to keep giving repeated linq queries on the lists above (in the range of 1000 to 4000 times) using a for loop. Is there a better way to implement this, as its taking a huge toll on the performance.
EDIT_1:
Regarding the input files:
StaticResults file has 28 Columns (STC Class has 28 properties)
DynamicResults file has 29 Columns (28 columns with the same properties/column names as static plus one additional property, which is not required) (DYN is a derived class from STC)
Use an anonymous type while selecting the result from LINQ.
ExcelQueryFactory excel = new ExcelQueryFactory(FilePath);
List<STC> stResults = (from s in excel.Worksheet<STC>("StaticResults")
                       select s)
                      .ToList();
var dynResults = (from s in excel.Worksheet<DYN>("DynamicResults")
                  select new { Property1 = s.xxx, Property2 = s.yyy }) // get the props based on the type of s
                 .ToList();
Accidentally figured out the solution to my first question. Probably nothing great about it, but nevertheless I thought I'd share it on here.
Got rid of the second class DYN
Made the second list of type STC
This way both lists extract only those properties/columns that are required (the properties declared in the class, that is). The extra column(s) not required are skipped, as I didn't define them as properties in the class. This is, I think, courtesy of linq-to-excel; I'd like to know more about that, if someone can give more insight into it.

Iterating through two identical data sources

I have data with the same schema in a pipe delimited text file and in a database table, including the primary key column.
I have to check if each row in the file is present in the table; if not, I generate an INSERT statement for that row.
The table has 30 columns, but here I've simplified for this example:
ID Name Address1 Address2 City State Zip
ID is the running identity column; so if a particular ID value from the file is found in the table, there should be no insert statement generated for that.
Here's my attempt, which doesn't feel correct:
foreach (var item in RecipientsInFile)
{
if (!RecipientsInDB.Any(u => u.ID == item.ID ))
{
Console.WriteLine(GetInsertSql(item));
}
}
Console.ReadLine();
EDIT: Sorry, I missed the asking the actual question; how to do this?
Thank you very much for all the help.
EDIT: The table has a million plus rows, while the file has 50K rows. This a one time thing, not a permanent project.
I would add all the RecipientsInDB IDs to a HashSet and then test whether the set contains each item's ID.
var recipientsInDBIds = new HashSet<int>(RecipientsInDB.Select(u => u.ID));
foreach (var item in RecipientsInFile)
{
    if (!recipientsInDBIds.Contains(item.ID))
    {
        Console.WriteLine(GetInsertSql(item));
    }
}
Console.ReadLine();
Try comparing the ID lists using .Except()
List<int> dbIDs = Recipients.Select(x => x.ID).ToList();
List<int> fileIDs = RecipientsFile.Select(x => x.ID).ToList();
List<int> toBeInserted = fileIDs.Except(dbIDs).ToList();
toBeInserted.ForEach(x => GetInsertSqlStatementForID(x));
For the pedantic and trollish among us in the comments, please remember the above code (like any source code you find on the interwebs) shouldn't be copy/pasted into your production code. Try this refactoring:
foreach (var item in RecipientsFile.Select(x => x.ID)
                                   .Except(DatabaseRecipients.Select(x => x.ID)))
{
    GetInsertSqlStatementForID(item);
}
Lots of ways of accomplishing this. Yours is one way.
Another would be to always generate SQL, but generate it in the following manner:
if not exists (select 1 from Recipients where ID = 1234)
    insert Recipients (...) values (...)
if not exists (select 1 from Recipients where ID = 1235)
    insert Recipients (...) values (...)
Another would be to retrieve the entire contents of the database into memory beforehand, loading the database IDs into a HashSet, then only checking that HashSet to see if it exists - would take a little longer to get started, but would be faster for each record.
Any of these three techniques would work - it all depends on how big your database table is, and how big your file is. If they're both relatively small (maybe 10,000 records or so), then any of these should work fine.
EDIT
And there's always option D: Insert all records from the file into a temporary table (could be a real table or a SQL temp table, doesn't really matter) in the database, then use SQL to join the two tables together and retrieve the differences (using not exists or in or whatever technique you want), and insert the missing records that way.
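A small sketch of the always-generate-guarded-SQL option above (the Recipient type and column list are placeholders, and real code should use parameterized commands rather than string concatenation):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

class Recipient
{
    public int ID { get; set; }
    public string Name { get; set; }
}

static class InsertBatchBuilder
{
    // Wraps each INSERT in an "if not exists" guard so the batch can be run
    // without knowing in advance which IDs are already in the table.
    public static string BuildGuardedInserts(IEnumerable<Recipient> fromFile)
    {
        var sql = new StringBuilder();
        foreach (var r in fromFile)
        {
            sql.AppendLine($"if not exists (select 1 from Recipients where ID = {r.ID})");
            sql.AppendLine($"    insert Recipients (ID, Name) values ({r.ID}, '{r.Name.Replace("'", "''")}')");
        }
        return sql.ToString();
    }
}
```

BuildGuardedInserts(RecipientsInFile) yields one guarded insert pair per file row, which can then be executed as a single batch against the database.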
