I would like to have a percent change for 'Investigations' and 'Breaches' for each quarter. I'm currently grouping by quarter and getting counts, but I cannot figure out how to add the percent change.
This is what I want an IEnumerable/List of:
public class StatusCountDto
{
    public string Quarter { get; set; }
    public int Investigations { get; set; }
    public double InvestigationsChange { get; set; }
    public int Breaches { get; set; }
    public double BreachesChange { get; set; }
}
Currently I am grouping by the Quarter and getting counts, but I cannot figure out how to get the percent change of the Investigation and Breach counts from the previous quarter.
The data is already sorted by Quarter. If the previous value doesn't exist (first index) then it should be 0.
This is what I have so far:
Metrics.GroupBy(m => m.Quarter)
    .Select((g, index) => new StatusCountDto
    {
        Quarter = g.Key,
        Investigations = g.Count(),
        Breaches = g.Where(a => a.Breach == "Yes").Count()
    })
    .ToList();
Is there a way to use the index to calculate the percent change?
Using an extension method based on the APL scan operator, which is like Aggregate but returns the intermediate results, you can run through the data and refer back to previous counts.
// TRes combineFn(TRes PrevResult, T CurItem)
// First PrevResult is TRes seedFn(T FirstItem)
// FirstItem = items.First()
// First CurItem = items.Skip(1).First()
// output is seedFn(items.First()), combineFn(PrevResult, CurItem), ...
public static IEnumerable<TRes> Scan<T, TRes>(this IEnumerable<T> items, Func<T, TRes> seedFn, Func<TRes, T, TRes> combineFn) {
    using (var itemsEnum = items.GetEnumerator()) {
        if (itemsEnum.MoveNext()) {
            var prev = seedFn(itemsEnum.Current);
            for (;;) {
                yield return prev;
                if (!itemsEnum.MoveNext())
                    yield break;
                prev = combineFn(prev, itemsEnum.Current);
            }
        }
    }
}
Given this variation of Scan, which uses a lambda to seed the result stream, you can compute the whole stream:
var ans = Metrics
    .GroupBy(m => m.Quarter)
    .Select(g => new {
        Quarter = g.Key,
        Investigations = g.Count(),
        Breaches = g.Count(a => a.Breach == "Yes")
    })
    .Scan(f => new StatusCountDto { // first result (changes default to 0)
        Quarter = f.Quarter,
        Investigations = f.Investigations,
        Breaches = f.Breaches
    },
    (prev, cur) => new StatusCountDto { // subsequent results
        Quarter = cur.Quarter,
        Investigations = cur.Investigations,
        InvestigationsChange = 100.0 * (cur.Investigations - prev.Investigations) / prev.Investigations,
        Breaches = cur.Breaches,
        // note: if prev.Breaches is 0 this yields Infinity; guard against that if it can occur
        BreachesChange = 100.0 * (cur.Breaches - prev.Breaches) / prev.Breaches
    })
    .ToList();
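For example, with hypothetical counts (not from the question) of 10 investigations / 2 breaches in Q1 and 12 / 3 in Q2, the result would be:
// Q1: Investigations=10, InvestigationsChange=0,  Breaches=2, BreachesChange=0
// Q2: Investigations=12, InvestigationsChange=20, Breaches=3, BreachesChange=50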
I have these two Lists: Scheduled with Priority 1 and Registered with Priority 0.
Result should be:
I want to merge the two Lists into one, checking priority. Priority rows are entered without splitting, but an unprioritized row can be split.
Assuming Scheduled contains all priority 1 periods, that Registered contains all priority 0 periods (and these are the unprioritized ones), that your periods are in a class named ClassPeriod, and that Priority is a Nullable to represent the blank column in your result diagram (your question should have answered all of these clearly), you can define helper methods in ClassPeriod:
public class ClassPeriod {
    public double From;
    public double To;
    public int? Priority;
    public bool ContainsFrom(ClassPeriod aCP) => From <= aCP.From && aCP.From < To;
    public bool ContainsTo(ClassPeriod aCP) => From < aCP.To && aCP.To <= To;
}
Now you can compute your answer as:
var ans = Scheduled
    .Select(s => new ClassPeriod { From = s.From, To = s.To })
    .Concat(
        Registered.SelectMany(r => {
            var ans = new List<ClassPeriod>();
            while (r.From < r.To) {
                var pcp = new ClassPeriod {
                    From = Scheduled.Where(s => s.ContainsFrom(r)).Select(s => s.To).DefaultIfEmpty(r.From).Max(),
                    To = Scheduled.Where(s => s.ContainsTo(r)).Select(s => s.From).DefaultIfEmpty(r.To).Min()
                };
                var nexts = Scheduled.Where(s => pcp.ContainsFrom(s));
                if (nexts.Any()) {
                    var nextTo = nexts.Min(s => s.From);
                    ans.Add(new ClassPeriod { From = pcp.From, To = nextTo });
                    r.From = nextTo;
                }
                else {
                    if (pcp.From < pcp.To)
                        ans.Add(pcp);
                    break;
                }
            }
            return ans;
        })
    )
    .OrderBy(cp => cp.From).ThenBy(cp => cp.To);
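To see the splitting behavior, here is a hypothetical check (the values are invented, since the original data diagrams were not shown):
var Scheduled = new List<ClassPeriod> { new ClassPeriod { From = 9, To = 10, Priority = 1 } };
var Registered = new List<ClassPeriod> { new ClassPeriod { From = 8, To = 11, Priority = 0 } };
// ans then contains (8,9), (9,10), (10,11): the registered period is split around the scheduled one.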
I am trying to figure out the best way to calculate a running sum over a partition with a self-joined collection using LINQ.
The query below is a somewhat simple example of what I am after. The output is the RowNumber, the RowType and the sum of all preceding RowValues within the current row's RowType.
DECLARE @T TABLE (RowNumber INT, RowType INT, RowValue INT)
INSERT @T VALUES (1,1,1),(2,1,1),(3,1,1),(4,1,1),(5,1,1),(6,2,1),(7,2,1),(8,2,1),(9,2,1),(10,2,1)
;WITH Data AS (SELECT RowNumber, RowType, RowValue FROM @T)
SELECT
    This.RowNumber,
    This.RowType,
    RunningValue = COALESCE(This.RowValue + SUM(Prior.RowValue), This.RowValue)
FROM
    Data This
    LEFT OUTER JOIN Data Prior ON Prior.RowNumber < This.RowNumber AND Prior.RowType = This.RowType
GROUP BY
    This.RowNumber,
    This.RowType,
    This.RowValue
/* OR
SELECT
    This.RowNumber,
    This.RowType,
    RunningValue = SUM(RowValue) OVER (PARTITION BY RowType ORDER BY RowNumber)
FROM
    Data This
*/
Now, my non-working attempt:
var joinedWithPreviousSums = allRows.Join(
    allRows,
    previousRows => new { previousRows.RowNumber, previousRows.RowType, previousRows.RowValue },
    row => new { row.RowNumber, row.RowType, row.RowValue },
    (previousRows, row) => new { row.RowNumber, row.RowType, row.RowValue })
    .Where(previousRows.RowType == row.RowType && previousRows.RowNumber < row.RowNumber)
    .Select(row.RowNumber, row.RowType, RunningValue = Sum(previousRows.Value) + row.RowValue)).ToList()
Of course, the last two lines above are garbage and attempt to exemplify my desired projection while hinting at my lack of knowledge on performant complex LINQ projections.
I have read that some variation of the statement below could work and may be workable; however, is there a way to achieve similar results without yielding?
int s = 0;
var subgroup = people.OrderBy(x => x.Amount)
    .TakeWhile(x => (s += x.Amount) < 1000)
    .ToList();
EDIT: I have been able to get the snippet below to work; however, I can't seem to partition or project over RowType.
namespace ConsoleApplication1
{
    class Program
    {
        delegate string CreateGroupingDelegate(int i);

        static void Main(string[] args)
        {
            List<TestClass> list = new List<TestClass>()
            {
                new TestClass(1, 1, 1),
                new TestClass(2, 2, 5),
                new TestClass(3, 1, 1),
                new TestClass(4, 2, 5),
                new TestClass(5, 1, 1),
                new TestClass(6, 2, 5)
            };
            int running_total = 0;
            var result_set = list.Select(x => new { x.RowNumber, x.RowType, running_total = (running_total = running_total + x.RowValue) }).ToList();
            foreach (var v in result_set)
            {
                Console.WriteLine("list element: {0}, total so far: {1}",
                    v.RowNumber,
                    v.running_total);
            }
            Console.ReadLine();
        }
    }

    public class TestClass
    {
        public TestClass(int rowNumber, int rowType, int rowValue)
        {
            RowNumber = rowNumber;
            RowType = rowType;
            RowValue = rowValue;
        }
        public int RowNumber { get; set; }
        public int RowType { get; set; }
        public int RowValue { get; set; }
    }
}
Your answer can be simplified greatly, but even then it scales poorly, since it must run the Where over the whole list for each row, i.e. O(list.Count^2).
Here is the simpler version, which preserves the original order:
var result = list.Select(item => new {
    RowType = item.RowType,
    RowValue = list.Where(prior => prior.RowNumber <= item.RowNumber && prior.RowType == item.RowType).Sum(prior => prior.RowValue)
});
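For the six TestClass rows in the question, a quick hand check gives RunningValues of 1, 5, 2, 10, 3, 15 for rows 1 through 6, each row summing only its own RowType partition.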
You can go through the list once if you are willing to sort. (If you know the order is correct, or can use a simpler sort, you can remove or replace the OrderBy/ThenBy.)
var ans = list.OrderBy(x => x.RowType)
    .ThenBy(x => x.RowNumber)
    .Scan(first => new { first.RowType, first.RowValue },
          (res, cur) => res.RowType == cur.RowType ? new { res.RowType, RowValue = res.RowValue + cur.RowValue }
                                                   : new { cur.RowType, cur.RowValue }
    );
This answer uses an extension method that is like Aggregate, but returns the intermediate results, based on the APL scan operator:
// TRes seedFn(T FirstValue)
// TRes combineFn(TRes PrevResult, T CurValue)
public static IEnumerable<TRes> Scan<T, TRes>(this IEnumerable<T> src, Func<T, TRes> seedFn, Func<TRes, T, TRes> combineFn) {
    using (var srce = src.GetEnumerator()) {
        if (srce.MoveNext()) {
            var prev = seedFn(srce.Current);
            while (srce.MoveNext()) {
                yield return prev;
                prev = combineFn(prev, srce.Current);
            }
            yield return prev;
        }
    }
}
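Applied to the six TestClass rows from the question, the OrderBy/ThenBy puts rows 1, 3, 5 (RowType 1) before rows 2, 4, 6 (RowType 2), and the scan then yields the (RowType, RowValue) pairs (1,1), (1,2), (1,3), (2,5), (2,10), (2,15).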
My eyes glazed over after seeing this. The answer to my long-winded question, after 6 hours of drudgery, seems to be as simple as this. Thanks to @NetMage for pointing out the SelectMany that I was missing.
var result = list.SelectMany(item => list.Where(x => x.RowNumber <= item.RowNumber && x.RowType == item.RowType)
    .GroupBy(g => g.RowType)
    .Select(p => new
    {
        RowType = p.Max(s => s.RowType),
        RowValue = p.Sum(s => s.RowValue)
    }));
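(Why this works: for each item, the inner Where keeps only rows of that item's RowType, so the GroupBy produces exactly one group, and the SelectMany flattens exactly one running-sum element per item.)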
At the moment I'm retrieving data from the DB through a method that returns an IQueryable<T1>, then filtering, sorting and paging it (all of this effectively on the DB), before returning the result to the UI for display in a paged table.
I need to integrate results from another DB, and paging seems to be the main issue.
models are similar but not identical (same fields, different names, will need to map to a generic domain model before returning);
joining at the DB level is not possible;
there are ~1000 records at the moment between both DBs (added during the past 18 months), and likely to grow at mostly the same (slow) pace;
results always need to be sorted by 1-2 fields (date-wise).
I'm currently torn between these 2 solutions:
Retrieve all data from both sources, merge, sort and then cache them; then simply filter and page on said cache when receiving requests - but I need to invalidate the cache when the collection is modified (which I can);
Filter data on each source (again, at the DB level), then retrieve, merge, sort & page them, before returning.
I'm looking for a decent algorithm performance-wise. The ideal solution would probably be a combination of the two (caching + filtering at the DB level), but I haven't wrapped my head around that at the moment.
I think you can use the following algorithm. Suppose your page size is 10, then for page 0:
Get 10 results from database A, filtered and sorted at db level.
Get 10 results from database B, filtered and sorted at db level (in parallel with the above query)
Combine those two results to get 10 records in the correct sort order. So you have 20 records sorted, but take only the first 10 of them and display them in the UI.
Then for page 1:
Note how many items from databases A and B you used to display the previous page in the UI. For example, you used 2 items from database A and 8 items from database B.
Get 10 results from database A, filtered and sorted, but starting at position 2 (skip 2), because those two have already been shown in the UI.
Get 10 results from database B, filtered and sorted, but starting at position 8 (skip 8).
Merge the same way as above to get 10 records from the 20. Suppose now you used 5 items from A and 5 items from B. In total you have now shown 7 items from A and 13 items from B. Use those numbers for the next step.
This will not (easily) allow skipping pages, but as I understand it, that is not a requirement.
The performance should be effectively the same as when you are querying a single database, because the queries to A and B can be done in parallel.
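A minimal sketch of this bookkeeping, assuming each source can return a filtered, sorted page via a (skip, take) query; the Record, PageState and fetch delegate names are hypothetical, not from the original post:
using System;
using System.Collections.Generic;

public class Record
{
    public DateTime Date { get; set; }   // the date-wise sort field
}

public class PageState
{
    public int SkipA { get; set; }   // items already shown from source A
    public int SkipB { get; set; }   // items already shown from source B
}

public static class MergePager
{
    public static List<Record> GetNextPage(
        Func<int, int, List<Record>> fetchA,   // (skip, take) -> filtered, sorted page from DB A
        Func<int, int, List<Record>> fetchB,   // (skip, take) -> filtered, sorted page from DB B
        PageState state, int pageSize)
    {
        // The two fetches could run in parallel; sequential here for brevity.
        var a = fetchA(state.SkipA, pageSize);
        var b = fetchB(state.SkipB, pageSize);

        // Two-way merge: repeatedly take the smaller head until the page is full.
        var page = new List<Record>(pageSize);
        int ia = 0, ib = 0;
        while (page.Count < pageSize && (ia < a.Count || ib < b.Count))
        {
            bool takeA = ib >= b.Count || (ia < a.Count && a[ia].Date <= b[ib].Date);
            page.Add(takeA ? a[ia++] : b[ib++]);
        }

        // Remember how many items each source contributed, for the next page request.
        state.SkipA += ia;
        state.SkipB += ib;
        return page;
    }
}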
I've created something here; I will come back with explanations if needed.
I'm not sure my algorithm works correctly for all edge cases; it covers all the cases I had in mind, but you never know. I'll leave the code here for your pleasure; leave a comment if you need me to explain what is done there.
Also perform multiple tests with lists of items that have large gaps between the values.
using System;
using System.Collections.Generic;
using System.Linq;

namespace ConsoleApplication1
{
    class Program
    {
        //each time these objects are accessed, consider it a database call
        private static IQueryable<model1> dbsetModel_1;
        private static IQueryable<model2> dbsetModel_2;

        private static void InitDBSets()
        {
            var rnd = new Random();
            List<model1> dbsetModel1 = new List<model1>();
            List<model2> dbsetModel2 = new List<model2>();
            for (int i = 1; i < 300; i++)
            {
                if (i % 2 == 0)
                {
                    dbsetModel1.Add(new model1() { Id = i, OrderNumber = rnd.Next(1, 10), Name = "Test " + i.ToString() });
                }
                else
                {
                    dbsetModel2.Add(new model2() { Id2 = i, OrderNumber2 = rnd.Next(1, 10), Name2 = "Test " + i.ToString() });
                }
            }
            dbsetModel_1 = dbsetModel1.AsQueryable();
            dbsetModel_2 = dbsetModel2.AsQueryable();
        }
        public static void Main()
        {
            //generate sort of db data
            InitDBSets();
            //test
            var result2 = GetPage(new PagingFilter() { Page = 5, Limit = 10 });
            var result3 = GetPage(new PagingFilter() { Page = 6, Limit = 10 });
            var result5 = GetPage(new PagingFilter() { Page = 7, Limit = 10 });
            var result6 = GetPage(new PagingFilter() { Page = 8, Limit = 10 });
            var result7 = GetPage(new PagingFilter() { Page = 4, Limit = 20 });
            var result8 = GetPage(new PagingFilter() { Page = 200, Limit = 10 });
        }
        private static PagedList<Item> GetPage(PagingFilter filter)
        {
            int pos = 0;
            //load only the start margins of the page intervals from both databases
            //this part needs to be turned into a stored procedure on db one (skip/take) returning the interval start value for each frame
            var framesBordersModel1 = new List<Item>();
            dbsetModel_1.OrderBy(x => x.Id).ThenBy(z => z.OrderNumber).ToList().ForEach(i => {
                pos++;
                if (pos - 1 == 0)
                {
                    framesBordersModel1.Add(new Item() { criteria1 = i.Id, criteria2 = i.OrderNumber, model = i });
                }
                else if ((pos - 1) % filter.Limit == 0)
                {
                    framesBordersModel1.Add(new Item() { criteria1 = i.Id, criteria2 = i.OrderNumber, model = i });
                }
            });
            pos = 0;
            //this part needs to be turned into a stored procedure on db two (skip/take) returning the interval start value for each frame
            var framesBordersModel2 = new List<Item>();
            dbsetModel_2.OrderBy(x => x.Id2).ThenBy(z => z.OrderNumber2).ToList().ForEach(i => {
                pos++;
                if (pos - 1 == 0)
                {
                    framesBordersModel2.Add(new Item() { criteria1 = i.Id2, criteria2 = i.OrderNumber2, model = i });
                }
                else if ((pos - 1) % filter.Limit == 0)
                {
                    framesBordersModel2.Add(new Item() { criteria1 = i.Id2, criteria2 = i.OrderNumber2, model = i });
                }
            });
            //decide where the position of your cursor is based on start margins
            //int mainCursor = 0;
            int cursor1 = 0;
            int cursor2 = 0;
            //filter pages start from 1; filter.Page cannot be 0 (if you do have a page 0, adjust the logic slightly)
            if (framesBordersModel1.Count + framesBordersModel2.Count < filter.Page) throw new Exception("Out of range");
            while (cursor1 + cursor2 < filter.Page - 1)
            {
                if (framesBordersModel1[cursor1].criteria1 < framesBordersModel2[cursor2].criteria1)
                {
                    cursor1++;
                }
                else if (framesBordersModel1[cursor1].criteria1 > framesBordersModel2[cursor2].criteria1)
                {
                    cursor2++;
                }
                //you shouldn't get here since the main key shouldn't be duplicated, anyhow
                else
                {
                    if (framesBordersModel1[cursor1].criteria2 < framesBordersModel2[cursor2].criteria2)
                    {
                        cursor1++;
                    }
                    else
                    {
                        cursor2++;
                    }
                }
                //mainCursor++;
            }
            //magic starts
            //partly skippable
            int skipEndResult = 0;
            List<Item> dbFramesMerged = new List<Item>();
            if ((cursor1 + cursor2) % 2 == 0)
            {
                dbFramesMerged.AddRange(
                    dbsetModel_1.OrderBy(x => x.Id)
                        .ThenBy(z => z.OrderNumber)
                        .Skip(cursor1 * filter.Limit)
                        .Take(filter.Limit)
                        .Select(x => new Item() { criteria1 = x.Id, criteria2 = x.OrderNumber, model = x })
                        .ToList()); //consider as db call EF or Stored Procedure
                dbFramesMerged.AddRange(
                    dbsetModel_2.OrderBy(x => x.Id2)
                        .ThenBy(z => z.OrderNumber2)
                        .Skip(cursor2 * filter.Limit)
                        .Take(filter.Limit)
                        .Select(x => new Item() { criteria1 = x.Id2, criteria2 = x.OrderNumber2, model = x })
                        .ToList()); //consider as db call EF or Stored Procedure
            }
            else
            {
                skipEndResult = filter.Limit;
                if (cursor1 > cursor2)
                {
                    cursor1--;
                }
                else
                {
                    cursor2--;
                }
                dbFramesMerged.AddRange(
                    dbsetModel_1.OrderBy(x => x.Id)
                        .ThenBy(z => z.OrderNumber)
                        .Skip(cursor1 * filter.Limit)
                        .Take(filter.Limit)
                        .Select(x => new Item() { criteria1 = x.Id, criteria2 = x.OrderNumber, model = x })
                        .ToList()); //consider as db call EF or Stored Procedure
                dbFramesMerged.AddRange(
                    dbsetModel_2.OrderBy(x => x.Id2)
                        .ThenBy(z => z.OrderNumber2)
                        .Skip(cursor2 * filter.Limit)
                        .Take(filter.Limit)
                        .Select(x => new Item() { criteria1 = x.Id2, criteria2 = x.OrderNumber2, model = x })
                        .ToList());
            }
            IQueryable<Item> qItems = dbFramesMerged.AsQueryable();
            PagedList<Item> result = new PagedList<Item>();
            result.AddRange(qItems.OrderBy(x => x.criteria1).ThenBy(z => z.criteria2).Skip(skipEndResult).Take(filter.Limit).ToList());
            //here again you need db calls to get the total count
            result.Total = dbsetModel_1.Count() + dbsetModel_2.Count();
            result.Limit = filter.Limit;
            result.Page = filter.Page;
            return result;
        }
    }

    public class PagingFilter
    {
        public int Limit { get; set; }
        public int Page { get; set; }
    }

    public class PagedList<T> : List<T>
    {
        public int Total { get; set; }
        public int? Page { get; set; }
        public int? Limit { get; set; }
    }

    public class Item : Criteria
    {
        public object model { get; set; }
    }

    public class Criteria
    {
        public int criteria1 { get; set; }
        public int criteria2 { get; set; }
        //more criterias if you need to order
    }

    public class model1
    {
        public int Id { get; set; }
        public int OrderNumber { get; set; }
        public string Name { get; set; }
    }

    public class model2
    {
        public int Id2 { get; set; }
        public int OrderNumber2 { get; set; }
        public string Name2 { get; set; }
    }
}
I am trying to search a List for combinations that add up to a particular value (I've tried a knapsack approach, but it checked too much for my particular scenario and took too long). The code works and finds matches for most of the datasets I throw at it, but fails for some others.
None of my Lists has more than 100 entries.
I had assumed that because my recursive method is wrapped in a sensible if statement, a stack overflow would not occur, but it blows up on this line: if (x.Sum(y => y.Amount) == limit)
Here is my code:
private static void Check(List<double> trackedListToCopy, double limit)
{
    List<CheckerObject> trackedList = new List<CheckerObject>();
    int idCount = 1;
    trackedListToCopy.ForEach(x => {
        trackedList.Add(new CheckerObject() { Amount = x, Id = idCount });
        idCount++;
    });
    List<List<CheckerObject>> possibilitiesToCheck = new List<List<CheckerObject>>();
    if (trackedList.FirstOrDefault(x => x.Amount == limit) != null)
    {
        Console.WriteLine("Exact match found");
        return;
    }
    trackedList.ForEach(item =>
    {
        var listToCheck = new List<CheckerObject>();
        if (trackedList.First().Id == item.Id)
            return;
        listToCheck.Add(trackedList.First());
        listToCheck.Add(item);
        possibilitiesToCheck.Add(listToCheck);
        if (possibilitiesToCheck.Any(x => x.Sum(y => y.Amount) == limit))
        {
            List<CheckerObject> match = possibilitiesToCheck.First(x => x.Sum(y => y.Amount) == limit);
            Console.WriteLine("Match found with 2 entries" + match);
            return;
        }
    });
    var baseList = new List<List<CheckerObject>>(possibilitiesToCheck);
    SubsequentCheck(baseList, trackedList, limit);
}

private static void SubsequentCheck(List<List<CheckerObject>> baseList, List<CheckerObject> trackedList, double limit)
{
    //List<List<CheckerObject>> copy = new List<List<CheckerObject>>(baseList);
    trackedList.ForEach(item => {
        baseList.ForEach(x => {
            if (!x.Contains(item))
            {
                x.Add(item);
                if (x.Sum(y => y.Amount) == limit)
                {
                    string show = "";
                    x.ForEach(n => { show += n.Amount + ","; });
                    Console.WriteLine(String.Format("Match Found from {0}", show));
                }
                if (x.Sum(y => y.Amount) < limit)
                {
                    SubsequentCheck(new List<List<CheckerObject>>(baseList), trackedList, limit);
                }
            }
        });
    });
}

public class CheckerObject
{
    public int Id { get; set; }
    public double Amount { get; set; }
}
Any and all help is as always greatly appreciated.
Change
private static void SubsequentCheck(List<List<CheckerObject>> baseList, List<CheckerObject> trackedList, double limit)
to
private static void SubsequentCheck(List<List<CheckerObject>> baseList, List<CheckerObject> trackedList, double limit, int recursiveDepth = 0)
and
if (x.Sum(y => y.Amount) < limit)
{
    SubsequentCheck(new List<List<CheckerObject>>(baseList), trackedList, limit);
}
to
if (x.Sum(y => y.Amount) < limit)
{
    SubsequentCheck(new List<List<CheckerObject>>(baseList), trackedList, limit, recursiveDepth + 1);
}
Then log everything, or put a conditional breakpoint on recursiveDepth greater than some sensible limit.
Also change the .ForEach calls to foreach loops so that the loops can be exited properly (a return statement inside a .ForEach lambda does not end the loop).
Also change the Lists to HashSets if you call .Contains this frequently and performance is an issue.
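Putting those suggestions together, a sketch of the guarded method might look like this (maxDepth is an arbitrary illustrative value, not from the original answer):
private static void SubsequentCheck(List<List<CheckerObject>> baseList,
    List<CheckerObject> trackedList, double limit, int recursiveDepth = 0)
{
    const int maxDepth = 100;                  // arbitrary sensible limit
    if (recursiveDepth > maxDepth) return;     // or log / set a breakpoint here
    foreach (var item in trackedList)          // foreach instead of .ForEach, so we could break out early
    {
        foreach (var x in baseList)
        {
            if (x.Contains(item)) continue;    // a HashSet<CheckerObject> would make this check O(1)
            x.Add(item);
            if (x.Sum(y => y.Amount) == limit)
                Console.WriteLine("Match Found from " + string.Join(",", x.Select(n => n.Amount)));
            else if (x.Sum(y => y.Amount) < limit)
                SubsequentCheck(new List<List<CheckerObject>>(baseList), trackedList, limit, recursiveDepth + 1);
        }
    }
}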
I have two huge lists of objects: a List<Forecast> with all the forecasts from different Resources, and a List<Capacity> with the capacities of these Resources.
A Forecast also contains booleans indicating whether this Resource is over or under capacity for the sum of all its forecasts.
public class Forecast
{
    public int ResourceId { get; set; }
    public double? ForecastJan { get; set; }
    // and ForecastFeb, ForecastMarch, ForecastApr, ForecastMay, etc.
    public bool IsOverForecastedJan { get; set; }
    // and IsOverForecastedFeb, IsOverForecastedMarch, IsOverForecastedApr, etc.
}

public class Capacity
{
    public int ResourceId { get; set; }
    public double? CapacityJan { get; set; }
    // and CapacityFeb, CapacityMar, CapacityApr, CapacityMay, etc.
}
I have to set the IsOverForecastedXXX properties, so for each month I have to know whether the sum of forecasts for each resource is higher than the capacity of that specific resource.
Here is my code:
foreach (Forecast f in forecastList)
{
    if (capacityList.Where(c => c.ResourceId == f.ResourceId)
                    .Select(c => c.CapacityJan)
                    .First()
        < forecastList.Where(x => x.ResourceId == f.ResourceId)
                      .Sum(x => x.ForecastJan))
    {
        f.IsOverForecastedJan = true;
    }
    //Same for each month...
}
My code works, but the performance is really bad when the lists get big (thousands of elements).
Do you know how I can improve this algorithm? How can I compare the sum of forecasts for each resource with the associated capacity?
You can use First or FirstOrDefault to get the capacities for the current resource, then compare them. I would use ToLookup, which is similar to a Dictionary, to get all forecasts for all resources.
ILookup<int, Forecast> forecastMonthGroups = forecastList
    .ToLookup(fc => fc.ResourceId);

foreach (Forecast f in forecastList)
{
    double? janSum = forecastMonthGroups[f.ResourceId].Sum(fc => fc.ForecastJan);
    double? febSum = forecastMonthGroups[f.ResourceId].Sum(fc => fc.ForecastFeb);
    var capacities = capacityList.First(c => c.ResourceId == f.ResourceId);
    bool overJan = capacities.CapacityJan < janSum;
    bool overFeb = capacities.CapacityFeb < febSum;
    // ...
    f.IsOverForecastedJan = overJan;
    f.IsOverForecastedFeb = overFeb;
    // ...
}
Since it seems that there is only one Capacity per ResourceId, I would use a Dictionary to map ResourceId to Capacity; this improves performance even more:
ILookup<int, Forecast> forecastMonthGroups = forecastList
    .ToLookup(fc => fc.ResourceId);
Dictionary<int, Capacity> capacityResources = capacityList
    .ToDictionary(c => c.ResourceId);

foreach (Forecast f in forecastList)
{
    double? janSum = forecastMonthGroups[f.ResourceId].Sum(fc => fc.ForecastJan);
    double? febSum = forecastMonthGroups[f.ResourceId].Sum(fc => fc.ForecastFeb);
    bool overJan = capacityResources[f.ResourceId].CapacityJan < janSum;
    bool overFeb = capacityResources[f.ResourceId].CapacityFeb < febSum;
    // ...
    f.IsOverForecastedJan = overJan;
    f.IsOverForecastedFeb = overFeb;
    // ...
}
I would try selecting out your capacities and forecast sums for each resource before entering the loop; this way you are not iterating over each list every time you go round the loop.
Something like this:
var capacities = capacityList.GroupBy(c => c.ResourceId).ToDictionary(c => c.First().ResourceId, c => c.First().CapacityJan);
var forecasts = forecastList.GroupBy(x => x.ResourceId).ToDictionary(x => x.First().ResourceId, x => x.Sum(f => f.ForecastJan));

foreach (Forecast f in forecastList)
{
    if (capacities[f.ResourceId] < forecasts[f.ResourceId])
    {
        f.IsOverForecastedJan = true;
    }
}
There are lots of things you can do to speed this up. First, make a single pass over forecastList and sum the forecast demand for each month:

var demandForecasts = new Dictionary<int, double?[]>();
foreach (var forecast in forecastList)
{
    var rid = forecast.ResourceId;
    if (!demandForecasts.ContainsKey(rid))
    {
        // initialize to zeros rather than nulls, since null + x is null for nullable doubles
        demandForecasts[rid] = Enumerable.Repeat((double?)0, 12).ToArray();
    }
    var demandForecast = demandForecasts[rid];
    demandForecast[0] += forecast.ForecastJan ?? 0;
    // etc
    demandForecast[11] += forecast.ForecastDec ?? 0;
}
Do the same for capacities, resulting in a capacities dictionary. Then, one more loop over forecastList to set the "over forecasted" flags:
foreach (var forecast in forecastList)
{
    var rid = forecast.ResourceId;
    forecast.IsOverForecastedJan = capacities[rid][0] < demandForecasts[rid][0];
    // ...
    forecast.IsOverForecastedDec = capacities[rid][11] < demandForecasts[rid][11];
}
As is obvious from the twelve-fold repetition implicit in this code, modelling capacities etc. as separate properties for each month is probably not the best way of doing things; using some kind of indexed collection would allow the repetition to be eliminated.
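For illustration, a sketch of that idea (these row types are hypothetical restructurings, not the original classes):
public class ForecastRow
{
    public int ResourceId { get; set; }
    public double?[] Monthly { get; } = new double?[12];   // index 0 = Jan ... 11 = Dec
    public bool[] IsOverForecasted { get; } = new bool[12];
}

public class CapacityRow
{
    public int ResourceId { get; set; }
    public double?[] Monthly { get; } = new double?[12];
}

// The twelve near-identical statements then collapse to something like
// (assuming capacityByResource and demandByResource dictionaries built as above):
// for (int m = 0; m < 12; m++)
//     f.IsOverForecasted[m] = capacityByResource[f.ResourceId].Monthly[m] < demandByResource[f.ResourceId][m];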