Chess Quiescence Search is too extensive - C#

I have been writing a simple chess engine in C# over the last month and have made some nice progress on it. It uses a simple alpha-beta algorithm.
In order to correct the horizon effect, I tried to implement quiescence search (and failed several times before it worked). The strength of the engine seems to have improved quite a bit from that, but it is terribly slow!
Before, I could search to a depth of 6 plies in about 160 seconds (somewhere in a midgame position); with the quiescence search, it takes the computer about 80 seconds to produce a move at search depth 3!
The brute-force node counter is at about 20,000 nodes at depth 3, while the quiescence node counter is up to 20 million!
Since this is my first chess engine, I don't really know whether those numbers are normal or whether I made a mistake in my quiescence algorithm. I would appreciate it if someone more experienced could tell me what the usual ratio of brute-force nodes to quiescence nodes is.
By the way, here is the code, just to have a look at (this method is called by the brute-force tree whenever the search depth reaches 0):
public static int QuiescentValue(chessBoard Board, int Alpha, int Beta)
{
    QuiescentNodes++;
    int MinMax = Board.WhoseMove; // 1 = maximizing, -1 = minimizing
    int Counter = 0;
    int maxCount;
    int tempValue = 0;
    int currentAlpha = Alpha;
    int currentBeta = Beta;
    int QuietWorth = chEvaluation.Evaluate(Board);
    if (MinMax == 1) // Max
    {
        if (QuietWorth >= currentBeta)
            return currentBeta;
        if (QuietWorth > currentAlpha)
            currentAlpha = QuietWorth;
    }
    else // Min
    {
        if (QuietWorth <= currentAlpha)
            return currentAlpha;
        if (QuietWorth < currentBeta)
            currentBeta = QuietWorth;
    }
    List<chMove> HitMoves = GetAllHitMoves(Board);
    maxCount = HitMoves.Count;
    if (maxCount == 0)
        return chEvaluation.Evaluate(Board);
    chessBoard tempBoard;
    while (Counter < maxCount)
    {
        tempBoard = new chessBoard(Board);
        tempBoard.Move(HitMoves[Counter]);
        tempValue = QuiescentValue(tempBoard, currentAlpha, currentBeta);
        if (MinMax == 1) // maximizing
        {
            if (tempValue >= currentBeta)
            {
                return currentBeta;
            }
            if (tempValue > currentAlpha)
            {
                currentAlpha = tempValue;
            }
        }
        else // minimizing
        {
            if (tempValue <= currentAlpha)
            {
                return currentAlpha;
            }
            if (tempValue < currentBeta)
            {
                currentBeta = tempValue;
            }
        }
        Counter++;
    }
    if (MinMax == 1)
        return currentAlpha;
    else
        return currentBeta;
}

I am not familiar with the English terminology - is a HitMove a move where you remove a piece from the board, i.e. a capture?
In that case it seems to me that you use GetAllHitMoves to get a list of the "noisy" moves for the quiescence search, which are then evaluated beyond the usual 3 or 6 plies. This is called recursively though, so you keep evaluating as long as there are possible HitMoves left over. Giving your quiescence search a depth limit should fix your performance issues; see the sketch after the quote below.
As for choosing the limit for the quiescence search, the chess programming wiki states:
Modern chess engines may search certain moves up to 2 or 3 times deeper than the minimum.
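For illustration, here is a minimal sketch of such a depth limit, reusing the types from your own code (chessBoard, chMove, chEvaluation, GetAllHitMoves); the extra depthLeft parameter is my addition, not part of your engine:

public static int QuiescentValue(chessBoard Board, int Alpha, int Beta, int depthLeft)
{
    QuiescentNodes++;
    int minMax = Board.WhoseMove; // 1 = maximizing, -1 = minimizing
    int standPat = chEvaluation.Evaluate(Board);
    // Depth cap: once the budget is spent, trust the static evaluation.
    if (depthLeft == 0)
        return standPat;
    // Stand-pat cutoff, as in the original code.
    if (minMax == 1)
    {
        if (standPat >= Beta) return Beta;
        if (standPat > Alpha) Alpha = standPat;
    }
    else
    {
        if (standPat <= Alpha) return Alpha;
        if (standPat < Beta) Beta = standPat;
    }
    List<chMove> hitMoves = GetAllHitMoves(Board);
    if (hitMoves.Count == 0)
        return standPat;
    foreach (chMove move in hitMoves)
    {
        chessBoard tempBoard = new chessBoard(Board);
        tempBoard.Move(move);
        int value = QuiescentValue(tempBoard, Alpha, Beta, depthLeft - 1);
        if (minMax == 1)
        {
            if (value >= Beta) return Beta;
            if (value > Alpha) Alpha = value;
        }
        else
        {
            if (value <= Alpha) return Alpha;
            if (value < Beta) Beta = value;
        }
    }
    return minMax == 1 ? Alpha : Beta;
}

The brute-force search would then call QuiescentValue(Board, Alpha, Beta, quietDepth) at its leaves, with quietDepth set to, say, two to three times the nominal search depth, per the quote above.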

Unity Finding difference between A and B in a list efficiently

I am trying to calculate the difference in steps between 2 points, from a list of about 90 points. The code I have so far:
int positiveCounter = 0;
int positiveResetCounter = currentWayPointID;
int negativeCounter = 0;
int negativeResetCounter = currentWayPointID;
while ((currentWayPointID + positiveResetCounter) != pos)
{
    positiveCounter++;
    positiveResetCounter++;
    if (positiveResetCounter > navigationTrack.AllWayPoints.Count)
    {
        positiveResetCounter = 0;
    }
}
while ((currentWayPointID + negativeResetCounter) != pos)
{
    negativeCounter++;
    negativeResetCounter--;
    if (negativeResetCounter < 0)
    {
        negativeResetCounter = navigationTrack.AllWayPoints.Count;
    }
}
if (positiveCounter <= negativeCounter)
{
    MoveForward();
}
else if (negativeCounter < positiveCounter)
{
    // MoveBack();
}
This works as intended, but it's too much for Update to handle. How can I do this in a less taxing way?
To give a bit more context: I have a list of waypoints and a vehicle that moves along them; the vehicle moves to the point that is closest to my mouse's position.
The path is circular, so the last waypoint connects to the first (index 0).
I am trying to determine the shortest path to each waypoint in order to decide whether to go forward or back, and the code above is my attempt at calculating which way to go.
I am not looking for a way to make it move, as that already works.
I assume pos is the target index of the waypoint you want to reach.
Instead of the while loops and index shifting, you could simply compare the indexes directly.
Let's say you have the waypoint list
[WP0, WP1, WP2, WP3, WP4, ... WPn]
so the available indexes are 0 to n, and the list length is n+1.
Let's say currentWayPointID = n and pos = 2.
What you want to know is whether it is faster to go backwards or forwards. So you want to compare which difference is smaller:
going backwards
n - 2 // simply go steps backwards until reaching 2
or going forwards using a virtual extended list
(n+1) + 2 - n; // add the length of the list to the target index
or to visualize it
[WP0, WP1, WP2, WP3, WP4, ... WPn]
index: 0, 1, 2, 3, 4, ... n
extended index: n+1+0, n+1+1, n+1+2, n+1+3, n+1+4, ... n+n+1
So in order to generalize that, you only have to check first whether currentWayPointID is before or after pos, something like:
bool isForwards = true;
if (currentWayPointID >= pos)
{
    if (currentWayPointID - pos < navigationTrack.AllWayPoints.Count + pos - currentWayPointID)
    {
        isForwards = false;
    }
}
else
{
    if (pos - currentWayPointID > navigationTrack.AllWayPoints.Count + currentWayPointID - pos)
    {
        isForwards = false;
    }
}
if (isForwards)
{
    MoveForward();
}
else
{
    MoveBack();
}

C# optimizing a loop working with big numbers

I have this code that finds numbers in a given range that contain only the digits 3 and 5 and are palindromes (symmetrical, 3553 for example). The problem is that the numbers are between 1 and 10^18, so there are cases in which I have to work with big numbers, and using BigInteger makes the program way too slow. Is there a way to fix this? Here's my code:
using System;
using System.Numerics;

namespace Lucky_numbers
{
    class Program
    {
        static void Main(string[] args)
        {
            string startString = Console.ReadLine();
            string finishString = Console.ReadLine();
            BigInteger start = BigInteger.Parse(startString);
            BigInteger finish = BigInteger.Parse(finishString);
            int numbersFound = 0;
            for (BigInteger i = start; i <= finish; i++)
            {
                if (Lucky(i.ToString()))
                {
                    if (Polyndrome(i.ToString()))
                    {
                        numbersFound++;
                    }
                }
            }
            Console.WriteLine(numbersFound);
        }
        static bool Lucky(string number)
        {
            if (number.Contains("1") || number.Contains("2") || number.Contains("4") || number.Contains("6") || number.Contains("7") || number.Contains("8") || number.Contains("9") || number.Contains("0"))
            {
                return false;
            }
            else
            {
                return true;
            }
        }
        static bool Polyndrome(string number)
        {
            bool symetrical = true;
            int middle = number.Length / 2;
            int rightIndex = number.Length - 1;
            for (int leftIndex = 0; leftIndex <= middle; leftIndex++)
            {
                if (number[leftIndex] != number[rightIndex])
                {
                    symetrical = false;
                    break;
                }
                rightIndex--;
            }
            return symetrical;
        }
    }
}
Edit: Turns out it's not BigInteger, it's my shitty implementation.
You could use ulong:
Size: Unsigned 64-bit integer
Range: 0 to 18,446,744,073,709,551,615
But I would guess that BigInteger is not the problem here. I think you should create an algorithm that generates the palindromes instead of the brute-force increment-and-check solution.
Bonus
Here is a palindrome generator I wrote in 5 minutes. I think it will be much faster than your approach. Could you test it and tell me how much faster it is? I'm curious about that.
public class PalyndromeGenerator
{
    private List<string> _results;
    private bool _isGenerated;
    private int _length;
    private char[] _characters;
    private int _middle;
    private char[] _currentItem;

    public PalyndromeGenerator(int length, params char[] characters)
    {
        if (length <= 0)
            throw new ArgumentException("length");
        if (characters == null)
            throw new ArgumentNullException("characters");
        if (characters.Length == 0)
            throw new ArgumentException("characters");
        _length = length;
        _characters = characters;
    }

    public List<string> Results
    {
        get
        {
            if (!_isGenerated)
                throw new InvalidOperationException();
            return _results.ToList();
        }
    }

    public void Generate()
    {
        _middle = (int)Math.Ceiling(_length / 2.0) - 1;
        _results = new List<string>((int)Math.Pow(_characters.Length, _middle + 1));
        _currentItem = new char[_length];
        GeneratePosition(0);
        _isGenerated = true;
    }

    private void GeneratePosition(int position)
    {
        if (position == _middle)
        {
            for (int i = 0; i < _characters.Length; i++)
            {
                _currentItem[position] = _characters[i];
                _currentItem[_length - position - 1] = _characters[i];
                _results.Add(new string(_currentItem));
            }
        }
        else
        {
            for (int i = 0; i < _characters.Length; i++)
            {
                _currentItem[position] = _characters[i];
                _currentItem[_length - position - 1] = _characters[i];
                GeneratePosition(position + 1);
            }
        }
    }
}
Usage:
var generator = new PalyndromeGenerator(6, '3', '5');
generator.Generate();
var items = generator.Results.Select(x => ulong.Parse(x)).ToList();
Strange riddle, but it can be simplified if I understand the requirement. I would first map these numbers to binary, as there are only two possible "lucky" digits, then generate the numbers by counting in binary until I have completed nine bits. Reflect each one for the full number, then convert 0 to 3 and 1 to 5.
Example: 1101
Reflect it: 10111101 --> 53555535
Do this from 0 all the way to 111111111.
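As a sketch of that idea (the helper name LuckyPalindromes and the ulong return type are my choices, not part of the original post; requires using System and System.Collections.Generic):

// Enumerate every lucky palindrome with up to 18 digits by counting each
// possible half in binary and mirroring it; bit 0 maps to '3', bit 1 to '5'.
static IEnumerable<ulong> LuckyPalindromes()
{
    for (int halfLength = 1; halfLength <= 9; halfLength++)
    {
        for (int bits = 0; bits < (1 << halfLength); bits++)
        {
            char[] half = new char[halfLength];
            for (int i = 0; i < halfLength; i++)
                half[i] = ((bits >> (halfLength - 1 - i)) & 1) == 1 ? '5' : '3';
            char[] mirrored = (char[])half.Clone();
            Array.Reverse(mirrored);
            string left = new string(half);
            string right = new string(mirrored);
            yield return ulong.Parse(left + right);              // even length: 2 * halfLength digits
            yield return ulong.Parse(left + right.Substring(1)); // odd length: the middle digit is shared
        }
    }
}

Counting how many of these fall in [start, finish] is then a simple filter over a couple of thousand candidates instead of up to 10^18 increments.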
Declare start and finish to be static inside the class.
Change the method Lucky to:
static bool Lucky(string number)
{
    return !(number.Contains("1") || number.Contains("2") || number.Contains("4") || number.Contains("6") ||
             number.Contains("7") || number.Contains("8") || number.Contains("9") || number.Contains("0"));
}
Also, you can use the Task Parallel Library to parallelize the computation.
Instead of using a regular for loop, you could use a Parallel.For.
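A sketch of that suggestion, assuming the range fits in long and reusing the question's Lucky and Polyndrome methods (requires using System.Threading and System.Threading.Tasks):

long start = long.Parse(Console.ReadLine());
long finish = long.Parse(Console.ReadLine());
long numbersFound = 0;
// Parallel.For's long overload takes an exclusive upper bound, hence finish + 1.
Parallel.For(start, finish + 1, i =>
{
    string s = i.ToString();
    if (Lucky(s) && Polyndrome(s))
        Interlocked.Increment(ref numbersFound);
});
Console.WriteLine(numbersFound);

This only spreads the brute force across cores, though; generating the palindromes directly, as suggested elsewhere in this thread, avoids almost all of the work.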
Look at the problem a different way: how many strings of up to 9 characters (using only '3' and '5') can you make? For each string you can make 2 palindromes (one repeating the last character, one not).
e.g.
3 -> 33
5 -> 55
33 -> 333, 3333
35 -> 353, 3553
53 -> 535, 5335
...
The only suggestion I have is to use a 3rd party library like intx, or some unmanaged code. The intx author reports that it can work faster than BigInteger in some situations: "System.Numerics.BigInteger class was introduced in .NET 4.0 so I was interested in performance of this solution. I did some tests (grab test code from GitHub) and it appears that BigInteger has performance in general comparable with IntX on standard operations but starts losing when FHT comes into play (when multiplying really big integers, for example)."
Since the number has to be symmetrical, you only need to check the first half of the number. You don't need to check 18 digits; you only have to check up to 9 digits, then mirror the characters and append them to the back as a string.
One thing I can think of: if you are only going to count integers containing just 3 or 5, you don't need to traverse the entire list of numbers between your beginning and ending range.
Instead, look at your character set as either '3' or '5'. Then you can simply go through the allowed permutations of half of the number, leaving the other half to be completed to successfully create a palindrome.
There are some rules to this method which would help, such as:
if the starting number's left-most digit is greater than 5, there is no need to attempt that specific number of digits.
if both bounds have the same number of digits but their left-most digits do not span or include 3 or 5, there is no need to process.
Developing a set of rules such as this will help more than attempting to check every possible permutation.
So, for example, your Lucky function would become something more along the lines of :
static bool Lucky(string number)
{
    if ((number[0] != '3') && (number[0] != '5'))
    {
        return false;
    } // and you could continue this for the entire string
    ...
}

Enumerable.Range(...).Any(...) outperforms a basic loop: Why?

I was adapting a simple prime-number generation one-liner from Scala to C# (mentioned in a comment on this blog by its author). I came up with the following:
int NextPrime(int from)
{
    int n = from;
    while (true)
    {
        n++;
        if (!Enumerable.Range(2, (int)Math.Sqrt(n) - 1).Any((i) => n % i == 0))
            return n;
    }
}
It works, returning the same results I'd get from running the code referenced in the blog. In fact, it works fairly quickly. In LINQPad, it generated the 100,000th prime in about 1 second. Out of curiosity, I rewrote it without Enumerable.Range() and Any():
int NextPrimeB(int from)
{
    int n = from;
    while (true)
    {
        n++;
        bool hasFactor = false;
        for (int i = 2; i <= (int)Math.Sqrt(n); i++)
        {
            if (n % i == 0) hasFactor = true;
        }
        if (!hasFactor) return n;
    }
}
Intuitively, I'd expect them to either run at the same speed, or even for the latter to run a little faster. In actuality, computing the same value (the 100,000th prime) with the second method takes 12 seconds; it's a staggering difference.
So what's going on here? There must be something fundamentally extra happening in the second approach that's eating up CPU cycles, or some optimization going on in the background of the LINQ example. Anybody know why?
For every iteration of the for loop, you are finding the square root of n. Cache it instead.
int root = (int)Math.Sqrt(n);
for (int i = 2; i <= root; i++)
And as others have mentioned, break out of the for loop as soon as you find a factor.
The LINQ version short circuits, your loop does not. By this I mean that when you have determined that a particular integer is in fact a factor the LINQ code stops, returns it, and then moves on. Your code keeps looping until it's done.
If you change the for to include that short circuit, you should see similar performance:
int NextPrimeB(int from)
{
    int n = from;
    while (true)
    {
        n++;
        bool hasFactor = false;
        for (int i = 2; i <= (int)Math.Sqrt(n); i++)
        {
            if (n % i == 0) { hasFactor = true; break; } // short circuit
        }
        if (!hasFactor) return n;
    }
}
It looks like this is the culprit:
for (int i = 2; i <= (int)Math.Sqrt(n); i++)
{
    if (n % i == 0) hasFactor = true;
}
You should exit the loop once you find a factor:
if (n % i == 0)
{
    hasFactor = true;
    break;
}
And as others have pointed out, move the Math.Sqrt call outside the loop to avoid calling it each cycle.
Enumerable.Any takes an early out if the condition is successful while your loop does not.
The enumeration of source is stopped as soon as the result can be determined.
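Conceptually, Any with a predicate behaves roughly like this (a simplified sketch, not the actual BCL source):

static bool Any<TSource>(IEnumerable<TSource> source, Func<TSource, bool> predicate)
{
    foreach (TSource element in source)
    {
        if (predicate(element))
            return true; // early out: enumeration stops at the first match
    }
    return false;
}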
This is an example of a bad benchmark. Try modifying your loop to short-circuit as well and see the difference:
if (n % i == 0) { hasFactor = true; break; }
In the name of optimization, you can be a little more clever about this by avoiding even numbers after 2:
if (n != 2 && n % 2 == 0)
{
    hasFactor = true; // even numbers other than 2 are never prime
}
else
{
    int quux = (int)Math.Sqrt(n);
    for (int i = 3; i <= quux; i += 2)
    {
        if (n % i == 0) { hasFactor = true; break; }
    }
}
There are some other ways to optimize prime searches, but this is one of the easier to do and has a large payoff.
Edit: you may want to consider using (int)Math.Sqrt(n) + 1. FP functions + round-down could potentially cause you to miss a square of a large prime number.
At least part of the problem is the number of times Math.Sqrt is executed. In the LINQ query this is executed once per candidate, but in the loop example it's executed N times. Try pulling that out into a local and reprofiling the application. That will give you a more representative breakdown.
int limit = (int)Math.Sqrt(n);
for (int i = 2; i <= limit; i++)

Help with maths/coding on possible combinations of a set to make up a total - C#

I have a coding/maths problem that I need help translating into C#. It's a poker chip calculator that takes in the BuyIn, the number of players, the total number of chips of each colour (there can be any number of colours), and their values.
It then shows you every possible combination of chips per person that equals the BuyIn. The user can then pick the chip distribution they would like to use. It's best illustrated with a simple example.
BuyIn: $10
Number of Players: 1
10 Red Chips, $1 value
10 Blue Chips, $2 value
10 Green Chips, $5 value
So, the possible combinations are:
R/B/G
10/0/0
8/1/0
6/2/0
5/0/1
4/3/0
2/4/0
1/2/1
etc.
I have spent a lot of time trying to come up with an algorithm in C#/.NET to work this out. I am stumbling on the variable factor - there are usually only 3 or 4 different chip colours in a set, but there could be any number. If you have more than one player, then you can only count up to TotalChips / NumberOfPlayers.
I started off with a loop through all the chips and then looping from 0 up to NumberOfChips for that colour. And this is pretty much where I have spent the last 4 hours... how do I write the code to loop through any number of chip colours, check the value of the sum of the chips, and add it to a collection if it equals the BuyIn? I need to change my approach radically, methinks...
Can anyone put me on the right track on how to solve this please? Pseudo code would work - thank you for any advice!
Below is my attempt so far - it's hopeless (and won't compile; it's just an example to show you my thought process). It might be better not to look at it, as it might bias you toward a solution...
private void SplitChips(List<ChipSuggestion> suggestions)
{
    decimal valueRequired = (decimal)txtBuyIn.Value;
    decimal checkTotal = 0;
    ChipSuggestion suggestion;
    //loop through each colour
    foreach (Chip chip in (PagedCollectionView)gridChips.ItemsSource)
    {
        //for each value, loop through them all again
        foreach (Chip currentChip in (PagedCollectionView)gridChips.ItemsSource)
        {
            //start at 0 and go all the way up
            for (int i = 0; i < chip.TotalChipsInChipset; i++)
            {
                checkTotal = currentChip.ChipValue * i;
                //if it is greater than, then ignore and stop
                if (checkTotal > valueRequired)
                {
                    break;
                }
                else
                {
                    //if it is equal to, then this is a match
                    if (checkTotal == valueRequired)
                    {
                        suggestion = new ChipSuggestion();
                        suggestion.SuggestionName = "Suggestion";
                        chipRed.NumberPerPlayer = i;
                        suggestion.Chips.Add(chipRed);
                        chipBlue.NumberPerPlayer = y;
                        suggestion.Chips.Add(chipBlue);
                        chipGreen.NumberPerPlayer = 0;
                        suggestion.Chips.Add(chipGreen);
                        //add this to the Suggestion
                        suggestions.Add(suggestion);
                        break;
                    }
                }
            }
        }
    }
}
Here's an implementation that reads the number of chip types, the chips (their worth and amount) and the buy-in, and displays the results in your example format. I have explained it through comments; let me know if you have any questions.
class Test
{
    static int buyIn;
    static int numChips;
    static List<int> chips = new List<int>(); // chips[i] = value of chips of color i
    static List<int> amountOfChips = new List<int>(); // amountOfChips[i] = number of chips of color i

    static void generateSolutions(int sum, int[] solutions, int last)
    {
        if (sum > buyIn) // our sum is too big, return
            return;
        if (sum == buyIn) // our sum is just right, print the solution
        {
            for (int i = 0; i < chips.Count; ++i)
                Console.Write("{0}/", solutions[i]);
            Console.WriteLine();
            return; // and return
        }
        for (int i = last; i < chips.Count; ++i) // try adding another chip of the same or a later color than the one added at the last step;
                                                 // this ensures that no duplicate solutions will be generated, since we impose an order of generation
            if (amountOfChips[i] != 0)
            {
                --amountOfChips[i]; // decrease the amount of chips
                ++solutions[i]; // increase the number of times chip i has been used
                generateSolutions(sum + chips[i], solutions, i); // recursive call
                ++amountOfChips[i]; // (one of) chip i is no longer used
                --solutions[i]; // so it's no longer part of the solution either
            }
    }

    static void Main()
    {
        Console.WriteLine("Enter the buyin:");
        buyIn = int.Parse(Console.ReadLine());
        Console.WriteLine("Enter the number of chips types:");
        numChips = int.Parse(Console.ReadLine());
        Console.WriteLine("Enter {0} chips values:", numChips);
        for (int i = 0; i < numChips; ++i)
            chips.Add(int.Parse(Console.ReadLine()));
        Console.WriteLine("Enter {0} chips amounts:", numChips);
        for (int i = 0; i < numChips; ++i)
            amountOfChips.Add(int.Parse(Console.ReadLine()));
        int[] solutions = new int[numChips];
        generateSolutions(0, solutions, 0);
    }
}
Enter the buyin:
10
Enter the number of chips types:
3
Enter 3 chips values:
1
2
5
Enter 3 chips amounts:
10
10
10
10/0/0/
8/1/0/
6/2/0/
5/0/1/
4/3/0/
3/1/1/
2/4/0/
1/2/1/
0/5/0/
0/0/2/
Break the problem down recursively by the number of kinds of chips.
For the base case, how many ways are there to make an $X buy-in with zero kinds of chips? If X is zero, there is one way: no chips. If X is more than zero, there are no ways to do it.
Now we need to solve the problem for N kinds of chips, given the solution for N - 1. We can take one kind of chip, and consider every possible number of that chip up to the buy-in. For example, if the chip is $2, and the buy-in is $5, try using 0, 1, or 2 of them. For each of these tries, we have to use only the remaining N - 1 chips to make up the remaining value. We can solve that by doing a recursive call, and then adding our current chip to each solution it returns.
private static IEnumerable<IEnumerable<Tuple<Chip, int>>> GetAllChipSuggestions(List<Chip> chips, int players, int totalValue)
{
    return GetAllChipSuggestions(chips, players, totalValue, 0);
}

private static IEnumerable<IEnumerable<Tuple<Chip, int>>> GetAllChipSuggestions(List<Chip> chips, int players, int totalValue, int firstChipIndex)
{
    if (firstChipIndex == chips.Count)
    {
        // Base case: we have no chip types remaining
        if (totalValue == 0)
        {
            // One way to make 0 with no chip types
            return new[] { Enumerable.Empty<Tuple<Chip, int>>() };
        }
        else
        {
            // No ways to make more than 0 with no chip types
            return Enumerable.Empty<IEnumerable<Tuple<Chip, int>>>();
        }
    }
    else
    {
        // Recursive case: try each possible number of this chip type
        var allSuggestions = new List<IEnumerable<Tuple<Chip, int>>>();
        var currentChip = chips[firstChipIndex];
        var maxChips = Math.Min(currentChip.TotalChipsInChipset / players, totalValue / currentChip.ChipValue);
        for (var chipCount = 0; chipCount <= maxChips; chipCount++)
        {
            var currentChipSuggestion = new[] { Tuple.Create(currentChip, chipCount) };
            var remainingValue = totalValue - currentChip.ChipValue * chipCount;
            // Get all combinations of chips after this one that make up the rest of the value
            foreach (var suggestion in GetAllChipSuggestions(chips, players, remainingValue, firstChipIndex + 1))
            {
                allSuggestions.Add(suggestion.Concat(currentChipSuggestion));
            }
        }
        return allSuggestions;
    }
}
For some large inputs this is probably not solvable in reasonable time (it is related to the NP-hard knapsack problem):
http://en.wikipedia.org/wiki/Knapsack_problem
The article also links to code that could help you.
Hope this helps a bit.

Interpolation in C# - performance problem

I need to resample big sets of data (a few hundred spectra, each containing a few thousand points) using simple linear interpolation.
I have created an interpolation method in C#, but it seems to be really slow for huge datasets.
How can I improve the performance of this code?
public static List<double> interpolate(IList<double> xItems, IList<double> yItems, IList<double> breaks)
{
    double[] interpolated = new double[breaks.Count];
    int id = 1;
    int x = 0;
    // left border case - uphold the value
    while (breaks[x] < xItems[0])
    {
        interpolated[x] = yItems[0];
        x++;
    }
    double p, w;
    for (int i = x; i < breaks.Count; i++)
    {
        while (breaks[i] > xItems[id])
        {
            id++;
            if (id > xItems.Count - 1)
            {
                id = xItems.Count - 1;
                break;
            }
        }
        System.Diagnostics.Debug.WriteLine(string.Format("i: {0}, id {1}", i, id));
        if (id <= xItems.Count - 1)
        {
            if (id == xItems.Count - 1 && breaks[i] > xItems[id])
            {
                interpolated[i] = yItems[yItems.Count - 1];
            }
            else
            {
                w = xItems[id] - xItems[id - 1];
                p = (breaks[i] - xItems[id - 1]) / w;
                interpolated[i] = yItems[id - 1] + p * (yItems[id] - yItems[id - 1]);
            }
        }
        else // right border case - uphold the value
        {
            interpolated[i] = yItems[yItems.Count - 1];
        }
    }
    return interpolated.ToList();
}
Edit
Thanks, guys, for all your responses. What I wanted when I wrote this question were some general ideas on where to find areas to improve the performance. I wasn't expecting any ready solutions, only some ideas, and you gave me what I wanted, thanks!
Before writing this question I thought about rewriting this code in C++, but after reading the comments to Will's answer it seems that the gain could be less than I expected.
Also, the code is so simple that there are no mighty code-tricks to use here. Thanks to Petar for his attempt to optimize the code.
It seems that it all reduces to finding a good profiler, checking every line and subroutine, and trying to optimize that.
Thank you again for all the responses and for taking part in this discussion!
public static List<double> Interpolate(IList<double> xItems, IList<double> yItems, IList<double> breaks)
{
    var a = xItems.ToArray();
    var b = yItems.ToArray();
    var aLimit = a.Length - 1;
    var bLimit = b.Length - 1;
    var interpolated = new double[breaks.Count];
    var total = 0;
    var initialValue = a[0];
    while (breaks[total] < initialValue)
    {
        total++;
    }
    // left border case - uphold the first y value
    for (int j = 0; j < total; j++)
    {
        interpolated[j] = b[0];
    }
    int id = 1;
    for (int i = total; i < breaks.Count; i++)
    {
        var breakValue = breaks[i];
        while (breakValue > a[id])
        {
            id++;
            if (id > aLimit)
            {
                id = aLimit;
                break;
            }
        }
        double value = b[bLimit];
        if (id <= aLimit)
        {
            var currentValue = a[id];
            var previousValue = a[id - 1];
            if (id != aLimit || breakValue <= currentValue)
            {
                var w = currentValue - previousValue;
                var p = (breakValue - previousValue) / w;
                value = b[id - 1] + p * (b[id] - b[id - 1]);
            }
        }
        interpolated[i] = value;
    }
    return interpolated.ToList();
}
I've cached some (const) values and hoisted repeated property lookups out of the loops, but I think these are micro-optimizations that are already made by the compiler in Release mode. However, you can try this version and see if it will beat the original version of the code.
Instead of
interpolated.ToList()
which copies the whole array, compute the interpolated values directly in the final list (or return that array instead). This matters especially if the array/list is big enough to qualify for the large object heap.
Unlike the ordinary heap, the LOH is not compacted by the GC, which means that short-lived large objects are far more harmful than small ones.
Then again: 7000 doubles are approx. 56,000 bytes, which is below the large object threshold of 85,000 bytes.
Looks to me like you've created an O(n^2) algorithm. You are searching for the interval, which is O(n), then probably applying it n times. You'll get a quick and cheap speed-up by taking advantage of the fact that the items are already ordered in the list. Use BinarySearch(), which is O(log(n)).
If still necessary, you should be able to do something speedier with the outer loop: whatever interval you found previously should make it easier to find the next one. But that code isn't in your snippet.
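A sketch of that suggestion using Array.BinarySearch (the helper name FindInterval is mine; xs must be sorted ascending, as the question's xItems already are):

// Find id such that xs[id - 1] < value <= xs[id], clamped to the valid range,
// mirroring how the original loop uses id.
static int FindInterval(double[] xs, double value)
{
    int idx = Array.BinarySearch(xs, value);
    if (idx < 0)
        idx = ~idx; // bitwise complement of the index of the first larger element
    return Math.Min(Math.Max(idx, 1), xs.Length - 1);
}

Each break then costs O(log n) instead of O(n) in the worst case.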
I'd say profile the code and see where it spends its time; then you have somewhere to focus on.
ANTS is popular, but EQATEC is free, I think.
A few suggestions:
As others suggested, use a profiler to understand better where the time is spent.
The loop
while (breaks[x] < xItems[0])
could cause an exception if x grows bigger than the number of items in the breaks list. You should use something like
while (x < breaks.Count && breaks[x] < xItems[0])
But you might not need that loop at all. Why treat the first item as a special case? Just start with id = 0 and handle the first point in the for (i) loop. I understand that id might start from 0 in this case, and [id - 1] would be a negative index, but see if you can do something there.
If you want to optimize for speed then you sacrifice memory size, and vice versa. You cannot usually have both, unless you make a really clever algorithm. In this case, it would mean calculating as much as you can outside the loops, storing those values in variables (extra memory) and using them later. For example, instead of always saying:
id = xItems.Count - 1;
You could say:
int lastXItemsIndex = xItems.Count-1;
...
id = lastXItemsIndex;
This is the same suggestion Petar Petrov made with aLimit, bLimit, and so on.
The next point: your loop (or the one Petar Petrov suggested):
while (breaks[i] > xItems[id])
{
    id++;
    if (id > xItems.Count - 1)
    {
        id = xItems.Count - 1;
        break;
    }
}
could probably be reduced to:
double currentBreak = breaks[i];
while (id <= lastXItemsIndex && currentBreak > xItems[id]) id++;
And the last point I would add: check whether there is some property of your samples that is special to your problem. For example, if xItems represent time and you are sampling at regular intervals, then
w = xItems[id] - xItems[id - 1];
is constant, and you do not have to calculate it every time in the loop.
This is probably not often the case, but maybe your problem has some other property which you could use to improve performance.
Another idea: maybe you do not need double precision; "float" is probably faster because it is smaller.
Good luck
System.Diagnostics.Debug.WriteLine(string.Format("i: {0}, id {1}", i, id));
I hope it's a Release build without DEBUG defined?
Beyond that, it might depend on what exactly those IList parameters are. It may be useful to store the Count value instead of accessing the property every time.
This is the kind of problem where you need to move over to native code.
