I am developing an App which uses recursion.
void Keres(char[,] array, int width, int height)
{
_found = Search(array, height, width, _words);
if (_found.Count < 6)
{
_found.Clear();
Keres(array, width, height);
}
}
Search is a recursive method that returns a List<string>. I need its count to be greater than 5; if it is not, I have to call the Keres method again and again until the count is 6 or greater, but my app freezes.
Here is where I call the Keres method:
if ((string)appSettings["gamelanguage"] == "english")
{
szo = EngInput(3, 3); //szo is a char[,] array
Keres(szo, 3, 3);
}
What can I do to avoid the recursion, or at least the freeze, and still get my 6 or more items?
Edit: Search method
List<string> Search(char[,] letter_table, int height, int width, List<string> words_list)
{
List<string> possible_words = new List<string>();
char[,] _tmp_letter_table = new char[height, width];
bool possible = false;
foreach (String word in words_list)
{
possible = false;
Array.Copy(letter_table, _tmp_letter_table, width * height);
for (int i = 0; i < height; i++)
{
for (int j = 0; j < width; j++)
{
if (_tmp_letter_table[i, j] == word[0])
{
if (IsNeighborTest(word, i, j, height, width, _tmp_letter_table, 0) == true)
{
possible = true;
break;
}
else
{
Array.Copy(letter_table, _tmp_letter_table, width * height);
}
}
}
if (possible == true)
{
possible_words.Add(word);
break;
}
}
}
return possible_words;
}
Your code is not correct recursion: you always call the same method with the same arguments. Each time a recursive method calls itself, something must have changed; in your code nothing ever changes, so the method never exits and the application freezes.
I think, if I understood what you want, that the problem you are facing is not solvable with recursion.
Maybe the array is something that changes over time, and you want to keep checking it with the Keres method until it yields 6 or more results? Then recursion is not the way to do it.
You can avoid the recursion with a simple loop:
void Keres(char[,] array, int width, int height)
{
do
{
_found = Search(array,height,width,_words);
} while (_found.Count < 6);
}
But if the app freezes with the recursion, it will most likely freeze without it too, since both versions do the same work (the loop does avoid a StackOverflowException, however, if it takes many iterations to complete).
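If Search is deterministic, calling it again on the same board will always return the same list, so even the loop can never terminate. A minimal sketch of one way out, under the assumption (mine, not stated in the question) that EngInput produces a new random board on every call, with an arbitrary cap on the number of attempts:
if ((string)appSettings["gamelanguage"] == "english")
{
    const int maxAttempts = 100;            // arbitrary safety cap, pick what fits your game
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        szo = EngInput(3, 3);               // build a fresh 3x3 board each time
        _found = Search(szo, 3, 3, _words); // same call Keres makes internally
        if (_found.Count >= 6)
            break;                          // enough words found, stop retrying
    }
}
This keeps the retry idea from the question but changes the input on every iteration, so there is at least a chance of eventually finding 6 or more words.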
So how can I update the position every time I call StartRandomizingRightSpikesPosition?
private bool CheckOverlap(GameObject o1, GameObject o2)
{
return o1.GetComponent<Collider>().bounds.Intersects(o2.GetComponent<Collider>().bounds);
}
public void StartRandomizingRightSpikesPosition()
{
foreach (var t in spikeRight)
{
foreach (var t1 in spikeRight)
{
if (t == t1) continue;
if (!CheckOverlap(t, t1)) continue;
yPosition = Random.Range(-7, 7);
var position = t1.transform.position;
desiredPosition = new Vector3(position.x, yPosition, position.z);
t1.transform.position = desiredPosition;
Debug.Log(t.gameObject + " intersects " + t1.gameObject);
}
}
}
The short answer is yes, but I'm not sure you would want to. I'm not sure you're going to find a way to do this efficiently, and you might be better off finding a way to generate the objects such that this step is not necessary.
I can't tell from your question how the objects are actually stored, so I'm going to provide some sample code that just deals with a simple array of Rectangles. You should be able to adapt it to your specifics.
I tried to make it slightly more efficient by not checking both t1 == t and t == t1.
const int Bounds = 1000;
static readonly Random Rng = new Random(); // shared random source used below
static bool RemovedOverlappingRects(Rectangle[] rects)
{
for (int pos = 0; pos < rects.Length; ++pos)
{
for (int check = pos +1; check < rects.Length; ++check)
{
var r1 = rects[pos];
var r2 = rects[check];
if (r1.IntersectsWith(r2))
{
r2.Y = Rng.Next(1, Bounds);
rects[check] = r2;
Console.WriteLine($"{pos} overlaps with {check}");
return true;
}
}
}
return false;
}
Once we've randomly generated a new rectangle, we have to start over, which means invoking the above method in a loop.
var rects = GetRandomRects(20).ToArray();
while (RemovedOverlappingRects(rects))
;
Because of the random movement, I'm not certain you can guarantee this will always end. If you can live with the non-random look of the results, changing it to stack the overlaps would, I believe, always finish. That would be this:
r2.Y = r1.Y + r1.Height + 1;
in place of
r2.Y = Rng.Next(1, Bounds);
But even then you're still dealing with a very unpredictable run time due to the random input.
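One defensive option (my suggestion, not part of the original answer) is to cap the number of passes so a pathological layout cannot loop forever:
var rects = GetRandomRects(20).ToArray();
int passes = 0;
const int maxPasses = 10000;               // arbitrary safety limit
while (RemovedOverlappingRects(rects) && ++passes < maxPasses)
    ;
if (passes == maxPasses)
    Console.WriteLine("Gave up: could not separate all rectangles.");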
Maybe someone else can show us a more efficient way...
EDIT: OP here, it's answered. Can't accept my own answer for 2 days? Dunno, I'm a Stack Overflow noob. Thanks to the people that helped.
I want to have a loop that generates a random coordinate and adds it to a list only if that coordinate does not already exist in the list.
It should just keep looping until the correct number of coordinates is in the list.
while (spawnPositions.Count < myPlayer.myPlayerUnits.Count)
{
Debug.Log("While");
int[,] rnd = new int[UnityEngine.Random.Range(minX, maxX), UnityEngine.Random.Range(minZ, maxZ)];
if (spawnPositions.Contains(rnd) == false)
{
Debug.Log("SpawnPos Added!");
spawnPositions.Add(rnd);
}
}
The problem is that the if statement is always true. The console output shows the loop runs X times and the if statement is also true X times.
Is this just not possible to do in a while-loop, or am I doing something wrong? Thanks!
EDIT: Oh, and to be clear, yes, duplicates are added. I'm trying to generate 5 unique coordinates on a 3x3 area and there are almost always duplicates!
If I understand you right, you want to compare the content of the multidimensional arrays, or maybe only their dimensions. But the .Contains() method checks reference equality, and each array is a unique reference because you create it with "new", so it is never found in the list.
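For instance (a minimal illustration of mine, not code from the original post), two arrays with identical contents are still different references, so Contains never matches:
int[,] a = new int[,] { { 1, 2 }, { 3, 4 } };
int[,] b = new int[,] { { 1, 2 }, { 3, 4 } };
Debug.Log(a == b);              // False: different references
Debug.Log(a.Equals(b));         // False: Equals is reference equality for arrays
var list = new List<int[,]> { a };
Debug.Log(list.Contains(b));    // False: Contains uses that same reference equality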
I defined an extension method for it.
public static bool IsContentEqualTo(this int[,] array1, int[,] array2)
{
if (array1 == null) throw new ArgumentNullException(nameof(array1), "Array cannot be null");
if (array2 == null) throw new ArgumentNullException(nameof(array2), "Array cannot be null");
if (array1.GetLowerBound(0) != array2.GetLowerBound(0)) return false;
if (array1.GetUpperBound(0) != array2.GetUpperBound(0)) return false;
if (array1.GetLowerBound(1) != array2.GetLowerBound(1)) return false;
if (array1.GetUpperBound(1) != array2.GetUpperBound(1)) return false;
var xMin = array1.GetLowerBound(0);
var xMax = array1.GetUpperBound(0);
var yMin = array1.GetLowerBound(1);
var yMax = array1.GetUpperBound(1);
for (var x = xMin; x <= xMax; x++)
for (var y = yMin; y <= yMax; y++)
{
if (array1[x, y] != array2[x, y]) return false;
}
return true;
}
And you can call it like this:
if (!spawnPositions.Any(array => array.IsContentEqualTo(rnd)))
{
Debug.Log("SpawnPos Added!");
spawnPositions.Add(rnd);
}
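As a side note (my suggestion, not part of the original answer), if each spawn position is really just a pair of coordinates, you can sidestep the content-comparison problem entirely by storing value tuples in a HashSet, since tuples compare by value (this assumes a C# version with value tuple support):
var spawnPositions = new HashSet<(int x, int z)>();
while (spawnPositions.Count < myPlayer.myPlayerUnits.Count)
{
    var rnd = (UnityEngine.Random.Range(minX, maxX), UnityEngine.Random.Range(minZ, maxZ));
    if (spawnPositions.Add(rnd))        // Add returns false for duplicates
        Debug.Log("SpawnPos Added!");
}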
I fixed it. It's not pretty, but it works: an array of booleans to keep track of whether a value has already been used, and no actual adding/removing of values from the list within the while-loop (I had a few infinite loops trying things out :S).
bool[,] tileOccupied = new bool[maxX, maxZ];
for (int i = 0; i < myPlayer.myPlayerUnits.Count; i++)
{
int tileValue = UnityEngine.Random.Range(0, myPlayer.mySpawnArea.Count);
int[,] spawnTile = myPlayer.mySpawnArea[tileValue];
if (tileOccupied[spawnTile.GetLength(0), spawnTile.GetLength(1)] == true)
{
do
{
tileValue = UnityEngine.Random.Range(0, myPlayer.mySpawnArea.Count - 1);
spawnTile = myPlayer.mySpawnArea[tileValue];
} while (tileOccupied[spawnTile.GetLength(0), spawnTile.GetLength(1)] == true);
}
tileOccupied[spawnTile.GetLength(0), spawnTile.GetLength(1)] = true;
spawnPositions.Add(spawnTile);
}
I am currently writing a C# application to demonstrate the speedup of parallel computing over single-threaded applications. My case is a median blur of an image. But putting more threads to work slows the application down significantly (60 seconds single-threaded vs. 75 seconds multithreaded). Given my current approach, I don't know how I could improve the process for multithreading. Sorry in advance for the long code in this post.
My current approach:
First, I calculate how many pixels each thread needs to process to even out the work. The DateTime calculation is there to measure how much time is spent single-threaded and how much is spent multithreaded:
public void blurImage(int cores)
{
_startTotal = DateTime.Now;
int numberOfPixels = _originalImage.Width * _originalImage.Height;
if (cores>=numberOfPixels)
{
for (int i = 0; i < numberOfPixels; i++)
{
startThread(0, numberOfPixels);
}
}
else
{
int pixelsPerThread = numberOfPixels / cores;
int threshold = numberOfPixels - (pixelsPerThread * cores);
startThread(0, pixelsPerThread + threshold);
for (int i = 1; i < cores; i++)
{
int startPixel = i * pixelsPerThread + threshold;
startThread(startPixel, startPixel + pixelsPerThread);
}
}
_SeqTime = DateTime.Now.Subtract(_startTotal);
}
The startThread method starts a thread and saves the result into a special class object so everything can later be merged into one image. I pass a copy of the input image to each thread.
private void startThread(int startPixel, int numberOfPixels)
{
BlurOperation operation = new BlurOperation(blurPixels);
_operations.Add(operation);
BlurResult result = new BlurResult();
operation.BeginInvoke((Bitmap)_processedImage.Clone(), startPixel, numberOfPixels, _windowSize, result, operation, new AsyncCallback(finish), result);
}
Every thread blurs its set of pixels and saves the result into a new list of colors. The result is stored in the result object together with the start pixel and the current operation, so the program knows when all threads are finished:
private void blurPixels(Bitmap bitmap, int startPixel, int endPixel, int window, BlurResult result, BlurOperation operation)
{
List<Color> colors = new List<Color>();
for (int i = startPixel; i < endPixel; i++)
{
int x = i % bitmap.Width;
int y = i / bitmap.Width;
colors.Add(PixelBlurrer.ShadePixel(x, y, bitmap, window));
}
result._pixels = colors;
result._startPixel = startPixel;
result._operation = operation;
}
The PixelBlurrer calculates the median of each color channel and returns it:
public static Color ShadePixel(int x, int y, Bitmap image, int window)
{
List<byte> red = new List<byte>();
List<byte> green = new List<byte>();
List<byte> blue = new List<byte>();
int xBegin = Math.Max(x - window, 0);
int yBegin = Math.Max(y - window, 0);
int xEnd = Math.Min(x + window, image.Width - 1);
int yEnd = Math.Min(y + window, image.Height - 1);
for (int tx = xBegin; tx < xEnd; tx++)
{
for (int ty = yBegin; ty < yEnd; ty++)
{
Color c = image.GetPixel(tx, ty);
red.Add(c.R);
green.Add(c.G);
blue.Add(c.B);
}
}
red.Sort();
green.Sort();
blue.Sort();
Color output = Color.FromArgb(red[red.Count / 2], green[green.Count / 2], blue[blue.Count / 2]);
return output;
}
In the callback, we return to the GUI thread and merge all pixels into the resulting image. Lastly, an event is raised telling my form the process is done:
private void finish(IAsyncResult iar)
{
Application.Current.Dispatcher.BeginInvoke(new AsyncCallback(update), iar);
}
private void update(IAsyncResult iar)
{
BlurResult result = (BlurResult)iar.AsyncState;
updateImage(result._pixels, result._startPixel, result._operation);
}
private void updateImage(List<Color> colors, int startPixel, BlurOperation operation)
{
DateTime updateTime = DateTime.Now;
_operations.Remove(operation);
int end = startPixel + colors.Count;
for (int i = startPixel; i < end; i++)
{
int x = i % _processedImage.Width;
int y = i / _processedImage.Width;
_processedImage.SetPixel(x, y, colors[i - startPixel]);
}
if (_operations.Count==0)
{
done(this, null);
}
_SeqTime += DateTime.Now.Subtract(updateTime);
}
Any thoughts? I tried using Parallel.For instead of delegates, but that made it worse. Is there a way to speed up median blur by multithreading, or is this a failed case?
After some thinking, I figured out that my logic is solid, but I wasn't sending a deep copy to each thread. After changing this line in startThread:
operation.BeginInvoke((Bitmap)_processedImage.Clone(), startPixel, numberOfPixels, _windowSize, result, operation, new AsyncCallback(finish), result);
to this:
operation.BeginInvoke(new Bitmap(_processedImage), startPixel, numberOfPixels, _windowSize, result, operation, new AsyncCallback(finish), result);
I can now see a speedup when running multithreaded.
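For reference, here is a minimal sketch of the same idea using Parallel.For (my illustration, not code from the original post). It assumes the ShadePixel method from the question is accessible and that System.Drawing and the TPL are available; the key point is the same as above: each worker reads only from its own deep copy, so no GDI+ bitmap is ever shared between threads:
using System;
using System.Drawing;
using System.Threading.Tasks;

static Bitmap MedianBlurParallel(Bitmap source, int window, int workers)
{
    var result = new Bitmap(source.Width, source.Height);
    var pixels = new Color[source.Width, source.Height];
    int rowsPerWorker = (source.Height + workers - 1) / workers;

    // make the per-worker deep copies up front, on one thread,
    // so the source bitmap is never touched concurrently
    var copies = new Bitmap[workers];
    for (int w = 0; w < workers; w++)
        copies[w] = new Bitmap(source);

    Parallel.For(0, workers, w =>
    {
        int yStart = w * rowsPerWorker;
        int yEnd = Math.Min(yStart + rowsPerWorker, source.Height);
        for (int y = yStart; y < yEnd; y++)
            for (int x = 0; x < source.Width; x++)
                pixels[x, y] = PixelBlurrer.ShadePixel(x, y, copies[w], window);
    });

    foreach (var copy in copies)
        copy.Dispose();

    // write the result back on a single thread; SetPixel is not thread-safe
    for (int y = 0; y < source.Height; y++)
        for (int x = 0; x < source.Width; x++)
            result.SetPixel(x, y, pixels[x, y]);

    return result;
}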
I've seen lots of examples of how to consume a BlockingCollection<T> in the producer-consumer scenario, even how to consume one element at a time here. I'm quite new to parallel programming, though, so I'm facing the following problem:
The problem is how to write the method ConsumerProducerExample.ConsumerMethod in the example below such that it consumes the first 2 doubles of the BlockingCollection<Double> for every element in the array, then proceeds to consume the next 2 doubles for every element in the array, and so on.
I have written the method in the example below, but it is not the way I want it to work; it is just what I know how to do based on the example I linked above. It works exactly the opposite way: as it is here, it will consume every double before jumping to the next BlockingCollection<Double> in the array. Again, I want it to consume only two doubles, then jump to the next BlockingCollection<Double>, consume two doubles, and so on. After completing the array loop, it should proceed to consume the next two doubles from each BlockingCollection<Double> element again, and so on.
public class ConsumerProducerExample
{
public static BlockingCollection<Double>[] data;
public static Int32 nEl = 100;
public static Int32 tTotal = 1000;
public static Int32 peakCounter = 0;
public static void InitData()
{
data = new BlockingCollection<Double>[nEl];
for (int i = 0; i < nEl; i++) data[i] = new BlockingCollection<Double>();
}
public static void ProducerMethod()
{
Int32 t = 0, j;
while (t < ConsumerProducerExample.tTotal)
{
j = 0;
while (j < ConsumerProducerExample.nEl)
{
data[j].Add(Math.Sin((Double)t / 10.0D));
j++;
}
t++;
}
j = 0;
while (j < ConsumerProducerExample.nEl)
{
data[j].CompleteAdding();
j++;
}
}
public static void ConsumerMethod()
{
// THE PROBLEM IS WITH THIS METHOD
Double x1, x2;
Int32 j = 0;
while (j < nEl)
{
while (!data[j].IsCompleted)
{
try
{
x1 = data[j].Take();
x2 = data[j].Take();
}
catch (InvalidOperationException)
{
break;
}
if (x1 * x2 < 0.0)
{
peakCounter++;
}
}
j++;
}
}
}
They should be used like this:
ConsumerProducerExample.InitData();
Task task1 = Task.Factory.StartNew(ConsumerProducerExample.ProducerMethod);
Task task2 = Task.Factory.StartNew(ConsumerProducerExample.ConsumerMethod);
Any suggestions?
In short, this is an attempt to count peaks in the solutions of nEl differential equations concurrently (the solutions are represented by sin(t / 10) in this example).
Basically, you need to get rid of your inner loop and add another outer loop that iterates over all of the collections until they are all complete. Something like:
while (!data.All(d => d.IsCompleted))
{
foreach (var d in data)
{
// only take elements from the collection if it has at least 2 available
if (d.Count >= 2)
{
double x1, x2;
if (d.TryTake(out x1) && d.TryTake(out x2))
{
if (x1 * x2 < 0.0)
peakCounter++;
}
else
throw new InvalidOperationException();
}
}
}
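One practical note of my own, not part of the original answer: whichever consumer you end up with, make sure the caller waits for both tasks to finish before reading peakCounter, for example:
ConsumerProducerExample.InitData();
Task producer = Task.Factory.StartNew(ConsumerProducerExample.ProducerMethod);
Task consumer = Task.Factory.StartNew(ConsumerProducerExample.ConsumerMethod);
Task.WaitAll(producer, consumer);   // block until producer and consumer are both done
Console.WriteLine("Peaks counted: " + ConsumerProducerExample.peakCounter);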
Is this parallel merge sort implemented correctly? It looks correct; I took the 40 seconds to write a test and it hasn't failed.
The gist of it is that I need to sort by splitting the array in half every time. I tried to make sure I wasn't going wrong and asked a question for a sanity check (my own sanity). I wanted an in-place sort, but decided that was way too complicated after seeing the answer, so I implemented the below.
Granted, there's no point creating a task/thread to sort a 4 byte array, but it's to learn threading. Is there anything wrong, or anything I can change to make this better? To me it looks perfect, but I'd like some general feedback.
static void Main(string[] args)
{
var start = DateTime.Now;
//for (int z = 0; z < 1000000; z++)
int z = 0;
while(true)
{
var curr = DateTime.Now;
if (curr - start > TimeSpan.FromMinutes(1))
break;
var arr = new byte[] { 5, 3, 1, 7, 8, 5, 3, 2, 6, 7, 9, 3, 2, 4, 2, 1 };
Sort(arr, 0, arr.Length, new byte[arr.Length]);
//Console.Write(BitConverter.ToString(arr));
for (int i = 1; i < arr.Length; ++i)
{
if (arr[i - 1] > arr[i]) // verify ascending order
{
System.Diagnostics.Debug.Assert(false);
throw new Exception("Sort was incorrect " + BitConverter.ToString(arr));
}
}
++z;
}
Console.WriteLine("Tried {0} times with success", z);
}
static void Sort(byte[] arr, int leftPos, int rightPos, byte[] tempArr)
{
var len = rightPos - leftPos;
if (len < 2)
return;
if (len == 2)
{
if (arr[leftPos] > arr[leftPos + 1])
{
var t = arr[leftPos];
arr[leftPos] = arr[leftPos + 1];
arr[leftPos + 1] = t;
}
return;
}
var rStart = leftPos+len/2;
var t1 = new Thread(delegate() { Sort(arr, leftPos, rStart, tempArr); });
var t2 = new Thread(delegate() { Sort(arr, rStart, rightPos, tempArr); });
t1.Start();
t2.Start();
t1.Join();
t2.Join();
var l = leftPos;
var r = rStart;
var z = leftPos;
while (l<rStart && r<rightPos)
{
if (arr[l] < arr[r])
{
tempArr[z] = arr[l];
l++;
}
else
{
tempArr[z] = arr[r];
r++;
}
z++;
}
if (l < rStart)
Array.Copy(arr, l, tempArr, z, rStart - l);
else
Array.Copy(arr, r, tempArr, z, rightPos - r);
Array.Copy(tempArr, leftPos, arr, leftPos, rightPos - leftPos);
}
You could use the Task Parallel Library to give you a better abstraction over threads and cleaner code. The example below does this.
The main difference from your code, other than using the TPL, is that it has a cutoff threshold below which a sequential implementation is used, regardless of the number of threads that have been started. This prevents the creation of threads that do a very small amount of work. There is also a depth cutoff below which new threads are not created; this prevents more threads being created than the hardware can handle, based on the number of available logical cores (Environment.ProcessorCount).
I would recommend implementing both of these approaches in your code to prevent thread explosion for large arrays and the inefficient creation of threads which do very small amounts of work, even for small array sizes. It will also give you better performance.
public static class Sort
{
public static int Threshold = 150;
public static void InsertionSort(int[] array, int from, int to)
{
// ...
}
static void Swap(int[] array, int i, int j)
{
// ...
}
static int Partition(int[] array, int from, int to, int pivot)
{
// ...
}
public static void ParallelQuickSort(int[] array)
{
ParallelQuickSort(array, 0, array.Length,
(int) Math.Log(Environment.ProcessorCount, 2) + 4);
}
static void ParallelQuickSort(int[] array, int from, int to, int depthRemaining)
{
if (to - from <= Threshold)
{
InsertionSort(array, from, to);
}
else
{
int pivot = from + (to - from) / 2; // could be anything, use middle
pivot = Partition(array, from, to, pivot);
if (depthRemaining > 0)
{
Parallel.Invoke(
() => ParallelQuickSort(array, from, pivot, depthRemaining - 1),
() => ParallelQuickSort(array, pivot + 1, to, depthRemaining - 1));
}
else
{
ParallelQuickSort(array, from, pivot, 0);
ParallelQuickSort(array, pivot + 1, to, 0);
}
}
}
}
The full source is available on http://parallelpatterns.codeplex.com/
You can read a discussion of the implementation on http://msdn.microsoft.com/en-us/library/ff963551.aspx
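As a quick usage sketch (my addition; it assumes the elided helper bodies are filled in as in the linked source and that System.Linq is available), you would call it like this:
var rng = new Random();
int[] data = Enumerable.Range(0, 1000000).Select(_ => rng.Next()).ToArray();

var sw = System.Diagnostics.Stopwatch.StartNew();
Sort.ParallelQuickSort(data);
sw.Stop();

Console.WriteLine("Sorted {0} ints in {1} ms", data.Length, sw.ElapsedMilliseconds);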