Using Random.Range to generate a level number without repetition - C#

I tried to use recursion for the problem at hand as follows,
int newlevelgen()
{
    int exampleno = Random.Range (1, 4);
    if (exampleno != lastlevelno)
    {
        lastlevelno = exampleno;
        return exampleno;
    }
    else
    {
        newlevelgen();
    }
    return exampleno;
}
This is my code above. What I want to do is generate a new number without repeating the previous one, but this simply does not work. Help!

The idea is that you get a value between a and b. If that value is greater than or equal to the previous value, you increase it by one and return it; otherwise, you return it as is.
Think it through: it does work.
public int GetUniqueLevelInclusiveOrdinal(int a, int b, int previous)
{
    // given the ordinal numbers from a to b INCLUSIVE,
    // so including a and b,
    // (*** NOTE: the int overload of Random.Range in Unity, like most
    // systems, is max-exclusive, so take care ***)
    // choose one of the numbers from a to b inclusive
    // but do not choose "previous"

    // which top index to use with Random.Range, which is exclusive?
    int top = b + 1;
    // everyday algorithm for excluding one item from a random run
    int topExcludeOne = top - 1;
    int value = Random.Range(a, topExcludeOne);
    if (value >= previous) return value + 1;
    else return value;
}
This is an extremely well-known, basic, pattern in programming...
int result = UnityEngine.Random.Range(0, highest - 1);
if (result >= exclude)
    return result + 1;
else
    return result;
In Unity you would typically wrap this in an extension method:
public static int RandomIndexButNotThis(this int highest, int exclude)
{
    if (highest < 2) Debug.Break();
    int result = UnityEngine.Random.Range(0, highest - 1);
    if (result >= exclude)
        return result + 1;
    else
        return result;
}
To get a random index 0 to 99
Random.Range(0,100)
To get a random index 0 to 99, but excluding 61
100.RandomIndexButNotThis(61)
To get a random index 0 to 9
Random.Range(0,10)
To get a random index 0 to 9, but excluding 8
10.RandomIndexButNotThis(8)
If you're new to Unity, see an introduction to extension methods.

Let me preface this by saying that the following is an answer to the problem with the original method that was posted. It is far from the best way to get the desired result, but that's not the focus of this answer. Yes, there is a chance that the recursion will go on for a few calls: a 33% chance of needing another recursive call. That does not mean a 33% chance of an infinite loop, though, as the likelihood of needing X recursive calls is 0.33^X, so the likelihood of reaching 3 recursive calls is only 0.33^3 ~= 0.036.
Still, a different method is advised. See Joe Blow's answer for example.
There's a problem with your recursion. You aren't using the new value that the recursive call to newlevelgen() would give you when exampleno is the same as lastlevelno, so you always return the (possibly duplicate) exampleno value. Change it to:
int newlevelgen()
{
    int exampleno = Random.Range (1, 4);
    if (exampleno != lastlevelno)
    {
        lastlevelno = exampleno;
        return exampleno;
    }
    else
    {
        return newlevelgen();
    }
}
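Note that this assumes an int field on the same class that remembers the previous value, for example (the -1 initializer is just an assumption meaning "no level generated yet"):
int lastlevelno = -1; // assumed field; -1 means no previous level yet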

I created a utility to easily handle random values without repetitions, flat-distributed random, and weighted lists (universal property drawer included!). You can find it on GitHub and use it freely:
https://github.com/khadzhynov/RandomUtils

Related

Checking whether a number contains numbers 1 to n as factors

I am trying to solve Project Euler Challenge 5: what is the smallest possible number that is evenly divisible by all the numbers from 1 to 20?
My problem is that my isMultiple() method doesn't return true when it should.
My Code
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Challenge_5
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(isMultiple(2520, 10)); //should return True
            Console.WriteLine(smallestMultiple(20)); //should return 232792560
            Console.ReadLine();
        }

        static int factorial(int n)
        {
            int product = 1;
            for (int i = 1; i <= n; i++)
            {
                product *= i;
            }
            return product; //returns the factorial of n, (n * n-1 * n-2... * 1)
        }

        static bool isMultiple(int number, int currentFactor)
        {
            bool returnBool = false;
            if (currentFactor == 1)
            {
                returnBool = true; // if all factors below largestFactor can divide into the number, returns true
            }
            else
            {
                if (number % currentFactor == 0)
                {
                    currentFactor--;
                    isMultiple(number, currentFactor);
                }
            }
            return returnBool;
        }

        static int smallestMultiple(int largestFactor)
        {
            for (int i = largestFactor; i < factorial(largestFactor); i += largestFactor) //goes through all values from largestFactor to largestFactor factorial
            {
                if (isMultiple(i, largestFactor))
                {
                    return i; // if the current number can be evenly divided by all factors, it gets returned
                }
            }
            return factorial(largestFactor); // if no numbers get returned, the factorial is the smallest multiple
        }
    }
}
I know there are much easier ways to solve this, but I want the program to be used to check the lowest multiple of the numbers from 1 to any number, not just 20.
Help would be much appreciated.
EDIT
Thanks to the help, I have fixed my code by changing line 42 from
isMultiple(number, currentFactor);
to
returnBool = isMultiple(number, currentFactor);
I also fixed the problem of not getting an accurate return value for smallestMultiple(20)
by changing some of the variables to long instead of int.
Your problem is that you forgot to use the output of isMultiple in your recursive part:
if (number % currentFactor == 0)
{
    currentFactor--;
    returnBool = isMultiple(number, currentFactor); // you need to save the value here
}
Without assigning returnBool there is no way of knowing if the inner isMultiple returned true or not.
Scott's answer is perfectly valid. Forgive me if I'm wrong, but it sounds like you're a student, so in the interest of education, I thought I'd give you some pointers for cleaning up your code.
When writing recursive functions, it's (in my opinion) usually cleaner to return the recursive call directly if possible, as well as returning base cases directly, rather than storing a value that you return at the end. (It's not always possible, in complicated cases where you have to make a recursive call, modify the return value, and then make another recursive call, but these are uncommon.)
This practice:
- makes base cases very obvious, increasing readability, and forces you to consider all of your base cases
- prevents you from forgetting to assign the return value, as you did in your original code
- prevents potential bugs where you might accidentally do some erroneous additional processing that alters the result before you return it
- reduces the number of nested if statements, increasing readability by reducing preceding whitespace
- makes code flow much more obvious, increasing readability and ease of debugging
- usually results in tail recursion, which is best for performance: nearly as performant as iterative code
Caveat: nowadays most compilers' optimizers will rejigger production code to produce tail-recursive functions, but getting into the habit of writing tail recursion in the first place is good practice, especially as interpreted scripting languages (e.g. JavaScript), where such optimization is less possible by nature, are taking over the world.
The other change I'd make is to remove currentFactor--; and move the subtraction into the recursive call itself. It increases readability, reduces the chance of side effects, and, in the case where you don't use tail-recursion, prevents you from altering a value that you later expect to be unaltered. In general, if you can avoid altering values passed into a function (as opposed to a procedure/void), you should.
Also, in this particular case, making this change removes up to 3 assembly instructions and possibly an additional value on the stack depending on how the optimizer handles it. In long running loops with large depths, this can make a difference*.
static bool isMultiple(int number, int currentFactor)
{
    if (currentFactor == 1)
    {
        // if all factors below largestFactor can divide into the number, return true
        return true;
    }
    if (number % currentFactor != 0)
    {
        return false;
    }
    return isMultiple(number, currentFactor - 1);
}
* A personal anecdote regarding deep recursive calls and performance...
A while back, I was writing a program in C++ to enumerate the best moves for all possible Connect-4 games. The maximum depth of the recursive search function was 42 and each depth had up to 7 recursive calls. Initial versions of the code had an estimated running time of 2 million years, and that was using parallelism. Those 3 additional instructions can make a HUGE difference, both for the sheer number of additional instructions and the number of L1 and L2 cache misses.
This algorithm just came to my mind, so please correct me if I'm wrong.
Composite numbers are products of prime numbers (and prime numbers cannot be produced by multiplying other numbers), so the result must be multiplied by all the primes. Some numbers, like 6, are already covered once the result has been multiplied by 2 and 3; for those that are not, multiplying by one prime factor (which is obviously less than the number itself, so it is already in our prime list) makes the result divisible by that number too. So, for example, when we get to 4, multiplying by 2 (a prime number less than 4) is enough; 8 and 9 work the same way, ...
bool IsPrime(int x)
{
    if (x == 1) return false;
    for (int i = 2; i <= Math.Sqrt(x); i++) // check every candidate divisor up to sqrt(x)
        if (x % i == 0) return false;
    return true;
}
int smallest = 1;
List<int> primes = new List<int>();
for (int i = 1; i <= 20; i++)
    if (IsPrime(i))
    {
        smallest *= i;
        primes.Add(i);
    }
    else if (smallest % i != 0)
        for (int j = 0; j < primes.Count; j++)
            if ((primes[j] * smallest) % i == 0)
            {
                smallest *= primes[j];
                break;
            }
Edit:
As we keep a list of prime numbers anyway, the best way to find out whether a number is prime would be:
bool IsPrime(int x)
{
    if (x == 1) return false;
    for (int i = 0; i < primes.Count; i++)
        if (x % primes[i] == 0) return false;
    return true;
}
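For reference, a self-contained sketch of the same idea (SmallestMultipleUpTo is a hypothetical name; it uses the primes-list check from the edit and long arithmetic to avoid overflow). Printing its result for 20 should give 232792560:
// needs: using System.Collections.Generic;
static long SmallestMultipleUpTo(int n)
{
    long smallest = 1;
    var primes = new List<long>();
    for (long i = 2; i <= n; i++)
    {
        bool isPrime = true;
        foreach (long p in primes)
            if (i % p == 0) { isPrime = false; break; }

        if (isPrime)
        {
            smallest *= i;         // a new prime must be a factor of the result
            primes.Add(i);
        }
        else if (smallest % i != 0)
        {
            // multiply by one more prime factor so the result becomes divisible by i
            foreach (long p in primes)
                if ((p * smallest) % i == 0) { smallest *= p; break; }
        }
    }
    return smallest;
}
// Console.WriteLine(SmallestMultipleUpTo(20)); // 232792560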

Generating a variable number of random numbers to a list, then comparing those numbers to a target?

How would I go about generating a serializable variable of random numbers to a list, then comparing those generated numbers to a target number?
What I want to do is make a program that takes in a number, such as 42, and generates that number of random numbers to a list while still keeping the original variable, in this case 42, to be referenced later. Super ham-handed pseudo-code(?) example:
public class Generate {
    [SerializeField]
    int generate = 42;
    List<int> results = new List<int>;

    public void Example() {
        int useGenerate = generate;
        //Incoming pseudo-code (rather, code that I don't know how to do, exactly)
        while (useGenerate => 1) {
            results.add (random.range(0,100)); //Does this make a number between 0 and 99?
            int useGenerate = useGenerate - 1;
        }
    }
}
I think this will do something to that effect, once I figure out how to actually code it properly (Still learning).
From there, I'd like to compare the list of results to a target number, to see how many of them pass a certain threshold, in this case greater than or equal to 50. I assume this would require a "foreach" thingamabobber, but I'm not sure how to go about doing that, really. With each "success", I'd like to increment a variable to be returned at a later point. I guess something like this:
int success = 50;
int target = 0;
foreach int in List<results> {
    if (int => success) {
        int target = target + 1;
    }
}
If I have the right idea, please just teach me how to properly code it. If you have any suggestions on how to improve it (like the whole ++ and -- thing I see here and there but don't know how to use), please teach me that, too. I looked around the web for using foreach with lists and it seemed really complicated and people were seemingly pulling some new bit of information from the Aether to include in the operation. Thanks for reading, and thanks in advance for any advice!
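For what it's worth, a minimal sketch of how those two loops might fit together in Unity C# (the field names mirror the pseudo-code above; the threshold of 50 and returning the count are assumptions, not a definitive design):
using System.Collections.Generic;
using UnityEngine;

public class Generate : MonoBehaviour
{
    [SerializeField]
    int generate = 42;          // how many numbers to roll
    [SerializeField]
    int success = 50;           // threshold a roll must reach to count as a success

    List<int> results = new List<int>();

    public int Example()
    {
        results.Clear();
        // Random.Range(0, 100) with ints returns 0..99 (the max is exclusive)
        for (int i = 0; i < generate; i++)
            results.Add(Random.Range(0, 100));

        int target = 0;
        foreach (int roll in results)
            if (roll >= success)
                target++;       // same as target = target + 1

        return target;          // number of rolls at or above the threshold
    }
}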

Subset Sum algorithm efficiency

We have a number of payments (Transaction) that come into our business each day. Each Transaction has an ID and an Amount. We have the requirement to match a number of these transactions to a specific amount. Example:
Transaction Amount
1 100
2 200
3 300
4 400
5 500
If we wanted to find the transactions that add up to 600 you would have a number of sets (1,2,3),(2,4),(1,5).
I found an algorithm that I have adapted; it works as defined below. For 30 transactions it takes 15 ms, but the number of transactions averages around 740 and has a maximum close to 6000. Is there a more efficient way to perform this search?
sum_up(TransactionList, remittanceValue, ref MatchedLists);

private static void sum_up(List<Transaction> transactions, decimal target, ref List<List<Transaction>> matchedLists)
{
    sum_up_recursive(transactions, target, new List<Transaction>(), ref matchedLists);
}

private static void sum_up_recursive(List<Transaction> transactions, decimal target, List<Transaction> partial, ref List<List<Transaction>> matchedLists)
{
    decimal s = 0;
    foreach (Transaction x in partial) s += x.Amount;

    if (s == target)
    {
        matchedLists.Add(partial);
    }
    if (s > target)
        return;

    for (int i = 0; i < transactions.Count; i++)
    {
        List<Transaction> remaining = new List<Transaction>();
        Transaction n = new Transaction(0, transactions[i].ID, transactions[i].Amount);
        for (int j = i + 1; j < transactions.Count; j++) remaining.Add(transactions[j]);

        List<Transaction> partial_rec = new List<Transaction>(partial);
        partial_rec.Add(new Transaction(n.MatchNumber, n.ID, n.Amount));
        sum_up_recursive(remaining, target, partial_rec, ref matchedLists);
    }
}
With Transaction defined as:
class Transaction
{
    public int ID;
    public decimal Amount;
    public int MatchNumber;

    public Transaction(int matchNumber, int id, decimal amount)
    {
        ID = id;
        Amount = amount;
        MatchNumber = matchNumber;
    }
}
As already mentioned, your problem can be solved by a pseudo-polynomial algorithm in O(n*G), with n the number of items and G your targeted sum.
The first part of the question: is it possible to achieve the targeted sum G at all? The following pseudo/Python code solves it (I have no C# on my machine):
def subsum(values, target):
    reached=[False]*(target+1) # initialize as no sums reached at all
    reached[0]=True # with 0 elements we can only achieve the sum=0
    for val in values:
        for s in reversed(xrange(target+1)): # for target, target-1, ..., 0
            if reached[s] and s+val<=target: # if subsum s can be reached, we can add the current value to it and build a new sum
                reached[s+val]=True
    return reached[target]
What is the idea? Let's consider values [1,2,3,6] and a target sum of 7:
We start with an empty set; the only possible sum is obviously 0.
Now we look at the first element, 1, and have two options: take it or not. That leaves us with possible sums {0,1}.
Now looking at the next element, 2: this leads to the possible sets {0,1} (not taking) + {2,3} (taking).
Until now there is not much difference from your approach, but now for element 3 we have possible sets a. for not taking {0,1,2,3} and b. for taking {3,4,5,6}, resulting in {0,1,2,3,4,5,6} as possible sums. The difference from your approach is that there are two ways to get to 3, and your recursion would be started twice from there (which is not needed). Calculating basically the same stuff over and over again is the problem with your approach and why the proposed algorithm is better.
As the last step we consider 6 and get {0,1,2,3,4,5,6,7} as possible sums.
But you also need the subset which leads to the targeted sum; for this we just remember which element was taken to achieve the current sub sum. This version returns a subset which results in the target sum, or None otherwise:
def subsum(values, target):
    reached=[False]*(target+1)
    val_ids=[-1]*(target+1)
    reached[0]=True # with 0 elements we can only achieve the sum=0
    for (val_id,val) in enumerate(values):
        for s in reversed(xrange(target+1)): # for target, target-1, ..., 0
            # only record the first value that reaches a sum, so the
            # reconstruction below never reuses the same element twice
            if reached[s] and s+val<=target and not reached[s+val]:
                reached[s+val]=True
                val_ids[s+val]=val_id
    # reconstruct the subset for target:
    if not reached[target]:
        return None # means not possible
    else:
        result=[]
        current=target
        while current!=0: # search backwards, jumping from predecessor to predecessor
            val_id=val_ids[current]
            result.append(val_id)
            current-=values[val_id]
        return result
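Since the rest of the thread is C#, a rough translation sketch of the second version might look like this (assuming non-negative integer amounts; SubsetForTarget is a hypothetical name; it returns the indices of one subset reaching the target, or null if none exists):
// needs: using System.Collections.Generic;
static List<int> SubsetForTarget(int[] values, int target)
{
    bool[] reached = new bool[target + 1];
    int[] valIds = new int[target + 1];
    reached[0] = true;                        // with 0 elements we can only reach sum 0

    for (int valId = 0; valId < values.Length; valId++)
    {
        int val = values[valId];
        for (int s = target; s >= 0; s--)     // backwards, so each value is used at most once
        {
            if (reached[s] && s + val <= target && !reached[s + val])
            {
                reached[s + val] = true;
                valIds[s + val] = valId;      // remember which value first reached this sum
            }
        }
    }

    if (!reached[target])
        return null;                          // the target sum is not achievable

    var result = new List<int>();             // walk back from target to 0 to recover the indices
    for (int current = target; current != 0; current -= values[valIds[current]])
        result.Add(valIds[current]);
    return result;
}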
As another approach you could use memoization to speed up your current solution, remembering for the state (subsum, number_of_elements_not_considered) whether it is possible to achieve the target sum. But I would say the standard dynamic programming is a less error-prone possibility here.
Yes.
I can't provide full code at the moment, but instead of iterating each list of transactions twice until finding matches (O(n²)), try this concept:
Set up a hashtable with the existing transaction amounts as entries, as well as the sum of each pair of transactions, assuming each value is made up of at most two transactions (weekend credit card processing).
For each total, look it up in the hashtable; the sets of transactions in that slot are the list of matching transactions.
Instead of O(n²) per search, you can get it down to roughly 4·O(n), which would make a noticeable difference in speed.
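A purely illustrative sketch of that idea, not full production code (it assumes, as above, that a match never needs more than two transactions, and uses the Transaction class and a List<Transaction> transactions from the question):
// needs: using System.Collections.Generic;
// key: an amount reachable with one or two transactions; value: the sets producing it
var lookup = new Dictionary<decimal, List<List<Transaction>>>();
for (int i = 0; i < transactions.Count; i++)
{
    decimal single = transactions[i].Amount;
    if (!lookup.ContainsKey(single)) lookup[single] = new List<List<Transaction>>();
    lookup[single].Add(new List<Transaction> { transactions[i] });

    for (int j = i + 1; j < transactions.Count; j++)
    {
        decimal pair = transactions[i].Amount + transactions[j].Amount;
        if (!lookup.ContainsKey(pair)) lookup[pair] = new List<List<Transaction>>();
        lookup[pair].Add(new List<Transaction> { transactions[i], transactions[j] });
    }
}

// each remittance total is then a single dictionary lookup:
List<List<Transaction>> matches;
bool found = lookup.TryGetValue(600m, out matches); // e.g. the 600 from the question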
Good luck!
Dynamic programming can solve this problem efficiently:
Assume you have n transactions and the maximum amount you need to match is m.
We can solve it with a complexity of just O(n*m).
Learn more under the Knapsack problem.
For this problem we can define dp[i][sum] as the number of subsets of the first i transactions that add up to sum.
The recurrence:
for i = 1 to n:
    dp[i][sum] = dp[i - 1][sum] + dp[i - 1][sum - amount_i]
dp[n][sum] is the number you need, and you need to add some tricks to recover what all the subsets are.
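A hedged C# sketch of that recurrence, rolling the table into one dimension and assuming whole-number, non-negative amounts (CountSubsets is a hypothetical name):
static long CountSubsets(int[] amounts, int target)
{
    // dp[sum] = number of subsets of the amounts processed so far that add up to sum
    long[] dp = new long[target + 1];
    dp[0] = 1; // the empty subset sums to 0
    foreach (int amount in amounts)
        for (int sum = target; sum >= amount; sum--)
            dp[sum] += dp[sum - amount]; // skip the item (dp[sum]) + take the item (dp[sum - amount])
    return dp[target];
}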
You have a couple of practical assumptions here that would make brute force with smartish branch pruning feasible:
- items are unique, hence you wouldn't be getting a combinatorial blow-up of valid subsets (e.g. (1,1,1,1,1,1,1,1,1,1,1,1,1) adding up to 3)
- if the number of resulting feasible sets is still huge, you would run out of memory collecting them before running into total runtime issues
- ordering the input ascending allows an easy early-stop check: if your remaining sum is smaller than the current element, then none of the yet-unexamined items could possibly be in a result (as the current and subsequent items only get bigger)
- keeping running sums speeds up each step, as you wouldn't be recalculating them over and over again
Here's a bit of code:
public static List<T[]> SubsetSums<T>(T[] items, int target, Func<T, int> amountGetter)
{
    Stack<T> unusedItems = new Stack<T>(items.OrderByDescending(amountGetter));
    Stack<T> usedItems = new Stack<T>();
    List<T[]> results = new List<T[]>();
    SubsetSumsRec(unusedItems, usedItems, target, results, amountGetter);
    return results;
}

public static void SubsetSumsRec<T>(Stack<T> unusedItems, Stack<T> usedItems, int targetSum, List<T[]> results, Func<T, int> amountGetter)
{
    if (targetSum == 0)
        results.Add(usedItems.ToArray());
    if (targetSum < 0 || unusedItems.Count == 0)
        return;

    var item = unusedItems.Pop();
    int currentAmount = amountGetter(item);
    if (targetSum >= currentAmount)
    {
        // case 1: use current element
        usedItems.Push(item);
        SubsetSumsRec(unusedItems, usedItems, targetSum - currentAmount, results, amountGetter);
        usedItems.Pop();
        // case 2: skip current element
        SubsetSumsRec(unusedItems, usedItems, targetSum, results, amountGetter);
    }
    unusedItems.Push(item);
}
I've run it against 100k input that yields around 1k results in under 25 millis, so it should be able to handle your 740 case with ease.
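For example, with the Transaction class from the question (the cast to int assumes whole-number amounts, since the amount getter here must return an int):
var transactions = new List<Transaction>
{
    new Transaction(0, 1, 100m), new Transaction(0, 2, 200m), new Transaction(0, 3, 300m),
    new Transaction(0, 4, 400m), new Transaction(0, 5, 500m)
};
// should find (100,200,300), (200,400) and (100,500)
List<Transaction[]> sets = SubsetSums(transactions.ToArray(), 600, t => (int)t.Amount);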

How to compare large string integer values

Currently I am working on a program that processes extremely large integer numbers.
To prevent hitting int.MaxValue, I made a script that processes strings as numbers and splits them up into a List<int> as follows.
Entry 0 is the highest-order (most significant) group of the currently known value:
list entry 0: 123 (one hundred twenty-three million)
list entry 1: 321 (three hundred twenty-one thousand)
list entry 2: 777 (seven hundred seventy-seven)
Now my question is: how would one check whether the incoming string value is subtractable from these values?
The start for the subtraction I currently have is as follows, but I am getting stuck on the subtracting part.
public bool Subtract(string value)
{
    string cleanedNumeric = NumericAndSpaces(value);
    List<string> input = new List<string>(cleanedNumeric.Split(' '));

    // In case 1) the amount is bigger 2) biggest value exceeded by a 10 fold
    // 3) biggest value exceeds the value
    if (input.Count > values.Count ||
        input[input.Count - 1].Length > values[0].ToString().Length ||
        FastParseInt(input[input.Count - 1]) > values[0])
        return false;

    // Flip the array for ease of comparison
    input.Reverse();
    return true;
}
EDIT
The current target for the highest achievable number in this program is a googolplex, and I am limited to .NET 3.5 (Mono).
You should do some testing on this because I haven't run extensive tests, but it has worked on the cases I've put it through. Also, it might be worth ensuring that each character in the string is truly a valid integer, as this procedure would bomb given a non-integer character. Finally, it expects positive numbers for both subtrahend and minuend.
static void Main(string[] args)
{
    // In subtraction, a subtrahend is subtracted from a minuend to find a difference.
    string minuend = "900000";
    string subtrahend = "900001";
    var isSubtractable = IsSubtractable(subtrahend, minuend);
}

public static bool IsSubtractable(string subtrahend, string minuend)
{
    minuend = minuend.Trim();
    subtrahend = subtrahend.Trim();

    // maybe loop through characters and ensure all are valid integers

    // check if the original number is longer - clearly subtractable
    if (minuend.Length > subtrahend.Length) return true;

    // check if the original number is shorter - not subtractable
    if (minuend.Length < subtrahend.Length) return false;

    // at this point we know the strings are the same length, so we'll
    // loop through the characters, one by one, from the start, to determine
    // if the minuend has a higher value character in a column of the number.
    int numberIndex = 0;
    while (numberIndex < minuend.Length)
    {
        Int16 minuendCharValue = Convert.ToInt16(minuend[numberIndex]);
        Int16 subtrahendCharValue = Convert.ToInt16(subtrahend[numberIndex]);
        if (minuendCharValue > subtrahendCharValue) return true;
        if (minuendCharValue < subtrahendCharValue) return false;
        numberIndex++;
    }

    // numbers are the same
    return true;
}
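If you then need the actual difference (the part the question got stuck on), a minimal borrow-based sketch might look like the following. It assumes both inputs are plain non-negative digit strings (no spaces or grouping) and that IsSubtractable has already confirmed the minuend is at least as large as the subtrahend:
public static string Subtract(string minuend, string subtrahend)
{
    // pad the shorter number with leading zeros so the columns line up
    subtrahend = subtrahend.PadLeft(minuend.Length, '0');
    char[] result = new char[minuend.Length];
    int borrow = 0;
    for (int i = minuend.Length - 1; i >= 0; i--)
    {
        int digit = (minuend[i] - '0') - (subtrahend[i] - '0') - borrow;
        if (digit < 0) { digit += 10; borrow = 1; } else borrow = 0;
        result[i] = (char)('0' + digit);
    }
    // strip leading zeros, but keep a single zero if everything cancelled out
    string s = new string(result).TrimStart('0');
    return s.Length == 0 ? "0" : s;
}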
BigInteger (https://msdn.microsoft.com/en-us/library/system.numerics.biginteger.aspx) is of arbitrary size.
Run this code if you don't believe me
var foo = new BigInteger(2);
while (true)
{
    foo = foo * foo;
}
Things get crazy. My debugger (VS2013) becomes unable to represent the number before it's done. I ran it for a short time and got a number with 1.2 million digits in base 10 from ToString. It is big enough. There is a 2 GB limit on objects, which can be overridden in .NET 4.5 with the setting gcAllowVeryLargeObjects.
Now what to do if you are using .NET 3.5? You basically need to reimplement BigInteger (obviously only taking what you need; there is a lot in there).
public class MyBigInteger
{
    uint[] _bits; // you need somewhere to store the value to an arbitrary length.
    ....
You also need to perform maths on these arrays. Here is the Equals method from BigInteger:
public bool Equals(BigInteger other)
{
    AssertValid();
    other.AssertValid();

    if (_sign != other._sign)
        return false;
    if (_bits == other._bits)
        // _sign == other._sign && _bits == null && other._bits == null
        return true;

    if (_bits == null || other._bits == null)
        return false;

    int cu = Length(_bits);
    if (cu != Length(other._bits))
        return false;

    int cuDiff = GetDiffLength(_bits, other._bits, cu);
    return cuDiff == 0;
}
It basically does cheap length and sign comparisons of the backing arrays, then, if that doesn't produce a difference, hands off to GetDiffLength.
internal static int GetDiffLength(uint[] rgu1, uint[] rgu2, int cu)
{
    for (int iv = cu; --iv >= 0; )
    {
        if (rgu1[iv] != rgu2[iv])
            return iv + 1;
    }
    return 0;
}
Which does the expensive check of looping through the arrays looking for a difference.
All your math will have to follow this pattern and can largely be lifted from the .NET source code.
Googolplex and 2 GB:
Here the 2 GB limit becomes a problem, because you would need an object size of 3.867×10^90 gigabytes. This is the point where you give up, or get clever and store objects as powers at the cost of not being able to represent a lot of them. *2
If you moderate your expectations, it doesn't actually change the maths of BigInteger to split _bits into multiple jagged arrays *1. You change the cheap checks a bit: rather than checking the size of the array, you check the number of subarrays and then the size of the last one. Then the loop needs to be a bit (but not much) more complex in that it does element-wise array comparison for each sub-array. There are other changes as well, but it's by no means impossible, and it gets you out of the 2 GB limit.
*1 Note: use jagged arrays [][], not multidimensional arrays [,], which are still subject to the same limit.
*2 I.e. give up on precision and store a mantissa and an exponent. If you look at how floating point numbers are implemented, they can't represent all numbers between their max and min (as the number of real numbers in a range is 'bigger' than infinite). They make a complex trade-off between precision and range. If you want to do this, looking at float implementations will be a lot more useful than talking about integer representations like BigInteger.

Appropriate data structure for fast searching "between" pairs of longs

I currently (conceptually) have:
IEnumerable<Tuple<long, long, Guid>>
given a long, I need to find the "corresponding" GUID.
the pairs of longs should never overlap, although there may be gaps between pairs, for example:
1, 10, 366586BD-3980-4BD6-AFEB-45C19E8FC989
11, 15, 920EA34B-246B-41B0-92AF-D03E0AAA2692
20, 30, 07F9ED50-4FC7-431F-A9E6-783B87B78D0C
For every input long, there should be exactly 0 or 1 matching GUIDs.
so an input of 7, should return 366586BD-3980-4BD6-AFEB-45C19E8FC989
an input of 16 should return null
Update: I have about 90K pairs
How should I store this in-memory for fast searching?
Thanks
So long as they're stored in order, you can just do a binary search based on "start of range" vs candidate. Once you've found the entry with the highest "start of range" which is smaller than or equal to your target number, then either you've found an entry with the right GUID, or you've proved that you've hit a gap (because the entry with the highest smaller start of range has a lower end of range than your target).
You could potentially simplify the logic very slightly by making it a Dictionary<long, Guid?> and just record the start points, adding an entry with a null value for each gap. Then you just need to find the entry with the highest key which is less than or equal to your target number, and return the value.
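A minimal sketch of that simplified form, using a sorted array of start points instead of a Dictionary so that "highest key less than or equal to the target" becomes a binary search (the array names here are assumptions):
// starts[i] is the start of a range or of a gap; guids[i] is its GUID, or null for a gap.
// Build both once, in ascending order, from the (from, to, guid) tuples: add (from, guid)
// for each range, and (to + 1, null) wherever the next range doesn't begin right after it.
static Guid? Lookup(long[] starts, Guid?[] guids, long value)
{
    int index = Array.BinarySearch(starts, value);
    if (index < 0)
        index = ~index - 1;              // index of the highest start <= value
    return index < 0 ? (Guid?)null : guids[index];
}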
Try this (I am sorry, not a solution for your IEnumerable):
public static Guid? Search(List<Tuple<long, long, Guid>> list, long input)
{
    // the list must be sorted by Item1 (the start of each range)
    var item = new Tuple<long, long, Guid>(input, 0, Guid.Empty);
    int index = list.BinarySearch(item, Comparer.Instance);
    if (index >= 0) // Exact match on a range start found.
        return list[index].Item3;
    index = ~index;
    if (index == 0)
        return null;
    item = list[index - 1];
    if ((input >= item.Item1) && (input <= item.Item2))
        return item.Item3;
    return null;
}

public class Comparer : IComparer<Tuple<long, long, Guid>>
{
    public static readonly Comparer Instance = new Comparer();

    private Comparer()
    {
    }

    public int Compare(Tuple<long, long, Guid> x, Tuple<long, long, Guid> y)
    {
        return x.Item1.CompareTo(y.Item1);
    }
}
A B-tree is actually pretty good at this. Specifically, a B+-tree where each branch pointer has the start of your range as the key. The leaf data can contain the upper bound, so you deal with gaps correctly. I'm not sure if it's the best you could find anywhere, and you'd need to engineer it yourself, but it should certainly have very good performance.
