I am trying to build a MiniMax algorithm for my chess game. To run the algorithm, the game needs to destroy and respawn game objects multiple times between updates. The problem is that void Start() only runs the first time I spawn the object. When I destroy the object and spawn it again during the same update, void Start() does not take effect.
The problem is with my chess pieces. I have a parent class for all pieces called Pices.cs and a bunch of child classes for the specific pieces. The Pices.cs class has an identification char that the child piece classes update in void Start(). The default is 'F'; in Start() the King class updates it to 'K', the Queen class updates it to 'Q', and so on...
An example of the problem in action during one update of the game state:
If I have two pawns that are able to take the enemy queen: I would like to take the enemy queen with my first pawn, reverse the move and Instantiate the queen back, then destroy the queen with my other pawn, reverse the move and Instantiate the queen back, and then do something totally different. But when the queen gets instantiated the second time during this update, the identification char is 'F' and not 'Q' as I wish.
The Relevant Code and notes:
All movement in-game is handled by Board_Manager.cs
Droning.cs = Queen piece in my language
ReversMove(){...} resets the game state to the start and does all moves again except for the last one
Endgame(){...} destroys all piece game objects and Instantiates them at their start positions
The current move notation is as follows:
1. piece letter, 2. from x, 3. from y, 4. to x, 5. to y, 6. dead piece letter (if no piece died, then '-')
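As a throwaway illustration of that notation (a Python sketch, not code from the project; the field names and the example move string are mine), a move can be split into its fields like this:

```python
def parse_move(notation):
    """Split a 6-character move such as 'Q1325-' into its fields:
    piece letter, from (x, y), to (x, y), and the letter of the
    piece that died ('-' if none)."""
    piece, fx, fy, tx, ty, dead = notation
    return {
        'piece': piece,
        'from': (int(fx), int(fy)),
        'to': (int(tx), int(ty)),
        'dead': dead,
    }

# Hypothetical move: a queen from (1, 3) to (2, 5), nothing captured.
move = parse_move('Q1325-')
```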
Picture of relevant Pices.cs code 1
Picture of relevant Queen.cs code
Picture of ReversMove() in Board_Manager.cs
Picture of the MiniMax base algorithm
Print of the resulting move notation
I know this is a bit convoluted, but I hope some of you understand the problem and can help me out here. This is part of my examination, so I would really like to get this fixed (:
I want to try and get an AI to find a sequence of moves that wins in Marble Solitaire. I've completed the system which moves a random cylinder, and also the system which undoes the previous move. All I need to do now is work out how and when the AI undoes moves. I had no idea what to do, so I sort of randomly tried things, and I'd had enough, so I decided to ask here.
Here's the code I think you need to help me solve it - feel free to ask me for more snippets, I don't just want to overload you with meaningless code:
private int index; this increases when a move has been tried by the AI and said move didn't work. I use it to stop the program from looping over the same move without checking others.
Below is the current code I use to determine whether the AI should undo a move or not, but it doesn't seem to work as I want:
if (possibleMove.Count < index)
{
    index = 0;
    undoMove();
}
else if (possibleMove.Count == 0)
{
    undoMove();
}
// possibleMove is a list of all possible moves the AI has found
The code snippet above activates at the very end of findMove()
The general format of the code goes like this:
private void findEmpty()
{
    findMove();
}

private void findMove()
{
    makeMove();
}

private void makeMove()
{
}

private void undoMove()
{
}
Rules of Marble Solitaire
The player makes successive capturing moves, removing a single piece each turn, until it is impossible to make any more capturing moves.
Each turn, the player captures a piece by jumping over that piece (in any direction, but not diagonally) from two spaces away into a vacant point, making sure that there is a piece to jump over.
Therefore, the first turn can be made only by jumping a piece into the middle hole from one of 4 possible points.
Image of marble solitaire:
You can do this with the BFS method of graph searching. First, find a way to represent the board: it can be an array of booleans representing marbles and gaps, or anything else you can think of, so long as it's easy for you to understand and manipulate.
Then you would have a queue that initially holds only the starting state of the board. Set your current state to the first state in the queue and remove it from the queue. You can now use findMoves(currentState) to return a list of states reachable from the current state. Add each of those states to the end of the queue, then once again take the first state from the queue, remove it, and expand it with findMoves. Repeat until your current state matches some goal state. In your particular problem you can run this until findMoves(currentState) returns an empty list, since that would mean there are no more moves you can make.
That would get you to a solution; however, it wouldn't keep track of the path for you. You can keep track of the path itself in any number of ways. For example, instead of adding just the next state to the queue, you can add a list of all the states so far along that path and use the last value when calculating the next states you can go to. But that's just an example.
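A minimal sketch of that idea in Python (the question's project is C#, so treat this as executable pseudocode; the tiny 1-D board, the findMoves equivalent, and the goal test are simplified stand-ins for the real 2-D solitaire board):

```python
from collections import deque

def find_moves(state):
    """Return all states reachable by one capturing jump.

    state is a tuple of booleans: True = marble, False = gap.
    A jump moves a marble two squares into a gap and removes
    the marble that was jumped over.
    """
    results = []
    n = len(state)
    for i in range(n):
        for step in (-2, 2):
            j = i + step
            mid = i + step // 2
            if 0 <= j < n and state[i] and state[mid] and not state[j]:
                nxt = list(state)
                nxt[i] = False      # marble leaves its square
                nxt[mid] = False    # jumped-over marble is captured
                nxt[j] = True       # marble lands in the gap
                results.append(tuple(nxt))
    return results

def bfs_solve(start, is_goal):
    """BFS over board states; each queue entry is the whole path so far,
    so the returned value is the sequence of boards leading to the goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        current = path[-1]
        if is_goal(current):
            return path
        for nxt in find_moves(current):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no reachable state satisfies the goal

# Tiny demo: two marbles side by side, gaps at both ends;
# the goal is to be left with a single marble.
start = (False, True, True, False)
solution = bfs_solve(start, lambda s: sum(s) == 1)
```

Tracking the whole path in each queue entry is the simplest bookkeeping; a more memory-efficient variant stores a parent pointer per state and reconstructs the path at the end.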
I am new to Unity and currently trying to make a LAN multiplayer RPG game.
FYI, I have followed the official Unity LAN multiplayer guide and everything went well.
https://unity3d.com/learn/tutorials/topics/multiplayer-networking/introduction-simple-multiplayer-example
So far I have got the players to load in and they are able to move. I wrote the following code (inside the void Update routine) so that when the player is moving, it randomizes a number between 1 and 50 every second, and if the number is 25, we have randomly "encountered an enemy". When any player encounters an enemy, I made it so everyone on the network goes to the "battle scene".
if (Input.GetKey("up") || Input.GetKey("down") || Input.GetKey("left") || Input.GetKey("right"))
{
    if (Time.time > NextActionTime)
    {
        NextActionTime = Time.time + Period;
        EnemyEncounter = Random.Range(1, 50);
        if (EnemyEncounter == 25)
        {
            NetworkManager.singleton.ServerChangeScene("Scene2");
        }
    }
}
The code above works fine, but I am not sure how to load only certain players into the battle scene instead of loading everyone.
For example:
Players enter a name before Hosting/Finding LAN game
Player 1 = Joe
Player 2 = Bob
Player 3 = Billy
Player 4 = Jim
There is a preset label/text that contains the text "Joe,Billy". Now when ANY player finds an encounter, I want ONLY the players named "Joe" and "Billy" to load into the next scene while the others do not.
Is this possible? Any kind of help will be greatly appreciated.
Thanks All
I was trying different ideas and I came up with 2 different approaches:
1 - As I already said in the comments, try to nest lobby managers
2 - "Fake" the scene split across lobbies
1. Nested Lobbies
Concept:
First Scene, MainLobby: 4 players enter and go to the second scene
Second Scene, MainGame + SecondLobby: 4 players come in from the first scene, but now 2 of them want to go to the third scene, so they use the SecondLobby to matchmake again.
Third Scene, SecondGame.
I think this is the best approach if we are talking about performance, but it's elaborate because:
- The current Unity NetworkLobby uses the singleton pattern, so you need to code the singleton parts again.
- LobbyManagers are built with DontDestroyOnLoad, so you would be carrying another lobby into your next scene.
- I don't really know if you can go back from the third scene to the second one :S
2. Fake Scenes
Well, welcome to "dirty tricks". The second concept is:
First Scene, MainLobby: 4 players enter and go to the second scene
Second Scene, MainGame: 4 players come in from the first scene, but now 2 of them want to go to the third scene.
Third Scene, SecondGame.
But instead of "matchmaking" again, what we do is add the scene as an additive scene at different coordinates and move the 2 players that want to battle into the third scene. The players will think that they are in a different scene, but they aren't; they are just being moved. Things to keep in mind:
- Maybe you don't really need to use an additive scene; just build it in the same scene at different coordinates. (https://docs.unity3d.com/ScriptReference/SceneManagement.LoadSceneMode.Additive.html)
- Remember that there will still be 4 networked players in the same scene, so you may want to "disable" some network messages so they only affect certain players in certain "scenes". (https://docs.unity3d.com/ScriptReference/Networking.NetworkClient.Send.html)
But if you come up with some other approach, let me know; it raises really interesting game design questions! :D
I'm building a simple console game. There is the player, who moves on key press, and there are enemies which move automatically; each type of enemy moves once every X milliseconds.
As I understand it, I should use a timer, but I don't really know how to do that in the game loop (which isn't built yet, because I don't know what to do about the timer; I think it should be a while loop). The game ends when an enemy 'touches' the player (same x and y).
One important thing: I can't use threads in this exercise, but if you have suggestions other than using a Timer, you are welcome.
Thank you.
You normally don't use conventional timers in games. Games have a very different mechanism for handling their logic and the time that has passed; they normally don't work with timers, or not in the way you would expect.
Games normally have something called a game loop. Generally speaking, it's three main functions that are called one after the other in a loop:
while (running)
{
    HandleUserInput();
    ChangeWorld();
    Render();
}
You get user input, you change the game world accordingly, and you draw it to the screen. Now, the faster your computer is, the faster this loop runs. That's good for the graphics (think FPS), but bad for the game. Imagine Tetris where the blocks move every frame: now I would not want to buy a faster computer, because the game would get more difficult that way.
So to keep the game speed constant independent of the power of the computer, the loop considers the time passed:
while (running)
{
    var timePassedSinceLastLoop = CalculateTimeDelta();
    HandleUserInput();
    ChangeWorld(timePassedSinceLastLoop);
    Render();
}
Now imagine a cooldown for something in the game. The player pressed "a", some cool action happened, and although he may press "a" again, nothing will happen for the next 5 seconds. But the game still runs and does all the other things that may happen ingame. This is not a conventional timer. It's a variable, let's call it ActionCooldown, and once the player triggers the action, it's set to 5 seconds. Every time the world changes, the time passed is subtracted from that number until it reaches zero. All the time, the game is running, handling input, and rendering. But only once ActionCooldown hits zero will another press of "a" trigger that action again.
The ChangeWorld method includes all automatic changes to the world: enemies, missiles, whatever moves without player interaction. And it moves based on time. If the enemy moves one square per second, you need to make its coordinate a float and add a fraction of a square every time the loop runs.
Let's say you have 30 FPS, so your loop runs 30 times a second. Your enemy then needs to move 1/30 of a square each loop. In the end it will have moved one full square per second.
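A toy sketch of this loop in Python (the question is about C#, but the structure carries over; the names follow the explanation above, and the fixed dt stands in for a measured time delta):

```python
class Game:
    """Minimal world state: one enemy that moves by time, plus a cooldown."""

    def __init__(self):
        self.enemy_x = 0.0          # float, so we can move fractions of a square
        self.enemy_speed = 1.0      # squares per second
        self.action_cooldown = 0.0  # seconds until the action may fire again

    def change_world(self, dt):
        # Movement is scaled by elapsed time, not by loop iterations,
        # so game speed is independent of how fast the loop runs.
        self.enemy_x += self.enemy_speed * dt
        # Count the cooldown down toward zero; never below it.
        self.action_cooldown = max(0.0, self.action_cooldown - dt)

    def try_action(self):
        # The action only fires once the cooldown has expired.
        if self.action_cooldown == 0.0:
            self.action_cooldown = 5.0
            return True
        return False

def run(game, frames, dt):
    """Drive the loop a fixed number of frames; a real loop would
    measure dt each iteration instead of taking it as a constant."""
    for _ in range(frames):
        # HandleUserInput() would go here
        game.change_world(dt)
        # Render() would go here
```

Running 30 frames with dt = 1/30 moves the enemy one full square, matching the 30 FPS example above; try_action() then fires once and refuses until 5 more in-game seconds have elapsed.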
The general premise behind a timer is to repeat some code every n milliseconds.
To create the timer use this:
System.Timers.Timer aTimer = new System.Timers.Timer();
aTimer.Elapsed += new ElapsedEventHandler(OnTimedEvent);
// Set the Interval to 1 millisecond. Note: time is set in milliseconds.
aTimer.Interval = 1;
aTimer.Enabled = true;
Then you implement this method:
private static void OnTimedEvent(object source, ElapsedEventArgs e)
{
    // Whatever you need repeated
}
The full example can be found here:
http://msdn.microsoft.com/en-us/library/system.timers.timer(v=vs.71).aspx
I made a tic-tac-toe A.I. Given each board state, my A.I. will return exactly one place to move.
I also made a function that loops through all possible plays made against the A.I.
So it's a recursive function that lets the A.I. make a move for a given board, then lets the other player make all possible moves, calling itself with a new board for each possible move.
I do this for when the A.I. goes first and for when the other player goes first, and add the results together. I end up with 418 possible wins, 115 possible ties, and 0 possible losses.
But now my problem is: how do I maximize the number of wins? I need to compare these statistics to something, but I can't figure out what to compare them to.
My feeling is that the stats you're quoting are already pretty good. Two expert Tic-Tac-Toe players will always end in a tie, and there is no way to force a win if your opponent knows how to play the game.
Update
There's probably a more elegant way to prove the correctness of your A.I., but the most straightforward approach would be the brute-force one. Just enumerate all possible board positions as a game tree, and prune the branches that lead directly to a loss. Then for each branch in the tree you can work out the probability of a win resulting from following that branch. Then you just need to test your A.I. on each board position and make sure it's picking the branch with the highest probability of a win.
You should start by observing that move 9 is always forced: there is only one empty square on the board. Move 8 can be considered forced as well, because after seven moves there are exactly three possible situations:
O can win on the next move, in which case it takes the win
Placing an X in either one of the two remaining squares wins the game for X, in which case O has lost regardless of its next move
X has zero or one path to victory, in which case O blocks to force a draw
This means that the game is over after at most seven moves.
Also observe that there are only three opening moves: the center, a corner, or a side. It does not matter which of the four corners or sides you take, because the board can be rotated to match a "canonical" opening (the upper-left corner or the middle of the top side).
You can now build your state analysis code. Starting with each of the three possible openings, search with backtracking up to six additional moves, using all squares that are open by the time you make the move. After each move, analyze the position to see if X or O has already won; mark wins by X as Wx, and wins by O as Wo. The remaining positions are undecided.
Do not explore positions after Wx or Wo: simply return to the prior step, reporting the win by the corresponding side.
When you reach the seventh move, statically analyze the position to decide which of the three situations described above applies, marking the position as a Wx, a Wo, or a Draw.
Now to the most important step: when you backtrack to move N-1 by player p,
If one of the moves that you try leads to a position at the next level that is Wp, declare the current position a Wp as well.
If all of the moves that you try lead to a win for the opponent, declare the current position a win for the opponent.
Otherwise, declare the current position a Draw, and return to the prior level.
If you do this right, all three opening positions will be classified as a Draw. You should see some forcible wins after three moves.
Running this procedure classifies each position as a Wx, a Wo, or a Draw. If your AI gets you a win for player p in a position classified as Wp, or gets you a draw in a position classified as a Draw, then your AI is perfect. If, on the other hand, there are positions statically classified as Wp in which the AI gets p only a draw, then your AI engine needs improvement.
Additional reading: you can find additional insights into the game in this article describing methods of counting possible games of Tic-Tac-Toe.
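The observation above that the board can be rotated to a "canonical" opening can be sketched in Python (an illustration, not code from the answer; the board is a 9-tuple read row by row, and all names are mine):

```python
def rotate(board):
    """Rotate a 3x3 board (tuple of 9 cells, row-major) 90 degrees clockwise:
    new[r][c] = old[2-c][r]."""
    return tuple(board[6 - 3 * (i % 3) + i // 3] for i in range(9))

def mirror(board):
    """Mirror the board left-right: new[r][c] = old[r][2-c]."""
    return tuple(board[3 * (i // 3) + (2 - i % 3)] for i in range(9))

def canonical(board):
    """Pick the lexicographically smallest of the 8 symmetric forms.
    Using it as a dictionary key collapses rotated/reflected duplicates
    onto a single position."""
    forms = []
    b = board
    for _ in range(4):
        forms.append(b)
        forms.append(mirror(b))
        b = rotate(b)
    return min(forms)

# The 9 possible first moves collapse to 3 canonical openings:
# corner, side, and center, just as the answer states.
empty = ('.',) * 9
openings = {canonical(empty[:i] + ('X',) + empty[i + 1:]) for i in range(9)}
```

This is what makes the "three openings" claim concrete: canonicalizing before storing positions shrinks the search roughly eightfold.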
What you're doing is more linear optimisation than A.I. I'll not describe all the linear algebra of tic-tac-toe here; there are plenty of examples on the net.
So using linear algebra, you don't have to prove anything about your results (searching for magic statistics, etc.), because your results can be validated by simply injecting the solution into the original equation.
In conclusion, there are two cases:
You're using simple "deduction" logic (which is in reality a non-formal linear algebra formulation): we can't find a ready-to-use method for checking your results without looking at your code. EDIT: as Andrew Cooper suggests, brute force can be a ready-to-use method that doesn't require seeing your code.
You're using a formal linear algebra formulation: your results can be validated by simply injecting the solution into the original equation.
The only thing you can compare is one potential move against another. Whenever it's the computer's turn to make a move, have it play out all possible games from that point on, and choose the move that leads to the highest possible number of wins. You can't always win, but you can give the opponent more chances to make a bad move.
Or, you can always try the tic tac toe algorithm in the link below:
Tic Tac Toe perfect AI algorithm: deeper in "create fork" step
Given that we know:
one cannot force a win
with optimal strategy one cannot lose
your AI has already proven to be optimal if:
you searched the full tree when playing against it
and your AI is deterministic (if it were rolling dice at certain stages, you would have had to play against all combinations)
It did not lose, and you cannot demand that it win. The wins it did achieve do not count, as your full tree search included bad moves as well. That's all; you are done.
Just for fun:
If you had no a priori knowledge about the chances to win/draw/lose a game, a common strategy would be to persistently save lost positions. In the next game you would try to avoid them; if you can't avoid a move into a lost position, you have found another one. This way you can learn not to lose against a certain strategy (if possible) or to avoid an error in your strategy.
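A tiny Python sketch of that save-lost-positions idea (all names are mine; positions are opaque hashable values and successors returns the positions our moves could lead to):

```python
lost = set()  # positions we have learned are losing for us

def pick_move(position, successors):
    """Avoid successors already known to be lost for us. If every
    successor is a known loss, the current position is lost too:
    record it, so earlier games steer away from it next time."""
    safe = [s for s in successors(position) if s not in lost]
    if not safe:
        lost.add(position)  # we just found another lost position
        return None         # forced loss from here
    return safe[0]
```

After each lost game you would call this learning step on the position you were forced from; over repeated games the `lost` set propagates backward toward earlier mistakes.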
In order for your tic-tac-toe AI to be proven correct, it needs to satisfy two conditions:
It must never lose.
When the opponent deviates from optimal play, it must win.
Both conditions derive from the fact that if both players play optimally, tic-tac-toe always ends in a draw.
One automatic method of determining whether your program fulfills these two conditions is to construct what is called a "minimax tree" of every possible tic-tac-toe game. The minimax tree completely characterizes the optimal move for each player, so you can use it to see if your program always selects the optimal move. This means that my answer essentially boils down to, "Write a perfect AI, and then see if it plays the same way as your own AI." However, the minimax algorithm is useful to know, and to my knowledge, this is the only way to test if your AI actually plays optimally.
Here is how the minimax algorithm works (For a gif explanation, see Wikipedia. There's also some pseudocode in the Wikipedia article on minimax.):
Beginning with the tic-tac-toe setup under consideration, construct a tree of all possible subsequent moves, with the initial position at the root node. At the lowest level in the tree, you have all of the possible final positions.
Assign a value of +1 to all final positions in which the first player wins, a value of -1 to all final positions in which the second player wins, and a value of 0 to all ties.
Now we propagate these values up the tree to the root node. Assume that each player plays optimally. In the last move, Player One will select any move that has a value of +1, i.e. a move that wins the game. If no move has a value of +1, Player One will select a move with value 0, tying the game. Thus, nodes where it is Player One's move are assigned the maximum value of any of their child nodes. Conversely, when it is Player Two's move, they prefer to select moves with a value of -1, which win them the game. If no winning moves are available, they prefer to tie the game. Thus, nodes where it is Player Two's turn are assigned a value equal to the minimum of their child nodes. Using this rule, you can propagate values from the deepest level in the tree all the way up to the root node.
If the root node has a value of +1, the first player should win with optimal play. If it has a value of -1, the second player should win. If it has a value of 0, optimal play leads to a draw.
You can now determine, in each situation, whether your algorithm selects the optimal move. Construct a tree of all possible moves in tic-tac-toe, and use the minimax algorithm to assign +1, 0 or -1 to each move. If your program is Player One, it is optimal if it always selects the move with the maximum value. If it plays as Player Two, it is optimal if it always selects the move with the minimum value.
You can then loop through every move in the tree, and ask your AI to select a move. The above tells you how to determine if the move it selects is optimal.
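The propagation just described fits in a few lines of Python (a toy illustration, not the asker's code: the board is a string of 9 characters with '.' for empty, 'X' is the first player, and the function names are mine):

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, to_move):
    """Value of the position for the first player: +1, 0, or -1."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == '.']
    if not moves:
        return 0  # board full, no winner: a tie
    values = [minimax(board[:i] + to_move + board[i + 1:],
                      'O' if to_move == 'X' else 'X')
              for i in moves]
    # X (the maximizer) takes the largest child value, O the smallest.
    return max(values) if to_move == 'X' else min(values)
```

Checking an AI then amounts to asking it for a move in each reachable position and verifying that the chosen move's value equals the position's minimax value. The empty board evaluates to 0, matching the statement that optimal play is a draw.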
I would use a decision tree to solve this problem.
Putting it in simple words, decision trees are a method to recursively calculate the expectancy (and chance) of the end result. Each "branch" in the tree is a decision whose expectancy is calculated from the sum of (value * chance) over the outcomes possible for this decision.
In a limited-options scenario (like tic-tac-toe) you can have the entire tree pre-calculated, and therefore after each move of the human player (chance) you can choose (decision) the next branch which has the highest expectancy of winning.
In a chess game the solution is similar, but the tree is not pre-built: after each move the computer calculates the value of every possible move on the board to a depth of n moves ahead, choosing the best, second-best, or n-th best expectancy depending on the difficulty of the game selected by the player.
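The expectancy calculation described here (decision nodes pick the best child, chance nodes sum value * chance) can be sketched as follows; the tree shape and probabilities are made-up toy data, not from the answer:

```python
def expectancy(node):
    """Evaluate a tree built from three node kinds:
    ('leaf', value), ('decision', [children]),
    and ('chance', [(probability, child), ...])."""
    kind = node[0]
    if kind == 'leaf':
        return node[1]
    if kind == 'decision':
        # The decider picks the branch with the highest expectancy.
        return max(expectancy(child) for child in node[1])
    if kind == 'chance':
        # Sum of value * chance over the possible outcomes.
        return sum(p * expectancy(child) for p, child in node[1])
    raise ValueError(kind)

# Toy example: choose between a sure 0.4 and a 50/50 gamble on 1.0 or 0.0.
# The gamble's expectancy is 0.5, so the decision node picks it.
tree = ('decision', [
    ('leaf', 0.4),
    ('chance', [(0.5, ('leaf', 1.0)), (0.5, ('leaf', 0.0))]),
])
```

In a game setting the chance nodes model the opponent's (or dice's) behaviour and the decision nodes model your own moves; minimax is the special case where the "chance" player is assumed to play the worst outcome for you.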