I am working on a project with a robot that has to find its way to an object it must pick up, avoiding obstacles along the way.
The problem is that the robot and the object it needs to pick up are both one pixel wide in the pathfinder, while in reality they are a lot bigger. The A* pathfinder often places the route along the edges of the obstacles, sometimes making the robot collide with them, which we wish to avoid.
I have tried adding more non-walkable fields around the obstacles, but it does not always work out very well: the robot still collides with the obstacles, and adding too many points where it is not allowed to walk results in there being no path it can run on.
Do you have any suggestions on what to do about this problem?
Edit:
So I did as Justin L suggested and added a lot of cost around the obstacles, which results in the following:
Grid with no path http://sogaard.us/uploades/1_grid_no_path.png
Here you can see the cost around the obstacles. Initially the middle two obstacles should look just like the ones in the corners, but after running our pathfinder it seems the costs are overridden:
Grid with path http://sogaard.us/uploades/1_map_grid.png
Picture that shows things found on the picture http://sogaard.us/uploades/2_complete_map.png
The picture above shows what was detected in the image.
Path found http://sogaard.us/uploades/3_path.png
This is the path that was found, which, as in our problem before, is hugging the obstacle.
The grid from before with the path on http://sogaard.us/uploades/4_mg_path.png
And another picture, showing the cost map with the path on it.
So what I find strange is why the A* pathfinder is overriding these field costs, which are VERY high.
Could it be happening when it evaluates the nodes in the open list against the current field, to see whether the current field's path is shorter than the one in the open list?
And here is the code I am using for the pathfinder:
Pathfinder.cs: http://pastebin.org/343774
Field.cs and Grid.cs: http://pastebin.org/343775
Have you considered adding a gradient cost to pixels near objects?
Perhaps one as simple as a linear gradient:
C = -mx + b
Where x is the distance to the nearest object, b is the cost right outside the boundary, and m is the rate at which the cost dies off. Of course, if C is negative, it should be set to 0.
Or perhaps a simple hyperbolic decay:
C = b/x
where b is the desired cost right outside the boundary, again. Have a cut-off to 0 once it reaches a certain low point.
Alternatively, you could use exponential decay:
C = k e^(-hx)
Where k is a scaling constant, and h is the rate of decay. Again, having a cut-off is smart.
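In rough C#, those three options might look like this (a sketch only; the constants and the cutoff are tuning values, not anything from your code):

using System;

// x is the distance (in fields/pixels) to the nearest obstacle.
static double LinearCost(double x, double b, double m)
{
    return Math.Max(0.0, b - m * x);              // C = -mx + b, clamped at 0
}

static double HyperbolicCost(double x, double b, double cutoff)
{
    double c = b / Math.Max(x, 1.0);              // C = b / x, avoiding division by zero
    return c < cutoff ? 0.0 : c;
}

static double ExponentialCost(double x, double k, double h, double cutoff)
{
    double c = k * Math.Exp(-h * x);              // C = k * e^(-hx)
    return c < cutoff ? 0.0 : c;
}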
Second suggestion
I've never applied A* to a pixel-mapped map; nearly always, tiles.
You could try massively decreasing the "resolution" of your map: maybe one tile per ten-by-ten or twenty-by-twenty block of pixels, with the tile's cost being the highest cost of any pixel within it.
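A rough sketch of that downsampling (array names are illustrative):

// Collapse the pixel cost map into coarse tiles; each tile's cost is the
// highest cost of any pixel inside it. tileSize might be 10 or 20.
static double[,] Downsample(double[,] pixelCost, int tileSize)
{
    int tw = pixelCost.GetLength(0) / tileSize;
    int th = pixelCost.GetLength(1) / tileSize;
    var tileCost = new double[tw, th];
    for (int tx = 0; tx < tw; tx++)
        for (int ty = 0; ty < th; ty++)
            for (int px = 0; px < tileSize; px++)
                for (int py = 0; py < tileSize; py++)
                    tileCost[tx, ty] = Math.Max(tileCost[tx, ty],
                        pixelCost[tx * tileSize + px, ty * tileSize + py]);
    return tileCost;
}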
Also, you could try de-valuing the shortest-distance heuristic you are using for A*.
You might try enlarging the obstacles, taking the size of the robot into account. You could round the corners of the obstacles to address the blocking problem; the gaps that get filled in are too small for the robot to squeeze through anyway.
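A crude sketch of that inflation pass (the grid representation and names are assumptions, not your Grid.cs):

// Mark every cell within robotRadius (in cells) of a blocked cell as
// non-walkable, so the one-pixel robot plans with its real footprint.
static void InflateObstacles(bool[,] walkable, int robotRadius)
{
    int w = walkable.GetLength(0), h = walkable.GetLength(1);
    var original = (bool[,])walkable.Clone();
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            if (original[x, y]) continue;          // only expand blocked cells
            for (int dx = -robotRadius; dx <= robotRadius; dx++)
                for (int dy = -robotRadius; dy <= robotRadius; dy++)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h)
                        walkable[nx, ny] = false;
                }
        }
}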
I've built one such physical robot. My solution was to move one step backward whenever there was a left or right turn to make.
The red line shows the problem as I understand it. The black line is what I did to resolve the issue: the robot moves straight backward for a step, then turns right.
Apologies for the lack of example code, I'm currently in the brainstorming phase of the problem and having trouble finding a proper solution.
As I have stated in my title, I want to find out what the intersection area of two polygons is.
To be more specific, I have two ARPlanes that may overlap each other on the x-z plane while being on different y-levels (imagine stairs with an overhang). I can get the area boundaries of these ARPlanes easily. My first idea to simplify the process is to remove the y-component so as to have them on the same plane, turning this into a 2D problem.
From here onward, I'm unsure how to proceed. I could not find any methods that calculate the intersection area of two polygons. I have a few solutions that look promising if I can get the planes aligned neatly (such that the +x direction points from the center of one of the planes to the other), but I cannot move them in any way, so I must modify what the local "forward" for a plane is. Even then, I don't think an ARPlane has a direction vector in the first place, as ARPlanes are not GameObjects, so I am unsure whether this is a viable path to follow. See the ARPlane class for quick reference.
One other way is to turn the planes so that they're aligned with the world x-axis. This looks more promising than the other methods, but as I previously stated, I cannot turn the actual ARPlanes; I would have to make copies of them and turn the copies while keeping their relative rotations and positions the same.
So far these are the methods I could come up with but could not develop fully due to Unity restrictions. My question, then, is whether there is a way to get around the issues with these methods; failing that, whether an alternative solution to the problem can be recommended.
Below is an example use case of the tool. As can be seen, some stair treads have an overhang that covers a portion of the previous tread's surface (second and third figure). Each stair tread will be scanned and then processed to find its usable surface. The area covered by the overhang is not usable surface. This usable area is defined by the placement of a tread (A) and the very next tread right above it (B); the usable area is then surface_area_of_A - xz_crossSection_of_AB.
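For the 2D step itself, here is one possible C# sketch (not Unity API; the class and method names are mine, and it assumes both projected boundaries are convex, so concave shapes would need decomposition first): clip one polygon against the other with Sutherland-Hodgman, then measure the result with the shoelace formula.

using System.Collections.Generic;
using UnityEngine;

public static class PolygonIntersection
{
    // Clip convex polygon 'subject' against convex polygon 'clip'
    // (Sutherland-Hodgman). Both are assumed counter-clockwise.
    public static List<Vector2> Clip(List<Vector2> subject, List<Vector2> clip)
    {
        var output = new List<Vector2>(subject);
        for (int i = 0; i < clip.Count; i++)
        {
            Vector2 a = clip[i], b = clip[(i + 1) % clip.Count];
            var input = output;
            output = new List<Vector2>();
            for (int j = 0; j < input.Count; j++)
            {
                Vector2 p = input[j], q = input[(j + 1) % input.Count];
                bool pIn = Cross(b - a, p - a) >= 0;   // p left of edge a->b?
                bool qIn = Cross(b - a, q - a) >= 0;
                if (pIn) output.Add(p);
                if (pIn != qIn) output.Add(Intersect(p, q, a, b));
            }
        }
        return output;
    }

    // Shoelace formula: area of a simple polygon.
    public static float Area(List<Vector2> poly)
    {
        float sum = 0;
        for (int i = 0; i < poly.Count; i++)
            sum += Cross(poly[i], poly[(i + 1) % poly.Count]);
        return Mathf.Abs(sum) / 2f;
    }

    static float Cross(Vector2 u, Vector2 v) => u.x * v.y - u.y * v.x;

    // Intersection of segment p->q with the infinite line through a->b.
    static Vector2 Intersect(Vector2 p, Vector2 q, Vector2 a, Vector2 b)
    {
        float t = Cross(b - a, a - p) / Cross(b - a, q - p);
        return p + t * (q - p);
    }
}

Projecting each ARPlane boundary onto the x-z plane (dropping y) gives you the two Vector2 lists; the overlap area is then Area(Clip(a, b)).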
In one of my university courses (in Data-structures and Algorithmics), we are given a bonus assignment based on the game Sokoban:
With one major exception: we only have one crate to push to our goal.
Example input
8 8
MMMMMMMM
M.....?M
M....TTM
M....TTM
M..!...M
M....+.M
M......M
MMMMMMMM
Here the first line gives the dimensions (b x h) of the board (8 by 8 in this case). This is followed by h lines of b characters. The meaning of these characters is as follows: . is a walkable space, ? is the goal (the red point in the gif), ! is the crate, and + is our position.
We are asked to output the shortest solution to the puzzle. (Note that a puzzle might be unsolvable.) We output this in 2 lines: the first tells us how many moves, and the second gives the correct path. For the example, this would be:
Example Output
10
WWNNNWNEEE
Now, finding an algorithm that works isn't really an issue. Seeing as we're looking for the shortest path, and the edges of this specific graph are in essence unweighted, I've implemented a breadth-first search. In broad strokes, my current implementation looks like this:
0. Since the maze doesn't change, describe each state as a whole number based on the coordinates of the crate and the player. This defines a state uniquely and reduces memory costs.
1. Create a dictionary of visited states.
2. Get the input positions of the goal, crate and player.
3. Set up a queue of move sequences.
4. Pop a move sequence from the queue.
5. If this move sequence wins the game, go to step 9.
6. Make new move sequences which are copies of the original, each with a different legal move appended.
7. Append these new move sequences to the queue.
8. Go to step 4.
9. Print the output.
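A condensed C# sketch of those steps, with one structural tweak: storing a parent link per visited state instead of copying whole move sequences (the copying in steps 6 and 7 is itself a significant hidden cost). LegalMoves, IsGoal and the start variables stand in for your own game logic.

int cells = width * height;                                    // board size
int Encode(int player, int crate) => player * cells + crate;   // step 0

var visited = new Dictionary<int, (int parent, char move)>();  // step 1
var queue = new Queue<int>();                                  // step 3
int start = Encode(playerStart, crateStart);                   // step 2
visited[start] = (-1, ' ');
queue.Enqueue(start);

while (queue.Count > 0)
{
    int state = queue.Dequeue();                               // step 4
    if (IsGoal(state))
        break;                // step 5/9: walk the parent links to print the path
    foreach (var (next, move) in LegalMoves(state))            // step 6
    {
        if (visited.ContainsKey(next)) continue;               // skip seen states
        visited[next] = (state, move);
        queue.Enqueue(next);                                   // step 7
    }
}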
This is, of course, a relatively simple algorithm. The problem is that it isn't fast enough. In one of the final test cases, we're thrown a 196 x 22 maze-like "level" whose solution takes 2300 steps. We're asked to solve this level within 10 seconds, but my algorithm takes more than 10 minutes.
Because of that, I'm kinda at a loss. I've already managed to increase the algorithm's speed tenfold, and I still have 2 orders of magnitude to go...
Hence why I'm asking here: What makes this algorithm so slow, and how can I speed it up?
Yes, your exhaustive BFS will be slow. You spend a large amount of your tree search on moves that are utterly wasted, your player thrashing around the maze to no avail.
Change the focus of your goal: first, solve the maze for the crate rather than sending the player every which way. Include a heuristic for moving the crate closer to the goal spot. Make sure the crate moves are possible: that there is a "push-from" spot available for each move.
One initial heuristic is to fill the maze with raw distance to the goal: start either at the goal (what I've done here) and increment the steps through the maze, or at the box and increment from there.
MMMMMMMM
M54321?M
M6543TTM
M7654TTM
M876567M <== crate is on the farther 6
M987678M <== player is on the nearer 7
Ma98789M
MMMMMMMM
Here, you would first try to find legal pushes to move the box along the path 654321?. You can also refine this by adding a penalty (for moving the player without pushing) on any direction change.
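If it helps, the fill itself is just a breadth-first flood from the goal; a C# sketch (grid names are illustrative):

// BFS outward from the goal, writing the step count into each walkable cell.
static int[,] DistanceFill(bool[,] walkable, (int x, int y) goal)
{
    int w = walkable.GetLength(0), h = walkable.GetLength(1);
    var dist = new int[w, h];
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            dist[x, y] = int.MaxValue;                // unreached
    var queue = new Queue<(int x, int y)>();
    dist[goal.x, goal.y] = 0;
    queue.Enqueue(goal);
    while (queue.Count > 0)
    {
        var (x, y) = queue.Dequeue();
        foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
        {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
            if (!walkable[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
            dist[nx, ny] = dist[x, y] + 1;            // one step farther from goal
            queue.Enqueue((nx, ny));
        }
    }
    return dist;
}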
These heuristics will give you a very good upper bound for a solution; you can then retrace decision points to try other paths, always keeping your "shortest solution" for any position.
Also keep track of where you've been, so that you don't waste time in position loops: never repeat a move (position and direction).
Does that help you get going?
Instead of using a pure breadth-first search of the player's movements, consider only the crate moves available to you at the time. For instance, in the very first frame of your gif, at the beginning of the simulation, the only crate move possible is the top one, one square to the right.
An analogy would be for a game of chess on the first move, you would not consider any queen or bishop moves since they are all blocked by pawns.
After you've successfully found the sequence of crate moves leading to the solution, come back and trace the player moves necessary to construct the sequence of crate moves.
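As a sketch, the move generation then looks something like this (IsFree and PlayerCanReach are assumed helpers; the latter is a simple flood fill from the player's current square):

// A push in direction d is legal if the square the crate moves into is
// free and the player can reach the square on the opposite side.
IEnumerable<((int x, int y) crate, char move)> CratePushes((int x, int y) crate)
{
    foreach (var (dx, dy, label) in new[] { (0, -1, 'N'), (0, 1, 'S'), (1, 0, 'E'), (-1, 0, 'W') })
    {
        var target = (x: crate.x + dx, y: crate.y + dy);     // crate destination
        var pushFrom = (x: crate.x - dx, y: crate.y - dy);   // player stands here
        if (IsFree(target) && PlayerCanReach(pushFrom))
            yield return (target, label);
    }
}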
This improves the time complexity because the search is driven by the number of possible crate moves in the map instead of the total number of squares.
I made a tic-tac-toe A.I. Given each board state, my A.I. will return 1 exact place to move.
I also made a function that loops through all possible plays made with the A.I.
So it's a recursive function that lets the A.I. make a move for a given board, then lets the other player make all possible moves, calling the recursive function with a new board for each possible move.
I do this for when the A.I. goes first, and for when the other player goes first... and add these together. I end up with 418 possible wins, 115 possible ties, and 0 possible losses.
But now my problem is, how do I maximize the amount of wins? I need to compare this statistic to something, but I can't figure out what to compare it to.
My feeling is that the stats you're quoting are already pretty good. Two expert Tic-Tac-Toe players will always end in a tie, and there is no way to force a win if your opponent knows how to play the game.
Update
There's probably a more elegant way to prove the correctness of your A.I., but the most straightforward approach would be the brute-force one. Just enumerate all possible board positions as a game tree, and prune the branches that lead directly to a loss. Then for each branch in the tree you can work out the probability of a win resulting from following that branch. Then you just need to test your A.I. on each board position and make sure it's picking the branch with the highest probability of a win.
You should start by observing that move 9 is always forced: there is only one empty square on the board. Move 8 can be considered forced as well, because after seven moves there are exactly three possible situations:
O can win on the next move, in which case it takes the win
Placing an X in either one of the two remaining squares wins the game for X, in which case O has lost regardless of its next move
X has zero or one path to victory, in which case O blocks to force a draw
This means that the game is over after at most seven moves.
Also observe that there are only three opening moves: the center, a corner, or a side. It does not matter which of the four corners or sides you take, because the board can be rotated to match a "canonical" opening (the upper-left corner or the middle of the top side).
You can now build your state analysis code. Starting with each of the three possible openings, search with backtracking up to six additional moves using all squares that are open by the time you make the move. After each move, analyze the position to see if X or O has already won; mark wins by X as Wx, and wins by O as Wo. The remaining positions are undecided.
Do not explore positions after Wx or Wo: simply return to the prior step, reporting the win by the corresponding side.
When you reach the seventh move, statically analyze the position to decide which of the three situations described above applies, marking the position as Wx, Wo, or a Draw.
Now to the most important step: when you backtrack to move N-1 by player p,
If one of the moves that you try leads to a position at the next level that is Wp, declare the current position Wp as well.
If all of the moves that you try lead to a win for the opponent, declare the current position a win for the opponent.
Otherwise, declare the current position a Draw, and return to the prior level.
If you do this right, all three opening positions will be classified as a Draw. You should see some forcible wins after three moves.
Running this procedure classifies each position as Wx, Wo, or a Draw. If your AI gets a win for player p in a position classified as Wp, or gets a draw in a position classified as a Draw, then your AI is perfect. If, on the other hand, there are positions statically classified as Wp in which the AI gets p only a draw, then your AI engine needs improvement.
Additional reading: you can find additional insights into the game in this article describing methods of counting possible games of Tic-Tac-Toe.
What you're doing is more linear optimisation than A.I. I won't describe all the linear algebra of Tic-Tac-Toe here; there are plenty of examples on the net.
So using linear algebra, you don't have to prove anything about your results (searching for magic statistics, etc.), because your results can be validated by a simple solution-injection into the original equation.
In conclusion, there are two cases:
You're using simple "deduction" logic (which is in reality a non-formal linear algebra formulation): we can't find a ready-to-use method for checking your results without looking at your code. EDIT: as Andrew Cooper suggests, brute force can be a ready-to-use method that doesn't require seeing your code.
You're using a formal linear algebra formulation: your results can be validated by a simple solution-injection into the original equation.
The only thing you can compare is one potential move against another. Whenever it's the computer's turn to make a move, have it play out all possible games from that point on, and choose the move that leads to the highest possible number of wins. You can't always win, but you can give the opponent more chances to make a bad move.
Or, you can always try the tic tac toe algorithm in the link below:
Tic Tac Toe perfect AI algorithm: deeper in "create fork" step
Given that we know:
one cannot force a win
with optimal strategy one cannot lose
your AI has already proven to be optimal if:
you searched the full tree when playing against it
and your AI is deterministic (if it were rolling dice at certain stages you would have had to play against all combinations)
It did not lose, and you cannot demand that it win. The wins it achieved do not count, as your full tree search included bad moves as well. That's all; you are done.
Just for fun:
If you had no a priori knowledge about the chances to win/draw/lose a game, a common strategy would be to persistently save lost positions. In the next game you would try to avoid them; if you can't avoid a move into a lost position, you have found another lost position. This way you can learn not to lose against a certain strategy (if possible), or to avoid an error in your own strategy.
In order for your tic-tac-toe AI to be proven correct, it needs to satisfy two conditions:
It must never lose.
When the opponent deviates from optimal play, it must win.
Both conditions derive from the fact that if both players play optimally, the tic-tac-toe always ends in a draw.
One automatic method of determining whether your program fulfills these two conditions is to construct what is called a "minimax tree" of every possible tic-tac-toe game. The minimax tree completely characterizes the optimal move for each player, so you can use it to see if your program always selects the optimal move. This means that my answer essentially boils down to, "Write a perfect AI, and then see if it plays the same way as your own AI." However, the minimax algorithm is useful to know, and to my knowledge, this is the only way to test if your AI actually plays optimally.
Here is how the minimax algorithm works (For a gif explanation, see Wikipedia. There's also some pseudocode in the Wikipedia article on minimax.):
Beginning with the tic-tac-toe setup under consideration, construct a tree of all possible subsequent moves. The initial position is at the root node. At the lowest level in the tree, you have all of the possible final positions.
Assign a value of +1 to all final positions in which the first player wins, a value of -1 to all final positions in which the second player wins, and a value of 0 to all ties.
Now we propagate these values up the tree to the root node. Assume that each player plays optimally. In the last move, Player One will select any move that has a value of +1, i.e. a move that wins the game. If no move has a value of +1, Player One will select a move with value 0, tying the game. Thus, nodes where it is Player One's move are assigned the maximum value of any of their child nodes. Conversely, when it is Player Two's move, they prefer to select moves with a value of -1, which win them the game. If no winning moves are available, they prefer to tie the game. Thus, nodes where it is Player Two's turn are assigned a value equal to the minimum of their child nodes. Using this rule, you can propagate values from the deepest level in the tree all the way up to the root node.
If the root node has a value of +1, the first player should win with optimal play. If it has a value of -1, the second player should win. If it has a value of 0, optimal play leads to a draw.
You can now determine, in each situation, whether your algorithm selects the optimal move. Construct a tree of all possible moves in tic-tac-toe, and use the minimax algorithm to assign +1, 0 or -1 to each move. If your program is Player One, it is optimal if it always selects the move with the maximum value. If it plays as Player Two, it is optimal if it always selects the move with the minimum value.
You can then loop through every move in the tree, and ask your AI to select a move. The above tells you how to determine if the move it selects is optimal.
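To make the propagation rule concrete, here is a minimal C# sketch of the minimax evaluation (the board encoding and the Winner helper are assumptions, not your code):

using System;

// board is a char[9] holding 'X', 'O' or ' '. Winner() is assumed to return
// 'X' or 'O' for a completed line, or ' ' otherwise.
static int Minimax(char[] board, bool xToMove)
{
    char winner = Winner(board);
    if (winner == 'X') return +1;                    // first player wins
    if (winner == 'O') return -1;                    // second player wins
    if (Array.IndexOf(board, ' ') < 0) return 0;     // board full: draw

    int best = xToMove ? int.MinValue : int.MaxValue;
    for (int i = 0; i < 9; i++)
    {
        if (board[i] != ' ') continue;
        board[i] = xToMove ? 'X' : 'O';              // try the move
        int value = Minimax(board, !xToMove);
        board[i] = ' ';                              // undo it
        best = xToMove ? Math.Max(best, value)       // maximizing level
                       : Math.Min(best, value);      // minimizing level
    }
    return best;
}

An optimal Player One always picks a move whose Minimax value equals the maximum over its children; comparing your AI's choice against that value at every node is exactly the test described above.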
I would use a decision tree to solve this problem.
Putting it in simple words, decision trees are a method to recursively calculate the expectancy (and chance) of the end result. Each "branch" in the tree is a decision whose expectancy is calculated from the sum of (value * chance) over the options of that decision.
In a limited-options scenario (like tic-tac-toe) you can have the entire tree pre-calculated; therefore, after each move of the human player (chance), you can choose (decision) the next branch which has the highest expectancy of a win.
In a chess game the solution is similar, but the tree is not pre-built: after each move the computer calculates the value of every possible move on the board to a depth of n moves ahead, choosing the best, second-best or n-th-best expectancy depending on the difficulty selected by the player.
I've got two polygons defined as lists of Vectors. I've managed to write routines to transform and intersect these two polygons (seen below, Frame 1). Using line intersection I can figure out whether they collide, and I have written a working Collide() function.
This is to be used in a variable-step timed game, and therefore (as shown below), while in Frame 1 the right polygon is not colliding, it's perfectly normal for the polygons to be right inside each other by Frame 2, with the right polygon having moved to the left.
My question is, what is the best way to figure out the moment of intersection? In the example, let's assume that in Frame 1 the right polygon is at X = 300, and in Frame 2 it moved -100 and is now at 200. That's all I know by the time Frame 2 comes about: it was at 300, now it's at 200. What I want to know is when it actually collided, and at what X value; here it was probably about 250.
I'm preferably looking for a C# source code solution to this problem.
Maybe there's a better way of approaching this for games?
I would use the separating axis theorem, as outlined here:
Metanet tutorial
Wikipedia
Then I would sweep test or use multisampling if needed.
GMan here on StackOverflow wrote a sample implementation over at gpwiki.org.
This may all be overkill for your use-case, but it handles polygons of any order. Of course, for simple bounding boxes it can be done much more efficiently through other means.
I'm no mathematician either, but one possible though crude solution would be to run a mini simulation.
Let us call the moving polygon M and the stationary polygon S (though there is no requirement for S to actually be stationary, the approach should work just the same regardless). Let us also call the two frames you have F1 for the earlier and F2 for the later, as per your diagram.
If you were to translate polygon M back towards its position in F1 in very small increments until such time that they are no longer intersecting, then you would have a location for M at which it 'just' intersects, i.e. the previous location before they stop intersecting in this simulation. The intersection in this 'just' intersecting location should be very small — small enough that you could treat it as a point. Let us call this polygon of intersection I.
To treat I as a point you could choose the vertex of it that is nearest the centre point of M in F1: that vertex has the best chance of being outside of S at the time of collision. (There are lots of other possibilities for interpreting I as a point that you could experiment with too, which may have better results.)
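A crude sketch of that back-stepping loop, reusing your working Collide() (the Vector type, TranslateTo and the step count are illustrative assumptions):

// Step M back from its Frame 2 position towards its Frame 1 position in
// small increments until the polygons no longer intersect.
int steps = 100;                              // more steps = more accuracy, more cost
Vector delta = (f1Pos - f2Pos) / steps;       // one small increment, pointing back
Vector pos = f2Pos;
for (int i = 0; i < steps && Collide(S, M.TranslateTo(pos)); i++)
    pos += delta;
// pos is now the first sampled position where M and S no longer intersect;
// the previous sample is the 'just intersecting' location I.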
Obviously this approach has some drawbacks:
The simulation will be slower for greater speeds of M, as the distance between its locations in F1 and F2 will be greater and more simulation steps will need to be run. (You could address this by having a fixed number of simulation cycles irrespective of the speed of M, but then the accuracy of the result would differ between faster and slower moving bodies.)
The 'step' size in the simulation will have to be sufficiently small to get the accuracy you require but smaller step sizes will obviously have a larger calculation cost.
Personally, without the necessary mathematical intuition, I would go with this simple approach first and try to find a mathematical solution as an optimization later.
If you have the ability to determine whether the two polygons overlap, one idea might be to use a modified binary search to detect where the two hit. Start by subdividing the time interval in half and seeing if the two polygons intersected at the midpoint. If so, recursively search the first half of the range; if not, search the second half. If you specify some tolerance level at which you no longer care about small distances (for example, at the level of a pixel), then the runtime of this approach is O(log(D/K)), where D is the distance between the polygons and K is the cutoff threshold. If you know which point is going to ultimately enter the second polygon, you should be able to detect the collision very quickly this way.
Hope this helps!
For a rather generic solution, and assuming ...
no polygons are intersecting at time = 0
at least one polygon is intersecting another polygon at time = t
and you're happy to use a C# clipping library (eg Clipper)
then use a binary approach to derive the time of intersection:
double tInterval = t;
double tCurrent = 0;
int direction = +1;

while (tInterval > MinInterval)
{
    tInterval = tInterval / 2;
    tCurrent += tInterval * direction;
    MovePolygons(tCurrent);
    // If they already intersect at tCurrent, the first contact is earlier,
    // so search backwards; otherwise it is later, so search forwards.
    direction = PolygonsIntersect() ? -1 : +1;
}
// tCurrent now approximates the time of first intersection to within MinInterval.
Well, you may see that it's always a vertex of one of the polygons that hits the side of the other first (or another vertex, but that's after all almost the same). A possible solution would be to calculate the distance of the vertices from the other polygon's edges along the move direction, but I think this would end up being rather slow.
I guess normally the distances between frames are so small that it's not important to know exactly where it hit first; some small intersections will not be visible, and after all, things will rebound or explode anyway, don't they? :)
I was wondering if anyone has knowledge of implementing pathfinding using scent: the 'enemy' moves towards whichever of the surrounding nodes has the strongest scent.
Thanks
Yes, I did my university final project on the subject.
One of the applications of this idea is for finding the shortest path.
The idea is that the 'scent', as you put it, will decay over time. But the shortest path between two points will have the strongest scent.
Have a look at this paper.
What did you want to know exactly?
Not quite clear what the question is in particular - but this just seems like another way of describing the Ant colony optimization problem:
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.
Well, think about it for a minute.
My idea would be to divide the game field into sections of 32x32 (or whatever size your character is). Then run some checks every x seconds (so if they stay still, the tiles around them will gain more 'scent') to figure out how strong the scent is on any given tile. Some examples might be: 1) if you cross over the tile, add 3; 2) if you cross over an adjacent tile, add 1.
Then add degradation over time: reduce every tile by 1 every x seconds until it hits zero.
The next thing you will need to worry about is using AI to track this path. I would recommend just putting the AI somewhere and telling it to find a node with a scent, then go to an adjacent node with a higher or equal scent value. Also worry about crossing off paths already taken: if the player goes up a path, then comes back down another way, make sure the AI doesn't always just take the looped-back path.
The last thing to look at with the AI would be to add a bit of error: make the AI take the wrong path every once in a while, or lose the trail a little more easily.
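A tiny C# sketch of those pieces together (the grid fields, the Neighbours helper and System.Linq's OrderByDescending are the only ingredients; all names are illustrative):

// Deposit: called every x seconds with the player's current tile.
void Deposit(int px, int py)
{
    scent[px, py] += 3;                               // crossed this tile
    foreach (var (nx, ny) in Neighbours(px, py))
        scent[nx, ny] += 1;                           // adjacent tiles get a little
}

// Degradation: every x seconds, fade every tile towards zero.
void Decay()
{
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            scent[x, y] = Math.Max(0, scent[x, y] - 1);
}

// AI step: move to the neighbouring tile with the highest scent.
(int x, int y) NextAiStep(int ax, int ay)
{
    return Neighbours(ax, ay).OrderByDescending(n => scent[n.x, n.y]).First();
}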
Those are the key points, I'm sure you can come up with some more, with some more brainstorming.
Every game update (or some other, less frequent time frame), increase the scent value of nodes near to where the target objects (red blobs) are.
Decrease all node scent values by some fall-off amount, stopping at zero.
In the yellow blob's think/move function get available nodes to move to. Move towards the node with the highest scent value.
Depending on the number of nodes, the 'decrease all node scent values' step could do with optimisation, e.g. by maintaining a list of non-zero nodes to be decreased.
I see a big contradiction between the scent model and pathfinding. For a hunter in nature, finding the path by scent means finding exactly the path used by the followed subject. In games, pathfinding means finding the fastest path between two points. They are not the same.
1. While modelling scent, you compute the scent concentration at a point as the SUM of the surrounding concentrations multiplied by various factors. Searching for the fastest path from a point means taking the MINIMUM of the times computed for the surrounding points, multiplied by various parameters.
2. Modelling scent, you need a recursive model: scent spreads in all directions, including backward. In the case of pathfinding, once you have found the shortest paths for the points surrounding the target, they won't change.
3. The level of scent can rise and fall. In pathfinding, while searching for the minimum, the result can never rise.
So, the scent model is really much more complicated than your target. Of course, what I have said is true only for the standard situation, and you may have something very special...