C# GMap.NET: identifying position on a route

My question is not so much code-oriented, it's more theoretical.
I'm currently working on an application for a sporting event. The goal is to be able to track competitors on a map while they are moving along a predetermined route.
Currently I have already been able to map the route and I'm able to place markers on the different locations using GMap.NET.
However I have two big challenges that I don't know how to tackle.
1. Calculating the distance and the (estimated) time until the competitors reach the finish
So for every competitor carrying a tracker, I would like to place him/her on the map and calculate the distance to the finish. In theory that should be easy: every competitor is always between two waypoints, so when I get the tracker's position I can calculate the distance to the next waypoint, add the distances between all the remaining waypoints, and obtain the total remaining distance to the finish.
But that's just theory, I have no clue how I could implement this.
Is there a way to know between which two waypoints the competitor currently is?
And what should I do if, for example, there is a part of the route where the competitor runs up to a turning point at the end of the road and then comes back along the same road, just on the other side? How would I know whether the runner is still heading towards the turning point or already on the way back?
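One common way to answer both questions is to project the tracker position onto each candidate segment of the route and pick the nearest one, restricted to a window of segments just ahead of the competitor's last known progress. The window is what disambiguates the out-and-back road: the return segments are simply not considered until the runner has passed the turning point. A rough sketch in Python (all names are mine, and it assumes the waypoints have already been projected to planar x/y coordinates; over segments this short you can treat scaled lat/lng that way):

```python
import math

def project_onto_segment(p, a, b):
    """Project point p onto segment a-b; return (t, point, dist) where
    t in [0, 1] is the fractional position along the segment."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        t = 0.0
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    qx, qy = ax + t * dx, ay + t * dy
    return t, (qx, qy), math.hypot(px - qx, py - qy)

def remaining_distance(waypoints, position, last_segment=0, window=5):
    """Distance left to the finish. Only segments from last_segment up to
    last_segment + window are considered, so a runner heading out to a
    turning point is never snapped onto the return side of the same road."""
    best = None
    hi = min(len(waypoints) - 1, last_segment + window)
    for i in range(last_segment, hi):
        t, q, d = project_onto_segment(position, waypoints[i], waypoints[i + 1])
        if best is None or d < best[0]:
            best = (d, i, q)
    _, i, q = best
    a, b = waypoints[i], waypoints[i + 1]
    dist = math.hypot(b[0] - q[0], b[1] - q[1])   # to the end of the current segment
    for j in range(i + 1, len(waypoints) - 1):    # plus all remaining segments
        dist += math.hypot(waypoints[j + 1][0] - waypoints[j][0],
                           waypoints[j + 1][1] - waypoints[j][1])
    return i, dist
```

With waypoints [(0, 0), (10, 0), (10, 10)] and a tracker at (4, 1), this snaps the runner onto the first segment and reports 16 units left to the finish.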
2. Working with loops inside the route
This is an even more complicated task. In the route there are two sections that the competitors have to do twice. They are large loops, not small ones. In order to get a correct calculation of the distance, I would need to find a way to know if the competitors are in the first loop or the second.
I was thinking I could use a similar approach to the one above, i.e. specify two waypoints at which I register the time the competitors pass.
If they pass again, I could compare that time with the saved time, and if there is enough time in between, conclude that they are on the second loop of that section.
But again, that's theory. The question is: how would I do this in practice? And how do I calculate the distance properly? Do I specify two indexes of waypoints and count the distance between those indexes twice?
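The timestamp idea can be implemented as a small checkpoint register; MIN_LAP_GAP is an assumed threshold you would tune to the course, and all names here are hypothetical (a sketch in Python):

```python
from datetime import datetime, timedelta

MIN_LAP_GAP = timedelta(minutes=10)   # assumed: one loop takes well over 10 minutes

passes = {}  # competitor id -> list of timestamps recorded at the checkpoint

def register_pass(competitor, now):
    """Record a checkpoint pass; return which traversal (1st or 2nd) this is.
    A pass within MIN_LAP_GAP of the previous one is treated as GPS jitter
    from lingering near the checkpoint, not a new lap."""
    times = passes.setdefault(competitor, [])
    if times and now - times[-1] < MIN_LAP_GAP:
        return len(times)            # same traversal, ignore the duplicate
    times.append(now)
    return len(times)
```

For the distance, you can then keep the loop's waypoint-index range on the side: while the competitor is on the first traversal, the remaining distance includes that range's length twice; after the second checkpoint pass, only once.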
I would love to hear some of your insights on this.
Thanks,

Related

How to go about implementing a fast shortest path search for a 1-crate sokoban?

In one of my university courses (on data structures and algorithmics), we are given a bonus assignment based on the game Sokoban, with one major exception: we only have one crate to push to our goal.
Example input
8 8
MMMMMMMM
M.....?M
M....TTM
M....TTM
M..!...M
M....+.M
M......M
MMMMMMMM
Here the first line gives the dimensions (b x h) of the board (8 by 8 in this case). This is followed by h lines of b characters. The meaning of these characters is as follows: . is a walkable space, ? is the goal (the red point in the gif), ! is the crate, and + is our position.
We are asked to output the shortest solution to the puzzle. (Note that a puzzle might be unsolvable.) We output this in 2 lines: the first tells us how many moves, and the second tells us the correct path. For the example, this would be:
Example Output
10
WWNNNWNEEE
Now, finding an algorithm that works isn't really an issue. Seeing as we're looking for the shortest path, and the nodes on this specific graph are in essence unweighted, I've implemented a breadth first search. In broad strokes, my current implementation looks like this:
0. Since the maze doesn't change, describe each state as a whole number based on the coordinates
of the crate and the player. - This defines a state uniquely and reduces memory costs.
1. Create a dictionary of visited states.
2. Get the input positions of the goal, crate and player.
3. Set up a Queue of move sequences.
4. Pop a move sequence from the Queue.
5. If this move sequence wins the game, go to step 9.
6. Make new move sequences which are copies of the original, each with a different legal move appended.
7. Append these new move sequences to the Queue.
8. Go to step 4
9. Print the output.
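For reference, the steps above might look like this in Python for the one-crate case (my own sketch, not your code). Note that deduplicating states at enqueue time, using the single-integer encoding from step 0, is essential or the queue explodes:

```python
from collections import deque

def solve(width, height, walls, start_player, start_crate, goal):
    """BFS over (crate, player) states encoded as a single integer.
    Cells are indexed y * width + x; walls is a set of such indices.
    Assumes the maze is ringed by walls, so +-1 moves cannot wrap rows."""
    n = width * height
    def enc(crate, player): return crate * n + player   # step 0: unique state id
    moves = {'N': -width, 'S': width, 'W': -1, 'E': 1}
    seen = {enc(start_crate, start_player)}              # step 1: visited states
    queue = deque([(start_crate, start_player, "")])     # step 3: queue of sequences
    while queue:
        crate, player, path = queue.popleft()            # step 4
        if crate == goal:                                # step 5
            return path
        for mv, d in moves.items():                      # step 6: legal moves
            np = player + d
            if not 0 <= np < n or np in walls:
                continue
            nc = crate
            if np == crate:                              # stepping into the crate pushes it
                nc = crate + d
                if not 0 <= nc < n or nc in walls:
                    continue
            key = enc(nc, np)
            if key not in seen:                          # dedupe before step 7
                seen.add(key)
                queue.append((nc, np, path + mv))
    return None                                          # unsolvable
```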
This is, of course, a relatively simple algorithm. The problem is that it isn't fast enough. In one of the final test cases, we're thrown a 196 x 22 maze-like "level" which has a solution that takes 2300 steps. We're asked to solve this level within 10 seconds, but it takes my algorithm more than 10 minutes.
Because of that, I'm kinda at a loss. I've already managed to increase the algorithm's speed 10 fold, and I still have 2 orders of magnitude to go...
Hence why I'm asking here: What makes this algorithm so slow, and how can I speed it up?
Yes, your exhaustive BFS will be slow. You spend a large amount of your tree search on moves that are utterly wasted, with your player thrashing around the maze to no avail.
Change the focus of your goal: first, solve the maze for the crate rather than sending the player every which way. Include a heuristic for moving the crate closer to the goal spot. Make sure that the crate moves are possible: that there is a "push-from" spot available for each move.
One initial heuristic is to fill the maze with raw distances to the goal: start either at the goal (what I've done here) and increment the steps outward through the maze, or start at the box and increment from there.
MMMMMMMM
M54321?M
M6543TTM
M7654TTM
M876567M <== crate is on the farther 6
M987678M <== player is on the nearer 7
Ma98789M
MMMMMMMM
Here, you would first try to find legal pushes to move the box along the path 654321?. You can also update this by making a penalty (moving the player without pushing) for any direction change.
These heuristics will give you a very good upper bound for a solution; you can then retrace decision points to try other paths, always keeping your "shortest solution" for any position.
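The fill above can be produced with a breadth-first flood fill from the goal; here is a sketch in Python (treating both 'M' and 'T' as blocked, as in the diagram, and using the question's grid-of-strings input):

```python
from collections import deque

def distance_fill(grid):
    """Flood-fill step distances outward from the goal '?'.
    'M' and 'T' count as blocked; everything else is open."""
    h, w = len(grid), len(grid[0])
    goal = next((x, y) for y in range(h) for x in range(w) if grid[y][x] == '?')
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < w and 0 <= ny < h and grid[ny][nx] not in 'MT'
                    and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist
```

Running it on the example maze gives the crate's square a distance of 6, matching the diagram.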
Also keep track of where you've been, so that you don't waste time in position loops: never repeat a move (position and direction).
Does that help you get going?
Instead of using a pure brute-force search of the player's movements, consider only the crate moves available to you at the time. For instance, in the very first frame of your gif, at the beginning of the simulation, the only crate move possible is pushing it one square to the right.
An analogy would be for a game of chess on the first move, you would not consider any queen or bishop moves since they are all blocked by pawns.
After you've successfully found the sequence of crate moves leading to the solution, come back and trace the player moves necessary to construct the sequence of crate moves.
This improves the time complexity because the search space is now based on the possible crate positions instead of all player positions on the map.
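A sketch of that crate-centric search in Python (my own illustration, with the usual normalization trick: two states are equivalent when the crate matches and the player stands anywhere in the same reachable region):

```python
from collections import deque

DIRS = {'N': (0, -1), 'S': (0, 1), 'W': (-1, 0), 'E': (1, 0)}

def reachable(grid, crate, start):
    """Cells the player can reach without pushing the crate."""
    h, w = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for dx, dy in DIRS.values():
            nx, ny = x + dx, y + dy
            if (0 <= nx < w and 0 <= ny < h and grid[ny][nx] not in 'MT'
                    and (nx, ny) != crate and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return seen

def crate_bfs(grid, crate, player, goal):
    """BFS over crate positions; a push in direction d is legal when the
    square behind the crate is open and the player can reach it."""
    h, w = len(grid), len(grid[0])
    seen = {(crate, min(reachable(grid, crate, player)))}   # canonical player cell
    queue = deque([(crate, player, [])])
    while queue:
        crate, player, pushes = queue.popleft()
        if crate == goal:
            return pushes
        region = reachable(grid, crate, player)
        for name, (dx, dy) in DIRS.items():
            behind = (crate[0] - dx, crate[1] - dy)          # where the player pushes from
            dest = (crate[0] + dx, crate[1] + dy)            # where the crate lands
            if (behind in region and 0 <= dest[0] < w and 0 <= dest[1] < h
                    and grid[dest[1]][dest[0]] not in 'MT'):
                state = (dest, min(reachable(grid, dest, crate)))
                if state not in seen:
                    seen.add(state)
                    queue.append((dest, crate, pushes + [name]))
    return None
```

On the example maze this finds the six pushes (three north, three east) behind the ten-move answer; recovering the actual player moves afterwards is a series of ordinary shortest-path searches between consecutive pushes.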

Minimize service requests

I've got a WCF service from which I can get the distance in metres between two points (latitude and longitude) with the contract method:
public double GetDistance(double originLat, double originLng, double destLat, double destLng)
One of the points is a constant point, and the other is one of several locations I need to extract from a database according to some other information I receive. The end goal is to get the five locations closest to that constant point.
Imagine that using the WCF service costs money per request. With the most direct approach, I would need to get all the locations from the database and then make a request to the service for each one. Is there a way to do better, for example by filtering the locations in the database so that I make fewer requests to the service?
This method is just a mathematical function, so there's no need to host it in a WCF service. Whatever is calling this service should just have its own local version of this method. That will minimize the service requests by eliminating them, and it will be insanely faster.
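If the service is computing straight-line (great-circle) distance, the standard haversine formula is all it does, and you can evaluate it locally for free. A sketch (mine, not the service's actual implementation; if the service returns road distance this does not apply):

```python
import math

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in metres between two lat/lng points,
    assuming a spherical Earth of radius 6371 km; a local stand-in
    for a straight-line GetDistance service call."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```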
From the additional details, it sounds like you're also executing a query that returns a number of points, and out of those points you want to find the five that are closest to a given location.
Caching only helps if you're making the same requests with some frequency. It's possible that the first query, which returns a collection of points, might get repeated, so it might make some sense to cache the collection of points for a given query.
But unless the location that you're comparing to those points is also frequently repeated, adding it would mess up your caching.
For example, this might benefit from caching...
Points[] GetPointsUsingSomeQuery(queryInput)
...if queryInput repeats over and over.
But if you change it to this...
Points[] GetPointsClosestToSomeLocation(queryInput, Point location)
...then any benefit of caching goes out the window if location isn't frequently repeated. You'd just be caching a bunch of data and never using it because you never make the exact same request twice.
That's also why caching probably won't help with your original function. Unless you're going to repeat exact combinations over and over, you'd never find the result you're looking for in the cache. Even if it repeats occasionally it probably isn't worth it. You'd still make a lot of requests and you'd also store lots of data you're not using in the cache.
Your best bet is to overcome whatever constraint says that you can't execute this mathematical function locally.
If you are trying to find point to point distance or flight distance between 2 long/lat points then you can look at the answer below:
SO Answer
If you are checking distance by road, then your only option is to cache the results between those points if it is called often. Be careful with caching: your provider might forbid it, so best check their T&Cs.
In the end, the answer is to treat the (longitude, latitude) pairs as (x, y) coordinates and calculate the length of the line from the starting point to the current (x, y) with the formula:
d = sqrt((x1-x2)^2 + (y1-y2)^2)
We first read 5 points, calculating each length and keeping the maximum distance together with its point (with a stack or a list, in order to keep several distances and points). For each further point we read, we simply calculate the distance and replace the stored maximum if the new distance is lower.
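Under that flat-plane approximation, the "keep the five closest" pass can be done in one scan with a small max-heap instead of sorting everything; a sketch in Python (all names are mine):

```python
import heapq
import math

def five_closest(origin, locations):
    """One pass over the candidate points, keeping only the five nearest in a
    max-heap of size 5 (heapq is a min-heap, so distances are negated)."""
    ox, oy = origin
    heap = []                              # entries: (-distance, location)
    for loc in locations:
        d = math.hypot(loc[0] - ox, loc[1] - oy)
        if len(heap) < 5:
            heapq.heappush(heap, (-d, loc))
        elif d < -heap[0][0]:              # closer than the farthest point kept
            heapq.heapreplace(heap, (-d, loc))
    return [loc for _, loc in sorted(heap, reverse=True)]   # nearest first
```

If the locations live in a database, you can also prefilter with a simple bounding-box WHERE clause on latitude/longitude before running this in application code, so fewer rows are fetched at all.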

How do I create a test to see if my A.I. is perfect?

I made a tic-tac-toe A.I. Given any board state, my A.I. will return exactly one place to move.
I also made a function that loops through all possible plays made against the A.I.
So it's a recursive function that lets the A.I. make a move for a given board, then lets the other player make all possible moves, calling itself with a new board for each possible move.
I do this for when the A.I. goes first and for when the other player goes first, and add these together. I end up with 418 possible wins, 115 possible ties, and 0 possible losses.
But now my problem is: how do I maximize the number of wins? I need to compare this statistic to something, but I can't figure out what to compare it to.
My feeling is that the stats you're quoting are already pretty good. Two expert Tic-Tac-Toe players will always end in a tie, and there is no way to force a win if your opponent knows how to play the game.
Update
There's probably a more elegant way to prove the correctness of your A.I., but the most straightforward approach would be brute force. Just enumerate all possible board positions as a game tree, and prune the branches that lead directly to a loss. Then for each branch in the tree you can work out the probability of a win resulting from following that branch. Then you just need to test your A.I. on each board position and make sure it's picking the branch with the highest probability of a win.
You should start by observing that move 9 is always forced: there is only one empty square on the board. Move 8 can be considered forced as well, because after seven moves there are exactly three possible situations:
O can win on the next move, in which case it takes the win
Placing an X in either one of the two remaining squares wins the game for X, in which case O has lost regardless of its next move
X has zero or one path to victory, in which case O blocks to force a draw
This means that the game is over after at most seven moves.
Also observe that there are only three opening moves: the center, a corner, or a side. It does not matter which of the four corners or sides you take, because the board can be rotated to match a "canonical" opening (the upper-left corner or the middle of the top side).
You can now build your state analysis code. Starting with each of the three possible openings, search with backtracking up to six additional moves using all squares that are open by the time you make the move. After each move, analyze the position to see if X or O has already won; mark wins by X as Wx, and wins by O as Wo. The remaining positions are undecided.
Do not explore positions after Wx or Wo: simply return to the prior step, reporting the win by the corresponding side.
When you reach the seventh move, statically analyze the position to decide which of the three situations described above applies, marking the position as a Wx, a Wo, or a Draw.
Now to the most important step: when you backtrack to move N-1 by player p,
If one of the moves that you try leads to a position marked Wp, declare the current position a Wp as well.
If all of the moves that you try lead to a win for the opponent, declare the current position a win for the opponent.
Otherwise, declare the current position a Draw, and return to the prior level.
If you do this right, all three opening positions will be classified as a Draw. You should see some forcible wins after three moves.
Running this procedure classifies each position as a Wx, Wo, or a Draw. If your AI gets you a win for the player p in a position classified as Wp, or gets you a draw in a position classified as a Draw, then your AI is perfect. If, on the other hand, there are positions that are statically classified as Wp in which the AI gets p only a draw, then your AI engine needs an improvement.
Additional reading: you can find additional insights into the game in this article describing methods of counting possible games of Tic-Tac-Toe.
What you're doing is more linear optimisation than A.I. I'll not describe all the linear algebra of Tic-Tac-Toe here; there are plenty of examples on the net.
So using linear algebra, you don't have to prove anything about your results (searching for magic statistics, etc.), because your results can be validated by a simple solution-injection into the original equation.
In conclusion, there are two cases:
You're using simple "deduction" logic (which is in reality a non-formal linear-algebra formulation): we can't find a ready-to-use method for checking your results without looking at your code. EDIT: as Andrew Cooper suggests, brute force is a ready-to-use method that doesn't need to look at your code.
You're using a formal linear-algebra formulation: your results can be validated by simply injecting the solution into the original equation.
The only thing you can compare is one potential move against another. Whenever it's the computer's turn to make a move, have it play out all possible games from that point on, and choose the move that leads to the highest possible amount of wins. You can't always win, but you can give the opponent more chances to make a bad move.
Or, you can always try the tic tac toe algorithm in the link below:
Tic Tac Toe perfect AI algorithm: deeper in "create fork" step
Given that we know:
one cannot force a win
with optimal strategy one cannot lose
your AI has already proven to be optimal if:
you searched the full tree when playing against it
and your AI is deterministic (if it were rolling dice at certain stages, you would have had to play against all combinations)
It did not lose, and you cannot demand that it win. The wins it did achieve do not count, as your full-tree search included bad moves as well. That's all; you are done.
Just for fun:
If you had no a priori knowledge about the chances to win/draw/lose a game, a common strategy would be to persistently save lost positions. In the next game you would try to avoid them; if you can't avoid a move into a lost position, you have found another lost position. This way you can learn not to lose against a certain strategy (if possible), or to find an error in your own strategy.
In order for your tic-tac-toe AI to be proven correct, it needs to satisfy two conditions:
It must never lose.
When the opponent makes a mistake that opens up a forced win, it must win.
Both conditions derive from the fact that if both players play optimally, the tic-tac-toe always ends in a draw.
One automatic method of determining whether your program fulfills these two conditions is to construct what is called a "minimax tree" of every possible tic-tac-toe game. The minimax tree completely characterizes the optimal move for each player, so you can use it to see if your program always selects the optimal move. This means that my answer essentially boils down to, "Write a perfect AI, and then see if it plays the same way as your own AI." However, the minimax algorithm is useful to know, and to my knowledge, this is the only way to test if your AI actually plays optimally.
Here is how the minimax algorithm works (For a gif explanation, see Wikipedia. There's also some pseudocode in the Wikipedia article on minimax.):
Beginning with the tic-tac-toe setup under consideration, construct a tree of all possible subsequent moves. The initial position is at the root node. At the lowest level in the tree, you have all of the possible final positions.
Assign a value of +1 to all final positions in which the first player wins, a value of -1 to all positions in which the second player wins, and a value of 0 to all ties.
Now we propagate these values up the tree to the root node. Assume that each player plays optimally. In the last move, Player One will select any move that has a value of +1, i.e. a move that wins the game. If no move has a value of +1, Player One will select a move with value 0, tying the game. Thus, nodes where it is player Player One's move are assigned the maximum value of any of their child nodes. Conversely, when it is Player Two's move, they prefer to select moves with a value of -1, which win them the game. If no winning moves are available, they prefer to tie the game. Thus, nodes where it is Player Two's turn are assigned a value equal to the minimum of their child nodes. Using this rule, you can propagate values from the deepest level in the tree all the way up to the root node.
If the root node has a value of +1, the first player should win with optimal play. If it has a value of -1, the second player should win. If it has a value of 0, optimal play leads to a draw.
You can now determine, in each situation, whether your algorithm selects the optimal move. Construct a tree of all possible moves in tic-tac-toe, and use the minimax algorithm to assign +1, 0 or -1 to each move. If your program is Player One, it is optimal if it always selects the move with the maximum value. If it plays as Player Two, it is optimal if it always selects the move with the minimum value.
You can then loop through every move in the tree, and ask your AI to select a move. The above tells you how to determine if the move it selects is optimal.
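For tic-tac-toe, the whole minimax tree fits comfortably in memory, so the procedure above can be written in a few lines; a sketch in Python (the board is a 9-character string, memoized so repeated positions are valued only once):

```python
import functools

EMPTY = ' '
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

@functools.lru_cache(maxsize=None)
def minimax(board, to_move):
    """Value of a position: +1 if X wins under optimal play, -1 if O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if EMPTY not in board:
        return 0
    other = 'O' if to_move == 'X' else 'X'
    values = [minimax(board[:i] + to_move + board[i + 1:], other)
              for i, cell in enumerate(board) if cell == EMPTY]
    return max(values) if to_move == 'X' else min(values)
```

minimax(' ' * 9, 'X') evaluates to 0, confirming that optimal play draws; your AI's move from a position is then optimal exactly when the child it selects has the same value as the position itself.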
I would use a decision tree to solve this problem.
Putting it in simple words, decision trees are a method to recursively calculate the expectancy (and chance) of the end result. Each "branch" in the tree is a decision whose expectancy is calculated from the sum of (value x chance) over the outcomes possible for that decision.
In a limited-options scenario (like tic-tac-toe) you can have the entire tree pre-calculated, and therefore after each move of the human player (chance) you can choose (decision) the next branch which has the highest expectancy of winning.
In a chess game the solution is similar, but the tree is not pre-built: after each move the computer calculates the value of every possible move on the board to a depth of n moves ahead, choosing the best, second-best, or n-th best expectancy depending on the difficulty selected by the player.

Finding a Path through a Multidimensional Array

I started work on a dungeon crawler in C# and I've already coded the level generation.
However, I've run into a problem. My level map is stored in a 32x32 multidimensional array, and each tile is stored as a string. The following tiles (all of these names are the variable names that represent the tile) cannot be walked over: mongroveplant, tree, hjalaplant, vnosplant, barraplant, weedplant, naroplant, deathweedplant, venustrap, strangulator, statue, emptiness and stonewall.
The tiles which can be walked over, which constitute a much longer list, are found here: Walkable Tiles. Every entry in the 32x32 multidimensional array is a string.
How do I create a pathfinding algorithm that avoids all the tiles listed above, but can go through all the tiles listed in the link? I am trying to go from the "start" tile to the "exitlevel" tile.
The first thing I would remove is the notion of strings. Parsing strings isn't quick in a video game. What you want is to have flags for each tile (bitfields). In the end, you will love flags, because you can combine them!
[Flags]
public enum TileDescription
{
    Walkable = 1 << 0,
    Trap     = 1 << 1,
    Altar    = 1 << 2,
    Door     = 1 << 3
}
Note the explicit power-of-two values: without them the members default to 0, 1, 2, 3 and cannot be combined independently. They can also be stored as an int, which takes far less space. Speed and space: two amazing notions.
As for the path-finding algorithm, there are plenty of them out there. But basically, you have a start point and an end point, and you must find the quickest way between the two. The idea is to check the nearest "nodes" and see whether you get closer to your goal. Each time, you repeat the check with the new node. If you get trapped, you rewind to the nodes that still have available paths.
You have some nice basic algorithms:
http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
http://en.wikipedia.org/wiki/A*_search_algorithm
However, long-range pathfinding is always costly: finding the quickest route through a whole 32x32 maze every frame adds up, so you will want to limit the pathfinding to a specific range around the origin. In most cases, when the goal is beyond that range, you move your NPC towards the closest reachable point, then repeat the pathfinding when it arrives, or while it is on the way. The trick with pathfinding is to spread it over many frames and never try to process it all at once.
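Given the tile names from the question, a plain breadth-first search over the grid of strings already finds a shortest path on a 32x32 map; a sketch in Python (BLOCKED is the question's list of impassable tiles, while the walkable name "grass" in the test is made up):

```python
from collections import deque

BLOCKED = {"mongroveplant", "tree", "hjalaplant", "vnosplant", "barraplant",
           "weedplant", "naroplant", "deathweedplant", "venustrap",
           "strangulator", "statue", "emptiness", "stonewall"}

def find_path(level):
    """BFS from the 'start' tile to the 'exitlevel' tile over a grid of
    tile-name strings; returns the list of (row, col) steps, or None."""
    rows, cols = len(level), len(level[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if level[r][c] == "start")
    prev = {start: None}                 # also serves as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if level[r][c] == "exitlevel":   # walk the prev links back to start
            path, node = [], (r, c)
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in prev
                    and level[nr][nc] not in BLOCKED):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None                          # exit unreachable
```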

Scent based pathfinding using C# in games

I was wondering if anyone has knowledge of implementing pathfinding using scent: the 'enemy' moves towards whichever of the surrounding nodes has the strongest scent.
Thanks
Yes, I did my university final project on the subject.
One of the applications of this idea is for finding the shortest path.
The idea is that the 'scent', as you put it, will decay over time. But the shortest path between two points will have the strongest scent.
Have a look at this paper.
What did you want to know, exactly?
Not quite clear what the question is in particular, but this just seems like another way of describing the Ant colony optimization problem:
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs.
Well, think about it for a minute.
My idea would be to divide the game field into sections of 32x32 (or whatever size your character is). Then run some checks every x seconds (so if they stay still, the tiles around them accumulate more scent) to figure out how strong the scent is on any given tile. Some examples might be: 1) if you cross over the tile, add 3; 2) if you cross over an adjacent tile, add 1.
Then add things like degradation over time, reduce every tile by 1 every x seconds until it hits zero.
The last thing you will need to worry about is the AI that tracks this path. I would recommend just putting the AI somewhere and telling it to find a node with a scent, then go to an adjacent node with a higher or equal scent value. Also think about paths that cross: if the player goes up a path and then comes back down it in another direction, make sure the AI doesn't just take the looped-back shortcut.
The last thing to look at with the AI would be to add a bit of error. Make the AI take the wrong path every once in a while. Or lose the trail a little more easily.
Those are the key points, I'm sure you can come up with some more, with some more brainstorming.
Every game update (or some other, less frequent time frame), increase the scent value of nodes near to where the target objects (red blobs) are.
Decrease all node scent values by some fall-off amount to zero.
In the yellow blob's think/move function get available nodes to move to. Move towards the node with the highest scent value.
Depending on the number of nodes, the 'decrease all node scent values' step could do with optimisation, e.g. by maintaining a list of non-zero nodes to be decreased.
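The three steps above can be sketched as follows (Python, with made-up deposit/decay constants you would tune for your game):

```python
def update_scent(scent, sources, deposit=3.0, decay=1.0):
    """One tick: each target tile deposits scent on itself and a little on its
    neighbours, then the whole map decays toward zero."""
    rows, cols = len(scent), len(scent[0])
    for r, c in sources:
        scent[r][c] += deposit
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                scent[nr][nc] += deposit / 3
    for r in range(rows):
        for c in range(cols):
            scent[r][c] = max(0.0, scent[r][c] - decay)
    return scent

def best_move(scent, r, c):
    """Greedy tracker step: move to the neighbouring tile with the most scent."""
    rows, cols = len(scent), len(scent[0])
    options = [(nr, nc)
               for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
               if 0 <= nr < rows and 0 <= nc < cols]
    return max(options, key=lambda p: scent[p[0]][p[1]])
```

Keeping a set of non-zero tiles, as suggested, would let the decay loop skip all the cells already at zero.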
I see a big contradiction between the scent model and pathfinding. For a hunter in nature, finding a path by scent means finding exactly the path used by the followed subject. In games, pathfinding means finding the fastest path between two points. They are not the same thing.
1. While modelling scent, you would compute the concentration at a point as the SUM of the surrounding concentrations multiplied by various factors. Searching for the fastest path from a point means taking the MINIMUM of the times computed for the surrounding points, multiplied by various parameters.
2. Modelling scent requires a recursive model: scent spreads in all directions, including backward. In pathfinding, once you have found the shortest paths for the points surrounding the target, they won't change.
3. The level of scent can rise and fall. In pathfinding, while searching for a minimum, the result can never rise.
So the scent model is really much more complicated than what you need. Of course, what I have said is true only for the standard situation, and you may have something very special...
