For a personal learning project I'm making a simple neural network to control a simulated car through a simple maze.
To provide the network with inputs to work with, I need virtual sensors around the car to indicate how close I am to any obstacles.
How would I go about this? I've seen examples where there are lines protruding out of the vehicle that can sense how far they overlap with obstacles.
This means that for example if the front sensor line is 40% inside a wall, it will kick back the value 0.40 to the network so it knows how close the obstacle is to the front of the car. The same process would be repeated for the left, right and back sensors.
I really hope I explained myself well. I could post some pictures, but I know you guys don't like links from strangers.
Any insight would be appreciated, thanks.
I'll sketch a simple outline of how I'd tackle this:
Query objects in the environment of the car with a margin that makes sense for your application. E.g., if you want your car to respond to obstacles closer than 2 meters, make your margin 2 meters.
For these nearby objects, calculate intersections with the virtual rays of your sensors. For this you will most likely want the mathematical result for a line-segment/line-segment intersection, which can be found here on SO: How do you detect where two line segments intersect? This of course requires you to be able to model your environment using line segments; if you have curved objects, a piecewise-linear approximation might suffice. Alternatively, define an interface for your environment objects that calculates the intersection of a ray with the object itself. You can then specialise the mathematics for rectangles, circles, arcs, pedestrians, bikers, horses, etc. Make sure you weigh how accurate the sensor distance should be against how much time you want to spend writing intersection-calculation code. (A short code sketch of the intersection step follows this list.)
For each sensor ray, pick the object that produced the closest intersection.
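To make step 2 concrete, here is a minimal C# sketch of a sensor ray tested against walls stored as line segments; the (q, s) wall representation and the SensorValue name are assumptions of this example, not part of any library:

using System;
using System.Collections.Generic;
using System.Numerics;

static class Sensors
{
    // z-component of the 3D cross product of two 2D vectors.
    static float Cross(Vector2 a, Vector2 b) => a.X * b.Y - a.Y * b.X;

    // The sensor segment runs from p to p + r. Each wall runs from q to q + s.
    // Returns 0 when nothing is in range, rising toward 1 as the obstacle
    // reaches the car, matching the "40% inside a wall => 0.40" convention above.
    public static float SensorValue(Vector2 p, Vector2 r, List<(Vector2 q, Vector2 s)> walls)
    {
        float nearest = 1f; // fraction along the sensor of the closest hit
        foreach (var (q, s) in walls)
        {
            float denom = Cross(r, s);
            if (Math.Abs(denom) < 1e-6f) continue; // parallel: no intersection
            float t = Cross(q - p, s) / denom;     // position along the sensor
            float u = Cross(q - p, r) / denom;     // position along the wall
            if (t >= 0f && t <= 1f && u >= 0f && u <= 1f)
                nearest = Math.Min(nearest, t);
        }
        return 1f - nearest; // the overlap fraction fed to the network
    }
}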
I'm trying to make an AI in C# (with Unity) that can predict the estimated position of a moving object so as to hit it with a bullet.
The moving object has a movement speed of 5f and the bullet has a speed of 7f.
My problem is that in the time my bullet travels to my estimated position, my "enemy" has already moved further, so the bullet doesn't hit.
Do you know a formula or code that I can adapt to improve my targeting AI? (I already searched Google for this but didn't find anything useful.)
Thanks.
An answer to your question from the Unreal Engine forums.
Here is the top answer from there in case the link dies. I did not write this code; I simply found it with a quick Google search of your problem, which you stated you had already tried.
Link answer:
Get the "velocity" of the target player. Multiply by the time the bullet will take to travel to the target. Then get the position of the target, add the velocity*time vector, and that's the position you should aim at. You can either hard-code the travel time (half a second, or whatever), or you can in turn measure the distance between AI and player, and divide by bullet speed, to come up with an approximate travel time. You can also apply a differential equation to calculate the exact time of impact and exact direction, but that requires a little more math and is slightly harder to write out, so I think the above will work best for you.
Simply:
Distance = Length(Target_Position - Firing_Position)
Time = Distance / Bullet_Speed
Predicted_Position = Target_Position + (Target_Velocity * Time)
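That simple formula assumes the travel time is known before the target moves; the "more math" version the quoted answer alludes to solves for the intercept time exactly, as a quadratic. A minimal Unity C# sketch, assuming the target holds a constant velocity; PredictAim is an invented helper name, not a Unity API:

using UnityEngine;

public static class Targeting
{
    // Exact intercept: solves |target + vel * t - origin| = bulletSpeed * t for t.
    // Returns the point to aim at, or the target's current position when no
    // future intercept exists (e.g. a faster target moving away).
    public static Vector3 PredictAim(Vector3 origin, float bulletSpeed,
                                     Vector3 target, Vector3 targetVelocity)
    {
        Vector3 d = target - origin;
        float a = Vector3.Dot(targetVelocity, targetVelocity) - bulletSpeed * bulletSpeed;
        float b = 2f * Vector3.Dot(d, targetVelocity);
        float c = Vector3.Dot(d, d);

        float t;
        if (Mathf.Abs(a) < 1e-5f)              // speeds nearly equal: equation is linear
        {
            if (Mathf.Abs(b) < 1e-5f) return target;
            t = -c / b;
        }
        else
        {
            float disc = b * b - 4f * a * c;
            if (disc < 0f) return target;      // bullet can never catch the target
            float root = Mathf.Sqrt(disc);
            float t1 = (-b - root) / (2f * a);
            float t2 = (-b + root) / (2f * a);
            t = Mathf.Min(t1, t2) > 0f ? Mathf.Min(t1, t2) : Mathf.Max(t1, t2);
        }
        if (t <= 0f) return target;            // no future intercept
        return target + targetVelocity * t;    // aim here
    }
}

With the speeds from the question (target 5f, bullet 7f) the discriminant is always non-negative, so an intercept point always exists while the target holds its course.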
I made a tic tac toe A.I. Given each board state, my A.I. will return 1 exact place to move.
I also made a function that loops through all possible plays made with the A.I.
So it's a recursive function that lets the A.I. make a move for a given board, then lets the other player make all possible moves, and calls the recursive function on itself with a new board for each possible move.
I do this for when the A.I. goes first, and when the other one goes first... and add these together. I end up with 418 possible wins, 115 possible ties, and 0 possible losses.
But now my problem is, how do I maximize the amount of wins? I need to compare this statistic to something, but I can't figure out what to compare it to.
My feeling is that the stats you're quoting are already pretty good. Two expert Tic-Tac-Toe players will always end in a tie, and there is no way to force a win if your opponent knows how to play the game.
Update
There's probably a more elegant way to prove the correctness of your A.I., but the most straightforward approach would be the brute-force one. Just enumerate all possible board positions as a game tree, and prune the branches that lead directly to a loss. Then for each branch in the tree you can work out the probability of a win resulting from following that branch. Then you just need to test your A.I. on each board position and make sure it's picking the branch with the highest probability of a win.
You should start by observing that move 9 is always forced: there is only one empty square on the board. Move 8 can be considered forced as well, because after seven moves there are exactly three possible situations:
O can win on the next move, in which case it takes the win
Placing an X in either one of the two remaining squares wins the game for X, in which case O has lost regardless of its next move
X has zero or one path to victory, in which case O blocks to force a draw
This means that the game is over after at most seven moves.
Also observe that there are only three opening moves: the center, a corner, or a side. It does not matter which of the four corners or sides you take, because the board can be rotated to match a "canonical" opening (the upper-left corner or the middle of the top side).
You can now build your state analysis code. Starting with each of the three possible openings, search with backtracking up to six additional moves using all squares that are open by the time you make the move. After each move, analyze the position to see if X or O has already won; mark wins by X as Wx, and wins by O as Wo. The remaining positions are undecided.
Do not explore positions after Wx or Wo: simply return to the prior step, reporting the win by the corresponding side.
When you reach the seventh move, statically analyze the position to decide which of the three situations described above applies, marking the position as Wx, Wo, or a Draw.
Now to the most important step: when you backtrack to move N-1 by player p,
If one of the moves that you try produces a position at the next level that is marked Wp, declare the current position Wp as well.
If all of the moves that you try lead to a win for the opponent, declare the current position a win for the opponent.
Otherwise, declare the current position a Draw, and return to the prior level.
If you do this right, all three opening positions will be classified as a Draw. You should see some forcible wins after three moves.
Running this procedure classifies each position as Wx, Wo, or a Draw. If your AI gets player p a win in a position classified as Wp, or gets a draw in a position classified as a Draw, then your AI is perfect. If, on the other hand, there are positions statically classified as Wp in which the AI gets p only a draw, then your AI engine needs improvement.
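To make the procedure concrete, here is a compact C# sketch of the backtracking classification, assuming the board is a char[9] of 'X', 'O' and ' '. It deliberately skips the canonical-opening reduction (which only saves time), and the win-line table is spelled out so the snippet is self-contained; all names are illustrative:

enum Result { WinX, WinO, Draw }

static class TttClassifier
{
    static readonly int[][] Lines = {
        new[]{0,1,2}, new[]{3,4,5}, new[]{6,7,8},  // rows
        new[]{0,3,6}, new[]{1,4,7}, new[]{2,5,8},  // columns
        new[]{0,4,8}, new[]{2,4,6}                 // diagonals
    };

    public static char WinnerOf(char[] b)
    {
        foreach (var l in Lines)
            if (b[l[0]] != ' ' && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
                return b[l[0]];
        return '\0';
    }

    // Classify the position with `toMove` ('X' or 'O') to play, by backtracking.
    public static Result Classify(char[] board, char toMove)
    {
        char w = WinnerOf(board);
        if (w == 'X') return Result.WinX;
        if (w == 'O') return Result.WinO;

        char other = toMove == 'X' ? 'O' : 'X';
        Result worst = toMove == 'X' ? Result.WinO : Result.WinX;
        Result best = worst;        // assume the opponent wins until proven otherwise
        bool anyMove = false;
        for (int i = 0; i < 9; i++)
        {
            if (board[i] != ' ') continue;
            anyMove = true;
            board[i] = toMove;
            Result r = Classify(board, other);
            board[i] = ' ';
            if (r != worst)
            {
                if (r == Result.Draw) best = Result.Draw;
                else return r;      // a forced win for the side to move: rule 1
            }
        }
        return anyMove ? best : Result.Draw;  // full board with no winner is a draw
    }
}

Calling Classify on an empty board, or on any of the three canonical openings, should report Draw, matching the analysis above.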
Additional reading: you can find additional insights into the game in this article describing methods of counting possible games of Tic-Tac-Toe.
What you're doing is more linear optimisation than A.I. I'll not describe all the linear algebra of Tic-Tac-Toe here; there are plenty of examples on the net.
So using linear algebra, you don't have to prove anything about your results (searching for magic statistics, etc), because your results can be validated by a simple solution-injection in the original equation.
In conclusion, there are two cases:
You're using simple "deduction" logic (which is in reality a non-formal linear algebra formulation): we can't find a ready-to-use method for checking your results without looking at your code. EDIT: as Andrew Cooper suggests, brute force can be a ready-to-use method that does not require seeing your code.
You're using formal linear algebra formulation : your results can be validated by a simple solution-injection in the original equation.
The only thing you can compare is one potential move against another. Whenever it's the computer's turn to make a move, have it play out all possible games from that point on, and choose the move that leads to the highest possible number of wins. You can't always win, but you can give the opponent more chances to make a bad move.
Or, you can always try the tic tac toe algorithm in the link below:
Tic Tac Toe perfect AI algorithm: deeper in "create fork" step
Given that we know that:
one cannot force a win
with optimal strategy one cannot lose
your AI has already proven to be optimal if:
you searched the full tree when playing against it
and your AI is deterministic (if it were rolling the dice at certain stages, you would have had to play against all combinations)
It did not lose, and you cannot demand that it win: the wins it did achieve do not count, as your full tree search included bad moves as well. That's all; you are done.
Just for fun:
If you had no a priori knowledge about the chances to win/draw/lose a game, a common strategy would be to persistently save lost positions. In the next game you would try to avoid them; if you cannot avoid moving into a lost position, you have found another lost position to record. This way you can learn not to lose against a certain strategy (if that is possible), or to avoid an error in your own strategy.
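Purely as an illustration of that idea, a tiny hedged C# fragment; the string-serialized board key and the ChooseMove/RecordLoss names are invented for this sketch:

using System.Collections.Generic;

class LossAvoider
{
    // Board states (serialized as strings) that are known to end in a loss.
    readonly HashSet<string> lostPositions = new HashSet<string>();

    // Pick the first legal move that does not enter a known lost position.
    public int ChooseMove(char[] board, char me, List<int> legalMoves)
    {
        foreach (int m in legalMoves)
        {
            board[m] = me;
            string key = new string(board);
            board[m] = ' ';
            if (!lostPositions.Contains(key)) return m;
        }
        return legalMoves[0]; // every option is known to lose: the current
                              // position is lost too, so record it after the game
    }

    // After a lost game, call this for every position the loser occupied.
    public void RecordLoss(char[] board) => lostPositions.Add(new string(board));
}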
In order for your tic-tac-toe AI to be proven correct, it needs to satisfy two conditions:
It must never lose.
When the opponent deviates from optimal play, it must win.
Both conditions derive from the fact that if both players play optimally, the tic-tac-toe always ends in a draw.
One automatic method of determining whether your program fulfills these two conditions is to construct what is called a "minimax tree" of every possible tic-tac-toe game. The minimax tree completely characterizes the optimal move for each player, so you can use it to see if your program always selects the optimal move. This means that my answer essentially boils down to, "Write a perfect AI, and then see if it plays the same way as your own AI." However, the minimax algorithm is useful to know, and to my knowledge, this is the only way to test if your AI actually plays optimally.
Here is how the minimax algorithm works (For a gif explanation, see Wikipedia. There's also some pseudocode in the Wikipedia article on minimax.):
Beginning with the tic-tac-toe setup under consideration, construct a tree of all possible subsequent moves. The initial position sits at the root node. At the lowest level in the tree, you have all of the possible final positions.
Assign a value of +1 to all final positions in which the first player wins, a value of -1 to all final positions in which the second player wins, and a value of 0 to all ties.
Now we propagate these values up the tree to the root node, assuming that each player plays optimally. In the last move, Player One will select any move that has a value of +1, i.e. a move that wins the game. If no move has a value of +1, Player One will select a move with value 0, tying the game. Thus, nodes where it is Player One's move are assigned the maximum value of any of their child nodes. Conversely, when it is Player Two's move, they prefer to select moves with a value of -1, which win them the game. If no winning moves are available, they prefer to tie the game. Thus, nodes where it is Player Two's turn are assigned a value equal to the minimum of their child nodes. Using this rule, you can propagate values from the deepest level in the tree all the way up to the root node.
If the root node has a value of +1, the first player should win with optimal play. If it has a value of -1, the second player should win. If it has a value of 0, optimal play leads to a draw.
You can now determine, in each situation, whether your algorithm selects the optimal move. Construct a tree of all possible moves in tic-tac-toe, and use the minimax algorithm to assign +1, 0 or -1 to each move. If your program is Player One, it is optimal if it always selects the move with the maximum value. If it plays as Player Two, it is optimal if it always selects the move with the minimum value.
You can then loop through every move in the tree, and ask your AI to select a move. The above tells you how to determine if the move it selects is optimal.
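A hedged C# sketch of that value propagation, reusing the WinnerOf helper from the classification sketch further up; Value is an invented name:

using System;

static class MinimaxCheck
{
    // +1 = the first player can force a win, -1 = the second player can,
    // 0 = best play from here is a draw.
    public static int Value(char[] b, char toMove)
    {
        char w = TttClassifier.WinnerOf(b);   // helper from the earlier sketch
        if (w == 'X') return +1;
        if (w == 'O') return -1;

        bool moved = false;
        int best = toMove == 'X' ? int.MinValue : int.MaxValue;
        for (int i = 0; i < 9; i++)
        {
            if (b[i] != ' ') continue;
            moved = true;
            b[i] = toMove;
            int v = Value(b, toMove == 'X' ? 'O' : 'X');
            b[i] = ' ';
            best = toMove == 'X' ? Math.Max(best, v) : Math.Min(best, v);
        }
        return moved ? best : 0;              // full board without a winner: draw
    }
}

To validate your AI, walk every reachable position, ask the AI for its move, and check that the chosen square's Value equals the maximum Value among all legal squares when the AI plays first (the minimum, when it plays second).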
I would use a decision tree to solve this problem.
Putting it in simple words, decision trees are a method to recursively calculate the expectancy (and chance) of the end result. Each "branch" in the tree is a decision whose expectancy is calculated from the sum of (value * chance) over the outcomes possible for that decision.
In a limited-options scenario (like tic-tac-toe) you can have the entire tree pre-calculated, and therefore after each move of the human player (chance) you can choose (decision) the branch which has the highest expectancy of a win.
In a chess game the solution is similar, but the tree is not pre-built: after each move the computer calculates the value of every possible move on the board to a depth of n moves ahead, choosing the best, second-best, or n-th best expectancy depending on the difficulty selected by the player.
I have been searching the web for quite some time about this, but I couldn't find anything concrete enough to help me out. I know XNA is going to die, but there is still use for it (in my heart, before I port it later to SharpDX).
I'm making a 3D FPS shooter in XNA 4.0 and I am having serious issues on setting up my collision detection.
First of all, I am making models in Blender, and I have a high-polygon and a low-polygon version of each model. I would like to use the low-polygon model for collision detection, but I'm baffled as to how to do it. I want to use JigLibX, but I'm not sure how to set my project up in order to do so.
In a nutshell: I want to accomplish this one simple goal:
Make a complicated map in Blender, have bounding boxes generated from it, and then use a quadtree to split it up. Then my main character and his gun can run around it shooting stuff!
Any help would be greatly appreciated.
I don't understand exactly what your concrete question is, but I assume you want to know how to implement collision detection efficiently in principle:
for characters: use (several) bounding boxes and bounding spheres (like a sphere for the head, and 9 boxes for the torso, legs and arms).
for terrain: use data from height-map for Y (up/down) collision detection and bounding-boxes/spheres for objects on terrain (like trees, walls, bushes, ...)
for particles - like gunfire: use points, small bounding spheres, or - even better, because it is frame-rate independent - ray casting.
In almost no case do you want to do collision detection on a polygon basis as you suggested in your post (quote: "low poly model for collision detection").
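As a concrete starting point, XNA 4.0 already ships the volume types for this; below is a minimal hedged sketch (CharacterHitsWorld and ShotDistance are invented names, and the boxes and spheres would come from your own placeholder data, not from JigLibX):

using Microsoft.Xna.Framework;

static class CollisionSketch
{
    // Coarse character-vs-world test using XNA's built-in volumes.
    public static bool CharacterHitsWorld(BoundingSphere head, BoundingBox torso,
                                          BoundingBox[] worldBoxes)
    {
        foreach (BoundingBox box in worldBoxes)
            if (head.Intersects(box) || torso.Intersects(box))
                return true;
        return false;
    }

    // Gunfire as a ray test instead of a moving object (frame-rate independent).
    public static float? ShotDistance(Vector3 muzzle, Vector3 aimDir, BoundingBox target)
    {
        Ray shot = new Ray(muzzle, Vector3.Normalize(aimDir));
        return shot.Intersects(target);   // distance along the ray, or null for a miss
    }
}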
I hope that points you in the right direction.
cheers
I am working on a project with a robot that has to find its way to an object and avoid some obstacles when going to that object it has to pick up.
The problem lies in that the robot and the object the robot needs to pick up are both one pixel wide in the pathfinder, while in reality they are a lot bigger. Often the A* pathfinder chooses to place the route along the edges of the obstacles, sometimes making the robot collide with them, which we do not want.
I have tried to add some more non-walkable fields around the obstacles, but it does not always work out very well: the robot still collides with the obstacles, and adding too many points where it is not allowed to walk results in there being no path it can run on at all.
Do you have any suggestions on what to do about this problem?
Edit:
So I did as Justin L suggested and added a lot of cost around the obstacles, which results in the following:
Grid with no path http://sogaard.us/uploades/1_grid_no_path.png
Here you can see the cost around the obstacles. Initially the middle two obstacles should look just like the ones in the corners, but after running our pathfinder it seems like the costs are overridden:
Grid with path http://sogaard.us/uploades/1_map_grid.png
Annotated map http://sogaard.us/uploades/2_complete_map.png
The picture above labels the things found on the map.
Path found http://sogaard.us/uploades/3_path.png
This is the path that was found; as before, the problem is that it hugs the obstacle.
The grid from before with the path on http://sogaard.us/uploades/4_mg_path.png
And another picture of the cost map with the path overlaid.
So what I find strange is why the A* pathfinder is overriding these field costs, which are VERY high.
Would it be when it evaluates the nodes inside the open list against the current field, to see whether the current field's path is shorter than the one inside the open list?
And here is the code I am using for the pathfinder:
Pathfinder.cs: http://pastebin.org/343774
Field.cs and Grid.cs: http://pastebin.org/343775
Have you considered adding a gradient cost to pixels near objects?
Perhaps one as simple as a linear gradient:
C = -mx + b
Where x is the distance to the nearest object, b is the cost right outside the boundary, and m is the rate at which the cost dies off. Of course, if C is negative, it should be set to 0.
Perhaps a simple hyperbolic decay
C = b/x
where b is the desired cost right outside the boundary, again. Have a cut-off to 0 once it reaches a certain low point.
Alternatively, you could use exponential decay
C = k e^(-hx)
Where k is a scaling constant, and h is the rate of decay. Again, having a cut-off is smart.
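For reference, here are the three variants as small C# helpers; the parameter names match the formulas above, and the cut-off values are whatever suits your grid (a sketch, not tuned code):

using System;

static class GradientCost
{
    // C = -mx + b, clamped at zero once the gradient runs out.
    public static float Linear(float x, float b, float m)
        => Math.Max(0f, b - m * x);

    // C = b / x, with a cut-off so far-away cells cost nothing.
    public static float Hyperbolic(float x, float b, float cutoff)
    {
        if (x <= 0f) return b;          // on the boundary itself
        float c = b / x;
        return c < cutoff ? 0f : c;
    }

    // C = k * e^(-h * x), again with a cut-off.
    public static float Exponential(float x, float k, float h, float cutoff)
    {
        float c = k * (float)Math.Exp(-h * x);
        return c < cutoff ? 0f : c;
    }
}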
Second suggestion
I've never applied A* to a pixel-mapped map; nearly always, tiles.
You could try massively decreasing the "resolution" of your tiles: maybe one tile per ten-by-ten or twenty-by-twenty block of pixels, with the tile's cost being the highest cost of any pixel in the tile.
Also, you could try de-valuing the shortest-distance heuristic you are using for A*.
You might try to enlarge the obstacles, taking the size of the robot into account. You could round the corners of the enlarged obstacles to address the blocking problem. Then any gaps that get filled in are too small for the robot to squeeze through anyway.
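A minimal C# sketch of that inflation pass over a boolean occupancy grid; Inflate is an invented name, and the circular neighbourhood is what rounds the corners:

static class GridInflation
{
    // Mark every cell within `radius` cells of an obstacle as blocked, so the
    // one-pixel-wide robot path keeps real clearance. O(w*h*r^2): fine for small grids.
    public static bool[,] Inflate(bool[,] blocked, int radius)
    {
        int w = blocked.GetLength(0), h = blocked.GetLength(1);
        var inflated = new bool[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
            {
                if (!blocked[x, y]) continue;
                for (int dx = -radius; dx <= radius; dx++)
                    for (int dy = -radius; dy <= radius; dy++)
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                            dx * dx + dy * dy <= radius * radius)   // circular: rounds corners
                            inflated[nx, ny] = true;
                    }
            }
        return inflated;
    }
}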
I've built one such physical robot. My solution was to move one step backward whenever there was both a left and a right turn to make.
The red line is the path as I understand your problem; the black line is what I did to resolve the issue: the robot moves straight backward for a step and then turns right.