For a personal learning project I'm making a simple neural network to control a simulated car through a simple maze.
To provide the network with inputs to work with, I need virtual sensors around the car to indicate how close I am to any obstacles.
How would I go about this? I've seen examples where there are lines protruding out of the vehicle that can sense how far they overlap with obstacles.
This means that, for example, if the front sensor line is 40% inside a wall, it will report the value 0.40 to the network so it knows how close the obstacle is to the front of the car. The same process would be repeated for the left, right and back sensors.
I really hope I explained myself well; I could post some pictures, but I know you guys don't like links from strangers.
Any insight would be appreciated, thanks.
I'll sketch a simple outline of how I'd tackle this:
Query objects in the environment of the car within a margin that makes sense for your application, e.g. if you want your car to respond to obstacles closer than 2 meters, make your margin 2 meters.
For these nearby objects, calculate intersections with the virtual rays of your sensors. For this you will most likely want the mathematics of a line segment–line segment intersection, which can be found here on SO: How do you detect where two line segments intersect? This of course requires you to be able to model your environment using line segments; if you have curved objects, a multi-line-piece approximation might suffice. Alternatively, define an interface for your environment objects that calculates the intersection of a ray with the object itself. You can then specialise the mathematics for rectangles, circles, arcs, pedestrians, bikers, horses, etc. Make a deliberate tradeoff between how accurate the sensor distance should be and how much time you want to spend writing intersection code.
For each sensor ray, pick the object that produced the closest intersection (a small sketch of steps 2 and 3 follows below).
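Here is a minimal sketch of those two steps in plain C#. All names are my own, and it assumes the environment is modeled as wall segments, per the assumption above:

```csharp
using System;

// Each sensor is a line segment from the car outward; the reading is based on
// the normalized distance along the ray to the nearest wall intersection.
struct Vec2
{
    public float X, Y;
    public Vec2(float x, float y) { X = x; Y = y; }
    public static Vec2 operator -(Vec2 a, Vec2 b) => new Vec2(a.X - b.X, a.Y - b.Y);
    public static float Cross(Vec2 a, Vec2 b) => a.X * b.Y - a.Y * b.X;
}

static class Sensors
{
    // Standard segment-segment test: solve p + t*r = q + u*s for t and u.
    // Returns t in [0,1] along the sensor ray if the segments intersect, else null.
    public static float? Intersect(Vec2 p, Vec2 r, Vec2 q, Vec2 s)
    {
        float denom = Vec2.Cross(r, s);
        if (Math.Abs(denom) < 1e-9f) return null;           // parallel or collinear
        Vec2 qp = q - p;
        float t = Vec2.Cross(qp, s) / denom;
        float u = Vec2.Cross(qp, r) / denom;
        return (t >= 0 && t <= 1 && u >= 0 && u <= 1) ? t : (float?)null;
    }

    // Sensor reading: 0 = nothing in range, 1 = wall touching the car,
    // matching the 0.40 convention from the question.
    public static float Read(Vec2 origin, Vec2 ray, (Vec2 a, Vec2 b)[] walls)
    {
        float nearest = 1f;                                  // t of closest hit so far
        bool hit = false;
        foreach (var w in walls)
        {
            float? t = Intersect(origin, ray, w.a, w.b - w.a);
            if (t.HasValue && t.Value < nearest) { nearest = t.Value; hit = true; }
        }
        return hit ? 1f - nearest : 0f;
    }
}
```

You would call Read once per sensor (front, left, right, back), each with its own ray direction, and feed the four values straight into the network.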
So I have been tasked with making a 2D top-down racing game over the summer for college, and I have been dreading doing the AI, but it is finally time. I have googled many variations of the same question trying to find someone asking the same thing, but it seems everyone uses Unity rather than Monogame.
So I have an "enemy" car which can accelerate (as in, slowly speeds up to top speed), decelerate and steer left and right. I have the actual car the player drives working fine, but the game is boring when the player isn't racing against anyone. All I need is a very basic AI which will follow a path around the course and will readjust if it gets knocked or something happens to it. I don't even know where to start, please help! Let me know if you need any more details.
I may be misunderstanding your question, but it does not seem like you are looking for AI capabilities in your enemy car: "All I need is a very basic AI which will follow a path around the course and will readjust if it gets knocked or something happens to it." AI typically implies learning, but nowhere does it seem that you need your car to learn from past mistakes/"experiences". It sounds like you can use a pathfinding algorithm to solve your problem, since you have no requirement that the car actually learn from previous interactions with other cars, fields, etc. A super popular algorithm you can look into is A*. You can set up your game as a graph whose "boost" edges are weighted lower than the common "road" edges. The obstacles (or, in pathfinding terms, walls) can be represented as high-weight edges, which would cause your car to avoid them automatically, by nature of A* finding the fastest path to a point. (A small sketch of this weighting idea follows the links below.)
A* explanation with pseudocode: https://en.wikipedia.org/wiki/A*_search_algorithm
Great visualizer tool: https://qiao.github.io/PathFinding.js/visual/
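To make the weighted-graph idea concrete, here is a tiny sketch in plain C#. The cell types and weights are illustrative, not from any library:

```csharp
// Each grid cell gets a traversal cost, and whatever A* implementation you
// use reads that cost when expanding neighbors. Boost pads are cheap, road
// is normal, walls are effectively impassable.
enum Cell { Road, Boost, Wall }

static class TrackCosts
{
    public static float Cost(Cell c)
    {
        switch (c)
        {
            case Cell.Boost: return 0.5f;  // cheaper than road, so A* prefers it
            case Cell.Road:  return 1.0f;
            default:         return float.PositiveInfinity;  // wall: never chosen
        }
    }

    // Inside A*, the g-score update becomes:
    //   tentative = g[current] + Cost(grid[nx, ny])
    // while the heuristic h stays the usual straight-line distance to the goal.
}
```

Any off-the-shelf A* implementation that accepts a per-cell cost can consume Cost() directly.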
Accelerating/Decelerating
As for accelerating/decelerating, that can be separate logic, such as random rolls deciding whether or not to speed up.
If it gets knocked or something happens to it
You can re-run A* when the car is hit to ensure that your car gets the new fastest path back onto the course. The actual collision logic is up to you (it is not part of the A* algorithm).
Note that if you are planning to have nothing more than a straight path in which the cars can steer (meaning there are no crazy bends or turns), the A* should not need much variation from the standard algorithm. If you are planning to support that kind of track, you may need to look into slightly different algorithms, because you will need to keep track of the rotated angle of your car.
What you need to implement depends, of course, on how complex your AI needs to be. If all it needs to do is readjust its steering and monitor its speed, a basic AI car could, at a given time step...
Accelerate if not at top speed
Decelerate if cooling down from a boost
Steer away from the track boundaries
Decide whether or not to boost
(1) and (2) are easy enough to implement at a given time interval: something like `if (speed < maxSpeed) { accel(); } else if (speed > maxSpeed) { decel(); }`, where a double `maxBoostSpeed` exists to limit speed during a boost.
(3) and (4) could be achieved by projecting a trajectory in front of the car with something like `[x + speed * Math.cos(angle), y + speed * Math.sin(angle)]`. Then (3) could be achieved by steering towards the center of the track, and (4) by extending the trajectory into a line and finding the distance to the next track boundary, i.e. the next turn. If that distance to the trajectory intersection is large, it may be time to boost.
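Putting (1)-(4) together, a rough per-step update might look like the following. This is only a sketch: `ITrack` and its two queries are hypothetical stand-ins for however your track exposes its geometry.

```csharp
using System;

class EnemyCar
{
    public float X, Y, Angle, Speed;
    public float MaxSpeed = 5f, MaxBoostSpeed = 8f, Accel = 0.1f;
    bool boosting;

    public void Update(ITrack track)
    {
        float cap = boosting ? MaxBoostSpeed : MaxSpeed;
        if (Speed < cap) Speed += Accel;                 // (1) accelerate toward cap
        else if (Speed > cap) Speed -= Accel;            // (2) cool down after a boost

        // Project the trajectory one step ahead, as in the answer above.
        float aheadX = X + Speed * (float)Math.Cos(Angle);
        float aheadY = Y + Speed * (float)Math.Sin(Angle);

        // (3) steer back toward the track center if the look-ahead point drifts out.
        Angle -= track.SignedOffsetFromCenter(aheadX, aheadY) * 0.05f;

        // (4) boost when there is a long straight before the next boundary.
        boosting = track.DistanceToBoundary(X, Y, Angle) > 50f;

        X = aheadX; Y = aheadY;
    }
}

interface ITrack
{
    float SignedOffsetFromCenter(float x, float y);      // + right of center, - left
    float DistanceToBoundary(float x, float y, float angle);
}
```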
Some objects which I have placed at position (-19026.65, 58.29961, 1157) from the origin (0,0,0) are rendering with issues; the problem is referred to as spatial jitter. The objects render with black spots/lines, or maybe it is mesh flickering. (I can't describe the problem precisely; the picture should help you understand it.)
I have also tried changing the camera's near and far clipping planes, but it was useless. Why am I getting this? Maybe my object and camera are too far away from the origin.
Remember:
I have a large environment, and some of my game objects (where the problem is) are at position (-19026.65, 58.29961, 1157). I guess the problem is that the object and camera are very far from the origin (0,0,0). I found numerous discussions, which are listed below:
GIS Terrain and Unity
Unity Coordinates bound Question at unity
Unity Coordinate and Scale - Post
Infinite Runner and Unity Coordinates
I couldn't find what the minimum or maximum limits are for placing an object in Unity so that it renders correctly.
Since the world origin is a Vector3(0,0,0), the maximum coordinate at which you can place an object is 3.402823 × 10^38, since positions are floating point. However, as you are finding, this does not necessarily mean that placing something there will ensure it works properly. Your limitation will be bound by what other performance factors you have in your game. If you need to have items placed that far out in world space, consider building objects at runtime based on where the camera is. This allows things to work at different distances from the origin.
Unity suggests it is not recommended to go any further than 100,000 units away from the center; the editor will warn you. If you look at today's gaming world, many games move the world around the player rather than the player around the world.
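A minimal sketch of that "move the world" idea in Unity C# (my own assumptions: the script sits on the player, and everything that must stay in sync is a root object in the active scene):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class FloatingOrigin : MonoBehaviour
{
    public float threshold = 5000f;   // shift well before precision degrades

    void LateUpdate()
    {
        Vector3 offset = transform.position;
        if (offset.magnitude < threshold) return;

        // Move every root object so the player ends up back near (0,0,0).
        // The player is itself a root object, so it gets snapped to the origin
        // while everything else keeps its relative position.
        foreach (GameObject root in SceneManager.GetActiveScene().GetRootGameObjects())
            root.transform.position -= offset;
    }
}
```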
To quote Dave Newson's site:
Floating Point Accuracy
Unity allows you to place objects anywhere within the limitations of the float-based coordinate system. The limitation for the X, Y and Z Position Transform is 7 significant digits, with a decimal place anywhere within those 7 digits; in effect you could place an object at 12345.67 or 12.34567, for just two examples.

With this system, the further away from the origin (0.000000 - absolute zero) you get, the more floating-point precision you lose. For example, accepting that one unit (1u) equals one meter (1m), an object at 1.234567 has a floating point accuracy to 6 decimal places (a micrometer), while an object at 76543.21 can only have two decimal places (a centimeter), and is thus less accurate.

The degradation of accuracy as you get further away from the origin becomes an obvious problem when you want to work at a small scale. If you wanted to move an object positioned at 765432.1 by 0.01 (one centimeter), you wouldn't be able to, as that level of accuracy doesn't exist that far away from the origin.

This may not seem like a huge problem, but this issue of losing floating point accuracy at greater distances is the reason you start to see things like camera jitter and inaccurate physics when you stray too far from the origin. Most games try to keep things reasonably close to the origin to avoid these problems.
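You can see the quoted effect directly in a couple of lines of C#:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // At ~765432 the gap between adjacent float values is 0.0625,
        // so adding 0.01 rounds straight back to the original value.
        float far = 765432.1f;
        Console.WriteLine(far + 0.01f == far);    // True: the centimeter move is lost

        // Near the origin the same move is easily representable.
        float near = 1.234567f;
        Console.WriteLine(near + 0.01f == near);  // False
    }
}
```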
I have a planet mesh that I am generating procedurally (based on a random seed). The generator creates tectonic plates and moves them, creating the land masses, mountains and water depressions you see below; ~2500 points per planet. The textured surface is just a shader.
Each vertex will have information associated with it, and I need to know which vertex the user is pointing at in order to relay this information.
I am looking for a way to identify which vertex that is. The current solution is to generate a cube at each vertex and then use a collider/ray to identify it. The two white cubes in the picture above are for testing.
What I want to know is whether there is a better way to identify the vertex without generating cubes.
If you're doing such awesome and advanced work, surely you know about this:
http://docs.unity3d.com/ScriptReference/RaycastHit-triangleIndex.html
When working with meshes, needing the nearest vertex is totally commonplace.
Note that, very simply, you just find the nearest one, i.e. loop over them all and keep the nearest.
(It's incredibly fast to do this; you have only a tiny number of verts, so there's no way the performance will even be measurable.)
(Consider that, of course, you could break the object into, say, 8 pieces - but that's just something you have to do anyway in many cases, for example a race track, so it can occlude properly.)
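A sketch of the triangleIndex route in Unity C# (it assumes the planet has a MeshCollider, which triangleIndex requires; the component name is my own):

```csharp
using UnityEngine;

public class VertexPicker : MonoBehaviour
{
    void Update()
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out RaycastHit hit)) return;

        MeshCollider mc = hit.collider as MeshCollider;
        if (mc == null) return;            // triangleIndex only works on MeshColliders

        Mesh mesh = mc.sharedMesh;
        int[] tris = mesh.triangles;       // hoisted: these properties copy arrays
        Vector3[] verts = mesh.vertices;

        // The hit triangle has three vertices; keep the one closest to the hit point.
        int best = -1;
        float bestDist = float.MaxValue;
        for (int i = 0; i < 3; i++)
        {
            int v = tris[hit.triangleIndex * 3 + i];
            Vector3 world = mc.transform.TransformPoint(verts[v]);
            float d = (world - hit.point).sqrMagnitude;
            if (d < bestDist) { bestDist = d; best = v; }
        }
        Debug.Log("Nearest vertex index: " + best);
    }
}
```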
How do I go about creating a dynamic road map, that will be capable of implementing algorithms to calculate suggested directions like any GPS system would?
The things I have thought about so far:
Creating a class Road that stores data like a list of longitude and latitude coordinates and connected roads (e.g. a coordinate plus the id of another Road that is connected at this coordinate).
Drawing the roads with Polyline from the longitude and latitude coordinates stored in Road objects.
How the algorithm that iterates through the roads should look, to prevent endless loops while finding the "best" road direction. (Any suggestions or references?)
A better way to track the current location than Geolocation (I have yet to test it on a phone, but it was very inaccurate when tested on my laptop here at home).
As to the four points above, I am unsure if this is the right way to go about building this system.
I would really appreciate some input on the Road class that I mean to create. It is the only approach I could think of that might work when iterating through the roads to find a suggested direction from point A to point B. Also, if it is, should I store a reference to another road (id) plus the coordinate of where the roads cross?
Look at Dijkstra's algorithm.
The language used is a bit different:
Your Roads are Edges.
Your Roads are joined by Vertices or Nodes.
The Map is known as a Graph.
Note that the algorithm doesn't care about where the roads are; there is no need for lat/long other than for your drawing. It just needs a cost for traveling each edge, i.e. distance or time, though the article/algorithm refers to this as distance.
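A compact sketch of Dijkstra over such a graph, in plain C# (using .NET 6's PriorityQueue; the Graph class is illustrative, not from any library):

```csharp
using System.Collections.Generic;

class Graph
{
    // Adjacency list: node id -> list of (neighbor, travel cost).
    public Dictionary<int, List<(int to, double cost)>> Adj = new();

    // Returns the predecessor map; walk it backward from the destination
    // to recover the suggested route.
    public Dictionary<int, int> Dijkstra(int start)
    {
        var dist = new Dictionary<int, double> { [start] = 0 };
        var prev = new Dictionary<int, int>();
        var pq = new PriorityQueue<int, double>();
        pq.Enqueue(start, 0);

        while (pq.TryDequeue(out int u, out double d))
        {
            if (d > dist.GetValueOrDefault(u, double.MaxValue)) continue; // stale entry
            if (!Adj.TryGetValue(u, out var edges)) continue;
            foreach (var (to, cost) in edges)
            {
                double nd = d + cost;
                if (nd < dist.GetValueOrDefault(to, double.MaxValue))
                {
                    dist[to] = nd;
                    prev[to] = u;
                    pq.Enqueue(to, nd);
                }
            }
        }
        return prev;
    }
}
```

Here a node id would correspond to a junction where your Road objects meet, and the edge cost would be the length (or expected travel time) of the road between two junctions.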
I am working on a project with a robot that has to find its way to an object and avoid some obstacles when going to that object it has to pick up.
The problem is that the robot and the object it needs to pick up are both one pixel wide in the pathfinder, while in reality they are a lot bigger. Often the A* pathfinder chooses to place the route along the edges of the obstacles, sometimes making the robot collide with them, which we want to avoid.
I have tried to add some more non-walkable fields around the obstacles, but it does not always work out very well. The robot still collides with the obstacles, and adding too many points where it is not allowed to walk results in there being no path it can run on at all.
Do you have any suggestions on what to do about this problem?
Edit:
So I did as Justin L suggested and added a lot of cost around the obstacles, which results in the following:
Grid with no path http://sogaard.us/uploades/1_grid_no_path.png
Here you can see the cost around the obstacles. Initially the middle two obstacles should look just like the ones in the corners, but after running our pathfinder it seems like the costs are overridden:
Grid with path http://sogaard.us/uploades/1_map_grid.png
Legend showing what is found on the pictures: http://sogaard.us/uploades/2_complete_map.png
Path found http://sogaard.us/uploades/3_path.png
This is the path found, which, as in our problem before, is hugging the obstacle.
The grid from before with the path on http://sogaard.us/uploades/4_mg_path.png
And another picture of the cost map with the path overlaid.
So what I find strange is why the A* pathfinder is overriding these field costs, which are VERY high.
Would it be when it evaluates the nodes inside the open list against the current field, to see whether the current field's path is shorter than the one inside the open list?
And here is the code I am using for the pathfinder:
Pathfinder.cs: http://pastebin.org/343774
Field.cs and Grid.cs: http://pastebin.org/343775
Have you considered adding a gradient cost to pixels near objects?
Perhaps one as simple as a linear gradient:
C = -mx + b
Where x is the distance to the nearest object, b is the cost right outside the boundary, and m is the rate at which the cost dies off. Of course, if C is negative, it should be set to 0.
Perhaps a simple hyperbolic decay:
C = b/x
where b is the desired cost right outside the boundary, again. Have a cut-off to 0 once it reaches a certain low point.
Alternatively, you could use exponential decay:
C = k e^(-hx)
Where k is a scaling constant, and h is the rate of decay. Again, having a cut-off is smart.
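All three decay options as code, in plain C# (parameter names follow the formulas above; the cut-off values are arbitrary placeholders):

```csharp
using System;

static class GradientCost
{
    // C = -mx + b, clamped at 0. x is the distance to the nearest obstacle.
    public static double Linear(double x, double b, double m)
        => Math.Max(0, -m * x + b);

    // C = b/x, with a cut-off to 0 once the cost gets small.
    // x must be > 0 (a distance strictly outside the boundary).
    public static double Hyperbolic(double x, double b, double cutoff = 0.01)
    {
        double c = b / x;
        return c < cutoff ? 0 : c;
    }

    // C = k * e^(-hx), again with a cut-off.
    public static double Exponential(double x, double k, double h, double cutoff = 0.01)
    {
        double c = k * Math.Exp(-h * x);
        return c < cutoff ? 0 : c;
    }
}
```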
Second suggestion
I've never applied A* to a pixel-mapped map; nearly always, tiles.
You could try massively decreasing the "resolution" of your tiles: maybe one tile per ten-by-ten or twenty-by-twenty block of pixels, the tile's cost being the highest cost of any pixel in the tile.
Also, you could try de-valuing the shortest-distance heuristic you are using for A*.
You might try to enlarge the obstacles, taking the size of the robot into account. You could round the corners of the obstacles to address the blocking problem; then any gaps that get filled in are too small for the robot to squeeze through anyway. A sketch of this idea follows.
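A minimal grid-dilation sketch in plain C# (radius is the robot's half-width in cells; names are my own). Stamping a disc rather than a square around each blocked cell also gives you the rounded corners mentioned above:

```csharp
static class ObstacleInflation
{
    // Marks every cell within `radius` cells of an obstacle as non-walkable,
    // so A* plans for the robot's real footprint rather than a single pixel.
    public static bool[,] Inflate(bool[,] blocked, int radius)
    {
        int w = blocked.GetLength(0), h = blocked.GetLength(1);
        var inflated = new bool[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
            {
                if (!blocked[x, y]) continue;
                // Stamp a disc of the given radius around each blocked cell.
                for (int dx = -radius; dx <= radius; dx++)
                    for (int dy = -radius; dy <= radius; dy++)
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (dx * dx + dy * dy <= radius * radius)
                            inflated[nx, ny] = true;
                    }
            }
        return inflated;
    }
}
```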
I've built one such physical robot. My solution was to move one step backward whenever there was both a left and a right turn to make.
The red line shows the problem as I understand it; the black line is what I did to resolve the issue. The robot moves straight backward for a step and can then turn right.