How do I go about creating a dynamic road map that is capable of running algorithms to calculate suggested directions, like any GPS system would?
The things I have thought about so far:
Creating a class Road that stores data like: a list of longitude and latitude coordinates and the connected roads (e.g. a coordinate plus the id of another Road that connects at that coordinate).
Drawing the roads with Polyline from the longitude and latitude coordinates stored in the Road objects
What the algorithm that iterates through the roads should look like, so that it avoids endless loops while trying to find the "best" direction. (Any suggestions or references?)
A better way to track the current location than Geolocation (I have yet to test it on a phone, but it was very inaccurate when tested on my laptop here at home)
As to the four points above, I am unsure if this is the right way to go about building this system.
I would really appreciate some input on the Road class that I mean to create. It is the only way I could think of that might work when trying to iterate through the roads to find a suggested direction from point A to point B. Also, if it is, should I store a reference to the other road (its id) plus the coordinate where the roads cross?
Look at Dijkstra's algorithm.
The language used is a bit different:
Your Roads are Edges.
Your Roads are joined by Vertices or Nodes.
The Map is known as a Graph.
Note that the algorithm doesn't care about where the roads are; there is no need for lat/long other than for your drawing. It just needs a cost for travelling, i.e. distance or time, although the article/algorithm refers to this cost as distance.
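To make the terminology concrete, here is a minimal sketch of Dijkstra's algorithm over an adjacency-list graph. This is not from the answer above: the intersection names and costs are made up, and each Road is assumed to have already been reduced to weighted edges between the nodes where roads meet. To recover the actual route rather than just the costs, you would also record each node's predecessor whenever its distance improves.

    import heapq

    def dijkstra(graph, start):
        """graph: dict mapping node -> list of (neighbor, cost) pairs.
        Returns a dict with the cheapest known cost from start to every reachable node."""
        dist = {start: 0.0}
        visited = set()
        heap = [(0.0, start)]                       # (cost so far, node)
        while heap:
            cost, node = heapq.heappop(heap)
            if node in visited:
                continue
            visited.add(node)
            for neighbor, edge_cost in graph.get(node, []):
                new_cost = cost + edge_cost
                if new_cost < dist.get(neighbor, float("inf")):
                    dist[neighbor] = new_cost
                    heapq.heappush(heap, (new_cost, neighbor))
        return dist

    # Hypothetical intersections A..D; the costs could be road lengths or travel times.
    roads = {
        "A": [("B", 4.0), ("C", 2.0)],
        "B": [("A", 4.0), ("D", 5.0)],
        "C": [("A", 2.0), ("D", 8.0)],
        "D": [("B", 5.0), ("C", 8.0)],
    }
    print(dijkstra(roads, "A"))    # cheapest costs from A: B=4.0, C=2.0, D=9.0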
Apologies for the lack of example code; I'm currently in the brainstorming phase of this problem and having trouble finding a proper solution.
As I have stated in my title, I want to find out what the intersection area of two polygons is.
To be more specific, I have two ARPlanes that may overlap each other on the x-z plane but sit at different y-levels (imagine stairs with an overhang). I can get the area boundaries of these ARPlanes easily. My first idea to simplify the process is to remove the y-component so as to have them on the same plane and turn this into a 2D problem.
From here onward, I'm unsure of how to proceed. I could not find any methods that calculate the intersection area of two polygons. I have a few solutions that look promising if I can get the planes aligned neatly (such that the +x direction points from the center of one of the planes to the other), but I cannot move them in any way, so I must modify what the local "forward" for a plane is. Even then, I don't think an ARPlane has a direction vector in the first place, as ARPlanes are not GameObjects, so I am unsure if this is a viable path to follow. See the ARPlane class for quick reference.
One other way is to turn the planes so that they're aligned with the world x axis. This looks promising compared to the other methods, but as I previously stated, I cannot turn the actual ARPlanes. I must make copies of them and turn the copies while keeping their relative rotations and positions the same.
So far these have been the methods I could come up with but could not develop fully due to Unity restrictions. My question, then, is whether there is a way to get around these issues; failing that, whether there is an alternative approach that you can recommend.
Below is an example use case of the tool. As can be seen, some stair treads have an overhang that covers a portion of the previous tread's surface (second and third figure). Each stair tread will be scanned and then processed to find its usable surface. The area covered by the overhang is not usable surface. This usable area is defined by the placement of a staircase tread (A) and the very next tread right above it (B); the usable area will then be surface_area_of_A - xz_crossSection_of_AB
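For reference, a minimal sketch of the 2D idea described above, assuming the projected boundaries are convex (Sutherland-Hodgman clipping needs a convex clip polygon; for general concave shapes you would want a polygon-clipping library) and wound counter-clockwise. The tread coordinates are made-up placeholders; the usable area follows the surface_area_of_A - xz_crossSection_of_AB formula from above.

    def polygon_area(poly):
        """Shoelace formula; poly is a list of (x, z) tuples in order."""
        area = 0.0
        for (x1, z1), (x2, z2) in zip(poly, poly[1:] + poly[:1]):
            area += x1 * z2 - x2 * z1
        return abs(area) / 2.0

    def clip_polygon(subject, clip):
        """Sutherland-Hodgman: clip 'subject' against the convex, counter-clockwise 'clip'."""
        def inside(p, a, b):
            # p lies on the left side of the directed edge a -> b (inside, for a CCW clip polygon).
            return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0.0

        def intersection(p1, p2, a, b):
            # Intersection of segment p1 -> p2 with the infinite line through a -> b.
            dx1, dz1 = p2[0] - p1[0], p2[1] - p1[1]
            dx2, dz2 = b[0] - a[0], b[1] - a[1]
            t = ((a[0] - p1[0]) * dz2 - (a[1] - p1[1]) * dx2) / (dx1 * dz2 - dz1 * dx2)
            return (p1[0] + t * dx1, p1[1] + t * dz1)

        output = list(subject)
        for a, b in zip(clip, clip[1:] + clip[:1]):
            if not output:
                break
            input_list, output = output, []
            prev = input_list[-1]
            for cur in input_list:
                if inside(cur, a, b):
                    if not inside(prev, a, b):
                        output.append(intersection(prev, cur, a, b))
                    output.append(cur)
                elif inside(prev, a, b):
                    output.append(intersection(prev, cur, a, b))
                prev = cur
        return output

    # Hypothetical boundaries of tread A and overhanging tread B, projected onto (x, z):
    tread_a = [(0, 0), (4, 0), (4, 3), (0, 3)]
    tread_b = [(2, 1), (6, 1), (6, 4), (2, 4)]
    overlap = clip_polygon(tread_a, tread_b)
    usable_area = polygon_area(tread_a) - polygon_area(overlap)   # 12 - 4 = 8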
For a personal learning project I'm making a simple neural network to control a simulated car through a simple maze.
To provide the network with inputs to work with, I need virtual sensors around the car to indicate how close I am to any obstacles.
How would I go about this? I've seen examples where there are lines protruding out of the vehicle that can sense how far they overlap with obstacles.
This means that for example if the front sensor line is 40% inside a wall, it will kick back the value 0.40 to the network so it knows how close the obstacle is to the front of the car. The same process would be repeated for the left, right and back sensors.
I really hope I explained myself well; I could post some pictures, but I know you guys don't like links from strangers.
Any insight would be appreciated, thanks.
I'll sketch a simple outline on how I'd tackle this:
Query objects in the environment of the car with a margin that makes sense for your application, e.g. if you want your car to respond to obstacles closer than 2 meters, make your margin 2 meters.
For these nearby objects, calculate the intersections with the virtual rays of your sensors. For this you will most likely want the mathematics of a segment-segment intersection, which can be found here on SO: How do you detect where two line segments intersect? This of course requires you to be able to model your environment using line segments; if you have curved objects, a multi-segment approximation might suffice. Alternatively, define an interface for your environment objects that calculates the intersection of a ray with the object itself. Now you can specialise the mathematics for rectangles, circles, arcs, pedestrians, bikers, horses, etc. Make sure you make a tradeoff between how accurate the sensor distance should be and how much time you want to spend writing intersection code. (A sketch of the segment-intersection step follows these steps.)
For each sensor ray, pick the object that produced the closest intersection.
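As referenced in the second step, here is a minimal sketch of the segment-intersection math in the standard parametric form (not the code from the linked answer; the wall layout and sensor length below are made up). It returns the fraction of the sensor's length at which the first obstacle is hit, with 1.0 meaning nothing in range; invert or rescale it if you prefer the "40% inside the wall gives 0.40" convention from the question.

    def segment_intersection_t(p, r, q, s):
        """Sensor segment p -> p+r, obstacle segment q -> q+s (2D tuples).
        Returns t in [0, 1] (fraction along the sensor segment) if they intersect, else None."""
        def cross(a, b):
            return a[0] * b[1] - a[1] * b[0]
        denom = cross(r, s)
        if denom == 0:
            return None                     # parallel (or collinear) segments
        qp = (q[0] - p[0], q[1] - p[1])
        t = cross(qp, s) / denom            # position along the sensor segment
        u = cross(qp, r) / denom            # position along the obstacle segment
        if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
            return t
        return None

    def sensor_reading(origin, direction, length, walls):
        """Distance fraction to the closest wall hit by this sensor, or 1.0 if nothing in range."""
        ray = (direction[0] * length, direction[1] * length)
        hits = []
        for q, s in walls:                  # each wall as (start_point, direction_vector)
            t = segment_intersection_t(origin, ray, q, s)
            if t is not None:
                hits.append(t)
        return min(hits) if hits else 1.0

    # Hypothetical front sensor, 2 units long, and one wall segment from (1.2, -1) to (1.2, 1):
    walls = [((1.2, -1.0), (0.0, 2.0))]
    print(sensor_reading((0.0, 0.0), (1.0, 0.0), 2.0, walls))   # 0.6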
I have a planet mesh that I am generating procedurally (based on a random seed). The generator creates tectonic plates and moves them creating the land masses, mountains and water depressions you see below; ~2500 points in a planet. The textured surface is just a shader.
Each vertex will have information associated with it, and I need to know which vertex the user is pointing at in order to relay this information.
I am looking for a way to identify which vertex the user is pointing at. The current solution is to generate a cube at each vertex and then use a collider/ray to identify it. The two white cubes in the picture above are for testing.
What I want to know is if there is a better way to identify the vertex without generating cubes?
If you're doing such awesome and advanced work, surely you know about this:
http://docs.unity3d.com/ScriptReference/RaycastHit-triangleIndex.html
When working with meshes, it's totally commonplace to need the nearest vertex.
Note that, very simply, you just find the nearest one, i.e. look over them all and keep the nearest.
(It's incredibly fast to do this; you only have a tiny number of verts, so there's no way the performance will even be measurable.)
(Consider that, of course, you could break the object into, say, 8 pieces - but that's just something you have to do anyway in many cases, for example a race track, so it can occlude properly.)
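To illustrate the "look over them all" point, here is a brute-force nearest-vertex search, sketched in Python for brevity; in Unity the same loop would run over Mesh.vertices, and the RaycastHit.triangleIndex linked above lets you narrow the candidates down to the three vertices of the hit triangle. The hit point and vertex positions are placeholders.

    def nearest_vertex_index(point, vertices):
        """Index of the vertex closest to 'point'; a plain loop is fine for ~2500 vertices."""
        def sq_dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(range(len(vertices)), key=lambda i: sq_dist(vertices[i], point))

    # Hypothetical hit point and a few vertices of the planet mesh:
    verts = [(0.0, 1.0, 0.0), (0.7, 0.7, 0.0), (1.0, 0.0, 0.0)]
    print(nearest_vertex_index((0.9, 0.2, 0.0), verts))   # prints 2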
Given an elevation map consisting of lat/lon/elevation pairs, what is the fastest way to find all points above a given elevation level (or better yet, just the 2D concave hull of those points)?
I'm working on a GIS app where I need to render an overlay on top of a map to visually indicate regions that are of higher elevation; it's determining this polygon/region that has me stumped (for now). I have a simple array of lat/lon/elevation pairs (more specifically, the GTOPO30 DEM files), but I'm free to transform that into any data structure that you would suggest.
We've been pointed toward Triangulated Irregular Networks (TINs), but I'm not sure how to efficiently query that data once we've generated the TIN. I wouldn't be surprised if our problem could be solved similarly to how one would generate a contour map, but I don't have any experience with it. Any suggestions would be awesome.
It sounds like you're attempting to create a polygonal representation of the boundary of the high land.
If you're working with raster data (sampled on a rectangular grid), try this.
Think of your grid as an assembly of right triangles.
Let's say you have a 3x3 grid of points
a b c
d e f
g h k
Your triangles are:
abd part of the rectangle abed
bde the other part of the rectangle abed
bef part of the rectangle bcfe
cef the other part of the rectangle bcfe
dge ... and so on
Your algorithm has these steps.
1. Build a list of triangles that are above the elevation threshold.
2. Take the union of these triangles to make a polygonal area.
3. Determine the boundary of the polygon.
4. If necessary, smooth the polygon boundary to make your layer look OK when displayed.
If you're trying to generate good-looking contour lines, step 4 is very hard to do right.
Step 1 is the key to this problem.
For each triangle, if all three vertices are above the threshold, include the whole triangle in your list. If all are below, forget about that triangle. If some vertices are above and others below, split the triangle into three by adding new vertices that lie precisely on the elevation line (by interpolating elevation). Include the one or two of those new triangles that lie above the threshold in your highland list.
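A sketch of step 1 only, assuming each triangle is given as three (x, y, elevation) vertices; the grid-to-triangle bookkeeping and the union/boundary work of the later steps are left out.

    def split_triangle(tri, threshold):
        """tri: three (x, y, elevation) vertices. Returns the list of sub-triangles
        (as vertex triples) that lie above the elevation threshold."""
        def interp(a, b):
            # Point on edge a-b where the elevation equals the threshold (linear interpolation).
            t = (threshold - a[2]) / (b[2] - a[2])
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]), threshold)

        above = [v for v in tri if v[2] >= threshold]
        below = [v for v in tri if v[2] < threshold]
        if len(above) == 3:
            return [tri]                                  # whole triangle is highland
        if len(above) == 0:
            return []                                     # whole triangle is lowland
        if len(above) == 1:                               # highland part is one small triangle
            a, b1, b2 = above[0], below[0], below[1]
            return [(a, interp(a, b1), interp(a, b2))]
        a1, a2, b = above[0], above[1], below[0]          # highland part is a quad: split it in two
        p1, p2 = interp(a1, b), interp(a2, b)
        return [(a1, a2, p1), (a2, p2, p1)]

    def highland_triangles(triangles, threshold):
        """Step 1: collect every (sub-)triangle above the threshold."""
        result = []
        for tri in triangles:
            result.extend(split_triangle(tri, threshold))
        return result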
For the rest of the steps you'll need a decent 2d geometry processing library.
If your points are not on a regular grid, start by using the Delaunay triangulation algorithm (which you can look up) to organize your points into triangles, then follow the same algorithm I described above. Warning: this is going to look kind of sketchy if you don't have many points.
Assuming you have the lat/lon/elevation data stored in an array (or three separate arrays), you should be able to use array querying techniques to select all of the points where the elevation is above a certain threshold. For example, in Python with NumPy you can do:

import numpy as np
indices = np.where(array > value)
And the indices variable will contain the indices of all elements of array greater than the threshold value. Similar commands are available in various other languages (for example IDL has the WHERE() command, and similar things can be done in Matlab).
Once you've got this list of indices you could create a new binary array where each place where the threshold was satisfied is set to 1:
binary_array[indices] = 1
(Assuming you've created a blank array of the same size as your original lat/lon/elevation array and called it binary_array.)
If you're working with raster data (which I would recommend for this type of work), you may find that you can simply overlay this array on a map and get a nice set of regions appearing. However, if you need to convert the areas above the elevation threshold to vector polygons then you could use one of many inbuilt GIS methods to convert raster->vector.
I would use a nested C-squares arrangement, with each square having a pre-calculated maximum ground height. This would allow me to scan at a high level, discarding any squares whose maximum height is not above the search height, and drilling further into those squares where parts of the ground are above the search height.
If you're working to various set levels of search height, you could precalculate the convex hull for the various predefined levels for the smallest squares that you decide to use (or all the squares, for that matter.)
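Not from the answer above, but a minimal sketch of the prune-by-precomputed-maximum idea, using a quadtree-style recursive split in place of literal C-squares codes; the split strategy, leaf size and sample points are arbitrary choices for illustration.

    class Square:
        """One cell in a nested grid: either a leaf holding points, or four child squares.
        max_elev is precomputed once, when the structure is built."""
        def __init__(self, points, depth, max_depth):
            self.max_elev = max(p[2] for p in points)            # points are (lat, lon, elev)
            if depth == max_depth or len(points) <= 4:
                self.children, self.points = None, points
            else:
                self.points = None
                lats = sorted(p[0] for p in points)
                lons = sorted(p[1] for p in points)
                mid_lat, mid_lon = lats[len(lats) // 2], lons[len(lons) // 2]
                quads = [[], [], [], []]
                for p in points:
                    quads[(p[0] >= mid_lat) * 2 + (p[1] >= mid_lon)].append(p)
                self.children = [Square(q, depth + 1, max_depth) for q in quads if q]

        def points_above(self, threshold):
            # Skip whole squares whose precomputed maximum is below the search height.
            if self.max_elev < threshold:
                return []
            if self.children is None:
                return [p for p in self.points if p[2] >= threshold]
            return [p for c in self.children for p in c.points_above(threshold)]

    # Made-up (lat, lon, elevation) samples:
    pts = [(10.0, 20.0, 150.0), (10.1, 20.2, 900.0), (10.4, 20.3, 420.0), (10.6, 20.5, 80.0)]
    root = Square(pts, depth=0, max_depth=8)
    print(root.points_above(400.0))     # the 900 m and 420 m points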
I'm not sure whether your lat/lon/alt points are on a regular grid or not, but if not, perhaps they could be interpolated to represent even 100 ft altitude increments and uniform lat/lon divisions (bearing in mind that this does not give uniform distance divisions). But if that would work, why not precompute a three-dimensional array, where the indices represent altitude, latitude, and longitude respectively? Then, when the aircraft needs data about points at or above an altitude for a specific piece of terrain, the code only needs to read out a small part of this array, which is indexed so that neighbouring "voxels" are contiguous in the indexing scheme.
Of course, the increments in longitude would not have to be uniform: if uniform distances are required, the same scheme would work, but the indexes for longitude would point to a nonuniformly spaced set of longitudes.
I don't think there would be any faster way of searching this data.
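A small sketch of such a lookup table using NumPy broadcasting, with a made-up elevation grid and 100 ft levels; the real array would be built from the interpolated GTOPO30 data.

    import numpy as np

    # Hypothetical 2D elevation grid (lat x lon) and 100 ft altitude levels.
    elevation_grid = np.array([[120.0, 380.0],
                               [510.0,  90.0]])
    altitude_levels = np.arange(0.0, 600.0, 100.0)

    # above[k, i, j] is True when the point at (lat i, lon j) is at or above level k.
    above = elevation_grid[np.newaxis, :, :] >= altitude_levels[:, np.newaxis, np.newaxis]

    # All grid points at or above 300 ft: just index the precomputed altitude axis.
    print(np.argwhere(above[3]))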
It's not clear from your question if the set of points is static and you need to find what points are above a given elevation many times, or if you only need to do the query once.
The easiest solution is to just store the points in an array, sorted by elevation. Finding all points in a certain elevation range is just binary search, and you only need to sort once.
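A minimal sketch of that, with a tiny made-up point set; after the one-time sort, each query is a single binary search.

    import bisect

    # points are (lat, lon, elevation) tuples -- a tiny made-up sample here.
    points = [(10.0, 20.0, 150.0), (10.1, 20.0, 900.0), (10.2, 20.1, 420.0)]

    # Sort once by elevation; repeated queries are then just a bisect.
    points_by_elev = sorted(points, key=lambda p: p[2])
    elevations = [p[2] for p in points_by_elev]

    def points_above(threshold):
        i = bisect.bisect_left(elevations, threshold)   # first index at or above the threshold
        return points_by_elev[i:]

    print(points_above(400.0))   # the points at 420 and 900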
If you only need to do the query once, just do a linear search through the array in the order you got it. Building a fancier data structure from the array is going to be O(n) anyway, so you won't get better results by complicating things.
If you have some other requirements, like say you need to efficiently list all points inside some rectangle the user is viewing, or that points can be added or deleted at runtime, then a different data structure might be better. Presumably some sort of tree or grid.
If all you care about is rendering, you can perform this very efficiently using graphics hardware, and there is no need to use a fancy data structure at all, you can just send triangles to the GPU and have it kill fragments above or below a certain elevation.
I am working on a project where the game world is irregularly shaped (think of the shape of a lake). This shape has a grid with coordinates placed over it. The game world exists only on the inside of the shape (once again, think lake).
How can I efficiently represent the game world? I know that many worlds are basically square, and work well in a 2 or 3 dimension array. I feel like if I use an array that is square, then I am basically wasting space, and increasing the amount of time that I need to iterate through the array. However, I am not sure how a jagged array would work here either.
Example shape of gameworld
X
XX
XX X XX
XXX XXX
XXXXXXX
XXXXXXXX
XXXXX XX
XX X
X
Edit:
The game world will most likely need each valid location stepped through, so I would like a method that makes it easy to do so.
There's computational overhead and complexity associated with sparse representations, so unless the bounding area is much larger than your actual world, it's probably most efficient to simply accept the 'wasted' space. You're essentially trading off additional memory usage for faster access to world contents. More importantly, the 'wasted-space' implementation is easier to understand and maintain, which is always preferable until the point where a more complex implementation is required. If you don't have good evidence that it's required, then it's much better to keep it simple.
You could use a quadtree to minimize the amount of wasted space in your representation. Quad trees are good for partitioning 2-dimensional space with varying granularity - in your case, the finest granularity is a game square. If you had a whole 20x20 area without any game squares, the quad tree representation would allow you to use only one node to represent that whole area, instead of 400 as in the array representation.
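A rough sketch of such a region quadtree, assuming the bounding square's side is a power of two and the world is given as a set of occupied (x, y) cells (both assumptions are mine, for illustration). Any uniform area, whether fully inside or fully outside the world, collapses to a single node.

    class QuadNode:
        """A node covering a square area is either uniformly inside the world,
        uniformly outside it, or split into four quadrants."""
        def __init__(self, cells, x, y, size):
            self.x, self.y, self.size = x, y, size
            covered = sum(1 for cx in range(x, x + size)
                            for cy in range(y, y + size) if (cx, cy) in cells)
            if covered == 0 or covered == size * size:
                self.inside, self.children = covered > 0, None      # uniform: stop subdividing
            else:
                half = size // 2
                self.inside = None
                self.children = [QuadNode(cells, x,        y,        half),
                                 QuadNode(cells, x + half, y,        half),
                                 QuadNode(cells, x,        y + half, half),
                                 QuadNode(cells, x + half, y + half, half)]

        def contains(self, cx, cy):
            if self.children is None:
                return self.inside
            half = self.size // 2
            idx = (1 if cx >= self.x + half else 0) + (2 if cy >= self.y + half else 0)
            return self.children[idx].contains(cx, cy)

    # Lake-shaped world inside an 8x8 bounding square:
    cells = {(x, y) for x in range(8) for y in range(8) if abs(x - 3.5) + abs(y - 3.5) < 4}
    root = QuadNode(cells, 0, 0, 8)
    print(root.contains(3, 3), root.contains(0, 0))   # True False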
Use whatever structure you've come up with---you can always change it later. If you're comfortable with using an array, use it. Stop worrying about the data structure you're going to use and start coding.
As you code, build abstractions away from this underlying array, like wrapping it in a semantic model; then, if you realize (through profiling) that it's a waste of space or too slow for the operations you need, you can swap it out without causing problems. Don't try to optimize until you know what you need.
Use a data structure like a list or map, and only insert the valid game world coordinates. That way the only things you store are valid locations, and you don't waste memory on the non-game-world locations, since you can deduce those from their absence in the data structure.
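A minimal sketch of that idea, using a Python dict keyed by coordinates (the terrain values are placeholders); it also covers the "step through each valid location" requirement from the edit, since that is just iterating over the map.

    # Only valid cells are stored; anything absent is outside the lake-shaped world.
    world = {
        (3, 5): "grass",      # hypothetical terrain values
        (4, 5): "water",
    }

    def is_inside(x, y):
        return (x, y) in world

    # Stepping through every valid location is just iterating over the dict:
    for (x, y), terrain in world.items():
        print(x, y, terrain)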
The easiest thing is to just use the array and mark the non-gamespace positions with some special marker. A jagged array might work too, but I don't use those much.
You could represent the world as an (undirected) graph of land (or water) patches. Each patch then has a regular form, and the world is the combination of these patches. Every patch is a node in the graph and has graph edges to all its neighbours.
That is probably also the most natural representation of any general world (but it might not be the most efficient one). From an efficiency point of view, it will probably beat an array or list for a highly irregular map but not for one that fits well into a rectangle (or other regular shape) with few deviations.
An example of a highly irregular map:
x
x x
x x x
x x
x xxx
x
x
x
x
There’s virtually no way this can be efficiently fitted (both in space ratio and access time) into a regular shape. The following, on the other hand, fits very well into a regular shape by applying basic geometric transformations (it’s a parallelogram with small bits missing):
xxxxxx x
xxxxxxxxx
xxxxxxxxx
xx xxxx
One other option that could allow you to still access game world locations in O(1) time and not waste too much space would be a hashtable, where the keys would be the coordinates.
Another way would be to store an edge list: a line vector along each straight edge. It is easy to check for inclusion this way, and a quadtree or even a simple location hash on each vertex can speed up lookup of info. We did this with a height component per edge to model the walls of a baseball stadium and it worked beautifully.
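For the inclusion check mentioned here, a sketch of the standard even-odd (ray-crossing) test against such an edge list; the edge format and boundary coordinates are assumptions for illustration.

    def inside_world(point, edges):
        """Even-odd rule: count how many boundary edges a ray cast in the +x direction crosses.
        edges: list of ((x1, y1), (x2, y2)) segments forming the world boundary."""
        px, py = point
        crossings = 0
        for (x1, y1), (x2, y2) in edges:
            if (y1 > py) != (y2 > py):                             # edge spans the ray's y
                x_at_y = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_at_y > px:
                    crossings += 1
        return crossings % 2 == 1

    # A simple rectangular boundary as four edges:
    boundary = [((0, 0), (6, 0)), ((6, 0), (6, 4)), ((6, 4), (0, 4)), ((0, 4), (0, 0))]
    print(inside_world((2, 2), boundary), inside_world((7, 2), boundary))   # True False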
There is a big issue that nobody here addressed: the huge difference between storing it on disk and storing it in memory.
Assuming you are talking about a game world as you said, this means it's going to be very large. You're not going to store the whole thing in memory at once; instead you will store the immediate vicinity in memory and update it as the player walks around.
This vicinity area should be as simple, easy and quick to access as possible. It should definitely be an array (or a set of arrays which are swapped out as the player moves). It will be referenced often and by many subsystems of your game engine: graphics and physics will handle loading the models, drawing them, keeping the player on top of the terrain, collisions, etc.; sound will need to know what ground type the player is currently standing on, to play the appropriate footstep sound; and so on. Rather than broadcast and duplicate this data among all the subsystems, if you just keep it in global arrays they can access it at will and at 100% speed and efficiency. This can really simplify things (but be aware of the consequences of global variables!).
However, on disk you definitely want to compress it. Some of the given answers provide good suggestions; you can serialize a data structure such as a hash table, or a list of only filled-in locations. You could certainly store an octree as well. In any case, you don't want to store blank locations on disk; according to your statistic, that would mean 66% of the space is wasted. Sure there is a time to forget about optimization and make it Just Work, but you don't want to distribute a 66%-empty file to end users. Also keep in mind that disks are not perfect random-access machines (except for SSDs); mechanical hard drives should still be around another several years at least, and they work best sequentially. See if you can organize your data structure so that the read operations are sequential, as you stream more vicinity terrain while the player moves, and you'll probably find it to be a noticeable difference. Don't take my word for it though, I haven't actually tested this sort of thing, it just makes sense right?