Following the apparent tradition of using this question as the basis of new questions, I too have a problem I am looking to solve as elegantly as possible:
I have implemented a hexagonal map as such:
(Wanted to insert image here.. but I'm not allowed due to being new... Please see above link)
But am now wondering how to (elegantly) implement A* for this type of map with these types of coordinates.
I have experience with using A* on typical square grids (Cartesian grids, I think?) and the way I handle it there seems incompatible with this coordinate system.
Typically I would generate a 2D array of bytes. The indices of the array would correspond to a grid coordinate and the value at said index would give the 'weight' of that node (0 being impassable and higher numbers 'weighing' more than lower numbers).
Example:
sbyte[,] pathGrid = new sbyte[5, 5]
{
{0,0,1,0,0},
{9,5,1,3,0},
{9,5,1,3,0},
{9,5,1,3,0},
{0,0,1,0,0}
};
Where the 0's would be impassable, the 1's would be easily traversable, and higher numbers would 'cost' more to traverse. (Sorry about formatting.. I'm a Stack Overflow newb :P )
This array would be generated based on the composition of my map and then fed into my path finding algorithm which would in turn spit out a list of nodes (the path) or return null if no path was found.
However, using this type of grid, that isn't possible (at least at first glance) due to negative coordinates (which obviously do not work in an array) and the fact that the grid doesn't follow the same rules as a 'typical' grid.
There are ways to solve this using my A* method I think but they are all rather sloppy (converting grid coordinates and using empty nodes) and I was wondering if anybody has thought of a way to do this elegantly.
Thanks for reading in any case :)
(Btw I am doing this in C#/.net for what it's worth)
Even though the indices of an array begin at 0, your program doesn't need to conceptually treat the arrays that way. For instance, if you always add e.g. 3 to your indices before using them to look up in an array, you effectively have an array where the indices begin at 3. In order to simplify working with arrays in this way, you could create a class called e.g. ArbitraryBaseArray that wraps an array and a number that specifies the desired base index.
Then, you could create a HexGrid class that contains an array of ArbitraryBaseArray, each with their own base index (depending on how the left edge of your hex area looks). The class could have an indexer that lets you look up a specific element based on two hex coordinates. It could also have a static method which, given a coordinate in the hex grid, returns an array with the six neighbouring coordinates; this method could be used by A*. (Note that while the illustration in the question you linked to uses three coordinates for each hex tile, two coordinates are sufficient.)
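Here is a minimal C# sketch of that idea, assuming an axial (q, r) coordinate convention for the hexes; ArbitraryBaseArray, HexGridHelper and the neighbour offsets below are illustrative names, not an established API:

using System.Drawing;

// Wraps a zero-based array so callers can index it from an arbitrary base (including negative numbers).
public class ArbitraryBaseArray<T>
{
    private readonly T[] items;
    private readonly int baseIndex;

    public ArbitraryBaseArray(int length, int baseIndex)
    {
        items = new T[length];
        this.baseIndex = baseIndex;
    }

    public T this[int i]
    {
        get { return items[i - baseIndex]; }
        set { items[i - baseIndex] = value; }
    }
}

public static class HexGridHelper
{
    // The six neighbour offsets of an axial (q, r) hex coordinate, stored here as Points.
    private static readonly Point[] Offsets =
    {
        new Point(+1, 0), new Point(+1, -1), new Point(0, -1),
        new Point(-1, 0), new Point(-1, +1), new Point(0, +1)
    };

    public static Point[] Neighbours(Point hex)
    {
        var result = new Point[6];
        for (int i = 0; i < 6; i++)
            result[i] = new Point(hex.X + Offsets[i].X, hex.Y + Offsets[i].Y);
        return result;
    }
}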
You could store your coordinates in a dictionary:
var nodes = new Dictionary<Point, Vector[]>();
This way you're not limited to positive coordinates, and you're also not limited in the number of paths from each node.
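For illustration, a small sketch of this dictionary-based layout (HexNode, Cost and the stored neighbour list are assumptions, not part of any particular library):

using System.Collections.Generic;
using System.Drawing;

// One entry per hex, keyed by its (possibly negative) coordinate.
class HexNode
{
    public byte Cost;                                    // 0 = impassable, higher = more costly, as in the question
    public List<Point> Neighbours = new List<Point>();   // precomputed links to adjacent hexes
}

class HexMap
{
    public Dictionary<Point, HexNode> Nodes = new Dictionary<Point, HexNode>();

    // A* can ask for neighbours without worrying about array bounds or negative coordinates.
    public IEnumerable<HexNode> NeighboursOf(Point p)
    {
        foreach (var n in Nodes[p].Neighbours)
            if (Nodes.ContainsKey(n))
                yield return Nodes[n];
    }
}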
I think the title says it all... but for completeness sake here is the full problem.
The Problem
So, I have a two-dimensional array (a matrix, or "grid") in Visual Basic/C#/anything .NET comprised of Cell instances declared as Public Matrix(,) As Cell.
A Cell is roughly this:
Class Cell
    Public Value As Integer
    Public Height As Integer
    Public Tags As Dictionary(Of String, Object)
    Public Type As CellType

    Sub New(Optional v As CellType = CellType.Void)
        Value = v
        Type = v
    End Sub

    Function GetPos() As Point
        ' *need arcane necromancy here*
    End Function
End Class
The question is simple, but I know the answer may not: can I get the Cell's position without passing it in the constructor?
Here is an example of what I'd like to achieve (always given Public Matrix(,) As Cell):
Dim x,y as integer
Dim apple as Cell = GetARandomAppleFrom(Matrix)
x=apple.GetArrayPos.x
y=apple.GetArrayPos.y
Console.WriteLine(String.Format("An Apple is in {0}, {1}", x, y))
A side question about the need for this question
At the moment I am using the following snippet of code to initialize all of the cells to a zero-value because using Matrix.Initialize() failed miserably leaving every element set to nothing.
If the method's description says it calls the default constructor, then why did it fail?
Although I doubt my initialization is ideal from a coding/efficiency perspective (I suspect a double loop isn't that great)...
For x = 0 To Me.zWidth
    For y = 0 To Me.zHeight
        Matrix.SetValue(New Cell(CellType.Void), x, y)
    Next
Next
I am sticking to it for now. Hence here is a
Possible Solution
This implies that I could pass the indices to a Position field at initialization time - through the constructor - like this...
For x = 0 To Me.zWidth
    For y = 0 To Me.zHeight
        Matrix.SetValue(New Cell(CellType.Void, x, y), x, y)
    Next
Next
But, since a Cell could change position or be overwritten, I would like its position to be dynamically determined.
However, if nothing better comes on the radar I will definitely use this approach and then update the value as the position changes.
Addendums
Some info that may - or not - be useful:
The Matrix size is less or equal to (512,512)
This is going to become a terrain data-map generator for an isometric game (so no 3D, but I already have a height variable in cells) with fairly small maps, with support for superficial features like mountains and river(s).
Why can't you just cut down your random picker to return the x,y position found and use that to retrieve your cell? Something like this:
Dim pt as Point
pt = GetRandomPosition()
Dim apple as Cell = Matrix(pt.X,pt.Y)
Console.WriteLine(String.Format("An Apple is in {0}, {1}", pt.X, pt.Y))
First off, to initialize N elements you must call your initializer N times; there's no efficiency problem there.
To answer your question: your item shouldn't care about its position. The position of an item in a collection is not a problem of the item itself and can lead to headaches very quickly if you move your item or change the collection.
What if your collection changes from a 2D matrix to a 3D one? You have to rewrite code and add a Z variable (and that's just one quick example).
I'd try to structure my code/solution in a completely different way.
By the way, an item can locate itself inside a collection by looking for a reference to itself (adapt the code to your matrix as needed):
// Scans the matrix for a reference to this instance.
Point GetPosition(Cell[,] source)
{
    for (int x = 0; x < source.GetLength(0); x++)
        for (int y = 0; y < source.GetLength(1); y++)
            if (Object.ReferenceEquals(this, source[x, y]))
                return new Point(x, y);
    return new Point(-1, -1);   // not found
}
Performance is bad since you loop over the whole collection, but it may fit your needs.
It really is a question of modelling. I personally would prefer if cells knew about their position since it seems to be important to them, but for one you appear to dislike this approach, and for two, when updating positions this has to be done (by a helper method) in two realms at once - in the affected cells and in the grid, which can be problematic. Thus, something else:
How about stepping away from a simple 2D array towards building your own type - a somewhat smarter array which not only keeps the grid of cells as a simple 2D array, but also keeps a record of each Cell with its coordinates: a dictionary which maps each contained Cell to a pair of ints representing the x and y. Let's call that smart array SmartArray. It would need methods for moving cells around, which update the map and the grid atomically (which is OK because all the changes are within the SmartArray "realm"). Then you can ask the SmartArray instance "Where on the grid is Cell c?" and you'll have an answer in O(1). That'd be a tradeoff favoring speed over memory footprint. Each Cell must, however, know about its containing SmartArray so it knows who to ask.
You can also leave out the map and keep the query methods, but make them traverse the grid each time a question is asked. The answer will be the same, lookups will take O(n^2) (for an n x n grid), but moving cells will be easier because you only have to update one structure. That turns the tradeoff around, favoring memory footprint over speed, and it is essentially the double loop you mentioned in the question.
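A rough C# sketch of the first (map-backed) variant, assuming a reference-type Cell; the class and method names are placeholders:

using System.Collections.Generic;
using System.Drawing;

class SmartArray
{
    private readonly Cell[,] grid;
    private readonly Dictionary<Cell, Point> positions = new Dictionary<Cell, Point>();

    public SmartArray(int width, int height)
    {
        grid = new Cell[width, height];
    }

    // Place (or move) a cell, keeping the grid and the reverse-lookup map in sync.
    public void Put(Cell cell, int x, int y)
    {
        Point old;
        if (positions.TryGetValue(cell, out old))
            grid[old.X, old.Y] = null;      // clear the cell's previous slot
        grid[x, y] = cell;
        positions[cell] = new Point(x, y);
    }

    // O(1) answer to "where on the grid is Cell c?"
    public Point PositionOf(Cell cell)
    {
        return positions[cell];
    }
}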
I'm building a simple program that creates a bunch of scattered trees on the screen in C#. I am still relatively new to C# so bear with me. My program creates the trees but some of the images end up on top of each other because the trees are drawn in a seemingly random order.
I have a list of the tree objects and I was wondering how one goes about sorting this list by the trees' Y value (treeObject.position.Y), so that when I call each tree's draw method in a for loop it will draw the ones furthest back (smallest Y) first. I tried hard coding it but it became too cumbersome.
Full Code is given here:
http://pastebin.com/5G6aecLm
Use some sorting algorithm, preferably QuickSort.
If I understood your problem correctly, there are many ways to do it; here is just one:
Assuming that your list is of type List<TreeObject>:
using System.Linq;
var q = yourList.OrderBy(obj => obj.position.Y);
Then just loop q to get your objects in the correct order.
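For example (the trees list and the Draw method are assumptions about your code):

// Draw back-to-front using the lazily ordered sequence from above...
foreach (var tree in q)
    tree.Draw();

// ...or, alternatively, sort the list in place before the draw loop.
trees.Sort((a, b) => a.position.Y.CompareTo(b.position.Y));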
If you're using a drawing engine capable of 3D, just use an ortho view of the world. You'll see no difference, but you'll be able to use z to control depth (z = -y will then have the desired effect), as well as being able to scale, rotate, and morph your 2D sprites efficiently.
I am using the SURF algorithm in C# (OpenSurf) to get a list of interest points from an image. Each of these interest points contains a vector of descriptors, an x coordinate (int), a y coordinate (int), the scale (float) and the orientation (float).
Now, I want to compare the interest points from one image to a list of images in a database which also have a list of interest points, to find the most similar image. That is: [Image(I.P.)] COMPARETO [List of Images(I.P.)]. => Best match. Comparing the images on an individual basis yields unsatisfactory results.
When searching Stack Overflow or other sites, the best solution I have found is to build a FLANN index while at the same time keeping track of where the interest points come from. But before implementation, I have some questions which puzzle me:
1) When matching images based on their SURF interest points an algorithm I have found does the matching by comparing their distance (x1,y1->x2,y2) with each other and finding the image with the lowest total distance. Are the descriptors or orientation never used when comparing interest points?
2) If the descriptors are used, then how do I compare them? I can't figure out how to compare X vectors of 64 points (1 image) with Y vectors of 64 points (several images) using an indexed tree.
I would really appreciate some help. All the places I have searched and APIs I have found only support matching one picture to another, but not matching one picture efficiently against a list of pictures.
There are multiple things here.
In order to know two images are (almost) equal, you have to find the homographic projection of the two such that the projection results in a minimal error between the projected feature locations. Brute-forcing that is possible but not efficient, so a trick is to assume that similar images tend to have the feature locations in the same spot as well (give or take a bit). For example, when stitching images, the images to stitch are usually taken from only a slightly different angle and/or location; even if not, the distances will likely grow ("proportionally") with the difference in orientation.
This means that you can - as a broad phase - select candidate images by finding k pairs of points with minimum spatial distance (the k nearest neighbors) between all pairs of images and perform homography only on these points. Only then do you compare the projected point-pairwise spatial distance and sort the images by said distance; the lowest distance implies the best possible match (given the circumstances).
If I'm not mistaken, the descriptors are oriented by the strongest angle in the angle histogram. That means you may also decide to take the Euclidean (L2) distance of the 64- or 128-dimensional feature descriptors directly to obtain the actual feature-space similarity of two given features and perform homography on the best k candidates. (You will not compare the scale at which the descriptors were found though, because that would defeat the purpose of scale invariance.)
Both options are time consuming and directly depend on the number of images and features; in other words: stupid idea.
Approximate Nearest Neighbors
A neat trick is to not use actual distances at all, but approximate distances instead. In other words, you want an approximate nearest neighbor algorithm, and FLANN (although not for .NET) would be one of them.
One key point here is the projection search algorithm. It works like this:
Assume you want to compare the descriptors in 64-dimensional feature space. You generate a random 64-dimensional vector and normalize it, resulting in an arbitrary unit vector in feature space; let's call it A. Now (during indexing) you form the dot product of each descriptor against this vector. This projects each 64-d vector onto A, resulting in a single, real number a_n. (This value a_n represents the distance of the descriptor along A in relation to A's origin.)
This image I borrowed from this answer on CrossValidated regarding PCA demonstrates it visually; think about the rotation as the result of different random choices of A, where the red dots correspond to the projections (and thus, scalars a_n). The red lines show the error you make by using that approach, which is what makes the search approximate.
You will need A again for search, so you store it. You also keep track of each projected value a_n and the descriptor it came from; furthermore you align each a_n (with a link to its descriptor) in a list, sorted by a_n.
To clarify using another image from here, we're interested in the location of the projected points along the axis A:
The values a_0 .. a_3 of the 4 projected points in the image are approximately sqrt(0.5²+2²)=1.58, sqrt(0.4²+1.1²)=1.17, -0.84 and -0.95, corresponding to their distance to A's origin.
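Here is a compact C# sketch of that indexing step, assuming 64-float descriptors; ProjectionIndex and all of its member names are made up for illustration:

using System;
using System.Collections.Generic;
using System.Linq;

class ProjectionIndex
{
    public double[] Axis;                          // the random unit vector A
    public List<KeyValuePair<double, int>> Sorted; // (a_n, descriptor index), sorted by a_n

    public static ProjectionIndex Build(IList<float[]> descriptors, int dims, Random rng)
    {
        // Generate a random vector and normalize it to get the unit axis A.
        var axis = new double[dims];
        for (int i = 0; i < dims; i++) axis[i] = rng.NextDouble() * 2 - 1;
        double length = Math.Sqrt(axis.Sum(v => v * v));
        for (int i = 0; i < dims; i++) axis[i] /= length;

        // Project every descriptor onto A (a dot product) and record (a_n, n).
        var sorted = new List<KeyValuePair<double, int>>();
        for (int n = 0; n < descriptors.Count; n++)
        {
            double a = 0;
            for (int i = 0; i < dims; i++) a += axis[i] * descriptors[n][i];
            sorted.Add(new KeyValuePair<double, int>(a, n));
        }
        sorted.Sort((x, y) => x.Key.CompareTo(y.Key));

        return new ProjectionIndex { Axis = axis, Sorted = sorted };
    }
}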
If you now want to find similar images, you do the same: Project each descriptor onto A, resulting in a scalar q (query). Now you go to the position of q in the list and take the k surrounding entries. These are your approximate nearest neighbors. Now take the feature-space distance of these k values and sort by lowest distance - the top ones are your best candidates.
Coming back to the last picture, assume the topmost point is our query. Its projection is 1.58 and its approximate nearest neighbor (of the four projected points) is the one at 1.17. They're not really close in feature space, but given that we just compared two 64-dimensional vectors using only two values, it's not that bad either.
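Continuing the sketch above (it reuses the ProjectionIndex type and needs using System.Linq), a query projects its descriptor onto the stored axis, binary-searches the sorted list, and re-ranks the k surrounding candidates by their true feature-space distance; this is purely illustrative, not any particular library's API:

// Returns the indices of the k approximate nearest descriptors for one query descriptor.
static IEnumerable<int> Query(ProjectionIndex index, float[] query, IList<float[]> descriptors, int k)
{
    // Project the query onto A, giving the scalar q.
    double q = 0;
    for (int i = 0; i < query.Length; i++) q += index.Axis[i] * query[i];

    // Locate q in the sorted projections and take the k entries around that position.
    var keys = index.Sorted.Select(p => p.Key).ToList();
    int pos = keys.BinarySearch(q);
    if (pos < 0) pos = ~pos;
    int start = Math.Max(0, Math.Min(pos - k / 2, index.Sorted.Count - k));
    var candidates = index.Sorted.Skip(start).Take(k).Select(p => p.Value);

    // Re-rank the candidates by actual Euclidean distance in the 64-dimensional feature space.
    return candidates.OrderBy(n =>
    {
        double d = 0;
        for (int i = 0; i < query.Length; i++)
        {
            double diff = descriptors[n][i] - query[i];
            d += diff * diff;
        }
        return d;
    });
}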
You can see the limits in that example: similar projections do not at all require the original values to be close, which will of course result in rather creative matches. To accommodate for this, you simply generate more base vectors B, C, etc. - say n of them - and keep a separate list for each. Take the k best matches from all of them, sort that list of k*n 64-dimensional vectors by their Euclidean distance to the query vector, perform homography on the best ones and select the one with the lowest projection error.
The neat part about this is that if you have n (random, normalized) projection axes and want to search in 64-dimensional space, you are simply multiplying each descriptor with an n x 64 matrix, resulting in n scalars.
I am pretty sure that the distance is calculated between the descriptors and not their coordinates (x,y). You can compare directly only one descriptor against another. I propose the following possible solution (surely not the optimal one):
You can find, for each descriptor in the query image, the top-k nearest neighbors in your dataset, and then take all the top-k lists and find the most common image among them.
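A small C# sketch of that voting step; the topKImageIds delegate is an assumption standing in for whatever nearest-neighbor lookup you end up using:

using System;
using System.Collections.Generic;
using System.Linq;

// topKImageIds: given one query descriptor and k, returns the ids of the images
// that own the k nearest descriptors in the dataset (e.g. from a FLANN-style index).
static int BestMatch(IEnumerable<float[]> queryDescriptors,
                     Func<float[], int, IEnumerable<int>> topKImageIds, int k)
{
    var votes = new Dictionary<int, int>();
    foreach (var descriptor in queryDescriptors)
        foreach (var imageId in topKImageIds(descriptor, k))
        {
            int count;
            votes.TryGetValue(imageId, out count);
            votes[imageId] = count + 1;          // one vote per appearance in a top-k list
        }

    // The image that appears most often across all top-k lists is the best match.
    return votes.OrderByDescending(kv => kv.Value).First().Key;
}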
Can anyone suggest a fast, efficient method for storing and accessing a sparse octree?
Preferably something that can be easily implemented in HLSL. (I'm working on a raycasting/voxel app.)
In this instance, the tree can be precalculated, so I'm mostly concerned with size and search time.
Update
For anyone looking to do this, a more efficient solution may be to store the nodes as a linear octree generated with a Z-order curve/Morton tree. Doing so eliminates storage of inner nodes, but may require cross-referencing the linear tree array with a second "data texture," containing information about the individual voxel.
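For reference, a small sketch of the 3D Morton (Z-order) encoding such a linear octree relies on, assuming 10 bits per axis; the bit-spreading constants are the standard ones for this trick:

static class Morton
{
    // Interleaves the lower 10 bits of x, y and z into a single 30-bit Morton code.
    // Which axis occupies which bit position is a convention; adjust to taste.
    public static uint Encode(uint x, uint y, uint z)
    {
        return (Part1By2(x) << 2) | (Part1By2(y) << 1) | Part1By2(z);
    }

    // Spreads the lower 10 bits of v so that two zero bits sit between each original bit.
    static uint Part1By2(uint v)
    {
        v &= 0x000003FF;
        v = (v | (v << 16)) & 0xFF0000FF;
        v = (v | (v << 8))  & 0x0300F00F;
        v = (v | (v << 4))  & 0x030C30C3;
        v = (v | (v << 2))  & 0x09249249;
        return v;
    }
}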
I'm not very experienced at HLSL, so I'm not sure this will meet your needs, but here are my thoughts. Let me know if something here is not sane for your needs - I'd like to discuss so maybe I can learn something myself.
Every node in the octree can exist as a 4-component vector (a float4), where the (x, y, z) components represent the center point of the node. The w component can be used as a flags field.
a. The w-flags field can denote which octant child nodes follow the current node. This would require 8 bits of the value.
Each entity stored in your octree can be stored as a bounding box, where r,g,b can be the bounding box dimensions, and w can be used for whatever.
Define a special vector denoting that an object list follows. For example, if the (w + z) is some magic value. Some func(x, y) can, say, be the number of objects that follow. Or, whatever works.
a. Each node is potentially followed by this special vector, indicating that there are objects stored in the node. The next X vectors are all just object identifiers or something like that.
b. Alternatively, you could have one node that just specifies an in-memory object list. Again, not sure what you need here or the constraints on how to access objects.
So, first, build the octree and stuff it with your objects. Then, just walk the octree, outputting the vectors to a memory buffer.
I'm thinking that a 512x512 texture can hold a fully packed octree 5 levels deep (32,768 nodes), each containing 8 objects. Or, a fully packed 4-level octree with 64 objects each.
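As a rough CPU-side illustration of that packing idea (OctreeNode and the exact layout are assumptions following the description above; Vector4 stands in for the texels you would upload):

using System.Collections.Generic;
using System.Numerics;

class OctreeNode
{
    public Vector3 Center;
    public OctreeNode[] Children = new OctreeNode[8];   // null entries mean "no child in that octant"
}

static class OctreeFlattener
{
    // Emits one Vector4 per node: xyz = node centre, w = bit flags saying which octant children follow.
    public static void Flatten(OctreeNode node, List<Vector4> buffer)
    {
        int childMask = 0;
        for (int i = 0; i < 8; i++)
            if (node.Children[i] != null)
                childMask |= 1 << i;

        buffer.Add(new Vector4(node.Center, childMask));

        // Depth-first walk so each child's block follows its parent in the buffer.
        for (int i = 0; i < 8; i++)
            if (node.Children[i] != null)
                Flatten(node.Children[i], buffer);
    }
}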
There is a great article about sparse octrees focusing on GPUs: Efficient Sparse Voxel Octrees – Analysis, Extensions, and Implementation
Given an elevation map consisting of lat/lon/elevation pairs, what is the fastest way to find all points above a given elevation level (or better yet, just the 2D concave hull)?
I'm working on a GIS app where I need to render an overlay on top of a map to visually indicate regions that are of higher elevation; it's determining this polygon/region that has me stumped (for now). I have a simple array of lat/lon/elevation pairs (more specifically, the GTOPO30 DEM files), but I'm free to transform that into any data structure that you would suggest.
We've been pointed toward Triangulated Irregular Networks (TINs), but I'm not sure how to efficiently query that data once we've generated the TIN. I wouldn't be surprised if our problem could be solved similarly to how one would generate a contour map, but I don't have any experience with it. Any suggestions would be awesome.
It sounds like you're attempting to create a polygonal representation of the boundary of the high land.
If you're working with raster data (sampled on a rectangular grid), try this.
Think of your grid as an assembly of right triangles.
Let's say you have a 3x3 grid of points
a b c
d e f
g h k
Your triangles are:
abd part of the rectangle abed
bde the other part of the rectangle abed
bef part of the rectangle bcfe
cef the other part of the rectangle bcfe
dge ... and so on
Your algorithm has these steps.
Build a list of triangles that are above the elevation threshold.
Take the union of these triangles to make a polygonal area.
Determine the boundary of the polygon.
If necessary, smooth the polygon boundary to make your layer look ok when displayed.
If you're trying to generate good looking contour lines, step 4 is very hard to do right.
Step 1 is the key to this problem.
For each triangle, if all three vertices are above the threshold, include the whole triangle in your list. If all are below, forget about the triangle. If some vertices are above and others below, split your triangle into three by adding new vertices that lie precisely on the elevation line (by interpolating elevation). Include whichever one or two of those new triangles lie above the threshold in your highland list.
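A C# sketch of that per-triangle test with linear interpolation along the crossing edges; Vertex and the method names are illustrative, and only the part of each triangle at or above the threshold is returned:

using System.Collections.Generic;
using System.Linq;

struct Vertex
{
    public double X, Y, Elevation;
}

static class HighlandTriangles
{
    // Point on edge (a, b) where the elevation equals the threshold, by linear interpolation.
    static Vertex Crossing(Vertex a, Vertex b, double threshold)
    {
        double t = (threshold - a.Elevation) / (b.Elevation - a.Elevation);
        return new Vertex
        {
            X = a.X + t * (b.X - a.X),
            Y = a.Y + t * (b.Y - a.Y),
            Elevation = threshold
        };
    }

    // Returns zero, one or two triangles (vertex triples) covering the part of
    // triangle (a, b, c) that lies at or above the threshold.
    public static List<Vertex[]> AbovePart(Vertex a, Vertex b, Vertex c, double threshold)
    {
        var result = new List<Vertex[]>();
        var above = new[] { a, b, c }.Where(v => v.Elevation >= threshold).ToList();
        var below = new[] { a, b, c }.Where(v => v.Elevation < threshold).ToList();

        if (below.Count == 0)                // whole triangle is highland
        {
            result.Add(new[] { a, b, c });
        }
        else if (above.Count == 1)           // one corner pokes above: keep a single small triangle
        {
            var p = Crossing(above[0], below[0], threshold);
            var q = Crossing(above[0], below[1], threshold);
            result.Add(new[] { above[0], p, q });
        }
        else if (above.Count == 2)           // one corner dips below: keep a quad, split into two triangles
        {
            var p = Crossing(above[0], below[0], threshold);
            var q = Crossing(above[1], below[0], threshold);
            result.Add(new[] { above[0], above[1], q });
            result.Add(new[] { above[0], q, p });
        }
        return result;                       // empty when all three corners are below the threshold
    }
}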
For the rest of the steps you'll need a decent 2d geometry processing library.
If your points are not on a regular grid, start by using the Delaunay algorithm (which you can look up) to organize your points into triangles. Then follow the same algorithm I mentioned above. Warning: this is going to look kind of sketchy if you don't have many points.
Assuming you have the lat/lon/elevation data stored in an array (or three separate arrays) you should be able to use array querying techniques to select all of the points where the elevation is above a certain threshold. For example, in python with numpy you can do:
from numpy import where
indices = where(array > value)
And the indices variable will contain the indices of all elements of array greater than the threshold value. Similar commands are available in various other languages (for example IDL has the WHERE() command, and similar things can be done in Matlab).
Once you've got this list of indices you could create a new binary array where each place where the threshold was satisfied is set to 1:
binary_array[indices] = 1
(Assuming you've created a blank array of the same size as your original lat/long/elevation array and called it binary_array.)
If you're working with raster data (which I would recommend for this type of work), you may find that you can simply overlay this array on a map and get a nice set of regions appearing. However, if you need to convert the areas above the elevation threshold to vector polygons then you could use one of many inbuilt GIS methods to convert raster->vector.
I would use a nested C-squares arrangement, with each square having a pre-calculated maximum ground height. This would allow me to scan at a high level, discarding any squares where the max height is not above the search height, and drilling further into those squares where parts of the ground were above the search height.
If you're working to various set levels of search height, you could precalculate the convex hull for the various predefined levels for the smallest squares that you decide to use (or all the squares, for that matter.)
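A sketch of that nested-squares idea as a quadtree whose nodes carry a precomputed maximum elevation; the types and field names below are assumptions:

using System.Collections.Generic;

class ElevationPoint
{
    public double Lat, Lon, Elevation;
}

class SquareNode
{
    public double MaxElevation;               // precomputed over everything inside this square
    public SquareNode[] Children;             // null for the smallest squares
    public List<ElevationPoint> Points;       // only populated on leaf squares

    // Collects every point at or above the threshold, skipping whole squares that cannot qualify.
    public void Query(double threshold, List<ElevationPoint> results)
    {
        if (MaxElevation < threshold)
            return;                           // prune: nothing in this square is high enough

        if (Children == null)
        {
            foreach (var p in Points)
                if (p.Elevation >= threshold)
                    results.Add(p);
        }
        else
        {
            foreach (var child in Children)
                child.Query(threshold, results);
        }
    }
}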
I'm not sure whether your lat/lon/alt points are on a regular grid or not, but if not, perhaps they could be interpolated to represent even 100 ft altitude increments and uniform lat/lon divisions (bearing in mind that that does not give uniform distance divisions). But if that would work, why not precompute a three-dimensional array, where the indices represent altitude, latitude, and longitude respectively? Then when the aircraft needs data about points at or above an altitude, for a specific piece of terrain, the code only needs to read out a small part of the data in this array, which is indexed to make contiguous "voxels" contiguous in the indexing scheme.
Of course, the increments in longitude would not have to be uniform: if uniform distances are required, the same scheme would work, but the indexes for longitude would point to a nonuniformly spaced set of longitudes.
I don't think there would be any faster way of searching this data.
It's not clear from your question if the set of points is static and you need to find what points are above a given elevation many times, or if you only need to do the query once.
The easiest solution is to just store the points in an array, sorted by elevation. Finding all points in a certain elevation range is just binary search, and you only need to sort once.
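A small C# sketch of that sorted-array approach (GeoPoint is just an illustrative record of one lat/lon/elevation sample):

using System;
using System.Collections.Generic;

class GeoPoint
{
    public double Lat, Lon, Elevation;
}

static class ElevationQuery
{
    // Sort once by elevation; every later range query is a binary search plus a tail scan.
    public static void SortByElevation(GeoPoint[] points)
    {
        Array.Sort(points, (a, b) => a.Elevation.CompareTo(b.Elevation));
    }

    // Yields all points with Elevation >= threshold from an elevation-sorted array.
    public static IEnumerable<GeoPoint> AtOrAbove(GeoPoint[] sorted, double threshold)
    {
        int lo = 0, hi = sorted.Length;
        while (lo < hi)                       // binary search for the first qualifying index
        {
            int mid = (lo + hi) / 2;
            if (sorted[mid].Elevation < threshold) lo = mid + 1;
            else hi = mid;
        }
        for (int i = lo; i < sorted.Length; i++)
            yield return sorted[i];
    }
}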
If you only need to do the query once, just do a linear search through the array in the order you got it. Building a fancier data structure from the array is going to be O(n) anyway, so you won't get better results by complicating things.
If you have some other requirements, like say you need to efficiently list all points inside some rectangle the user is viewing, or that points can be added or deleted at runtime, then a different data structure might be better. Presumably some sort of tree or grid.
If all you care about is rendering, you can perform this very efficiently using graphics hardware, and there is no need to use a fancy data structure at all, you can just send triangles to the GPU and have it kill fragments above or below a certain elevation.