Can anyone suggest a fast, efficient method for storing and accessing a sparse octree?
Preferably something that can be easily implemented in HLSL. (I'm working a raycasting/voxel app)
In this instance, the tree can be precalculated, so I'm mostly concerned with size and search time.
Update
For anyone looking to do this, a more efficient solution may be to store the nodes as a linear octree generated with a Z-order curve/Morton tree. Doing so eliminates storage of inner nodes, but may require cross-referencing the linear tree array with a second "data texture," containing information about the individual voxel.
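For the curious, here is a minimal C# sketch of 3D Morton encoding using the usual magic-number bit spreading, assuming 10 bits per axis (a 1024^3 grid packed into a 30-bit key); Part1By2 and Morton3 are my own names, not anything from the question:

    // Spread the low 10 bits of v so two zero bits separate each bit.
    static uint Part1By2(uint v)
    {
        v &= 0x000003FF;
        v = (v ^ (v << 16)) & 0xFF0000FF;
        v = (v ^ (v << 8))  & 0x0300F00F;
        v = (v ^ (v << 4))  & 0x030C30C3;
        v = (v ^ (v << 2))  & 0x09249249;
        return v;
    }

    // Interleave x, y and z into one Z-order (Morton) key.
    static uint Morton3(uint x, uint y, uint z)
        => Part1By2(x) | (Part1By2(y) << 1) | (Part1By2(z) << 2);

Sorting the leaf voxels by this key produces the linear-octree order, and the same key can serve as the index into the companion data texture.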
I'm not very experienced with HLSL, so I'm not sure this will meet your needs, but here are my thoughts. Let me know if something here is not sane for your needs; I'd like to discuss it so maybe I can learn something myself.
Every node in the octree can exist as a float4, where the (x, y, z) components represent the center point of the node. The w component can be used as a flags field.
a. The w-flags field can denote which octant child nodes follow the current node. This would require 8 bits of the value.
Each entity stored in your octree can be represented as a bounding box, where (r, g, b) are the bounding-box dimensions and w can be used for whatever.
Define a special vector denoting that an object list follows; for example, when (w + z) is some magic value. Some func(x, y) can then, say, give the number of objects that follow. Or whatever works.
a. Each node is potentially followed by this special vector, indicating that there are objects stored in the node. The next X vectors are all just object identifiers or something like that.
b. Alternatively, you could have one node that just specifies an in-memory object list. Again, not sure what you need here or the constraints on how to access objects.
So, first, build the octree and stuff it with your objects. Then, just walk the octree, outputting the vectors to a memory buffer.
I'm thinking that a 512x512 texture can hold a fully packed octree 5 levels deep (32,768 nodes), each containing 8 objects. Or, a fully packed 4-level octree with 64 objects each.
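To make the packing concrete, here is a hypothetical C# view of the node layout sketched above (PackedNode and the bit assignments are my assumptions, not a fixed format):

    // One node texel: (x, y, z) = node center, w = flags.
    // Assumes the low 8 bits of w mark which octant children follow.
    struct PackedNode
    {
        public float X, Y, Z;   // center point of the node
        public float W;         // flags stored in a float texel

        public byte ChildMask => (byte)((uint)W & 0xFF);

        // True if the child in the given octant (0..7) is present.
        public bool HasChild(int octant) => (ChildMask & (1 << octant)) != 0;
    }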
There is a great article about sparse octrees focusing on GPUs: Efficient Sparse Voxel Octrees – Analysis, Extensions, and Implementation
Related
I am producing some simple network drawing software in C# (I shall write 'network' rather than 'graph' for clarity). Networks may eventually have multiple directed edges between some vertices/nodes. I have designed the data structure for this software as follows:
Each node and edge has a corresponding 'interactible' object determining the visualisation and processing inputs such as clicks targeted at these visible objects. These objects may store any amount of data about the respective objects that I choose, and for now they contain the endpoints of the edges, for example, as well as an integer identifier for each object.
Nodes (and edges between them) are divided into connected components. I want to implement routines which merge these components upon addition of edges and detect disconnectedness when edges or nodes are deleted.
The overall data for the network (which shall, for now, simply record the number of edges between each pair of nodes, ignoring direction) is to be represented in an instance of an INetwork interface, which contains such routines as those for adding and removing nodes and edges, identifying the neighbours of a vertex, and so on.
My question, then, is how to actually implement this INetwork interface. The implementation can be made to vary with the sparseness of the graph, but I would like to know, in terms of memory, processing speed, etc., what would be most efficient.
More precisely, I am aware that I could just produce an adjacency matrix for each component, or similarly an adjacency list, but which types are best suited to these roles in C#, bearing in mind that my nodes have integer identifiers which I would expect are best left constant throughout?
Would Dictionary<int,Dictionary<int,int>> ever be a good way of implementing an adjacency count so as to be able to remove entries efficiently? Would a large 2-dimensional array work given the node indexing? If I instead store a matrix as a 1-dim'l array, say as an int[] with some methods for treating this as a 2- or 3-dimensional array, would this be better in any meaningful way?
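For what it's worth, a nested-dictionary adjacency count might look like the following minimal sketch (AdjacencyCount, AddEdge and RemoveEdge are illustrative names; O(1) removal per endpoint is the main attraction):

    using System.Collections.Generic;

    class AdjacencyCount
    {
        // adj[u][v] = number of edges between u and v (direction ignored).
        readonly Dictionary<int, Dictionary<int, int>> adj = new();

        public void AddEdge(int u, int v)
        {
            Bump(u, v, +1);
            Bump(v, u, +1);
        }

        // Returns true if that was the last edge between u and v.
        public bool RemoveEdge(int u, int v)
        {
            bool last = Bump(u, v, -1) == 0;
            Bump(v, u, -1);
            return last;
        }

        int Bump(int u, int v, int delta)
        {
            if (!adj.TryGetValue(u, out var row)) adj[u] = row = new();
            row.TryGetValue(v, out int count);
            count += delta;
            if (count <= 0) { row.Remove(v); return 0; }
            row[v] = count;
            return count;
        }

        public IEnumerable<int> Neighbours(int u) =>
            adj.TryGetValue(u, out var row) ? row.Keys
                                            : System.Linq.Enumerable.Empty<int>();
    }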
Of course, there are pros and cons to every implementation, but I expect the most taxing routine will always be the one that checks whether the removal of an edge has left the network disconnected. Thus I want whatever implementation I use to be able to quickly check the following (see the sketch after this list):
Whether a removed edge was the last edge between its pair of vertices.
If so, to be able to quickly identify the neighbours of each node successively so as to identify which nodes lie in which of the resulting components.
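For the second check, a plain breadth-first search over the neighbour lists is the usual approach; here is a rough sketch against the hypothetical AdjacencyCount above:

    using System.Collections.Generic;

    static class Connectivity
    {
        // After removing edge (u, v), returns true if v is still reachable
        // from u, i.e. the component did not split.
        public static bool StillConnected(AdjacencyCount g, int u, int v)
        {
            var seen = new HashSet<int> { u };
            var queue = new Queue<int>();
            queue.Enqueue(u);
            while (queue.Count > 0)
            {
                int n = queue.Dequeue();
                if (n == v) return true;
                foreach (int m in g.Neighbours(n))
                    if (seen.Add(m)) queue.Enqueue(m);
            }
            return false;
        }
    }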
Thanks in advance for any and all suggestions. In case anyone wonders, the choice of C# was because of my reliance on the Unity game-making engine for its rendering and (possibly, later on) physics capabilities.
I'm looking for a data structure similar to T[,,] (a 3D array) except I don't know the dimensions beforehand (nor have a reasonable upper bound) that will expand outwards as time goes on. I'd like to use negative indexes as well.
The only thing that comes to mind is a dictionary, with some kind of Point3 struct as the key. Are there any other alternatives? I'd like lookups to be as fast as possible. The data will always be clustered around (0, 0, 0). It can expand in any direction, but there will never be any "gaps" between points.
I think I'm going to go ahead and just use a Dictionary<Point3, T> for now, and see how that performs. If it's a performance issue I'll try building a wrapper around T[,,] such that I can use negative indexes.
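If you do go with Dictionary<Point3, T>, the key type needs value equality and a reasonable hash to perform well; here is a minimal sketch of how Point3 might look (my assumption of its shape, since it isn't shown):

    using System;
    using System.Collections.Generic;

    // Immutable 3D integer key with value equality, suitable as a
    // dictionary key; negative coordinates are fine.
    readonly struct Point3 : IEquatable<Point3>
    {
        public readonly int X, Y, Z;
        public Point3(int x, int y, int z) { X = x; Y = y; Z = z; }

        public bool Equals(Point3 other) => X == other.X && Y == other.Y && Z == other.Z;
        public override bool Equals(object obj) => obj is Point3 p && Equals(p);
        public override int GetHashCode() => HashCode.Combine(X, Y, Z);
    }

    // Usage:
    // var grid = new Dictionary<Point3, byte>();
    // grid[new Point3(-1, 0, 2)] = 42;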
Obviously you'll need to store this in a data structure resembling a sparse array, because you have no idea how large your data-set is going to be. So a Dictionary seems reasonable.
I'm going a little crazy here, but I think your indices should be in Spherical Coordinates. It makes sense to me as your data grows outwards. It will also make finding elements at a specific range from (0, 0, 0) extremely easy.
If you may need range queries, KD-trees come to mind. They are tree-like structures that, at each level, separate the universe into two along one axis. They offer O(log N) lookup time (for a constant number of dimensions), which may or may not be fast enough, but they also provide O(log N + S) time for range queries, where S is the number of items found, which is usually very good. They can handle dynamic data (insertions and deletions along with lookups), but the tree may become unbalanced as a result. You can also do nearest-neighbour search from a given point (i.e., get the 10 nearest objects to point (7, 8, 9)). Wikipedia, as always, is a good starting point: http://en.wikipedia.org/wiki/Kd-tree
If there are huge numbers of things in the world, or if the world is very dynamic (things move or are created/destroyed all the time), kd-trees may not be good enough. If most of the time you will only ask "give me the thing at (7, 8, 9)", you can either use a hash, as you mentioned in your question, or something like a List<List<List<T>>>. I'd just implement whichever is easier behind an interface and worry about the performance later.
I am kind of assuming you need the dynamic aspect because the array could be huge. In that case, what you could try is to allocate your array as a set of 3D 'tiles'. At the top level you have a 3D data structure that stores pointers to your tiles, and you expand and allocate it as you go along.
Each individual tile could contain, say, 32x32x32 voxels. Or whatever amount suits your goals.
Looking up your tile is done by dividing the coordinate index by 32 (by bitshifting, of course), and the index within the tile is calculated by masking away the upper bits, as sketched below.
A lookup like this is fairly cheap, possibly on par with a .NET Dictionary, but it will use less memory, which is good for performance too.
The resulting array will be chunky, though: the array boundaries are multiples of your tile size.
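Here is a rough sketch of that shift-and-mask lookup, assuming 32x32x32 tiles and non-negative coordinates (TiledGrid and its members are my own names; offset the coordinates to support negative indexes):

    // Tile lookup: a shift selects the tile, a mask selects the voxel in it.
    class TiledGrid<T>
    {
        const int TileBits = 5;                   // 2^5 = 32 voxels per axis
        const int TileMask = (1 << TileBits) - 1;

        readonly T[,,][,,] tiles;                 // top level points at tiles

        public TiledGrid(int tilesX, int tilesY, int tilesZ)
            => tiles = new T[tilesX, tilesY, tilesZ][,,];

        public T this[int x, int y, int z]
        {
            get
            {
                var tile = tiles[x >> TileBits, y >> TileBits, z >> TileBits];
                return tile == null ? default
                                    : tile[x & TileMask, y & TileMask, z & TileMask];
            }
            set
            {
                var tile = tiles[x >> TileBits, y >> TileBits, z >> TileBits];
                if (tile == null)
                {
                    // Allocate a tile lazily on first write.
                    tile = new T[1 << TileBits, 1 << TileBits, 1 << TileBits];
                    tiles[x >> TileBits, y >> TileBits, z >> TileBits] = tile;
                }
                tile[x & TileMask, y & TileMask, z & TileMask] = value;
            }
        }
    }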
Array access is a very fast linear lookup - if speed of lookup is your priority, then it's probably the way to go, depending on how often you will need to modify your arrays.
If your goal is to maintain the chunks around a player, you might want to arrange a "current world" array structure around the player, such that it is a 3-dimensional array with the centre chunk at 9,9,9 with a size of [20,20,20]. Each time the player leaves the centre chunk for another, you re-index the array to drop the old chunks and move on.
Ultimately, you're asking for options on how to optimize your game engine, but it's nearly impossible to say which is going to be correct for you. Even though games tend to be more optimized for performance than other applications, don't get lured into micro-optimizations; optimize for readability first and then optimize for performance when you find it necessary.
If you suspect this particular data-structure is going to be a bottleneck in your engine, put some performance tracing in so it's easy to be sure one way or the other once you've got the engine running.
I have a large 2D object collection, only lines for now.
I need an algorithm suggestion for how to create the fastest spatial index over this collection, so that I can collect all objects that lie inside some bounds.
Once built, the index will not be updated.
The object distribution in this database is not spatially uniform.
The implementation will be in C#.
Update: The current usage is for the road graph of a country, so the lines are small, from one crossroad to another, with greater density in populated areas. I think this gives a good picture of the data.
Obviously there are many indexing methods to achieve this, but I require the fastest one.
You can use the Segment Tree if you want to save 2-D lines and your queries are 2-D range queries.
The algorithmic complexity of a query is O( log^2 N ).
Check out quadtrees.... and DotSpatial for spatial type handling, including a quadtree implementation.
You can also try an R-tree. There's a C# implementation available at http://sourceforge.net/projects/cspatialindexrt/.
R-trees should offer performance similar to a Segment Tree, and the above implementation should be stand-alone and fairly independent of extra code references, but I haven't tested it.
There is no silver bullet on this. It depends on the type of data (i.e., only points, only lines, triangles, meshes, any combination of them, etc.) and the type of query (point inside polygon, line intersection, nearest neighbors, any geometry inside a circle or box, etc).
Each data structure is designed for a specific type of query and data. If you want to use a single data structure for all types of queries and all types of data, you have to trade off space, time, or both. You can get reasonably fast, but you won't be optimal in general.
In my experience, for a data structure general enough to cope with most geometric objects while handling several types of queries, I would recommend the AABB tree:
https://doc.cgal.org/latest/AABB_tree/index.html
I want to use SIFT/SURF for template matching. An image can have 1...n targets.
Using SURF/SIFT, only one target can be extracted. One idea is to segment the image into many segments and then look for SIFT/SURF matches in each. That works, but it is obviously not ideal in terms of speed and effort. Does an alternative approach exist? Does anyone have source code for scale- and rotation-invariant template matching?
Regards,
If I understand correctly what you are saying (please provide more information), you have N planar image objects. You want to extract SIFT/SURF features from the N images and put all the features in some sort of container (an array, or an acceleration data structure for high-dimensional nearest-neighbour search). When you process a given image, you extract SIFT (or SURF) features and search, for every feature, for its closest feature in the container. You end up with a list of pairs (feature from the current image, feature from the container).
Now you have to apply some robust model estimator (RANSAC, for example) to construct the homography. If a good homography can be found (with at least 10 or 12 inliers), you can be confident that your target is there. Obviously, given the array of feature pairs, you subdivide it into groups, where each group corresponds to one of the N planar image objects of your database. (This is not the best way to do it; you should probably associate each feature extracted from the current image with k features of the database and use some form of voting scheme to establish the pairs, but that makes things more complicated.)
So, generally speaking, you have to make some decisions:
feature to use (SIFT? SURF? others?)
robust model estimator (RANSAC? PROSAC? MLSAC?)
which geometric considerations to use when computing the homography (take advantage of the fact that the homography relates points in two planar objects)
which multi-dimensional data structure you will use to accelerate the search
how to compute the homography (well, probably there is only one way: normalized DLT)
If your objects are NOT planar, the problem is more difficult, since the appearance of a rigid 3D object changes as the viewpoint changes. To describe it, you will need K images instead of only one. This is a lot more challenging, because as N and K grow, recognition rates drop. There are probably better ways; I strongly suggest searching Google for the relevant literature.
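As a starting point, the pipeline above might look roughly like this in C# with OpenCvSharp; treat it as a sketch, since the exact API surface depends on your binding and version, and FindTemplateHomography is my own name:

    using System.Linq;
    using OpenCvSharp;
    using OpenCvSharp.Features2D;

    static Mat FindTemplateHomography(Mat template, Mat scene)
    {
        // Extract SIFT keypoints and descriptors from both images.
        var sift = SIFT.Create();
        Mat d1 = new Mat(), d2 = new Mat();
        sift.DetectAndCompute(template, null, out KeyPoint[] k1, d1);
        sift.DetectAndCompute(scene, null, out KeyPoint[] k2, d2);

        // 2-NN matching plus Lowe's ratio test to drop ambiguous matches.
        var matcher = new BFMatcher(NormTypes.L2);
        DMatch[] good = matcher.KnnMatch(d1, d2, 2)
            .Where(m => m.Length == 2 && m[0].Distance < 0.75f * m[1].Distance)
            .Select(m => m[0])
            .ToArray();
        if (good.Length < 12) return null;   // not enough evidence

        var src = good.Select(m => new Point2d(k1[m.QueryIdx].Pt.X, k1[m.QueryIdx].Pt.Y));
        var dst = good.Select(m => new Point2d(k2[m.TrainIdx].Pt.X, k2[m.TrainIdx].Pt.Y));

        // RANSAC rejects outlier pairs while estimating the homography.
        return Cv2.FindHomography(src, dst, HomographyMethods.Ransac, 3.0);
    }

To handle 1...n targets, you can rerun the estimation after removing the inliers of each accepted homography, until no model with enough inliers remains.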
I am working on a project where the game world is irregularly shaped (think of the shape of a lake). This shape has a grid with coordinates placed over it, and the game world exists only on the inside of the shape. (Once again, think lake.)
How can I efficiently represent the game world? I know that many worlds are basically square, and work well in a 2 or 3 dimension array. I feel like if I use an array that is square, then I am basically wasting space, and increasing the amount of time that I need to iterate through the array. However, I am not sure how a jagged array would work here either.
Example shape of gameworld
X
XX
XX X XX
XXX XXX
XXXXXXX
XXXXXXXX
XXXXX XX
XX X
X
Edit:
The game world will most likely need each valid location stepped through, so I would like a method that makes it easy to do so.
There's computational overhead and complexity associated with sparse representations, so unless the bounding area is much larger than your actual world, it's probably most efficient to simply accept the 'wasted' space. You're essentially trading off additional memory usage for faster access to world contents. More importantly, the 'wasted-space' implementation is easier to understand and maintain, which is always preferable until the point where a more complex implementation is required. If you don't have good evidence that it's required, then it's much better to keep it simple.
You could use a quadtree to minimize the amount of wasted space in your representation. Quad trees are good for partitioning 2-dimensional space with varying granularity - in your case, the finest granularity is a game square. If you had a whole 20x20 area without any game squares, the quad tree representation would allow you to use only one node to represent that whole area, instead of 400 as in the array representation.
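Here is a minimal region-quadtree sketch in C# to illustrate the idea (QuadNode and Insert are hypothetical names; Size is assumed to be a power of two):

    // Each node covers a square; a node is subdivided only where game
    // squares actually exist, so empty regions cost a single node.
    sealed class QuadNode
    {
        public int X, Y, Size;        // covers [X, X+Size) x [Y, Y+Size)
        public bool Occupied;         // meaningful for 1x1 leaves only
        public QuadNode[] Children;   // null until a child is needed

        public QuadNode(int x, int y, int size) { X = x; Y = y; Size = size; }

        // Mark one game square (px, py) as part of the world.
        public void Insert(int px, int py)
        {
            if (Size == 1) { Occupied = true; return; }
            int half = Size / 2;
            Children ??= new QuadNode[4];
            int ix = px < X + half ? 0 : 1;
            int iy = py < Y + half ? 0 : 1;
            int i = iy * 2 + ix;
            Children[i] ??= new QuadNode(X + ix * half, Y + iy * half, half);
            Children[i].Insert(px, py);
        }
    }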
Use whatever structure you've come up with; you can always change it later. If you're comfortable with using an array, use it. Stop worrying about the data structure you're going to use and start coding.
As you code, build abstractions away from this underlying array, like wrapping it in a semantic model; then, if you realize (through profiling) that it's a waste of space or slow for the operations you need, you can swap it out without causing problems. Don't try to optimize until you know what you need.
Use a data structure like a list or map, and only insert the valid game world coordinates. That way the only thing you are saving are valid locations, and you don't waste memory saving the non-game world locations since you can deduce those from lack of presence in your data structure.
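In C#, the simplest form of this is a set keyed on coordinates; a minimal sketch, assuming 2D integer coordinates:

    using System.Collections.Generic;

    // Store only the valid world coordinates; absence means "not in world".
    var world = new HashSet<(int x, int y)>
    {
        (0, 0), (1, 0), (1, 1),   // ...only the squares inside the lake
    };

    bool IsInside(int x, int y) => world.Contains((x, y));

    // Stepping through every valid location is just iterating the set.
    foreach (var (x, y) in world)
    {
        // process location (x, y)
    }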
The easiest thing is to just use the array, and just mark the non-gamespace positions with some special marker. A jagged array might work too, but I don't use those much.
You could represent the world as an (undirected) graph of land (or water) patches. Each patch then has a regular form, and the world is the combination of these patches. Every patch is a node in the graph and has graph edges to all its neighbours.
That is probably also the most natural representation of any general world (but it might not be the most efficient one). From an efficiency point of view, it will probably beat an array or list for a highly irregular map but not for one that fits well into a rectangle (or other regular shape) with few deviations.
An example of a highly irregular map:
x
x x
x x x
x x
x xxx
x
x
x
x
There’s virtually no way this can be efficiently fitted (both in space ratio and access time) into a regular shape. The following, on the other hand, fits very well into a regular shape by applying basic geometric transformations (it’s a parallelogram with small bits missing):
xxxxxx x
xxxxxxxxx
xxxxxxxxx
xx xxxx
One other option that could allow you to still access game world locations in O(1) time and not waste too much space would be a hashtable, where the keys would be the coordinates.
Another way would be to store an edge list: a line vector along each straight edge. It is easy to check for inclusion this way, and a quadtree, or even a simple location hash on each vertex, can speed up lookups. We did this with a height component per edge to model the walls of a baseball stadium, and it worked beautifully.
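The inclusion check against such an edge list is the classic even-odd ray-casting test; here is a minimal C# sketch (the per-edge height data from the stadium example is omitted):

    // Even-odd ray casting: count how many polygon edges a horizontal ray
    // from (x, y) crosses; an odd count means the point is inside.
    static bool Inside((float X, float Y)[] poly, float x, float y)
    {
        bool inside = false;
        for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
        {
            bool crosses = (poly[i].Y > y) != (poly[j].Y > y);
            if (crosses &&
                x < (poly[j].X - poly[i].X) * (y - poly[i].Y) /
                    (poly[j].Y - poly[i].Y) + poly[i].X)
                inside = !inside;
        }
        return inside;
    }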
There is a big issue that nobody here addressed: the huge difference between storing it on disk and storing it in memory.
Assuming you are talking about a game world as you said, this means it's going to be very large. You're not going to store the whole thing in memory in once, but instead you will store the immediate vicinity in memory and update it as the player walks around.
This vicinity area should be as simple, easy and quick to access as possible. It should definitely be an array (or a set of arrays which are swapped out as the player moves). It will be referenced often and by many subsystems of your game engine: graphics and physics will handle loading the models, drawing them, keeping the player on top of the terrain, collisions, etc.; sound will need to know what ground type the player is currently standing on, to play the appropriate footstep sound; and so on. Rather than broadcast and duplicate this data among all the subsystems, if you just keep it in global arrays they can access it at will and at 100% speed and efficiency. This can really simplify things (but be aware of the consequences of global variables!).
However, on disk you definitely want to compress it. Some of the given answers provide good suggestions: you can serialize a data structure such as a hash table, or a list of only the filled-in locations. You could certainly store an octree as well. In any case, you don't want to store blank locations on disk; according to your statistic, that would mean 66% of the space is wasted. Sure, there is a time to forget about optimization and make it Just Work, but you don't want to distribute a 66%-empty file to end users. Also keep in mind that disks are not perfect random-access machines (except for SSDs); mechanical hard drives should still be around for at least several more years, and they work best sequentially. See if you can organize your data structure so that the read operations are sequential as you stream more vicinity terrain while the player moves, and you'll probably find it makes a noticeable difference. Don't take my word for it though; I haven't actually tested this sort of thing, it just makes sense, right?