I am working with the HoloLens in Unity and trying to map a large area (15 x 15 x 25 meters). I am able to map the whole area using the SpatialMapping prefab, but I want to do some spatial processing on that mesh to smooth out the floors and walls. I have been trying to use SpatialUnderstanding for this, but there seems to be a hard limit on how big an area you can scan with it, as detailed in a HoloLens forums thread.
Currently, I don't understand how the pipeline of data works from SpatialMapping to SpatialUnderstanding. Why can I not simply use the meshes generated from SpatialMapping in SpatialUnderstanding? Is there some better method of creating smooth surfaces?
This solution works best for pre-generated rooms. In other words, a general solution, one that end users could be expected to use, is not possible given the current limitations.
I will start with the last question: "Is there some better method of creating smooth surfaces?"
Yes: use a tripod on wheels to generate the initial scan. Given the limited resolution of the accelerometers and compasses in the hardware, reducing the variance in one linear axis (height) and one rotational axis (roll, which should not vary at all during a scan) will result in a much more accurate scan.
The other method to create smooth surfaces is to export the mesh to a 3D editing program and manually flatten the surfaces, then reimport the mesh into Unity3D.
"Why can I not simply use the meshes generated from SpatialMapping in SpatialUnderstanding?"
SpatialUnderstanding further divides the generated mesh into (8 cm, 8 cm, 8 cm) voxels and then calculates surfels from each voxel. To keep performance and memory utilization in check, a hard limit of approximately (10 m, 10 m, 10 m) is enforced, implemented as (128, 128, 128) voxels.
Any attempt to use SpatialUnderstanding beyond its defined limits will produce spurious results due to overflow of the underlying data structures (a quick arithmetic check is sketched below).
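To make the limit concrete, here is a small illustrative check; the constants are simply 8 cm times 128 per axis, and the class and method names are made up for this sketch:

```csharp
using UnityEngine;

public static class ScanLimits
{
    const float VoxelSize = 0.08f;                      // 8 cm voxels
    const int VoxelsPerAxis = 128;                      // fixed internal grid resolution
    const float MaxExtent = VoxelSize * VoxelsPerAxis;  // ~10.24 m per axis

    // Returns true only if the requested scan volume fits inside the understanding grid.
    public static bool FitsInUnderstandingVolume(Vector3 scanSizeMeters)
    {
        return scanSizeMeters.x <= MaxExtent
            && scanSizeMeters.y <= MaxExtent
            && scanSizeMeters.z <= MaxExtent;
    }
}

// Example: a 15 x 15 x 25 m space exceeds the limit on two axes,
// so SpatialUnderstanding cannot cover it in one playspace:
//   ScanLimits.FitsInUnderstandingVolume(new Vector3(15f, 15f, 25f)) == false
```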
I need to render an object multiple times a frame using different textures. I was wondering about the most performant way to do so. My first approach was to have one Material and in OnRenderImage() call SetTexture() on it for the given number of textures I have. Now I'm wondering if it would be a noticeable improvement if I set up one Material per Texture in Start() and change between Materials in OnRenderImage(). Then I wouldn't need the SetTexture() call. But I can't find what SetTexture() actually does. Does it just set a flag or does it copy or upload the texture somewhere?
From working extensively with low-end devices: performance comes from batching. It's hard to pinpoint what would improve performance in your context without a clear understanding of the scope:
How many objects
How many images
Target platform
Are the images packaged or external at runtime
...
But as a general rule you want to use the fewest materials and individual textures possible. If pre-processing is an option, I would recommend creating spritesheets that pack as many images as possible onto a single texture. Then, using UV offsetting, you can show multiple images on multiple objects for one draw call (see the sketch below).
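As a rough illustration of the UV-offsetting idea (not a drop-in solution: the grid layout, cell count, and use of MaterialPropertyBlock are assumptions, and batching behaviour still depends on your render pipeline):

```csharp
using UnityEngine;

public class AtlasCell : MonoBehaviour
{
    public int cellsPerRow = 4; // e.g. a 4x4 atlas holds 16 images
    public int cellIndex = 0;   // which image this renderer should show

    void Start()
    {
        Renderer rend = GetComponent<Renderer>();
        float scale = 1f / cellsPerRow;
        float u = (cellIndex % cellsPerRow) * scale;
        float v = (cellIndex / cellsPerRow) * scale;

        // _MainTex_ST packs tiling (x, y) and offset (z, w) for the main texture.
        MaterialPropertyBlock block = new MaterialPropertyBlock();
        block.SetVector("_MainTex_ST", new Vector4(scale, scale, u, v));
        rend.SetPropertyBlock(block);
    }
}
```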
I have used a tool called TexturePacker extensively; it supports Unity. It's not cheap (there's an app to buy plus a plugin for Unity), but it saves time, and draw calls, in the end.
Think packing hundreds of images into a few 4K textures, getting down to 3 or 4 draw calls versus hundreds before.
That might not be a solution in your case, but the concept is still valid.
Also, Unity prefabs will not save draw calls, but they do reduce memory usage.
hth.
J.
I got a (basic) voxel engine running and a water system that looks (and I assume basically works) like this: https://www.youtube.com/watch?v=Q_TdeGIOOts (not my game).
The water values are stored in a 3D array of floats, and every 0.05 s the system calculates water flow by checking the voxel below and the adjacent voxels (y-1, x-1, x+1, z-1, z+1) and adding the values.
This system works fine (70+ fps) for small amounts of water, but when I start calculating water on 8+ chunks, it gets too much.
(I disabled all rendering and mesh creation to check whether that was the bottleneck; it isn't. It's purely the flow calculations.)
I am not a very experienced programmer, so I wouldn't know where to start optimizing, apart from making the calculations happen in a coroutine, which I have already done.
In this post: https://gamedev.stackexchange.com/questions/55414/how-to-define-areas-filled-with-water (near the bottom) Boreal suggests running it in a compute shader. Is this the way to go for me? And how would I go about such a thing?
Any help is much appreciated.
If you're really calculating a voxel-based simulation, the number of calculations grows cubically as your size increases, so you will quickly run out of processing power on larger volumes.
A compute shader is great for doing massively parallel calculations quickly, although it's a very different programming paradigm that takes some getting used to. A compute shader will look at the contents of a buffer (ie, a 'texture' for us civilians) and do things to it very quickly -- in your case the buffer will probably be a buffer/texture whose pixel values represent water cells. If you want to do something really simple like increment them up or down the compute shader uses the parallel processing power of the GPU to do it really fast.
The hard part is that GPUs are optimized for parallel processing. This means that you can't write code like "texelA.value += texelB.value" - without extra work on your part, each fragment of the buffer is processed with zero knowledge of what happens in the other fragments. To reference other texels you need to read the texture again somehow - some techniques read one texture multiple times with offsets (this GL example does this to implement blurs), while others do it by repeatedly processing a texture, putting the result into a temporary texture, and then reprocessing that.
At the 10,000 foot level: yes, a compute shader is a good tool for this kind of problem since it involves tons of self-similar calculation. But it won't be easy to do off the bat. If you have not done conventional shader programming before, you may want to look at that first to get used to the way GPUs work. Even really basic tools (if-then-else or loops) have very different performance implications and uses in GPU programming, and it takes some time to get your head around the differences. As of this writing (1/10/13) it looks like Nvidia and Udacity are offering an intro to compute shader course, which might be a good way to get up to speed.
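To give a feel for the Unity-side plumbing, here is a minimal sketch of dispatching a water-flow kernel from C#. The kernel name "FlowStep", the buffer name "waterCells", and the 8x8x8 thread-group size are assumptions for illustration, and the HLSL kernel itself is not shown:

```csharp
using UnityEngine;

public class WaterComputeDriver : MonoBehaviour
{
    public ComputeShader flowShader;   // assign the .compute asset in the inspector
    public int sizeX = 64, sizeY = 64, sizeZ = 64;

    ComputeBuffer waterBuffer;
    float[] waterLevels;
    int kernel;

    void Start()
    {
        waterLevels = new float[sizeX * sizeY * sizeZ];
        waterBuffer = new ComputeBuffer(waterLevels.Length, sizeof(float));
        waterBuffer.SetData(waterLevels);

        kernel = flowShader.FindKernel("FlowStep");
        flowShader.SetBuffer(kernel, "waterCells", waterBuffer);
        flowShader.SetInts("gridSize", sizeX, sizeY, sizeZ);
    }

    void FixedUpdate()
    {
        // One flow step per physics tick; each 8x8x8 thread group covers a block of cells.
        flowShader.Dispatch(kernel, sizeX / 8, sizeY / 8, sizeZ / 8);
    }

    void OnDestroy()
    {
        waterBuffer.Release();
    }
}

// When the CPU needs the results (e.g. to rebuild meshes), read them back sparingly:
//   waterBuffer.GetData(waterLevels);   // this stalls the GPU, so don't do it every frame
```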
FWIW you also need pretty modern hardware for compute shaders, which may limit your audience.
I am looking for a way to approximate a volume of fluid moving over a heightmap. The easiest solution I can think of is to approximate it as a large number of non-drawn spheres, of small diameter (<0.1m). I would then place a visible plane representing the surface of the water on "top" of the spheres, at the locations they came to rest. To my knowledge, no managed physics engines contain a built in fluid simulator, hence the question.
Implementation would consist of using a physics engine such as JigLibX, which is capable of simulating the motion of the spheres. To determine the height of the planes, I was thinking of averaging the maximum height of each sphere that is on the top layer of a grouping.
I don't expect performance to be great, but would it be achievable in real time? If not, could I use this simulation to pre-bake lines of flow?
I hope this makes sense, I really want opinions/suggestions as to whether this is feasible, or if there is a better way of approaching this.
Thanks for any help, Venatu
(If it's relevant, my target platform is XNA 4.0, using C#. Windows only at this point in time, so PhysX/Havok are possibilities for the simulation, but I would prefer a managed solution.)
I haven't yet seen realistic fluid dynamics in real time without using something like PhysX - probably because the calculations needed are so complicated! The problem with your approach, as I see it, comes with the resting contact of all those spheres as they settle, which takes a lot of processing power. Lots of resting contact points are notorious for eating into performance very quickly, even on the most powerful of desktops.
If you are going down this route then I'd recommend modelling the fluid as an elastic but solid body using spring-based physics, where the force applied to one part of the water propagates out to the rest through the springs. This gives you the option of setting a breaking point for the springs and separating the body into two or more bodies when that happens (and the reverse for coming back together); a rough sketch follows. This can give you the foundation for things like spray. It's also a more versatile approach in terms of performance, because you can choose the number of particles and springs you use to approximate your model.
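A very rough sketch of that spring idea, assuming a hypothetical Particle type and XNA's Vector3 (stiffness and break length are values you would tune):

```csharp
using Microsoft.Xna.Framework;

public class Particle
{
    public Vector3 Position;
    public Vector3 Velocity;
    public float Mass = 1f;
}

public class Spring
{
    public Particle A, B;
    public float RestLength;
    public float Stiffness = 50f;
    public float BreakLength;   // beyond this length the spring "snaps"

    // Applies Hooke's law to both particles; returns false when the spring breaks
    // so the caller can remove it and split the body.
    public bool Apply(float dt)
    {
        Vector3 delta = B.Position - A.Position;
        float length = delta.Length();
        if (length > BreakLength)
            return false;
        if (length < 1e-5f)
            return true;        // particles coincide; skip to avoid dividing by zero

        Vector3 direction = delta / length;
        Vector3 force = direction * (length - RestLength) * Stiffness;

        A.Velocity += force / A.Mass * dt;
        B.Velocity -= force / B.Mass * dt;
        return true;
    }
}
```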
It's a big and complicated topic, but I hope that provided at least some insight!
The most popular method for simulating fluids in real time is smoothed-particle hydrodynamics (SPH).
Several useful links:
http://en.wikipedia.org/wiki/Smoothed-particle_hydrodynamics
http://http.developer.nvidia.com/GPUGems/gpugems_ch38.html
http://www.plunk.org/~trina/thesis/html/thesis_toc.html
In addition to the simulation itself, you will also need a specialized broad-phase collision detection algorithm such as sweep-and-prune or hashed grid cells (a rough sketch of the latter is below).
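As a rough sketch of the hashed-cells idea (names and hash constants are illustrative; XNA's Vector3 is used only because it matches your platform):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;

public class SpatialHash
{
    readonly float cellSize;
    readonly Dictionary<int, List<int>> buckets = new Dictionary<int, List<int>>();

    public SpatialHash(float cellSize) { this.cellSize = cellSize; }

    // Classic prime-multiplication hash for integer 3D cell coordinates.
    static int Key(int x, int y, int z)
    {
        return (x * 73856093) ^ (y * 19349663) ^ (z * 83492791);
    }

    public void Clear() { buckets.Clear(); }

    public void Insert(int particleIndex, Vector3 position)
    {
        int key = Key((int)(position.X / cellSize),
                      (int)(position.Y / cellSize),
                      (int)(position.Z / cellSize));
        List<int> bucket;
        if (!buckets.TryGetValue(key, out bucket))
            buckets[key] = bucket = new List<int>();
        bucket.Add(particleIndex);
    }

    // Candidate neighbours: the particle's own cell plus the 26 cells around it.
    public IEnumerable<int> Nearby(Vector3 position)
    {
        int cx = (int)(position.X / cellSize);
        int cy = (int)(position.Y / cellSize);
        int cz = (int)(position.Z / cellSize);
        for (int x = -1; x <= 1; x++)
            for (int y = -1; y <= 1; y++)
                for (int z = -1; z <= 1; z++)
                {
                    List<int> bucket;
                    if (buckets.TryGetValue(Key(cx + x, cy + y, cz + z), out bucket))
                        foreach (int i in bucket)
                            yield return i;
                }
    }
}
```

Rebuild the hash each step (Clear then Insert all particles), then only test particle pairs returned by Nearby instead of every pair.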
And you're right, there are no complete 2D solutions for fluid dynamics.
I'm getting images from a C328R camera attached to a small Arduino robot. I want the robot to drive towards orange ping-pong balls and pick them up. I'm using the C# code supplied by funkotron76 at http://www.codeproject.com/KB/recipes/C328R.aspx.
Is there a library I can use to do this, or do I need to iterate over every pixel in the image looking for orange? If so, what kind of tolerance would I need to compensate for various lighting conditions?
I could probably test to figure out these numbers, but I'm hoping someone out there knows the answers.
Vision can be surprisingly difficult, especially as you try to tolerate varying conditions. A few good things to research include blob finding (searching for contiguous pixels matching certain criteria, usually a brightness threshold), image segmentation (can you have multiple balls in an image?) and general theory on hue (most vision algorithms work with grayscale or binary images, so you'll first need to transform the image in a way that highlights the orangeness as the criterion for selection; a rough sketch of that transform follows).
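As an illustration of that transform, here is a naive sketch using System.Drawing; the hue/saturation/brightness thresholds are guesses you would tune for your lighting, and GetPixel is slow, so a real version would use LockBits:

```csharp
using System.Drawing;

public static class OrangeDetector
{
    // Marks pixels whose colour falls in a rough "orange" band of the hue wheel.
    public static bool[,] FindOrangePixels(Bitmap image)
    {
        bool[,] mask = new bool[image.Width, image.Height];
        for (int x = 0; x < image.Width; x++)
        {
            for (int y = 0; y < image.Height; y++)
            {
                Color c = image.GetPixel(x, y);   // slow; LockBits is the faster path
                float hue = c.GetHue();           // 0..360 degrees
                float sat = c.GetSaturation();    // 0..1
                float bri = c.GetBrightness();    // 0..1

                // Orange sits roughly between red (0 deg) and yellow (60 deg); the
                // saturation and brightness thresholds reject grey and very dark pixels.
                mask[x, y] = hue > 10f && hue < 40f && sat > 0.4f && bri > 0.2f;
            }
        }
        return mask;
    }
}
```

A blob-finding pass over the resulting mask (group contiguous true pixels and take the largest group's centroid) then gives you a target to steer toward.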
Since you are presumably tracking these objects in real time as you move toward them, you might also be interested in learning about tracking models, such as the Kalman filter. It's overkill for what you're doing, but it's interesting and the basic ideas are helpful. Since you presumably know that the object should not be moving very quickly, you can use that fact to filter out false positives that could otherwise lead you to move away from the object. You can put together a simpler version of this kind of filtering by simply ignoring frames that have moved too far from the previous accepted frame (with a few boundary conditions to avoid getting stuck ignoring the object.)
Here's my problem. I'm creating a game and I'm wondering how to do the collisions. I have several cases to analyze and find the best solution for.
I'll say it up front: I'm not using any third-party physics library, I'm going to do it in house. (As this is an educational project, I don't have deadlines and I want to learn.)
I have two types of mesh I have to handle collisions for:
1) Static meshes (which move around the scene, but do not have ANY animation)
2) Skinned/boned meshes (animated)
Currently I have this solution (quite hacky :|):
First of all, I test against a bounding volume that encloses the full mesh (a capsule in my case). After that:
1) For the static meshes, I divide them manually into blocks (in the modeler) and for each of these blocks I use a sphere/AABB test. (Works fine, but it's a little messy to slice every mesh :P) (I tried an automatic system to divide the mesh with planes, but it gives bad results :()
2) For the animated meshes, at the moment I'm dividing the mesh at runtime into x blocks (where x is the number of bones). Each block contains the vertices for which that bone is the major influence. (Sometimes it works, sometimes it gives really bad results. :|)
Please note that the mesh division is done at load time, not every frame (otherwise it would run like a slideshow :D).
And here are the questions:
What is the sanest approach for those two cases?
Any material for studying these methods? (Source code and explanations would be even better; language is not important, because once I understand the algorithm, the implementation is easy.)
Can you argue why that solution is better than the others?
I've heard a lot of talk about kd-trees, octrees, etc.; while I understand their structure, I don't see their utility in a collision detection scenario.
Thanks a lot for the answers!!!
EDIT: I'm trying to find a k-DOP example with some explanation on the net. Still haven't found anything. :( Any clues?
I'm interested in HOW a k-DOP can be efficiently tested against other types of bounding volumes, etc., but the documentation on the net seems highly lacking. :(
Prior to doing complex collision detection, you should perform basic detection.
Using spheres or boxes as bounding volumes is your best bet (see the sketch below). Then, if this detects a collision, move on to your more complex methods.
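For example, the sphere and axis-aligned box tests are only a few lines each (Vector3 here is System.Numerics, purely for illustration):

```csharp
using System.Numerics;

public static class BroadPhase
{
    // Two spheres overlap when the squared distance between centres is at most
    // the squared sum of their radii (no square root needed).
    public static bool SpheresOverlap(Vector3 centerA, float radiusA,
                                      Vector3 centerB, float radiusB)
    {
        float r = radiusA + radiusB;
        return Vector3.DistanceSquared(centerA, centerB) <= r * r;
    }

    // Axis-aligned boxes overlap when their intervals overlap on all three axes.
    public static bool BoxesOverlap(Vector3 minA, Vector3 maxA,
                                    Vector3 minB, Vector3 maxB)
    {
        return minA.X <= maxB.X && maxA.X >= minB.X
            && minA.Y <= maxB.Y && maxA.Y >= minB.Y
            && minA.Z <= maxB.Z && maxA.Z >= minB.Z;
    }
}
```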
What I'm getting at is that simple is often better, and quicker. Wrapping bounding volumes and splitting meshes up is costly, not to mention complex. You seem to be on the right track, though.
As with most of game programming, there are multiple approaches to collision detection. My advice would be to start simple: take a cube and perfect your routines on that, then in theory you should be able to use any other model. As for examples, I'd check gamedev.net, as they have some nice articles. Much of my home-made collision detection is a combination of many methods, so I can't really recommend a definitive resource.
The most common approaches used in many current AAA games are "k-DOP" simplified collision for StaticMeshes, and a simplified physical-body representation for SkeletalMeshes.
If you google for "kDOP collision" or "discrete oriented polytopes" you should find enough references. A k-DOP is basically a bounding volume defined by several planes that are moved from outside towards the mesh until a triangle collision occurs. The "k" in k-DOP defines how many of these planes are used, and depending on your geometry and your "k" you can get really good approximations (a rough construction and overlap-test sketch follows).
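As an illustrative sketch of how a k-DOP can be built and tested (not taken from any particular engine; the axis set here gives an 8-DOP, and both volumes must share the same axes):

```csharp
using System.Numerics;

public class KDop
{
    // One fixed axis set shared by every k-DOP; 4 axes give 4 min/max pairs, i.e. an 8-DOP.
    public static readonly Vector3[] Axes =
    {
        new Vector3(1, 0, 0),
        new Vector3(0, 1, 0),
        new Vector3(0, 0, 1),
        Vector3.Normalize(new Vector3(1, 1, 1)),
    };

    public float[] Min = new float[Axes.Length];
    public float[] Max = new float[Axes.Length];

    // Build the DOP by projecting every vertex onto each axis and keeping the extremes.
    public static KDop FromPoints(Vector3[] points)
    {
        KDop dop = new KDop();
        for (int i = 0; i < Axes.Length; i++)
        {
            dop.Min[i] = float.MaxValue;
            dop.Max[i] = float.MinValue;
        }
        foreach (Vector3 p in points)
        {
            for (int i = 0; i < Axes.Length; i++)
            {
                float d = Vector3.Dot(p, Axes[i]);
                if (d < dop.Min[i]) dop.Min[i] = d;
                if (d > dop.Max[i]) dop.Max[i] = d;
            }
        }
        return dop;
    }

    // Two DOPs sharing the same axes can only intersect if their projection intervals
    // overlap on every axis; any disjoint interval acts as a separating plane pair.
    public static bool Overlap(KDop a, KDop b)
    {
        for (int i = 0; i < Axes.Length; i++)
            if (a.Max[i] < b.Min[i] || b.Max[i] < a.Min[i])
                return false;
        return true;
    }
}
```

This is also why k-DOP tests stay cheap: the per-pair cost is just k/2 interval comparisons, regardless of how many triangles the original meshes contain.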
For SkeletalMeshes the most common technique is to define simple geometry that is attached to specific bones. This geometry might be a box or a sphere. This collision model can then be used for quite accurate collision detection of animated meshes.
If you need per-triangle collision, the "Separating Axis Theorem" is the google-search term of your choice. This is useful for specific cases, but 75% of your collision-detection needs should be covered by the above-mentioned methods.
Keep in mind that you will most probably need a higher level of early collision rejection than a bounding volume. As soon as you have a lot of objects in the world, you will need to use spatial partitioning to reject groups of objects from further testing as early as possible.
The answer comes down to: how precise do you need to be?
Clearly, bounding spheres are the most trivial. At the other end of the scale, you have full triangle mesh-to-mesh collision detection, which has to happen each time an object moves.
Game development physics engines rely on the art of approximation (I lurked in GameDev.net's math and physics forums years ago).
My opinion is that you will need some sort of bounding ellipsoid associated with each object. An object can be a general multi-mesh object, a mesh, or a submesh. This should provide a 'decent' amount of approximation.
Pick up Christer Ericson's book, Real-Time Collision Detection. He discusses these very issues in great detail.
When reading articles, remember that in a real-world game application you will be working under hard limits of memory and time - you get 16.6ms per frame and that's it! So be cautious of any article or paper that doesn't seriously discuss the memory and CPU footprint of its algorithm.