Windows Phone Indoor Augmented Reality [closed] - c#

Closed as off-topic 10 years ago; not accepting answers.
I need to make an application for Windows Phone that uses augmented reality inside a building.
It's just for one building.
Can anyone tell me if that is even possible, because it is indoors and GPS won't work.
I'm thinking of building a matrix where I manually enter all the rooms and points of interest and so on (I will need to apply Dijkstra or A* anyway, so the matrix is needed regardless).
But how can I navigate and use AR with that matrix on Windows Phone? Is it possible?
If so, can anyone provide a tutorial or sample, or some clues to get me in the right direction?
Thank you all in advance.
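For illustration only, here is a minimal sketch of the kind of "matrix" plus Dijkstra routing the question describes: a walkability grid for one floor and a shortest route between two cells. The sample map, all names, and the .NET 6 PriorityQueue are assumptions made for the example, not anything taken from the question.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical example: one floor of the building as a walkability grid,
// with Dijkstra's algorithm finding a route between two cells.
class IndoorPathfinder
{
    // 0 = walkable corridor/room cell, 1 = wall. Sample data only.
    static readonly int[,] Floor =
    {
        { 0, 0, 0, 1, 0 },
        { 1, 1, 0, 1, 0 },
        { 0, 0, 0, 0, 0 },
        { 0, 1, 1, 1, 0 },
        { 0, 0, 0, 0, 0 },
    };

    static List<(int r, int c)> FindPath((int r, int c) start, (int r, int c) goal)
    {
        int rows = Floor.GetLength(0), cols = Floor.GetLength(1);
        var dist = new Dictionary<(int r, int c), int> { [start] = 0 };
        var prev = new Dictionary<(int r, int c), (int r, int c)>();
        // .NET 6+ PriorityQueue; with uniform step costs A* would only add a heuristic term.
        var queue = new PriorityQueue<(int r, int c), int>();
        queue.Enqueue(start, 0);
        var moves = new (int dr, int dc)[] { (1, 0), (-1, 0), (0, 1), (0, -1) };

        while (queue.TryDequeue(out var cell, out int d))
        {
            if (cell == goal) break;
            if (d > dist[cell]) continue;                 // stale queue entry
            foreach (var (dr, dc) in moves)
            {
                var next = (r: cell.r + dr, c: cell.c + dc);
                if (next.r < 0 || next.r >= rows || next.c < 0 || next.c >= cols) continue;
                if (Floor[next.r, next.c] == 1) continue; // wall
                int nd = d + 1;
                if (!dist.TryGetValue(next, out int old) || nd < old)
                {
                    dist[next] = nd;
                    prev[next] = cell;
                    queue.Enqueue(next, nd);
                }
            }
        }

        // Walk back from the goal to reconstruct the route.
        var path = new List<(int r, int c)>();
        for (var cur = goal; ; cur = prev[cur])
        {
            path.Add(cur);
            if (cur == start) break;
            if (!prev.ContainsKey(cur)) return null;      // goal unreachable
        }
        path.Reverse();
        return path;
    }

    static void Main()
    {
        var route = FindPath((0, 0), (4, 4));
        Console.WriteLine(route == null ? "no route" : string.Join(" -> ", route));
    }
}
```

The routing part is the easy half; as the answer below explains, the hard part is knowing where the phone actually is and which way it is pointing.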

I've written quite a few AR apps for WP. There is one major problem with this: doing AR at such a small scale, with points of interest confined to an area the size of a building, requires very high accuracy in both location and orientation to be useful. As you say, GPS doesn't really work indoors, and even if it did, at that scale you would need accuracy down to roughly one foot. Secondly, you need fairly high angular precision, but the motion sensor in the phone isn't that accurate even outdoors. It gets far worse inside a building, because the metal in the construction can significantly offset the compass (I often see errors >90° indoors).
So unless you find some external location and orientation sensor that can work at that accuracy indoors, I would say it's not possible to make anything that's really useful.
I do have an article on my blog about how to at least do the AR rendering on a Windows Phone: http://www.sharpgis.net/post/2011/12/07/Building-an-Augmented-Reality-XAML-control.aspx but again, I wouldn't expect great results in your case.

Related

Why are my physics different in the editor and deployment at the same framerate? [closed]

Closed 8 years ago as needing details or clarity; not accepting answers.
So I'm making a 2D tile map test project, and it runs perfectly* in the editor, but once I build it to a Windows .exe it runs terribly for no apparent reason**. I didn't touch a single character of code, and the deployment settings are default, so I have no idea what's going on. Can anyone give me any ideas?
*You can move about and jump on the level ground and slopes without getting stuck at all
**You get stuck on every tile's corner, and can't go up slopes at all.
Test case - https://dl.dropboxusercontent.com/u/28109593/unity/MapTest2D.zip
Actually, it seems (and it is logical) that the Unity engine runs faster as a standalone build than in the editor (of course, it doesn't have to run the whole editor around it, not to mention the profiler), so knowing that:
Physics steps can happen more often and with more precise calculations, so small details that go unnoticed in the editor, such as small differences between tiles, will show up in the build; forces may also be applied more accurately and/or more often, which could prevent slope limits from working properly.
That's not all, though: take a look at your project's Player settings and check whether you are running the editor in standalone mode and whether your selected quality setting is the same.
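To make the point concrete, here is a minimal Unity C# sketch of the usual way to keep movement behaviour the same regardless of frame rate: read input in Update, apply forces in FixedUpdate (which runs at the fixed physics timestep in both the editor and a build). The component and field names are illustrative assumptions, not code from the project linked above.

```csharp
using UnityEngine;

// Illustrative sketch: physics work stays in FixedUpdate so it runs at the
// fixed timestep no matter how fast frames render. If you move a Transform
// directly in Update instead, multiply the step by Time.deltaTime.
public class FrameRateIndependentMover : MonoBehaviour
{
    public float moveForce = 10f;       // assumed tuning value
    private Rigidbody2D body;
    private float horizontalInput;

    void Awake()
    {
        body = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        // Read input every rendered frame...
        horizontalInput = Input.GetAxis("Horizontal");
    }

    void FixedUpdate()
    {
        // ...but apply forces only in the fixed physics step, so the result
        // is the same in the editor and in a standalone build.
        body.AddForce(new Vector2(horizontalInput * moveForce, 0f));
    }
}
```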

Best algorithm for recognizing user-defined gestures on Kinect [closed]

Closed as opinion-based 9 years ago; not accepting answers.
I'm developing a Windows application that would allow the user to fully interact with his computer using a Kinect sensor. The user should be able to teach the application his own gestures and assign each of them to some Windows event. After the learning process, the application should detect the user's movements, and when it recognizes a known gesture, the assigned event should be fired.
The crucial part is the custom gesture recognizer. Since the gestures are user-defined, the problem cannot be solved by hard-coding all the gestures directly into the application. I've read many articles discussing this problem, but none of them has given me the answer to my question: which algorithm is the best for learning and recognizing user-defined gestures?
I'm looking for an algorithm that is:
Highly flexible (the gestures can vary from simple hand gestures to whole-body movements).
Fast and effective (the application might be used alongside video games, so we can't consume all of the CPU capacity).
Able to learn a new gesture from no more than 10 repetitions (repeating a gesture more than 10 times to teach the application is, in my opinion, not very user friendly).
Easy to implement (preferably, I want to avoid struggling with two-page equations or so).
Note that the outcome does not have to be perfect. The algorithm recognizing the wrong gesture from time to time is more acceptable than the algorithm running slowly.
I'm currently deciding between 3 approaches:
Hidden Markov Models - these seem to be very popular when it comes to gesture recognition, but they also seem pretty hard to understand and implement. Besides, I'm not sure if HMMs are suitable for what I'm trying to accomplish.
Dynamic Time Warping - I came across a site offering gesture recognition using DTW, but many users complain about its performance.
I was thinking about adapting the $1 recognizer to 3D space and using the movement of each joint as a single stroke. Then I would simply compare the strokes and pick the most similar gesture from the set of known gestures. But in this case I'm not sure about the performance of this algorithm, since there are many joints to compare and the recognition has to run in real time.
Which of these approaches do you think is most suitable for what I'm trying to do? Or are there any other solutions to this problem? I would appreciate any piece of advice that could move me forward. Thank you.
(I'm using the Kinect SDK.)
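As a rough illustration of option 2, here is a minimal sketch of a DTW distance between two recorded trajectories of a single joint. The data layout (one 3D position per captured frame) and all names are assumptions made for the example, not part of the question.

```csharp
using System;
using System.Numerics;

// Illustrative sketch: classic dynamic time warping between two sequences of
// 3D joint positions. A recorded gesture would be compared against each
// stored template, and the template with the smallest distance wins.
static class DtwSketch
{
    // seqA and seqB each hold one joint's position per captured frame.
    public static float Distance(Vector3[] seqA, Vector3[] seqB)
    {
        int n = seqA.Length, m = seqB.Length;
        var cost = new float[n + 1, m + 1];

        // Initialise the borders to "infinity" so the warp path must start at (1,1).
        for (int i = 0; i <= n; i++) cost[i, 0] = float.PositiveInfinity;
        for (int j = 0; j <= m; j++) cost[0, j] = float.PositiveInfinity;
        cost[0, 0] = 0f;

        for (int i = 1; i <= n; i++)
        {
            for (int j = 1; j <= m; j++)
            {
                float d = Vector3.Distance(seqA[i - 1], seqB[j - 1]);
                // Each cell extends the cheapest of the three possible warps.
                cost[i, j] = d + Math.Min(cost[i - 1, j],
                                 Math.Min(cost[i, j - 1], cost[i - 1, j - 1]));
            }
        }
        return cost[n, m];
    }
}
```

For a full-body gesture you would run this per tracked joint and combine the distances, which is where the performance concern mentioned in the question comes from.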

Kinect development with Windows: C# or C++? [closed]

Closed 9 years ago for not meeting Stack Overflow guidelines; not accepting answers.
I'm totally new to Windows development. I'm coming from Objective-C, but now I want to start developing for Kinect on Windows. I have to choose between C++ and C#; is one of these languages more appropriate for Kinect development? I'm inclined towards C++, but I don't know if C# would make everything easier, maybe with more support for Kinect?
EDIT
Another question: do I need to buy the Kinect for Windows sensor, or can I develop with a standard Xbox Kinect sensor?
Assuming you are using the official Kinect SDK, it supports C++, C#, and VB. Use the language that best suits your needs.
To answer your second question, you can use the Kinect for Windows sensor or the Kinect for Xbox 360 sensor. The choice is yours.
However, there are some notable differences. This blog post does a good job of explaining them. Below are the main features that the Windows sensor offers over the Xbox sensor, taken from the blog post in case the link breaks in the future.
Near mode: Enables the camera to see objects as close as 40 centimeters in front of the device without losing accuracy or precision, with graceful degradation out to 3 meters.
Seated or "10 joint" mode: Skeletal tracking which provides the capability to track the head, neck and arms of either a seated or standing user.
USB cable: Ensures reliability across a broad range of computers and improves coexistence with other USB peripherals.
Extended camera settings: Provides extra settings such as brightness, exposure, etc. so you can tune it even more.
Kinect Fusion: Maps the environment to 3D on the fly or lets you use object replacement.
Handgrip: Hand detection enables you to implement gestures like pinch-to-zoom, grab, etc. to improve your apps and build whole new kinds of applications.
Licensing: When you want to go public with your application you'll need to use a Kinect for Windows; using a Kinect for Xbox 360 isn't legal.
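For a small taste of the C# side, this is roughly how the first two features above (Near mode and Seated tracking) are switched on with the Kinect for Windows SDK 1.x; treat the exact calls as a sketch from memory rather than verified documentation.

```csharp
using Microsoft.Kinect;

// Sketch: enabling two Kinect for Windows-only features from C#
// (Kinect for Windows SDK 1.x style API, from memory).
class SensorSetup
{
    static KinectSensor StartSensor()
    {
        // Grab the first connected sensor.
        KinectSensor sensor = KinectSensor.KinectSensors[0];

        sensor.DepthStream.Enable();
        sensor.DepthStream.Range = DepthRange.Near;                        // Near mode (Windows sensor only)

        sensor.SkeletonStream.Enable();
        sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;  // "10 joint" mode

        sensor.Start();
        return sensor;
    }
}
```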

Why does Windows only allow one application to access the webcam? [closed]

Closed as off-topic 11 years ago; not accepting answers.
I've been trying to make a sample webcam app in C#, and I discovered the app cannot run at the same time as Skype or ooVoo or any other application that uses the camera (and vice versa). Why do applications get exclusive locks on a webcam?
Video capture APIs come from a time when adding layers to share video hardware was unreasonable in terms of performance. Also, with two or more apps working with a camera, one would have to make them agree on a capture format in a way that satisfies both. So it was done in the simplest and most straightforward way: you grab the camera, it's yours, and you can set it up for your own needs; everyone else has to wait for you to release the hardware before they can use it.
You can find third-party software that shares a camera: internally it grabs the device exclusively and then exposes a virtual camera that is shareable. This trades off performance for flexibility.
Audio APIs also locked hardware exclusively some time ago, but at some point the OS APIs introduced hardware abstraction layers that share the hardware and do the mixing from multiple applications behind the scenes.
This is probably also intended to prevent an application from spying on people while they are using their webcam through Skype or whatever.

C# XNA Lag and framerate issues XBOX 360 [closed]

Closed as off-topic 11 years ago; not accepting answers.
I am a pretty good programmer and I am working on a Minecraft-like block-building game for Xbox. I have about 10 thousand blocks in my game, but whenever I run it on my Xbox I have some really bad lag problems. One thing I did that kind of helped was setting objects to null after using them, but I am still having issues. How do most game developers solve this problem? I thought of only drawing blocks that are close to the player, but I think that looping through every block in the world would slow it down even more.
You're on the right track, you definitely only want to be drawing things in the immediate vicinity if at all possible.
Quadtrees and octrees are data structures designed to slice up 2D/3D space respectively to make finding objects in a given area very easy. Sounds like this is what you are looking for.
You could use either, depending on what you wanted your definition of "nearby" to be. If you wanted to achieve the same as Minecraft, then what Minecraft does is display entire columns of blocks, so you could get away with a quadtree used to manage things on the X/Z coordinates and always show everything on the Y. If you wanted to do a 3D based definition of nearby, then you'd need a octree.
The way these work is by partitioning space using a tree structure. Each branch in the tree represents a quadrant (or octant, in the case of an octree) of the available space, and each subsequent branch is a quadrant of that quadrant. Hence, it is very easy to drill down to a specific area. The leaves of the tree hold the actual data, i.e. the blocks that make up your world.
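As a rough sketch of that idea (type names, the split threshold, and the half-open bounds convention are all assumptions made for the example), a point quadtree over the X/Z plane might look like this:

```csharp
using System.Collections.Generic;

// Illustrative sketch of a point quadtree over the X/Z plane (Y ignored,
// matching the "entire columns of blocks" approach described above).
public class QuadTree<T>
{
    private const int Capacity = 8;                            // assumed split threshold
    private readonly float minX, minZ, maxX, maxZ;             // bounds, half-open on the max side
    private readonly List<(float x, float z, T item)> items = new();
    private QuadTree<T>[] children;                            // null until this node splits

    public QuadTree(float minX, float minZ, float maxX, float maxZ)
    {
        this.minX = minX; this.minZ = minZ; this.maxX = maxX; this.maxZ = maxZ;
    }

    public bool Insert(float x, float z, T item)
    {
        if (x < minX || x >= maxX || z < minZ || z >= maxZ)
            return false;                                      // outside this node's bounds

        if (children == null)
        {
            if (items.Count < Capacity) { items.Add((x, z, item)); return true; }
            Split();                                           // full: create four child quadrants
        }
        foreach (var child in children)
            if (child.Insert(x, z, item)) return true;
        return false;
    }

    // Collects everything inside the rectangle - the "blocks near the player" query.
    public void Query(float qMinX, float qMinZ, float qMaxX, float qMaxZ, List<T> results)
    {
        if (qMaxX < minX || qMinX >= maxX || qMaxZ < minZ || qMinZ >= maxZ)
            return;                                            // whole branch lies outside the query

        foreach (var (x, z, item) in items)
            if (x >= qMinX && x <= qMaxX && z >= qMinZ && z <= qMaxZ)
                results.Add(item);

        if (children != null)
            foreach (var child in children)
                child.Query(qMinX, qMinZ, qMaxX, qMaxZ, results);
    }

    private void Split()
    {
        float midX = (minX + maxX) / 2f, midZ = (minZ + maxZ) / 2f;
        children = new[]
        {
            new QuadTree<T>(minX, minZ, midX, midZ), new QuadTree<T>(midX, minZ, maxX, midZ),
            new QuadTree<T>(minX, midZ, midX, maxZ), new QuadTree<T>(midX, midZ, maxX, maxZ),
        };
        foreach (var (x, z, item) in items)                    // push existing items down
            foreach (var child in children)
                if (child.Insert(x, z, item)) break;
        items.Clear();
    }
}
```

Each frame you would query a box around the player and draw only what comes back, instead of looping over every block in the world.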
