Leap Motion Coordinate System, Translations and Interaction Box with Unity C#

I am working on a Unity Leap Motion desktop project using the latest versions of Orion and the Unity prefabs package. I have created a simple scene pretty much identical to the one in the Desktop Demo scene (a pair of capsule hands and a simple object you can poke and move around).
https://developer.leapmotion.com/documentation/csharp/devguide/Leap_Coordinate_Mapping.html
This article covers the issues I am currently facing, but so far I have been unable to implement its solutions in my project.
When moving a hand across the camera's maximum range, for example left to right or in any other direction, the motion translates to only a portion of the available screen space; in other words, a user will never be able to reach the edges of the screen with their hands. From my understanding, the tracking data provided in millimetres by the camera is somehow translated into units that Unity can understand and process. I want to change that scale.
From the article: "You also have to decide how to scale the Leap Motion coordinates to suit your application (i.e. how many pixels per millimetre in a 2D application). The greater the scale factor, the more effect a small physical movement will have." This is exactly what I want to do in Unity.
Additionally, even after what I think was a successful attempt at normalising coordinates using the InteractionBox, I am unsure what to do with the results. Where do I pass these values so that the hands are displayed at the updated position?
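For illustration, here is a minimal sketch of the mapping I have in mind, assuming the Leap C# API (Controller, Frame, InteractionBox.NormalizePoint) and a plain Unity Transform to drive; the scale and offset fields are my own hypothetical tuning parameters, not part of the SDK:

    using Leap;
    using UnityEngine;

    public class PalmFollower : MonoBehaviour
    {
        Controller controller = new Controller();

        // Hypothetical tuning values: world units per normalized unit.
        public Vector3 scale = new Vector3(10f, 10f, 10f);
        public Vector3 offset = Vector3.zero;

        void Update()
        {
            Frame frame = controller.Frame();
            if (frame.Hands.Count == 0) return;

            // NormalizePoint maps millimetre coordinates into [0..1] per axis.
            InteractionBox box = frame.InteractionBox;
            Vector n = box.NormalizePoint(frame.Hands[0].PalmPosition, true);

            // Re-centre around zero and apply our own scale factor: the larger
            // the scale, the more world space a small physical movement covers.
            transform.position = offset + new Vector3(
                (n.x - 0.5f) * scale.x,
                (n.y - 0.5f) * scale.y,
                (n.z - 0.5f) * scale.z);
        }
    }

With the capsule-hands prefab, I believe simply scaling up the HandController object's transform has a similar effect of multiplying physical motion into world space, but I have not confirmed this.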

Related

C# generating lake-like shapes in a 2d top-down tile grid

I'm a beginner in game development and want to create a top-down game using tilemaps. I'm using Godot as my game engine but a general C# solution is fine.
I'm looking to generate lakes in the tilemap, but due to my inexperience I can't come up with any ideas that might work. Previously I tried using Simplex noise, but I decided against it due to the lack of control over where the lakes spawn.
Performance is somewhat important but the world will be finite and not procedural, similar to Terraria.
I'm open to any ideas on the matter that would be reasonable within a videogame.
For a finite world size, I can sketch an approach that you can try out. It is not language-specific, but you should be able to implement it easily in C# (a sketch follows after the steps):
The tiled world is represented as a bitmap: 0=land, 1=water
To generate a lake, mark the starting tile as water and add its coordinates to a queue
Dequeue a point from the queue. For that tile and for each direction, randomly decide whether the adjacent tile is also water. Add newly marked water tiles to the queue.
Control the lake shape by different probabilities for different directions
Control the size of the lake either by limiting the iteration count or by decreasing the chance of a new water tile with distance from the starting point
Repeat until the queue is empty.
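A minimal C# sketch of those steps, assuming a finite bool map (false = land, true = water); GenerateLake and the probability constants are my own illustrative choices:

    using System;
    using System.Collections.Generic;

    public static class LakeGenerator
    {
        static readonly Random Rng = new Random();
        static readonly (int dx, int dy)[] Directions = { (1, 0), (-1, 0), (0, 1), (0, -1) };

        // Grows a lake outward from (startX, startY) on a w x h map.
        public static void GenerateLake(bool[,] map, int startX, int startY,
                                        double baseChance = 0.6, double falloff = 0.08)
        {
            int w = map.GetLength(0), h = map.GetLength(1);
            var queue = new Queue<(int x, int y)>();
            map[startX, startY] = true;           // mark the starting tile as water
            queue.Enqueue((startX, startY));

            while (queue.Count > 0)
            {
                var (x, y) = queue.Dequeue();
                foreach (var (dx, dy) in Directions)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h || map[nx, ny])
                        continue;

                    // The spread chance shrinks with distance from the start,
                    // which bounds the lake's size.
                    double dist = Math.Sqrt((nx - startX) * (nx - startX) +
                                            (ny - startY) * (ny - startY));
                    if (Rng.NextDouble() < baseChance - falloff * dist)
                    {
                        map[nx, ny] = true;       // new water tile
                        queue.Enqueue((nx, ny));
                    }
                }
            }
        }
    }

Using different probabilities per direction (instead of one baseChance) would stretch the lakes along a chosen axis.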
Let me know how this works out - I have only tested it in my mental code simulator, so mileage may vary ;) If you need help implementing the approach, don't hesitate to ask.

Unity: Machine learning - level solving

Recently I've been messing around with machine learning, and I wanted to see if I could create an AI for the game I'm currently making. The AI should be able to solve the puzzle for you.
The game currently works as follows: you have a few tiles in a grid, some of them movable, some of them not. You click on a tile you want to move and drag it in a direction. It'll then start moving the tiles, and optionally also the player character itself. The end goal is to reach the end tile. Level example, Solving the level
Playing the game yourself:
Whenever you select a tile (you do this by clicking), you hold the mouse button down and drag in the direction you want the tile to move. Once the tiles are done moving, the player object will move one step in the same direction. If the player is on top of a tile that you move, it moves with the tile, and afterwards takes another step in the same direction.
I was wondering if it's possible (and if so, how) for machine learning to define a position on the screen, (optionally) click and then define a movement direction?
Please keep in mind that I'm fairly new to machine learning!
To give some more clarification:
The grid is static for now, to keep it simple for the AI. But later on, the goal is to generate levels randomly and see if it can solve them.
In theory, all the AI should have to do is select a tile to move (a number between 0 and the width of the grid, and the same for the height) and define a movement direction: either (0, 1), (0, -1), (1, 0) or (-1, 0).
Falling off the grid results in a reset.
Reaching the end of the grid results in a win.
Moving in an invalid direction results in a reset.
Based on your bullet points, I would honestly suggest just implementing the A* pathfinding algorithm, with some modifications to emulate machine learning. A* determines the best path on a grid from point A to point B, and with some clever programming you could achieve the result you want with a reasonable amount of overhead.
Something along the lines of having a list of "do not touch" grid points (death traps, etc.), which gets filled as the AI runs into them, so on the next iteration it knows not to take that path. This is a very basic abstraction of your idea, but it would be quite attainable.
Obviously we cannot write the code for you; luckily, there are tons of resources on A* pathfinding to help you get started!
Here is a simple tutorial
Here is an implementation that was used in Unity
Here is a code review of someone's implementation
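As a rough illustration (my own sketch, not code from any of those links), here is a minimal A* on a 4-connected grid, with a blocked set playing the role of the "do not touch" list. Note that PriorityQueue<TElement, TPriority> needs .NET 6+; in Unity you would substitute your own binary heap or sorted list:

    using System;
    using System.Collections.Generic;

    public static class AStar
    {
        // Returns a path from start to goal on a w x h grid, or null if none.
        public static List<(int x, int y)> FindPath(
            int w, int h, (int x, int y) start, (int x, int y) goal,
            HashSet<(int x, int y)> blocked)
        {
            var open = new PriorityQueue<(int x, int y), int>();
            var cameFrom = new Dictionary<(int x, int y), (int x, int y)>();
            var gScore = new Dictionary<(int x, int y), int> { [start] = 0 };
            open.Enqueue(start, Heuristic(start, goal));

            while (open.Count > 0)
            {
                var current = open.Dequeue();
                if (current == goal) return Reconstruct(cameFrom, current);

                foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                {
                    var next = (x: current.x + dx, y: current.y + dy);
                    if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h ||
                        blocked.Contains(next))
                        continue;

                    int tentative = gScore[current] + 1;
                    if (!gScore.TryGetValue(next, out int g) || tentative < g)
                    {
                        gScore[next] = tentative;
                        cameFrom[next] = current;
                        open.Enqueue(next, tentative + Heuristic(next, goal));
                    }
                }
            }
            return null; // goal unreachable with current knowledge
        }

        // Manhattan distance: admissible on a 4-connected grid.
        static int Heuristic((int x, int y) a, (int x, int y) b) =>
            Math.Abs(a.x - b.x) + Math.Abs(a.y - b.y);

        static List<(int x, int y)> Reconstruct(
            Dictionary<(int x, int y), (int x, int y)> cameFrom, (int x, int y) current)
        {
            var path = new List<(int x, int y)> { current };
            while (cameFrom.TryGetValue(current, out current)) path.Add(current);
            path.Reverse();
            return path;
        }
    }

Re-running FindPath after adding each newly discovered trap to blocked gives exactly the "learn what to avoid between iterations" behaviour described above.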
Assuming you actually want to use machine learning and not just a pathing system:
I will lay out some pseudo code that you can use for a basic scenario of the AI learning a static board. There are different ways to write and implement this code; I have only suggested one. But before we get to that, let's first discuss the project overall and some suggestions for it.
Suggestions:
I would say that you will want to measure the game state on the board, and not the mouse movements. So basically the AI is measuring what moves can be made. The mouse movement part is just a way for the player to interact with the board so it is not needed by the AI. It will be simpler to just let the AI make the moves directly.
I don't think that Unity is a good platform for this kind of experimentation. You would be better off programming this in a console program: for example, a two-dimensional array (the board) in a Visual Studio C# console program, or a C console program via the CS50 IDE (free with sign-up via edx.org for CS50, https://manual.cs50.net/ide). I suggest these because I think Unity will just add unnecessary layers to a machine learning experiment.
My assumption is that you want to learn machine learning, and not just how to make an AI solve a puzzle in your game, because in the latter case better options would be a proper pathing system, or having the AI brute force several attempts at the puzzle before moving and selecting the solution with the fewest steps.
Pseudo Code:
Now onto some pseudo code for your machine learning program.
Assumptions:
A. You have a board with set dimensions that you can pass to the Ai at the start.
B. There are tiles on the board the AI cannot move into (obstacles).
C. The AI should learn to solve the problem, instead of having the answer at the beginning because of good code that we designed (like a decent pathing system).
D. We don't want the AI to brute force this by trying a billion different combinations before moving, because that would imply perfect understanding of its environment. If the AI has perfect understanding of its environment, then yes, it should use brute force where reasonable.
Coding Logic:
Scenario 1: The AI plays on the same board every time with the same starting conditions.
I. Start by setting a discrete time interval in which the AI makes a move, for example one move per second.
II. Have a counter for the number of moves made to reach the end tile, and record the sequence of moves associated with this counter.
III. If the AI has no history with which to make a move it makes a move in a random direction.
IV. If the move is invalid then the counter increases and the move is recorded, but the AI stays on the same tile.
V. When the AI completes the puzzle, the counter and sequence of moves are stored for later use.
VI. In subsequent playthroughs, the AI always starts by selecting the recorded path with the smallest count.
VII. Once the AI begins moving, it has a 1% chance per move to try something different. When the 1% is triggered, the AI has a 50% chance to try one of the following:
a. 50% chance: it checks all the sequences in its history for a section between its current tile and the finish tile that is shorter than its current path; if there are several, it selects the shortest. When the AI finishes the round, it records the new total sequence taken.
b. 50% chance: the AI makes a move in a random direction. Subsequent moves again follow this logic: 50% chance to check the history, 50% chance to move randomly again. When completed, again record the sequence of moves.
VIII. You can seed this by making the AI run the puzzle 10,000 times in a few seconds behind the scenes; when you observe it afterwards, it should have selected a reasonable path.
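Here is a compressed sketch of steps III through VII (heavily simplified: it keeps only the single best recorded sequence and folds the two 1%-branch options into plain random exploration), with IPuzzle standing in for your game's move API; all names are illustrative:

    using System;
    using System.Collections.Generic;

    // Stand-in for the game: reset the board, attempt a move (false if
    // invalid), and report whether the end tile has been reached.
    public interface IPuzzle
    {
        void Reset();
        bool TryMove(int direction); // 0..3 = up/down/left/right
        bool IsSolved { get; }
    }

    public class SequenceLearner
    {
        readonly Random rng = new Random();
        List<int> bestSequence; // shortest winning sequence found so far

        public void Train(IPuzzle puzzle, int episodes)
        {
            for (int i = 0; i < episodes; i++)
            {
                puzzle.Reset();
                var moves = new List<int>();
                bool deviated = false;

                while (!puzzle.IsSolved && moves.Count < 10000)
                {
                    int dir;
                    // Replay the best known path, with a 1% chance per move
                    // of switching to exploration (step VII, simplified).
                    if (bestSequence != null && moves.Count < bestSequence.Count &&
                        !deviated && rng.NextDouble() >= 0.01)
                        dir = bestSequence[moves.Count];
                    else
                    {
                        deviated = true;
                        dir = rng.Next(4); // step III: random move
                    }

                    puzzle.TryMove(dir); // invalid moves still count (step IV)
                    moves.Add(dir);
                }

                // Steps V-VI: keep the shortest winning sequence.
                if (puzzle.IsSolved &&
                    (bestSequence == null || moves.Count < bestSequence.Count))
                    bestSequence = moves;
            }
        }
    }

Calling Train(puzzle, 10000) corresponds to the behind-the-scenes seeding in step VIII.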
If a computer can brute force a problem in reasonable time, it should start with that. However, bear in mind that machine learning in a program where the machine already knows all the variables is different from machine learning in the environment, where, for example, a robot has to navigate an unknown environment. The above approach should work in the latter case. You may also want to investigate having the AI map out the entire terrain by trying to move to every tile, forming an understanding of the environment, and then brute forcing a solution once it understands the variables.
In a non-static environment you will want to enhance the valuation system. This answer is already too long, so I won't go into it.
Short answer to both questions: yes.
You can create an AI that either uses the game state (so it can read the objects/properties of your grid) or uses raw screen input combined with image processing, which is hard to create and computationally expensive to run.
On the Unity forums there are several answers to the question "How to mimic mouse input" or similar. Take a look here:
https://answers.unity.com/questions/564664/how-i-can-move-mouse-cursor-without-mouse-but-with.html
If you are looking for the code for the AI, sadly, you are out of luck. There are lots of AI tutorials online for creating a simple AI for such a game. I would advise not diving head-first into the fancy stuff (like neural networks) and starting simple. In my opinion, it would be best to start by creating a class structure for your AI and learning AI by practice. Start with an "AI" that just randomly returns something, then see what you can learn and manage online and make other versions.
For a first AI, take a look into goal-driven AIs or state machines. I think they should be able to give nice results, given your gifs.
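As a sketch of such a starting structure (all names are my own illustration, matching the move format from the question), the "randomly returns something" version could be as small as:

    using System;

    // A move in the question's format: a tile plus a direction such as (0, 1).
    public readonly struct Move
    {
        public readonly int TileX, TileY, DirX, DirY;
        public Move(int tileX, int tileY, int dirX, int dirY)
        { TileX = tileX; TileY = tileY; DirX = dirX; DirY = dirY; }
    }

    public interface IAgent
    {
        Move ChooseMove(int gridWidth, int gridHeight);
    }

    // The baseline agent: a random tile and a random cardinal direction.
    // Later versions can swap in a state machine or goal-driven logic
    // behind the same interface.
    public class RandomAgent : IAgent
    {
        static readonly (int x, int y)[] Dirs = { (0, 1), (0, -1), (1, 0), (-1, 0) };
        readonly Random rng = new Random();

        public Move ChooseMove(int gridWidth, int gridHeight)
        {
            var (dx, dy) = Dirs[rng.Next(Dirs.Length)];
            return new Move(rng.Next(gridWidth), rng.Next(gridHeight), dx, dy);
        }
    }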

Why does the Tango Point Cloud lose so much accuracy in the Y direction?

I am using the Tango Point Cloud to place 3D objects in the world; however, over time the accuracy of the point cloud worsens.
When I start the app, the point cloud lines up with everything correctly, but after maybe 10 seconds of moving the camera around, the point cloud mesh is hovering about 1-2 inches above the real-world objects. It gets worse until I restart the app. It seems OK in the X and Z directions, but the error always slowly increases in the Y direction.
I found a similar question, but I'm not sure it's an offset issue, because it looks correct in the beginning and just slowly gets worse over time: How do I get more reliable Y position tracking for the Google Tango in Unity?
Also, I tried going back to the Point Cloud example from the Tango GitHub and enabling the video overlay so I could compare the point cloud mesh with the real-world objects, and it happens there too: the mesh slowly begins to hover above the actual objects. What is causing this, and how do I fix it?
I found that this project (another example project from Google) didn't have the problem of the point cloud losing accuracy. After examining it for differences, I noticed that it doesn't have a character controller on the Tango AR camera. So I removed the character controller from the camera in both my project and the Point Cloud example from the Tango GitHub, and this does fix the problem of bad Y accuracy. (You have to completely remove it from the camera; just unchecking it in the Unity editor doesn't fix it.)

AR objects drift issue in Google TANGO

I am trying to create a simple scene where a few objects are placed on a table. Object placement works perfectly, but when I move the device, the objects drift around a bit. At some point, the objects placed at the corner feel like they are not on the table but floating in the air.
Even in the sun, moon and earth example from the Unity examples here: https://github.com/googlesamples/tango-examples-unity
the earth and moon drift as you move the device.
Is this a bug, or is there some special setting I'm missing?
The objects drift because as the Tango device moves through space, it is only tracking its own position in 3D space. For objects to remain static in a dynamic environment, the device needs to understand the position of the placed objects in 3D space and their relation to the surroundings in order to anchor the objects and reduce drift.
Luckily, TangoCore has you covered here: the three core technologies of Motion Tracking, Depth Perception and Area Learning all work together to help out.
If I'm not mistaken, the Sun and Moon example is the scene "SimpleAugmentedReality" under tango-examples-unity / UnityExamples / Assets / TangoSDK / Examples / Scenes /
However, if you would like to anchor the objects in 3D space and reduce drift, you'll need to use Area Learning and Depth Perception as well. Area Learning performs loop closures when the device realises it has "seen" an area before, adjusting the path and markers to provide a more accurate position for the device and the augmented content.
So here is what you can do to learn what you need: save your current scene, go to Open Scene, follow the path tango-examples-unity / UnityExamples / Assets / TangoSDK / Examples / Scenes /, and load up some of the other scenes to get an understanding of how the technologies intertwine.
For example, you could load the ExperimentalMeshBuilderWithColour scene to learn how depth processing works programmatically, then load the MotionTracking scene to learn how to access and use motion tracking from the TangoManager game object. And finally (also probably the most frustratingly difficult), learn how Area Learning is managed with the AreaDescriptionManagement and AreaLearning scenes.
This will not only solve your drift issues, but also give you a much fuller understanding of the capabilities of the Tango technology and allow you to express your ideas much more easily.

C# Mobile Game Development

I'm currently trying to implement a marble maze game for a WM 5.0 device and have been struggling to develop a working prototype. The prototype needs to let the user control the ball using the directional keys, with realistic acceleration and friction.
I was wondering if anyone has experience with this and can give me some advice or point me in the right direction as to what is essential and the best way to go about doing such a thing.
Thanks in advance.
Frank.
Reading your question, I didn't get the feeling you are looking for a game framework, but rather: how can I easily model a ball with acceleration and friction?
For this you don't need a full-fledged physics framework, since it is relatively simple to do yourself:
First create a timer which fires 30 times a second, and in the timer callback do the following:
Draw the maze background
Draw a ball at ballX, ballY (both floating point variables)
Add ballSpdX to ballX and add ballSpdY to ballY (the speed)
Now check the keys...
if the directional key is left, then subtract a small amount from ballSpdX
if the directional key is top-left, then subtract a small amount from both ballSpdX and ballSpdY
etc.
For collision do the following:
First move the ball in the horizontal direction, then check for collisions with the walls. If a collision is detected, move the ball back to its previous position and reverse the speed: ballSpdX = -ballSpdX
Then move the ball in the vertical direction and check for collisions with the walls again. If a collision is detected, move the ball back to its previous position and reverse the speed: ballSpdY = -ballSpdY
By handling the vertical and horizontal movement separately, the collision is much easier, since you know which way the ball needs to bounce.
Last but not least, friction. Friction is just doing this every frame: ballSpdX *= friction; (and the same for ballSpdY)
Where friction is something like 0.99. This makes sure the speed of the ball gets smaller every frame due to friction.
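Put together, a minimal sketch of that timer callback in plain C# (IsWall is a hypothetical collision test against your maze data; the constants are my own):

    public class Ball
    {
        // Position and speed as floating point values.
        float ballX = 32, ballY = 32;
        float ballSpdX, ballSpdY;
        const float Accel = 0.5f;     // speed added per frame while a key is held
        const float Friction = 0.99f; // speed multiplier per frame

        // Hypothetical: test the ball's position against the maze walls.
        bool IsWall(float x, float y) { return false; }

        // Call this from the timer firing ~30 times per second,
        // after drawing the maze background and the ball.
        public void Update(bool left, bool right, bool up, bool down)
        {
            // Keys adjust speed rather than position: that is the acceleration.
            if (left)  ballSpdX -= Accel;
            if (right) ballSpdX += Accel;
            if (up)    ballSpdY -= Accel;
            if (down)  ballSpdY += Accel;

            // Move and collide one axis at a time, so we know which speed
            // component to reverse on a bounce.
            ballX += ballSpdX;
            if (IsWall(ballX, ballY)) { ballX -= ballSpdX; ballSpdX = -ballSpdX; }

            ballY += ballSpdY;
            if (IsWall(ballX, ballY)) { ballY -= ballSpdY; ballSpdY = -ballSpdY; }

            // Friction bleeds off a little speed every frame.
            ballSpdX *= Friction;
            ballSpdY *= Friction;
        }
    }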
Hope this helped
I would recommend checking out XNA Game Studio 3: it has built-in support for PC, Xbox 360 and mobile devices, and it's an official and free spin-off of Visual Studio from Microsoft.
http://creators.xna.com/en-US/
http://blogs.msdn.com/xna/
If you search around, people have written tutorials using physics (velocity in this one):
http://www.xnamachine.com/2007/12/fun-with-very-basic-physics.html
Try XFlib. It is in C++, but most cool things for mobile have to be in C++, unfortunately. The site has some very cool free games, and you can also see the source of most of the games. Many have the physics you want.
Unfortunately, XNA doesn't support the Windows Mobile platform. However, since it seems you're not having a problem with the technical issue of drawing on the WM device, but with the logic required to implement physics-based movement, it's not a bad idea to consider XNA to prototype the physics and movement code.
Check out some of the educational topics at creators.xna.com, and also gamedev.net.
If you are at a loss, there's no harm in trying a "lighter" tool for the prototype. I would try Torque Game Builder: it outputs XNA, although it is maybe not meant for your platform.
Among the samples of the Windows Mobile SDK (check out the WM 6.0 SDK too), there are a couple of game applications. One of them is a simple puzzle game; not much, but it is a starting point.
The use of physics in game development is not specific to Windows Mobile; you can find a huge literature on the subject. This comes to mind now. If you are serious about game development, on any platform, you should do a little research first.
I don't know if this may help, but I saw a marble application for the Android platform on Google Code. Check it out here; it may give some insight into the actual logic of the game.
The code is open source and written in Java (using the Android SDK), but it may nevertheless be useful. Also, to better understand the code, check out the documentation for SensorManager, SensorEvent, etc. here.
I wouldn't recommend using the same architecture as this application, though.
