How do you simulate a liquid inside a bottle using Unity?

I need bottles to be filled with liquid in Unity 3D. As Unity doesn't have liquids, I need to simulate them. Can you suggest how I can achieve the following functionality with a liquid simulation:
A 3D object of any shape (bottle, conical flask, beaker, etc.) must be filled with liquid. The volume of liquid to be filled is a variable, decided by the user.
When I tilt/rotate the object, physics must act on the liquid inside it, as shown in the figure: the liquid inside the bottle has to move depending on how the bottle is positioned in 3D.
I have tried stencil buffers, the particle system, the Cloth component, etc., but couldn't achieve this with any of them.
The problem with the particle system is that it is performance-heavy, and particles leak out through the sharp edges of the GameObject's mesh even though collision is enabled for the particle system. With stencil buffers, I didn't understand how the liquid inside an object could move depending on the positioning of the object.
Any suggestions or solutions are appreciated.

There is a good asset on the Asset Store for GPU-based fluid simulation:
https://www.assetstore.unity3d.com/en/#!/content/65359
You can give it a shot.
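If you'd rather not pull in a full fluid simulation, a common lightweight alternative is a shader that fills the mesh up to a world-space height, driven by a small script that adds wobble based on the bottle's motion. Below is a minimal sketch of the wobble driver; the shader property names (_FillAmount, _WobbleX, _WobbleZ) are assumptions and must match whatever fill shader you write or use.

```csharp
using UnityEngine;

// Sketch: drives a liquid "fill + wobble" shader on a bottle mesh.
// Assumes the material exposes _FillAmount, _WobbleX and _WobbleZ floats.
public class LiquidWobble : MonoBehaviour
{
    [Range(0f, 1f)] public float fillAmount = 0.5f; // user-chosen volume
    public float maxWobble = 0.03f;
    public float wobbleSpeed = 1f;   // slosh oscillation frequency
    public float recovery = 1f;      // how fast the surface settles

    Renderer rend;
    Vector3 lastPos, lastRot;
    float wobbleX, wobbleZ;

    void Start()
    {
        rend = GetComponent<Renderer>();
        lastPos = transform.position;
        lastRot = transform.rotation.eulerAngles;
    }

    void Update()
    {
        // Decay the current wobble so the liquid settles over time.
        wobbleX = Mathf.Lerp(wobbleX, 0f, Time.deltaTime * recovery);
        wobbleZ = Mathf.Lerp(wobbleZ, 0f, Time.deltaTime * recovery);

        // Add wobble proportional to this frame's movement and rotation.
        Vector3 velocity = (lastPos - transform.position) / Time.deltaTime;
        Vector3 angular = transform.rotation.eulerAngles - lastRot;
        wobbleX += Mathf.Clamp((velocity.x + angular.z * 0.2f) * maxWobble, -maxWobble, maxWobble);
        wobbleZ += Mathf.Clamp((velocity.z + angular.x * 0.2f) * maxWobble, -maxWobble, maxWobble);

        // Oscillate with a sine so the surface sloshes back and forth.
        float sin = Mathf.Sin(2f * Mathf.PI * wobbleSpeed * Time.time);
        rend.material.SetFloat("_WobbleX", wobbleX * sin);
        rend.material.SetFloat("_WobbleZ", wobbleZ * sin);
        rend.material.SetFloat("_FillAmount", fillAmount);

        lastPos = transform.position;
        lastRot = transform.rotation.eulerAngles;
    }
}
```

The shader side then clips or recolors fragments above the fill plane, tilted by the two wobble values. It only fakes the look of a liquid, but that is usually enough for bottles and flasks, and it is far cheaper than particles.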

Related

Object too fast for Collision in Unity

I'm currently making a simulation where disks are placed on a conveyor belt and sorted by color. When the sensor detects a white disk, a rectangular object 'pushes' it off the conveyor belt into a box, and it needs to move quickly. However, whenever we set the speed to a high number, it just goes through the disks without pushing them. I have already tried setting the pushing object's collision detection to 'Continuous Dynamic' (Rigidbody) and the disks' collision detection to 'Continuous' (as in this video: https://www.youtube.com/watch?v=XvrFQJ3n8Mo). I have attached an image of how the mentioned part of the robot looks, along with the settings for both the disk and the pusher object.
disk and pusher object settings
visualisation of robot simulation
Your problem has been solved HERE; in short, it is because only convex mesh colliders can collide with each other (your right one is convex and the left one is not).
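Separately, if the pusher is kinematic and you move it by writing to transform.position, the physics engine never sees a swept motion, so continuous collision detection can't help. A minimal sketch of driving it through Rigidbody.MovePosition in FixedUpdate instead (the speed and direction values are illustrative):

```csharp
using UnityEngine;

// Sketch: a kinematic pusher moved via MovePosition so the physics engine
// sees a swept motion each step instead of a teleport.
[RequireComponent(typeof(Rigidbody))]
public class Pusher : MonoBehaviour
{
    public float speed = 5f;                    // illustrative value
    public Vector3 direction = Vector3.forward; // push direction

    Rigidbody rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
        rb.isKinematic = true;
        // Newer Unity versions want speculative CCD on kinematic bodies;
        // older ones use ContinuousDynamic as in the question.
        rb.collisionDetectionMode = CollisionDetectionMode.ContinuousSpeculative;
    }

    void FixedUpdate()
    {
        // MovePosition sweeps the body to the target over the step,
        // unlike writing transform.position, which teleports it.
        rb.MovePosition(rb.position + direction.normalized * speed * Time.fixedDeltaTime);
    }
}
```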

How to always get the AR design in front of the spatial walls Unity HoloLens

I am building an app for the HoloLens gen 1 device using Unity 2018.3.13f and MRTK V2 RC1. I have a simple AR design with 2 text objects and 1 RawImage object. After building the project and deploying it to the HoloLens, the AR objects end up behind the spatial mesh (you know, all those spatial triangles), but I want all the objects to be in front of the wall.
How do I accomplish this?
The canvas is set to be on the main camera
I have the original settings for the DefaultMixedRealityConfigurationProfile, if there is something there that needs to be changed.
This is how it looks through the HoloLens when the app does not show the mesh of the wall (sorry for the bad quality),
and this is how it looks when it falls behind the mesh
Do I need to add some mesh renderer or something on the MainCamera to make this possible?
Any help is appreciated, thanks!
I don't believe that the MRTK v2, as of 2019/5/9, has code that will automatically keep a specific object positioned between the camera and other arbitrary meshes and colliders. (The spatial awareness mesh is one such mesh, but you could also imagine an arbitrary box or plane in the scene occluding the object, in which case you might want your "in between" object to stay in front of both kinds of potentially occluding things.)
There used to be a script in the HoloToolkit called Tagalong.cs that did something like this by raycasting from the camera to collidable objects:
https://github.com/microsoft/MixedRealityToolkit-Unity/blob/htk_release/Assets/HoloToolkit/Utilities/Scripts/Tagalong.cs
I believe this single large script got broken up into smaller scripts (specific behaviors in the solvers here):
https://github.com/microsoft/MixedRealityToolkit-Unity/tree/mrtk_release/Assets/MixedRealityToolkit.SDK/Features/Utilities/Solvers
However, from what I can tell, the specific behavior of "keep things automatically between the camera and whatever collidable object" wasn't preserved. Someone can correct me here if I'm wrong, but it looks like this behavior did not make it into V2.
Going forward, there are a couple of possibilities:
1) File an issue on GitHub (https://github.com/microsoft/MixedRealityToolkit-Unity/issues) to request this feature be ported over.
2) Use the code in Tagalong.cs to write your own solver that accomplishes this (the code looks to be all there; there's just some work needed to reorganize it to handle what you want).
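If you go with option 2), the core of a Tagalong-style solver is just a raycast from the camera each frame. A rough sketch, not the actual MRTK solver API (the distance and offset values are illustrative):

```csharp
using UnityEngine;

// Sketch: keeps this object along the user's gaze, pulled in front of
// whatever collidable surface (e.g. the spatial mesh) the gaze ray hits.
public class KeepInFront : MonoBehaviour
{
    public float maxDistance = 2f;      // default hologram distance (illustrative)
    public float surfaceOffset = 0.05f; // pull-back from the hit surface

    void LateUpdate()
    {
        Transform cam = Camera.main.transform;
        float distance = maxDistance;

        // If the gaze ray hits a collider before maxDistance, place the
        // object slightly in front of the hit point so it isn't occluded.
        if (Physics.Raycast(cam.position, cam.forward, out RaycastHit hit, maxDistance))
        {
            distance = hit.distance - surfaceOffset;
        }

        transform.position = cam.position + cam.forward * distance;
        transform.rotation = Quaternion.LookRotation(transform.position - cam.position);
    }
}
```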
If you use a SpriteRenderer, set its Order in Layer to 0 or -1.
If you use a MeshRenderer, try disabling Dynamic Occlusion.
You can also try changing the order of the sorting layers under Edit -> Project Settings -> Sorting Layers.

AR objects drift issue in Google TANGO

I am trying to create a simple scene where a few objects are placed on a table. Object placement works perfectly, but when I move the device, the objects drift around a bit, which at one point makes the objects placed at the corner feel like they are not on the table but floating in the air.
Even in the Sun, Moon, and Earth example from the Unity examples here: https://github.com/googlesamples/tango-examples-unity
the Earth and Moon drift as you move the device.
Is this a bug or is there any special setting which I'm missing?
The objects drift because as the Tango device moves through space, it is only tracking its own position in 3D space. For objects to remain static in a dynamic environment, the device needs to understand the position of the placed objects in 3D space and their relation to the surroundings in order to anchor the objects and reduce drift.
Luckily, TangoCore has you covered here: the three core technologies of Motion Tracking, Depth Perception, and Area Learning all work together to help out.
If I'm not mistaken, the Sun and Moon example is the scene "SimpleAugmentedReality" under tango-examples-unity / UnityExamples / Assets / TangoSDK / Examples / Scenes /
However, if you would like to anchor the objects in 3D space and reduce drift, you'll need to use Area Learning and Depth Perception as well. Area Learning performs loop closures when the device realises it has "seen" an area before, adjusting the path and markers to provide a more accurate position for the device and the augmented content.
So here is what you can do to learn what you need. Save your current scene, go to Open Scene, follow the path tango-examples-unity / UnityExamples / Assets / TangoSDK / Examples / Scenes /, and load up some of the other scenes to get an understanding of how the technologies intertwine.
For example, you could load the ExperimentalMeshBuilderWithColour scene to learn how depth processing works programmatically, then load the MotionTracking scene to learn how to access and use motion tracking from the TangoManager game object. And finally (also probably the most frustratingly difficult), learn how Area Learning is managed with the AreaDescriptionManagement and AreaLearning scenes.
This will not only solve your drift issues, but also give you a much fuller understanding of the capabilities of the Tango technology, letting you realise your ideas much more easily.

Leap Motion Coordinate System, Translations and Interaction Box with Unity C#

I am working on a Unity Leap Motion desktop project using the latest versions of Orion and the Unity prefabs package. I have created a simple scene pretty much identical to the one in the Desktop Demo scene (a pair of capsule hands and a simple object you can poke and move around).
https://developer.leapmotion.com/documentation/csharp/devguide/Leap_Coordinate_Mapping.html
This article covers all of the issues I am currently facing, but so far I have been unable to implement its solutions in my project.
When moving a hand through the camera's full range, for example left to right or in any other direction, the motion translates to only a portion of the available screen space; in other words, a user can never reach the edges of the screen with their hands. From my understanding, the tracking info provided in millimetres by the camera is somehow translated into units that Unity can understand and process. I want to change that scale.
From the article: "You also have to decide how to scale the Leap Motion coordinates to suit your application (i.e. how many pixels per millimetre in a 2D application). The greater the scale factor, the more effect a small physical movement will have." This is exactly what I want to do in Unity.
Additionally, even after what I think was a successful attempt at normalising coordinates using the InteractionBox, I am unsure what to do with the results. How, or rather where, do I pass these values so that the hands are displayed in the updated position?
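For reference, here is roughly what my normalisation attempt looks like. It's a sketch based on the classic Leap C# API from the linked article (the scale value is arbitrary), and I don't know where to feed the result so the hands render in the updated position:

```csharp
using Leap;
using UnityEngine;

// Sketch: maps a Leap palm position into a larger Unity-space range via
// the InteractionBox, so hands can reach the screen edges.
public class LeapHandMapper : MonoBehaviour
{
    public float scale = 10f; // Unity units spanned by the box (arbitrary)
    Controller controller = new Controller();

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.Count == 0) return;

        InteractionBox iBox = frame.InteractionBox;
        Vector palm = frame.Hands[0].PalmPosition;

        // NormalizePoint maps the box to [0, 1] on each axis.
        Vector n = iBox.NormalizePoint(palm, true);

        // Re-centre to [-0.5, 0.5] and scale to the desired range.
        transform.position = new Vector3(
            (n.x - 0.5f) * scale,
            (n.y - 0.5f) * scale,
            (n.z - 0.5f) * scale);
    }
}
```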

Trying to write generic code for a board game

I'm currently working on a board game in Unity. It's a board game where players have to connect two opposite borders by claiming empty hexagonal cells. Borders are assigned to players at the beginning.
I chose to represent my empty cells with gems: white when unclaimed, red or blue once claimed by a player. So I made a single prefab for all gems.
To check connections between gems themselves and with borders, I gave them slightly larger box colliders so they overlap (as triggers) with neighbouring objects, which lets me detect whether a claimed gem is connected (directly or not) to a border. I wrote this code generically, since all gems share a single prefab, naively thinking that every instance would run its own version of the script. But that's not the case.
So I'm looking for an alternative way to detect connections with borders, still generically, as the size of the board is variable.
Thanks in advance!
Your problem is closely related to how you break your game into game objects and components in Unity3D.
First of all, use scripts to define components. For example, your gems could have a script that holds data about their state (white, red, or blue, meaning empty, player A, or player B).
You will probably need a game object (normally called a GameManager) to handle all behavior related to the game rules, such as which player is playing, whether a player can pick a specific gem, and whether a player has won the game. Depending on the game's complexity, it is also a good idea to have a game object with a component (script) that keeps your board state, instead of deriving it from the gem objects' Transform components.
Once you have structured your game this way, you do not need colliders to detect connections. Every time you have to check for connections, just iterate over your gems or query your board-state object.
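To make that concrete, here is a minimal sketch of a collider-free board state with a flood-fill connection check. It uses axial hex coordinates; all type and member names are illustrative, and it assumes one player connects the r = 0 and r = Size - 1 borders:

```csharp
using System.Collections.Generic;

public enum Owner { Empty, PlayerA, PlayerB }

// Sketch: board state kept as plain data, independent of any GameObject.
public class HexBoard
{
    public int Size; // number of rows between the two borders

    readonly Dictionary<(int q, int r), Owner> cells = new Dictionary<(int q, int r), Owner>();

    // The six neighbour offsets of a hex cell in axial coordinates.
    static readonly (int dq, int dr)[] Neighbours =
        { (1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1) };

    public void Claim(int q, int r, Owner player) => cells[(q, r)] = player;

    Owner Get(int q, int r) => cells.TryGetValue((q, r), out var o) ? o : Owner.Empty;

    // Flood fill from the r = 0 border; the player has won if the fill
    // reaches the opposite border at r = Size - 1.
    public bool HasWon(Owner player)
    {
        var visited = new HashSet<(int, int)>();
        var stack = new Stack<(int, int)>();

        foreach (var kv in cells)
            if (kv.Value == player && kv.Key.r == 0)
                stack.Push(kv.Key);

        while (stack.Count > 0)
        {
            var (q, r) = stack.Pop();
            if (!visited.Add((q, r))) continue;
            if (r == Size - 1) return true; // reached the opposite border

            foreach (var (dq, dr) in Neighbours)
                if (Get(q + dq, r + dr) == player && !visited.Contains((q + dq, r + dr)))
                    stack.Push((q + dq, r + dr));
        }
        return false;
    }
}
```

Call HasWon after each move; because the board is plain data, the check works for any board size without touching the physics system.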
Edit:
References to help you develop your game:
How do I represent a hextile/hex grid in memory?
Creating a holder for game level for simple board tile based game.
creating 2d table\chess board\2d array
