My team currently needs to create a visual simulator for some track hardware that we have.
We tried using X3DOM, but the amount of data we were sending it proved to be more than it could take. (I ended up refreshing every 3 seconds.)
We are currently looking for a replacement that can handle a fairly significant input stream.
Unity 3D came up as a possibility. However, I know that it is usually used for games.
I will not need any kind of Physics Engine or other similar features.
I will feed all the coordinates into it and then want it to show these objects moving along a track to the coordinates I specify.
Is Unity a good fit for that?
Yes, you can input those coordinates in Unity and then track those objects. My team is currently using Unity3D for something similar, and we input complex data structures with pose matrices, etc.
You can import your 3D objects into Unity as well. I import .obj into Unity.
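The basic pattern is just a script that consumes positions from your input stream and moves the object each frame. Here's a minimal sketch (the `SetTarget` method and `speed` value are my own illustrative choices; whatever code parses your coordinate stream would call it):

```csharp
using UnityEngine;

// Attach to each tracked object. Feed it coordinates from your stream
// and it will glide toward the latest one every frame.
public class TrackedObjectMover : MonoBehaviour
{
    public float speed = 5f;      // units per second; tune to your data rate
    private Vector3 target;

    // Called by whatever code receives/parses your coordinate stream.
    public void SetTarget(Vector3 worldPosition)
    {
        target = worldPosition;
    }

    void Update()
    {
        // Move smoothly toward the newest coordinate instead of teleporting,
        // which hides jitter between stream updates.
        transform.position = Vector3.MoveTowards(
            transform.position, target, speed * Time.deltaTime);
    }
}
```

Because Unity only repositions transforms (no physics involved unless you add it), this kind of update scales to a fairly heavy input stream.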
I am trying to create C# software that can control CNC machines. First, I am trying to create a 3D space inside a PictureBox (or Panel). I searched the internet and found one video in which the author builds a 3D area in the form, and it was a good tutorial, but I don't know how to create a coordinate system in that space, as you can see in the image I have given below.
Please help me do this.
If I get a proper solution for this, I can develop this project further in the future.
Thank you, guys...
Update:
Actually, I have designed a C# program that controls CNC machines.
For example, take UGS; there is a lot of software out there, but I need to create this in C#. The only thing I am struggling with here is the 3D coordinate system: how to create the axis diagram in 3D.
Example: Planet CNC's "CNC USB Controller". I want to create software like this in C#.
Thanks...
Building a CNC CAD programmer from scratch is not a trivial activity.
You should be skilled in:
Math and Geometry in particular (vector/matrix calculus, base transformation, ...)
A language to communicate with the CNC (probably G-code)
3D file formats
etc.
This is old, and you have probably fixed the problem by now... but I hope this helps someone in the future.
Have you seen Helix-toolkit? This library provides easy 3D camera and object controls for the .NET Framework. I used it in a WPF C# application that generated G-code for a CNC machine... it saved me in the simulation.
You can import 3D models (.3ds or .obj) from your machine into 3D panels (HelixViewport3D) and use them in the simulation, or create meshes on demand in C#.
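To give a feel for it, here's a rough sketch of loading an .obj into a viewport, assuming the HelixToolkit.Wpf package and a `HelixViewport3D` named `viewport` declared in your XAML (the file name is just illustrative):

```csharp
using System.Windows.Media.Media3D;
using HelixToolkit.Wpf;

// ...inside a Window's code-behind, after InitializeComponent():
var importer = new ModelImporter();
Model3DGroup model = importer.Load("machine.obj");   // path is illustrative

// Wrap the imported geometry in a visual and show it in the viewport.
viewport.Children.Add(new ModelVisual3D { Content = model });
viewport.ZoomExtents();   // frame the camera around the loaded model
```

The viewport gives you orbit/pan/zoom camera controls for free, which is most of what a CNC simulation view needs.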
The documentation is simple, but on GitHub there are lots of examples.
Helix-toolkit on GitHub
I am building an app for the HoloLens (gen 1) device using Unity 2018.3.13f and MRTK v2 RC1. I have a simple AR design with 2 text objects and 1 RawImage object. After building the project and deploying it to the HoloLens, the AR objects end up behind the spatial mesh (you know, all those spatial triangles), but I want all the objects to be in front of the wall.
How do I accomplish this?
The canvas is set to be on the main camera
I have the original settings for the DefaultMixedRealityConfigurationProfile, in case something there needs to be changed.
This is how it looks through the HoloLens when the app does not show the mesh of the wall (sorry for the bad quality),
and this is how it looks when it falls behind the mesh.
Do I need to add some mesh renderer or something on the MainCamera to make this possible?
Any help is appreciated, thanks!
I don't believe that the MRTK v2, as of 2019/5/9, has code that will automatically ensure a specific object is positioned between the camera and other arbitrary meshes and colliders. (The spatial-awareness mesh is one such mesh, though you could imagine an arbitrary box or plane in the scene occluding the object too, in which case you might want your "in between" object to stay in front of both types of potentially occluding things.)
There used to be a script in the HTK called Tagalong.cs that would do something like this by doing raycasts from the camera to collidable objects:
https://github.com/microsoft/MixedRealityToolkit-Unity/blob/htk_release/Assets/HoloToolkit/Utilities/Scripts/Tagalong.cs
I believe this single large script got broken up into smaller scripts (i.e. specific behaviors in the solvers here):
https://github.com/microsoft/MixedRealityToolkit-Unity/tree/mrtk_release/Assets/MixedRealityToolkit.SDK/Features/Utilities/Solvers
However, from what I can tell, the specific interaction of "keep things automatically between the camera and whatever collidable object" wasn't preserved. Someone can correct me here if I'm wrong, but it looks like this behavior wasn't carried over to v2.
Going forward, there are a couple of possibilities:
1) File an issue on GitHub (https://github.com/microsoft/MixedRealityToolkit-Unity/issues) to request this feature be ported over.
2) Use the code in Tagalong.cs to write your own solver that accomplishes this (the code looks to be all there; some work is just needed to reorder it to handle what you want).
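If you go the roll-your-own route, the core of the Tagalong-style idea is small. Here's a hedged sketch (not the original script; the offset value and layer setup are my own assumptions): raycast from the camera toward the object each frame, and if something such as the spatial mesh is closer, pull the object in front of it.

```csharp
using UnityEngine;

// Attach to the hologram you want kept visible. Each frame it checks
// whether any collider sits between the camera and this object, and if
// so, repositions the object just in front of the hit point.
public class KeepInFront : MonoBehaviour
{
    public float surfaceOffset = 0.05f;   // metres to float in front of a hit

    void LateUpdate()
    {
        Transform cam = Camera.main.transform;
        Vector3 toObject = transform.position - cam.position;

        // Raycast only as far as the object; a hit means something occludes it.
        if (Physics.Raycast(cam.position, toObject.normalized,
                            out RaycastHit hit, toObject.magnitude))
        {
            transform.position = hit.point - toObject.normalized * surfaceOffset;
        }
    }
}
```

You'd likely want to add smoothing and a layer mask so the object's own collider (if any) doesn't trigger the raycast, which is the kind of bookkeeping the original Tagalong.cs handled.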
If you use a sprite renderer, set its Order in Layer (to 0 or -1).
If you use a mesh renderer, try disabling Dynamic Occlusion.
Also try changing the hierarchy of the sorting layers under Edit -> Project Settings -> Sorting Layers.
I switched to Unity a few weeks ago. I am developing a 2D platformer. For creating the maps I am using the Tiled map editor from www.mapeditor.org. I have created a basic map and included the tilesheet PNG and the .tmx file (saved as XML) in the project's Assets. I am able to read the XML, that is, all the GIDs. But I don't know how to access a particular portion (tile) of the tilesheet corresponding to a GID.
I think for this I need to load the sprite into memory and select a tile (by specifying height, width, and coords) from texture memory to display it on screen, as shown here: http://gamedevelopment.tutsplus.com/tutorials/parsing-and-rendering-tiled-tmx-format-maps-in-your-own-game-engine--gamedev-3104
But that's for Flash; how can I achieve the same thing in Unity using C#? Notice the copyPixels stuff in the Flash code. I thought I could use ReadPixels, but it is used for reading from the screen only, not from texture memory.
Thanks.
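For what it's worth, the GID-to-tile mapping itself is pure arithmetic: Tiled numbers tiles left to right, top to bottom, starting at the tileset's firstgid. A sketch of that calculation (the type and method names are mine, not from any library):

```csharp
using System;

public struct TileRect
{
    public int X, Y, Width, Height;
}

public static class TilesheetMath
{
    // columns = tilesheet width in tiles; tileW/tileH = tile size in pixels.
    // Returns the pixel rectangle of the tile inside the tilesheet image,
    // with (0,0) at the top-left as Tiled counts it.
    public static TileRect RectForGid(int gid, int firstGid, int columns,
                                      int tileW, int tileH)
    {
        int index = gid - firstGid;      // zero-based tile index
        int col = index % columns;
        int row = index / columns;       // row 0 is the TOP row in Tiled
        return new TileRect
        {
            X = col * tileW,
            Y = row * tileH,
            Width = tileW,
            Height = tileH
        };
    }
}
```

In Unity you wouldn't need to copy pixels at all: you can hand that rectangle to Sprite.Create(texture, rect, pivot) and render the sub-region directly. One caveat: Unity's texture coordinates put Y = 0 at the bottom, so you'd flip the row (y = textureHeight - (row + 1) * tileH) before building the Rect.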
If you're working in Windows then the Tiled2Unity Utility sounds like it will fit your needs. It exports Object Layers and was made with Unity 4.3 features in mind.
(Full disclosure: I'm the author of Tiled2Unity)
EDIT: Tiled2Unity is available for Mac users as well now. There is a command-line version for Linux users. (all free)
If you can describe your problem and what you are trying to do more carefully, maybe I or someone else can help you better. For example, what exactly do you mean by "load a sprite into memory"? Or "select a tile"? Copying pixel data is SLOWWW, and hopefully you don't mean to be doing this in real time.
Here is my real advice though:
Have you checked out UTiled? It does tiled maps in 2D in Unity so I think it already does what you want and it's free.
There is also UniTMX... free.
There is also 'Tiled Tilemaps'... which is like $2.
I also built a system that can do what I think you are trying to do (your link is broken, so I can't be sure).
The system I built is called 'Tiled to Unity' (you can search it in youtube to see if it does what you want). It allows you to attach gameObjects to tiles and have tile variants, and can do 3D tiles.
Anyway, trying to roll your own pipeline from Tiled into Unity is a ton of work, and with these tools available, I think it is almost certainly unnecessary... That's just imo.
Is it possible to use Unity as a render engine? I have a game written in C# and I would very much like to rewrite it to use Unity because of its portability and use of C#. The game itself is a 2D maze game (think Pac-Man).
I read a lot of tutorials on using Unity for 2D games, but almost all of them used only the menus and editors embedded in Unity, with only a small portion in C# or another programming language. If I'm going to go down this road it means I need to "rewrite" my whole game logic with Unity's editors/menus/managers, etc.
I'm looking to use it like the XNA library, for example. Is there any way to achieve that with Unity? If not, is there another game engine/library using C# which is cross-platform and can run on mobile devices?
I made Fangz (https://www.youtube.com/watch?v=x6-5D6IkD5E) using Unity. If that doesn't prove that a complex 2d game can be made in Unity, I don't know what does :).
You can use Unity like XNA, but that would deny you Unity's strengths. Once you get used to visualising your member variables in an editor in a custom way (via inspector editor scripts), it'll be hard to go back :).
Furthermore, Unity now has native 2D support, which is as good as, if not better than, 2D Toolkit, one of the most well-designed 2D packages I've had the pleasure of working with.
Another advantage of Using Unity is that you'll be able to easily add animations to your 2d game. Just add an animation component, press record and move.
All in all I find myself thinking the other way around: how can anyone do a 2D game in anything other than Unity :).
I'm developing an application for the Kinect for my final year university project, and I have a requirement to develop a number of gesture recognition algorithms. I'd appreciate some advice on this.
My initial algorithm detects the user's hand moving closer towards the Kinect within a certain time frame. For now I'll say this is an arbitrary 500 ms.
My idea is as follows:
1) Record the z-axis position every 100 ms and store it in a List.
2) Each time a new position is recorded, check the z-position against each of the previous 4 positions in the List.
3) If the z-position has varied by the required distance from any of those, individually or collectively, fire off a gesture-recognised event.
4) If a gesture is recognised, clear the List and start again.
This is the first time I have tried anything like this, and I would like some advice on my initial naive implementation.
Thanks.
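The windowed check described above can be sketched in plain C# like this (the window size, threshold, and class name are illustrative; a real version would tune them against actual Kinect depth data):

```csharp
using System;
using System.Collections.Generic;

// Keeps the last few z samples (one every ~100 ms) and fires when the
// hand has moved toward the sensor by more than a threshold within the window.
public class PushGestureDetector
{
    private readonly List<float> samples = new List<float>();
    private const int WindowSize = 5;        // 5 samples ~= 500 ms at 100 ms/sample
    private const float Threshold = 0.15f;   // metres moved toward the sensor

    public event Action GestureRecognised;

    // Call every 100 ms with the hand's current z position (metres).
    public void AddSample(float z)
    {
        samples.Add(z);
        if (samples.Count > WindowSize)
            samples.RemoveAt(0);             // slide the window forward

        // Toward the Kinect means z decreases, so compare the newest
        // sample against each older one in the window.
        foreach (float older in samples)
        {
            if (older - z >= Threshold)
            {
                GestureRecognised?.Invoke();
                samples.Clear();             // step 4: restart after a hit
                return;
            }
        }
    }
}
```

A smoothing pass (e.g. averaging a few raw readings per sample) would help with the noisy-data misfires mentioned in the answer below.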
Are you going to use the official Kinect SDK or open-source drivers (libfreenect or OpenNI)?
If you're using the Kinect SDK you can start by having a look at something like:
Kinect SDK Dynamic Time Warping (DTW) Gesture Recognition
Candescent NUI
(Candescent NUI focuses more on finger detection though)
If you're planning to use open-source drivers, try OpenNI and NITE.
NITE comes with hand tracking and gestures (swipe, circle control, 2D sliders, etc.).
The idea is to at least have hand detection and carry on from there. Once you've got that, you could implement something like an adaptation of the Unistroke Gesture Recognizer, or look into other techniques like Motion Templates/MotionHistory, etc., adapting them to the new data you can now play with.
Good luck!
If you're just trying to recognise the user swinging her hand towards you, your approach should work (though it is very susceptible to misfiring on noisy data). What you're trying to do falls very nicely into the field of pattern recognition, where, for this and very similar tasks, people often use hidden Markov models with great success. You might want to check the Wikipedia article. I'm not a C# person, but as far as I know, Microsoft has very nice statistical inference libraries for C#, and they will definitely include HMM implementations.