How to improve performance of rendering 3D scene using HelixViewport - c#

I am working on a 3D project in C# and WPF, using Helix-Toolkit to show different 3D objects. I use spheres a lot, and I also have 3D text in the scene.
The problem is performance: for instance, on each mouse move I recalculate the position of every text item in the scene, but as the number of text items increases the performance decreases.
I also have a Slider control that changes the spheres' radius; the radius is updated for every slider value as the user moves the slider, and this is also a performance problem.
I do not know what to do. Is there any way to render the scene only once all visual objects have changed their values? I think the problem now is that the 3D scene automatically re-renders its content for every changed text position.

First, HelixToolkit.Wpf uses WPF's internal 3D engine, and all billboards/points/lines are drawn on the CPU. If you have many billboards, you will hit a performance wall very quickly. Try to use HelixToolkit.Wpf.SharpDX if possible.
Mouse move events fire at roughly 100 Hz in WPF; you can try to recalculate the positions only on every second or third move event to decrease the update rate.
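A minimal sketch of that kind of throttling in a MouseMove handler (the counter field and the UpdateTextPositions method are assumptions standing in for your existing code):

    private int _moveCounter;

    private void Viewport_MouseMove(object sender, MouseEventArgs e)
    {
        // Only recalculate the text positions on every third move event.
        if (++_moveCounter % 3 != 0)
            return;

        UpdateTextPositions(); // your existing per-move recalculation
    }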
I am not sure how you update your sphere size; usually you only need to update the transform to scale the sphere, instead of creating a new sphere mesh each time.
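For example, a sketch that scales an existing HelixToolkit.Wpf SphereVisual3D from a slider handler by swapping its Transform rather than rebuilding the mesh (the handler and field names are assumptions):

    private void RadiusSlider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
    {
        double s = e.NewValue;

        // Reuse the existing sphere geometry; changing the transform is cheap,
        // regenerating the mesh on every slider tick is not.
        // (Scales about the origin; pass centre coordinates if the sphere is offset.)
        _sphereVisual.Transform = new ScaleTransform3D(s, s, s);
    }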

Related

Restrict the movement of an Object with ManipulationHandler-script attached

I want to move little spheres, each with the ManipulationHandler script attached, across a big sphere in my scene.
The movement of the little spheres needs to be restricted to the big sphere's "shell".
I have accomplished this behaviour (the link provides a gif) without using the ManipulationHandler, by updating the X and Y of the little sphere in the Update function.
Is there a way to achieve the same behaviour with the ManipulationHandler without rewriting it?
According to your description, the Solver system in MRTK should be able to implement this idea without you even writing any code. If you are not limited to using only ManipulationHandler for other reasons, I highly recommend that you use RadialView. You can follow these steps to implement the feature with a Solver:
Add SolverHandler and RadialView components to the small sphere.
In the SolverHandler component, set the Tracked Target Type field to Custom Override.
Set the Transform Override field to the large sphere.
In the RadialView component, set Max View Degrees to 360, and set Min Distance and Max Distance to the radius of the large sphere.
Disable Smoothing.
Now, the small sphere can rotate around the large sphere and keep a fixed distance from it.
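If you prefer to wire the same configuration up from a script instead of the inspector, a minimal sketch might look like the following (MRTK 2.x component and property names assumed; bigSphere and radius are placeholders for your own scene values):

    using Microsoft.MixedReality.Toolkit.Utilities;
    using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
    using UnityEngine;

    public class AttachToBigSphere : MonoBehaviour
    {
        public Transform bigSphere;   // the large sphere to orbit
        public float radius = 0.5f;   // radius of the large sphere

        private void Awake()
        {
            var handler = gameObject.AddComponent<SolverHandler>();
            handler.TrackedTargetType = TrackedObjectType.CustomOverride;
            handler.TransformOverride = bigSphere;

            var radial = gameObject.AddComponent<RadialView>();
            radial.MaxViewDegrees = 360f;
            radial.MinDistance = radius;   // keep the small sphere on the shell
            radial.MaxDistance = radius;
            radial.Smoothing = false;
        }
    }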

Particles not appearing in Unity

I have made a smoke particle effect that I instantiate every time the player shoots. It works, but it cannot be seen in front of the background; if I delete the background it's visible. I've tried changing the sorting layer, creating a new layer and putting it above the default, but no matter what it's still below the background. Any ideas?
Assuming you are developing a 2D game, you can use the z axis to set each element's distance from the camera (which effectively determines the render order).
Usually in a 2D game the main camera sits at z = -10. So moving the background in the inspector to, let's say, z = 5 or z = 10, while keeping your particle system at z = 0, should solve your problem.
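If you prefer to enforce this from code, here is a rough sketch of the same idea (the background and smokeParticles references are assumptions for your own objects):

    // Push the background behind everything else and keep the particles at z = 0.
    Vector3 bgPos = background.transform.position;
    background.transform.position = new Vector3(bgPos.x, bgPos.y, 10f);

    Vector3 fxPos = smokeParticles.transform.position;
    smokeParticles.transform.position = new Vector3(fxPos.x, fxPos.y, 0f);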
Try this easy trick and let me know if you are still facing problems.
You can check the second half of this video for a better understanding.
Also, if you are making a 3D game, you can go to the material of the particles and make sure that Render Face is set to Both. I just had that problem and it was annoying me, but that fixed it.

Canvas scaling for GameObjects in Unity

I have searched online for this problem, and the answers always come down to code. I am trying to scale GameObjects just as I would an Image. Let me explain; here is an image of me manipulating an image with a canvas:
Now here is an image of my paddle in inspector
The paddle has an image, a rigidbody, a box collider, etc. I want the paddle to be scalable in the same way the background image gets scaled. Is there a way I can put the paddle in the canvas so that the box collider, image, etc. scale with it? Any help is extremely appreciated.
No, I don't think there is a simple solution to your problem, because the elements you want don't necessarily have a width and height that mean something in a RectTransform.
For instance, your box collider is a 3D shape (speaking of which, you might want to use a BoxCollider2D).
Anyway, only the Unity UI components react to RectTransforms. Colliders aren't UI elements. All the components you mention behave differently:
Image is a 2D UI component with a width and height. It will scale in your canvas.
BoxCollider isn't a UI element, so it won't scale, but you can do it programmatically by copying the RectTransform's rect into the collider's size (see the sketch after this list). You can find several examples of that online.
Paddle is your own component, so of course it will only do what you programmed it to do. I don't see what you mean by "put the paddle in the canvas": your paddle isn't a graphic element and doesn't have a width and height. The Image component, however, will, since it is a UI component.
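A minimal sketch of that idea, assuming the paddle lives under the canvas with a RectTransform and uses a BoxCollider2D (the component name is my own):

    using UnityEngine;

    [RequireComponent(typeof(RectTransform), typeof(BoxCollider2D))]
    public class MatchColliderToRect : MonoBehaviour
    {
        private void LateUpdate()
        {
            Rect rect = ((RectTransform)transform).rect;
            var box = GetComponent<BoxCollider2D>();

            // Keep the collider the same size and centre as the UI rect.
            box.size = rect.size;
            box.offset = rect.center;
        }
    }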

Getting "giggly" effect when slowly moving a sprite

How do I remove this "giggly" effect when slowly moving a sprite?
I have tried adjusting Antialiasing values in QualitySettings and Filter Mode in ImportSettings in the Unity Editor but that doesn't change anything.
Ideally, I would like to keep the Filter Mode at Point (no filter) and anti-aliasing set to 2x.
The sprite is located inside a Sprite Renderer component of a GameObject.
I have uploaded my Unity Project here: http://www.filedropper.com/sprite
I really don't know how to fix the problem... Can anyone help with my personal project?
I cooked up a quick animation to demonstrate what's happening here:
The grid represents the output pixels of your display. I've overlaid on top of it the sliding sprite we want to sample, if we could render it with unlimited sub-pixel resolution.
The dots in the center of each grid cell represent their sampling point. Because we're using Nearest-Neighbour/Point filtering, that's the only point in the texture they pay attention to. When the edge of a new colour crosses that sampling point, the whole pixel changes colour at once.
The trouble arises when the source texel grid doesn't line up with our output pixels. In the example above, the sprite is 16x16 texels, but I've scaled it to occupy 17x17 pixels on the display. That means, somewhere in every frame, some texels must get repeated. Where this happens changes as we move the sprite around.
Because each texel is rendered slightly larger than a pixel, there's a moment where it completely bridges the sampling points of two adjacent pixels. Both sampling points land within the same enlarged texel, so both pixels see that texel as the nearest one to sample from, and the texel gets output to the screen in two places.
In this case, since there's only a 1/16th scale difference, each texel is only in this weird situation for a frame or two, then it shifts to its neighbour, creating a ripple of doubled pixels that appears to slide across the image.
(One could view this as a type of moiré pattern resulting from the interaction of the texel grid and the sampling grid when they're dissimilar)
The fix is to ensure that you scale your pixel art so each texel is displayed at the size of an integer multiple of pixels.
Either 1:1
Or 2:1, 3:1...
Using a higher multiple lets the sprite move in increments shorter than its own texel size, without localized stretching that impacts the intended appearance of the art.
So: pay close attention to the resolution of your output and the scaling applied to your assets, to ensure you keep an integer multiple relationship between them. The blog post that CAD97 links has practical steps you can take to achieve this.
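One common way to enforce that relationship in Unity is to size the orthographic camera from the screen height and the sprites' pixels-per-unit value. A hedged sketch (the zoom field and component name are my own; the blog post linked above covers this in more detail):

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class PixelPerfectCameraSize : MonoBehaviour
    {
        public float pixelsPerUnit = 16f; // must match the sprites' import setting
        public int zoom = 4;              // integer texel-to-pixel multiple

        private void Update()
        {
            // Orthographic size is half the visible world height.
            // Dividing the screen height by (pixelsPerUnit * zoom) keeps each
            // texel covering exactly `zoom` screen pixels.
            GetComponent<Camera>().orthographicSize =
                Screen.height / (2f * pixelsPerUnit * zoom);
        }
    }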
Edit: To demonstrate this in the Unity project you've uploaded, I modified the camera settings to match your pixels to units setting, and laid out the following test. The Mario at the top has a slightly non-integer texel-to-pixel ratio (1.01:1), while the Mario at the bottom has 1:1. You can see only the top Mario exhibits rippling artifacts:
You might be interested in this blog post about making "pixel-perfect" 2D games in Unity.
Some relevant excerpts:
If you start your pixel game with all the default settings in Unity, it will look terrible!
The secret to making your pixelated game look nice is to ensure that your sprite is rendered on a nice pixel boundary. In other words, ensure that each pixel of your sprite is rendered on one screen pixel.
These other settings are essential to make things as crisp as possible.
On the sprite:
Ensure your sprites are using lossless compression e.g. True Color
Turn off mipmapping
Use Point sampling
In Render Quality Settings:
Turn off anisotropic filtering
Turn off anti aliasing
Turn on pixel snapping in the sprite shader by creating a custom material that uses the Sprites/Default shader and attaching it to the SpriteRenderer.
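If you would rather toggle pixel snapping from code than from the material inspector, a minimal sketch (the PIXELSNAP_ON keyword matches Unity's built-in Sprites/Default shader; treat it as an assumption for any other shader):

    using UnityEngine;

    public class EnablePixelSnap : MonoBehaviour
    {
        private void Start()
        {
            var spriteRenderer = GetComponent<SpriteRenderer>();

            // Give the sprite its own material based on Sprites/Default and
            // enable the "Pixel snap" checkbox via its shader keyword.
            spriteRenderer.material = new Material(Shader.Find("Sprites/Default"));
            spriteRenderer.material.EnableKeyword("PIXELSNAP_ON");
        }
    }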
Also, I'd just like to point out that unless you are applying physics, never use FixedUpdate. And if your sprite has a Collider and is moving, it should have a kinematic Rigidbody attached, even if you're never going to use physics, to tell the engine that the Collider is going to move.
Same problem here. I noticed that the camera settings and scale are also rather important to fix the rippling problem.
Here is what worked for me:
Go to Project Settings > Quality.
Under Quality, set the default quality level to High for all platforms.
Set Anisotropic Textures to "Disabled".
Done, and the issue was resolved for me.

VR score text display

I am trying to make a VR game with Google Cardboard in Unity, but we cannot find a way to display score text right in front of the player. When I add 2D text it is only on one side, and therefore only in front of one eye, and getting the position right for two texts is hard. If I use 3D text and set it in front of the player's position, I think it will go into the wall if the player hits one. Is there any way to display a score on Google Cardboard / Unity VR?
You can either use the native Unity Canvas UI or Google's hack to render OnGUI calls onto a texture.
I would definitely recommend Canvas, as that is the direction Unity is taking their UI features, and it has much better layout capability.
To use a canvas, right-click in the hierarchy and add UI -> Text. You will automatically get a Canvas. The important part is to set the Canvas to World Space (not Screen Space - Overlay). Then drag the Canvas game object so it is a child of the Google Cardboard main head object. Scale it down (e.g. x: 0.001, y: 0.001, z: 0.001) because by default it will be massive. To avoid going through walls, position it about 0.5 m in front of the camera, within any collider you may have.
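The same setup expressed in code, as a rough sketch (the head and scoreCanvas references are assumptions for your own objects):

    using UnityEngine;

    public class AttachScoreCanvas : MonoBehaviour
    {
        public Canvas scoreCanvas;   // the canvas containing your score Text
        public Transform head;       // e.g. the Cardboard/VR camera transform

        private void Start()
        {
            scoreCanvas.renderMode = RenderMode.WorldSpace;

            // Parent to the head so the text follows the player's view.
            Transform canvasTransform = scoreCanvas.transform;
            canvasTransform.SetParent(head, worldPositionStays: false);

            // Shrink it (world-space canvases are huge by default) and place it
            // about half a metre in front of the camera.
            canvasTransform.localScale = Vector3.one * 0.001f;
            canvasTransform.localPosition = new Vector3(0f, 0f, 0.5f);
            canvasTransform.localRotation = Quaternion.identity;
        }
    }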
There is another approach as well: you can place the canvas under the camera, make it World Space, and then adjust it as you want. After that, it will be easily visible wherever you look (as suggested in the earlier answer). As you can see in the picture below, I placed Canvas > Text under the camera I used for the Oculus/Google camera.
