I'm trying to build a scene with a number of prefabs placed onto a tiny planet, think something like this. The problem I'm facing is that while I can place prefabs on a sphere easily using Control+Shift, they are not rotated to match the surface and so end up at awkward angles. I'm aiming for them to be placed perpendicular to the surface.
Currently I am aware of three solutions:
Place the objects in the scene using Control+Shift then manually rotate them into position.
Place the objects in the scene like before, then add a snippet of code to each of their Update methods: transform.rotation = Quaternion.FromToRotation(transform.up, transform.position - origin) * transform.rotation;
Like option 2, run the code and find some way to save the resulting world state back into the scene, but that is easier said than done. It seems like too much effort for something that should be trivial.
The first is tedious and hard to align correctly, the second is easy but leaves your scene in the editor unrepresentative of your final game, and I have no idea where to start on the third. Is there a better way?
In the end I decided to perform all of the orientation manually using Control+Shift. As Johan suggested, you could write your own [ExecuteInEditMode] editor scripts, and if the planet I was foresting had been significantly bigger I would have looked into it.
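For anyone who does go that route, a minimal sketch of such a script might look like this (the planetCenter field is a placeholder for however you reference the planet; it just reuses the FromToRotation idea from option 2, but running in the editor):

    using UnityEngine;

    // Minimal sketch: keeps this object's up axis pointing away from the planet's
    // center while editing. "planetCenter" is assumed to be assigned in the inspector.
    [ExecuteInEditMode]
    public class AlignToPlanet : MonoBehaviour
    {
        public Transform planetCenter;

        void Update()
        {
            if (planetCenter == null) return;

            // The surface normal on a sphere is just the direction from the center outwards.
            Vector3 surfaceNormal = (transform.position - planetCenter.position).normalized;
            transform.rotation = Quaternion.FromToRotation(transform.up, surfaceNormal) * transform.rotation;
        }
    }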
Be aware that if you do venture down the scripting path, you still need to check that the object isn't floating above or clipping through the surface; some manual placement may be necessary after all.
I've created a movement script that moves a character (a ship) in a top-down view.
Gravity is disabled, but I left the rest of the settings mostly alone.
I'm applying a constant force in the forward or backward direction (local, so relative to the character, NOT the world). However, this causes some movement in other directions when the character is rotated, for some reason.
Debug.Log(nameof(force) + force);
rigidBody2D.AddRelativeForce(force);
Let me explain with pictures.
Starting point:
Moving forward somewhat (works okay):
Back to the starting point (stopping the game, starting anew).
And then I manually change the rotation to -90 in the inspector.
I then move forward somewhat.
As we can see, since the rotation is -90, moving forward relative to the character (AddRelativeForce(..)) causes it to move to the right on the screen.
The X position increments accordingly.
However, the Y component also changes; the character gains velocity in that direction even though I apply no force there, which we can see in the output.
How can I move my character with force without this issue?
Edit:
Here are the project files. They run under the latest Unity, 2020.1.14f1. The logic mentioned here is located in MoveExecuteSystem. I use a small ECS framework: https://github.com/Leopotam/ecs
https://wetransfer.com/downloads/b86dfb27b4388b85ac4fef285d87b39520201124193329/00af66
Take into account that AddRelativeForce adds the force in the local coordinate system. If the force vector always has the same value, the outcome is going to be different depending on the rotational position of the entity.
To solve this, you need to apply the force like this: Vector3 force = myShip.forward * forceValue, where forceValue can be an int or a float.
It can be myShip.up or myShip.right depending on what you want. My point is that with .forward, .up or .right you will always apply the force according to the current rotation of the local coordinate system.
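In the asker's 2D setup that would look roughly like the sketch below; the forceValue field and the input handling are placeholders, and I'm assuming the ship's nose points along its local up axis (swap in transform.right if yours points along X):

    using UnityEngine;

    public class ShipMover : MonoBehaviour
    {
        public float forceValue = 10f;          // hypothetical tuning value
        Rigidbody2D rigidBody2D;

        void Awake()
        {
            rigidBody2D = GetComponent<Rigidbody2D>();
        }

        void FixedUpdate()
        {
            float input = Input.GetAxis("Vertical");   // forward/backward input
            // transform.up already reflects the current rotation, so the force
            // is applied along the ship's nose in world space.
            rigidBody2D.AddForce(transform.up * (forceValue * input));
        }
    }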
EDIT:
This question seems deeper than I thought. I tried it in a project myself, and there actually are variations in the position when you apply force, but only when the object is rotated. I had a quick look and don't know why this is. It seems to be a precision thing regarding the rotations the rigidbody's physics uses under the hood.
I tried to remove the friction, which I thought might also be a cause of the problem, but that doesn't seem possible. +1 to this question and looking forward to knowing the answer.
EDIT2:
Bounciness and friction can be set to 0 if a new Physics Material 2D is created, but the issue persists.
You can freeze the position on a given axis as a workaround, but this does not explain the position variation problem.
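For completeness, freezing an axis is a one-liner (assuming the unwanted drift is along Y; this only masks the symptom, it doesn't explain it):

    // Prevents any physics-driven movement along the world Y axis; rotation stays free.
    rigidBody2D.constraints = RigidbodyConstraints2D.FreezePositionY;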
I am currently working on a classic FPS game, and I want to be able to spawn large numbers of "enemies", ideally in the hundreds, though this may not be feasible. I am hoping to use physics and Rigidbodies for these enemy models.
Obviously the nice solution would be to have mesh colliders on all these models, but I'm pretty sure that would hurt performance. So my alternative idea was to set them up initially with box colliders while they are at least X distance from the player, and then, once they enter that radius around the player, switch (via enabling and disabling) from the box colliders to mesh colliders.
Is this possible to do and what are the implications? My initial tests caused the model to lose its collision detection with the floor, and so drop through the environment.
Are there performance implications to switching (enabling/disabling) colliders, and would this even make a difference?
Lastly if none of the above, is there a better solution to this problem that is performant?
The most common practice I know of is using multiple primitive colliders under the same rigidbody. I believe this is much more efficient than using mesh colliders; plus, as an added bonus, you can very easily handle special hits (headshots do 2x damage, leg shots slow enemies down), since you know exactly which collider was hit.
Here is an example of how they did it back in the day in Counter Strike
Note that you will have to place each primitive collider correctly on the bone structure for it to move correctly during animations.
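As a rough sketch of the special-hit handling (the "Head" and "Leg" tags and the damage numbers are made up for the example; tag the child colliders however you like):

    using UnityEngine;

    // Sketch of per-part hit handling with compound primitive colliders.
    public class HitScanWeapon : MonoBehaviour
    {
        public float damage = 10f;

        public void Fire(Ray ray)
        {
            if (Physics.Raycast(ray, out RaycastHit hit, 100f))
            {
                float finalDamage = damage;

                if (hit.collider.CompareTag("Head"))
                    finalDamage *= 2f;        // headshots do double damage
                else if (hit.collider.CompareTag("Leg"))
                    finalDamage *= 0.5f;      // leg shots could also apply a slow

                Debug.Log($"Hit {hit.collider.name} for {finalDamage} damage");
            }
        }
    }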
I think you should have two colliders for your enemies:
1) A persistent box for physical interaction. (This collider would be on a separate layer, colliding only with the physical environment.)
2) Then, you would have your flexible box / mesh collider. Instead of swapping colliders, enable/disable them. If the player is too far away, enable the box; if it is close, enable the mesh collider. I think enabling and disabling is better for performance than removing and adding components at runtime. Plus, since you already have an 'environmental collider', swapping isn't an issue for the movement of your enemies.
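Something along these lines could work as a rough sketch (the field names and the distance threshold are placeholders, and I'm assuming both colliders already live on the enemy):

    using UnityEngine;

    // Rough sketch: keep the cheap box collider at range, enable the detailed
    // mesh collider only when the player is close.
    public class ColliderLOD : MonoBehaviour
    {
        public Transform player;
        public BoxCollider boxCollider;
        public MeshCollider meshCollider;
        public float switchDistance = 20f;

        void Update()
        {
            bool playerIsClose =
                (transform.position - player.position).sqrMagnitude
                < switchDistance * switchDistance;

            boxCollider.enabled = !playerIsClose;
            meshCollider.enabled = playerIsClose;
        }
    }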
Hope this helps.
Hi guys, I was wondering how to create a shape adjustment between two objects, which could best be described as independent cells: one is static, and the second is dynamic and surrounded by "plasma". The movement of the active object must be controllable by the user (WASD). Collision of the active object with the static one causes the static object to be swallowed, though it doesn't change position and stays in place the whole time. As the active object moves on and passes the swallowed object, it throws it out again.
See the image below:
Player character
When it comes close enough to the pink enemy, it starts to swallow it (surrounding it with the yellow thing).
The pink enemy is completely surrounded when the red circle is in the centre of both.
When it leaves the enemy, it takes off the yellow thing.
I was wondering what the simplest way to do this is. I've been thinking about cloth, physics joints, mesh subtraction (is that even possible?), some kind of animation... I don't have much time to do it. Can you show me the simplest way? Which tools and approach should I use? I'm not asking for full code or a full solution, only for some tips.
Tim Hunter mentioned a wonderful approach, best suited to 3D.
You can use another approach in 2D:
Inside OnCollisionEnter2D, find the hit points using Collision2D.contacts. See this reference.
Create some particle effect there.
Disable the enemy.
Now play the player's swallowing animation.
At the end of the animation, enable the enemy again.
The calculation may be a little tricky, but it's still efficient.
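A rough sketch of those steps (the "Enemy" tag, the particle prefab, the "Swallow" animator trigger and the duration are all assumptions, not from the question):

    using UnityEngine;

    // Sketch of the 2D approach: spawn an effect at the contact point,
    // hide the enemy, play the player's swallow animation, then re-enable it.
    public class Swallower : MonoBehaviour
    {
        public ParticleSystem contactEffectPrefab;
        public Animator animator;
        public float swallowDuration = 1.5f;

        void OnCollisionEnter2D(Collision2D collision)
        {
            if (!collision.gameObject.CompareTag("Enemy")) return;

            // Spawn the effect at the first contact point.
            ContactPoint2D contact = collision.contacts[0];
            Instantiate(contactEffectPrefab, contact.point, Quaternion.identity);

            StartCoroutine(SwallowRoutine(collision.gameObject));
        }

        System.Collections.IEnumerator SwallowRoutine(GameObject enemy)
        {
            enemy.SetActive(false);              // disable the enemy
            animator.SetTrigger("Swallow");      // play the swallowing animation
            yield return new WaitForSeconds(swallowDuration);
            enemy.SetActive(true);               // enable the enemy again
        }
    }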
I have a Unity 3D scene with several cameras looking at the same object (a huge brain mesh, ~100k tris), but not necessarily from the same point of view.
In the same 3D scene there is a huge number of spherical plot meshes (from 100 to 30,000).
In every camera I have to display the brain mesh along with a subset of the plot meshes.
Depending on the camera view, each plot can have a different size (mesh filter and sphere collider), a different material (opaque or transparent), and can be visible or not.
The sphere collider must have the same size as the mesh.
I set up a single shared mesh common to all the spheres.
Their material can be one of the several shared materials I have defined.
Before rendering the scene, in the OnPreCull function of each camera I have to define which plots are visible and how they look.
This part can be very costly. I tried several things:
setting the GameObject inactive: too costly
setting the local scale to Vector3(0,0,0): better, but I can see in the profiler that the rendering is still done
setting a fully transparent material: same result, but in the profiler the rendering is now transparent instead of opaque
setting a layer that is not in the cameras' culling masks: huge scripting cost
I don't know if I can make an efficient culling system with all these cameras looking at the same point...
I welcome any new ideas.
First issue:
Regarding your specific question with the four dots.
Simply set renderer.enabled = false; that's all there is to it.
Note however that as I mention in a comment, you would never try to "cull yourself" in Unity (unless I have misunderstood your description).
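If what you actually need is per-camera visibility, the usual pattern is to toggle the renderers around each camera's render, roughly like this sketch (how the hiddenForThisCamera list gets filled is left to your own selection logic):

    using UnityEngine;

    // Sketch: attach to a camera. Hides a chosen set of renderers for this
    // camera only, then restores them so the other cameras still draw them.
    public class PerCameraVisibility : MonoBehaviour
    {
        public Renderer[] hiddenForThisCamera;

        void OnPreCull()
        {
            foreach (Renderer r in hiddenForThisCamera)
                r.enabled = false;
        }

        void OnPostRender()
        {
            foreach (Renderer r in hiddenForThisCamera)
                r.enabled = true;
        }
    }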
Second issue:
Regarding the small spheres: I suspect you have very many in the scene. You simply can't do that. In video games (the most difficult of all 3D engineering), you do this with billboarding. It's how, say, "grass" is done in a scene. You can achieve this nicely with the particle system in Unity, or with other techniques. A full implementation is beyond the scope of this answer, but you will have to investigate billboarding. Simply put, it's a small flat image which always faces the camera in the render pass.
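As a tiny sketch of the billboarding idea itself (one flat quad per plot, rotated to match the camera every frame; using the particle system as suggested above avoids this per-object script cost):

    using UnityEngine;

    // Minimal billboard: keeps this flat quad facing the same way as the main camera.
    public class Billboard : MonoBehaviour
    {
        void LateUpdate()
        {
            Camera cam = Camera.main;
            if (cam == null) return;

            transform.rotation = cam.transform.rotation;
        }
    }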
Issue 2B:
Note, however, that sphere colliders are wonderful, and you can use as many as you want; I'm sure this is obvious for basic mathematical reasons. Side tip: folks often try to "write their own", thinking it will be faster. It's impossible to out-write the hundred-odd person-years of spatial culling research in PhysX, and moreover it runs on the metal, the GPU, so you can't beat it.
Issue three:
Is there a chance you're using a mesh collider somewhere in the project? Never use mesh colliders, at all. (It's extremely confusing they are mentioned or used in Unity; they only have one or two very specific limited uses.)
Issue four:
I'm confused about why you are turning things on and off. I have a guess.
I suspect you are not using more than one "stage"!
There's an amazing trick in video games when you have more than one camera. In fact you have "offscreen" scenes! So you may have players in a dungeon or the like. Off "to the side" you may have an entirely duplicate or triplicate setup of the whole thing running (you could "see" it if the camera turned the wrong way) for the other cameras. (In the example you would have different qualities on the doppelgangers, coloring, map-style or whatever the case is.) Sometimes you make a whole double just to run physics calculations or address other problems.
Fascinating extreme example of that sort of thing.
In short in your situation,
You likely need one whole 'stage' of a camera and brain for each of the camera views!
Again, this is discussed at http://answers.unity3d.com/answers/299823/view.html, but it is indeed the everyday thing. In your overall scene you will see eight happy brains sitting in a row, each with its own camera. In each one you would display whatever items/angle etc. are relevant. (Obviously, if certain items are "identical, other than the viewing angle", you could use the same brain with more than one camera, but I would not do that; it's best to have one brain and one camera for each view.)
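A hedged sketch of wiring that up: put each brain copy (and its plots) on its own layer and restrict each camera to that layer. The layer name here is made up; you'd create it under Tags & Layers:

    using UnityEngine;

    // Sketch: each camera renders only the layer of its own "stage".
    public class StageCameraSetup : MonoBehaviour
    {
        public Camera stageCamera;
        public string stageLayerName = "Stage1";   // hypothetical layer name

        void Start()
        {
            int layer = LayerMask.NameToLayer(stageLayerName);
            if (layer < 0) return;                 // layer not defined yet

            // Render only this stage's layer.
            stageCamera.cullingMask = 1 << layer;
        }
    }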
I believe that could be the fundamental issue you're having!
I have an object which has a diffuse shader, and at runtime I want the shader to switch to Diffuse - Always Visible, but this should trigger only if the unit is behind a specific object within the layer named obstacles.
First I tried to switch the object's shader with the following code; the shader is changed in the inspector but not in the game during play. I tried placing and loading the shader from Resources and also created separate materials, but it's not working.
Here is the code I am using in C#
Unit.renderer.material.shader = Shader.Find("Diffuse - Always visible");
As for the rest, I was thinking of using a raycast, but I'm not sure how to handle this.
Thanks in advance
I only need the other concept now where the shader changes if a unit is behind an object!
For the trigger, how to handle it will depend a lot on how your camera moves.
To check in a 3D game, raycasts should work. Because whether or not something is "behind" something else depends on the camera's perspective, I would personally try something like this to start:
Use Camera.WorldToScreenPoint to figure out where your object is on the screen.
Use Camera.ScreenPointToRay to convert this into a ray you'll send through the physics engine.
Use Physics.Raycast and check what gets hit. If it doesn't hit the object in question, something is in front of it.
Depending on your goals, you might want to check different points. If you're just checking if something is more than halfway covered by an object, doing a single raycast at its center is probably sufficient. If you want to see if it's at all occluded, you could try something like finding four approximate corners of the object and raycasting against them.
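A rough sketch of those three steps (the "Obstacles" layer name is an assumption matching the question's description, and this only checks a single point at the unit's center):

    using UnityEngine;

    // Sketch: returns true when something on the "Obstacles" layer sits between
    // the camera and the unit's center.
    public static class OcclusionCheck
    {
        public static bool IsBehindObstacle(Camera cam, Transform unit)
        {
            // 1) Where is the unit on screen?
            Vector3 screenPoint = cam.WorldToScreenPoint(unit.position);
            if (screenPoint.z < 0f) return false;      // behind the camera entirely

            // 2) Cast a ray from the camera through that screen point.
            Ray ray = cam.ScreenPointToRay(screenPoint);

            // 3) If anything on the obstacles layer is hit before reaching the
            //    unit, the unit is occluded.
            int obstacleMask = LayerMask.GetMask("Obstacles");
            float distanceToUnit = Vector3.Distance(cam.transform.position, unit.position);
            return Physics.Raycast(ray, distanceToUnit, obstacleMask);
        }
    }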
If it's a 2D game or the camera is always at a fixed angle (for example, in a side scroller), the problem is potentially a lot simpler. Add colliders to the objects, and use OnTriggerEnter or OnTriggerEnter2D (along with the corresponding Exit functions). That only works if the tiniest bit of overlap should trigger the effect, and it only works if depth doesn't really matter (e.g. in a 2D game) so your colliders will actually intersect.