OK, so at the moment the camera follows the object consistently in only one axis. Here is the code:
Matrix rotationMatrix = Matrix.CreateRotationY(avatarYaw);
Matrix rotationMatrix2 = Matrix.CreateRotationX(avatarXaw);

// Note: only rotationMatrix (yaw) is ever applied; rotationMatrix2 is unused.
Vector3 transformedheadOffset2 = Vector3.Transform(AvatarHeadOffset, rotationMatrix);
Vector3 transformedReference2 = Vector3.Transform(TargetOffset, rotationMatrix);
How can I make it follow the object in two axes? (Obviously it's something to do with rotationMatrix2.) When I use something like:
transformedheadOffset2 = Vector3.Transform(transformedheadOffset2, rotationMatrix2);
everything goes fuzzy. Any insight will be helpful. Thanks.
It is difficult to know exactly what your camera issue is. Here is a video I made explaining a common camera problem that may (or may not) apply to yours:
http://www.screencast.com/users/sh8zen/folders/Xna/media/929e0a9a-16d1-498a-b777-8b3d85fd8a00
I'm not trying to just push a video I made; it's just that after 3.5 years on the XNA forums, the problem the video addresses has come up countless times from beginners working with cameras. Based on your description it is very difficult to tell what your camera is doing wrong, so it stands a reasonable chance of being this issue.
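For what it's worth, the usual approach when two axes are involved is to combine the two rotations into a single matrix and transform each offset exactly once. A minimal sketch, reusing the variable names from your question (avatarXaw is assumed to hold the pitch angle):

// Combine pitch (X) and yaw (Y) into one rotation; pitch is applied first.
Matrix combinedRotation = Matrix.CreateRotationX(avatarXaw) * Matrix.CreateRotationY(avatarYaw);

// Transform each offset once with the combined matrix. Transforming an
// already-transformed vector a second time is what produces the "fuzzy",
// double-rotated result.
Vector3 transformedHeadOffset = Vector3.Transform(AvatarHeadOffset, combinedRotation);
Vector3 transformedReference = Vector3.Transform(TargetOffset, combinedRotation);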
The problem:
I've been working on creating a 3D position in world space from 2D RGB face detection, similar to this Microsoft example. I am using Unity 2020.3.16f1, MRTK 2.8.2, and C# for the Unity scripts. I have been able to convert the C++ code shown in the link to C# with a lot of success. One final issue is accessing the HoloLens 2 origin SpatialCoordinateSystem, which is used in the transform between the camera's 2D coordinate system and the 3D world-space system.
The SO question at this link asks a very similar question, and I have tried to use SpatialLocator.GetDefault().CreateStationaryFrameOfReferenceAtCurrentLocation().CoordinateSystem as the answers suggest. I call this in Unity's Awake method to ensure it is set as early as possible, as shown below.
private void Awake()
{
    worldSpatialCoordinateSystem = SpatialLocator.GetDefault()
        .CreateStationaryFrameOfReferenceAtCurrentLocation()
        .CoordinateSystem;
}
The problem is that if the user's headset is moving while the application starts, I notice an offset in the 3D locations corresponding to the direction/position of the head as the application was starting. I have narrowed the problem down to the fact that the HL2 and Unity set an origin SpatialCoordinateSystem just before the code in Awake runs, which accounts for the offset between what I expect and what I see.
What I've tried:
I have tried some of the other solutions listed here as well. I cannot use UnityEngine.Windows.WebCam.PhotoCapture because of the way I create still image captures, and (SpatialCoordinateSystem)Marshal.GetObjectForIUnknown(WorldManager.GetNativeISpatialCoordinateSystemPtr()) appears to be deprecated and unusable. Finally, I tried CreateStationaryFrameOfReferenceAtCurrentLocation(Vector3, Quaternion) with the inverse of the current Camera.main position and rotation, hoping to compensate for the offset, but it did not appear to work (NumericsConversionExtensions is the UnityEngine-to-System.Numerics converter found here). That code is below.
worldSpatialCoordinateSystem = SpatialLocator.GetDefault()
    .CreateStationaryFrameOfReferenceAtCurrentLocation(
        NumericsConversionExtensions.ToSystem(Camera.main.transform.position * -1),
        NumericsConversionExtensions.ToSystem(Quaternion.Inverse(Camera.main.transform.rotation)))
    .CoordinateSystem;
My question:
Is there either another way to access the origin spatial coordinates or possibly to compensate for the offset when the user is moving their head before Awake is called?
I spent three days working on this and found a solution one hour after asking on SO. For those who come here, use the code below, originally found here.
using Microsoft.MixedReality.OpenXR;
worldSpatialCoordinateSystem = PerceptionInterop.GetSceneCoordinateSystem(Pose.identity) as SpatialCoordinateSystem;
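In context, it is a drop-in replacement for the stationary-frame call in Awake. A sketch (the class name is made up; the field name is kept from the question):

using Microsoft.MixedReality.OpenXR;
using UnityEngine;
using Windows.Perception.Spatial;

public class CoordinateSystemProvider : MonoBehaviour
{
    private SpatialCoordinateSystem worldSpatialCoordinateSystem;

    private void Awake()
    {
        // Returns the coordinate system of Unity's scene origin, so head
        // movement during startup no longer introduces an offset.
        worldSpatialCoordinateSystem =
            PerceptionInterop.GetSceneCoordinateSystem(Pose.identity) as SpatialCoordinateSystem;
    }
}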
I'm following this tutorial to create Pong in Unity 2D:
http://unity.grogansoft.com/beginners-guide-create-pong-clone-in-unity-part-6/
and understand the code for the most part, but this section confuses me (the confusing part is quoted below). I can't see anywhere in the code examples where the name of the ball is being checked. What am I missing?
Code:
void OnCollisionExit2D(Collision2D other)
{
    // "direction" comes from elsewhere in the tutorial's paddle script.
    float adjust = 5 * direction;
    other.rigidbody.velocity = new Vector2(other.rigidbody.velocity.x,
                                           other.rigidbody.velocity.y + adjust);
}
"We make sure the item hitting the paddle is the ball by checking its name, then we apply a force to its rigidbody in the direction of the paddle's movement. This also has the pleasant side effect of adding a little extra speed to the ball, making it faster and faster as the game goes on."
I think you are correct in your thinking: they don't really "check the name". To clarify, without having gone through the whole tutorial, the code you quote appears to be in the "Paddle" class ("PaddleScript"?).
The input parameter "other" is the ball, the only object that can strike the paddle.
So their text is a bit misleading; perhaps there was supposed to be another object floating around. If they had actually checked the name, it would look something like the sketch below.
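A name check in that handler would look something like this (a sketch; "Ball" is assumed to be the ball GameObject's name in the tutorial's scene, and a tag check via CompareTag would be the more idiomatic Unity alternative):

void OnCollisionExit2D(Collision2D other)
{
    // Only react when the colliding object really is the ball.
    if (other.gameObject.name == "Ball")
    {
        float adjust = 5 * direction;
        other.rigidbody.velocity = new Vector2(other.rigidbody.velocity.x,
                                               other.rigidbody.velocity.y + adjust);
    }
}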
I've been following many of the questions people post and all the answers you give, and I've followed several tutorials. Since all the links in my Google search are marked as "already visited", I've decided to put my pride aside and post a question myself.
This is my first post, so I don't know if I'm doing this right (sorry if not). Anyway, the problem is this:
I'm working on a C# planetary exploration game in Unity 5. I've already built a sphere out of an octahedron following some tutorials mentioned here, and I've also built the Perlin textures and heightmaps. The problem comes in applying them to the sphere to produce the terrain on the mesh. I know I have to map the vertices and UVs of the sphere to do that, but I'm bad at the math side and couldn't find any step-by-step guide to follow. I've heard about tessellation shaders, LOD, Voronoi noise, and Perlin noise, and got lost in the process. To simplify:
What I have:
I have the spherical mesh
I have the heightmaps
I’ve assigned them to a material along with the proper normal maps
What I think I need assistance with (since, honestly, I don't know if this is the correct path anymore):
The code to produce the spherical mesh deformation based on the heightmaps
How to use tessellation/LOD-based shaders and such to make a real-size procedural planet
Thank you very much for your attention, and sorry if I asked for too much; any help you can provide would be tremendous.
I don't really think I have the ability to give you code specific information but here are a few additions/suggestions for your checklist.
If you want to set up an LOD mesh and texture system, the only way I know is the bare-bones approach: physically create lower-poly versions of your mesh and texture, then in Unity write a script with an array of distances, and once the player crosses a distance threshold toward or away from the object, switch to the mesh and texture appropriate for that distance. I presume you could do the same thing inside a shader, but the basic idea remains the same. Here's a sketch as an example (I don't know the Unity library too well, so treat this as a sketch; the player Transform and the mesh/texture arrays are assumed to be assigned elsewhere):
float[] distances = { 10f, 100f, 1000f };
Mesh[] meshes;        // { hi_res_mesh, mid_res_mesh, low_res_mesh }
Texture[] textures;   // { hi_res_texture, mid_res_texture, low_res_texture }
Transform player;

void Update()
{
    float d = Vector3.Distance(player.position, transform.position);

    // Pick the first threshold the player is within, nearest first.
    for (int i = 0; i < distances.Length; i++)
    {
        if (d < distances[i])
        {
            GetComponent<MeshFilter>().mesh = meshes[i];
            GetComponent<Renderer>().material.mainTexture = textures[i];
            break;
        }
    }
}
If you want real tessellation-based LOD, which is more difficult to code, here are some links:
https://developer.nvidia.com/content/opengl-sdk-simple-tessellation-shader
http://docs.unity3d.com/Manual/SL-SurfaceShaderTessellation.html
Essentially the same concept applies, but instead of physically swapping the mesh and texture, you change them procedurally in your shader code, again using set distances at which the resolution changes.
As for your issue with spherical mesh deformation: you can do it in 3D editing software like 3ds Max, Maya, or Blender by importing your mesh, applying a mesh-deform modifier with your texture as the deform texture, and adjusting the level of deformation to your liking. But if you want something more procedural in real time, you will have to physically alter the vertices of your mesh using the vertex arrays and then re-triangulate the mesh or something like that (there's a sketch of the vertex approach after the links). Sorry I'm less helpful on this topic, as I'm less knowledgeable about it. Here are the links I could find related to your problem:
http://answers.unity3d.com/questions/274269/mesh-deformation.html
http://forum.unity3d.com/threads/deform-ground-mesh-terrain-with-dynamically-modified-displacement-map.284612/
http://blog.almostlogical.com/2010/06/10/real-time-terrain-deformation-in-unity3d/
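Since that last part is the crux of your question, here is a minimal sketch of the vertex approach (assumptions: the sphere mesh is centered on the object's origin, the heightmap texture is readable, and the spherical-UV mapping below matches how your heightmap was generated):

using UnityEngine;

public class HeightmapDisplace : MonoBehaviour
{
    public Texture2D heightmap;     // needs Read/Write enabled in import settings
    public float baseRadius = 1f;
    public float heightScale = 0.1f;

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] vertices = mesh.vertices;

        for (int i = 0; i < vertices.Length; i++)
        {
            // Direction from the sphere's center through this vertex.
            Vector3 dir = vertices[i].normalized;

            // Simple spherical UVs derived from that direction.
            float u = 0.5f + Mathf.Atan2(dir.z, dir.x) / (2f * Mathf.PI);
            float v = 0.5f - Mathf.Asin(dir.y) / Mathf.PI;

            // Push the vertex out along its direction by the sampled height.
            float h = heightmap.GetPixelBilinear(u, v).grayscale;
            vertices[i] = dir * (baseRadius + h * heightScale);
        }

        mesh.vertices = vertices;
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}

Note that the triangles don't change, only the vertex positions, so no re-triangulation is needed for a plain displacement.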
Anyways, good luck and please let me know if I have been unclear about something or you need more explanation.
I have been searching the web for quite some time, but I couldn't find anything concrete enough to help me out. I know XNA is going to die, but there is still use for it (in my heart, before I port this to SharpDX later).
I'm making a 3D FPS in XNA 4.0, and I am having serious issues setting up my collision detection.
First of all, I am making models in Blender, and I have a high-polygon and a low-polygon version of each model. I would like to use the low-polygon model for collision detection, but I'm baffled as to how to do it. I want to use JigLibX, but I'm not sure how to set my project up to do so.
In a nutshell: I want to accomplish this one simple goal:
Make a complicated map in Blender, have bounding boxes generated from it, and then use a quadtree to split it up. Then my main character and his gun can run around it shooting stuff!
Any help would be greatly appreciated.
I don't understand exactly what your concrete question is, but I assume you want to know how to implement collision detection efficiently in principle:
For characters: use (several) bounding boxes and bounding spheres, like a sphere for the head and 9 boxes for the torso, legs, and arms.
For terrain: use data from the heightmap for Y (up/down) collision detection, and bounding boxes/spheres for objects on the terrain (trees, walls, bushes, ...).
For particles, like gunfire: use points, small bounding spheres, or, even better because it is frame-rate independent, ray casting. (There's a small sketch of these below.)
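A minimal XNA sketch of those primitives (the scene-specific names like wallBox, enemyBox, muzzlePosition, and aimDirection are placeholders; BoundingSphere, BoundingBox, and Ray are XNA's built-in types):

// Character parts approximated by cheap primitives.
BoundingSphere head = new BoundingSphere(headCenter, 0.25f);
BoundingBox torso = new BoundingBox(torsoMin, torsoMax);

if (head.Intersects(wallBox) || torso.Intersects(wallBox))
{
    // Resolve the hit, e.g. push the character back out of the wall.
}

// Gunfire as a ray: frame-rate independent, so fast bullets cannot
// tunnel through thin geometry between frames.
Ray shot = new Ray(muzzlePosition, aimDirection);  // aimDirection normalized
float? hitDistance = shot.Intersects(enemyBox);    // null means no hit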
In almost no case do you want to do collision detection on a per-polygon basis, as you suggested in your post ("low polygon model with collision detection").
I hope that points you in the right direction.
Cheers
I'm trying to use the Sprite class in Microsoft.DirectX.Direct3D to draw some sprites to my device.
GameObject go = gameDevice.Objects[x];
SpriteDraw.Draw2D(go.ObjectTexture,
go.CenterPoint,
go.DegreeToRadian(go.Rotation),
go.Position,
Color.White);
GameObject is a class I wrote, in which all the basic information required of a game object is stored (graphics, current game position, rotation, etc.).
My issue with Sprite.Draw2D is the position parameter (satisfied here with go.Position).
If I pass go.Position, the sprite draws at 0,0 regardless of the object's Position value.
I tested hard-coding new Point(100, 100), and all objects drew at 100,100.
I can't figure out why the variable doesn't correctly satisfy the parameter.
I've done some googling, and many people have said MDX's Sprite.Draw2D is buggy and unstable, but I didn't find a solution.
Thus I call upon Stack Overflow to hopefully shed some light on this problem!
Fixed
Yes, Sprite.Draw2D sometimes gives problems. Have you tried Sprite.Draw? It's working fine for me.
Here is a sample for Sprite.Draw:
GameObject go = gameDevice.Objects[x];
SpriteDraw.Draw(go.ObjectTexture,
    new Vector3(go.CenterPoint.X, go.CenterPoint.Y, 0),
    new Vector3(go.Position.X, go.Position.Y, 0),
    Color.White);
For rotation you can use the sprite's transform matrix.
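For the rotation part, the sprite's transform matrix might be used something like this (a sketch under the same assumptions as the snippet above; Matrix and Vector3 are the Microsoft.DirectX types):

// Rotate about the sprite's center, then place it at its world position.
SpriteDraw.Transform =
    Matrix.Translation(-go.CenterPoint.X, -go.CenterPoint.Y, 0) *
    Matrix.RotationZ(go.DegreeToRadian(go.Rotation)) *
    Matrix.Translation(go.Position.X, go.Position.Y, 0);
SpriteDraw.Draw(go.ObjectTexture, Vector3.Empty, Vector3.Empty, Color.White);

// Reset the transform so later draws are not affected.
SpriteDraw.Transform = Matrix.Identity;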