I'm trying to use the Sprite class in Microsoft.DirectX.Direct3D to draw some sprites to my device:
GameObject go = gameDevice.Objects[x];
SpriteDraw.Draw2D(go.ObjectTexture,
                  go.CenterPoint,
                  go.DegreeToRadian(go.Rotation),
                  go.Position,
                  Color.White);
GameObject is a class I wrote, in which all the basic information a game object requires is stored (graphics, current game position, rotation, etc.).
My issue with Sprite.Draw2D is the Position parameter (satisfied here with go.Position).
If I pass go.Position, the sprite draws at 0,0 regardless of the object's Position value.
I tested hard-coding new Point(100, 100) and all objects drew at 100,100.
I can't figure out why the variable doesn't correctly satisfy the parameter.
I've done some googling, and many people have said MDX's Sprite.Draw2D is buggy and unstable, but I didn't find a solution.
Thus I call upon Stack Overflow to hopefully shed some light on this problem!
Fixed
Yes, Sprite.Draw2D sometimes gives problems. Have you tried Sprite.Draw? It's working fine for me.
Here is a sample for Sprite.Draw:
GameObject go = gameDevice.Objects[x];
SpriteDraw.Draw(go.ObjectTexture,
                new Vector3(go.CenterPoint.X, go.CenterPoint.Y, 0),
                new Vector3(go.Position.X, go.Position.Y, 0),
                Color.White);
For rotation, you can use the sprite's matrix transform.
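In case it is useful, here is a minimal sketch of that matrix-transform rotation (my assumption of how the pieces fit together, reusing the names from the question; Transform, Matrix.RotationZ, Matrix.Translation, and Matrix.Identity are stock MDX members):
// Rotate about the sprite's on-screen position: translate that point to the
// origin, rotate around Z, then translate back (row-vector order: left first).
float angle = go.DegreeToRadian(go.Rotation);
SpriteDraw.Transform = Matrix.Translation(-go.Position.X, -go.Position.Y, 0) *
                       Matrix.RotationZ(angle) *
                       Matrix.Translation(go.Position.X, go.Position.Y, 0);
SpriteDraw.Draw(go.ObjectTexture,
                new Vector3(go.CenterPoint.X, go.CenterPoint.Y, 0),
                new Vector3(go.Position.X, go.Position.Y, 0),
                Color.White);
// Reset so the transform does not leak into later draws.
SpriteDraw.Transform = Matrix.Identity;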
The problem:
I've been working on creating a 3D position in world space from 2D RGB face detection, similar to this Microsoft example. I am using Unity 2020.3.16f1, MRTK 2.8.2, and C# for the Unity scripts. I have been able to convert the C++ code shown in the link to C# with a lot of success. One final issue is accessing the HoloLens 2 origin SpatialCoordinateSystem to be used in the transform between the camera's 2D coordinate system and the 3D world-space system.
The SO question at this link asks a very similar question, and I have tried to use SpatialLocator.GetDefault().CreateStationaryFrameOfReferenceAtCurrentLocation().CoordinateSystem as the answers suggest. I call this in Unity's Awake method to ensure it is set as early as possible, as shown below.
private void Awake()
{
    worldSpatialCoordinateSystem =
        SpatialLocator.GetDefault().CreateStationaryFrameOfReferenceAtCurrentLocation().CoordinateSystem;
}
The problem is that if the user's headset is moving while the application starts, I notice an offset in the 3D locations commensurate with the direction/position of the head while the application was starting. I have narrowed the problem down to the fact that the HL2 and Unity set an origin SpatialCoordinateSystem just before the call in Awake runs, which accounts for the offset between what I expect and what I see.
What I've tried:
I have tried some of the other solutions listed here as well. I cannot use UnityEngine.Windows.WebCam.PhotoCapture because of the way I create still image captures, and (SpatialCoordinateSystem)Marshal.GetObjectForIUnknown(WorldManager.GetNativeISpatialCoordinateSystemPtr()) appears to be deprecated and unusable. Finally, I tried CreateStationaryFrameOfReferenceAtCurrentLocation(Vector3, Quaternion), using the inverse of the current Camera.main position and rotation in the hope of compensating for the offset, but it did not appear to work (NumericsConversionExtensions is the UnityEngine-to-System.Numerics converter found here). That code is below.
worldSpatialCoordinateSystem = SpatialLocator.GetDefault().CreateStationaryFrameOfReferenceAtCurrentLocation(
    NumericsConversionExtensions.ToSystem(Camera.main.transform.position * -1),
    NumericsConversionExtensions.ToSystem(Quaternion.Inverse(Camera.main.transform.rotation))).CoordinateSystem;
My question:
Is there either another way to access the origin spatial coordinates, or a way to compensate for the offset when the user is moving their head before Awake is called?
I spent three days working on this and found a solution one hour after asking on SO. For those who come here, use the code below, originally found here.
using Microsoft.MixedReality.OpenXR;
worldSpatialCoordinateSystem = PerceptionInterop.GetSceneCoordinateSystem(Pose.identity) as SpatialCoordinateSystem;
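For context, here is a minimal sketch of how this can replace the original Awake code (the class name is hypothetical; PerceptionInterop.GetSceneCoordinateSystem returns the coordinate system anchored at the given pose in Unity scene space, so Pose.identity corresponds to Unity's world origin regardless of how the head was moving at startup):
using Microsoft.MixedReality.OpenXR;
using UnityEngine;
using Windows.Perception.Spatial; // UWP-only namespace

public class OriginProvider : MonoBehaviour // hypothetical class name
{
    private SpatialCoordinateSystem worldSpatialCoordinateSystem;

    private void Awake()
    {
        // Pose.identity is Unity's scene origin, so no head-motion offset.
        worldSpatialCoordinateSystem =
            PerceptionInterop.GetSceneCoordinateSystem(Pose.identity) as SpatialCoordinateSystem;
    }
}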
So I want to instantiate an object and set its position relative to where I am spawning it from. I am spawning this object at the parent's position + 10 on the X axis.
Instantiate(gameObject, transform.position + new Vector3(10, 0, 0), Quaternion.identity);
When I do this it nearly does what I want; however, it spawns the object offset by 10 on X in global position, not local to where it was spawned from. After looking online, I tried:
transform.localposition
However, that gave me the error "CS1061: 'Transform' does not contain a definition for 'localposition'".
Would anyone know what I am missing when it comes to instantiating something at a local Vector3?
Edit: Thinking about the terminology, I guess the result I am expecting is for the object I am creating to be pivoted around a certain point when spawned in, not just rotated and positioned at the Vector3 coordinates.
This point is the spawn location in the world, which is just the parent's world location + the child's local offset.
When a name does not work, try auto-complete (Visual Studio) or verify against the Unity docs that it is spelled correctly and that it exists.
The field you seek is called localPosition (the P is capital).
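For the spawn position itself, transform.TransformPoint is one way to get "parent world location + child local offset" in a single call: it maps a local-space offset into world space using this transform's position and rotation. A minimal sketch based on the question's code:
// Spawn 10 units along this object's local X axis rather than world X,
// so the new object pivots around the spawner, not the world origin.
Instantiate(gameObject,
            transform.TransformPoint(new Vector3(10f, 0f, 0f)),
            transform.rotation);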
I've been following many of the questions people post and all the answers you give. I've followed several tutorials, and since all the links in my Google search are marked as “already visited,” I've decided to put my pride aside and post a question for you.
This is my first post, so I don't know if I'm doing this right; sorry if not. Anyway, the problem is this:
I'm working on a C# planetary exploration game in Unity 5. I've already built a sphere out of an octahedron following some tutorials mentioned here, and I could also build the Perlin textures and heightmaps with them. The problem comes in applying them to the sphere and producing the terrain on the mesh. I know I have to map the vertices and UVs of the sphere to do that, but I really struggle with the math and couldn't find any step-by-step guide to follow. I've heard about tessellation shaders, LOD, Voronoi noise, and Perlin noise, and got lost in the process. To simplify:
What I have:
I have the spherical mesh
I have the heightmaps
I’ve assigned them to a material along with the proper normal maps
What I think I need assistance with (since, honestly, I don't know if this is the correct path anymore):
The code to produce the spherical mesh deformation based on the heightmaps
How to use those tessellation/LOD-based shaders and such to make a real-size procedural planet
Thank you very much for your attention, and sorry if I was rude or asked for too much; any help you can provide would be tremendous.
I don't really think I can give you code-specific information, but here are a few additions/suggestions for your checklist.
If you want to set up an LOD mesh and texture system, the only way I know is the bare-bones approach: physically create lower-poly versions of your mesh and texture, then in Unity write a script with an array of distances; once the player moves within or beyond a given distance of the object, switch to the mesh and texture appropriate for that distance. I presume you could do the same by writing a shader, but the basic idea remains the same. Here's some pseudocode as an example (I don't know the Unity library too well):
// player is assumed to be the player's Transform; the hi/mid/low-res
// assets are placeholders, as in the original pseudocode.
float[] distances = { 10f, 100f, 1000f };
Mesh[] meshes = { hiResMesh, midResMesh, lowResMesh };
Texture[] textures = { hiResTexture, midResTexture, lowResTexture };

void Update()
{
    float playerDistance = Vector3.Distance(player.position, transform.position);

    // Apply the first LOD whose threshold the player falls within
    // (thresholds are sorted nearest to farthest).
    for (int i = 0; i < distances.Length; i++)
    {
        if (playerDistance <= distances[i])
        {
            GetComponent<MeshFilter>().mesh = meshes[i];
            GetComponent<Renderer>().material.mainTexture = textures[i];
            break;
        }
    }
}
If you want real tessellation-based LOD, which is more difficult to code, here are some links:
https://developer.nvidia.com/content/opengl-sdk-simple-tessellation-shader
http://docs.unity3d.com/Manual/SL-SurfaceShaderTessellation.html
Essentially the same concept applies, but instead of physically swapping the mesh and texture, you change them procedurally, using the same idea of set distances at which you change the resolution of the mesh and texture inside your shader code.
As for your issue with spherical mesh deformation: you can do it in a 3D editing package like 3ds Max, Maya, or Blender by importing your mesh, applying a mesh-deform modifier with your texture as the deform texture, and then altering the level of deformation to your liking. But if you want something more procedural in real time, you will have to alter the vertices of your mesh directly through its vertex array and then recalculate the mesh (see the sketch after the links below). Sorry I'm less helpful on this topic, as I am less knowledgeable about it. Here are the links I could find related to your problem:
http://answers.unity3d.com/questions/274269/mesh-deformation.html
http://forum.unity3d.com/threads/deform-ground-mesh-terrain-with-dynamically-modified-displacement-map.284612/
http://blog.almostlogical.com/2010/06/10/real-time-terrain-deformation-in-unity3d/
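As a starting point for the real-time route, here is a minimal, hypothetical sketch of the vertex-displacement idea (the class name, field names, and the spherical UV mapping are my assumptions, not from any of the links): each vertex is pushed outward along its direction from the sphere's center by an amount sampled from the heightmap.
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class HeightmapDeformer : MonoBehaviour
{
    public Texture2D heightmap;   // must have Read/Write enabled in import settings
    public float radius = 1f;     // base sphere radius
    public float heightScale = 0.1f;

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] vertices = mesh.vertices;

        for (int i = 0; i < vertices.Length; i++)
        {
            // Direction from the sphere's center through this vertex.
            Vector3 dir = vertices[i].normalized;

            // Map the direction to spherical UVs (longitude/latitude).
            float u = 0.5f + Mathf.Atan2(dir.z, dir.x) / (2f * Mathf.PI);
            float v = 0.5f - Mathf.Asin(dir.y) / Mathf.PI;

            // Push the vertex outward by the sampled height.
            float height = heightmap.GetPixelBilinear(u, v).grayscale;
            vertices[i] = dir * (radius + height * heightScale);
        }

        mesh.vertices = vertices;
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}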
Anyway, good luck, and please let me know if I have been unclear about something or you need more explanation.
I'm just starting to learn the Unity3D game development framework. I'm trying to make a cylinder "point" at another object when a key is pressed.
public GameObject target;

void Update () {
    if (Input.GetKeyDown(KeyCode.A)) {
        // ???
    }
}
I know that I have to use the target's and the cylinder's positions to alter the cylinder's rotation, but I can't figure out how; I don't think I understand what those quaternions are yet.
I'd really appreciate any help!
Thanks,
Manuel
First, your cylinder needs some notion of 'forward' or its 'pointing direction' (my words) in the cylinder's local space. For this you can assume (or visually see) either +X, +Y, +Z, -X, -Y, or -Z; or you can specify your own arbitrary vector pointing in some other direction.
Second, you need a vector that points from your cylinder's center to the other object's center (you mentioned this already).
Now, you can use Unity's Quaternion.FromToRotation(...) to generate a quaternion that, if applied to your cylinder's world rotation, will rotate your pointing direction to be in the direction of your other object. Done.
Note that if your cylinder is more than a couple transforms deep, then you may need to alter the mechanics of this approach slightly to possibly account for parents' transforms.
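A minimal sketch of that approach, assuming the cylinder's long axis (+Y on Unity's built-in cylinder) is the pointing direction:
public GameObject target;

void Update()
{
    if (Input.GetKeyDown(KeyCode.A))
    {
        // Vector from the cylinder's center to the target's center.
        Vector3 toTarget = target.transform.position - transform.position;

        // Rotate so the local +Y axis points along that vector.
        transform.rotation = Quaternion.FromToRotation(Vector3.up, toTarget);
    }
}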
Is there a reason you are using quaternions directly? I would use Quaternion.Euler / eulerAngles, which represent a rotation as a Vector3, which is how we commonly understand angles.
What Ducky said is correct; however, if you are having trouble with quaternions, I would recommend not using them directly until you have a better understanding of these angle sets.
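For example, a yaw-only aim built from Euler angles (my assumption: the cylinder and target sit at roughly the same height, so only the rotation of the object's forward (+Z) around the Y axis matters):
Vector3 toTarget = target.transform.position - transform.position;

// Yaw in degrees around Y; 0 faces +Z in Unity's convention.
float yaw = Mathf.Atan2(toTarget.x, toTarget.z) * Mathf.Rad2Deg;
transform.rotation = Quaternion.Euler(0f, yaw, 0f);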
Hope it helps!
OK, so at the moment a camera is following the object consistently in only one axis. Here is the code:
Matrix rotationMatrix = Matrix.CreateRotationY(avatarYaw);
Matrix rotationMatrix2 = Matrix.CreateRotationX(avatarXaw);

Vector3 transformedheadOffset2 = Vector3.Transform(AvatarHeadOffset, rotationMatrix);
Vector3 transformedReference2 = Vector3.Transform(TargetOffset, rotationMatrix);
How can I make it follow the object in two axes (obviously something to do with rotationMatrix2)? When I use something like:
transformedheadOffset2 = Vector3.Transform(transformedheadOffset2, rotationMatrix);
everything goes fuzzy. Any insight will be helpful. Thanks.
It is difficult to know exactly what your camera issue is. Here is a video I made to explain a common camera problem that may (or may not) be applicable to your issue.
http://www.screencast.com/users/sh8zen/folders/Xna/media/929e0a9a-16d1-498a-b777-8b3d85fd8a00
I'm not trying to just push a video I made; it's just that after 3.5 years on the XNA forums, the problem the video addresses has come up countless times from beginners working with cameras. Also, based on your description of the problem, it is very difficult to know what your camera is doing wrong, so it stands a reasonable chance of being this issue.
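That said, one common fix for the exact code in the question is to combine the pitch and yaw into a single rotation and transform each offset once, rather than transforming an already-transformed vector again. A sketch using the question's variable names:
// Pitch first, then yaw (XNA row-vector convention: the leftmost matrix applies first).
Matrix rotation = Matrix.CreateRotationX(avatarXaw) * Matrix.CreateRotationY(avatarYaw);

Vector3 transformedHeadOffset = Vector3.Transform(AvatarHeadOffset, rotation);
Vector3 transformedReference = Vector3.Transform(TargetOffset, rotation);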