I have created a scene in 3ds Max with a stock engine model and some objects I added myself: the plane and the button.
http://i.stack.imgur.com/R2iva.png
Regardless of how I export the scene, whether as a .X using the Panda exporter or as an .fbx using the 2012.2 FBX exporter, when it is loaded into XNA and rendered, all the objects appear on top of each other.
http://i.stack.imgur.com/6gdMb.png
Since the individual parts of the engine all remain where they should be (and are separate objects in 3ds Max), I'm fairly sure there is something I'm not setting correctly in 3ds Max for the layout of the rest of my objects.
Update 1: The code I use to load the models in XNA is as follows:
Matrix[] transforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(transforms);
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect be in mesh.Effects)
    {
        be.EnableDefaultLighting();
        be.Projection = camera.projection;
        be.View = camera.view;
        be.World = GetWorld() * mesh.ParentBone.Transform;
        // adding an additional * transforms[i] didn't do anything
    }
    mesh.Draw();
}
This code works great for other people's models, but not for any that I make. It's as if 3ds Max isn't exporting the positions of the objects I create relative to the scene's origin.
You need to combine all the transform matrices from parent bones and child bones like this:
Matrix[] transforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(transforms);
foreach (ModelMesh mesh in model.Meshes) {
    foreach (BasicEffect ef in mesh.Effects) {
        ef.World = transforms[mesh.ParentBone.Index];
        // Also do other stuff here: set the projection and view matrices
    }
    mesh.Draw(); // don't forget to draw the mesh after setting its effects
}
There is probably a better way, but this code should work.
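If the model also needs its own placement in the scene, the absolute bone transform is usually combined with a per-object world matrix. A sketch using the question's `GetWorld()` helper (this combination is an assumption, not part of the answer above):

```csharp
// Sketch: combine the baked-in bone transform with the object's own placement.
// In XNA's row-vector convention the bone transform goes on the left.
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect be in mesh.Effects)
    {
        be.World = transforms[mesh.ParentBone.Index] * GetWorld();
    }
    mesh.Draw();
}
```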
Related
I'm attempting to dynamically add an FBX model into a scene and attach the following components to the child objects of the main GameObject created from the FBX model:
MeshCollider (I add a mesh to it using the sharedMesh variable)
ManipulationHandler
NearInteractionGrabbable
When I test the functionality in Unity it works flawlessly, but when I deploy to the HoloLens 2, only the objects that wouldn't require convex = true work. It makes me think that setting convex = true isn't working when deployed on the HoloLens 2.
Here's my code for your reference on how I'm getting things to work:
void Start()
{
    // Grab object from Resources/Objects using provided unique identifier
    // I'm instantiating like this because I'm setting the parent to another
    // GameObject in the original code, and I got errors if I didn't do it this way
    GameObject go = Instantiate(Resources.Load<GameObject>("Objects/" + objectId));

    // Loop through child objects of model
    for (int i = 0; i < go.transform.childCount; i++)
    {
        AddManipulationComponents(go.transform.GetChild(i).gameObject);
    }
}
void AddManipulationComponents(GameObject item)
{
    // Get the MeshFilter so I can set the MeshCollider's sharedMesh from its mesh
    MeshFilter meshFilter = item.GetComponent<MeshFilter>();
    // Only add MeshCollider components if there's a mesh on the object
    if (meshFilter != null)
    {
        Mesh mesh = meshFilter.mesh;
        if (mesh != null)
        {
            // Add MeshCollider
            MeshCollider collider = item.EnsureComponent<MeshCollider>();
            // A lot of components are curved and need convex set to false
            collider.convex = true;
            // Add NearInteractionGrabbable
            item.EnsureComponent<NearInteractionGrabbable>();
            // Add ManipulationHandler
            item.EnsureComponent<ManipulationHandler>();
            // Set mesh to MeshCollider
            collider.sharedMesh = mesh;
        }
    }
    // If the current object has children, loop through those too
    if (item.transform.childCount > 0)
    {
        for (int i = 0; i < item.transform.childCount; i++)
        {
            AddManipulationComponents(item.transform.GetChild(i).gameObject);
        }
    }
}
This works when I run it through Unity, but when I build it and deploy it to the HoloLens 2 it doesn't completely work: only 3 or 4 child objects actually work, and they happen to be flat objects. It makes me think the convex property of MeshCollider is not behaving as I'd hoped.
Does anyone have a fix for this?
Here's a photo of it working through Unity (I clipped out proprietary information):
Convex mesh colliders are limited to 255 triangles; perhaps this is why it's not working?
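One way to work around that limit is to only request a convex collider when the mesh is small enough. This is an untested sketch built on the question's code; the 255-triangle guard (and reading `mesh.triangles`, which needs a readable mesh) are assumptions:

```csharp
// Sketch: avoid asking the physics engine for a convex hull on meshes
// over the 255-triangle limit; large meshes keep a non-convex collider.
MeshCollider collider = item.EnsureComponent<MeshCollider>();
collider.sharedMesh = mesh;
// mesh.triangles holds three indices per triangle
collider.convex = (mesh.triangles.Length / 3) <= 255;
```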
When importing the FBX you need to tick the "Read/Write Enabled" checkbox.
I enabled that and made sure to export all materials as well.
I'm trying to make a simple game where the player runs around a sphere and has to evade collisions with objects on its surface. To complicate the game a little, I'm shrinking the sphere over time.
The main problem is that I can't keep the obstacles attached to the sphere's surface while it shrinks (the obstacles are spawned on the surface and rotated so that each obstacle's top points away from the sphere's center).
Here's what my planet-shrinking code looks like:
void Update()
{
    transform.localScale *= 1f - shrinkSpeed * Time.deltaTime;
}
Here's the code I tried for moving the obstacles inwards at the same pace as the sphere shrinks (this code is also inside the Update function):
foreach (GameObject obstacle in GameObject.FindGameObjectsWithTag("Obstacle"))
{
    Vector3 shrinkDirection = obstacle.transform.position - myTransform.position;
    obstacle.transform.Translate(-shrinkDirection * (1f - shrinkSpeed * Time.deltaTime));
}
The obstacles just fly away. I've also tried several things, like using the obstacle's localPosition, but nothing works. How can I keep the obstacles attached to the sphere's surface while it shrinks? Note: the obstacles are not children of the sphere.
You should store references to all the obstacles when you spawn them, something like:
private List<GameObject> obstacles = new List<GameObject>();
// ...
var obst = Instantiate(obstaclePrefab);
// do your translations and rotation etc
obstacles.Add(obst);
Then you could additionally have a Dictionary storing the initial relative positions, like:
private Dictionary<GameObject, Vector3> obstacleOffsets = new Dictionary<GameObject, Vector3>();
// ...
obstacleOffsets.Add(obst, obst.transform.position - sphereTranform.position);
Then you can later use that offset to position the obstacles:
foreach (GameObject obstacle in obstacles)
{
    var originalOffset = obstacleOffsets[obstacle];
    // MergeVectors multiplies the vectors component-wise (e.g. Unity's Vector3.Scale)
    obstacle.transform.position = transform.position + MergeVectors(originalOffset, transform.localScale);
}
Of course, you could also use only the Dictionary, like:
foreach (var kvp in obstacleOffsets)
{
    var obstacle = kvp.Key;
    var originalOffset = kvp.Value;
    obstacle.transform.position = transform.position + MergeVectors(originalOffset, transform.localScale);
}
Result (interesting GIF compression ^^)
Note
Don't forget that whenever you destroy an obstacle, you have to remove it from the List/Dictionary first!
obstacles.Remove(obstacle);
obstacleOffsets.Remove(obstacle);
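Putting the answer's pieces together, a minimal sketch of the shrinking sphere's script might look like this (`obstaclePrefab`, `shrinkSpeed`, spawning at scale (1,1,1), and using `Vector3.Scale` in place of `MergeVectors` are all assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ShrinkingSphere : MonoBehaviour
{
    public GameObject obstaclePrefab; // assumed prefab reference
    public float shrinkSpeed = 0.1f;

    private Dictionary<GameObject, Vector3> obstacleOffsets = new Dictionary<GameObject, Vector3>();

    public void SpawnObstacle(Vector3 surfacePosition)
    {
        var obst = Instantiate(obstaclePrefab, surfacePosition, Quaternion.identity);
        // Remember where the obstacle sat relative to the sphere, assuming scale (1,1,1) at spawn
        obstacleOffsets.Add(obst, obst.transform.position - transform.position);
    }

    void Update()
    {
        transform.localScale *= 1f - shrinkSpeed * Time.deltaTime;

        foreach (var kvp in obstacleOffsets)
        {
            // Scale the stored offset component-wise by the sphere's current scale
            kvp.Key.transform.position = transform.position + Vector3.Scale(kvp.Value, transform.localScale);
        }
    }
}
```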
I'm working on a simple 3D model viewer. I need to support very large models (100,000+ triangles) and have smooth movement while rotating the camera.
To optimize drawing, instead of creating a GeometryModel3D for each segment in a polymesh I want to use the full list of vertices and triangle indices. The speed-up was amazing, but now the lighting is messed up: each triangle has its own shade.
I think the issue is related to normals: if I manually set all normals to Vector3D(0,0,1) I get even lighting, but the side and reverse of the model are dark. I also attempted to use a formula to calculate the normals of each triangle, but the result was the same: one large model would be messed up, while separate models looked good. So maybe it isn't a normals issue?
I'm curious why everything works correctly when the models are separate, but combining them causes issues.
Image Showing the Issue
The polyMesh object contains a list of faces, each recording the vertex indices it uses (either a triangle or a quad). The polyMesh holds all the vertex information.
var models = new Model3DCollection();
var brush = new SolidColorBrush(GetColorFromEntity(polyMesh));
var material = new DiffuseMaterial(brush);
foreach (var face in polyMesh.FaceRecord)
{
    var indexes = new Int32Collection();
    if (face.VertexIndexes.Count == 4)
    {
        indexes.Add(face.VertexIndexes[0]);
        indexes.Add(face.VertexIndexes[1]);
        indexes.Add(face.VertexIndexes[2]);
        indexes.Add(face.VertexIndexes[2]);
        indexes.Add(face.VertexIndexes[3]);
        indexes.Add(face.VertexIndexes[0]);
    }
    else
    {
        indexes.Add(face.VertexIndexes[0]);
        indexes.Add(face.VertexIndexes[1]);
        indexes.Add(face.VertexIndexes[2]);
    }
    MeshGeometry3D mesh = new MeshGeometry3D()
    {
        Positions = GetPoints(polyMesh.Vertices),
        TriangleIndices = indexes,
    };
    GeometryModel3D model = new GeometryModel3D()
    {
        Geometry = mesh,
        Material = material,
    };
    models.Add(model);
}
return models;
If I take the mesh and the geometry out of the loop and create one large mesh, things go wrong.
var models = new Model3DCollection();
var brush = new SolidColorBrush(GetColorFromEntity(polyMesh));
var material = new DiffuseMaterial(brush);
var indexes = new Int32Collection();
foreach (var face in polyMesh.FaceRecord)
{
    // Add indices as above (code trimmed to save space)
    indexes.Add(face.VertexIndexes[0]);
}
MeshGeometry3D mesh = new MeshGeometry3D()
{
    Positions = GetPoints(polyMesh.Vertices),
    TriangleIndices = indexes,
};
GeometryModel3D model = new GeometryModel3D()
{
    Geometry = mesh,
    Material = material,
};
models.Add(model);
return models;
Other important details: the model isn't just a flat surface; it is a complex model of a roadway. I can't show the whole model nor provide code for how it was imported. I am using HelixToolkit for camera controls and diagnostics.
Viewport code:
<h:HelixViewport3D ZoomExtentsWhenLoaded="True" IsPanEnabled="True" x:Name="ViewPort" IsHeadLightEnabled="True">
    <h:HelixViewport3D.DefaultCamera>
        <!-- Flicker workaround: this code fixes a flicker bug! Found at http://stackoverflow.com/a/38243386 -->
        <PerspectiveCamera NearPlaneDistance="25"/>
    </h:HelixViewport3D.DefaultCamera>
    <h:DefaultLights/>
</h:HelixViewport3D>
Setting a BackMaterial doesn't change anything. I'm hoping I'm just missing something obvious, as I am new to 3D.
Finally fixed this myself.
It came down to not enough knowledge about WPF 3D (or 3D in general?).
If a vertex is reused, as in the combined model above, then smooth shading is applied. The reason smooth shading looks so terrible on the flat sides is that the shading is applied across the full 3D model. Each polyface I had was a 3D shape; each segment of the wall was a different polymesh with ~6 faces (front, up, down, left, right, back).
When every vertex position was unique, the problem went away even when combining the models.
When the models were separate, each had its own copy of the positions, so they were unique.
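In code terms, the fix amounts to giving every triangle its own copies of its vertices instead of sharing indexed positions. A sketch based on the question's loop (`Triangulate` stands in for the quad/triangle if/else above, and `GetPoint` for a single-vertex version of `GetPoints`; both are hypothetical helpers):

```csharp
// Sketch: one big mesh, but with duplicated per-face vertices so WPF
// cannot average normals across neighbouring faces (flat shading).
var mesh = new MeshGeometry3D();
foreach (var face in polyMesh.FaceRecord)
{
    // Triangulate yields 3 or 6 vertex indices, as in the quad/triangle branch above
    foreach (int vertexIndex in Triangulate(face.VertexIndexes))
    {
        // Append a fresh copy of the position instead of reusing a shared index
        mesh.Positions.Add(GetPoint(polyMesh.Vertices[vertexIndex]));
        mesh.TriangleIndices.Add(mesh.Positions.Count - 1);
    }
}
```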
Found the answer from:
http://xoax.net/blog/automatic-3d-normal-vector-calculation-in-c-wpf-applications/
I am trying to import into XNA an .fbx model exported from Blender.
Here is my drawing code:
public void Draw()
{
    Matrix[] modelTransforms = new Matrix[Model.Bones.Count];
    Model.CopyAbsoluteBoneTransformsTo(modelTransforms);
    foreach (ModelMesh mesh in Model.Meshes)
    {
        foreach (BasicEffect be in mesh.Effects)
        {
            be.EnableDefaultLighting();
            be.World = GameCamera.World * Translation * modelTransforms[mesh.ParentBone.Index];
            be.View = GameCamera.View;
            be.Projection = GameCamera.Projection;
        }
        mesh.Draw();
    }
}
The problem is that when I start the game, some model parts appear on top of others instead of behind them. I've tried downloading other models from the internet, but they have the same problem.
This line:
be.World = GameCamera.World * Translation * modelTransforms[mesh.ParentBone.Index];
is usually arranged the other way around, and the order in which you multiply matrices changes the result. Try this:
be.World = modelTransforms[mesh.ParentBone.Index] * GameCamera.World * Translation;
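To see why the order matters, here is a small self-contained illustration using System.Numerics, which follows the same row-vector convention as XNA (matrices on the left apply first):

```csharp
using System.Numerics;

var scale = Matrix4x4.CreateScale(2f);
var translate = Matrix4x4.CreateTranslation(10f, 0f, 0f);
var p = Vector3.Zero;

// Scale first, then translate: the origin lands at (10, 0, 0)
var a = Vector3.Transform(p, scale * translate);
// Translate first, then scale: the translation is doubled too, giving (20, 0, 0)
var b = Vector3.Transform(p, translate * scale);
```

The same reasoning applies to the bone transform: it should be applied before the camera/translation matrices, which is why it belongs on the left.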
I am creating a Minecraft clone, and whenever I move the camera even a little bit fast there is a big tear between the chunks, as shown here:
Each chunk is 32x32x32 cubes and has a single vertex buffer for each kind of cube, in case it matters. I am drawing 2D text on the screen as well, and I learned that I have to set the graphics device state for each kind of drawing. Here is how I'm drawing the cubes:
GraphicsDevice.Clear(Color.LightSkyBlue);

#region 3D
// Set the device
device.BlendState = BlendState.Opaque;
device.DepthStencilState = DepthStencilState.Default;
device.RasterizerState = RasterizerState.CullCounterClockwise;

// Go through each shader and draw the cubes of that style
lock (GeneratedChunks)
{
    foreach (KeyValuePair<CubeType, BasicEffect> KVP in CubeType_Effect)
    {
        // Iterate through each technique in this effect
        foreach (EffectPass pass in KVP.Value.CurrentTechnique.Passes)
        {
            // Go through each chunk in our chunk map, and pluck out the cube type we care about
            foreach (Vector3 ChunkKey in GeneratedChunks)
            {
                if (ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key] > 0)
                {
                    pass.Apply(); // assign it to the video card
                    KVP.Value.View = camera.ViewMatrix;
                    KVP.Value.Projection = camera.ProjectionMatrix;
                    KVP.Value.World = worldMatrix;
                    device.SetVertexBuffer(ChunkMap[ChunkKey].CubeType_VertexBuffers[KVP.Key]);
                    device.DrawPrimitives(PrimitiveType.TriangleList, 0, ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key]);
                }
            }
        }
    }
}
#endregion
The world looks fine if I'm standing still. I thought this might be because I'm in windowed mode, but the problem persisted when I toggled full screen. I also assume that XNA is double-buffered by itself? Or so Google has told me.
I had a similar issue: I found that I had to call pass.Apply() after setting all of the effect's parameters.
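Applied to the question's inner loop, the reordering would look roughly like this (a sketch using the same identifiers as above):

```csharp
// Set the effect's parameters first...
KVP.Value.View = camera.ViewMatrix;
KVP.Value.Projection = camera.ProjectionMatrix;
KVP.Value.World = worldMatrix;
// ...then Apply, so the updated values are actually uploaded before drawing
pass.Apply();
device.SetVertexBuffer(ChunkMap[ChunkKey].CubeType_VertexBuffers[KVP.Key]);
device.DrawPrimitives(PrimitiveType.TriangleList, 0, ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key]);
```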
The fix so far has been to use one giant vertex buffer. I don't like it, but that's all that seems to work.