I'm working on a game in XNA, and I'm loading a model from Blender. The model didn't have a texture until now, and when I try to compile I get this error:
The mesh "", using BasicEffect, contains geometry that is missing texture coordinates for channel 0.
The model loaded fine before this point. I know I have to put the texture file in the same location as the .x file in my content, and I did that. The .x file contains the segment that references the texture:
Material ShipMat {
    0.640000; 0.552144; 0.594688; 1.000000;;
    96.078431;
    0.500000; 0.500000; 0.500000;;
    0.000000; 0.000000; 0.000000;;
    TextureFilename { "shipTexture.jpg"; }
}
I'm using the DirectX exporter add-on for Blender, because when I tried exporting my model as a .fbx it didn't load the texture and the model was rotated in an odd direction. Any ideas? Thanks in advance.
For a texture to work, each model vertex needs texture coordinates.
It sounds like the model was exported from Blender without a texture coordinate element for each vertex. Most likely, your vertices carry only position, color, and perhaps normal elements.
Go back to Blender, apply any texture you want so the exporter writes UVs, re-export, then swap out the texture in XNA and you will get what you're expecting.
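If you want to confirm the UVs actually made it through the pipeline, here is a quick sanity check you can run after loading. This is a sketch against the XNA 4.0 API; `model` is assumed to be your loaded Model, and it uses LINQ's Any (add `using System.Linq;`).

// Inspect each mesh part's vertex layout for a TextureCoordinate element.
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (ModelMeshPart part in mesh.MeshParts)
    {
        bool hasUVs = part.VertexBuffer.VertexDeclaration
            .GetVertexElements()
            .Any(e => e.VertexElementUsage == VertexElementUsage.TextureCoordinate);

        if (!hasUVs)
            System.Diagnostics.Debug.WriteLine(
                "Mesh '" + mesh.Name + "' has no texture coordinates.");
    }
}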
I'm new to image processing and 3D applications.
I have a PLY mesh from a Kinect. I want to map the whole model onto a planar image.
I imagine the result like a tissue: a topographic surface flattened onto a plane.
My worry is losing the color data, depth data, or measurements from the original mesh.
Solutions in C# for mapping the mesh are welcome too.
Thanks!
(Sorry for my English.)
I am writing a virtual globe using DirectX, similar to Google Earth. So far, I have completed tessellation and have tested with a single texture wrapped over the entire sphere, which was successful. I have written the texture coordinates to correspond with latitude and longitude (lat 90, lon -180 maps to (0, 0); lat -90, lon 180 maps to (1, 1)).
For this project, I need to layer several image tiles over the sphere. For example, 8 images spanning 90 degrees by 90 degrees. These tiles may dynamically update (i.e. tiles may be added or removed as you pan around). I have thought about using a render target view and drawing the tiles directly to that, but I'm sure there is a better way.
How would I go about doing this? Is there a way to set the texture to only span a specific texture coordinate space? I.e. from (0, 0) to (0.25, 0.5)?
There are three straightforward solutions (and possibly many more sophisticated ones):
1. Create geometry that matches the part of the sphere covered by each tile and draw those pieces one after another, setting the correct texture before each draw call. If the tiles are laid out in a simple way, you can also generate this geometry using instancing and a single draw call.
2. Write a pixel shader that evaluates the texture coordinates and chooses the appropriate texture using transformed texture coordinates.
3. Render all tiles into one big texture and use that to render the sphere. Whenever a tile changes, bind the big texture as a render target and draw the new tile on top of it (a sketch follows below).
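For the third option, here is a minimal sketch in XNA-style C# (the same idea maps directly to D3D render targets). The atlas size, the 4x2 tile layout, and `spriteBatch` are assumptions for illustration.

// Create the composite texture once. PreserveContents keeps previously
// drawn tiles intact when the target is bound again.
RenderTarget2D atlas = new RenderTarget2D(
    GraphicsDevice, 2048, 1024, false, SurfaceFormat.Color,
    DepthFormat.None, 0, RenderTargetUsage.PreserveContents);

// Called whenever a 90x90-degree tile arrives or changes.
void UpdateTile(Texture2D tile, int column, int row) // 4 columns x 2 rows
{
    int tileW = atlas.Width / 4;
    int tileH = atlas.Height / 2;

    GraphicsDevice.SetRenderTarget(atlas);
    spriteBatch.Begin();
    spriteBatch.Draw(tile,
        new Rectangle(column * tileW, row * tileH, tileW, tileH),
        Color.White);
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null); // back to the back buffer
}

The sphere is then drawn with `atlas` as its single texture, using the lat/lon texture coordinates you already have.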
I am working on a 3D game with lots of game objects in the scene, so I'm trying to reduce draw calls. I've used mesh combining on my static game objects, but my player isn't static, so I can't combine its meshes. My player is just a combination of cubes that use the Standard shader with different-colored materials on different parts. So I'm guessing I can use texture atlasing on my player to reduce draw calls, but I don't know how to do it.
Is my reasoning right? If so, please help me with atlasing; if not, please point out my mistake.
Thanks in advance.
Put all the required images into the same texture, create a material from that texture, and apply that one material to all of the cubes making up your character. Then UV-map each cube to the relevant part of the texture (use the UV offset in the Unity editor if the UV blocks are quite simple; otherwise you'll need to move the UV elements in your 3D modelling program).
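If you'd rather remap the UVs in code than in a modelling program, here is a minimal Unity sketch assuming a 2x2 atlas. The component name and the `atlasColumn`/`atlasRow` fields are made up for illustration.

using UnityEngine;

public class AtlasUVRemap : MonoBehaviour
{
    public int atlasColumn = 0; // 0 or 1 in a 2x2 atlas
    public int atlasRow = 0;    // 0 or 1 in a 2x2 atlas

    void Start()
    {
        // Accessing .mesh instantiates a copy, so other cubes are unaffected.
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector2[] uvs = mesh.uv;
        for (int i = 0; i < uvs.Length; i++)
        {
            // Scale the default 0..1 UVs into a 0.5 x 0.5 tile and
            // offset them to the chosen atlas cell.
            uvs[i] = new Vector2(
                uvs[i].x * 0.5f + atlasColumn * 0.5f,
                uvs[i].y * 0.5f + atlasRow * 0.5f);
        }
        mesh.uv = uvs;
    }
}

All cubes then share one material, so Unity can batch them into far fewer draw calls.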
I am trying to augment the live RGBA stream from a Kinect sensor with some 3D models using XNA (i.e. adding 3D models into a live video scene).
I succeeded in augmenting the scene with 2D sprites (e.g. circles), but I cannot add 3D objects. I think the objects are there, but they are hidden by the video texture: I can see the 3D objects if I don't draw the video stream, but as soon as I start drawing the video stream, the objects disappear.
In XNA, 2D and 3D rendering calls are handled differently:
- 2D renderings are executed without a depth buffer (have a look at this SO post)
- 3D renderings are executed using a depth buffer by default
So what you want to check is whether the z coordinates of the objects you want to render are right:
If the rendered pixels of your 3D model are farther away than your RGBD data, the video stream overwrites the model's pixels; or, if the model is rendered after the RGBD data, its pixels are discarded right away by the depth test.
Try moving your whole RGBD data away from the camera and see if your 3D model appears; to achieve this, just increase the depth values of your data. Otherwise, decrease your 3D model's z coordinate until you can see it, but watch out: this may end with the 3D model being rendered behind the camera.
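Also worth checking: in XNA 4.0, SpriteBatch.Begin() switches the depth/stencil state off, so 3D geometry drawn afterwards loses its depth testing. Here is a sketch of restoring the states after drawing the video frame; `videoTexture` is a placeholder for your Kinect frame texture.

// Draw the video frame as a full-screen 2D sprite first.
spriteBatch.Begin();
spriteBatch.Draw(videoTexture, GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();

// SpriteBatch.Begin() set DepthStencilState.None and AlphaBlend;
// restore sensible 3D defaults before drawing the models.
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.BlendState = BlendState.Opaque;

// ...now draw the 3D models.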
How do I write text on top of a 3D model in XNA, or on the model itself?
I've made models for the players, and I need to show each player's name either on the model or floating above it.
Thanks in advance!
Well, you could either use billboarding, which draws the texture (or text) so that it always faces the camera, or you could make a new plane in your 3D modelling program and position it where you want the text to appear on the player. Then texture that plane separately from the rest of the player and change its texture with BasicEffect.Texture.
msdn: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.basiceffect_members.aspx
I would definitely recommend billboarding over changing the texture, as it's a whole lot less complicated.
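For the billboarding route, here is a minimal XNA sketch. It assumes the player's name has already been rendered into `nameTexture` (e.g. with SpriteBatch and a RenderTarget2D) and that `quadVertices` is a textured quad (four VertexPositionTexture vertices); those names are placeholders.

// Build a world matrix that keeps the quad facing the camera.
Matrix world = Matrix.CreateBillboard(
    playerHeadPosition,  // point above the player's model
    cameraPosition,
    Vector3.Up,
    null);               // let XNA derive the camera-forward vector

basicEffect.World = world;
basicEffect.View = view;
basicEffect.Projection = projection;
basicEffect.TextureEnabled = true;
basicEffect.Texture = nameTexture;

foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    // Two triangles forming the label quad.
    GraphicsDevice.DrawUserPrimitives(
        PrimitiveType.TriangleStrip, quadVertices, 0, 2);
}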