I've used the .NET port of LibNoise to create a planetary map using its built-in sphere projection. However, now I want to wrap that texture around a sphere in XNA. I've got a sphere model, but I know very little about UV wrapping, etc. It's entirely possible, if not likely, that the way I've set up the UV coordinates on my model simply won't work with the generated texture.
I've set up a small test project rather than fiddle around in my main game. It's your basic rotating model project. I'm using BasicEffect on the model and setting the Texture parameter as my map. However, all I see is the model with its default diffuse color and no texture.
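For reference, the kind of draw loop I'm describing follows the standard BasicEffect pattern, roughly like this (a simplified sketch rather than my exact Game1.cs; the field names are placeholders, and the usual gotcha is that TextureEnabled has to be set or the texture is ignored entirely):

    // Simplified sketch of the draw loop (not the exact project code).
    // "sphere" is the loaded Model, "planetTexture" the generated Texture2D,
    // and "rotation", "view", "projection" are the usual camera/transform fields.
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        foreach (ModelMesh mesh in sphere.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.World = Matrix.CreateRotationY(rotation);
                effect.View = view;
                effect.Projection = projection;

                // Without this flag, BasicEffect ignores the texture and
                // falls back to the diffuse color.
                effect.TextureEnabled = true;
                effect.Texture = planetTexture;

                effect.EnableDefaultLighting();
            }
            mesh.Draw();
        }

        base.Draw(gameTime);
    }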
For your convenience, the full code of the project:
Game1.cs
PlanetTerrainMap.cs
Required files:
sphere.fbx
EarthLookupTable.png
Also, I totally recognize that my map does not look like a map. I can handle that issue later. I just want to see all that crappy grain noise on the sphere so I can move forward.
Do I need to use a custom shader? Or do I need a different model?
Have you tried opening this in Blender? It's a great way to confirm whether the UV coordinates specified in your model line up with the texture you're trying to use. If it doesn't render correctly after it's imported into Blender, it's highly likely you won't get it to render in XNA without specifying the mapping yourself.
I am building an app for the HoloLens (gen 1) device using Unity 2018.3.13f and MRTK V2 RC1. I have a simple AR design with 2 text objects and 1 RawImage object. After building the project and deploying it to the HoloLens, the AR objects end up behind the spatial mesh (you know, all those spatial triangles), but I want all the objects to be in front of the wall.
How do I accomplish this?
The canvas is set to be on the main camera
I have the original settings for the DefaultMixedRealityConfigurationProfile, if there is something there that needs to be changed.
This is how it looks through the HoloLens when the app does not show the mesh of the wall (sorry for the bad quality),
and this is how it looks when it falls behind the mesh.
Do I need to add some mesh renderer or something on the MainCamera to make this possible?
Any help is appreciated, thanks!
I don't believe that the MRTK v2 (as of 2019/5/9) has code that will automatically ensure that a specific object stays positioned between the camera and other arbitrary meshes and colliders. The spatial awareness mesh is one such mesh, though you could also imagine an arbitrary box or plane in the scene that would occlude the object, in which case you might want your "in between" object to stay in front of both of those kinds of potentially occluding things.
There used to be a script in the HTK (HoloToolkit) called Tagalong.cs that would do something like this by doing raycasts from the camera to collidable objects:
https://github.com/microsoft/MixedRealityToolkit-Unity/blob/htk_release/Assets/HoloToolkit/Utilities/Scripts/Tagalong.cs
I think this single large script got broken up into smaller scripts (specific behaviors in the solvers here):
https://github.com/microsoft/MixedRealityToolkit-Unity/tree/mrtk_release/Assets/MixedRealityToolkit.SDK/Features/Utilities/Solvers
However, from what I can tell, the specific interaction of "keep things automatically between the camera and whatever collidable object" wasn't preserved. Someone can correct me here if I'm wrong, but it looks like this behavior wasn't carried over into V2.
Going forward, there are a couple of possibilities:
1) File an issue on GitHub (https://github.com/microsoft/MixedRealityToolkit-Unity/issues) to request that this feature be ported over.
2) Use the code in Tagalong.cs to write your own solver that accomplishes this (the code looks to be all there; it just needs some rework to do what you want).
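As a rough illustration of the raycast idea from Tagalong.cs, a plain MonoBehaviour version might look something like the sketch below. This is not the MRTK solver API; the layer mask, distances and names are assumptions you would tune for your scene.

    using UnityEngine;

    // Keeps this object in front of whatever the camera is looking at by
    // raycasting against the spatial mesh (or any other colliders) and
    // pulling the object slightly closer than the hit point.
    public class KeepInFrontOfMesh : MonoBehaviour
    {
        public float defaultDistance = 2.0f;   // used when nothing is hit
        public float surfaceOffset = 0.05f;    // how far in front of the mesh to sit
        public LayerMask occluderLayers = ~0;  // e.g. only the Spatial Awareness layer

        void LateUpdate()
        {
            Transform cam = Camera.main.transform;
            float distance = defaultDistance;

            RaycastHit hit;
            if (Physics.Raycast(cam.position, cam.forward, out hit, defaultDistance, occluderLayers))
            {
                distance = hit.distance - surfaceOffset;
            }

            transform.position = cam.position + cam.forward * distance;
            // Orient the object the same way the camera faces so a
            // world-space canvas stays readable.
            transform.rotation = Quaternion.LookRotation(cam.forward);
        }
    }

In MRTK v2 terms, this is roughly the behavior a custom Solver built from the Tagalong.cs logic would provide.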
If you use a sprite renderer, set its Order in Layer (to 0 or -1).
If you use a mesh renderer, try disabling Dynamic Occlusion on it.
You can also try changing the order of the sorting layers under Edit -> Project Settings -> Sorting Layers.
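In code, the first two suggestions amount to something like this (a small sketch; the GetComponent calls are assumed to run on the object in question):

    // Sprite renderer: set the Order in Layer (0 or -1, as suggested above).
    GetComponent<SpriteRenderer>().sortingOrder = -1;

    // Mesh renderer: turn off dynamic occlusion culling for this renderer.
    GetComponent<MeshRenderer>().allowOcclusionWhenDynamic = false;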
I wanted to make an outline shader, but only achieved the following. It works perfectly on the sphere, but on the cube it hardly works at all. If anyone knows how to use Shader Graph, please help!
I tried to recreate your shader; do you have the correct shader type?
The effect you're creating there puts an emission on faces viewed at a certain angle; it won't produce an outline shader in Shader Graph. Here is a YouTube tutorial for the type of effect you're trying to achieve: https://youtu.be/SMLbbi8oaO8
Old post, but I have found a very easy way to do a simple outline shader. This is in the newest version of Unity (Unity 2022.1.0b14.2951).
Here's the graph:
Here are the settings:
And here are the results:
It starts to show its negative qualities when two outlined objects intersect, however.
I have been poking around online about creating height maps for terrain, and I really can't get the hang of it. My question is: if I create a terrain model in Blender, would I be able to use that in XNA (VS2010) as my terrain?
Are there any drawbacks to doing so?
XNA supports the .FBX model format, so you can export your model from Blender and use it in XNA.
using-blender-models-in-xna
Blender-toXNA
However, using a model for your terrain is not a good idea at all. As you know, each frame the engine has to render every object in your camera view, so a large terrain model must be rendered in its entirety, which is expensive.
So using a model for the terrain is probably a bad idea unless the terrain is just one big square plane. You're probably going to have better luck building your terrain in code no matter what you intend the terrain to look like, and you can split your terrain into small parts, say 512 units each.
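As a rough sketch of what "building the terrain in code" from a height map can look like in XNA (names are illustrative; the height map is assumed to be a grayscale Texture2D loaded through the content pipeline):

    // Builds a grid of vertices from a grayscale height map: one vertex per
    // pixel, with the red channel driving the height.
    VertexPositionNormalTexture[] BuildTerrainVertices(Texture2D heightMap, float heightScale)
    {
        int width = heightMap.Width;
        int depth = heightMap.Height;

        Color[] pixels = new Color[width * depth];
        heightMap.GetData(pixels);

        var vertices = new VertexPositionNormalTexture[width * depth];
        for (int z = 0; z < depth; z++)
        {
            for (int x = 0; x < width; x++)
            {
                float y = pixels[z * width + x].R / 255f * heightScale;
                vertices[z * width + x] = new VertexPositionNormalTexture(
                    new Vector3(x, y, z),
                    Vector3.Up, // placeholder; recompute from neighbours for proper lighting
                    new Vector2(x / (float)(width - 1), z / (float)(depth - 1)));
            }
        }
        return vertices;
    }

The index buffer (two triangles per grid cell) and the splitting into ~512-unit chunks are left out for brevity.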
Edited
An example of terrain in XNA
A complete guide
I have made a terrain that is generated from a height map file where each pixel (black to white) represents the height of the terrain at the corresponding location.
Now, my question is how would one make a map editor for something like that? I can think of two general ways:
1) The map editor modifies the height map file and regenerates the terrain based on that.
2) The map editor directly alters the vertices of the map and, later upon saving, generates a height map based on those vertices.
Do you have any good tutorials or resources as to how to get either one to work? I have no idea where to begin.
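Roughly, option 1 above amounts to editing a heights array under a brush and then regenerating the mesh, something like the sketch below (just an illustration of the idea; it assumes the heights were read from the height map into a float[,] and that the image and mesh are rebuilt afterwards):

    // Raises the terrain inside a circular brush with a smooth falloff.
    // After editing, write the float[,] back to the height map image (or
    // straight into the vertex buffer) and rebuild the terrain mesh.
    void RaiseTerrain(float[,] heights, int centerX, int centerZ, int radius, float amount)
    {
        for (int z = centerZ - radius; z <= centerZ + radius; z++)
        {
            for (int x = centerX - radius; x <= centerX + radius; x++)
            {
                if (x < 0 || z < 0 || x >= heights.GetLength(0) || z >= heights.GetLength(1))
                    continue;

                float dist = Vector2.Distance(new Vector2(x, z), new Vector2(centerX, centerZ));
                if (dist > radius)
                    continue;

                float falloff = 1f - (dist / radius); // soft edges for the brush
                heights[x, z] += amount * falloff;
            }
        }
    }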
Check out the XNA Terrain Editor by Eric Grossinger.
I've played around with this thing a little bit, and it's pretty slick. It should, at the very least, give you some ideas if not an outright solution.
This book, Building XNA Games, is an excellent reference and has a great overview of how to create your map editor. The only downside is that it's in XNA 2.0, so you would have to do some converting, but the idea remains the same.
I have a 3D car mesh object. How can I reflect 3D text onto the surface of the mesh?
I'm using Visual Studio 2008 C# Express. The 3D car mesh object is ready for use in C# project.
That is, I must use it in my C# project, not in Blender.
All the development processes that I need must be done in C# development environment.
You should use a 3D modeling tool like Blender, AC3D or similar. This will help you create the model and place any textures on it. Then you will also need some kind of rendering engine so you can load and draw the model in your app.
It's unclear specifically what you're talking about here.
Did you mean Text, or Texture?
Simply texturing the model - If you're using Blender to create the model, texture it there, export it in .x or another format, and render it using an appropriate library.
Reflecting Text - Look into projective decals.
Or a bit more advanced
Reflection of the environment - To simulate reflective surfaces you're going to need to program a shader. Look into HLSL, GLSL, CG, or you can do the shader in ASM. - Reflection Mapping
EDIT: Added link to reflection mapping.
If you need Direct 3D access at run-time, try using SlimDX.