How to store additional data in a mesh? - c#

I'm prototyping a climbing system for my stealth game. I want to mark some edges so that the player can grab them, but I'm not sure how to approach it. I know I can add separate colliders, but this seems quite tedious. Instead, what I'd like to do is mark the edges in the mesh. Can this be achieved? How?

You're probably better off storing that edge data externally (that is, elsewhere in your running game than in the mesh) in an accessible format. That said, if you truly want to embed it INTO the mesh...
Tuck the metadata into Mesh.colors or Mesh.uv2 as @Draco18s suggests. Most shaders ignore one or both of these channels. Of course, you'll be duplicating edge/face-level info for each vertex on the face.
When a player tries to "grab" an edge, run your collision test, grab the face, grab any vertex on the face, and look up your edge/face info.
You could also, for example, keep your edge metadata collection in a script on the mesh's game object. So long as you can access your collection of edges when you need to perform the test, where you store it is an implementation detail.
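As a rough illustration of the vertex-color approach (the class, the method names, and the single-flag-in-the-red-channel encoding are my own, not from the answers above), something like this could tag grabbable vertices and read the flag back from a raycast hit:

using UnityEngine;

public class GrabbableEdges : MonoBehaviour
{
    // Marks the given vertices as grabbable by writing a flag into the
    // red channel of Mesh.colors (a channel most shaders never sample).
    public static void MarkGrabbable(Mesh mesh, int[] grabVertices)
    {
        var colors = new Color[mesh.vertexCount]; // defaults to all zero: not grabbable
        foreach (int i in grabVertices)
            colors[i] = Color.red; // r = 1 means "grabbable"
        mesh.colors = colors;
    }

    // On a hit, look up any vertex of the hit triangle and test the flag.
    // Note: RaycastHit.triangleIndex is only valid for MeshCollider hits.
    public static bool IsGrabbable(RaycastHit hit)
    {
        var meshCollider = hit.collider as MeshCollider;
        if (meshCollider == null) return false;
        Mesh mesh = meshCollider.sharedMesh;
        if (mesh.colors.Length == 0) return false; // no metadata embedded
        int vertexIndex = mesh.triangles[hit.triangleIndex * 3];
        return mesh.colors[vertexIndex].r > 0.5f;
    }
}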

If you have the ability to add nested objects in your mesh, you could create a primitive box that represents the desired ledge and collider size. When you import the mesh in Unity, the child mesh object will appear as a child gameObject with its own MeshRenderer. You can disable the MeshRenderer for the ledge and add a BoxCollider component to it. The collider will be automatically sized to match the primitive mesh.
(Image: Blender nested objects example)
(Image: Unity nested objects example)
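If you'd rather do that setup from code than by hand in the inspector, a minimal sketch might look like this (the child name "LedgeBox" is a placeholder of my own, and the auto-fit behaviour of a freshly added BoxCollider is worth verifying in your Unity version):

using UnityEngine;

public class LedgeSetup : MonoBehaviour
{
    void Awake()
    {
        // "LedgeBox" stands in for whatever the imported child object is named.
        Transform ledge = transform.Find("LedgeBox");
        ledge.GetComponent<MeshRenderer>().enabled = false; // hide the helper box
        // A BoxCollider added to an object with a mesh fits itself to the mesh bounds.
        ledge.gameObject.AddComponent<BoxCollider>();
    }
}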

Related

How to instantiate multiple objects in area without intersection? (Unity)

The task is to create a bunch of 3D prefabs with rigidbodies and mesh colliders in a specific box-shaped area, so that none of them overlap each other, after which they should fall to the surface beneath. If I set a random position for each object, sometimes (especially if there are a lot of objects to create) some of them are created too close to others, so they push each other while falling, and everything turns into a mess.
Is there a way to create these objects with at least minimal space between each other, so they can physically interact after they have fallen? Please take into account that the spawning box is big enough to contain the necessary number of objects; there's no need for a huge number of them, but the count can differ each time.
One solution would be to make the box collider smaller: for instance, if the object scale is (1,1,1), the collider would be (0.95f, 0.95f, 0.95f) or something similar. Of course, this only works if you generate your cubes in a grid-like system. After the first collision, try to change the collider size again. If the problem persists, try to make the colliders bigger and add a physics material with 0 friction.
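A minimal sketch of that grid variant (the prefab reference and counts are placeholders of my own; it assumes a unit-cube prefab with a BoxCollider):

using UnityEngine;

public class GridSpawner : MonoBehaviour
{
    public GameObject prefab; // assumed: a unit cube with a BoxCollider and Rigidbody
    public int countX = 5, countY = 5, countZ = 5;

    void Start()
    {
        for (int x = 0; x < countX; x++)
            for (int y = 0; y < countY; y++)
                for (int z = 0; z < countZ; z++)
                {
                    var position = transform.position + new Vector3(x, y, z);
                    var cube = (GameObject)Instantiate(prefab, position, Quaternion.identity);
                    // Slightly smaller collider so neighbours never start in contact.
                    cube.GetComponent<BoxCollider>().size = Vector3.one * 0.95f;
                }
    }
}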
Assuming you have a big enough area for your objects, the dirtiest but easiest solution (not meddling with bounding boxes etc.) would be to split the logic into two phases and use triggers to reposition the object randomly.
In the first phase, the generation phase, the object renderers are not active and the colliders are active.
Generate an object.
If the collider triggers an overlap and we are in the generation phase, reposition or respawn the object.
In the second phase, make renderers active and make them fall, disable colliders etc.
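A rough sketch of those phases (all names here are illustrative; the exact trigger behaviour between kinematic trigger colliders is worth verifying in your Unity version):

using UnityEngine;

public class SpawnedObject : MonoBehaviour
{
    public Bounds spawnArea; // set by the spawner

    bool placing = true; // phase 1: invisible, trigger collider, no physics

    void Awake()
    {
        GetComponent<Renderer>().enabled = false;
        GetComponent<Collider>().isTrigger = true;
        GetComponent<Rigidbody>().isKinematic = true;
    }

    void OnTriggerEnter(Collider other)
    {
        if (!placing) return;
        // Overlapped another object during generation: pick a new random spot.
        transform.position = new Vector3(
            Random.Range(spawnArea.min.x, spawnArea.max.x),
            Random.Range(spawnArea.min.y, spawnArea.max.y),
            Random.Range(spawnArea.min.z, spawnArea.max.z));
    }

    // Phase 2: show the object and let it fall.
    public void StartFalling()
    {
        placing = false;
        GetComponent<Renderer>().enabled = true;
        GetComponent<Collider>().isTrigger = false;
        GetComponent<Rigidbody>().isKinematic = false;
    }
}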

Update Shader on Runtime in Unity

I have an object which has a diffuse shader, and at runtime I want the shader to switch to Diffuse Always Visible, but this should trigger only if the unit is behind a specific object within the layer named obstacles.
First I tried to switch the object's shader with the following code; the shader is changed in the inspector but not in the game during play. I tried placing the shader in Resources and calling it from there, and I also created separate materials, but it's not working.
Here is the code I am using in C#
Unit.renderer.material.shader = Shader.Find("Diffuse - Always visible");
As for the rest, I was thinking of using a raycast, but I'm not sure how to handle this.
Thanks in advance
I only need the other concept now where the shader changes if a unit is behind an object!
For the trigger, how to handle it will depend a lot on how your camera moves.
To check in a 3D game, raycasts should work. Because whether or not something is "behind" something else depends on the camera's perspective, I would personally try something like this to start:
Use Camera.WorldToScreenPoint to figure out where your object is on the screen.
Use Camera.ScreenPointToRay to convert this into a ray you'll send through the physics engine.
Use Physics.Raycast and check what gets hit. If it doesn't hit the object in question, something is in front of it.
Depending on your goals, you might want to check different points. If you're just checking if something is more than halfway covered by an object, doing a single raycast at its center is probably sufficient. If you want to see if it's at all occluded, you could try something like finding four approximate corners of the object and raycasting against them.
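Put together, a minimal sketch of the single-raycast version (the class and method names are mine; it assumes the unit has a collider the ray can hit):

using UnityEngine;

public static class OcclusionCheck
{
    // True if the first thing hit along the camera ray through the unit's
    // position is not the unit itself, i.e. something is in front of it.
    public static bool IsOccluded(Camera cam, Transform unit)
    {
        Vector3 screenPoint = cam.WorldToScreenPoint(unit.position);
        Ray ray = cam.ScreenPointToRay(screenPoint);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
            return hit.transform != unit;
        // Alternatively, pass a LayerMask restricted to the "obstacles"
        // layer and treat any hit at all as occlusion.
        return false;
    }
}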
If it's a 2D game or the camera is always at a fixed angle (for example, in a side scroller), the problem is potentially a lot simpler. Add colliders to the objects, and use OnTriggerEnter or OnTriggerEnter2D (along with the corresponding Exit functions). That only works if the tiniest bit of overlap should trigger the effect, and only if depth doesn't really matter (e.g. in a 2D game), so that your colliders will actually intersect.
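For that fixed-angle case, a sketch of the trigger version (the "Obstacle" tag is an assumption of mine; the "Diffuse - Always visible" shader name is the one from the question):

using UnityEngine;

public class OccludedByObstacle : MonoBehaviour
{
    Shader originalShader;

    void Awake()
    {
        // Remember the starting shader so we can restore it on exit.
        originalShader = GetComponent<Renderer>().material.shader;
    }

    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.CompareTag("Obstacle"))
            GetComponent<Renderer>().material.shader =
                Shader.Find("Diffuse - Always visible");
    }

    void OnTriggerExit2D(Collider2D other)
    {
        if (other.CompareTag("Obstacle"))
            GetComponent<Renderer>().material.shader = originalShader;
    }
}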

Changing Objects Texture in Unity Game Engine Using C# Scripts

I have two objects, both with different textures and I want to make them the same at a certain point of time. The current code I am looking at is the following:
weaponObject.renderer.material.mainTexture = selectedWeapon.renderer.material.mainTexture;
Unfortunately, this does not seem to work. The "weaponObject" texture seems to remain the same but simply moves further backwards along the z axis. Any tips? Both objects are of type GameObject.
You need to make sure that the textures fit both GameObjects. Pretty much, you can't attach the texture of an M4 to an M16; the texture won't align correctly.
You also need to make sure that both objects use the same type of material. Remember a material affects how it will look, so even the same texture on different materials will look different.
(Example image: the same texture applied with two different materials.)
If the two objects are identical, which they should be if you want consistent results, then just swap the materials:
weaponObject.renderer.material = NewMaterial;
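As a side note for newer Unity versions (5 and later), the renderer shorthand property was removed, so the same swap would be written with an explicit component lookup, e.g.:
weaponObject.GetComponent<Renderer>().material = selectedWeapon.GetComponent<Renderer>().sharedMaterial;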

Draw points along the surface of a cube

So for an assignment I have to morph a cube into a sphere. All that's required is to have a bunch of points along the surface of a cube (they don't need to be connected or actually be on a cube, but they have to form a cube, though connections would make it easier to look at) and have them smoothly translate into a sphere shape.
The main problem is I never learned how to make points in XNA 4.0 and from what I've seen it's very different to what we did in OpenGL (we learned the old one in a previous class).
Would anyone be able to help me figure out making the cube shape I need? Each side would have 10x10 points with the points on the edge shared by the surfaces of that edge. The structure would need to be easy to copy or modify since I would need to have the start state, end state, and the intermediate state to translate the points between the two states.
If I left out anything that could be important let me know.
First of all, you should familiarise yourself with the Primitives3D sample. It illustrates all of the rendering APIs that you need.
Here is how I would approach this problem (you can look up these classes and methods on MSDN and it will hopefully help you flesh out the details):
Create an array of Vector3[] that represents an appropriately tessellated unit cube around the origin (0,0,0)
Create a second array of Vector3[] and use Vector3.Normalize to copy in the vertices from your first array. This will create a unit sphere with vertices that match up with the original cube.
Create an array of VertexPositionColor[]. Fill in the colour data however you like.
Use Vector3.Lerp to loop through the first two arrays, interpolating each element to set positions in the third array. This gives you a parameter you can animate - you will have to do this each frame (in Update is probably best).
Create an array of indices (short[]) that describes a triangle list of the tessellated cube (and, in turn, the sphere and animation between the two).
Set up a BasicEffect for rendering. This involves setting its World, View and Projection matrices and maybe turning on VertexColorEnabled. If you want lighting, see the sample for details (you'll need to use a vertex type with normals, and animate those normals correctly).
The way to render with an effect is:
foreach (EffectPass effectPass in effect.CurrentTechnique.Passes)
{
    effectPass.Apply();
    /* your stuff here */
}
You could create a DynamicVertexBuffer and IndexBuffer and draw with those. But for something simple like this, DrawUserIndexedPrimitives is much easier (here is a recent answer with some details, and here's another one with a complete example of BasicEffect).
Note that you should only create these objects at startup (LoadContent is a good place). During rendering you're simply using them.
If you have trouble with your 3D rendering, and you're drawing text or sprites, see this article for how to fix it.
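Pulling those steps together, here is a condensed sketch of the members and overrides inside a Game subclass (tessellation density, camera values and colours are my own choices; only one cube face is built to keep it short, and winding is sidestepped with CullNone):

// Assumes: using System; using System.Collections.Generic;
// using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;
Vector3[] cubePoints, spherePoints;
VertexPositionColor[] vertices;
short[] indices;
BasicEffect effect;

protected override void LoadContent()
{
    // Step 1: tessellated unit cube around the origin. Only one 10x10 face
    // is built here; repeat for all six faces (sharing or duplicating edges).
    var points = new List<Vector3>();
    for (int i = 0; i < 10; i++)
        for (int j = 0; j < 10; j++)
            points.Add(new Vector3(i / 9f - 0.5f, j / 9f - 0.5f, 0.5f));
    cubePoints = points.ToArray();

    // Step 2: matching sphere vertices are just the normalized cube vertices.
    spherePoints = new Vector3[cubePoints.Length];
    for (int i = 0; i < cubePoints.Length; i++)
        spherePoints[i] = Vector3.Normalize(cubePoints[i]);

    // Step 3: the array actually sent to the GPU each frame.
    vertices = new VertexPositionColor[cubePoints.Length];

    // Step 5: triangle-list indices over the 10x10 grid (two triangles per cell).
    indices = new short[9 * 9 * 6];
    int k = 0;
    for (int i = 0; i < 9; i++)
        for (int j = 0; j < 9; j++)
        {
            short tl = (short)(i * 10 + j);
            indices[k++] = tl; indices[k++] = (short)(tl + 1); indices[k++] = (short)(tl + 10);
            indices[k++] = (short)(tl + 1); indices[k++] = (short)(tl + 11); indices[k++] = (short)(tl + 10);
        }

    // Step 6: BasicEffect setup.
    effect = new BasicEffect(GraphicsDevice) { VertexColorEnabled = true };
    effect.World = Matrix.Identity;
    effect.View = Matrix.CreateLookAt(new Vector3(0, 0, 3), Vector3.Zero, Vector3.Up);
    effect.Projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 0.1f, 10f);
}

protected override void Update(GameTime gameTime)
{
    // Step 4: animate the parameter and lerp every vertex between the states.
    float t = 0.5f + 0.5f * (float)Math.Sin(gameTime.TotalGameTime.TotalSeconds);
    for (int i = 0; i < vertices.Length; i++)
        vertices[i] = new VertexPositionColor(
            Vector3.Lerp(cubePoints[i], spherePoints[i], t), Color.White);
    base.Update(gameTime);
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    GraphicsDevice.RasterizerState = RasterizerState.CullNone; // ignore winding
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        GraphicsDevice.DrawUserIndexedPrimitives(
            PrimitiveType.TriangleList, vertices, 0, vertices.Length,
            indices, 0, indices.Length / 3);
    }
    base.Draw(gameTime);
}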

Two Classes Dependent On Each Other

I have a Camera class, which handles camera behavior. Among its fields is a reference to the target's Cube class (Cube is just one of the object classes, but I won't mention the others to keep it simple). In order to calculate the View matrix, I need the camera's position and the target's position, so I can explain to my program: "the camera is placed here, and from here it's looking at this cube". Should the cube happen to move around, so too would the camera's point of view change automatically.
So far, everything is good: there is a Camera class which depends on the Cube class, and there's the Cube class which depends on nothing (in this example).
I get to a problem when I need to draw a cube, or anything else -- in order to draw something, among the required values is the View matrix of the Camera; that's the one I've just calculated in the first paragraph. In essence, this means that when I get to the point of drawing things on-screen, the Cube class becomes dependent on the Camera class as well, and they're now dependent on each other. That would mean that I either:
need to make the View matrix field of the Camera class static, so I can access it directly from the Cube class.
need to make a method (e.g. SetView) in the Cube class, which I can then invoke from the Camera class (since I already have its reference there).
need to keep a View matrix in outside scope.
need to make a bidirectional dependency.
But, I don't like either of these:
there are more cameras which handle multiple views (currently there are 3 of them on-screen) and there may be more (or less).
this makes the code slightly (sometimes, maybe, very) unreadable -- for example when I'm drawing the cube, it's not quite clear where the View matrix came from, you just kinda use it and don't look back.
I would access the outside scope from the camera class, or the outside scope would access the camera, and I wouldn't like this because the outside scope is only used to handle the execution mechanics.
I like to keep my reference fields "readonly" as it's currently everywhere in this system -- the references are set in the constructor, and only used to get data from referenced classes.
And, if I haven't made it clear, let me just repeat that there are multiple Camera objects, and multiple Cube objects; whereas any Camera may or may not depend on any Cube, but usually there is at least one Camera dependent on a Cube.
Any suggestions would be appreciated :)
If your Cube must know how to render itself with respect to a Camera (and therefore must know about the Camera), then it probably doesn't make sense for the Camera to know how to align itself to a Cube. The alignment behavior that's currently in Camera probably belongs in a higher-level class like a CameraDirector that knows about both Cubes and Cameras. This improves class cohesion, as it splits off some of the responsibilities of Camera into a different, tightly-focused CameraDirector class. This lets the Camera focus on its core job and makes it easier to understand and maintain.
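A sketch of what that higher-level class might look like (CameraDirector, LookAt and Position are illustrative names, not from the original post):

// Knows about both Cameras and Cubes; neither needs to know about the other.
public class CameraDirector
{
    private readonly Camera camera;
    private readonly Cube target;

    public CameraDirector(Camera camera, Cube target)
    {
        this.camera = camera;
        this.target = target;
    }

    public void Update()
    {
        // Recompute the view from the camera's and the target's positions.
        camera.LookAt(target.Position);
    }
}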
Assuming your DrawWorld() routine already knows about Cubes and Cameras, I would pass the view matrix to the Cube's Draw() method:
foreach (Cube cube in cubes) {
cube.Draw(..., mainCamera.ViewMatrix, ...);
}
That way Cube only "depends" on Matrix and not on Camera. Then again, maybe that violates rule 3 above. I can't do much better without seeing some of your code, though.
Two alternate options:
Create base classes for the Camera and Cube that don't link to each other but still contain most of the logic you want to use. Then you can add a BaseCamera reference to Cube, to which you would assign a Camera object, not a BaseCamera object. (And a Camera with a BaseCube field.) This is the power of polymorphism.
Define an ICamera and ICube interface. The Camera class would then use an ICube field to link to the cube and vice versa.
Both solutions will require you to take care when creating and freeing new camera and cube objects, though. My personal preference would be the use of interfaces. Do keep in mind that the ICube and ICamera interfaces should not link to each other: the classes link to the other interface, but the interfaces themselves stay independent.
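A minimal sketch of the interface option (the members are invented for illustration; Matrix and Vector3 are the XNA types):

public interface ICamera
{
    Matrix ViewMatrix { get; }
}

public interface ICube
{
    Vector3 Position { get; }
}

// Camera refers to the cube only through ICube, never through Cube itself.
public class Camera : ICamera
{
    private readonly ICube target;

    public Matrix ViewMatrix { get; private set; }

    public Camera(ICube target) { this.target = target; }

    public void Update(Vector3 cameraPosition)
    {
        ViewMatrix = Matrix.CreateLookAt(cameraPosition, target.Position, Vector3.Up);
    }
}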
I made each object class responsible for its own rendering.
In your case, you would have to pass each rendering method the graphics instance and the viewpoint relative to the object.
A drawing class would have access to the instances of all the object classes, and would call the drawing method of each object in whatever order makes sense. Your object classes would probably need another method that determines the distance from the viewpoint, so that you can call the drawing methods in furthest-to-closest order.
When doing XNA I had a similar problem.
I solved it by passing a camera interface into my Draw method.
It's not pretty and a camera gets passed around everywhere, but it worked well.
The real thing to get is that your update and draw loops are separate.
When drawing, you have a list of objects to draw, and your draw routine has some camera class passed to it.
The alternative is to code your classes in a way that produces a list of all their objects that require rendering. Pass this to the renderer, which contains the camera.
The point being that a list of cameras is maintained, along with a list of drawable objects despite all of these objects also belonging to a logical pattern describing game state.
Read about inversion of control. It is all I am describing really.
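A sketch of that renderer-owns-the-camera arrangement (all names invented for illustration; assumes a Camera type exposing a Position property, and using System.Collections.Generic):

public interface IDrawable
{
    Vector3 Position { get; }
    void Draw(Camera camera);
}

// Owns the camera and the draw list; game-state classes register their
// drawable objects here instead of knowing about cameras themselves.
public class SceneRenderer
{
    private readonly List<IDrawable> drawables = new List<IDrawable>();
    private readonly Camera camera;

    public SceneRenderer(Camera camera) { this.camera = camera; }

    public void Register(IDrawable drawable) { drawables.Add(drawable); }

    public void DrawAll()
    {
        // Furthest-to-closest, as suggested above for correct draw order.
        drawables.Sort((a, b) =>
            Vector3.Distance(b.Position, camera.Position)
                .CompareTo(Vector3.Distance(a.Position, camera.Position)));
        foreach (IDrawable d in drawables)
            d.Draw(camera);
    }
}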
