Two Classes Dependent on Each Other - C#

I have a Camera class, which handles camera behavior. Among its fields is a reference to the target's Cube class (Cube is just one of the objects, but I won't mention the others to keep it simple). In order to calculate the View matrix, I need the camera's position and the target's position, so I can tell my program: "the camera is placed here, and from here it's looking at this cube". Should the cube move around, the camera's point of view would change with it automatically.
So far, everything is good: there is a Camera class which depends on the Cube class, and the Cube class depends on nothing (in this example).
I run into a problem when I need to draw a cube, or anything else -- in order to draw something, among the required values is the View matrix of the Camera; that's the one I just calculated in the first paragraph. In essence, this means that when I get to the point of drawing things on-screen, the Cube class becomes dependent on the Camera class as well, and they're now dependent on each other. That would mean that I either:
need to make the View matrix field of the Camera class static, so I can access it directly from the Cube class.
need to make a method (e.g. SetView) in the Cube class, which I can then invoke from the Camera class (since I already have its reference there).
need to keep the View matrix in an outside scope.
need to make a bidirectional dependency.
But I don't like any of these:
there are multiple cameras handling multiple views (currently there are 3 of them on-screen) and there may be more (or fewer).
this makes the code slightly (sometimes very) unreadable -- for example, when I'm drawing the cube, it's not quite clear where the View matrix came from; you just kind of use it and don't look back.
I would access the outside scope from the camera class, or the outside scope would access the camera, and I wouldn't like this because the outside scope is only used to handle the execution mechanics.
I like to keep my reference fields readonly, as is currently the case everywhere in this system -- the references are set in the constructor and only used to get data from the referenced classes.
And, if I haven't made it clear, let me repeat that there are multiple Camera objects and multiple Cube objects; any Camera may or may not depend on any Cube, but usually there is at least one Camera dependent on a Cube.
Any suggestions would be appreciated :)

If your Cube must know how to render itself with respect to a Camera (and therefore must know about the Camera), then it probably doesn't make sense for the Camera to know how to align itself to a Cube. The alignment behavior that's currently in Camera probably belongs in a higher-level class like a CameraDirector that knows about both Cubes and Cameras. This improves class cohesion, as it splits off some of the responsibilities of Camera into a different, tightly-focused CameraDirector class. This lets the Camera focus on its core job and makes it easier to understand and maintain.
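For illustration, a minimal sketch of that idea (the CameraDirector name and the LookAt/Position members are assumptions, not code from the question):

// Hypothetical sketch: a higher-level class that knows about both
// classes, so Camera no longer needs a Cube reference to align itself.
public class CameraDirector
{
    private readonly Camera camera;
    private readonly Cube target;

    public CameraDirector(Camera camera, Cube target)
    {
        this.camera = camera;
        this.target = target;
    }

    // Call once per update: align the camera to its target.
    public void Update()
    {
        camera.LookAt(target.Position); // assumed members
    }
}

This also fits the question's preference for readonly references set in the constructor.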

Assuming your DrawWorld() routine already knows about Cubes and Cameras, I would pass the view matrix to the Cube's Draw() method:
foreach (Cube cube in cubes)
{
    cube.Draw(..., mainCamera.ViewMatrix, ...);
}
That way Cube only "depends" on Matrix and not on Camera. Then again, maybe that violates your third objection above. I can't do much better without seeing some of your code, though.

Two alternate options:
Create base classes for the Camera and Cube that don't link to each other but still contain most of the logic you want to use. Then you can add a BaseCamera reference to Cube, to which you would assign a Camera object, not a BaseCamera object. (And a Camera with a BaseCube field.) This is the power of polymorphism.
Define an ICamera and ICube interface. The Camera class would then use an ICube field to link to the cube and vice versa.
Both solutions will require you to take care when creating and freeing camera and cube objects, though. My personal preference would be to use interfaces. Do keep in mind that the ICube and ICamera interfaces should not link to each other. The implementing classes will link to the other interface, but the interfaces themselves will not.
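A minimal sketch of the interface approach (member names and the XNA Vector3/Matrix types are assumptions):

// Hypothetical sketch: the interfaces do not reference each other;
// only the implementing classes reference the other interface.
public interface ICube
{
    Vector3 Position { get; }
}

public interface ICamera
{
    Matrix ViewMatrix { get; }
}

public class Camera : ICamera
{
    private readonly ICube target; // set once, in the constructor

    public Camera(ICube target)
    {
        this.target = target;
    }

    public Matrix ViewMatrix { get; private set; }

    // Recompute ViewMatrix from target.Position each frame.
}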

I made each object class responsible for its own rendering.
In your case, you would have to pass each rendering method the graphics instance and the viewpoint relative to the object.
A drawing class would have access to the instances of all the object classes, and call the drawing method of each object in whatever order makes sense. Your object classes would probably have to have another method that determines the distance from the viewpoint, so that you can call the drawing methods in furthest-to-closest order.
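A rough sketch of that idea (the DistanceTo and Draw members are assumptions; requires System.Linq):

// Hypothetical sketch: sort far-to-near, then let each object render itself.
public void DrawAll(GraphicsDevice graphics, Vector3 viewpoint)
{
    var ordered = objects.OrderByDescending(o => o.DistanceTo(viewpoint));
    foreach (var obj in ordered)
    {
        obj.Draw(graphics, viewpoint); // painter's order: furthest first
    }
}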

When doing XNA I ran into a similar problem.
I solved it by adding a camera interface to my Draw method's signature.
It's not pretty, and a camera gets passed around everywhere, but it worked well.
The real thing to get is that your update and draw loops are separate.
When drawing, you have a list of objects to draw, and your draw routine has some camera class passed to it.
The alternative is to code your classes so that they produce a list of all of their objects that require rendering. Pass this list to the renderer, which contains the camera.
The point is that a list of cameras is maintained, along with a list of drawable objects, even though all of these objects also belong to a logical structure describing game state.
Read about inversion of control. That is really all I am describing.
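A small sketch of that second approach (all names are assumptions; IRenderable is used instead of IDrawable to avoid clashing with XNA's own interface):

// Hypothetical sketch: game state hands the renderer a flat list of
// renderables; the renderer owns the camera (inversion of control).
public interface IRenderable
{
    void Draw(Matrix view, Matrix projection);
}

public class Renderer
{
    private readonly Camera camera;

    public Renderer(Camera camera)
    {
        this.camera = camera;
    }

    public void DrawAll(IEnumerable<IRenderable> renderables)
    {
        foreach (var renderable in renderables)
        {
            renderable.Draw(camera.ViewMatrix, camera.ProjectionMatrix); // assumed properties
        }
    }
}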

Related

How can I make two animations work together in Unity?

I'm making a first person shooter and I'm trying to make a reload animation.
I want to make it look like the hand grabs the mag and pulls it down.
How do I make two animations work together?
I remember having to do that for my wrestling game. What I did was create 2 controllers and 2 animations. The controllers should be the same (same timing, transitions and all that). Then make one animation, for example the animation of the hand reloading the gun. Memorize the sample rate, which frames you put keys in, and what those keys change. Now animate how the mag gets pulled based on what you did previously: in which frame it should go down, in which frame it should be thrown or a new one inserted, etc.
Then you will have to enter play mode to see them play simultaneously. I am making a wrestling game, and this method is what I use.
Good luck
https://docs.unity3d.com/Manual/AnimatorOverrideController.html
The Animator Override Controller is a type of asset which allows you to extend an existing Animator Controller, replacing the specific animations used but otherwise retaining the original's structure, parameters and logic. This allows you to create multiple variants of the same basic state machine, but with each using different sets of animations. For example, your game may have a variety of NPC types living in the world, but each type (goblin, ogre, elf, etc.) has their own unique animations for walking, idling, sitting, etc.
By creating one "base" Animator Controller containing the logic for all NPC types, you could then create an override for each type and drop in their respective animation files.
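In code, swapping in the reload clips at runtime might look roughly like this (the clip name "Reload" and the field names are assumptions):

// Hypothetical sketch: wrap the current controller in an override
// controller and replace one clip.
using UnityEngine;

public class ReloadOverride : MonoBehaviour
{
    public AnimationClip customReloadClip; // assigned in the Inspector

    void Start()
    {
        var animator = GetComponent<Animator>();
        var overrideController =
            new AnimatorOverrideController(animator.runtimeAnimatorController);
        overrideController["Reload"] = customReloadClip; // assumed clip name
        animator.runtimeAnimatorController = overrideController;
    }
}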

How can I make two objects stick together on collision? (Unity 3D) C#

In a game I'm making, I'm trying to make two GameObjects stick together on collision. I've tried making the first a child of the other, so that when the parent moves the child moves with it. But when I do that, the child teleports and its scale changes (I know it has something to do with world vs. local position and world vs. local scale; the child's position and scale become relative to the parent's). But I don't know how to solve it.
If anyone could help I would appreciate it.
(it doesn't have to be parent-child related, I just need a clean fix)
Reparenting is the default solution here. If you're experiencing unexpected behaviour with it, that's usually a sign that you are using non-uniform scale somewhere in either of the parent chains. Best practice is to never use scales with different x, y, z factors. If you need that to change the shape of the box, make sure that you scale the box only and give it a dummy parent, to which you reparent your 'attaching' object. Having a non-uniform scale somewhere up in the chain (i.e. reparenting to an object that is non-uniformly scaled) will skew rotation/scale pairs down the chain, and while this might give the desired effect when only one object is involved, it may bite you when reparenting.
Alternatively, if that fails to solve your problem for any reason, newer versions of Unity have a component called ParentConstraint, which should let you achieve the same effect.
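A minimal sketch of reparenting on collision (assumes both objects have colliders and at least one has a Rigidbody so collision events fire):

// Hypothetical sketch: attach to the object that should stick.
using UnityEngine;

public class StickOnCollision : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // worldPositionStays = true keeps the world position, rotation
        // and scale, so the child won't visibly teleport on reparenting.
        transform.SetParent(collision.transform, true);
    }
}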

How to store additional data in a mesh?

I'm prototyping a climbing system for my stealth game. I want to mark some edges so that the player can grab them, but I'm not sure how to approach it. I know I can add separate colliders, but this seems quite tedious. Instead, what I'd like to do is mark the edges in the mesh itself. Can this be achieved? How?
You're probably better off storing that edge data externally (that is, elsewhere in your running game than in the mesh) in an accessible format. That said, if you truly want to embed it INTO the mesh...
Tuck the metadata into Mesh.colors or Mesh.uv2, as @Draco18s suggests. Most shaders ignore one or both of these. Of course, you'll be duplicating edge/face-level info for each vertex on the face.
When a player tries to "grab" an edge, run your collision test, grab the face, grab any vertex on the face, and look up your edge/face info.
You could also, for example, attach your edge metadata collection to a script attached to the mesh's game object. So long as you can access your collection of edges when you need to perform the test, where you store it is an implementation detail.
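For example, writing per-vertex metadata into the colour channel might look something like this (encoding "grabbable" in the red channel is an assumption):

// Hypothetical sketch: flag grabbable vertices via vertex colours.
using UnityEngine;

public static class EdgeTagger
{
    public static void TagGrabbable(Mesh mesh, bool[] grabbable)
    {
        var colors = new Color[mesh.vertexCount];
        for (int i = 0; i < colors.Length; i++)
        {
            // Red = grabbable; shaders that ignore vertex colours are unaffected.
            colors[i] = grabbable[i] ? Color.red : Color.black;
        }
        mesh.colors = colors;
    }
}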
If you have the ability to add nested objects in your mesh, you could create a primitive box that represents the desired ledge and collider size. When you import the mesh in Unity, the child mesh object will appear as a child gameObject with its own MeshRenderer. You can disable the MeshRenderer for the ledge and add a BoxCollider component to it. The collider will be automatically sized to match the primitive mesh.
Blender Nested Objects example
Unity Nested Objects example

Update Shader at Runtime in Unity

I have an object with a diffuse shader, and at runtime I want the shader to switch to Diffuse - Always Visible, but only when the unit is behind a specific object within the layer named "obstacles".
First I tried to switch the object's shader with the following code; the shader changes in the Inspector but not in the game during play. I tried placing the shader in Resources and calling it from there, and also created separate materials, but it's not working.
Here is the code I am using in C#:
Unit.renderer.material.shader = Shader.Find("Diffuse - Always visible");
As for the rest, I was thinking of using a raycast, but I'm not sure how to handle this.
Thanks in advance.
I only need the other concept now: how to make the shader change when a unit is behind an object!
For the trigger, how to handle it will depend a lot on how your camera moves.
To check in a 3D game, raycasts should work. Because whether or not something is "behind" something else depends on the camera's perspective, I would personally try something like this to start (a code sketch follows these steps):
Use Camera.WorldToScreenPoint to figure out where your object is on the screen.
Use Camera.ScreenPointToRay to convert this into a ray you'll send through the physics engine.
Use Physics.Raycast and check what gets hit. If it doesn't hit the object in question, something is in front of it.
Depending on your goals, you might want to check different points. If you're just checking if something is more than halfway covered by an object, doing a single raycast at its center is probably sufficient. If you want to see if it's at all occluded, you could try something like finding four approximate corners of the object and raycasting against them.
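Putting those steps together, an occlusion check might look roughly like this (the field names and the "obstacles" layer are assumptions):

// Hypothetical sketch: is 'target' hidden behind something on the
// "obstacles" layer, from this camera's point of view?
using UnityEngine;

public class OcclusionCheck : MonoBehaviour
{
    public Transform target;
    public Camera cam;

    bool IsOccluded()
    {
        // 1. Where is the target on screen?
        Vector3 screenPoint = cam.WorldToScreenPoint(target.position);
        // 2. Cast a ray from the camera through that screen point.
        Ray ray = cam.ScreenPointToRay(screenPoint);
        // 3. Only test against the "obstacles" layer, and only up to
        //    the target's distance.
        int mask = LayerMask.GetMask("obstacles");
        float distance = Vector3.Distance(cam.transform.position, target.position);
        // A hit means an obstacle sits between the camera and the target.
        return Physics.Raycast(ray, distance, mask);
    }
}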
If it's a 2D game or the camera is always at a fixed angle (for example, in a side scroller), the problem is potentially a lot simpler. Add colliders to the objects, and use OnTriggerEnter or OnTriggerEnter2D (along with the corresponding Exit functions). That only works if the tiniest bit of overlap should trigger the effect, and it only works if depth doesn't really matter (e.g. in a 2D game) so your colliders will actually intersect.

Draw points along the surface of a cube

So for an assignment I have to morph a cube into a sphere. All that's required is to have a bunch of points along the surface of a cube (they don't need to be connected or actually lie on a cube mesh, but they have to form a cube; connections would make it easier to look at) and have them smoothly translate into a sphere shape.
The main problem is that I never learned how to make points in XNA 4.0, and from what I've seen it's very different from what we did in OpenGL (we learned the old pipeline in a previous class).
Would anyone be able to help me figure out making the cube shape I need? Each side would have 10x10 points, with the points on an edge shared by the surfaces meeting at that edge. The structure would need to be easy to copy or modify, since I would need the start state, the end state, and an intermediate state to translate the points between the two.
If I left out anything that could be important let me know.
First of all, you should familiarise yourself with the Primitives3D sample. It illustrates all of the rendering APIs that you need.
Here is how I would approach this problem (you can look up these classes and methods on MSDN and it will hopefully help you flesh out the details):
Create an array of Vector3[] that represents an appropriately tessellated unit cube around the origin (0,0,0)
Create a second array of Vector3[] and use Vector3.Normalize to copy in the vertices from your first array. This will create a unit sphere with vertices that match up with the original cube.
Create an array of VertexPositionColor[]. Fill in the colour data however you like.
Use Vector3.Lerp to loop through the first two arrays, interpolating each element to set positions in the third array. This gives you a parameter you can animate (a sketch of this morph follows the list) - you will have to do this each frame (in Update is probably best).
Create an array of indices (short[]) that describes a triangle list of the tessellated cube (and, in turn, the sphere and animation between the two).
Set up a BasicEffect for rendering. This involves setting its World, View and Projection matrices and maybe turning on VertexColorEnabled. If you want lighting, see the sample for details (you'll need to use a vertex type with normals, and animate those normals correctly).
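As a rough sketch of the morph itself (BuildTessellatedCube is an assumed helper; t is your animation parameter in [0, 1]):

// Hypothetical sketch of steps 1, 2 and 4.
Vector3[] cubePositions = BuildTessellatedCube();   // step 1 (assumed helper)
Vector3[] spherePositions = new Vector3[cubePositions.Length];
for (int i = 0; i < cubePositions.Length; i++)
{
    // Normalizing pushes each cube vertex onto the unit sphere (step 2).
    spherePositions[i] = Vector3.Normalize(cubePositions[i]);
}

// Step 4, each frame: 'vertices' is the VertexPositionColor[] from step 3.
for (int i = 0; i < vertices.Length; i++)
{
    vertices[i].Position = Vector3.Lerp(cubePositions[i], spherePositions[i], t);
}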
The way to render with an effect is:

foreach (EffectPass effectPass in effect.CurrentTechnique.Passes)
{
    effectPass.Apply();
    /* your stuff here */
}
You could create a DynamicVertexBuffer and IndexBuffer and draw with those. But for something simple like this, DrawUserIndexedPrimitives is much easier (here is a recent answer with some details, and here's another one with a complete example of BasicEffect).
Note that you should only create these objects at startup (LoadContent is a good place). During rendering you're simply using them.
If you have trouble with your 3D rendering, and you're drawing text or sprites, see this article for how to fix it.
