Here is the link to the documentation:
https://docs.unity3d.com/ScriptReference/Sprite.OverridePhysicsShape.html
I understand the IList part. But the Vector2[], I do not understand: how would I populate it with data and use it here?
I haven't actually used this and I'm not in a position to knock up a quick test at the moment, but this might help. You should be able to define the physics boundaries for your sprite as a collection of shapes.
Imagine you have a single sprite with two separate components and a gap in the middle. Rather than defining a single physics shape around the entire thing, you might choose to define the physics shape in two parts so that it better conforms to the two parts of the image.
Therefore, the IList is your collection of shapes. This could quite likely just contain a single shape. The Vector2 array defines the points (at least three points in sprite space) for that individual shape. All of the Vector2 arrays combined form the overall physics shape.
EDIT: How to set the values? Something like this perhaps?
void Start()
{
    SpriteRenderer spriteRenderer = GetComponent<SpriteRenderer>();
    Sprite sprite = spriteRenderer.sprite;
    sprite.OverridePhysicsShape(new List<Vector2[]> {
        new Vector2[] { new Vector2(0, 0), new Vector2(1, 0), new Vector2(1, 1), new Vector2(0, 1) },
        new Vector2[] { new Vector2(2, 2), new Vector2(3, 2), new Vector2(3, 3), new Vector2(2, 3) },
    });
}
DISCLAIMER: Not sure if that actually works.
I'm sorry, but the official Unity answer is "Don't bother." Calls to Sprite.OverridePhysicsShape() only work when importing (or reimporting) images and cannot work at runtime (or even in the editor). Please see this link about it:
https://issuetracker.unity3d.com/issues/sprite-dot-overridephysicsshape-raises-an-error-when-using-it-in-editor-mode
The previous answer is correct that this is a list of Vector2[] arrays, but the coordinates of the Vector2 points must all be between [0,0] and [1,1] because the coordinates are parametric to the bounds of the Sprite.
Here is some code that should work. It doesn't throw an error now (in v2020.3.1), but it also doesn't seem to do anything.
using System.Collections.Generic;
using UnityEngine;

public class SpritePhysicsShapeTest : MonoBehaviour {
    public Sprite sprite;

    // Start is called before the first frame update
    void Start() {
        List<Vector2[]> pts = new List<Vector2[]>();
        pts.Add( new Vector2[] {
            new Vector2( 0, 0 ),
            new Vector2( 0.5f, 0 ),
            new Vector2( 0.5f, 0.5f ),
            new Vector2( 0, 0.5f )
        } );
        sprite.OverridePhysicsShape(pts);
    }
}
If you're trying to programmatically affect collision on a Unity Tilemap, my current solution is to have one Tilemap for visuals (with 256 possible Sprites) and a second invisible Tilemap that handles collisions (with only 16 possible Sprites). This seems to work well, though I did have to manually draw those 16 Sprites' collisions in the Sprite Editor.
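A minimal sketch of that two-Tilemap setup, assuming both Tilemaps already exist in the scene and the collision one carries a TilemapCollider2D (all of the names here are illustrative, not from my actual project):

using UnityEngine;
using UnityEngine.Tilemaps;

public class DualTilemapPainter : MonoBehaviour
{
    public Tilemap visualTilemap;     // the visible Tilemap (256 possible Sprites)
    public Tilemap collisionTilemap;  // the invisible Tilemap with a TilemapCollider2D (16 possible Sprites)
    public TileBase[] visualTiles;    // detailed visual variants
    public TileBase[] collisionTiles; // coarse collision shapes

    // Paint one cell: the detailed tile drives the visuals,
    // the coarse tile drives the physics shape.
    public void Paint(Vector3Int cell, int visualIndex, int collisionIndex)
    {
        visualTilemap.SetTile(cell, visualTiles[visualIndex]);
        collisionTilemap.SetTile(cell, collisionTiles[collisionIndex]);
    }
}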
In Unity3D, I created a sort of prediction path for a missile in 3D. At first I was just using a LineRenderer, until I quickly realized it was very limiting, so I switched to drawing my own meshes, and it's rather beautiful. I just have one problem: in the scene where I'm running the drawMesh function everything runs quickly, but as soon as I switch to another scene there's about a 2 second lag, and then everything resumes as normal. The LineRenderer didn't present any lag whatsoever, so I must have messed something up, because drawing your own meshes should be more efficient.
I suspect that when creating my meshes the old ones never get deleted, so when I change the scene it takes a while to clear them all away. I'd expect the garbage collector to take care of this, but I'm probably setting something up slightly wrong. The longer I stay in the scene, the longer the lag is, which is where my suspicion arises. This is just speculation, though.
If anyone could take a look at my code and suggest a fix I'd appreciate it greatly.
Here's the code. It's a little complicated, but not much can be done about that. I'll include the part of the code that I think is the culprit, because the same code with the LineRenderer worked. It runs about 30 times per frame.
void drawMesh(Vector3[] vertices, Material m)
{
    var mf = GetComponent<MeshFilter>();
    var mesh = new Mesh();
    mf.mesh = mesh;
    mesh.vertices = vertices;

    var tris = new int[6]
    {
        // lower left triangle
        0, 2, 1,
        // upper right triangle
        2, 3, 1
    };
    mesh.triangles = tris;

    var normals = new Vector3[4]
    {
        -Vector3.forward,
        -Vector3.forward,
        -Vector3.forward,
        -Vector3.forward
    };
    mesh.normals = normals;

    var uv = new Vector2[4]
    {
        new Vector2(0, 0),
        new Vector2(1, 0),
        new Vector2(0, 1),
        new Vector2(1, 1)
    };
    mesh.uv = uv;

    Graphics.DrawMesh(mesh, Vector3.zero, Quaternion.identity, m, 0, null, 0, null, false, false);
}
Thanks for any feedback!
Solved it. Unity's garbage collector won't clean up Mesh objects on its own, so I had to make sure I was manually deleting the meshes after they were created. I put them in a collection of a fixed length (however long you want them to exist for) and then deleted the old elements once the collection exceeded that length.
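A minimal sketch of that fix, assuming the meshes are created roughly as in drawMesh above (the class and field names are illustrative):

using System.Collections.Generic;
using UnityEngine;

public class MeshLifetimeManager : MonoBehaviour
{
    const int MaxLiveMeshes = 30;                     // roughly one frame's worth of meshes
    readonly Queue<Mesh> liveMeshes = new Queue<Mesh>();

    // Call this with every mesh that is handed to Graphics.DrawMesh().
    public void Track(Mesh mesh)
    {
        liveMeshes.Enqueue(mesh);
        if (liveMeshes.Count > MaxLiveMeshes)
            Destroy(liveMeshes.Dequeue());            // release the native mesh data
    }
}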
I am trying to make realistic-looking procedurally generated buildings. So far I have made a texture generation program for them. I would like to apply that texture to a simple cube in Unity; however, I don't want to apply the texture to all of the faces. Currently, when I apply the texture to the cube's material, it applies the same texture to all of the faces, and on some of the faces the texture is upside down. Do you recommend that I make plane objects and apply the textures to each of those (and form a cube that way)? I know this would work, but is it efficient at a large scale? Or is there a way to apply different textures to individual faces of a cube in C#?
Considering that you are trying to create buildings you should procedurally generate your own 'cube'/building mesh data.
For example:
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    public Vector3[] newVertices;
    public Vector2[] newUV;
    public int[] newTriangles;

    void Start() {
        Mesh mesh = new Mesh();
        mesh.vertices = newVertices;
        mesh.uv = newUV;
        mesh.triangles = newTriangles;
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
Then you populate the vertices and tris with data.
For example, you could create a small 1 by 1 cube around the origin with:
float size = 0.5f;
newVertices = new Vector3[]{
    new Vector3(-size, -size, -size),
    new Vector3(-size,  size, -size),
    new Vector3( size,  size, -size),
    new Vector3( size, -size, -size),
    new Vector3( size, -size,  size),
    new Vector3( size,  size,  size),
    new Vector3(-size,  size,  size),
    new Vector3(-size, -size,  size)
};
Then, because you only want to render the texture on one of the mesh's faces:
newUV = new Vector2[]{
    new Vector2(0, 0),
    new Vector2(0, 1),
    new Vector2(1, 1),
    new Vector2(1, 0),
    new Vector2(0, 0),
    new Vector2(0, 0),
    new Vector2(0, 0),
    new Vector2(0, 0)
};
Note that the UV array must contain exactly one entry per vertex; by giving meaningful UV coordinates only to the four vertices of one face (and collapsing the rest to (0,0)), this will effectively place your desired texture on one of the faces. Then, depending on how you allocate the tris, the rest of the UVs can be altered as you see fit to appear as though they are untextured.
Please note that I wrote this without an IDE, so there may be some syntax errors. Also, this isn't the most straightforward process, but I promise that it far exceeds using a series of quads as a building, because with this you can make a whole range of shapes 'easily'.
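For illustration, here is one possible allocation of the tris for just the textured front face (the first four vertices above); this is my own guess, not part of the original answer, and it assumes Unity's clockwise front-face winding:

// Two clockwise triangles covering the front face (vertices 0-3);
// the other five faces would be appended to this array the same way.
newTriangles = new int[]{
    0, 1, 2,   // bottom-left, top-left, top-right
    0, 2, 3    // bottom-left, top-right, bottom-right
};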
Resources:
http://docs.unity3d.com/ScriptReference/Mesh.html
https://msdn.microsoft.com/en-us/library/windows/desktop/bb205592%28v=vs.85%29.aspx
http://docs.unity3d.com/Manual/GeneratingMeshGeometryProcedurally.html
AFAIK Unity does not natively support this.
There are, however, some easy ways to work around it. Your example of using a plane is a good thought, but using a quad would make more sense. Or, even easier, create a multi-faced cube as a model and export it as, for example, a .fbx.
I am new to 3D graphics and also WPF, and I need to combine the two in my current project. I add points and normals to a MeshGeometry3D, add the MeshGeometry3D to a GeometryModel3D, then add the GeometryModel3D to a ModelVisual3D, and finally add the ModelVisual3D to the Viewport3D. Now, if I need to rotate, I perform the required Transform on either the GeometryModel3D or the ModelVisual3D and add it again to the Viewport3D. I'm running into a problem:
objViewPort3D.Children.Remove(objModelVisual3D);
objGeometryModel3D.Transform = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), angle += 15));
objModelVisual3D.Content = objGeometryModel3D;
objViewPort3D.Children.Add(objModelVisual3D);
To rotate it every time by 15 degrees, why must I do angle += 15 and not just 15? It seems that the stored model is not changed by the Transform operation; the transformation is applied only when the Viewport3D displays it. I want the transformation to actually change the coordinates in the stored MeshGeometry3D object, so that the next transform operates on the previously transformed model and not the original model. How do I obtain this behaviour?
I think you can use an animation.
Some pseudo-code:
angle = 0

function onClick:
    new_angle = angle + 30
    Animate(angle, new_angle)
    angle = new_angle
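A sketch of what that pseudo-code might look like in actual WPF, assuming the model's RotateTransform3D wraps an AxisAngleRotation3D (the class and field names here are illustrative):

using System;
using System.Windows;
using System.Windows.Media.Animation;
using System.Windows.Media.Media3D;

public partial class MainWindow : Window
{
    // Assumes this AxisAngleRotation3D is already applied to the model
    // via a RotateTransform3D somewhere in your setup code.
    private readonly AxisAngleRotation3D rotation =
        new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0);
    private double angle;

    private void OnClick(object sender, RoutedEventArgs e)
    {
        double newAngle = angle + 30;
        // Animate the Angle dependency property from the old to the new value.
        var anim = new DoubleAnimation(angle, newAngle,
            new Duration(TimeSpan.FromMilliseconds(300)));
        rotation.BeginAnimation(AxisAngleRotation3D.AngleProperty, anim);
        angle = newAngle;
    }
}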
You have to do angle += 15 because you're applying a new RotateTransform3D each time.
This might help:
public RotateTransform3D MyRotationTransform { get; set; }
...

//constructor
public MyClass()
{
    MyRotationTransform = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0));
}

//in your method
((AxisAngleRotation3D)MyRotationTransform.Rotation).Angle += 15;
objGeometryModel3D.Transform = MyRotationTransform;
Correct: the positions in the mesh are not changed by the Transform operation. Instead, the Transform property defines the world transform of the mesh during rendering.
In 3d graphics the world transform transforms the points of the mesh from object space to world space during the render of the object.
(Image from World, View and Projection Matrix Unveiled)
It's much faster to set the world transform and let the renderer draw the mesh with a single transform than to transform each vertex of the mesh yourself, as you're proposing.
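That said, if you really do want to bake the rotation into the stored geometry, a minimal sketch (reusing the object names from the question) is to transform each Position yourself:

// Apply the rotation permanently to the stored MeshGeometry3D, so the
// next rotation starts from the already-rotated coordinates.
var rotation = new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), 15));
var mesh = (MeshGeometry3D)objGeometryModel3D.Geometry;
for (int i = 0; i < mesh.Positions.Count; i++)
{
    mesh.Positions[i] = rotation.Transform(mesh.Positions[i]);
}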
I'm working on an RPG game that has a top-down view. I want to load a picture into the background, which is what the character walks on, but so far I haven't figured out how to correctly redraw the background so that it "scrolls". Most of the examples I find are auto-scrolling.
I want the camera to remain centered on the character until the background image reaches its boundaries; then the character should move without the image re-drawing in another position.
Your question is a bit unclear, but I think I get the gist of it. Let's look at your requirements.
You have an overhead camera that's looking directly down onto a two-dimensional plane. We can represent this as a simple {x, y} coordinate pair, corresponding to the point on the plane at which the camera is looking.
The camera can track the movement of some object, probably the player, but more generally anything within the game world.
The camera must remain within the finite bounds of the game world.
Which is simple enough to implement. In broad terms, somewhere inside your Update() method you need to carry out steps to fulfill each of those requirements:
if (cameraTarget != null)
{
    camera.Position = cameraTarget.Position;
    ClampCameraToWorldBounds();
}
In other words: if we have a target object, lock our position to its position; but make sure that we don't go out of bounds.
ClampCameraToWorldBounds() is also simple to implement. Assuming that you have some object, world, which contains a Bounds property that represents the world's extent in pixels:
private void ClampCameraToWorldBounds()
{
    var screenWidth = graphicsDevice.PresentationParameters.BackBufferWidth;
    var screenHeight = graphicsDevice.PresentationParameters.BackBufferHeight;

    var minimumX = (screenWidth / 2);
    var minimumY = (screenHeight / 2);
    var minimumPos = new Vector2(minimumX, minimumY);

    var maximumX = world.Bounds.Width - (screenWidth / 2);
    var maximumY = world.Bounds.Height - (screenHeight / 2);
    var maximumPos = new Vector2(maximumX, maximumY);

    camera.Position = Vector2.Clamp(camera.Position, minimumPos, maximumPos);
}
This makes sure that the camera is never closer than half of a screen to the edge of the world. Why half a screen? Because we've defined the camera's {x, y} as the point that the camera is looking at, which means that it should always be centered on the screen.
This should give you a camera with the behavior that you specified in your question. From here, it's just a matter of implementing your terrain renderer such that your background is drawn relative to the {x, y} coordinate specified by the camera object.
Given an object's position in game-world coordinates, we can translate that position into camera space:
var worldPosition = new Vector2(x, y);
var cameraSpace = camera.Position - worldPosition;
And then from camera space into screen space:
var screenSpaceX = (screenWidth / 2) - cameraSpace.X;
var screenSpaceY = (screenHeight / 2) - cameraSpace.Y;
You can then use an object's screen space coordinates to render it.
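Putting those two steps together as one helper (a sketch; camera, screenWidth, and screenHeight are assumed to exist as in the snippets above):

// Translate a game-world position into screen-space coordinates.
Vector2 WorldToScreen(Vector2 worldPosition)
{
    var cameraSpace = camera.Position - worldPosition;
    return new Vector2(
        (screenWidth / 2) - cameraSpace.X,
        (screenHeight / 2) - cameraSpace.Y);
}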
You can represent the position as a simple Vector2 and move it towards any entity.
public Vector2 cameraPosition;
When you load your level, you will need to set the camera position to your player (or whatever object it should be centered on).
You will need a matrix and some other things, as seen in the code below; it is explained in the comments. Doing it this way saves you from having to add cameraPosition to everything you draw.
//This will move our camera
ScrollCamera(spriteBatch.GraphicsDevice.Viewport);

//We now must get the center of the screen
Vector2 Origin = new Vector2(spriteBatch.GraphicsDevice.Viewport.Width / 2.0f, spriteBatch.GraphicsDevice.Viewport.Height / 2.0f);

//Now the matrix; it will hold the position, and rotation/zoom for advanced features
Matrix cameraTransform = Matrix.CreateTranslation(new Vector3(-cameraPosition, 0.0f)) *
                         Matrix.CreateTranslation(new Vector3(-Origin, 0.0f)) *
                         Matrix.CreateRotationZ(rot) *       //Add rotation
                         Matrix.CreateScale(zoom, zoom, 1) * //Add zoom
                         Matrix.CreateTranslation(new Vector3(Origin, 0.0f)); //Add origin

//Now we can start to draw with our camera, using the Matrix overload
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.Default,
    RasterizerState.CullCounterClockwise, null, cameraTransform);

DrawTiles(spriteBatch); //Or whatever method you have for drawing tiles

spriteBatch.End(); //End the camera spritebatch

//After this you can make another spritebatch without a camera to draw UI and things that will not move
I added the zoom and rotation in case you want anything fancy; just replace the variables.
That should get you started on it.
However, you will want to make sure the camera stays in bounds and that it follows the player.
I'll show you how to add smooth scrolling; if you want simple scrolling, see this sample.
private void ScrollCamera(Viewport viewport)
{
    //Add to the camera position, so we can see the origin
    cameraPosition.X = cameraPosition.X + (viewport.Width / 2);
    cameraPosition.Y = cameraPosition.Y + (viewport.Height / 2);

    //Smoothly move the camera towards the player
    cameraPosition.X = MathHelper.Lerp(cameraPosition.X, Player.Position.X, 0.1f);
    cameraPosition.Y = MathHelper.Lerp(cameraPosition.Y, Player.Position.Y, 0.1f);

    //Undo the origin, because it will be calculated with the matrix (I know this isn't the best way, but it's what I had real quick)
    cameraPosition.X = cameraPosition.X - (viewport.Width / 2);
    cameraPosition.Y = cameraPosition.Y - (viewport.Height / 2);

    //Shake the camera, use the mouse to scroll, or anything like that: add it here (e.g. earthquakes)

    //Round it, so it doesn't try to draw in between 2 pixels
    cameraPosition.Y = (float)Math.Round(cameraPosition.Y);
    cameraPosition.X = (float)Math.Round(cameraPosition.X);

    //Clamp it off, so it stops scrolling near the edges
    cameraPosition.X = MathHelper.Clamp(cameraPosition.X, 1f, Width * Tile.Width);
    cameraPosition.Y = MathHelper.Clamp(cameraPosition.Y, 1f, Height * Tile.Height);
}
Hope this helps!
Imagine a segmented creature such as a centipede. The player controls the head segment, and each body segment is attached to the previous segment by a point.
As the head moves (in the 8 cardinal/intercardinal directions for now), a point moves in relation to its rotation.
public static Vector2 RotatePoint(Vector2 pointToRotate, Vector2 centerOfRotation, float angleOfRotation)
{
    Matrix rotationMatrix = Matrix.CreateRotationZ(angleOfRotation);
    return Vector2.Transform(pointToRotate - centerOfRotation, rotationMatrix);
}
Was going to post a diagram here but you know...
[ASCII diagram lost to formatting: it showed the head sprite with center(1) and its anchor point(1), and the first body segment with center(2) and point(2), before and after a rotation.]
I have thought of using a rectangle property/field for the base sprite,
private Rectangle bounds = new Rectangle(-16, 16, 32, 32);
and checking that a predefined point within the body segment remains within the head sprite's bounds.
Though I am currently doing:
private static void handleInput(GameTime gameTime)
{
    Vector2 moveAngle = Vector2.Zero;
    moveAngle += handleKeyboardMovement(Keyboard.GetState()); // basic movement, combined to produce 8 angles of movement

    if (moveAngle != Vector2.Zero)
    {
        moveAngle.Normalize();
        baseAngle = moveAngle;
    }

    BaseSprite.RotateTo(baseAngle);
    BaseSprite.LeftAnchor = RotatePoint(BaseSprite.LeftAnchor,
        BaseSprite.RelativeCenter, BaseSprite.Rotation); // call RotatePoint method
    BaseSprite.LeftRect = new Rectangle((int)BaseSprite.LeftAnchor.X - 1,
        (int)BaseSprite.LeftAnchor.Y - 1, 2, 2);

    // All segments use a field/property that is a point which is supposed to rotate around the center
    // point of the sprite (the left point is (-16,0), the right is (16,0) initially).
    // I then create a rectangle derived from that point to make use of the .Intersects method of the
    // Rectangle class.
    BodySegmentOne.RightRect = BaseSprite.LeftRect; // make sure segments are connected?

    BaseSprite.Velocity = moveAngle * wormSpeed;
    //BodySegmentOne.RightAnchor = BaseSprite.LeftAnchor;

    if (BodySegmentOne.RightRect.Intersects(BaseSprite.LeftRect)) // as long as the two rects occupy the
    {                                                             // same space, move segment with head
        BodySegmentOne.Velocity = BaseSprite.Velocity;
    }
}
As it stands now, the segment moves with the head, but in a parallel fashion. I would like to get a more nuanced movement of the segment as it is being dragged by the head.
I understand that the coding of such movement will be much more involved than what I have here. Some hints or directions as to how I should look at this problem would be greatly appreciated.
I will describe what you need to do using a physics engine like Farseer, but the same holds if you want to write your own physics engine.
Create a Body for each articulated point of the centipede body.
Create a Shape that encapsulates the outer shell that will be attached to each point.
Attach the Body and Shape using a Fixture. This creates one link in your centipede.
Attach multiple links using a SliderJoint.
For example - assuming that the outer shell of each link is a circle, here's how to create two links and join them together.
Fixture fix1 = FixtureFactory.CreateCircle(_world, 0.5f, 1, new Vector2(-5, 5));
fix1.Body.BodyType = BodyType.Dynamic;
Fixture fix2 = FixtureFactory.CreateCircle(_world, 0.5f, 1, new Vector2(5, 5));
fix2.Body.BodyType = BodyType.Dynamic;
JointFactory.CreateSliderJoint(_world, fix1.Body, fix2.Body, Vector2.Zero, Vector2.Zero, 10, 15);
Now applying a force on any of the bodies or collisions on the shapes will drag the second joint around - just like you want.
This is all just stick physics - so you could implement your own if you REALLY wanted to. ;)
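If you do go the roll-your-own route, a minimal sketch of a single "stick" constraint might look like this (FollowSegment is my illustrative name, not Farseer API): each segment trails the point it is attached to at a fixed distance, which produces the dragging, bending motion rather than the parallel motion you're seeing.

// Keep a segment at most stickLength away from the point it trails.
// When the leader moves, the segment is pulled along the line between
// them instead of copying the leader's velocity.
static Vector2 FollowSegment(Vector2 segment, Vector2 leader, float stickLength)
{
    Vector2 toLeader = leader - segment;
    float distance = toLeader.Length();
    if (distance <= stickLength)
        return segment;                          // slack: the segment stays put
    toLeader.Normalize();
    return leader - toLeader * stickLength;      // taut: dragged to exactly stickLength away
}

Calling this every update for each segment, with the previous segment as its leader, propagates the motion down the whole body.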