I am working on a smooth terrain generation algorithm in C# and using XNA to display the data.
I am making it so that each iteration creates a new point halfway between each pair of points, at a random height between the two. This works OK, and the current result is a set of randomly placed points.
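For reference, a minimal sketch of that midpoint step (simplified; the names here are placeholders rather than my actual code):

List<Vector2> Subdivide(List<Vector2> points, Random rng)
{
    var result = new List<Vector2>();
    for (int i = 0; i < points.Count - 1; i++)
    {
        Vector2 a = points[i];
        Vector2 b = points[i + 1];
        result.Add(a);

        // New point halfway along x, at a random height between the two neighbours.
        float midX = (a.X + b.X) / 2f;
        float minY = Math.Min(a.Y, b.Y);
        float maxY = Math.Max(a.Y, b.Y);
        result.Add(new Vector2(midX, minY + (float)rng.NextDouble() * (maxY - minY)));
    }
    result.Add(points[points.Count - 1]);
    return result;
}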
Now what I want to do is turn these points into a primitive (I think that is what it is) and display it like a mountain, obviously using a mountain texture. Example below (using different point data, made up in paint)
Any help or tips are greatly appreciated, and I look forward to your responses.
Thanks.
Twitchy
You can draw a triangle strip by alternating between each point in your primitive and a point at the bottom of the screen with the same x coordinate, stepping along the terrain from left to right.
I am not familiar with drawing primitives in XNA (just OpenGL), but it should be similar.
You take your points, e.g. A, B, C and D.
To draw the strip, you would order your vertices as:
vertex1 = A
vertex2 = point(A.x, 0)
vertex3 = B
vertex4 = point(B.x, 0)
vertex5 = C
vertex6 = point(C.x, 0)
vertex7 = D
vertex8 = point(D.x, 0)
(I assume the bottom of the screen has a y coordinate of 0; it can be the screen height or whatever y you choose.)
http://en.wikipedia.org/wiki/Triangle_strip
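A rough, untested sketch of what that could look like in XNA 4.0, assuming a BasicEffect already set up with the mountain texture (points, textureWidth, and textureHeight are placeholders, not names from the question):

VertexPositionTexture[] verts = new VertexPositionTexture[points.Length * 2];
for (int i = 0; i < points.Length; i++)
{
    Vector2 p = points[i];
    // Top of the strip: the terrain point itself, with UVs tiled from its position.
    verts[i * 2] = new VertexPositionTexture(
        new Vector3(p.X, p.Y, 0f),
        new Vector2(p.X / textureWidth, p.Y / textureHeight));
    // Bottom of the strip: same x coordinate, at the bottom of the screen (y = 0).
    verts[i * 2 + 1] = new VertexPositionTexture(
        new Vector3(p.X, 0f, 0f),
        new Vector2(p.X / textureWidth, 0f));
}

foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    // A strip of n vertices draws n - 2 triangles.
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, verts, 0, verts.Length - 2);
}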
I don't really like to post questions about problems without doing the research, but I'm close to giving up, so I thought I'd give it a shot and ask you about my problem.
I want to create custom collision detection in Unity (so please don't advise "use Rigidbody and/or Colliders"; I'm avoiding them on purpose).
The main idea: I want to detect basic sphere and basic box collisions. I already found an AABB vs. sphere thread with the following solution:
bool intersect(sphere, box) {
    // Clamp the sphere's center to the box: this is the closest point on the box.
    var x = Math.max(box.minX, Math.min(sphere.x, box.maxX));
    var y = Math.max(box.minY, Math.min(sphere.y, box.maxY));
    var z = Math.max(box.minZ, Math.min(sphere.z, box.maxZ));

    // Distance from that closest point to the sphere's center.
    var distance = Math.sqrt((x - sphere.x) * (x - sphere.x) +
                             (y - sphere.y) * (y - sphere.y) +
                             (z - sphere.z) * (z - sphere.z));

    return distance < sphere.radius;
}
And this code does the job: with the box's bounds and the sphere's center point and radius, I can detect the sphere colliding with the box.
The problem is, I want to rotate the cube at runtime, and that screws up everything: the bounds drift away from the cube and the collision is gone (or triggers in random places). I've read comments saying that bounding boxes don't work with rotation, but I'm not sure what else I can use to solve this problem.
Can you help me with this topic, please? I'll take any advice I can get (except Colliders & Rigidbodies, of course).
Thank you very much.
You might try using the separating axis theorem. Essentially, for a polyhedron, you use the normal of each face to create an axis. Project the two shapes you are comparing onto each axis and look for overlap. If the projections fail to overlap on even one axis, the shapes do not intersect. For a sphere, you will just need to project onto the polyhedron's axes (strictly, you also want the axis from the sphere's center to the closest point of the polyhedron, to catch edge and corner cases). There is a great 2D intro to this from metanet.
Edit: hey, check it out-- a Unity implementation.
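To make the projection step concrete, here is a rough, untested sketch of the overlap test along a single axis for a rotated box versus a sphere, with the box described by its center, three orthonormal axis directions, and half-extents (all parameter names are placeholders):

// Returns true if the box and sphere overlap when projected onto 'axis'
// (the axis is assumed to be normalized).
bool OverlapsOnAxis(Vector3 axis, Vector3 boxCenter, Vector3[] boxAxes,
                    Vector3 halfExtents, Vector3 sphereCenter, float sphereRadius)
{
    // The box projects to an interval whose half-length is the sum of its
    // projected extents along the axis.
    float boxRadius =
        Mathf.Abs(Vector3.Dot(axis, boxAxes[0])) * halfExtents.x +
        Mathf.Abs(Vector3.Dot(axis, boxAxes[1])) * halfExtents.y +
        Mathf.Abs(Vector3.Dot(axis, boxAxes[2])) * halfExtents.z;

    // The sphere projects to an interval of half-length sphereRadius.
    float centerGap = Mathf.Abs(Vector3.Dot(axis, sphereCenter - boxCenter));

    // The projections overlap if the gap between the projected centers is
    // within the summed half-lengths.
    return centerGap <= boxRadius + sphereRadius;
}

Run this for the box's three face normals plus the axis from the sphere's center to the closest point on the box; a single failing axis proves there is no collision, and if every axis overlaps, the shapes intersect.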
A good method to find if an AABB (axis aligned bounding box) and sphere are intersecting is to find the closest point on the box to the sphere's center and determine if that point is within the sphere's radius. If so, then they are intersecting, if not then not.
I believe you can do the same thing with this more complicated scenario. You can represent a rotated AABB with a geometrical shape called a parallelepiped. You would then find the closest point on the parallelepiped to the center of the sphere and again check if that point exists within the sphere's radius. If so, then they intersect. If not, then not.
The difficult part is finding the closest point on the parallelepiped. You can represent a parallelepiped in code with 4 3d vectors: center, extentRight, extentUp, and extentForward. This is similar to how you can represent an AABB with a 3d vector for center along with 3 floats: extentRight, extentUp, and extentForward. The difference is that for the parallelepiped those 3 extents are not 1 dimensional scalars, but are full vectors.
When finding the closest point on an AABB surface to a given point, you are basically taking that given point and clamping it to the AABB's volume. You would, for example, call Math.Clamp(point.x, AABB.Min.x, AABB.Max.x) and so on for Y and Z.
The resulting X,Y,Z would be the closest point on the AABB surface to the given point.
To do this for a parallelepiped you need to solve the "linear combination" (math keyword) of extentRight(ER), extentUp(EU), and extentForward(EF) to get the given point. In other words, what scalars do you have to multiply ER, EU, and EF by to get to the given point? When you find those scalars you need to clamp them between 0 and 1 and then multiply them again by ER, EU, and EF respectively to get that closest point on the surface of the parallelepiped. Be sure to offset the given point by the Parallelepiped's min position so that the whole calculation is done in its local space.
I didn't want to spend any extra time learning how to solve for a linear combination (it seems it involves things like using an "augmented matrix" and "Gaussian elimination"), otherwise I'd include that here too. Hopefully this gets you, or anyone else reading this, off on the right track.
Edit:
Actually, I think it's a lot simpler and you don't need a parallelepiped. If you have access to the rotation (Vector3 or Quaternion) that rotated the cube, you can take its inverse and use that inverse rotation to orbit the sphere around the cube, so that the new scenario is just the normal axis-aligned cube and the orbited sphere. Then you can do a normal AABB vs. sphere collision detection.
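A rough sketch of that idea in Unity, assuming you know the cube's center, rotation, and half-extents (the parameter names are mine, not from the question):

bool Intersect(Vector3 boxCenter, Quaternion boxRotation, Vector3 halfExtents,
               Vector3 sphereCenter, float sphereRadius)
{
    // Rotate the sphere's center into the box's local space; there the box is
    // axis aligned again, so the earlier AABB vs. sphere test applies unchanged.
    Vector3 localCenter = Quaternion.Inverse(boxRotation) * (sphereCenter - boxCenter);

    // Clamp the center to the box to find the closest point, exactly as before.
    Vector3 closest = new Vector3(
        Mathf.Clamp(localCenter.x, -halfExtents.x, halfExtents.x),
        Mathf.Clamp(localCenter.y, -halfExtents.y, halfExtents.y),
        Mathf.Clamp(localCenter.z, -halfExtents.z, halfExtents.z));

    // Within the sphere's radius means the shapes intersect.
    return (closest - localCenter).sqrMagnitude < sphereRadius * sphereRadius;
}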
So I have a seamless texture. The black lines in the picture represent the repeating texture. Then I have a series of blocks that have known x,y coordinates and known height and width. In Unity, textures have a 'scale' and 'offset'.
Scale is the amount of the texture that will show on the block. So the block starting at 0,0 will have a scale of about (.2, 1.3). Its width is .2x the width of a single texture (for the sake of simple numbers) and its height is about 1.3x the height of the texture.
Offset then moves the 'starting point' of the texture. The block at 0,0 will have an offset of 0,0 because it is perfectly aligned with a corner, but the block immediately to its right would have an offset of about (.2, 0) because the texture needs to start about .2 units in to align properly. This is as far as I understand offset. I am pretty sure this is correct, but feel free to correct me if I am wrong.
Now when you apply a texture to a block, Unity automatically scales the texture to start at the top left corner of the block and stretches it to fit 1 full iteration inside that space. I obviously don't want this.
My question comes in for the three blocks labeled with the (x,y) coordinates. I have tried for several hours over a few weeks to get it right, unsuccessfully.
So how do I take in the x,y position and width/height to create a correct scale and offset so that those blocks will look like they are exactly where they are supposed to be in the texture?
It is not a particularly difficult concept but after staring at it I have no more ideas.
For the sake of the question assume a single texture is 12x12. The x,y and width/height are known values but are arbitrary.
I know it's normally good practice to post attempted code but I would rather see a good way of doing it than see answers that try to fix my failed attempts. But I will post code if people want to see that I did try on my own or how I initially tried.
What is a UV Map
Textures are applied to models by what is known as a UV map. The idea is that each (x,y,z) vertex has two (u,v) coordinates assigned. UV coordinates define which point on the texture should correspond to that vertex. If you want to know more, I suggest the awesome (and free) Udacity 3D graphics course. Unit 8 talks about UV mapping.
How to solve your problem using UV Maps
Let's ignore all the vertices that are not visible - your blocks are basically rectangles. You can assign a UV mapping where the world position of each vertex is turned into its UV coordinate. This way, all the blocks will have the same origin point in texture space (0,0 in world position corresponds to 0,0 on the texture). The code looks like this:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vertices = mesh.vertices;
Vector2[] uvs = new Vector2[vertices.Length];
for (int i = 0; i < uvs.Length; i++)
{
    // Find the world position of each vertex
    Vector3 vertexPos = transform.TransformPoint(vertices[i]);
    // and assign its x,y coordinates to u,v.
    uvs[i] = new Vector2(vertexPos.x, vertexPos.y);
}
mesh.uv = uvs;
You have to do this each time your block position changes.
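If you'd rather set the scale and offset directly, as the question describes, the same idea applies: with a seamless texture, scale is the block's size divided by the texture's size, and offset is the block's position divided by the texture's size. A rough sketch, assuming the 12x12 texture and a block at (x, y) with known width and height (all placeholder names):

Renderer rend = GetComponent<Renderer>();
const float textureSize = 12f;

// Scale: how many texture repeats fit across the block.
rend.material.mainTextureScale = new Vector2(width / textureSize, height / textureSize);

// Offset: how far into the texture the block's corner starts.
rend.material.mainTextureOffset = new Vector2(x / textureSize, y / textureSize);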
I am messing about in XNA and have run into a problem. I have a 48x48 sprite, and I keep track of its location in the game world by the top left corner of the sprite.
I want to be able to rotate the square and still keep track of the same point. For instance, if I rotate 90 degrees clockwise and the original X position was 200, the new X position should be 200 + 48 (the width of the image). It's fine for 90 degrees, which I can work out in my head, but every angle in between is the problem!
I know there is probably some kind of formula to work this out.
Any help would be great! Oh the square is rotating on its center.
I'm just using spriteBatch.Draw()
spriteBatch.Draw(animations[currentAnimation].Texture,
                 Camera.WorldToScreen(WorldRectangle),
                 animations[currentAnimation].FrameRectangle,
                 color, rotationScale,
                 new Vector2((float)frameHeight / 2, (float)frameWidth / 2),
                 effect, TileMap.characterDepth);
If you have to keep track of a moving, rotating sprite, you can't use the top left corner; use its centroid instead. You already draw your sprite using the centroid as the origin to rotate it.
The problem is that the second parameter of your Draw call is a Rectangle; you should use a Vector2 position instead.
You're building your application on top of a 3D graphics library. 3D graphics libraries are very good at solving this kind of problem! Break it down into smaller operations and let the library do the work for you.
First: it's easiest to think about these kinds of questions when you're working in model space rather than world space. In other words: you don't need to worry about where the rotating point is in absolute terms, you only need to worry about where it is relative to the untransformed model (in this case, your sprite without any rotation or translation).
So where is that? Simple:
var pt = new Vector3(-frameWidth / 2f, -frameHeight / 2f, 0f);
Your point of origin is the center of your sprite, so the center of your sprite in model space is (0, 0). This means that the top left corner of your sprite is half the width of the sprite in the negative x direction, and half the height of the sprite along the negative y direction.
Now create an object that represents the desired transformation. You can do this by creating a rotation matrix using XNA's built-in methods:
var transformation = Matrix.CreateRotationZ(MathHelper.ToRadians(90f));
Now apply the transformation to your original point:
var transformedPt = Vector3.Transform(pt, transformation);
This is still in model space, remember, so to get world coordinates you'll need to transform it into world space:
var transformedWorldX = transformedPt.X + spritePosition.X;
var transformedWorldY = transformedPt.Y + spritePosition.Y;
And there you go.
I have been trying to wrap my head around how my linear and vector algebra knowledge fits in with computer graphics, particularly in the language C#.
The knowledge I mean is:
Points
Vectors
Matrices
Matrix multiplication - Rotations, Skews, etc.
Here's my goal: create a simple box, and apply a rotation, translation, and skew to it via matrix multiplication. Afterwards, start messing around with the camera. I wish to do this all myself, only using the functions that actually take in the data and draw it. I wish to create all the logical stuff in between.
Here's what I've got so far:
My custom Vector3 class, which holds
-an X, Y, and Z variable (floats)
-Several static matrices (as 2x2 2d float arrays?) that hold ZERO and TRANSLATION matrices (for 2x2 and 3x3)
-Methods
1. Rotate(float inAngle) - Creates a rotation matrix and multiplies the xyz by it.
2. Translate(inx,iny,inz) - Adds the ins to the member variables
3. etc...
When complete, I translate the vector back into a C# Vector3 class and pass it to a drawing class, such as DrawPrimitiveShapes, which would draw lines.
The box class is like this:
4 Vector3's, UpperLeftX, UpperRightX, LowerLeftX, LowerRightX
a Draw class which uses the 4 points to then render lines to each one
My confusion comes at this:
How do I rotate this box? Am I on the right track by using 4 vector3's for the box?
Do I just rotate all four vector3's by the angle and be done with it? How does a texture get rotated if it's got all this texture data in the middle?
The way I learned is by using the upper-level built-in XNA methods and using 'Reflector' to see inside those methods and how they work.
To rotate the box, each of the four vertices needs to be rotated a number of degrees about a particular axis.
In XNA 2D the axis is always the Z axis, and that axis always runs through the world's origin, the top left corner of the screen in XNA.
So to rotate your four rectangle vertices in xna, you would do something like this:
for (int i = 0; i < vertices.Length; i++)
{
    // Rotate each vertex about the world origin (0, 0). Note: a foreach won't
    // work here, since you can't assign to the iteration variable in C#.
    vertices[i] = Vector2.Transform(vertices[i], Matrix.CreateRotationZ(someRadians));
}
This gets the vertices to rotate (orbit) around the top left corner of the screen.
In order to have the box rotate in place, you would first move the box to the top left corner of the screen, rotate it a bit, then move it back. All this happens in a single frame, so all the user sees is the rectangle rotating in place. There are many ways to do that in code, but here is my favorite:
// assumes you know the center of the rectangle as a Vector2 'center'
for (int i = 0; i < vertices.Length; i++)
{
    // Translate to the origin, rotate, then translate back.
    vertices[i] = Vector2.Transform(vertices[i] - center,
                                    Matrix.CreateRotationZ(someRadians)) + center;
}
Now if you were to reflect the "Matrix.CreateRotationZ" method, or the "Vector2.Transform" method, you would see the lines of code MS used to make them work. By working through them, you can learn the math behind them more efficiently, without so much trial and error.
Is there any way to generate a Curve class and then draw that curve in 2D on the screen in XNA?
I want to basically randomly generate some terrain using the Curve and then draw it. I'm hoping that I can then use that curve to detect collision with the ground.
It sounds like what you want is the 2D equivalent of a height-map. I'd avoid making a true "curve" and simply approximate one with line segments.
So basically you'll have an array or list of numbers that represent the height of your terrain at a series of evenly spaced (horizontally) points. When you need a height between two points, you simply linearly interpolate between them.
To generate it - you could set a few points randomly, and then do some form of smooth interpolation to set the rest. (It really depends on what kind of curve you want.)
To render it you could then just use a triangle strip. Each point in your height-map will have two vertices associated with it - one at the bottom of the screen, the other at the height of that point in the height-map.
To do collision detection - the easiest way is to have your objects be a single point (it sounds like you're making an artillery game like Scorched Earth). Simply take the X position of your object and get the Y position of your terrain at that X position; if the Y position of your object is below the terrain, set it so that it sits on the terrain's surface.
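A rough sketch of that lookup and collision check in XNA (untested; heights, spacing, and objectPosition are placeholders, and y is assumed to increase upward):

float TerrainHeightAt(float x, float[] heights, float spacing)
{
    // Index of the height sample to the left of x, clamped to the valid range.
    int i = (int)(x / spacing);
    i = Math.Max(0, Math.Min(i, heights.Length - 2));

    // Linearly interpolate between the two surrounding samples.
    float t = (x - i * spacing) / spacing;
    return MathHelper.Lerp(heights[i], heights[i + 1], t);
}

// Collision: if the object has dropped below the terrain, snap it to the surface.
float ground = TerrainHeightAt(objectPosition.X, heights, spacing);
if (objectPosition.Y < ground)
    objectPosition.Y = ground;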
That's the rough guide, anyway :)