AlphaTestEffect.Projection property - c#

I'm looking over this tutorial on mixing different textures based on the types of pixels I want to pass:
http://www.crappycoding.com/tag/xna/page/2/
and so far I think I understand the whole concept, except for a couple of lines in creating the AlphaTestEffect object, as very little explanation is given for them and I have no clue what they are there for and why they're set up like that.
Matrix projection = Matrix.CreateOrthographicOffCenter(0, PlanetDataSize, PlanetDataSize, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
alphaTestEffect.Projection = halfPixelOffset * projection;
Could somebody please explain these necessities - what they do and what they are for? I hope it won't take too much time, and that my question is not a silly one.
cheers
Lucas

Because he is using a custom effect instead of the default SpriteBatch one, he has to make sure the projection works the same way as the default (or rather, he's making it the same so everything plays nicely together).
http://blogs.msdn.com/b/shawnhar/archive/2010/04/05/spritebatch-and-custom-shaders-in-xna-game-studio-4-0.aspx
It's explained there if you scroll down a bit:
" This code configures BasicEffect to replicate the default SpriteBatch coordinate system:"
The default SpriteBatch camera is a simple orthographic projection with a half-pixel offset so texel centers line up with pixel centers, which displays 2D textures more crisply. The offset is explained here:
http://drilian.com/2008/11/25/understanding-half-pixel-and-half-texel-offsets/
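For reference, here is the setup from Shawn Hargreaves' post, a minimal sketch assuming an XNA 4.0 effect with a Projection property (both BasicEffect and AlphaTestEffect have one); the tutorial simply substitutes PlanetDataSize for the viewport dimensions:

Viewport viewport = GraphicsDevice.Viewport;
// Orthographic camera: x grows right, y grows down, origin at the top-left corner,
// exactly like SpriteBatch's default coordinate system.
Matrix projection = Matrix.CreateOrthographicOffCenter(0, viewport.Width, viewport.Height, 0, 0, 1);
// Direct3D 9 samples texels half a pixel off from where pixels land on screen;
// shifting everything by -0.5 lines texel centers up with pixel centers.
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
effect.Projection = halfPixelOffset * projection;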


Procedural Planet Spherical Mesh Deformation C# Unity 5

I've been following many of the questions people post and all the answers you guys give, and I've followed several tutorials, and since all the links in my Google search are marked as "already visited" I've decided to put my pride aside and post a question for you.
This is my first post, so I don't know if I'm doing this right; sorry if not. Anyway, the problem is this:
I'm working on a C# planetary exploration game in Unity 5. I've already built a sphere out of an octahedron following some tutorials mentioned here, and I could also build the Perlin textures and heightmaps with them. The problem comes in applying them to the sphere to produce the terrain on the mesh. I know I have to map the vertices and UVs of the sphere to do that, but I really suck at the math and I couldn't find any step-by-step guide to follow. I've heard about tessellation shaders, LOD, Voronoi noise, and Perlin noise, and got lost in the process. To simplify:
What I have:
I have the spherical mesh
I have the heightmaps
I’ve assigned them to a material along with the proper normal maps
What I think I need assistance with (since, honestly, I don't know if this is the correct path anymore):
the code to produce the spherical mesh deformation based on the heightmaps
How to use those Tessellation LOD based shaders and such to make a real size procedural planet
Thank you very much for your attention, and sorry if I was rude or asked for too much, but any kind of help you could provide would be a tremendous help for me.
I don't really think I have the ability to give you code-specific information, but here are a few additions/suggestions for your checklist.
If you want to set up an LOD mesh and texture system, the only way I know how to do it is the bare-bones approach: physically create lower-poly versions of your mesh and texture, then write a Unity script with an array of distances, and once the player passes a certain distance to or from the object, switch to the mesh and texture appropriate for that distance. I presume you could do the same by writing a shader, but the basic idea remains the same. Here's some pseudocode as an example (I don't know the Unity library too well, so treat this as a rough sketch):
// Rough Unity C# version of the idea. The arrays are hypothetical fields
// you would fill in the Inspector; index 0 = highest resolution.
float[] distances = { 10f, 100f, 1000f };
public Mesh[] meshes;
public Texture[] textures;
public Transform player;

void Update()
{
    float playerDistance = Vector3.Distance(player.position, transform.position);
    // Switch to the first LOD whose distance threshold the player is within.
    for (int i = 0; i < distances.Length; i++)
    {
        if (playerDistance < distances[i])
        {
            GetComponent<MeshFilter>().mesh = meshes[i];
            GetComponent<Renderer>().material.mainTexture = textures[i];
            break;
        }
    }
}
If you want real tessellation-based LOD, which is more difficult to code, here are some links:
https://developer.nvidia.com/content/opengl-sdk-simple-tessellation-shader
http://docs.unity3d.com/Manual/SL-SurfaceShaderTessellation.html
Essentially the same concept applies, but instead of physically swapping the mesh and texture you are using, you change them procedurally inside your shader code, using the same idea of set distances at which you change the resolution of the mesh and texture.
As for your issue with spherical mesh deformation: you can do it in 3D editing software like 3ds Max, Maya, or Blender by importing your mesh, applying a mesh-deform modifier, using your texture as the deform texture, and then altering the level of deformation to your liking. But if you want to do something more procedural in real time, you are going to have to alter the vertices of your mesh directly through its vertex arrays and then recompute the normals, or something like that (see the sketch after these links). Sorry I'm less helpful on this topic, as I am less knowledgeable about it. Here are the links I could find related to your problem:
http://answers.unity3d.com/questions/274269/mesh-deformation.html
http://forum.unity3d.com/threads/deform-ground-mesh-terrain-with-dynamically-modified-displacement-map.284612/
http://blog.almostlogical.com/2010/06/10/real-time-terrain-deformation-in-unity3d/
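To give the vertex-array approach some shape, here is a minimal, hypothetical Unity sketch: it assumes your heightmap is a readable Texture2D, the mesh is a sphere centered on the GameObject, and simple equirectangular UVs are good enough for sampling. None of these names come from your project:

using UnityEngine;

public class HeightmapDisplace : MonoBehaviour
{
    public Texture2D heightmap;   // needs Read/Write enabled in its import settings
    public float heightScale = 0.1f;

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] vertices = mesh.vertices;
        for (int i = 0; i < vertices.Length; i++)
        {
            // Direction from the sphere's center through this vertex.
            Vector3 dir = vertices[i].normalized;
            // Equirectangular mapping of that direction into 0..1 UV space.
            float u = 0.5f + Mathf.Atan2(dir.z, dir.x) / (2f * Mathf.PI);
            float v = 0.5f - Mathf.Asin(dir.y) / Mathf.PI;
            // Sample the heightmap and push the vertex outward along its direction.
            float h = heightmap.GetPixelBilinear(u, v).grayscale;
            vertices[i] = dir * (1f + h * heightScale);
        }
        mesh.vertices = vertices;
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}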
Anyways, good luck and please let me know if I have been unclear about something or you need more explanation.

a texture that repeats across the world, based on X, Y coordinates

I'm using XNA/MonoGame to draw some 2D polygons for me. I'd like a Texture I have to repeat on multiple polygons, based on their X and Y coordinates.
Here's an example of what I mean:
I had thought that doing something like this would work (assuming a 256x256-pixel texture):
verticies[0].TextureCoordinate = new Vector2(blockX / 256f, (blockY + blockHeight) / 256f);
verticies[1].TextureCoordinate = new Vector2(blockX / 256f, blockY / 256f);
verticies[2].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, (blockY + blockHeight) / 256f);
verticies[3].TextureCoordinate = new Vector2((blockX + blockWidth) / 256f, blockY / 256f);
// each block is drawn with a TriangleStrip, hence the odd ordering of coordinates.
// the blocks I'm drawing are not on a fixed grid; their coordinates and dimensions are in pixels.
but the blocks end up "textured" with long horizontal lines that look like the texture has been extremely stretched.
(To check whether the problem had to do with TriangleStrips, I tried removing the last vertex and drawing a TriangleList of one triangle - this had the same result on the texture, and the expected result of drawing only one half of my blocks.)
What's the correct way to achieve this effect?
My math was correct, but it seems that other code was wrong, and I was missing at least one important thing.
Maybe-helpful hints for other people trying to achieve this effect and running into trouble:
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
^ You need that code, so coordinates outside 0-1 wrap the texture instead of clamping. But importantly, your SamplerState and other settings will get reset when you draw sprites (SpriteBatch's Begin()), so especially if you're abstracting your polygon-rendering code into little helper functions, be mindful of when and where you call them! Ex:
spriteBatch.Begin();
// drawing sprites
MyFilledPolygonDrawer(args);
// drawing sprites
spriteBatch.End();
If you do this (assuming MyFilledPolygonDrawer uses 3D methods), you'll need to change all the settings (such as SamplerState) before you draw in 3D, and possibly change them back after (depending on what settings you use for 2D rendering). All of this comes with a little overhead, and it makes your code more fragile - you're more likely to screw up :P
One way to avoid this is to draw all your 3D stuff and all your 2D stuff separately (all of one, then all of the other).
(In my case, I haven't got my code completely separated out in this way, but I was able to at least reduce some switching between 2D and 3D by using 2D methods to draw solid-color rectangles - see Draw Rectangle in XNA using SpriteBatch - and 3D methods only for less-regular and/or textured shapes.)
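A minimal sketch of that separation, assuming XNA 4.0's state objects (MyFilledPolygonDrawer and args are the placeholder names from the example above):

spriteBatch.Begin();
// ... draw sprites ...
spriteBatch.End(); // flush every queued sprite before touching device state

// SpriteBatch.Begin() changed these, so set them back before 3D drawing.
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap; // wrap so coordinates beyond 0-1 repeat
MyFilledPolygonDrawer(args);

spriteBatch.Begin(); // sets its own states for 2D again
// ... draw more sprites ...
spriteBatch.End();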

Clamping TextureAddressMode in XNA

I've been working on implementing a 2D lighting system in XNA, and I've gotten the system to work - as long as my window's dimensions are powers of two. Otherwise, the program will fail at this line:
GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, Vertices, 0, 2);
The exception states that "XNA Framework Reach profile requires TextureAddressMode to be Clamp when using texture sizes that are not powers of two," and every attempt I've made to solve this problem has failed. The most common solution I've found on the internet is to put the line GraphicsDevice.SamplerStates[0] = SamplerState.LinearClamp; directly above the line above, but that hasn't solved my problem.
I apologize if I've left out any information that could be necessary to solve this; I'll be more than happy to provide more as needed.
Isn't this the same question you asked before?
In your HLSL, look for the line that declares the sampler the pixel shader is using.
You can set the address mode to Clamp in that declaration.
SamplerState somethingLikeThis {
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Clamp;
    AddressV = Clamp;
};
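If the shader side checks out, one more thing worth ruling out from the C# side (an assumption, since I can't see the rest of your code): something like a SpriteBatch.Begin() may be resetting the sampler state between your assignment and the draw call, so assign Clamp to every sampler slot your effect actually uses, immediately before drawing:

// Assumes slot 0 is the only sampler the effect uses; repeat for other slots if not.
GraphicsDevice.SamplerStates[0] = SamplerState.LinearClamp;
GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, Vertices, 0, 2);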

Collision detecting custom sketched shape, represented as list of points

I have a set of points, drawn by the user. They will be drawing around some objects.
I need to somehow turn this set of points into a shape, so I can find the area to detect collisions.
An image will clarify:
Set of points represented as shape http://www.imagechicken.com/uploads/1277188630025178800.jpg
The best idea I have had so far involves iterating over every pixel and determining whether it is 'inside' or 'outside' the shape, but that would be horribly slow, and I'm not even sure how to do the 'inside'/'outside' determination...
Any hints? I am using .NET (C# and XNA) if that helps you help me!
You can think of your shape as a union of several shapes, each of which is a simple closed polygon.
Then check, for every object, whether it is inside any of the polygons in the following manner:
All dots are connected by lines, and each line has an equation defining it.
For every object, build the equation of a line passing through that object.
Now, for each object's line, count how many of the lines between the dots it intersects - but count only the intersection points that lie in the range between the two dots (not on the rest of the line outside them), and only the intersection points on one side of the object (pick a side; it doesn't matter which).
If the count is even, the object is outside the shape; otherwise it is inside.
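That even-odd rule is straightforward to code once the sketched outline is a point list. A minimal sketch using a horizontal ray cast to the right, assuming XNA's Vector2 (the class and method names are mine, not from the answer above):

using System.Collections.Generic;
using Microsoft.Xna.Framework;

public static class OutlineTest
{
    // Even-odd rule: p is inside if a ray cast to the right crosses an odd number of edges.
    public static bool IsInside(List<Vector2> polygon, Vector2 p)
    {
        bool inside = false;
        int j = polygon.Count - 1; // previous vertex; also closes the last edge back to the first
        for (int i = 0; i < polygon.Count; j = i++)
        {
            Vector2 a = polygon[i];
            Vector2 b = polygon[j];
            // Count the edge if it straddles p's Y and the crossing lies to the right of p.
            if ((a.Y > p.Y) != (b.Y > p.Y) &&
                p.X < (b.X - a.X) * (p.Y - a.Y) / (b.Y - a.Y) + a.X)
            {
                inside = !inside;
            }
        }
        return inside;
    }
}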
Just a preface to anything I will say: I have no experience in this field; this is just how I would go about the problem.
A tactic a lot of games use for this is known as hit boxes. It is much easier to detect whether a point is inside a square than inside any other figure, but this doesn't give you an exact collision; it could trigger right outside your desired object.
I've seen collision 'bubbles' used before. Here is a link I found for you; it explains the use of collision bubbles in the console game Super Smash Brothers.
Given a point, the distance formula, and a radius, you can easily implement collision bubbles, as sketched below.
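For instance, a minimal sketch of that test, assuming XNA's Vector2 (the names are mine):

// Two bubbles collide when their centers are closer than the sum of their radii.
bool BubblesCollide(Vector2 centerA, float radiusA, Vector2 centerB, float radiusB)
{
    return Vector2.Distance(centerA, centerB) < radiusA + radiusB;
}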
To take it one step further, I did a little research and found a nifty algorithm (more advanced than the two suggestions above): the Gilbert-Johnson-Keerthi collision detection algorithm for convex objects. Here is a link for you. The implementation provided is written in D; if you're working in C#, it shouldn't be too hard to translate (I would highly suggest digesting the algorithm too).
Hope this gives you some direction.
Well, I got it working thanks to some help on another forum.
I used the GraphicsPath class (from System.Drawing.Drawing2D) to do all the hard work for me.
This is what my method ended up looking like:
public bool IsColliding(Vector2 point)
{
    // GraphicsPath lives in System.Drawing.Drawing2D (reference System.Drawing).
    GraphicsPath gp = new GraphicsPath();
    Vector2 prevPoint = points[0];
    for (int i = 1; i < points.Count; i++)
    {
        Vector2 currentPoint = points[i];
        gp.AddLine(prevPoint.X, prevPoint.Y, currentPoint.X, currentPoint.Y);
        prevPoint = currentPoint;
    }
    gp.CloseFigure(); // closing line segment back to the first point
    return gp.IsVisible(point.X, point.Y); // true when the point lies inside the path
}
Thanks for your suggestions, both of you.

Using MDX Sprite.Draw2D() and the positional problems I have encountered

I'm trying to use the Sprite class in Microsoft.DirectX.Direct3D to draw some sprites to my device.
GameObject go = gameDevice.Objects[x];
SpriteDraw.Draw2D(go.ObjectTexture,
go.CenterPoint,
go.DegreeToRadian(go.Rotation),
go.Position,
Color.White);
GameObject is a class I wrote, in which all the basic information required of a game object is stored (like graphics, current game position, rotation, etc.).
My difficulty with Sprite.Draw2D is the Position parameter (satisfied here with go.Position).
If I pass go.Position, the sprite draws at 0,0 regardless of the object's Position value.
I tested hard-coding in "new Point(100, 100)" and all objects drew at 100,100.
I can't figure out why the variable doesn't correctly satisfy the parameter.
I've done some googling, and many people have said MDX's Sprite.Draw2D is buggy and unstable, but I didn't find a solution.
Thus I call upon Stack Overflow to hopefully shed some light on this problem!
Fixed
Yes, Sprite.Draw2D sometimes gives problems. Have you tried Sprite.Draw? It's working fine for me.
Here is a sample for Sprite.Draw:
GameObject go = gameDevice.Objects[x];
SpriteDraw.Draw(go.ObjectTexture,
    new Vector3(go.CenterPoint.X, go.CenterPoint.Y, 0),
    new Vector3(go.Position.X, go.Position.Y, 0),
    Color.White);
For rotation you can use the sprite's matrix transform.
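A hedged sketch of that rotation idea, assuming MDX's Matrix helpers and the Sprite.Transform property (parameter details from memory - verify against your SDK docs):

// Rotate around the sprite's center, then translate it into position.
SpriteDraw.Transform = Matrix.Translation(-go.CenterPoint.X, -go.CenterPoint.Y, 0)
                     * Matrix.RotationZ(go.DegreeToRadian(go.Rotation))
                     * Matrix.Translation(go.Position.X, go.Position.Y, 0);
SpriteDraw.Draw(go.ObjectTexture, new Vector3(0, 0, 0), new Vector3(0, 0, 0), Color.White);
SpriteDraw.Transform = Matrix.Identity; // reset so later draws aren't affected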
