I am creating a Minecraft clone, and whenever I move the camera even slightly fast there is a big tear between the chunks, as shown here:
Each chunk is 32x32x32 cubes and has a single vertex buffer for each kind of cube, in case it matters. I am also drawing 2D text on the screen, and I learned that I have to set the graphics device state for each kind of drawing. Here is how I'm drawing the cubes:
GraphicsDevice.Clear(Color.LightSkyBlue);
#region 3D
// Set the device
device.BlendState = BlendState.Opaque;
device.DepthStencilState = DepthStencilState.Default;
device.RasterizerState = RasterizerState.CullCounterClockwise;
// Go through each shader and draw the cubes of that style
lock (GeneratedChunks)
{
    foreach (KeyValuePair<CubeType, BasicEffect> KVP in CubeType_Effect)
    {
        // Iterate through each technique in this effect
        foreach (EffectPass pass in KVP.Value.CurrentTechnique.Passes)
        {
            // Go through each chunk in our chunk map, and pluck out the cube type we care about
            foreach (Vector3 ChunkKey in GeneratedChunks)
            {
                if (ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key] > 0)
                {
                    pass.Apply(); // assign it to the video card
                    KVP.Value.View = camera.ViewMatrix;
                    KVP.Value.Projection = camera.ProjectionMatrix;
                    KVP.Value.World = worldMatrix;
                    device.SetVertexBuffer(ChunkMap[ChunkKey].CubeType_VertexBuffers[KVP.Key]);
                    device.DrawPrimitives(PrimitiveType.TriangleList, 0, ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key]);
                }
            }
        }
    }
}
#endregion
The world looks fine if I'm standing still. I thought this might be because I'm in windowed mode, but the problem persisted when I toggled to full screen. I also assume XNA is double-buffered by default? Or so Google has told me.
I had a similar issue - I found that I had to call pass.Apply() after setting all of the Effect's parameters...
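In the loop above that means assigning View/Projection/World before Apply(), roughly like this (a sketch built only from the code in the question):
// set the effect parameters first...
KVP.Value.View = camera.ViewMatrix;
KVP.Value.Projection = camera.ProjectionMatrix;
KVP.Value.World = worldMatrix;
// ...then apply the pass so it picks up the current matrices, and draw
pass.Apply();
device.SetVertexBuffer(ChunkMap[ChunkKey].CubeType_VertexBuffers[KVP.Key]);
device.DrawPrimitives(PrimitiveType.TriangleList, 0, ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key]);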
The fix so far has been to use 1 giant vertex buffer. I don't like it, but that's all that seems to work.
I'm having a problem with my simple AR shooting game, built with AR Foundation.
First of all I need to explain how the app works:
-> it detects 5 planes and places a cube at the centre of each, then saves the cube and the related plane in a struct:
public void PlaneUpdated(ARPlanesChangedEventArgs args)
{
    if (args.added != null && planes.Count < maxObjects)
    {
        ARPlane arPlane = args.added[0];
        PlaneObj temp = new PlaneObj(arPlane, (Instantiate(PlaceablePrefab, arPlane.transform.position, Quaternion.identity)));
        planes.Add(temp);
    }
}
-> after 5 blocks are placed, a button appears which, when clicked, calls the UpdateSize function; it moves a trigger block collider to the camera's position and calls the updateObjects function:
public void UpdateSize()
{
    if (canUpdate)
    {
        canUpdate = false;
        lookPoint.transform.position = Camera.main.transform.position;
        this.GetComponent<UpdateObjects>().updateObjects(planes, lookPoint); // changed to lookPoint
    }
}
-> the updateObjects function takes every placed object, scales it by a fixed ratio and then calls LookAt, which turns every block towards the position of lookPoint:
public void updateObjects(List<PlaceObjectsAutomated.PlaneObj> planes, GameObject lookPoint)
{
    foreach (PlaceObjectsAutomated.PlaneObj planeObj in planes)
    {
        // var scaling = new Vector3(planeObj.plane.size.x * 2, planeObj.plane.size.y * 2, 0); //planeObj.obj.transform.localScale.z);
        var scaling = new Vector3(1, 1, 1);
        planeObj.obj.transform.localScale += scaling;
        planeObj.obj.transform.LookAt(lookPoint.transform.position);
    }
    this.GetComponent<SpawnEnemy>().spawnEnemy(planes, lookPoint);
}
Here's the bug: at this point I have 5 scaled blocks that look towards that point, but after a while, without any other code beyond these snippets controlling the cubes' rotation, they go back to their original rotation and will not rotate anymore, even if I try to call LookAt again.
Does anyone know why it is doing this? The problem only happens in AR.
The things I know for sure are:
-This is not caused by the struct; it happened even before I used one
-The LookAt function isn't called anywhere else, and I don't manipulate the blocks' size or rotation anywhere except in the updateObjects function
-This is not caused by the scaling, because it happens even if I don't scale the blocks
-The functions aren't called except when I need them to be, so there's no point where they could run a second time
-This is not caused by Quaternion.identity, because in the dummy 3D project I used to test the features the blocks stay rotated even with identity as the initial spawn rotation
-I know the blocks can't be rotated after they snap back to their original rotation, because I actually tried to call LookAt again afterwards and it doesn't work
I will start with my current situation.
I downloaded the raycast project from: https://github.com/ChrisSerpico/raycasting
This is based on the tutorial from here: https://lodev.org/cgtutor/raycasting.html
After I got the project to work, played around a bit and modified some things, I'm currently stuck on adding multiple layers (based on one map per layer). I have read a lot around the internet but had no luck implementing that feature.
In this project: https://github.com/Owlzy/OwlRaycastEngine
multiple layers are added, but that is done with slices and I can't figure out how to implement it in the Serpico project (which I chose because the floor/ceiling drawing works a lot better there). Textures are saved like this:
Texture2D canvas; // used to convert the buffer to a single texture to be drawn
Color[] buffer; // screen buffer with raw color data to be drawn
Color[][] rawData; // raw data of the individual external textures
// initialize graphics rendering objects
canvas = new Texture2D(GraphicsDevice, SCREEN_WIDTH, SCREEN_HEIGHT);
buffer = new Color[SCREEN_WIDTH * SCREEN_HEIGHT];
rawData = new Color[NUM_TEXTURES][]; //number of Textures
for (int i = 0; i < NUM_TEXTURES; i++)
{
    rawData[i] = new Color[TEXTURE_WIDTH * TEXTURE_HEIGHT];
}
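For completeness: each entry of rawData is just the pixel data of one wall texture; filling it looks roughly like this (the asset name is a placeholder, the real loading code may differ):
for (int i = 0; i < NUM_TEXTURES; i++)
{
    // copy the pixels of the i-th wall texture into its Color array
    Texture2D wallTexture = Content.Load<Texture2D>("textures/wall" + i); // placeholder asset name
    wallTexture.GetData<Color>(rawData[i]);
}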
The buffer gets filled this way in the wall-casting loop:
if (TEXTURE_WIDTH * texY + texX <= rawData[texNum].Length - 1)
{
    buffer[SCREEN_WIDTH * y + x] = rawData[texNum][TEXTURE_WIDTH * texY + texX];
}
else // avoid crash when running into walls
{
    buffer[SCREEN_WIDTH * y + x] = rawData[texNum][rawData[texNum].Length - 1];
}
and finally drawn this way:
canvas.SetData<Color>(buffer);
b.Draw(canvas, new Rectangle(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT), Color.White);
The code is straight from the lodev tutorial. I tried playing around with the variables lineHeight, texY and so on, but with no result. The textures just get stretched, cut off, or the screen gets drawn with terrible effects.
Could someone help, please? I'm really despairing...
Thanks a lot!
The problem is the canvas.SetData<Color>(buffer) call in Draw().
Move this line to Update() and it will "mostly" work. Texture memory is shared between the CPU and GPU. By the time Draw() is called, the textures are expected to already be in GPU memory, and transferring data during draws causes random tearing.
The "mostly" comes from the nondeterministic delays between Update() and Draw() and the PCIe memory transfer.
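A minimal sketch of that split, using the names from the question (b is the SpriteBatch); the Textures[0] = null line guards against XNA complaining that the canvas is still bound from the previous frame's draw:
protected override void Update(GameTime gameTime)
{
    // ... raycasting fills buffer here ...

    GraphicsDevice.Textures[0] = null; // make sure canvas is not still bound to the device
    canvas.SetData<Color>(buffer);     // upload to the GPU outside of Draw()

    base.Update(gameTime);
}

protected override void Draw(GameTime gameTime)
{
    b.Begin();
    b.Draw(canvas, new Rectangle(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT), Color.White);
    b.End();

    base.Draw(gameTime);
}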
I've made a small application to grab screenshots from any windowed game and send them to an iPhone to create a virtual-reality app, like the Oculus Rift (see https://github.com/gagagu/VR-Streamer-Windows-Server for more info).
The images are captured with SharpDX and everything works fine.
Now I want to implement something like lens correction (barrel distortion) and I'm looking for the fastest way to do it. I've read many pages about barrel distortion and I think the fastest way is to use a shader, but I'm very new to SharpDX (and have no knowledge of shaders) and I don't know how to add a shader to my code. Most tutorials apply a shader to an object (like a cube), not to a captured image, so I don't know how to do it.
[STAThread]
public System.Drawing.Bitmap Capture()
{
    isInCapture = true;
    try
    {
        // init
        bool captureDone = false;
        bitmap = new System.Drawing.Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);

        // the capture needs some time
        for (int i = 0; !captureDone; i++)
        {
            try
            {
                // capture
                duplicatedOutput.AcquireNextFrame(-1, out duplicateFrameInformation, out screenResource);

                // the first frame is only used to wait
                if (i > 0)
                {
                    using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
                        device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);

                    mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
                    mapDest = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, captureRect.Width, captureRect.Height),
                        ImageLockMode.WriteOnly, bitmap.PixelFormat);
                    sourcePtr = mapSource.DataPointer;
                    destPtr = mapDest.Scan0;

                    // set x position offset to rect.x
                    int rowPitch = mapSource.RowPitch - offsetX;
                    // set pointer to y position
                    sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch * captureRect.Y);

                    for (int y = 0; y < captureRect.Height; y++) // needs to speed up!!
                    {
                        // set pointer to x position
                        sourcePtr = IntPtr.Add(sourcePtr, offsetX);
                        // copy one line of pixels to the bitmap
                        Utilities.CopyMemory(destPtr, sourcePtr, pWidth);
                        // increment pointers to the next line
                        sourcePtr = IntPtr.Add(sourcePtr, rowPitch);
                        destPtr = IntPtr.Add(destPtr, mapDest.Stride);
                    }

                    bitmap.UnlockBits(mapDest);
                    device.ImmediateContext.UnmapSubresource(screenTexture, 0);
                    captureDone = true;
                }

                screenResource.Dispose();
                duplicatedOutput.ReleaseFrame();
            }
            catch //(Exception ex) // catch (SharpDXException e)
            {
                //if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
                //{
                //    // throw e;
                //}
                return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
            }
        }
    }
    catch
    {
        return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
    }

    isInCapture = false;
    return bitmap;
}
It would be really great to get a little starting help from someone who is willing to help.
I've found some shaders on the internet, but they are written for OpenGL (https://github.com/dghost/glslRiftDistort/tree/master/libovr-0.4.x/glsl110). Can I also use them for DirectX (SharpDX)?
Thanks in advance for any help!
Now, I've never used DirectX myself, but I suppose you'll need to use HLSL instead of GLSL (which should be fairly similar, though). The idea is that you'll have to load your "screenshot" into a texture buffer as an input to your fragment shader (pixel shader). Fragment shaders are deceptively easy to understand: a fragment shader is just a piece of code (written in GLSL or HLSL), looking very much like a subset of C with a few math functions added (mostly vector and matrix manipulation), that is executed for every single pixel to be rendered.
The code should be fairly simple: you take the current pixel position, apply the barrel distortion transformation to its coordinates, then look up that coordinate in your screenshot texture. The transformation should look something like this:
vec2 uv; // current pixel position, remapped to roughly [-0.5, 0.5] around the screen centre

/// Barrel Distortion ///
float d = length(uv);
float z = sqrt(1.0 - d * d);
float r = atan(d, z) / 3.14159;
float phi = atan(uv.y, uv.x);
uv = vec2(r * cos(phi) + 0.5, r * sin(phi) + 0.5);
Here's a shadertoy link if you wanna play with it and figure out how it works
I have no idea how HLSL handles texture filtering (which pixel you get when you sample with floating-point coordinates), but I'd put my money on bilinear filtering, which may still give unpleasant artifacts in your output. You'll have to look at better filtering methods once you get the distortion working. It shouldn't be anything too complicated: familiarize yourself with HLSL syntax, find out how to load your screenshot into a texture in DirectX, and get rolling.
Edit: I said barrel distortion, but the code is actually for the fisheye effect. Of course both are pretty much identical, the barrel distortion being only on one axis. I believe what you need is the fisheye effect though; it's what is commonly used for HMDs, if I'm not mistaken.
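If it helps as a starting point, the C# side of hooking up such a pixel shader in SharpDX (Direct3D 11) might look roughly like this. This is only a sketch: the file name fisheye.hlsl, the entry points VS/PS and the full-screen-quad/render-target plumbing are assumptions you'd have to fill in, and the captured texture must be created with BindFlags.ShaderResource (the staging texture in the code above cannot be sampled directly).
using SharpDX.D3DCompiler;
using SharpDX.Direct3D11;

// compile the (hypothetical) fisheye.hlsl and create the shader objects
var vsBytecode = ShaderBytecode.CompileFromFile("fisheye.hlsl", "VS", "vs_4_0");
var psBytecode = ShaderBytecode.CompileFromFile("fisheye.hlsl", "PS", "ps_4_0");
var vertexShader = new VertexShader(device, vsBytecode);
var pixelShader = new PixelShader(device, psBytecode);

// a view onto the captured frame so the pixel shader can sample it
var screenSrv = new ShaderResourceView(device, capturedTexture); // must be created with BindFlags.ShaderResource
var sampler = new SamplerState(device, new SamplerStateDescription
{
    Filter = Filter.MinMagMipLinear,
    AddressU = TextureAddressMode.Clamp,
    AddressV = TextureAddressMode.Clamp,
    AddressW = TextureAddressMode.Clamp,
});

// bind everything, then draw a full-screen quad into your own render target
// and read that target back instead of the raw captured frame
var context = device.ImmediateContext;
context.VertexShader.Set(vertexShader);
context.PixelShader.Set(pixelShader);
context.PixelShader.SetShaderResource(0, screenSrv);
context.PixelShader.SetSampler(0, sampler);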
I have a 2D game in XNA with a scrolling camera. Unfortunately, when the screen moves I can see artifacts - mostly blur and extra lines on the screen.
I thought about adjusting the coordinates before drawing (rounding consistently with Ceiling() or Floor()), but this seems a little inefficient. Is this the only way?
I use SpriteBatch for rendering.
This is my drawing method from Camera:
Vector2D works on doubles, Vector2 works on floats (used by XNA), Sprite is just a class with data for spriteBatch.Draw.
public void DrawSprite(Sprite toDraw)
{
    Vector2D drawingPostion;
    Vector2 drawingPos;
    drawingPostion = toDraw.Position - transform.Position;
    drawingPos.X = (float)drawingPostion.X * UnitToPixels;
    drawingPos.Y = (float)drawingPostion.Y * UnitToPixels;
    spriteBatch.Draw(toDraw.Texture, drawingPos, toDraw.Source, toDraw.Color,
        toDraw.Rotation, toDraw.Origin, toDraw.Scale, toDraw.Effects, toDraw.LayerDepth + zsortingValue);
}
My idea is to do this:
drawingPos.X = (float) Math.Floor(drawingPostion.X*UnitToPixels);
drawingPos.Y = (float) Math.Floor(drawingPostion.Y*UnitToPixels);
And it solves the problem. I think I can accept it this way. But are there any other options?
GraphicsDevice.SamplerStates[0] = SamplerState.PointWrap;
This isn't so much a problem with your camera as it is with the sampler. Using a point sampler state tells the video card to take a single color sample directly from the texture depending on the position. Other default modes like LinearWrap and LinearClamp will interpolate between texels (the pixels of your source texture) and give everything a very mushy, blurred look. If you're going for pixel graphics, you need point sampling.
With linear interpolation, if you have red and white next to each other in your texture, and it samples between the two (by some aspect of the camera), you will get pink. With point sampling, you get either red or white. Nothing in between.
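Note that if you draw through SpriteBatch, it applies its own sampler state when it begins/flushes, so the most reliable place to ask for point sampling is the Begin call (a minimal sketch; the other arguments just restate SpriteBatch's 2D defaults, and PointWrap works the same way if you rely on wrapping):
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
    SamplerState.PointClamp, DepthStencilState.None, RasterizerState.CullCounterClockwise);
// ... spriteBatch.Draw calls ...
spriteBatch.End();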
Yes, it is possible... try something like this...
bool redrawSprite = false;
Sprite toDraw;

void MainRenderer()
{
    if (redrawSprite)
    {
        DrawSprite(toDraw);
        redrawSprite = false;
    }
}

void ManualRefresh()
{
    // create or set your sprite and assign it to toDraw
    redrawSprite = true;
}
This way you will let the main loop do the work, as intended.
I have a problem in a game I wrote with XNA. I recently added textured polygons and saw that every textured polygon shared the same texture, even though I change it before each draw call. The code I am using:
if (countMeshes > 0)
{
    for (int i = 0; i < countMeshes; i++)
    {
        TexturedMesh curMesh = listMeshes[i];
        if (curMesh.tex == null)
        {
            drawEffect.Texture = WHITE_TEXTURE;
        }
        else
        {
            drawEffect.Texture = curMesh.tex;
        }
        drawEffect.Techniques[0].Passes[0].Apply();
        graphics.DrawUserPrimitives(PrimitiveType.TriangleList, curMesh.verts, 0, curMesh.count);
    }
}
Now, the first thing that came to my mind would be to create a BasicEffect for each texture I need to draw, but I think that would be a bit of overkill, so my question is: how should I do it?
PS: I double-checked everything, the UV coords are fine, and it is 2D.
It seems to be the only way. The way I did it was to create a Dictionary with the texture as the key and a struct holding a BasicEffect and a list of vertices as the value.
Something like this:
public struct MeshEffect
{
    public TexturedMesh mesh;
    public BasicEffect effect;
}
and the Dictionary:
private Dictionary<Texture2D, MeshEffect> texturedMeshes = new Dictionary<Texture2D, MeshEffect>();
But it all really depends on how you handle drawing. One thing is sure, though: you can't use more than one texture with a single BasicEffect.
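Drawing then just walks the dictionary, roughly like this (a sketch that assumes TexturedMesh still exposes the verts and count fields from the question, and that each BasicEffect was created with TextureEnabled = true):
foreach (KeyValuePair<Texture2D, MeshEffect> entry in texturedMeshes)
{
    MeshEffect me = entry.Value;
    me.effect.Texture = entry.Key;                // each effect keeps its own texture
    me.effect.CurrentTechnique.Passes[0].Apply(); // apply after the texture is set
    graphics.DrawUserPrimitives(PrimitiveType.TriangleList, me.mesh.verts, 0, me.mesh.count);
}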