Let me post the images first:
A solid shot where the tearing occurs
And a wireframe shot of the same place
I am mostly using Riemer's tutorial; the render code is:
Main render
public void Render()
{
    device.Clear(Color.CornflowerBlue);

    RasterizerState rs = new RasterizerState();
    rs.CullMode = cullmode;
    rs.FillMode = fillmode;
    device.RasterizerState = rs;

    effect.Parameters["xView"].SetValue(camera.ViewMatrix);
    effect.Parameters["xProjection"].SetValue(camera.ProjectionMatrix);
    effect.Parameters["xWorld"].SetValue(Matrix.Identity);
    effect.Parameters["xEnableLighting"].SetValue(true);
    effect.Parameters["xLightDirection"].SetValue(lightDirection);
    effect.Parameters["xAmbient"].SetValue(0.5f);

    globals.game.terrain.Render();

    globals.game.spriteBatch.Begin();
    globals.console.Render();
    globals.game.spriteBatch.End();
}
Terrain.Render()
public void Render()
{
    globals.game.graphics.effect.CurrentTechnique = globals.game.graphics.effect.Techniques["Colored"];
    globals.game.graphics.effect.Parameters["xWorld"].SetValue(worldMatrix);

    foreach (EffectPass pass in globals.game.graphics.effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        globals.game.graphics.device.Indices = indexBuffer;
        globals.game.graphics.device.SetVertexBuffer(vertexBuffer);
        globals.game.graphics.device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Length, 0, indices.Length / 3);
    }
}
I have been stuck on this problem for quite a while now (not knowing whether it is caused by my coding skills, XNA, or some graphics-card configuration), so I wonder if anyone has ideas about what might be causing it?
Temporarily comment out the SpriteBatch Begin, Draw, and End code and see if that makes a difference. It may or may not, depending on the vertex order in the buffer.
If it does help, your solution can be found here: http://blogs.msdn.com/b/shawnhar/archive/2010/06/18/spritebatch-and-renderstates-in-xna-game-studio-4-0.aspx
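The short version of that article, applied to your Render method, is roughly the sketch below (which states you actually need to restore depends on what your 3D pass relies on; in XNA 4.0, SpriteBatch sets BlendState.AlphaBlend, DepthStencilState.None, RasterizerState.CullCounterClockwise and a LinearClamp sampler when you Begin/End):

globals.game.spriteBatch.Begin();
globals.console.Render();
globals.game.spriteBatch.End();

// put back the states the 3D pass expects before the next frame is drawn
device.BlendState = BlendState.Opaque;
device.DepthStencilState = DepthStencilState.Default;
device.RasterizerState = RasterizerState.CullCounterClockwise;
device.SamplerStates[0] = SamplerState.LinearWrap;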
I am trying to get a bitmap from a GradientStopCollection. So far I have been able to create the GradientStopCollection with the following code:
public GradientStopCollection GradientColor(Color SelectedColor)
{
    GradientStopCollection CGbrush = new GradientStopCollection(2);
    CGbrush.Add(new GradientStop(SelectedColor, 0));
    CGbrush.Add(new GradientStop(Colors.Black, 1));
    return CGbrush;
}
I have tried to create the bitmap a couple of times, but so far none of my attempts have been successful.
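The general shape of what I have been trying is roughly the following sketch (not my exact code; the sizes, DPI and gradient angle are just placeholders):

// requires: using System.Windows; using System.Windows.Media; using System.Windows.Media.Imaging;
public BitmapSource GradientToBitmap(GradientStopCollection stops, int width, int height)
{
    // draw a rectangle filled with the gradient into a DrawingVisual
    var brush = new LinearGradientBrush(stops, 90.0); // angle 90 gives a vertical gradient
    var visual = new DrawingVisual();
    using (DrawingContext dc = visual.RenderOpen())
    {
        dc.DrawRectangle(brush, null, new Rect(0, 0, width, height));
    }

    // render the visual into a bitmap
    var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(visual);
    return bitmap;
}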
If anyone has any ideas I would be happy to hear them,
Thanks :)
I searched and didn't find a way to do this.
I want to attach labels to 3D objects using SharpDX in a HoloLens app.
Does anyone know how?
Thanks
Edit:
I decided to convert the text to an image and then put it as a texture on a plane made of a two-triangle mesh, sketched below.
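To be concrete, the quad I mean is roughly this (a sketch only; LabelVertex and LabelQuad are names I made up, not from the gist below, and the winding should be checked against your cull mode):

// requires: using SharpDX; using SharpDX.Direct3D11; using System.Runtime.InteropServices;
[StructLayout(LayoutKind.Sequential)]
struct LabelVertex
{
    public Vector3 Position;
    public Vector2 TexCoord;
}

static class LabelQuad
{
    // builds a unit quad in the XY plane; the text image is mapped across it with the UVs
    public static void Create(Device device, out Buffer vertexBuffer, out Buffer indexBuffer)
    {
        var vertices = new[]
        {
            new LabelVertex { Position = new Vector3(-0.5f,  0.5f, 0), TexCoord = new Vector2(0, 0) },
            new LabelVertex { Position = new Vector3( 0.5f,  0.5f, 0), TexCoord = new Vector2(1, 0) },
            new LabelVertex { Position = new Vector3( 0.5f, -0.5f, 0), TexCoord = new Vector2(1, 1) },
            new LabelVertex { Position = new Vector3(-0.5f, -0.5f, 0), TexCoord = new Vector2(0, 1) },
        };
        var indices = new ushort[] { 0, 1, 2, 0, 2, 3 }; // two triangles

        vertexBuffer = Buffer.Create(device, BindFlags.VertexBuffer, vertices);
        indexBuffer = Buffer.Create(device, BindFlags.IndexBuffer, indices);
    }
}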
Now I have tried this code:
https://gist.github.com/naveedmurtuza/6600103
but I can't include the references. How do I fix that?
Thanks
I wrote a game using SharpDX a few years ago; here's a video of it:
https://www.youtube.com/watch?v=tDRmIY6-8Z4
If I understand you correctly, you want 3D text just like the text I'm using to explain the game elements? If so, you might get some ideas from this source code:
protected override bool draw(Camera camera, DrawingReason drawingReason, ShadowMap shadowMap)
{
    if (drawingReason != DrawingReason.Normal)
        return true;

    camera.UpdateEffect(Effect);

    foreach (var item in Items)
    {
        Effect.World = Matrix.BillboardRH(item.Target.Position + item.GetOffset(item), camera.Position, -camera.Up, camera.Front);
        Effect.DiffuseColor = item.GetColor(item);
        SpriteBatch.Begin(SpriteSortMode.Deferred, Effect.GraphicsDevice.BlendStates.NonPremultiplied, null, Effect.GraphicsDevice.DepthStencilStates.DepthRead, null, Effect.Effect);
        SpriteBatch.DrawString(Font, item.Text, Vector2.Zero, Color.Black, 0, Font.MeasureString(item.Text) / 2, item.GetSize(item), 0, 0);
        SpriteBatch.End();
    }

    Effect.GraphicsDevice.SetDepthStencilState(Effect.GraphicsDevice.DepthStencilStates.Default);
    Effect.GraphicsDevice.SetBlendState(Effect.GraphicsDevice.BlendStates.Opaque);

    return true;
}
Full game code is open source and available here:
https://github.com/danbystrom/Larv/blob/master/src/factor10.VisionThing/FloatingText/FloatingTexts.cs#L32
I've made a small application to grab screenshots from any windowed game and send them to an iPhone to create a virtual-reality app, like the Oculus Rift (see https://github.com/gagagu/VR-Streamer-Windows-Server for more info).
The images are captured with SharpDX and everything is working fine.
Now I want to implement something like lens correction (barrel distortion) and I'm looking for the fastest way to do it. I've read many sites about barrel distortion and I think the fastest way is to use a shader, but I'm very new to SharpDX (and have no knowledge of shaders) and I don't know how to add a shader to my code. Most tutorials apply a shader to an object (like a cube), not to a captured image, so I don't know how to do it.
[STAThread]
public System.Drawing.Bitmap Capture()
{
    isInCapture = true;
    try
    {
        // init
        bool captureDone = false;
        bitmap = new System.Drawing.Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);

        // the capture needs some time
        for (int i = 0; !captureDone; i++)
        {
            try
            {
                // capture
                duplicatedOutput.AcquireNextFrame(-1, out duplicateFrameInformation, out screenResource);

                // skip the very first frame; it is only used to wait for the capture
                if (i > 0)
                {
                    using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
                        device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);

                    mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
                    mapDest = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, captureRect.Width, captureRect.Height),
                        ImageLockMode.WriteOnly, bitmap.PixelFormat);
                    sourcePtr = mapSource.DataPointer;
                    destPtr = mapDest.Scan0;

                    // set x position offset to rect.x
                    int rowPitch = mapSource.RowPitch - offsetX;
                    // set pointer to y position
                    sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch * captureRect.Y);

                    for (int y = 0; y < captureRect.Height; y++) // needs to speed up!!
                    {
                        // set pointer to x position
                        sourcePtr = IntPtr.Add(sourcePtr, offsetX);
                        // copy one pixel row to the bitmap
                        Utilities.CopyMemory(destPtr, sourcePtr, pWidth);
                        // increment pointers to the next line
                        sourcePtr = IntPtr.Add(sourcePtr, rowPitch);
                        destPtr = IntPtr.Add(destPtr, mapDest.Stride);
                    }

                    bitmap.UnlockBits(mapDest);
                    device.ImmediateContext.UnmapSubresource(screenTexture, 0);
                    captureDone = true;
                }

                screenResource.Dispose();
                duplicatedOutput.ReleaseFrame();
            }
            catch //(Exception ex) // catch (SharpDXException e)
            {
                //if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
                //{
                //    // throw e;
                //}
                return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
            }
        }
    }
    catch
    {
        return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
    }
    isInCapture = false;
    return bitmap;
}
It would be really great to get a little starting help from someone who is willing to help.
I've found some shaders on the internet, but they are written for OpenGL (https://github.com/dghost/glslRiftDistort/tree/master/libovr-0.4.x/glsl110). Can I use them for DirectX (SharpDX) as well?
Thanks in advance for any help!
Now, I've never used DirectX myself, but I suppose you'll need to use HLSL instead of GLSL (they should be fairly similar though). The idea is that you'll have to load your "screenshot" into a texture buffer as the input to your fragment shader (pixel shader). Fragment shaders are deceptively easy to understand: it's just a piece of code (written in GLSL or HLSL) looking very much like a subset of C, to which a few math functions have been added (mostly vector and matrix manipulation), executed for every single pixel to be rendered.
The code should be fairly simple: you take the current pixel position, apply the barrel distortion transformation to its coordinates, then look up that coordinate in your screenshot texture. The transformation should look something like this:
// uv: the current pixel's coordinates, remapped so the screen centre is at the origin
vec2 uv;

/// Barrel Distortion ///
float d = length(uv);
float z = sqrt(1.0 - d * d);
float r = atan(d, z) / 3.14159;
float phi = atan(uv.y, uv.x);
uv = vec2(r * cos(phi) + .5, r * sin(phi) + .5);
// ...then sample the screenshot texture at the distorted uv
Here's a shadertoy link if you wanna play with it and figure out how it works
I have no idea how HLSL handles texture filtering (which pixel you'll get when using floating-point values for coordinates), but I'd put my money on bilinear filtering, which may very well give an unpleasant pixelation to your output. You'll have to look at better filtering methods once you get the distortion working. It shouldn't be anything too complicated: familiarize yourself with HLSL syntax, find out how to load your screenshot into a texture in DirectX, and get rolling.
Edit: I said barrel distortion, but the code is actually for the fisheye effect. Of course both are pretty much identical, the barrel distortion being only on one axis. I believe what you need is the fisheye effect though; it's what is commonly used for HMDs, if I'm not mistaken.
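As for wiring it into SharpDX: I don't have a setup to test with, but the C# side would look roughly like the sketch below: compile a pixel shader, expose the captured frame as a shader resource and draw a single full-screen triangle into an off-screen render target. Everything named here ("Distort.hlsl", VS_Fullscreen, PS_Distort, ApplyDistortion) is made up for illustration, and the texture you bind must be created with BindFlags.ShaderResource (not the staging texture you Map for CPU readback).

// rough, untested sketch: run a distortion pixel shader over a captured frame
// requires: using SharpDX.Direct3D; using SharpDX.Direct3D11; using SharpDX.D3DCompiler;
void ApplyDistortion(Device device, Texture2D capturedFrame, RenderTargetView target, int width, int height)
{
    var context = device.ImmediateContext;

    // compile the HLSL you write yourself (do this once at startup, not per frame):
    // VS_Fullscreen builds the 3 triangle corners from SV_VertexID,
    // PS_Distort does the distortion math from above, ported to HLSL
    using (var vsCode = ShaderBytecode.CompileFromFile("Distort.hlsl", "VS_Fullscreen", "vs_4_0"))
    using (var psCode = ShaderBytecode.CompileFromFile("Distort.hlsl", "PS_Distort", "ps_4_0"))
    using (var vs = new VertexShader(device, vsCode))
    using (var ps = new PixelShader(device, psCode))
    using (var srv = new ShaderResourceView(device, capturedFrame)) // needs BindFlags.ShaderResource
    using (var sampler = new SamplerState(device, new SamplerStateDescription
    {
        Filter = Filter.MinMagMipLinear,
        AddressU = TextureAddressMode.Clamp,
        AddressV = TextureAddressMode.Clamp,
        AddressW = TextureAddressMode.Clamp,
        ComparisonFunction = Comparison.Never,
        MaxAnisotropy = 1,
        MaximumLod = float.MaxValue,
    }))
    {
        // render into an off-screen target that covers the whole output image
        context.OutputMerger.SetRenderTargets(target);
        context.Rasterizer.SetViewport(0, 0, width, height);

        // no vertex buffer needed for a full-screen triangle
        context.InputAssembler.InputLayout = null;
        context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;

        context.VertexShader.Set(vs);
        context.PixelShader.Set(ps);
        context.PixelShader.SetShaderResource(0, srv);
        context.PixelShader.SetSampler(0, sampler);

        context.Draw(3, 0);
    }
}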
I posted this earlier and no one replied with any suggestions, so I'll post it one more time. I realise it's old technology we're talking about here, but still, I'm stuck, so any thoughts at all would be appreciated.
I'm currently working my way through "Beginning C# Programming", and have hit a problem in chapter 7 when drawing textures.
I have used the same code as on the demo CD, although I had to change the path of the texture to an absolute one; when rendered, the texture appears grey.
I have debugged the program to write the loaded texture to a file, and that is fine, no problems there. So something after that point is going wrong.
Here are some snippets of code:
public void InitializeGraphics()
{
    // set up the parameters
    Direct3D.PresentParameters p = new Direct3D.PresentParameters();
    p.SwapEffect = Direct3D.SwapEffect.Discard;
    ...
    graphics = new Direct3D.Device( 0, Direct3D.DeviceType.Hardware, this,
        Direct3D.CreateFlags.SoftwareVertexProcessing, p );
    ...
    // set up various drawing options
    graphics.RenderState.CullMode = Direct3D.Cull.None;
    graphics.RenderState.AlphaBlendEnable = true;
    graphics.RenderState.AlphaBlendOperation = Direct3D.BlendOperation.Add;
    graphics.RenderState.DestinationBlend = Direct3D.Blend.InvSourceAlpha;
    graphics.RenderState.SourceBlend = Direct3D.Blend.SourceAlpha;
    ...
}
public void InitializeGeometry()
{
    ...
    texture = Direct3D.TextureLoader.FromFile(
        graphics, "E:\\Programming\\SharpDevelop_Projects\\AdvancedFrameworkv2\\texture.jpg", 0, 0, 0, 0, Direct3D.Format.Unknown,
        Direct3D.Pool.Managed, Direct3D.Filter.Linear,
        Direct3D.Filter.Linear, 0 );
    ...
}
protected virtual void Render()
{
    graphics.Clear( Direct3D.ClearFlags.Target, Color.White, 1.0f, 0 );
    graphics.BeginScene();

    // set the texture
    graphics.SetTexture( 0, texture );

    // set the vertex format
    graphics.VertexFormat = Direct3D.CustomVertex.TransformedTextured.Format;

    // draw the triangles
    graphics.DrawUserPrimitives( Direct3D.PrimitiveType.TriangleStrip, 2, vertexes );

    graphics.EndScene();
    graphics.Present();
    ...
}
I can't figure out what is going wrong here. Obviously the texture displays fine if I open it in Windows, so either there is something not right in the code examples given in the book (they simply don't work as written), or, presumably, something is wrong with my environment.
I am creating a minecraft clone, and whenever I move the camera even a little bit fast there is a big tear between the chunks as shown here:
Each chunk is 32x32x32 cubes and has a single vertex buffer for each kind of cube, in case that matters. I am drawing 2D text on the screen as well, and I learned that I had to set the graphics device state for each kind of drawing. Here is how I'm drawing the cubes:
GraphicsDevice.Clear(Color.LightSkyBlue);

#region 3D
// Set the device
device.BlendState = BlendState.Opaque;
device.DepthStencilState = DepthStencilState.Default;
device.RasterizerState = RasterizerState.CullCounterClockwise;

// Go through each shader and draw the cubes of that style
lock (GeneratedChunks)
{
    foreach (KeyValuePair<CubeType, BasicEffect> KVP in CubeType_Effect)
    {
        // Iterate through each technique in this effect
        foreach (EffectPass pass in KVP.Value.CurrentTechnique.Passes)
        {
            // Go through each chunk in our chunk map, and pluck out the cubetype we care about
            foreach (Vector3 ChunkKey in GeneratedChunks)
            {
                if (ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key] > 0)
                {
                    pass.Apply(); // assign it to the video card
                    KVP.Value.View = camera.ViewMatrix;
                    KVP.Value.Projection = camera.ProjectionMatrix;
                    KVP.Value.World = worldMatrix;
                    device.SetVertexBuffer(ChunkMap[ChunkKey].CubeType_VertexBuffers[KVP.Key]);
                    device.DrawPrimitives(PrimitiveType.TriangleList, 0, ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key]);
                }
            }
        }
    }
}
#endregion
The world looks fine if I'm standing still. I thought this might be because I'm in windowed mode, but the problem persisted when I toggled full screen. I also assume that XNA is double-buffered by itself? Or so Google has told me.
I had a similar issue - I found that I had to call pass.Apply() after setting all of the Effect's parameters...
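In your draw loop, that reordering would look roughly like the sketch below (using the names from your code):

// set the effect parameters first...
KVP.Value.View = camera.ViewMatrix;
KVP.Value.Projection = camera.ProjectionMatrix;
KVP.Value.World = worldMatrix;

foreach (EffectPass pass in KVP.Value.CurrentTechnique.Passes)
{
    foreach (Vector3 ChunkKey in GeneratedChunks)
    {
        if (ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key] > 0)
        {
            // ...then Apply(), so the card gets the up-to-date matrices
            pass.Apply();
            device.SetVertexBuffer(ChunkMap[ChunkKey].CubeType_VertexBuffers[KVP.Key]);
            device.DrawPrimitives(PrimitiveType.TriangleList, 0, ChunkMap[ChunkKey].CubeType_TriangleCounts[KVP.Key]);
        }
    }
}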
The fix so far has been to use 1 giant vertex buffer. I don't like it, but that's all that seems to work.