I'm new to OpenTK and I'm using the following code to draw a large number of polygons:
public static void DrawPolygon(Point[] points)
{
GL.Begin(BeginMode.Polygon); // If I change this to LineStrip, everything renders fine
int numberOfPoints = points.Length;
for (int i = 0; i < numberOfPoints; i++)
{
GL.Vertex2(points[i].X, points[i].Y);
}
GL.End();
}
And this is the configuration code that runs before DrawPolygon is called:
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, width, 0, height, -1, 1); // Bottom-left corner pixel has coordinate (0, 0)
GL.Viewport(0, 0, (int)width, (int)height);
GL.ClearColor(drawing.Color.Transparent);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
GL.Color3(pen.Color);
GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Line);
GL.PointSize(5f);
GL.LineWidth(2f);
With this code, when I save the rendered image to disk as a PNG, the result looks like this:
Result for GL.Begin(BeginMode.Polygon):
However, if I change the first line of DrawPolygon to GL.Begin(BeginMode.LineStrip);, the polygon is rendered as expected, like this:
Result for GL.Begin(BeginMode.LineStrip):
Does anyone know why those two extra lines appear when using BeginMode.Polygon?
I believe it's because GL_POLYGON/BeginMode.Polygon only renders convex polygons correctly. I think that if you attempt to render a concave polygon, as you're doing, the driver may split the geometry you give it into convex pieces in order to render it, hence the unexpected lines in your example.
It's inadvisable to use the polygon rendering mode in OpenGL. The rendering of line strips and triangles is much more highly optimized, and therefore much, much quicker. I'd advise staying with line strips.
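If all you need is the outline (the question already sets PolygonMode.Line), OpenTK's LineLoop primitive closes the shape automatically and has no convexity requirement. A minimal sketch in the same immediate-mode style as the question:

```csharp
// Draws a closed outline of an arbitrary (possibly concave) polygon.
// LineLoop connects the last vertex back to the first automatically,
// so no duplicate point is needed and convexity doesn't matter.
public static void DrawPolygonOutline(Point[] points)
{
    GL.Begin(BeginMode.LineLoop);
    foreach (Point p in points)
    {
        GL.Vertex2(p.X, p.Y);
    }
    GL.End();
}
```

For a *filled* concave polygon you would have to triangulate it first (e.g. ear clipping, or the GLU tessellator) and submit the result as triangles.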
Related
I have followed a SharpGL tutorial that can display a rotating block. Initially this only had default colors drawn on it with gl.Color(r, g, b). After this succeeded I tried to texture the cube with an uv map.
When I run the application fullscreen while only coloring the cube (with the sharpGL component covering the entire inside of the application) I get 70~80 fps only for displaying a colored cube. When I enable OpenGL.GL_TEXTURE_2D and draw the textures on a singular cube I get 8~9 fps.
Whenever a bitmap is loaded for use as a texture, it is stored in memory. The drop in frame rate only occurs after I enable OpenGL.GL_TEXTURE_2D and call gl.TexCoord(c1, c2) for all coordinates. Actually moving the object with gl.Rotate(angle, x, y, z) does not noticeably affect performance.
The provided data for the method including GetBlockUv and CubeCoordinates are static float-arrays.
Is SharpGL supposed to perform this poorly (i.e. on displaying a singular cube) or is there another reason? Am I doing something wrong that is affecting performance? Is applying textures supposed to affect the performance like that?
The main draw Event happens in a Block:
public void DrawBlock(object sender, OpenGLEventArgs args)
{
// Get the OpenGL instance that's been passed to us.
OpenGL gl = args.OpenGL;
// Reset the modelview.
gl.LoadIdentity();
// Move the block to its location
gl.Translate(Coord.X, Coord.Y, Coord.Z);
gl.Rotate(angle, 1.0f, 1.0f, 0.5f);
angle += 3;
// retrieve the right texture for this block and bind it.
Texture blockTex = BlockTexture.GetBlockTexture(gl, _type);
blockTex.Bind(gl);
// retrieve the uv map for this block
float[] uv = BlockTexture.GetBlockUv(_type);
// retrieve the coordinates for a cube
float[] cube = CubeCoordinates();
gl.Enable(OpenGL.GL_TEXTURE_2D);
// Draw the cube with the bound texture.
gl.Begin(OpenGL.GL_QUADS);
//
//
// Begin by allowing all colors.
gl.Color(1.0f, 1.0f, 1.0f);
// since the uv index increments by 2 each time, we keep track of it separately.
int uvInd = 0;
// i denotes the current coordinate. Each coordinate consists of 3
// values (x, y, z), thus letting us skip 3.
//
// Seeing as we are creating quads, it is expected that cube.Length
// is 3 * 4 * N (where n is a whole number)
for (int i = 0; i < cube.Length; i += 3)
{
// color experiment
//if (i < cube.Length / 3)
//{
// gl.Color(1.0f, 0.00f, 0.00f);
//}
//else if (i < 2 * (cube.Length / 3))
//{
// gl.Color(0.0f, 1.0f, 0.0f);
//}
//else
//{
// gl.Color(0.0f, 0.0f, 1.0f);
//}
try
{
// set the coordinate for the texture
gl.TexCoord(uv[uvInd], uv[uvInd + 1]);
// set the vertex
gl.Vertex(cube[i], cube[i + 1], cube[i + 2]);
}
catch (IndexOutOfRangeException e)
{
throw new IndexOutOfRangeException(
"This exception is thrown because the cube map and uv map do not match size");
}
// increment the uv index
uvInd += 2;
}
gl.End();
gl.Disable(OpenGL.GL_TEXTURE_2D);
}
OpenGL is initialized elsewhere
private void OpenGLControl_OpenGLDraw(object sender, OpenGLEventArgs args)
{
// Get the OpenGL instance that's been passed to us.
OpenGL gl = args.OpenGL;
// Clear the color and depth buffers.
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
// call the draw method of the GameRunner if the
// GameRunner has already been created.
game?.DrawOpenGL(sender, args);
// Flush OpenGL.
gl.Flush();
}
private void OpenGLControl_OpenGLInitialized(object sender, OpenGLEventArgs args)
{
// Enable the OpenGL depth testing functionality.
args.OpenGL.Enable(OpenGL.GL_DEPTH_TEST);
}
All the intermediate GameRunner does right now is call the DrawBlock routine.
What I mainly want to know is what performance I can expect from OpenGL/SharpGL, and whether there are better alternatives. I would like to keep using the WPF architecture surrounding the game, but if OpenGL inside WPF is more of a gimmick, it might not be the best course of action.
I've been having the exact same issue, and it seems to be the case that either SharpGL or the WPF control itself are using software rendering. I tested this by disabling my main display adapter in Device Manager and got the exact same performance as I did with it enabled.
I don't know how to enable hardware acceleration though, so I don't actually know how to fix the issue.
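One way to confirm the software-rendering suspicion without disabling the display adapter is to ask the context which renderer it is using. A small sketch, assuming `gl` is the SharpGL `OpenGL` instance from the control's event args:

```csharp
// If this prints something like "GDI Generic" or
// "Microsoft Basic Render Driver", the context is software-rendered,
// and frame rates like the ones above are what you'd expect.
string renderer = gl.GetString(OpenGL.GL_RENDERER);
string vendor = gl.GetString(OpenGL.GL_VENDOR);
Console.WriteLine($"Renderer: {renderer}, Vendor: {vendor}");
```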
I've made a small application to grab screenshots from any windowed game and send them to an iPhone to create a virtual-reality app, like the Oculus Rift (see https://github.com/gagagu/VR-Streamer-Windows-Server for more info).
The images will be captured with SharpDX and everything is working fine.
Now I want to implement lens correction (barrel distortion), and I'm looking for the fastest way to do it. I've read many sites about barrel distortion, and I think the fastest way is to use a shader, but I'm very new to SharpDX (and have no knowledge of shaders), so I don't know how to add a shader to my code. Most tutorials apply a shader to an object (like a cube), not to a captured image, so I don't know how to do it.
[STAThread]
public System.Drawing.Bitmap Capture()
{
isInCapture = true;
try
{
// init
bool captureDone = false;
bitmap = new System.Drawing.Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
// the capture needs some time
for (int i = 0; !captureDone; i++)
{
try
{
//capture
duplicatedOutput.AcquireNextFrame(-1, out duplicateFrameInformation, out screenResource);
// only for wait
if (i > 0)
{
using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);
mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
mapDest = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, captureRect.Width, captureRect.Height),
ImageLockMode.WriteOnly, bitmap.PixelFormat);
sourcePtr = mapSource.DataPointer;
destPtr = mapDest.Scan0;
// set x position offset to rect.x
int rowPitch = mapSource.RowPitch - offsetX;
// set pointer to y position
sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch * captureRect.Y);
for (int y = 0; y < captureRect.Height; y++) // needs to speed up!!
{
// set pointer to x position
sourcePtr = IntPtr.Add(sourcePtr, offsetX);
// copy pixel to bmp
Utilities.CopyMemory(destPtr, sourcePtr, pWidth);
// increment pointer to the next line
sourcePtr = IntPtr.Add(sourcePtr, rowPitch);
destPtr = IntPtr.Add(destPtr, mapDest.Stride);
}
bitmap.UnlockBits(mapDest);
device.ImmediateContext.UnmapSubresource(screenTexture, 0);
captureDone = true;
}
screenResource.Dispose();
duplicatedOutput.ReleaseFrame();
}
catch//(Exception ex) // catch (SharpDXException e)
{
//if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
//{
// // throw e;
//}
return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
}
}
}
catch
{
return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
}
isInCapture = false;
return bitmap;
}
It would be really great to get a little starting help from someone who is willing.
I've found some shaders on the net, but they are written for OpenGL (https://github.com/dghost/glslRiftDistort/tree/master/libovr-0.4.x/glsl110). Can I also use them for DirectX (SharpDX)?
Thanks in advance for any help!
Now, I've never used DirectX myself, but I suppose you'll need to use HLSL instead of GLSL (they're fairly similar, though). The idea is that you'll load your "screenshot" into a texture buffer as an input to your fragment shader (pixel shader, in DirectX terms). Fragment shaders are deceptively easy to understand: just a piece of code (written in GLSL or HLSL) looking very much like a subset of C, to which a few math functions have been added (mostly vector and matrix manipulation), executed for every single pixel to be rendered.
The code should be fairly simple: take the current pixel position, apply the barrel-distortion transformation to its coordinates, then look up that coordinate in your screenshot texture. The transformation should look something like this:
// uv is assumed to be the incoming interpolated texture coordinate,
// remapped so the screen centre is at (0, 0)
vec2 uv = texCoord.xy - 0.5;
/// Barrel Distortion ///
float d = length(uv);
float z = sqrt(1.0 - d * d);
float r = atan(d, z) / 3.14159;
float phi = atan(uv.y, uv.x);
uv = vec2(r * cos(phi) + 0.5, r * sin(phi) + 0.5);
Here's a shadertoy link if you wanna play with it and figure out how it works
I have no idea how HLSL handles texture filtering (which pixel you'll get when using floating-point values for coordinates), but I'd put my money on bilinear filtering, which may well give an unpleasantly pixelated look to your output. You'll have to look at better filtering methods once you get the distortion working. It shouldn't be anything too complicated: familiarize yourself with HLSL syntax, find out how to load your screenshot into a texture in DirectX, and get rolling.
Edit : I said barrel distortion but the code is actually for the fisheye effect. Of course both are pretty much identical, the barrel distortion being only on one axis. I believe what you need is the fisheye effect though, it's what is commonly used for HMDs if I'm not mistaken.
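On the SharpDX side, the rough plan is: compile an HLSL pixel shader, bind the captured frame as a shader resource, and draw a full-screen quad with that shader. A hedged sketch of the compile-and-bind part only; `device`/`context` are assumed to be the existing Direct3D 11 device and immediate context, `shaderSource` holds the HLSL text, and `srv` is a ShaderResourceView created from the captured screen texture (the quad setup is omitted):

```csharp
using SharpDX.D3DCompiler;
using SharpDX.Direct3D11;

// Compile the HLSL source; "PS" is the entry point, ps_4_0 the profile.
var psBytecode = ShaderBytecode.Compile(shaderSource, "PS", "ps_4_0");
var pixelShader = new PixelShader(device, psBytecode);

// Bind the shader and the captured frame for the draw call.
context.PixelShader.Set(pixelShader);
context.PixelShader.SetShaderResource(0, srv);

// ...then draw a full-screen quad; the distorted result can be copied to a
// staging texture and read back, just like the existing capture code does.
```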
I've got sprite drawing working with OpenTK in my 2D game engine now. The only problem I'm having is that custom-drawn OpenGL objects (anything but sprites, really) show up in the background color. Example:
I'm drawing a 2.4f-width black line here. There's also a quad and a point in the example, but they do not overlap anything that's actually visible. The line overlaps the magenta sprite, but the color is just wrong. My question is: am I missing an OpenGL feature, or doing something horribly wrong?
These are the samples of my project concerning drawing: (you can also find the project on https://github.com/Villermen/HatlessEngine if there's questions about the code)
Initialization:
Window = new GameWindow(windowSize.Width, windowSize.Height);
//OpenGL initialization
GL.Enable(EnableCap.PointSmooth);
GL.Hint(HintTarget.PointSmoothHint, HintMode.Nicest);
GL.Enable(EnableCap.LineSmooth);
GL.Hint(HintTarget.LineSmoothHint, HintMode.Nicest);
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.ClearColor(Color.Gray);
GL.Enable(EnableCap.Texture2D);
GL.Enable(EnableCap.DepthTest);
GL.DepthFunc(DepthFunction.Lequal);
GL.ClearDepth(1d);
GL.DepthRange(1d, 0d); //does not seem right, but it works (see it as duct-tape)
Every draw cycle:
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
//reset depth and color to be consistent over multiple frames
DrawX.Depth = 0;
DrawX.DefaultColor = Color.Black;
foreach(View view in Resources.Views)
{
CurrentDrawArea = view.Area;
GL.Viewport((int)view.Viewport.Left * Window.Width, (int)view.Viewport.Top * Window.Height, (int)view.Viewport.Right * Window.Width, (int)view.Viewport.Bottom * Window.Height);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(view.Area.Left, view.Area.Right, view.Area.Bottom, view.Area.Top, -1f, 1f);
GL.MatrixMode(MatrixMode.Modelview);
//drawing
foreach (LogicalObject obj in Resources.Objects)
{
//set view's coords for clipping?
obj.Draw();
}
}
GL.Flush();
Window.Context.SwapBuffers();
DrawX.Line:
public static void Line(PointF pos1, PointF pos2, Color color, float width = 1)
{
RectangleF lineRectangle = new RectangleF(pos1.X, pos1.Y, pos2.X - pos1.X, pos2.Y - pos1.Y);
if (lineRectangle.IntersectsWith(Game.CurrentDrawArea))
{
GL.LineWidth(width);
GL.Color3(color);
GL.Begin(PrimitiveType.Lines);
GL.Vertex3(pos1.X, pos1.Y, GLDepth);
GL.Vertex3(pos2.X, pos2.Y, GLDepth);
GL.End();
}
}
Edit: If I disable blending before drawing the line and re-enable it afterwards, the line does show up with the right color, but I need blending on.
I forgot to unbind the texture in the texture-drawing method...
GL.BindTexture(TextureTarget.Texture2D, 0);
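The reason this matters: with EnableCap.Texture2D enabled, even untextured primitives are sampled from whatever texture is still bound, so the line picks up its color from the last sprite texture instead of GL.Color3. A sketch of the safe pattern around untextured draws, in the same immediate-mode style as the question (`color`, `pos1`, `pos2`, `depth` stand in for the Line method's values):

```csharp
// Either unbind the texture or disable texturing around untextured draws,
// so the line's color comes from GL.Color3 rather than the sprite texture.
GL.BindTexture(TextureTarget.Texture2D, 0); // option 1: unbind
// GL.Disable(EnableCap.Texture2D);         // option 2: disable texturing
GL.Color3(color);
GL.Begin(PrimitiveType.Lines);
GL.Vertex3(pos1.X, pos1.Y, depth);
GL.Vertex3(pos2.X, pos2.Y, depth);
GL.End();
// GL.Enable(EnableCap.Texture2D);          // re-enable if option 2 was used
```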
I have a small paint program that I am working on. I am using SetPixel on a bitmap to do that drawing of lines. When the brush size gets large, like 25 pixels across there is a noticeable performance drop. I am wondering if there is a faster way to draw to a bitmap. Here is a bit of the background of the project:
I am using bitmaps so that I can utilise layers, like in Photoshop or The GIMP.
Lines are being drawn manually because this will eventually use graphics tablet pressure to alter the size of the line over its length.
The lines should eventually be anti-aliaced/smoothed along the edges.
I'll include my drawing code just in case it is this that is slow, and not the SetPixel bit.
This is in the windows where the painting happens:
private void canvas_MouseMove(object sender, MouseEventArgs e)
{
m_lastPosition = m_currentPosition;
m_currentPosition = e.Location;
if(m_penDown && m_pointInWindow)
m_currentTool.MouseMove(m_lastPosition, m_currentPosition, m_layer);
canvas.Invalidate();
}
Implementation of MouseMove:
public override void MouseMove(Point lastPos, Point currentPos, Layer currentLayer)
{
DrawLine(lastPos, currentPos, currentLayer);
}
Implementation of DrawLine:
// The primary drawing code for most tools. A line is drawn from the last position to the current position
public override void DrawLine(Point lastPos, Point currentPos, Layer currentLayer)
{
// Create a line vector
Vector2D vector = new Vector2D(currentPos.X - lastPos.X, currentPos.Y - lastPos.Y);
// Create the point to draw at
PointF drawPoint = new Point(lastPos.X, lastPos.Y);
// Get the amount to step each time
PointF step = vector.GetNormalisedVector();
// Find the length of the line
double length = vector.GetMagnitude();
// For each step along the line...
for (int i = 0; i < length; i++)
{
// Draw a pixel
PaintPoint(currentLayer, new Point((int)drawPoint.X, (int)drawPoint.Y));
drawPoint.X += step.X;
drawPoint.Y += step.Y;
}
}
Implementation of PaintPoint:
public override void PaintPoint(Layer layer, Point position)
{
// Rasterise the pencil tool
// Assume it is square
// Check the pixel to be set is within the bounds of the layer
// Set the tool-size rect to the location of the point to be painted
m_toolArea.Location = position;
// Get the area to be painted
Rectangle areaToPaint = new Rectangle();
areaToPaint = Rectangle.Intersect(layer.GetRectangle(), m_toolArea);
// Check this is not a null area
if (!areaToPaint.IsEmpty)
{
// Go through the draw area and set the pixels as they should be
for (int y = areaToPaint.Top; y < areaToPaint.Bottom; y++)
{
for (int x = areaToPaint.Left; x < areaToPaint.Right; x++)
{
layer.GetBitmap().SetPixel(x, y, m_colour);
}
}
}
}
Thanks a lot for any help you can provide.
You can lock the bitmap data and use pointers to manually set the values. It's much faster. Though you'll have to use unsafe code.
public override void PaintPoint(Layer layer, Point position)
{
// Rasterise the pencil tool
// Assume it is square
// Check the pixel to be set is within the bounds of the layer
// Set the tool-size rect to the location of the point to be painted
m_toolArea.Location = position;
// Get the area to be painted
Rectangle areaToPaint = new Rectangle();
areaToPaint = Rectangle.Intersect(layer.GetRectangle(), m_toolArea);
Bitmap bmp = layer.GetBitmap(); // was uninitialized; lock the layer's bitmap
BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
int stride = data.Stride;
unsafe
{
byte* ptr = (byte*)data.Scan0;
// Check this is not a null area
if (!areaToPaint.IsEmpty)
{
// Go through the draw area and set the pixels as they should be
for (int y = areaToPaint.Top; y < areaToPaint.Bottom; y++)
{
for (int x = areaToPaint.Left; x < areaToPaint.Right; x++)
{
// layer.GetBitmap().SetPixel(x, y, m_colour);
ptr[(x * 3) + y * stride] = m_colour.B;
ptr[(x * 3) + y * stride + 1] = m_colour.G;
ptr[(x * 3) + y * stride + 2] = m_colour.R;
}
}
}
}
bmp.UnlockBits(data);
}
SetPixel does this: it locks the whole image, sets the pixel, and unlocks it again.
Try this instead: acquire a single lock on the whole image with LockBits, apply all of your updates, and release the lock afterwards.
I usually use an array to represent the raw pixel data. And then copy between that array and the bitmap with unsafe code.
Making it an array of Color is a bad idea, since the Color struct is relatively large (12+ bytes). So you can either define your own 4-byte struct (that's what I chose) or simply use an array of int or byte.
You should also reuse your array, since GC on the LOH tends to be expensive.
My code can be found at:
https://github.com/CodesInChaos/ChaosUtil/blob/master/Chaos.Image/
An alternative is writing all your code using pointers into the bitmap directly. That's a bit faster still, but can make the code uglier and more error prone.
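As a concrete sketch of the int-array approach (no unsafe code needed in this direction; the method name is illustrative, and it assumes a 32bpp ARGB bitmap):

```csharp
// Blits a reusable int[] (one 0xAARRGGBB value per pixel) into the bitmap
// in a single locked copy, instead of one SetPixel call per pixel.
// Assumes pixels.Length == bmp.Width * bmp.Height and that the stride
// of a Format32bppArgb lock equals bmp.Width * 4 (i.e. no row padding).
static void CopyToBitmap(int[] pixels, Bitmap bmp)
{
    BitmapData data = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.WriteOnly,
        PixelFormat.Format32bppArgb);
    try
    {
        // Marshal.Copy handles the managed-to-unmanaged copy for us.
        Marshal.Copy(pixels, 0, data.Scan0, bmp.Width * bmp.Height);
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}
```

If the stride turns out to be padded, copy row by row instead, advancing Scan0 by data.Stride per row.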
Just an idea: Fill an offscreen bitmap with your Brush pixels. You only need to regenerate this bitmap when the brush, size or color is changed. And then just draw this bitmap onto your existing bitmap, where the mouse is located.
If you can modulate a bitmap with a color, you could set the pixels in grayscale and modulate it with the current brush color.
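A sketch of that stamp idea with System.Drawing (names are illustrative): regenerate the stamp only when the brush changes, then blit it once per painted point.

```csharp
// Pre-rendered brush stamp, rebuilt only when size or color change.
Bitmap brushStamp;

void RebuildStamp(int size, Color colour)
{
    brushStamp?.Dispose();
    brushStamp = new Bitmap(size, size, PixelFormat.Format32bppArgb);
    using (Graphics g = Graphics.FromImage(brushStamp))
    using (var brush = new SolidBrush(colour))
    {
        g.FillEllipse(brush, 0, 0, size - 1, size - 1); // round brush
    }
}

// Stamp the brush onto the layer, centred on the point: one DrawImage
// call replaces the whole per-pixel loop.
void PaintPoint(Bitmap layer, Point position)
{
    using (Graphics g = Graphics.FromImage(layer))
    {
        g.DrawImage(brushStamp,
            position.X - brushStamp.Width / 2,
            position.Y - brushStamp.Height / 2);
    }
}
```

Creating a Graphics per point still has some overhead; if it shows up in profiling, hoist the Graphics.FromImage call out to span a whole stroke.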
You're calling GetBitmap within your nested for loop. That looks unnecessary: call GetBitmap once outside the for loops, as the reference isn't going to change.
Also look at fantasticfix's answer; LockBits nearly always fixes slow performance when getting/setting pixels.
I'm not sure why this code isn't simply drawing a triangle to screen (orthographically). I'm using OpenTK 1.1 which is the same thing as OpenGL 1.1.
List<Vector3> simpleVertices = new List<Vector3>();
simpleVertices.Add(new Vector3(0,0,0));
simpleVertices.Add(new Vector3(100,0,0));
simpleVertices.Add(new Vector3(100,100,0));
GL.MatrixMode(All.Projection);
GL.LoadIdentity();
GL.MatrixMode(All.Projection);
GL.Ortho(0, 480, 320, 0,0,1000);
GL.MatrixMode(All.Modelview);
GL.LoadIdentity();
GL.Translate(0,0,10);
unsafe
{
Vector3* data = (Vector3*)Marshal.AllocHGlobal(
Marshal.SizeOf(typeof(Vector3)) * simpleVertices.Count);
for(int i = 0; i < simpleVertices.Count; i++)
{
((Vector3*)data)[i] = simpleVertices[i];
}
GL.VertexPointer(3, All.Float, sizeof(Vector3), new IntPtr(data));
GL.DrawArrays(All.Triangles, 0, simpleVertices.Count);
}
The code executes once every update cycle in a draw function. What I think I'm doing (but evidently am not) is creating a set of position vertices to form a triangle and drawing it 10 units in front of the camera.
Why is this code not drawing a triangle?
In OpenGL, the Z axis points out of the screen, so when you write
GL.Translate(0,0,10);
it actually translates it "in front" of the screen.
Now, your last two parameters to GL.Ortho are 0, 1000. This means that everything between 0 and 1000 in the minus-Z direction (i.e. in front of the camera) will be displayed.
In other words, GL.Translate(0,0,-10); will put your object in front of the camera, while GL.Translate(0,0,10); will put it behind.
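Putting that together, here is an untested sketch of the corrected draw code. Note the added GL.EnableClientState call, which is also required before GL.DrawArrays will read the vertex pointer at all, and the `fixed` statement, which avoids leaking an AllocHGlobal buffer every frame (`vertices` stands in for the question's simpleVertices as a Vector3[]):

```csharp
GL.MatrixMode(All.Projection);
GL.LoadIdentity();
GL.Ortho(0, 480, 320, 0, 0, 1000);

GL.MatrixMode(All.Modelview);
GL.LoadIdentity();
GL.Translate(0, 0, -10); // minus Z = in front of the camera

// Without this, DrawArrays ignores the vertex pointer entirely.
GL.EnableClientState(All.VertexArray);
unsafe
{
    // Pin the managed array instead of copying it to unmanaged memory.
    fixed (Vector3* data = vertices)
    {
        GL.VertexPointer(3, All.Float, sizeof(Vector3), new IntPtr(data));
        GL.DrawArrays(All.Triangles, 0, vertices.Length);
    }
}
```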