I have followed a SharpGL tutorial that can display a rotating block. Initially it only had default colors drawn on it with gl.Color(r, g, b). After that succeeded, I tried to texture the cube with a UV map.
When I run the application fullscreen while only coloring the cube (with the SharpGL component covering the entire inside of the application), I get 70~80 fps just for displaying a colored cube. When I enable OpenGL.GL_TEXTURE_2D and draw the textures on a single cube, I get 8~9 fps.
Whenever a bitmap is loaded for use as a texture, it is stored in memory. The drop in framerate only occurs after I enable OpenGL.GL_TEXTURE_2D and call gl.TexCoord(c1, c2) for all coordinates. Actually moving the object with gl.Rotate(angle, x, y, z) does not noticeably affect performance.
The data provided to the method, including GetBlockUv and CubeCoordinates, comes from static float arrays.
Is SharpGL supposed to perform this poorly (i.e. when displaying a single cube), or is there another reason? Am I doing something wrong that is affecting performance? Is applying textures supposed to affect performance like that?
The main draw event happens in a Block:
public void DrawBlock(object sender, OpenGLEventArgs args)
{
// Get the OpenGL instance that's been passed to us.
OpenGL gl = args.OpenGL;
// Reset the modelview.
gl.LoadIdentity();
// Move the block to its location
gl.Translate(Coord.X, Coord.Y, Coord.Z);
gl.Rotate(angle, 1.0f, 1.0f, 0.5f);
angle += 3;
// retrieve the right texture for this block and bind it.
Texture blockTex = BlockTexture.GetBlockTexture(gl, _type);
blockTex.Bind(gl);
// retrieve the uv map for this block
float[] uv = BlockTexture.GetBlockUv(_type);
// retrieve the coordinates for a cube
float[] cube = CubeCoordinates();
gl.Enable(OpenGL.GL_TEXTURE_2D);
// Draw the cube with the bound texture.
gl.Begin(OpenGL.GL_QUADS);
//
//
// Begin by allowing all colors.
gl.Color(1.0f, 1.0f, 1.0f);
// the uv index increments by 2 each iteration, so we keep track of it separately.
int uvInd = 0;
// i denotes the current coordinate. Each coordinate consists of 3
// values (x, y, z), thus letting us skip 3.
//
// Seeing as we are creating quads, it is expected that cube.Length
// is 3 * 4 * N (where N is a whole number)
for (int i = 0; i < cube.Length; i += 3)
{
// color experiment
//if (i < cube.Length / 3)
//{
// gl.Color(1.0f, 0.00f, 0.00f);
//}
//else if (i < 2 * (cube.Length / 3))
//{
// gl.Color(0.0f, 1.0f, 0.0f);
//}
//else
//{
// gl.Color(0.0f, 0.0f, 1.0f);
//}
try
{
// set the coordinate for the texture
gl.TexCoord(uv[uvInd], uv[uvInd + 1]);
// set the vertex
gl.Vertex(cube[i], cube[i + 1], cube[i + 2]);
}
catch (IndexOutOfRangeException e)
{
throw new IndexOutOfRangeException(
"This exception is thrown because the cube map and uv map do not match in size", e);
}
// increment the uv index
uvInd += 2;
}
gl.End();
gl.Disable(OpenGL.GL_TEXTURE_2D);
}
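As an aside on the draw path itself: every frame this re-fetches the texture and UV arrays and runs a try/catch per vertex in immediate mode. That alone would not explain a drop to 8~9 fps, but for reference, here is a rough sketch (not from the original project; field names are made up, and it assumes SharpGL's wrappers for the classic display-list calls GenLists/NewList/CallList) of compiling the cube once and replaying it each frame:
// Rough sketch: build the textured cube into a display list once (e.g. when the block
// is created), then replay it from DrawBlock with a single call.
private uint _cubeList;

private void BuildCubeList(OpenGL gl)
{
    float[] uv = BlockTexture.GetBlockUv(_type);
    float[] cube = CubeCoordinates();

    _cubeList = gl.GenLists(1);
    gl.NewList(_cubeList, OpenGL.GL_COMPILE);

    gl.Begin(OpenGL.GL_QUADS);
    for (int i = 0, uvInd = 0; i < cube.Length; i += 3, uvInd += 2)
    {
        gl.TexCoord(uv[uvInd], uv[uvInd + 1]);
        gl.Vertex(cube[i], cube[i + 1], cube[i + 2]);
    }
    gl.End();

    gl.EndList();
}

// In DrawBlock, after binding the texture and setting the transform:
// gl.CallList(_cubeList);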
OpenGL is initialized elsewhere:
private void OpenGLControl_OpenGLDraw(object sender, OpenGLEventArgs args)
{
// Get the OpenGL instance that's been passed to us.
OpenGL gl = args.OpenGL;
// Clear the color and depth buffers.
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
// call the draw method of the GameRunner if the
// GameRunner has already been created.
game?.DrawOpenGL(sender, args);
// Flush OpenGL.
gl.Flush();
}
private void OpenGLControl_OpenGLInitialized(object sender, OpenGLEventArgs args)
{
// Enable the OpenGL depth testing functionality.
args.OpenGL.Enable(OpenGL.GL_DEPTH_TEST);
}
All the intermediate GameRunner does right now is call the DrawBlock routine.
What I would mainly like to know is what kind of performance I can expect from OpenGL / SharpGL, and whether there are better alternatives. I would like to keep the WPF architecture surrounding the game, but if OpenGL inside WPF is more of a gimmick, that might not be the best course of action.
I've been having the exact same issue, and it seems that either SharpGL or the WPF control itself is using software rendering. I tested this by disabling my main display adapter in Device Manager and got the exact same performance as I did with it enabled.
I don't know how to enable hardware acceleration though, so I don't actually know how to fix the issue.
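One thing that may be worth trying (an untested sketch and an assumption on my part, not a confirmed fix): the SharpGL WPF OpenGLControl has a RenderContextType property, and the default DIB-section context is, as far as I know, rendered by the GDI software implementation. Switching to an FBO-backed context should at least let the GL calls run on the GPU:
// Untested sketch: "openGLControl" is assumed to be the OpenGLControl instance from the XAML.
// RenderContextType.FBO renders into an offscreen framebuffer on the GPU instead of a
// software DIB section; the result is still copied back into WPF each frame.
openGLControl.RenderContextType = SharpGL.RenderContextType.FBO;
Even then, the WPF control copies every rendered frame into a bitmap for composition, so a WinForms-hosted OpenGLControl (or any native GL window) will generally still be faster than the pure WPF control.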
In the app I'm trying to develop, a key part is getting the position where the user has touched the screen. At first I thought of using a tap gesture recognizer, but after a quick Google search I learned that was useless (see here for an example).
Then I discovered SkiaSharp, and after learning how to use it, at least somewhat, I'm still not sure how to get the proper coordinates of a touch. Here are the sections of code in my project that are relevant to the problem.
Canvas Touch Function
private void canvasView_Touch(object sender, SKTouchEventArgs e)
{
// Only carry on with this function if the image is already on screen.
if(m_isImageDisplayed)
{
// Use switch to get what type of action occurred.
switch (e.ActionType)
{
case SKTouchAction.Pressed:
TouchImage(e.Location);
// Update simply tries to draw a small square using double for loops.
m_editedBm = Update(sender);
// Refresh screen.
(sender as SKCanvasView).InvalidateSurface();
break;
default:
break;
}
}
}
Touch Image
private void TouchImage(SKPoint point)
{
// Is the point in range of the canvas?
if(point.X >= m_x && point.X <= (m_editedCanvasSize.Width + m_x) &&
point.Y >= m_y && point.Y <= (m_editedCanvasSize.Height + m_y))
{
// Save the point for later and set the boolean to true so the algorithm can begin.
m_clickPoint = point;
m_updateAlgorithm = true;
}
}
Here I'm just checking, or TRYING to check, whether the point clicked is in range of the image, and I made a separate SKSize variable to help. Ignore the boolean, it's not that important.
Update function (the function that attempts to draw ON the point pressed, so it's the most important)
public SKBitmap Update(object sender)
{
// Create the default test color to replace current pixel colors in the bitmap.
SKColor color = new SKColor(255, 255, 255);
// Create a new surface with the current bitmap.
using (var surface = new SKCanvas(m_editedBm))
{
/* According to this: https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/graphics/skiasharp/paths/finger-paint ,
the points I have to start are in Xamarin forms coordinates, but I need to translate them to SkiaSharp coordinates which are in
pixels. */
Point pt = new Point((double)m_touchPoint.X, (double)m_touchPoint.Y);
SKPoint newPoint = ConvertToPixel(pt);
// Loop over the touch point start, then go to a certain value (like x + 100) just to get a "block" that's been altered for pixels.
for (int x = (int)newPoint.X; x < (int)newPoint.X + 200.0f; ++x)
{
for (int y = (int)newPoint.Y; y < (int)newPoint.Y + 200.0f; ++y)
{
// According to the x and y, change the color.
m_editedBm.SetPixel(x, y, color);
}
}
return m_editedBm;
}
}
Here I'm THINKING that it'll start, you know, at the coordinate I pressed (and these coordinates have been confirmed to be within the range of the image thanks to the "TouchImage" function). And when it does get the correct coordinates (or at least it SHOULD have done that), the square will be drawn one "line" at a time. I have a game programming background, so this sounds simple, but I can't believe I didn't get it right the first time.
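As an aside, ConvertToPixel isn't shown above; judging by the finger-paint article linked in the comment, it presumably looks something like this sketch, where canvasView is assumed to be the SKCanvasView instance:
SKPoint ConvertToPixel(Point pt)
{
    // Scale from Xamarin.Forms device-independent units to SkiaSharp pixel coordinates
    // using the ratio between the canvas pixel size and the view's layout size.
    return new SKPoint(
        (float)(canvasView.CanvasSize.Width * pt.X / canvasView.Width),
        (float)(canvasView.CanvasSize.Height * pt.Y / canvasView.Height));
}
It may also be worth checking which coordinate space e.Location from the Touch event is actually in on your platform; if it is already in canvas pixels, converting it again would put the square in the wrong place.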
Also, I have another function that MIGHT prove worthwhile, because the original image is rotated before being put on screen. Why? By default the image, after the picture is taken and then displayed, is rotated to the left. I had no idea why, but I corrected it with the following function:
// Just rotate the image because for some reason it's tilted 90 degrees to the left.
public static SKBitmap Rotate()
{
using (var bitmap = m_bm)
{
// The new ones width IS the old ones height.
var rotated = new SKBitmap(bitmap.Height, bitmap.Width);
using (var surface = new SKCanvas(rotated))
{
surface.Translate(rotated.Width, 0.0f);
surface.RotateDegrees(90);
surface.DrawBitmap(bitmap, 0, 0);
}
return rotated;
}
}
I'll keep reading and looking up stuff on what I'm doing wrong, but if any help is given I'm grateful.
I am trying to create a WPF application with an integrated OpenGL visualization.
I found the sample project SharpGL; it helped me integrate the OpenGL code into my WPF program.
Now I just want to draw a rectangle with the following attributes:
40 x coordinates = columns
48 y coordinates = rows
Each (x, y) coordinate has a value, which I have defined in a List<float>.
The List<float> is dynamic, so it will change all the time.
The goal is to show the values in real time in the drawn rectangle.
Like (x runs across the columns, y down the rows, each cell holding a value):

x →      1  2  3  4  5  6  7 ... 40
y = 1:   0  2  0  3  7  0  1 ...
y = 2:   4  5  3  0  6  0  5 ...
...
y = 48:  ...
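For what it's worth, here is a rough sketch (assumed names, value range and list ordering, none of it from the question) of how such a 40 x 48 grid could be drawn as one quad per cell, with each quad coloured from the corresponding list entry:
// Rough sketch: draw one quad per (x, y) cell, coloured from the value list.
// Assumes an orthographic projection covering 0..40 in x and 0..48 in y,
// and that the list is stored row by row (row-major).
void DrawValueGrid(OpenGL gl, List<float> values)
{
    const int columns = 40;
    const int rows = 48;

    gl.Begin(OpenGL.GL_QUADS);
    for (int y = 0; y < rows; y++)
    {
        for (int x = 0; x < columns; x++)
        {
            // Map the value (assumed 0..10 here) to a grey level.
            float v = values[y * columns + x] / 10.0f;
            gl.Color(v, v, v);

            gl.Vertex((float)x, (float)y);
            gl.Vertex((float)(x + 1), (float)y);
            gl.Vertex((float)(x + 1), (float)(y + 1));
            gl.Vertex((float)x, (float)(y + 1));
        }
    }
    gl.End();

    // Make sure the buffered commands actually get executed.
    gl.Flush();
}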
Unfortunately, I already fail when trying to draw the rectangle.
private void openGLControl_OpenGLDraw(object sender, OpenGLEventArgs args)
{
// Get the OpenGL object.
OpenGL gl = openGLControl.OpenGL;
// Clear the color and depth buffer.
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT);
// Load the identity matrix.
//gl.LoadIdentity();
gl.PolygonMode(FaceMode.FrontAndBack, PolygonMode.Filled);
gl.Color(0,0,0);
// Draw a coloured quad.
gl.Begin(OpenGL.GL_QUADS);
gl.Vertex(-1.0f, 1.0f);
gl.Color(200,1,1);
gl.Vertex(-1.0f, 0.0f);
gl.Color(200, 1, 1);
gl.Vertex(1.0f, 0.0f);
gl.Color(200, 1, 1);
gl.Vertex(1.0f, 1.0f);
gl.Color(200, 1, 1);
gl.End();
// Nudge the rotation.
//rotation += 3.0f;
}
My code just shows a black window.
How can I achieve this?
Most likely you forgot to call gl.Flush() or gl.Finish() at the end of your openGLControl_OpenGLDraw() method. Until you call one of these methods (or gl.SwapBuffers() in the case of double buffering), every drawing call you make is only buffered, not executed.
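A minimal sketch of the handler with that fix applied (setting the colour before the vertices and using the float colour overload are incidental tidy-ups on my part, not something the original answer asked for):
private void openGLControl_OpenGLDraw(object sender, OpenGLEventArgs args)
{
    OpenGL gl = openGLControl.OpenGL;
    gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);

    // Set the colour before emitting the vertices it applies to.
    gl.Color(1.0f, 0.0f, 0.0f);
    gl.Begin(OpenGL.GL_QUADS);
    gl.Vertex(-1.0f, 1.0f);
    gl.Vertex(-1.0f, 0.0f);
    gl.Vertex(1.0f, 0.0f);
    gl.Vertex(1.0f, 1.0f);
    gl.End();

    // Force the buffered commands to execute.
    gl.Flush();
}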
I've made a small application to grab screenshots from any windowed game and send them to an iPhone to create a virtual reality app, like the Oculus Rift (see https://github.com/gagagu/VR-Streamer-Windows-Server for more info).
The images are captured with SharpDX and everything is working fine.
Now I want to implement something like lens correction (barrel distortion) and I'm looking for the fastest way to realize it. I've looked at many websites with information about barrel distortion, and I think the fastest way is to use a shader for it, but I'm very new to SharpDX (and have no knowledge of shaders), and I don't know how to add a shader to my code. Most tutorials apply a shader to an object (like a cube) but not to a captured image, so I don't know how to do it.
[STAThread]
public System.Drawing.Bitmap Capture()
{
isInCapture = true;
try
{
// init
bool captureDone = false;
bitmap = new System.Drawing.Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
// the capture needs some time
for (int i = 0; !captureDone; i++)
{
try
{
//capture
duplicatedOutput.AcquireNextFrame(-1, out duplicateFrameInformation, out screenResource);
// only for wait
if (i > 0)
{
using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);
mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
mapDest = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, captureRect.Width, captureRect.Height),
ImageLockMode.WriteOnly, bitmap.PixelFormat);
sourcePtr = mapSource.DataPointer;
destPtr = mapDest.Scan0;
// set x position offset to rect.x
int rowPitch = mapSource.RowPitch - offsetX;
// set pointer to y position
sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch * captureRect.Y);
for (int y = 0; y < captureRect.Height; y++) // needs to speed up!!
{
// set pointer to x position
sourcePtr = IntPtr.Add(sourcePtr, offsetX);
// copy pixel to bmp
Utilities.CopyMemory(destPtr, sourcePtr, pWidth);
// increment pointer to next line
sourcePtr = IntPtr.Add(sourcePtr, rowPitch);
destPtr = IntPtr.Add(destPtr, mapDest.Stride);
}
bitmap.UnlockBits(mapDest);
device.ImmediateContext.UnmapSubresource(screenTexture, 0);
captureDone = true;
}
screenResource.Dispose();
duplicatedOutput.ReleaseFrame();
}
catch//(Exception ex) // catch (SharpDXException e)
{
//if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
//{
// // throw e;
//}
return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
}
}
}
catch
{
return new Bitmap(captureRect.Width, captureRect.Height, PixelFormat.Format32bppArgb);
}
isInCapture = false;
return bitmap;
}
It would be really great to get a little starting assistance from someone willing to help.
I've found some shaders on the internet, but they are written for OpenGL (https://github.com/dghost/glslRiftDistort/tree/master/libovr-0.4.x/glsl110). Can I also use them for DirectX (SharpDX)?
Thanks in advance for any help!
Now, I've never used DirectX myself, but I suppose you'll need to use HLSL instead of GLSL (they should be fairly similar though). The idea is that you'll have to load your "screenshot" into a texture buffer as an input to your fragment shader (pixel shader). Fragment shaders are deceptively easy to understand: they're just a piece of code (written in GLSL or HLSL), looking very much like a subset of C with a few math functions added (mostly vector and matrix manipulation), executed for every single pixel to be rendered.
The code should be fairly simple: you take the current pixel position, apply the barrel distortion transformation to its coordinates, then look up that coordinate in your screenshot texture. The transformation should look something like this:
vec2 uv;
/// Barrel Distortion ///
float d=length(uv);
float z = sqrt(1.0 - d * d);
float r = atan(d, z) / 3.14159;
float phi = atan(uv.y, uv.x);
uv = vec2(r*cos(phi)+.5,r*sin(phi)+.5);
Here's a Shadertoy link if you want to play with it and figure out how it works.
I have no idea how HLSL handles texture filtering (which pixel you'll get when using floating-point values for coordinates), but I'd put my money on bilinear filtering, which may very well give an unpleasant pixelation to your output. You'll have to look at better filtering methods once you get the distortion working. It shouldn't be anything too complicated: familiarize yourself with HLSL syntax, find out how to load your screenshot into a texture in DirectX, and get rolling.
Edit: I said barrel distortion, but the code is actually for a fisheye effect. Of course both are pretty much identical, barrel distortion being only on one axis. I believe what you need is the fisheye effect though; it's what is commonly used for HMDs, if I'm not mistaken.
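To make the "load it into a texture and run a pixel shader over it" idea a bit more concrete, here is a rough, untested sketch of how the GLSL above could be translated to HLSL and compiled with SharpDX's D3DCompiler. The entry-point name, the registers and the device variable are all assumptions, and you would still need a full-screen quad, a sampler state and a shader resource view for the captured texture to actually run it:
using SharpDX.D3DCompiler;
using SharpDX.Direct3D11;

// HLSL translation of the GLSL fisheye snippet above (rough sketch, names made up).
private const string FisheyeHlsl = @"
Texture2D screenTex : register(t0);
SamplerState linearSampler : register(s0);

float4 FisheyePS(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_Target
{
    float2 centered = uv - 0.5;                 // move the origin to the centre of the image
    float d = length(centered);
    float z = sqrt(1.0 - d * d);
    float r = atan2(d, z) / 3.14159;            // GLSL atan(y, x) == HLSL atan2(y, x)
    float phi = atan2(centered.y, centered.x);
    float2 distorted = float2(r * cos(phi) + 0.5, r * sin(phi) + 0.5);
    return screenTex.Sample(linearSampler, distorted);
}";

// Compile once at startup and bind the pixel shader. The captured frame then has to be
// bound to register t0 and drawn over a full-screen quad to apply the effect on the GPU.
private PixelShader CreateFisheyeShader(Device device)
{
    using (CompilationResult result = ShaderBytecode.Compile(FisheyeHlsl, "FisheyePS", "ps_4_0"))
    {
        var shader = new PixelShader(device, result.Bytecode);
        device.ImmediateContext.PixelShader.Set(shader);
        return shader;
    }
}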
My scene is 2048 x 1152, and the camera never moves. When I create a rectangle with the following:
timeBarRect = new Rect(220, 185, Screen.width / 3, Screen.height / 50);
Its position changes depending on the resolution of my game, so I can't figure out how to get it to always land where I want it on the screen. To clarify: if I set the resolution to 16:9 and change the size of the preview window, the game will resize at a 16:9 ratio, but the bar will move away from where it's supposed to be.
I have two related questions:
Is it possible to place the Rect at a global coordinate? Since the screen is always 2048 x 1152, if I could just place it at a certain coordinate, it'd be perfect.
Is the Rect a UI element? When it's created, I can't find it in the hierarchy. If it's a UI element, I feel like it should be created relative to a canvas/camera, but I can't figure out a way to do that either.
Update:
I realize now that I was unclear about what is actually being visualized. Here is that information: once the Rect is created, I create a texture, update the size of that texture in Update(), and draw it to the Rect in OnGUI():
timeTexture = new Texture2D (1, 1);
timeTexture.SetPixel(0,0, Color.green);
timeTexture.Apply();
The texture size being changed:
void Update ()
{
if (time < timerMax) {
playerCanAttack = false;
time = time + (10 * Time.deltaTime);
} else {
time = timerMax;
playerCanAttack = true;
}
}
The actual visualization of the Rect, which is being drawn in a different spot depending on the size of the screen:
void OnGUI(){
float ratio = time / 500;
float rectWidth = ratio * Screen.width / 1.6f;
timeBarRect.width = rectWidth;
GUI.DrawTexture (timeBarRect, timeTexture);
}
I don't know that I completely understand either of the two questions I posed, but I did discover that the way to get the Rect's coordinates to match the screen no matter the resolution was not to use global coordinates, but to use the camera's coordinates, and to place code in Update() so that the Rect's coordinates are updated:
timeBarRect.x = cam.pixelWidth / timerWidth;
timeBarRect.y = cam.pixelHeight / timerHeight;
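For reference, a short sketch (not from the original post) of the same idea expressed purely as fractions of the current screen size, so the bar keeps its relative position and size at any resolution; the 0.107 and 0.161 fractions are just 220/2048 and 185/1152 from the numbers above:
void OnGUI()
{
    // Recompute the Rect from the current screen size every time OnGUI runs,
    // so the bar lands in the same relative spot regardless of resolution.
    float x = Screen.width * 0.107f;               // 220 / 2048
    float y = Screen.height * 0.161f;              // 185 / 1152
    float width = (time / 500f) * Screen.width / 1.6f;
    float height = Screen.height / 50f;

    GUI.DrawTexture(new Rect(x, y, width, height), timeTexture);
}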
I have a pretty annoying problem. I would like to create a drawing program using a WinForms + XNA combo.
The most important part is transforming the mouse position onto the XNA-drawn grid. I was able to make it work for translations, but it only works if I don't zoom in; when I do, the coordinates go horribly wrong.
And I have no idea what I'm doing wrong. I tried transforming with the scaling matrix, transforming with the inverse scaling matrix, and multiplying by the zoom, but none of them seem to work.
In the beginning (with a zoom value of 1) the grid starts at (0,0,0) and goes to (Width, Height, 0). I was able to get coordinates based on this grid as long as the zoom value didn't change at all. I'm using a custom shader with an orthographic projection matrix, an identity view matrix, and the transformed world matrix.
Here are the two main methods:
internal void Update(RenderData data)
{
KeyboardState keyS = Keyboard.GetState();
MouseState mouS = Mouse.GetState();
if (ButtonState.Pressed == mouS.RightButton)
{
camTarget.X -= (float)(mouS.X - oldMstate.X) / 2;
camTarget.Y += (float)(mouS.Y - oldMstate.Y) / 2;
}
if (ButtonState.Pressed == mouS.MiddleButton || keyS.IsKeyDown(Keys.Space))
{
zVal += (float)(mouS.Y - oldMstate.Y) / 10;
zoom = (float)Math.Pow(2, zVal);
}
oldKState = keyS;
oldMstate = mouS;
world = Matrix.CreateTranslation(new Vector3(-camTarget.X, -camTarget.Y, 0)) * Matrix.CreateScale(zoom / 2);
}
internal PointF MousePos
{
get
{
Vector2 mousePos = new Vector2(Mouse.GetState().X, Mouse.GetState().Y);
Matrix trans = Matrix.CreateTranslation(new Vector3(camTarget.X - (Width / 2), -camTarget.Y + (Height / 2), 0));
mousePos = Vector2.Transform(mousePos, trans);
return new PointF(mousePos.X, mousePos.Y);
}
}
The second method should return the coordinates of the mouse cursor based on the grid (where the (0,0) point of the grid is the top-left corner).
But it just doesn't work. I removed the zoom transformation from the trans matrix, as I wasn't able to get any useful results (most of the time the coordinates were horribly wrong, often in the thousands, while the grid's size is 500x500).
Any ideas or suggestions? I've been trying to solve this simple problem for two days now :\
Take a look at the GraphicsDevice.Viewport.Unproject method for converting screen-space locations into world space; it basically applies your world, view, and projection transformations in reverse order.
As for your zooming issue, instead of scaling the world transform, why not move the camera closer to the object you're viewing?
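A rough sketch of that approach (untested; it assumes access to the control's GraphicsDevice and to the same projection, view and world matrices that are passed to the shader, none of which appear in the snippets above):
internal PointF MousePosUnprojected
{
    get
    {
        MouseState mouse = Mouse.GetState();

        // Unproject the mouse position at the near and far planes to get a ray in grid space
        // (passing the world matrix means the result is already in the grid's own coordinates).
        Vector3 near = GraphicsDevice.Viewport.Unproject(
            new Vector3(mouse.X, mouse.Y, 0f), projection, view, world);
        Vector3 far = GraphicsDevice.Viewport.Unproject(
            new Vector3(mouse.X, mouse.Y, 1f), projection, view, world);

        // Intersect the ray with the z = 0 plane the grid lies in.
        Vector3 dir = Vector3.Normalize(far - near);
        float t = -near.Z / dir.Z;
        Vector3 hit = near + dir * t;

        return new PointF(hit.X, hit.Y);
    }
}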