Why does my OpenGL ES app crash on glDrawElements? (C#)

I have developed a mobile game with OpenGL ES 3 on Xamarin (which uses OpenTK). It runs fine on most devices, but crashes on some (e.g. the HUAWEI Y5 lite). Unfortunately I don't get a detailed log of the error:
#00 pc 0000000000093d2a /vendor/lib/egl/libGLESv2_mtk.so
#01 pc 000000000001c137 /vendor/lib/egl/libGLESv2_mtk.so
#02 pc 000000000001eddf /vendor/lib/egl/libGLESv2_mtk.so
#03 pc 000000000001af75 /vendor/lib/egl/libGLESv2_mtk.so
#04 pc 000000000001aabf /vendor/lib/egl/libGLESv2_mtk.so (glDrawElements+54)
#05 pc 000000000000ca0c <anonymous>
I'm guessing it has something to do with my draw code, or worse, with driver issues on that phone. I'm using the following code to render quads:
public void BeforeRender()
{
// Use shader program.
GL.UseProgram(shader.Program);
// Enable transparency
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
// Use texture
GL.ActiveTexture(TextureUnit.Texture0);
GL.Uniform1(shader.UniformTexture, 0);
// Only bind once for all quads
GL.BindBuffer(BufferTarget.ElementArrayBuffer, indexBufferId);
}
public void Render(Sprite sprite, Vector4 color, Matrix4 modelViewProjection)
{
// Set model view projection
GL.UniformMatrix4(shader.UniformModelViewProjection, false, ref modelViewProjection);
// Set color
GL.Uniform4(shader.UniformColor, color);
// Set texture
GL.BindTexture(TextureTarget.Texture2D, sprite.TextureId);
// Update attribute value Position
GL.BindBuffer(BufferTarget.ArrayBuffer, sprite.Vbo);
GL.VertexAttribPointer(shader.AttribVertex, 3, VertexAttribPointerType.Float, false, sizeof(float) * 5, IntPtr.Zero); // 3 + 2 = 5
GL.EnableVertexAttribArray(shader.AttribVertex);
// Update attribute value TexCoord
GL.VertexAttribPointer(shader.AttribTexCoord, 2, VertexAttribPointerType.Float, false, sizeof(float) * 5, new IntPtr(sizeof(float) * 3));
GL.EnableVertexAttribArray(shader.AttribTexCoord);
// Draw quad
GL.DrawElements(BeginMode.Triangles, faceIndexes.Length, DrawElementsType.UnsignedShort, IntPtr.Zero);
}
public void AfterRender()
{
// Unbind / Disable
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, 0);
GL.Disable(EnableCap.Blend);
}
To draw multiple quads I just call the methods like this (the color and matrix arguments to Render are omitted here for brevity):
BeforeRender();
foreach(var sprite in sprites)
{
Render(sprite);
}
AfterRender();
Is there something wrong with my code in general that might cause problems on some devices while others "tolerate" it?
Thanks in advance!
Update:
Here is how I create the buffers:
public int Load<T>(T[] data)
where T : struct
{
int bufferId;
GL.GenBuffers(1, out bufferId);
bufferIds.Add(bufferId);
GL.BindBuffer(BufferTarget.ArrayBuffer, bufferId);
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(data.Length * Marshal.SizeOf(default(T))), data, BufferUsage.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
return bufferId;
}
For the index buffer I use:
ushort[] faceIndexes =
{
0, 1, 2, 1, 3, 2
};
indexBufferId = bufferManager.Load(faceIndexes);
For the vertex buffer I use:
float[] vertices =
{
0f, 0f, 0f, 0f, ty,
width, 0f, 0f, tx, ty,
0f, height, 0f, 0f, 0f,
width, height, 0f, tx, 0f
};
int vboId = bufferManager.Load(vertices);

The indices have to be stored in an ElementArrayBuffer rather than an ArrayBuffer. Instead of
GL.BindBuffer(BufferTarget.ArrayBuffer, bufferId); ...
the index buffer has to be created like this:
GL.BindBuffer(BufferTarget.ElementArrayBuffer, bufferId);
GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(data.Length * Marshal.SizeOf(default(T))), data, BufferUsage.StaticDraw);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, 0);
Add a target argument to the generic Load method. For instance:
public int Load<T>(T[] data, BufferTarget target)
where T : struct
{
int bufferId;
GL.GenBuffers(1, out bufferId);
bufferIds.Add(bufferId);
GL.BindBuffer(target, bufferId);
GL.BufferData(target, (IntPtr)(data.Length * Marshal.SizeOf(default(T))), data, BufferUsage.StaticDraw);
GL.BindBuffer(target, 0);
return bufferId;
}
indexBufferId = bufferManager.Load(faceIndexes, BufferTarget.ElementArrayBuffer);
int vboId = bufferManager.Load(vertices, BufferTarget.ArrayBuffer);
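Since the crash happens inside the vendor driver with no useful log, it can also help to poll GL.GetError during development to catch invalid state before the driver falls over. A minimal sketch, assuming the OpenTK ES bindings used in the question (the helper name is mine):
using System.Diagnostics;
// Hypothetical debug helper: fail fast when a GL call leaves an error behind.
// Compiled out of release builds via the Conditional attribute.
[Conditional("DEBUG")]
static void CheckGlError(string where)
{
    ErrorCode error = GL.GetError();
    if (error != ErrorCode.NoError)
        throw new InvalidOperationException($"GL error {error} after {where}");
}
Sprinkled after the BindBuffer/VertexAttribPointer calls in Render, this narrows down which call first puts GL into a bad state.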

I have found the issue that was causing my app to crash on some devices. Apparently an item was slipping into the drawing routine that passed a buffer id of 0 to GL.BindBuffer. On some devices this caused the following error when GL.DrawElements was called: emuglGLESv2_enc: sendVertexAttributes: bad offset / len!!!!!
The solution was, of course, to skip this code whenever the buffer id is 0 or no buffer was created for that id.
In my case the 0 was not returned by GL.GenBuffers. I had some sprites which were supposed to be empty (not rendered), and my original solution was to set their buffer id to 0.
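A minimal sketch of that guard, using the Sprite/Render names from the snippets above:
public void Render(Sprite sprite, Vector4 color, Matrix4 modelViewProjection)
{
    // 0 is never a name returned by GL.GenBuffers; drawing with it bound crashes some drivers.
    if (sprite.Vbo == 0)
        return;
    // ... bind buffers, set the attributes and draw as shown above ...
}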

Related

Opengl OpenTK - White screen when drawing depth Buffer [duplicate]

I am currently trying to add shadows with Shadow Mapping to my 3D Engine.
First I render the scene from the light's point of view and save the depth values in a texture. Then I use the default FBO to draw from that texture, just like in this tutorial.
The problem is that my screen stays white, no matter where I move.
GL.GetError() returns NoError, and the SSBOs which I use in the vertex shader have the right values. GL.CheckFramebufferStatus() returns FramebufferCompleteExt.
This is how I create the FBO for depth values:
_depthMapFBO = GL.GenFramebuffer();
_depthMapFBOColorBuffer = BufferObjects.FBO_TextureAttachment(_depthMapFBO, PixelInternalFormat.DepthComponent, PixelFormat.DepthComponent, FramebufferAttachment.DepthAttachment, 1024, 1024);
GL.BindFramebuffer(FramebufferTarget.Framebuffer, _depthMapFBO);
GL.DrawBuffer(DrawBufferMode.None);
GL.ReadBuffer(ReadBufferMode.None);
====================================
public static int FBO_TextureAttachment(int FrameBuffer, PixelInternalFormat PixelInternalFormat, PixelFormat PixelFormat, FramebufferAttachment FramebufferAttachment, int Width, int Height)
{
// PixelInternalFormat = DepthComponent && PixelFormat = DepthComponent && FramebufferAttachment = DepthAttachment && Width, Height = 1024, 1024
GL.BindFramebuffer(FramebufferTarget.Framebuffer, FrameBuffer);
int _texture = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture2D, _texture);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat, Width, Height, 0, PixelFormat, PixelType.Float, IntPtr.Zero);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)All.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)All.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)All.Repeat);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)All.Repeat);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment, TextureTarget.Texture2D, _texture, 0);
return _texture;
}
In my Render function it looks like this:
GL.BindFramebuffer(FramebufferTarget.Framebuffer, _depthMapFBO);
GL.Clear(ClearBufferMask.DepthBufferBit);
GL.Viewport(0, 0, 1024, 1024);
_simpleDepthProgram.Use();
float _nearPlane = 1.0f, _farPlane = 100f;
_lightProjection = Matrix4.CreateOrthographicOffCenter(-100.0f, 100.0f, -100.0f, 100.0f, _nearPlane, _farPlane);
_ligthView = Matrix4.LookAt(_allLamps[0].Position, new Vector3(0f), new Vector3(0.0f, 1.0f, 0.0f));
_lightSpaceMatrix = _lightProjection * _ligthView;
GL.UniformMatrix4(21, false, ref _lightSpaceMatrix);
// Copy all SSBO's
GL.ActiveTexture(TextureUnit.Texture2);
GL.BindTexture(TextureTarget.Texture2D, _depthMapFBOColorBuffer);
Scene();
And the shader where I draw the depthMap:
#version 450 core
out vec4 FragColor;
uniform sampler2D scene;
uniform sampler2D bloomed;
uniform sampler2D depthMap;
uniform float zNear;
uniform float zFar;
float LinearizeDepth(float depth)
{
float z = depth * 2.0 - 1.0; // Back to NDC
return (2.0 * zNear * zFar) / (zFar + zNear - z * (zFar - zNear));
}
in vec2 TexCoord;
void main()
{
float depthValue = texture(depthMap, TexCoord).r;
//float depth = LinearizeDepth(gl_FragCoord.z) / far; // only for perspective
FragColor = vec4(vec3(depthValue), 1.0);
}
The computation of the _lightSpaceMatrix is wrong. The OpenTK matrix multiplication is reversed. See Problem with matrices #687:
Because of how matrices are treated in C# and OpenTK, multiplication order is inverted from what you might expect in C/C++ and GLSL. This is an old artefact in the library, and it's too late to change now, unfortunately.
Swap _ligthView and _lightProjection when you multiply the matrices. Instead of
_lightSpaceMatrix = _lightProjection * _ligthView;
it has to be:
_lightSpaceMatrix = _ligthView * _lightProjection;
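In other words, OpenTK treats vectors as rows, so transforms compose left to right in the order they are applied. A minimal sketch with the values from the question:
Matrix4 lightView = Matrix4.LookAt(_allLamps[0].Position, new Vector3(0f), new Vector3(0f, 1f, 0f));
Matrix4 lightProjection = Matrix4.CreateOrthographicOffCenter(-100f, 100f, -100f, 100f, 1f, 100f);
// view first, then projection; in GLSL/C++ convention this would be projection * view
Matrix4 lightSpaceMatrix = lightView * lightProjection;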

OpenTK Text rendering without GL.Begin()

I am writing an application with OpenTK and got to the point where I want to render text. From examples I patched together a version that creates a bitmap with the characters I need, using Graphics.DrawString(). That version works quite okay, but I am annoyed that it relies on GL.Begin(BeginMode.Quads) and GL.End() to render the text, which is why I want to use VAOs and VBOs from now on.
I am having a problem somewhere in my program, because I always get single-colored squares where the text should appear.
What I did so far to update my functions is the following:
I create the bitmap the same as before; I don't see why the problem should lie there.
After that I create a list of "Char" objects, each creating a VBO that stores the position and texture coordinates like this:
float u_step = (float)GlyphWidth / (float)TextureWidth;
float v_step = (float)GlyphHeight / (float)TextureHeight;
float u = (float)(character % GlyphsPerLine) * u_step;
float v = (float)(character / GlyphsPerLine) * v_step;
float x = -GlyphWidth / 2, y = 0;
_vertices = new float[]{
x/rect.Width, -GlyphHeight/rect.Height, u, v,
-x/rect.Width, -GlyphHeight/rect.Height, u + u_step, v,
-x/rect.Width, y/rect.Height, u + u_step, v + v_step,
x/rect.Width, -GlyphHeight/rect.Height, u, v,
-x/rect.Width, y/rect.Height, u + u_step, v + v_step,
x/rect.Width, y/rect.Height, u, v + v_step
};
_VBO = GL.GenBuffer();
GL.BindBuffer(BufferTarget.ArrayBuffer, _VBO);
GL.BufferData<float>(BufferTarget.ArrayBuffer, (IntPtr)(sizeof(float) * _vertices.Length),
_vertices, BufferUsageHint.DynamicDraw);
Next I generate a texture, set texture0 as active and bind the texture as TextureTarget.Texture2D. Then I load the bitmap into the texture like this:
_shader.Use();
_vertexLocation = _shader.GetAttribLocation("aPosition");
_texCoordLocation = _shader.GetAttribLocation("aTexCoord");
_fontTextureID = GL.GenTexture();
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, _fontTextureID);
using (var image = new Bitmap("container.png")) //_fontBitmapFilename))//
{
var data = image.LockBits(
new Rectangle(0, 0, image.Width, image.Height),
ImageLockMode.ReadOnly,
System.Drawing.Imaging.PixelFormat.Format32bppArgb);
GL.TexImage2D(TextureTarget.Texture2D,
0,
PixelInternalFormat.Rgba,
image.Width,
image.Height,
0,
OpenTK.Graphics.OpenGL.PixelFormat.Bgra,
PixelType.UnsignedByte,
data.Scan0);
}
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.Repeat);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.Repeat);
I create a VAO now, bind it, and bind one VBO to it. Then I set up the VertexAttribPointers to interpret the VBO data:
_VAO = GL.GenVertexArray();
GL.BindVertexArray(_VAO);
GL.BindBuffer(BufferTarget.ArrayBuffer, _charSheet[87].VBO);
GL.EnableVertexAttribArray(_vertexLocation);
GL.VertexAttribPointer(_vertexLocation, 2, VertexAttribPointerType.Float, false, 4 * sizeof(float), 0);
GL.EnableVertexAttribArray(_texCoordLocation);
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, _fontTextureID);
GL.VertexAttribPointer(_texCoordLocation, 2, VertexAttribPointerType.Float, false, 4 * sizeof(float), 2);
GL.BindVertexArray(0);
_shader.StopUse();
My Render function starts by using the shader, binding the VAO and enabling the VertexAttribArrays. Then I bind the VBO (a fixed one here for testing) and call the VertexAttribPointer functions again, so that the VAO updates its data (I might be wrong about that, too). At the end I draw two triangles, which make up a square where the letter should appear.
_shader.Use();
GL.BindVertexArray(_VAO);
GL.EnableVertexAttribArray(_texCoordLocation);
GL.EnableVertexAttribArray(_vertexLocation);
GL.BindBuffer(BufferTarget.ArrayBuffer, _charSheet[87].VBO);
GL.VertexAttribPointer(_vertexLocation, 2, VertexAttribPointerType.Float, false, 4 * sizeof(float), 0);
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, _fontTextureID);
GL.VertexAttribPointer(_texCoordLocation, 2, VertexAttribPointerType.Float, false, 4 * sizeof(float), 2);
GL.DrawArrays(PrimitiveType.Triangles, 0, 6);
GL.BindVertexArray(0);
_shader.StopUse();
I do not know where I'm going wrong in my program; maybe someone has a tip for me.
Vertex Shader
#version 330 core
layout(location = 0) in vec2 aPosition;
layout(location = 1) in vec2 aTexCoord;
out vec2 texCoord;
void main(void)
{
texCoord = aTexCoord;
gl_Position = vec4(aPosition, 0.0, 1.0);
}
Fragment Shader:
#version 330
out vec4 outputColor;
in vec2 texCoord;
uniform sampler2D texture0;
void main()
{
outputColor = texture(texture0, texCoord);
}
If a named buffer object is bound, then the last parameter of GL.VertexAttribPointer is treated as a byte offset into the buffer object's data store.
The offset therefore has to be 2*sizeof(float) rather than 2. Instead of
GL.VertexAttribPointer(_texCoordLocation, 2, VertexAttribPointerType.Float, false, 4 * sizeof(float), 2);
it has to be:
GL.VertexAttribPointer(_texCoordLocation, 2,
VertexAttribPointerType.Float, false, 4 * sizeof(float), 2*sizeof(float));
See glVertexAttribPointer and Vertex Specification.
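Note that the question passes the same offset of 2 in both the VAO setup and the render path, so both call sites need the fix. Applied to the setup code, it would look like this:
GL.BindVertexArray(_VAO);
GL.BindBuffer(BufferTarget.ArrayBuffer, _charSheet[87].VBO);
GL.EnableVertexAttribArray(_vertexLocation);
GL.VertexAttribPointer(_vertexLocation, 2, VertexAttribPointerType.Float, false, 4 * sizeof(float), 0);
GL.EnableVertexAttribArray(_texCoordLocation);
// the last argument is a byte offset: skip the two position floats
GL.VertexAttribPointer(_texCoordLocation, 2, VertexAttribPointerType.Float, false, 4 * sizeof(float), 2 * sizeof(float));
GL.BindVertexArray(0);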

SharpGL 2D Texturing issue

I am trying to texture a triangle with a runtime-generated texture using the SharpGL wrapper.
I can't figure out why the triangle remains untextured.
gl.GetError() placed in the draw loop returns 0, which means GL_NO_ERROR.
private void openGLControl_OpenGLDraw(object sender, OpenGLEventArgs args)
{
OpenGL gl = openGLControl.OpenGL;
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT );
gl.LoadIdentity();
gl.Color(1.0f,1.0f,1.0f,1.0f);
gl.Begin(OpenGL.GL_TRIANGLES);
gl.TexCoord(0, 1.0f);
gl.Vertex(0.0f, 0.0f);
gl.TexCoord(1.0f, 0f);
gl.Vertex(1.0f, 0f);
gl.TexCoord(1.0f, 1.0f);
gl.Vertex(1.0f, 1.0f);
gl.End();
}
private void openGLControl_OpenGLInitialized(object sender, OpenGLEventArgs args)
{
Random rnd = new Random();
OpenGL gl = openGLControl.OpenGL;
gl.ClearColor(0, 0, 0, 0);
gl.Enable(OpenGL.GL_TEXTURE_2D);
byte[] colors = new byte[256 * 256 * 4];
for (int i = 0; i < 256 * 256 * 4; i++)
{
colors[i] = (byte)rnd.Next(256);
}
uint[] textureID = new uint[1];
gl.GenTextures(1, textureID);
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, (int)OpenGL.GL_RGBA, 256, 256, 0, OpenGL.GL_RGBA, OpenGL.GL_BYTE, colors);
gl.BindTexture(OpenGL.GL_TEXTURE_2D, textureID[0]);
}
You have to call gl.BindTexture before calling gl.TexImage2D.
The reason for this is in the first argument of both functions. OpenGL has a state machine that keeps track of what is bound. When you call gl.TexImage2D, you are telling GL to upload the pixels to the texture currently bound to OpenGL.GL_TEXTURE_2D. gl.BindTexture binds the texture you generated to OpenGL.GL_TEXTURE_2D.
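Applied to the initialization code from the question, the order would be something like this (note also that GL_UNSIGNED_BYTE matches the byte[] data; the question used GL_BYTE):
uint[] textureID = new uint[1];
gl.GenTextures(1, textureID);
// bind first, so the upload below targets this texture ...
gl.BindTexture(OpenGL.GL_TEXTURE_2D, textureID[0]);
// ... then upload the pixels
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, (int)OpenGL.GL_RGBA, 256, 256, 0, OpenGL.GL_RGBA, OpenGL.GL_UNSIGNED_BYTE, colors);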
The reason the texture wasn't visible was the lack of these settings:
uint[] array = new uint[] { OpenGL.GL_NEAREST };
gl.TexParameterI(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MIN_FILTER, array);
gl.TexParameterI(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MAG_FILTER, array);
By default the minification filter uses mipmaps, which were never generated here.

OpenGL texture rendering - only upper left pixel is displayed

I'm working with C# and OpenTK.
Currently I only want to map a texture onto a triangle.
It almost works, but with the nearest texture filter the whole triangle is colored with only the upper-left pixel of the bmp image, and if I set the texture filter to linear the triangle still shows a single color, though it now seems to be mixed with the other pixels.
Can someone find the error in the code?
protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
GL.Enable(EnableCap.Texture2D);
GL.ClearColor(0.5F, 0.5F, 0.5F, 1.0F);
int vertexShaderHandle = GL.CreateShader(ShaderType.VertexShader);
int fragmentShaderHandle = GL.CreateShader(ShaderType.FragmentShader);
string vertexShaderSource = @"#version 400
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 uv;
out vec2 texture_uv;
void main()
{
gl_Position = vec4(position.xyz, 1);
texture_uv = uv;
}";
string fragmentShaderSource = @"#version 400
in vec2 texture_uv;
out vec3 outColor;
uniform sampler2D uniSampler;
void main()
{
outColor = texture( uniSampler, texture_uv ).rgb;
}";
GL.ShaderSource(vertexShaderHandle, vertexShaderSource);
GL.ShaderSource(fragmentShaderHandle, fragmentShaderSource);
GL.CompileShader(vertexShaderHandle);
GL.CompileShader(fragmentShaderHandle);
prgHandle = GL.CreateProgram();
GL.AttachShader(prgHandle, vertexShaderHandle);
GL.AttachShader(prgHandle, fragmentShaderHandle);
GL.LinkProgram(prgHandle);
GL.DetachShader(prgHandle, vertexShaderHandle);
GL.DetachShader(prgHandle, fragmentShaderHandle);
GL.DeleteShader(vertexShaderHandle);
GL.DeleteShader(fragmentShaderHandle);
uniSamplerLoc = GL.GetUniformLocation(prgHandle, "uniSampler");
texHandle = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture2D, texHandle);
Bitmap bmp = new Bitmap("C:/Users/Michael/Desktop/Test.bmp");
BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, bmpData.Width, bmpData.Height, 0,
OpenTK.Graphics.OpenGL4.PixelFormat.Bgra, PixelType.UnsignedByte, bmpData.Scan0);
bmp.UnlockBits(bmpData);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
vaoHandle = GL.GenVertexArray();
GL.BindVertexArray(vaoHandle);
vboHandle = GL.GenBuffer();
GL.BindBuffer(BufferTarget.ArrayBuffer, vboHandle);
float[] bufferData = { 0.5F, 1, 0, 1, 1,
0, 0, 0, 0, 0,
1, 0, 0, 1, 0 };
GL.BufferData<float>(BufferTarget.ArrayBuffer, (IntPtr) (15 * sizeof(float)), bufferData, BufferUsageHint.StaticDraw);
GL.EnableVertexAttribArray(0);
GL.EnableVertexAttribArray(1);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 5 * sizeof(float), 0);
GL.VertexAttribPointer(1, 2, VertexAttribPointerType.Float, false, 5 * sizeof(float), 3 * sizeof(float));
}
protected override void OnUnload(EventArgs e)
{
base.OnUnload(e);
GL.DeleteTexture(texHandle);
GL.DeleteProgram(prgHandle);
GL.DeleteBuffer(vboHandle);
GL.DeleteVertexArray(vaoHandle);
}
protected override void OnRenderFrame(FrameEventArgs e)
{
base.OnRenderFrame(e);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(prgHandle);
GL.Uniform1(uniSamplerLoc, texHandle);
GL.BindVertexArray(vaoHandle);
GL.DrawArrays(PrimitiveType.Triangles, 0, 3);
SwapBuffers();
}
EDIT:
I tried this:
protected override void OnRenderFrame(FrameEventArgs e)
{
base.OnRenderFrame(e);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(prgHandle);
GL.BindVertexArray(vaoHandle);
GL.ActiveTexture(TextureUnit.Texture3);
GL.BindTexture(TextureTarget.Texture2D, texHandle);
GL.Uniform1(uniSamplerLoc, 3);
GL.DrawArrays(PrimitiveType.Triangles, 0, 3);
SwapBuffers();
}
But nothing changed :(
The value of a sampler uniform variable needs to be the texture unit it should sample from. In your code, it is set to the texture name (aka texture id, aka texture handle) instead:
GL.Uniform1(uniSamplerLoc, texHandle);
The texture unit can be set with ActiveTexture(). When glBindTexture() is called, the value of the currently active texture unit determines which unit the texture is bound to. The default for the active texture unit is 0. So if you never called ActiveTexture(), the uniform should be set as:
GL.Uniform1(uniSamplerLoc, 0);
Just as a heads-up, another related source of errors is that the value of the uniform is a 0-based index of the texture unit, while the glActiveTexture() call takes an enum starting with GL_TEXTURE0. For example with the C bindings (not sure how exactly this looks with C# and OpenTK, but it should be similar enough), this would bind a texture to texture unit 3, and set a uniform sampler variable to use it:
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, texId);
glUniform1i(texUniformLoc, 3);
Note how GL_TEXTURE3 is used in the argument for glActiveTexture(), but a plain 3 in glUniform1i().
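For completeness, the OpenTK version of this pattern matches the EDIT in the question, which is correct on this point: TextureUnit.Texture3 corresponds to GL_TEXTURE3, while the uniform gets the plain index:
GL.ActiveTexture(TextureUnit.Texture3);             // GL_TEXTURE3
GL.BindTexture(TextureTarget.Texture2D, texHandle);
GL.Uniform1(uniSamplerLoc, 3);                      // plain 0-based unit index, not TextureUnit.Texture3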

SharpGL & Kinect - Rendering ColorImageFrame Bitmap to OpenGL Texture

I want to draw a simple 2D quad with a texture from the Kinect's video input but my code isn't working.
Here is my ColorFrameReady event:
void Sensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
ColorImageFrame cif = e.OpenColorImageFrame();
if (cif != null)
{
middleBitmapSource = cif.ToBitmapSource();
cif.Dispose();
}
}
This is my OpenGL draw method:
private void OpenGLControl_OpenGLDraw(object sender, OpenGLEventArgs args)
{
// Get the OpenGL instance being pushed to us.
gl = args.OpenGL;
// Clear the colour and depth buffers.
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
// Enable textures
gl.Enable(OpenGL.GL_TEXTURE_2D);
// Reset the modelview matrix.
gl.LoadIdentity();
// Load textures
if (middleBitmapSource != null)
{
middleBitmap = getBitmap(middleBitmapSource, quadMiddleWidth, quadMiddleHeight);
//middleBitmap = new System.Drawing.Bitmap("C:\\Users\\Bobble\\Pictures\\Crate.bmp"); // When I use this texture, it works fine
gl.GenTextures(1, textures);
gl.BindTexture(OpenGL.GL_TEXTURE_2D, textures[0]);
gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, 3, middleBitmap.Width, middleBitmap.Height, 0, OpenGL.GL_BGR, OpenGL.GL_UNSIGNED_BYTE,
middleBitmap.LockBits(new System.Drawing.Rectangle(0, 0, middleBitmap.Width, middleBitmap.Height),
System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format24bppRgb).Scan0);
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MIN_FILTER, OpenGL.GL_LINEAR);
gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MAG_FILTER, OpenGL.GL_LINEAR);
}
// Move back a bit
gl.Translate(0.0f, 0.0f, -25.0f);
gl.BindTexture(OpenGL.GL_TEXTURE_2D, textures[0]);
// Draw a quad.
gl.Begin(OpenGL.GL_QUADS);
// Draw centre quad
gl.TexCoord(0.0f, 0.0f); gl.Vertex(-(quadMiddleWidth / 2), quadMiddleHeight / 2, quadDepthFar);
gl.TexCoord(0.0f, 1.0f); gl.Vertex(quadMiddleWidth / 2, quadMiddleHeight / 2, quadDepthFar);
gl.TexCoord(1.0f, 1.0f); gl.Vertex(quadMiddleWidth / 2, -(quadMiddleHeight / 2), quadDepthFar);
gl.TexCoord(1.0f, 0.0f); gl.Vertex(-(quadMiddleWidth / 2), -(quadMiddleHeight / 2), quadDepthFar);
gl.End();
gl.Flush();
}
And here is a helper method which converts a BitmapSource to a Bitmap of a given size:
private System.Drawing.Bitmap getBitmap(BitmapSource bitmapSource, double rectangleWidth, double rectangleHeight)
{
double newWidthRatio = rectangleWidth / (double)bitmapSource.PixelWidth;
double newHeightRatio = ((rectangleWidth * bitmapSource.PixelHeight) / (double)bitmapSource.PixelWidth) / (double)bitmapSource.PixelHeight;
BitmapSource transformedBitmapSource = new TransformedBitmap(bitmapSource, new ScaleTransform(newWidthRatio, newHeightRatio));
int width = transformedBitmapSource.PixelWidth;
int height = transformedBitmapSource.PixelHeight;
//int stride = width * ((transformedBitmapSource.Format.BitsPerPixel + 7) / 8);
int stride = ((width * transformedBitmapSource.Format.BitsPerPixel + 31) & ~31) / 8; // See: http://stackoverflow.com/questions/1983781/why-does-bitmapsource-create-throw-an-argumentexception/1983886#1983886
byte[] bits = new byte[height * stride];
transformedBitmapSource.CopyPixels(bits, stride, 0);
unsafe
{
fixed (byte* pBits = bits)
{
IntPtr ptr = new IntPtr(pBits);
System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap(
width, height, stride, System.Drawing.Imaging.PixelFormat.Format32bppArgb, ptr);
return bitmap;
}
}
}
The quad renders but it is just white, no texture on it.
This turned out to be an issue with SharpGL, the OpenGL library I was using.
See http://sharpgl.codeplex.com/discussions/348278 for the discussion on CodePlex which led to the issue being resolved by the SharpGL developers.
I was getting a white picture too, but when I changed the arguments of TexImage2D to use widths and heights that are powers of 2, it worked. According to the documentation:
The width of the texture image (must be a power of 2, e.g 64).
The height of the texture image (must be a power of 2, e.g 32).
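If the source image does not come in power-of-two dimensions, one option is to round the target size up before calling TexImage2D and scale the image to fit. A hypothetical helper (the name and approach are mine, not part of the original answer):
static int NextPowerOfTwo(int n)
{
    // smallest power of two >= n, e.g. 480 -> 512
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}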
