I have a small problem with rendering in my SharpDX Direct3D 11 app.
I have been testing rendering the scene to a texture and then drawing that texture onto the back buffer... but unfortunately the render texture does not contain the primitives that should be drawn. The texture is only filled with the clear color.
Whole project on github: https://github.com/Kordi3112/SharpDXTest11
Main code part with rendering methods:
public override void Render()
{
//Camera
var proj = Matrix.OrthoLH(3 * Form.Bounds.Width / Form.Bounds.Height, 3, 0.01f, 100f);
var view = Matrix.LookAtLH(new Vector3(0, 0, -10), new Vector3(0, 0, 20), Vector3.UnitY);
var viewProj = Matrix.Multiply(view, proj);
var world = Matrix.Identity;
var worldViewProj = world * viewProj;
worldViewProj.Transpose();
//Update wvp matrix
Context.UpdateSubresource(ref worldViewProj, ContantBuffer);
DrawOnTexture();
//Set BackBuffer as render target
Context.OutputMerger.SetTargets(depthView, renderView);
// Clear views
Context.ClearDepthStencilView(depthView, DepthStencilClearFlags.Depth, 1.0f, 0);
Context.ClearRenderTargetView(renderView, Color.Pink);
//Set TextureColor Shader
Effect2.ApplyShader(Context);
//Set Buffers
Context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer2, Utilities.SizeOf<VertexPositionColorTexture>(), 0));
Context.InputAssembler.SetIndexBuffer(IndexBuffer, Format.R32_UInt, 0);
//Set Texture to Shader
Context.PixelShader.SetShaderResource(0, RenderTexture.ShaderResourceView);
//Draw
Context.DrawIndexed(6, 0, 0);
// Present!
SwapChain.Present(0, PresentFlags.None);
}
private void DrawOnTexture()
{
//Set Color Shader
Effect1.ApplyShader(Context);
//Set Buffers
Context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, Utilities.SizeOf<VertexPositionColor>(), 0));
Context.InputAssembler.SetIndexBuffer(IndexBuffer, Format.R32_UInt, 0);
//Set Target
RenderTexture.SetRenderTarget(Context, depthView);
//Clear Targets - Green Background
RenderTexture.ClearRenderTarget(Context, depthView, 0, 1, 0, 1);
//Draw on RenderTarget
Context.DrawIndexed(6, 0, 0);
}
After the call to Context.DrawIndexed(6, 0, 0) in DrawOnTexture(), the primitive should be drawn.
[Image: what the code above produces]
[Image: what I wanted to get]
What's wrong with my code?
I'm sure the problem is not the matrix or the camera. When I modify the code to render the primitive directly to the back buffer, it draws normally.
public override void Render()
{
//Camera
var proj = Matrix.OrthoLH(3 * Form.Bounds.Width / Form.Bounds.Height, 3, 0.01f, 100f);
var view = Matrix.LookAtLH(new Vector3(0, 0, -10), new Vector3(0, 0, 20), Vector3.UnitY);
var viewProj = Matrix.Multiply(view, proj);
var world = Matrix.Identity;
var worldViewProj = world * viewProj;
worldViewProj.Transpose();
//Update wvp matrix
Context.UpdateSubresource(ref worldViewProj, ContantBuffer);
//DrawOnTexture();
//Set BackBuffer as render target
Context.OutputMerger.SetTargets(depthView, renderView);
// Clear views
Context.ClearDepthStencilView(depthView, DepthStencilClearFlags.Depth, 1.0f, 0);
Context.ClearRenderTargetView(renderView, Color.Pink);
//Set Color Shader
Effect1.ApplyShader(Context);
//Set Buffers
Context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VertexBuffer, Utilities.SizeOf<VertexPositionColor>(), 0));
Context.InputAssembler.SetIndexBuffer(IndexBuffer, Format.R32_UInt, 0);
//Set Texture to Shader
//Context.PixelShader.SetShaderResource(0, RenderTexture.ShaderResourceView);
//Draw
Context.DrawIndexed(6, 0, 0);
// Present!
SwapChain.Present(0, PresentFlags.None);
}
[Image: output when rendering directly to the back buffer]
Vertex Buffers declaration:
//Position Color
VertexBuffer = Buffer.Create(Device, BindFlags.VertexBuffer, new[] {
new VertexPositionColor(new Vector4(-1, -1, 0, 1), Color.Red.ToVector4()),
new VertexPositionColor(new Vector4(-1, 1, 0, 1), Color.Green.ToVector4()),
new VertexPositionColor(new Vector4(1, 1, 0, 1), Color.Blue.ToVector4()),
new VertexPositionColor(new Vector4(1, -1, 0, 1), Color.Yellow.ToVector4())
});
//Position Color Texture
VertexBuffer2 = Buffer.Create(Device, BindFlags.VertexBuffer, new[] {
new VertexPositionColorTexture(new Vector4(-1, -1, 0, 1), Color.White.ToVector4(), new Vector2(0,1)),
new VertexPositionColorTexture(new Vector4(-1, 1, 0, 1), Color.White.ToVector4(),new Vector2(0,0)),
new VertexPositionColorTexture(new Vector4(1, 1, 0, 1), Color.White.ToVector4(),new Vector2(1,0)),
new VertexPositionColorTexture(new Vector4(1, -1, 0, 1), Color.White.ToVector4(),new Vector2(1,1))
});
IndexBuffer = Buffer.Create(Device, BindFlags.IndexBuffer, new[] {
0,1,2,
0,2,3
});
Related
I've been trying to create a plane mesh in Unity using code, and I've come across a very interesting problem. I created an int[], filled it up with some values, and its length is somehow zero. I've never encountered anything this quirky, so I'd enjoy a bit of help.
mesh.triangles = new int[]
{
4, 6, 5, 5, 6, 7
};
... // Not important stuff
Debug.Log(mesh.triangles.Length);
I don't know what is happening, so I really haven't tried anything. But in the console there is an error message stating: "Failed setting triangles. Some indices are referencing out of bounds vertices. IndexCount: 6, VertexCount: 4." This is probably really important, but I don't understand some parts of the message (especially the last part). And if it makes a difference, I have an array concatenation method being called to add the first triangles to these ones. I initially identified this problem when half of my mesh still wasn't appearing. I would really appreciate help; thanks.
Edit:
To avoid confusion, I'm just going to paste my entire method.
private void CreateQuad(ref Mesh mesh, Vector3 offset, bool first)
{
if (first)
{
mesh.vertices = new Vector3[]
{
Vector3.zero, Vector3.right, Vector3.forward, new Vector3(1, 0, 1)
};
mesh.triangles = new int[]
{
0, 2, 1, 1, 2, 3
};
mesh.normals = new Vector3[]
{
Vector3.back, Vector3.back, Vector3.back, Vector3.back
};
mesh.tangents = new Vector4[]
{
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1)
};
mesh.uv = new Vector2[]
{
Vector2.zero, Vector2.right, Vector2.up, Vector2.one
};
}
else if (!first)
{
mesh.vertices = new Vector3[]
{
Vector3.zero + offset,
Vector3.right + offset,
Vector3.forward + offset,
new Vector3(1, 0, 1) + offset
};
mesh.triangles = new int[]
{
4, 6, 5, 5, 6, 7
};
mesh.normals = new Vector3[]
{
Vector3.back, Vector3.back, Vector3.back, Vector3.back
};
mesh.tangents = new Vector4[]
{
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1)
};
mesh.uv = new Vector2[]
{
Vector2.zero, Vector2.right, Vector2.up, Vector2.one
};
Debug.Log(mesh.triangles.Length);
}
}
You only have FOUR vertices!
mesh.vertices = new Vector3[]
{
Vector3.zero + offset,
Vector3.right + offset,
Vector3.forward + offset,
new Vector3(1, 0, 1) + offset
};
So the indices 4, 6, 5, 5, 6, 7 are all invalid! If you have only four vertices, the only valid indices are 0, 1, 2 and 3
=> Unity simply rejects them all. You should have taken that hint from the error you get:
Failed setting triangles. Some indices are referencing out of bounds vertices. IndexCount: 6, VertexCount: 4
Now it is a bit unclear what exactly you are trying to achieve here, but:
Either you want to REPLACE the vertices. In that case there is no reason to set new triangles etc. at all; it is enough to connect them once:
private void CreateQuad(ref Mesh mesh, Vector3 offset, bool first)
{
if (first)
{
mesh.vertices = new Vector3[]
{
Vector3.zero, Vector3.right, Vector3.forward, new Vector3(1, 0, 1)
};
mesh.triangles = new int[]
{
0, 2, 1, 1, 2, 3
};
mesh.normals = new Vector3[]
{
Vector3.back, Vector3.back, Vector3.back, Vector3.back
};
mesh.tangents = new Vector4[]
{
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1)
};
mesh.uv = new Vector2[]
{
Vector2.zero, Vector2.right, Vector2.up, Vector2.one
};
}
else if (!first)
{
mesh.vertices = new Vector3[]
{
Vector3.zero + offset,
Vector3.right + offset,
Vector3.forward + offset,
new Vector3(1, 0, 1) + offset
};
}
}
The other properties can simply be left untouched, since you only want to update the vertex positions.
Or you actually wanted to ADD more faces. In that case you want to append to the existing arrays instead:
private void CreateQuad(ref Mesh mesh, Vector3 offset, bool first)
{
if (first)
{
mesh.vertices = new Vector3[]
{
Vector3.zero, Vector3.right, Vector3.forward, new Vector3(1, 0, 1)
};
mesh.triangles = new int[]
{
0, 2, 1, 1, 2, 3
};
mesh.normals = new Vector3[]
{
Vector3.back, Vector3.back, Vector3.back, Vector3.back
};
mesh.tangents = new Vector4[]
{
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1),
new Vector4(1, 0, 0, -1)
};
mesh.uv = new Vector2[]
{
Vector2.zero, Vector2.right, Vector2.up, Vector2.one
};
}
else if (!first)
{
// first get the already existing verts etc.
var oldVerts = mesh.vertices;
var oldTris = mesh.triangles;
// create new vertices and triangles arrays with additional space for the new quad
var newVerts = new Vector3[oldVerts.Length + 4];
var newTris = new int[oldTris.Length + 6];
// copy over the existing vertices and triangles
Array.Copy(oldVerts, newVerts, oldVerts.Length);
Array.Copy(oldTris, newTris, oldTris.Length);
// then append the new vertices
newVerts[oldVerts.Length + 0] = Vector3.zero + offset;
newVerts[oldVerts.Length + 1] = Vector3.right + offset;
newVerts[oldVerts.Length + 2] = Vector3.forward + offset;
newVerts[oldVerts.Length + 3] = new Vector3(1, 0, 1) + offset;
// append the new triangles
newTris[oldTris.Length + 0] = oldVerts.Length + 0;
newTris[oldTris.Length + 1] = oldVerts.Length + 2;
newTris[oldTris.Length + 2] = oldVerts.Length + 1;
newTris[oldTris.Length + 3] = oldVerts.Length + 1;
newTris[oldTris.Length + 4] = oldVerts.Length + 2;
newTris[oldTris.Length + 5] = oldVerts.Length + 3;
// get the min and max points for filling the uvs (not the most efficient way probably but it is what it is ^^)
// we later want to spread out the UV values linear between 0 (min) and 1 (max) on the given vertices
// start from the first vertex so the bounds are not artificially anchored at the origin
var min = newVerts[0];
var max = newVerts[0];
foreach(var vertex in newVerts)
{
min = Vector3.Min(min, vertex);
max = Vector3.Max(max, vertex);
}
// also fill new tangents and normals and uvs (if really necessary)
var newNormals = new Vector3[newVerts.Length];
var newTangents = new Vector4[newVerts.Length];
var newUVs = new Vector2[newVerts.Length];
for(var i = 0; i < newVerts.Length; i++)
{
var vertex = newVerts[i];
newUVs[i] = new Vector2((vertex.x - min.x) / (max.x - min.x), (vertex.z - min.z) / (max.z - min.z));
newNormals[i] = Vector3.back;
newTangents[i] = new Vector4(1, 0, 0, -1);
}
// finally set them all back
mesh.vertices = newVerts;
mesh.triangles = newTris;
mesh.normals = newNormals;
mesh.tangents = newTangents;
mesh.uv = newUVs;
}
}
You first need to set the vertex array before altering the triangles. As the Unity documentation puts it: "It is recommended to assign a triangle array after assigning the vertex array, in order to avoid out of bounds errors."
mesh.vertices = new Vector3[] { new Vector3(-1,0,1), new Vector3(-1,0,-1),
new Vector3(1,0,-1), new Vector3(1,0,1) };
mesh.triangles = new int[] {0,1,2,0,2,3};
I have generated a 2D hexagon in my scene view; however, I am now confused as to how to make the shape three-dimensional. Any help with the maths or the method used to calculate this would be greatly appreciated. I have only just started with C#, and I feel this is a tall order given the lack of recent, relevant content on OpenTK: most of the calls used in most tutorials are now obsolete.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Drawing;
using System.IO;
using OpenTK;
using OpenTK.Input;
using OpenTK.Graphics.OpenGL;
using OpenTK.Graphics;
namespace SimpleGame
{
class Game : GameWindow
{
public Game() : base(1280, 720, new GraphicsMode(32, 24, 0, 4)) // screen resolution
{
}
int pgmID;
int vsID;
int fsID;
int attribute_vcol;
int attribute_vpos;
int uniform_mview;
int vbo_position;
int vbo_color;
int vbo_mview;
int ibo_elements;
Vector3[] vertdata;
Vector3[] coldata;
Matrix4[] mviewdata;
int[] indicedata;
float time = 0.0f;
void initProgram()
{
pgmID = GL.CreateProgram();
loadShader("F:/Year 1/Semester 2/Simulation In Games/SimpleGame/SimpleGame/vs.glsl", ShaderType.VertexShader, pgmID, out vsID);
loadShader("F:/Year 1/Semester 2/Simulation In Games/SimpleGame/SimpleGame/fs.glsl", ShaderType.FragmentShader, pgmID, out fsID);
GL.LinkProgram(pgmID);
Console.WriteLine(GL.GetProgramInfoLog(pgmID));
attribute_vpos = GL.GetAttribLocation(pgmID, "vPosition");
attribute_vcol = GL.GetAttribLocation(pgmID, "vColor");
uniform_mview = GL.GetUniformLocation(pgmID, "modelview");
GL.GenBuffers(1, out vbo_position);
GL.GenBuffers(1, out vbo_color);
GL.GenBuffers(1, out vbo_mview);
GL.GenBuffers(1, out ibo_elements);
}
protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
initProgram();
vertdata = new Vector3[] {
//new Vector3(0.0f,0.0f,0.0f), // center
//new Vector3(2.0f, 0f,0f), // right hand side
//new Vector3(0f,2f,0f), // up
new Vector3(0.0f,0.0f,-0.8f), // center point
new Vector3(2.0f,0.0f,-0.8f), // right
new Vector3(1.0f,1.7f,-0.8f), // top right
new Vector3(-1.0f,1.7f,-0.8f), // top left
new Vector3(-2.0f,0.0f,-0.8f), // left
new Vector3(-1.0f,-1.7f,-0.8f), // bottom left
new Vector3(1.0f,-1.7f,-0.8f), // bottom right
};
indicedata = new int[]{
// triangle fan around the center vertex (0)
0, 1, 2,
0, 2, 3,
0, 3, 4,
0, 4, 5,
0, 5, 6,
0, 6, 1,
};
coldata = new Vector3[] { new Vector3(1f, 0f, 0f),
new Vector3( 0f, 0f, 1f),
new Vector3( 0f, 1f, 0f),new Vector3(1f, 0f, 0f),
new Vector3( 0f, 0f, 1f),
new Vector3( 0f, 1f, 0f),new Vector3(1f, 0f, 0f),
new Vector3( 0f, 0f, 1f)};
mviewdata = new Matrix4[]{
Matrix4.Identity
};
Title = "Hello OpenTK!";
GL.ClearColor(Color.DarkTurquoise);
GL.PointSize(5f);
}
void loadShader(String filename, ShaderType type, int program, out int address)
{
address = GL.CreateShader(type);
using (StreamReader sr = new StreamReader(filename))
{
GL.ShaderSource(address, sr.ReadToEnd());
}
GL.CompileShader(address);
GL.AttachShader(program, address);
Console.WriteLine(GL.GetShaderInfoLog(address));
}
protected override void OnRenderFrame(FrameEventArgs e)
{
base.OnRenderFrame(e);
GL.Viewport(0, 0, Width, Height);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.Enable(EnableCap.DepthTest);
GL.EnableVertexAttribArray(attribute_vpos);
GL.EnableVertexAttribArray(attribute_vcol);
GL.DrawElements(BeginMode.Triangles, indicedata.Length, DrawElementsType.UnsignedInt, 0);
GL.DisableVertexAttribArray(attribute_vpos);
GL.DisableVertexAttribArray(attribute_vcol);
GL.Flush();
SwapBuffers();
}
protected override void OnUpdateFrame(FrameEventArgs e)
{
base.OnUpdateFrame(e);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo_position);
GL.BufferData<Vector3>(BufferTarget.ArrayBuffer, (IntPtr)(vertdata.Length * Vector3.SizeInBytes), vertdata, BufferUsageHint.StaticDraw);
GL.VertexAttribPointer(attribute_vpos, 3, VertexAttribPointerType.Float, false, 0, 0);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo_color);
GL.BufferData<Vector3>(BufferTarget.ArrayBuffer, (IntPtr)(coldata.Length * Vector3.SizeInBytes), coldata, BufferUsageHint.StaticDraw);
GL.VertexAttribPointer(attribute_vcol, 3, VertexAttribPointerType.Float, true, 0, 0);
time += (float)e.Time;
mviewdata[0] = Matrix4.CreateRotationY(0.2f * time) * Matrix4.CreateRotationX(0.0f * time) * Matrix4.CreateTranslation(0.0f, -1.0f, -4.0f) *
Matrix4.CreatePerspectiveFieldOfView(1.3f, ClientSize.Width / (float)ClientSize.Height, 1.0f, 40.0f); // rotation
GL.UniformMatrix4(uniform_mview, false, ref mviewdata[0]);
GL.UseProgram(pgmID);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo_elements);
GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indicedata.Length * sizeof(int)), indicedata, BufferUsageHint.StaticDraw);
}
}
}
I don't think there is a built-in method for creating prisms. A prism based on a heptagon (a 7-sided polygon) is made up of two heptagons (one on the bottom, one on the top) plus 7 vertical 4-sided polygons. So the algorithm for extruding a prism from a horizontal polygon would be (in pseudocode):
create_prism(bottom : polygon, height : float) : body
var top : polygon
top = bottom.Clone()
for all vertices v of top
v.z = v.z + height
end
var b = new body
b.Add(bottom)
b.Add(top)
for i : integer = 0 to bottom.Count - 1
var j : integer
j = (i + 1) modulo bottom.Count
var side = new polygon[4]
side[0] = bottom[i]
side[1] = bottom[j]
side[2] = top[j]
side[3] = top[i]
b.Add(side)
end
return b
end
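For reference, here is a rough C# translation of that pseudocode, as a sketch only: it assumes the polygon is passed as an ordered ring of OpenTK Vector3 border vertices (no center point), that the extrusion is along Z to match the vertex data in the question, and that the output is plain vertex and index arrays that could be uploaded the same way as vertdata and indicedata above. The CreatePrism name and the winding choices are assumptions, not part of any library.

// Hypothetical helper: extrude a flat polygon outline into prism geometry.
// "outline" is the ordered ring of border vertices; "height" is the extrusion along Z.
// Assumes "using System.Collections.Generic;" and OpenTK's Vector3, as in the question.
static void CreatePrism(Vector3[] outline, float height,
                        out Vector3[] vertices, out int[] indices)
{
    int n = outline.Length;
    vertices = new Vector3[2 * n];
    for (int i = 0; i < n; i++)
    {
        vertices[i] = outline[i];                                 // bottom ring
        vertices[n + i] = outline[i] + new Vector3(0, 0, height); // top ring
    }

    var tris = new List<int>();
    // caps: triangle fans around vertex 0 of each ring
    for (int i = 1; i < n - 1; i++)
    {
        tris.AddRange(new[] { 0, i + 1, i });         // bottom cap
        tris.AddRange(new[] { n, n + i, n + i + 1 }); // top cap
    }
    // sides: one quad (two triangles) per edge of the outline
    for (int i = 0; i < n; i++)
    {
        int j = (i + 1) % n;
        tris.AddRange(new[] { i, j, n + j, i, n + j, n + i });
    }
    indices = tris.ToArray();
}

The winding here is arbitrary; if faces come out inverted under back-face culling, flip the index order per triangle.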
I'm working with C# and OpenTK.
Currently I only want to map a texture on a triangle.
It seems to be working, but with the nearest texture filter the whole triangle is colored only with the upper-left pixel color of the BMP image, and if I set the texture filter to linear the triangle still shows only one color, though it now seems to be mixed with the other pixels.
Can someone find the error in the code ?
protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
GL.Enable(EnableCap.Texture2D);
GL.ClearColor(0.5F, 0.5F, 0.5F, 1.0F);
int vertexShaderHandle = GL.CreateShader(ShaderType.VertexShader);
int fragmentShaderHandle = GL.CreateShader(ShaderType.FragmentShader);
string vertexShaderSource = @"#version 400
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 uv;
out vec2 texture_uv;
void main()
{
gl_Position = vec4(position.xyz, 1);
texture_uv = uv;
}";
string fragmentShaderSource = @"#version 400
in vec2 texture_uv;
out vec3 outColor;
uniform sampler2D uniSampler;
void main()
{
outColor = texture( uniSampler, texture_uv ).rgb;
}";
GL.ShaderSource(vertexShaderHandle, vertexShaderSource);
GL.ShaderSource(fragmentShaderHandle, fragmentShaderSource);
GL.CompileShader(vertexShaderHandle);
GL.CompileShader(fragmentShaderHandle);
prgHandle = GL.CreateProgram();
GL.AttachShader(prgHandle, vertexShaderHandle);
GL.AttachShader(prgHandle, fragmentShaderHandle);
GL.LinkProgram(prgHandle);
GL.DetachShader(prgHandle, vertexShaderHandle);
GL.DetachShader(prgHandle, fragmentShaderHandle);
GL.DeleteShader(vertexShaderHandle);
GL.DeleteShader(fragmentShaderHandle);
uniSamplerLoc = GL.GetUniformLocation(prgHandle, "uniSampler");
texHandle = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture2D, texHandle);
Bitmap bmp = new Bitmap("C:/Users/Michael/Desktop/Test.bmp");
BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, bmpData.Width, bmpData.Height, 0,
OpenTK.Graphics.OpenGL4.PixelFormat.Bgra, PixelType.UnsignedByte, bmpData.Scan0);
bmp.UnlockBits(bmpData);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
vaoHandle = GL.GenVertexArray();
GL.BindVertexArray(vaoHandle);
vboHandle = GL.GenBuffer();
GL.BindBuffer(BufferTarget.ArrayBuffer, vboHandle);
float[] bufferData = { 0.5F, 1, 0, 1, 1,
0, 0, 0, 0, 0,
1, 0, 0, 1, 0 };
GL.BufferData<float>(BufferTarget.ArrayBuffer, (IntPtr) (15 * sizeof(float)), bufferData, BufferUsageHint.StaticDraw);
GL.EnableVertexAttribArray(0);
GL.EnableVertexAttribArray(1);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 5 * sizeof(float), 0);
GL.VertexAttribPointer(1, 2, VertexAttribPointerType.Float, false, 5 * sizeof(float), 3 * sizeof(float));
}
protected override void OnUnload(EventArgs e)
{
base.OnUnload(e);
GL.DeleteTexture(texHandle);
GL.DeleteProgram(prgHandle);
GL.DeleteBuffer(vboHandle);
GL.DeleteVertexArray(vaoHandle);
}
protected override void OnRenderFrame(FrameEventArgs e)
{
base.OnRenderFrame(e);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(prgHandle);
GL.Uniform1(uniSamplerLoc, texHandle);
GL.BindVertexArray(vaoHandle);
GL.DrawArrays(PrimitiveType.Triangles, 0, 3);
SwapBuffers();
}
EDIT:
I tried this:
protected override void OnRenderFrame(FrameEventArgs e)
{
base.OnRenderFrame(e);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.UseProgram(prgHandle);
GL.BindVertexArray(vaoHandle);
GL.ActiveTexture(TextureUnit.Texture3);
GL.BindTexture(TextureTarget.Texture2D, texHandle);
GL.Uniform1(uniSamplerLoc, 3);
GL.DrawArrays(PrimitiveType.Triangles, 0, 3);
SwapBuffers();
}
But nothing changed :(
The value of a sampler uniform variable needs to be the texture unit it should sample from. In your code, it is set to the texture name (aka texture id, aka texture handle) instead:
GL.Uniform1(uniSamplerLoc, texHandle);
The texture unit can be set with ActiveTexture(). When glBindTexture() is called, the value of the currently active texture unit determines which unit the texture is bound to. The default for the active texture unit is 0. So if you never called ActiveTexture(), the uniform should be set as:
GL.Uniform1(uniSamplerLoc, 0);
Just as a heads-up, another related source of errors is that the value of the uniform is a 0-based index of the texture unit, while the glActiveTexture() call takes an enum starting with GL_TEXTURE0. For example with the C bindings (not sure how exactly this looks with C# and OpenTK, but it should be similar enough), this would bind a texture to texture unit 3, and set a uniform sampler variable to use it:
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, texId);
glUniform1i(texUniformLoc, 3);
Note how GL_TEXTURE3 is used in the argument for glActiveTexture(), but a plain 3 in glUniform1i().
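As a sketch of what this presumably looks like in OpenTK, reusing the texHandle and uniSamplerLoc fields from the question's code:

GL.ActiveTexture(TextureUnit.Texture3);              // select texture unit 3
GL.BindTexture(TextureTarget.Texture2D, texHandle);  // bind the texture to that unit
GL.Uniform1(uniSamplerLoc, 3);                       // the sampler uniform gets the plain index 3, not texHandle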
I've been working with OpenGL using the OpenTK library for .NET, writing my own engine. I placed 3 different objects, one spinning cube and 2 adjacent cubes. Everything seemed to work fine until I changed the color of the quad on top of the objects.
I'm rendering cubes with a green top; on the left, the block at the back is being rendered over the block in front. I can't figure out where I'm going wrong with this; when the camera is set to look from the other side, it renders correctly.
The following is the related code in classes with irrelevant or unrelated methods, properties and attributes omitted:
GameState.cs
class GameState : State
{
// TEMP: Test Block
SimpleBlock block;
int i = 0;
public override void Render()
{
base.Render();
// Set OpenGL Settings
GL.Viewport(0, 0, 1024, 768);
GL.Enable(EnableCap.CullFace);
// Reset the Matrices
Matrices.ClearMatrices();
// Set Camera Settings (Field of view in radians)
Matrices.ProjectionMatrix = Matrix4.CreatePerspectiveFieldOfView((float)Math.PI / 2, (1024.0f / 768.0f), 1, 1000);
// Create the Camera
// this has to be in reverse
Matrix4 viewMatrix = Matrix4.CreateRotationX((float)Math.PI/8);
viewMatrix = viewMatrix.Translate(0, -2, -4);
// Multiply it with the ModelView (Which at this point is set to a value that we can just use = and it has the same result)
Matrices.ModelViewMatrix = viewMatrix;
// Render the Block
Matrices.Push();
Matrices.ModelViewMatrix = Matrices.ModelViewMatrix.Translate(2, 0, 0);
Matrices.ModelViewMatrix = Matrices.ModelViewMatrix.Translate(0.5f, 0, 0.5f);
Matrices.ModelViewMatrix = Matrices.ModelViewMatrix.Rotate(0, i / 40.0f, 0);
block.Render();
Matrices.Pop();
// Render the Block Again Twice
Matrices.Push();
Matrices.ModelViewMatrix = Matrices.ModelViewMatrix.Translate(-2, 0, 0);
Matrices.ModelViewMatrix = Matrices.ModelViewMatrix.Translate(0.5f, 0, 0.5f);
block.Render();
Matrices.ModelViewMatrix = Matrices.ModelViewMatrix.Translate(0, 0, -1);
block.Render();
Matrices.Pop();
// Increment Rotation Test Variable
i++;
}
}
SimpleBlock.cs
class SimpleBlock : IBlock
{
public void Render()
{
// Send the Shader Parameters to the GPU
Shader.Bind();
Shader.SendMatrices();
// Begin Rendering the Polys
GL.Begin(BeginMode.Triangles);
// Front Quad
Shader.SetColor(Color4.SaddleBrown);
GL.Normal3(0, 0, 1);
GLUtils.QuadVertices(
new Vector3(-0.5f, 1, 0.5f),
new Vector3(-0.5f, 0, 0.5f),
new Vector3( 0.5f, 1, 0.5f),
new Vector3( 0.5f, 0, 0.5f));
// Right Quad
GL.Normal3(1, 0, 0);
GLUtils.QuadVertices(
new Vector3(0.5f, 1, 0.5f),
new Vector3(0.5f, 0, 0.5f),
new Vector3(0.5f, 1, -0.5f),
new Vector3(0.5f, 0, -0.5f));
// Back Quad
GL.Normal3(0, 0, -1);
GLUtils.QuadVertices(
new Vector3( 0.5f, 1, -0.5f),
new Vector3( 0.5f, 0, -0.5f),
new Vector3(-0.5f, 1, -0.5f),
new Vector3(-0.5f, 0, -0.5f));
// Left Quad
GL.Normal3(-1, 0, 0);
GLUtils.QuadVertices(
new Vector3(-0.5f, 1, -0.5f),
new Vector3(-0.5f, 0, -0.5f),
new Vector3(-0.5f, 1, 0.5f),
new Vector3(-0.5f, 0, 0.5f));
// Bottom Quad
GL.Normal3(0, -1, 0);
GLUtils.QuadVertices(
new Vector3(-0.5f, 0, 0.5f),
new Vector3(-0.5f, 0, -0.5f),
new Vector3( 0.5f, 0, 0.5f),
new Vector3( 0.5f, 0, -0.5f));
// Top Quad
Shader.SetColor(Color4.Green);
GL.Normal3(0, 1, 0);
GLUtils.QuadVertices(
new Vector3(-0.5f, 1, -0.5f),
new Vector3(-0.5f, 1, 0.5f),
new Vector3(0.5f, 1, -0.5f),
new Vector3(0.5f, 1, 0.5f));
// Done!
GL.End();
}
}
BasicFragment.glfs
#version 130
// MultiColor Attribute
in vec4 multiColor;
// Output color
out vec4 gl_FragColor;
void main()
{
// Set fragment
gl_FragColor = multiColor;
}
BasicVertex.glvs
#version 130
// Transformation Matrices
uniform mat4 ProjectionMatrix;
uniform mat4 ModelViewMatrix;
// Vertex Position Attribute
in vec3 VertexPos;
// MultiColor Attributes
in vec4 MultiColor;
out vec4 multiColor;
void main()
{
// Process Colors
multiColor = MultiColor;
// Process Vertex
gl_Position = ProjectionMatrix * ModelViewMatrix * vec4(VertexPos.x, VertexPos.y, VertexPos.z, 1);
}
MainWindow.cs
// Extends OpenTK's GameWindow Class
class MainWindow : GameWindow
{
public MainWindow()
: base(1024, 768, new GraphicsMode(32, 0, 0, 4))
{
this.Title = "Trench Wars";
this.WindowBorder = WindowBorder.Fixed;
this.ClientSize = new Size(1024, 768);
// Set VSync On
this.VSync = VSyncMode.Adaptive;
}
protected override void OnRenderFrame(FrameEventArgs e)
{
base.OnRenderFrame(e);
// Clear Screen
GL.ClearColor(Color4.CornflowerBlue);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
// Do State-Specific Rendering
StateEngine.Render();
// Pull a Wicked Bluffing move in Poker
GL.Flush();
// Swap Buffers
this.SwapBuffers();
}
}
It seems like you forgot to enable depth testing. glEnable(GL_DEPTH_TEST) before rendering the geometry is your friend (or, given the language bindings you're using, GL.Enable(EnableCap.DepthTest);).
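As a rough placement sketch (not a verified fix), the enable call can sit next to the culling setup that is already in GameState.Render above:

// Set OpenGL Settings
GL.Viewport(0, 0, 1024, 768);
GL.Enable(EnableCap.CullFace);
GL.Enable(EnableCap.DepthTest);   // without this, later draws simply overwrite earlier ones

Note also that MainWindow creates its window with new GraphicsMode(32, 0, 0, 4); if that second argument really leaves you with zero depth bits, you may additionally need to request a depth buffer (e.g. new GraphicsMode(32, 24, 0, 4)) so depth testing has something to test against.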
Is there any way to make a primitive and use it over and over again? For example, if I make one cube, can I create 100 of them and make a 10x10 grid? I've tried using a for loop and updating the x and z coordinates on each iteration, but it only moves the one cube that's created at the beginning. My class was created from an example in a book. I know how to move the cube around the area by changing the coordinates in the PositionCube method. What can I do in my main game class that will allow me to create a simple 10x10 grid?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
namespace Cube_Chaser
{
class Cube
{
private GraphicsDevice device;
private Texture2D texture;
public Vector3 location;
private Vector3 position;
private VertexBuffer cubeVertexBuffer;
private List<VertexPositionTexture> vertices = new List<VertexPositionTexture>();
public Cube(GraphicsDevice graphicsDevice, Vector3 playerLocation, float minDistance, Texture2D texture)
{
device = graphicsDevice;
this.texture = texture;
PositionCube(playerLocation, minDistance);
BuildFace(new Vector3(0, 0, 0), new Vector3(0, 1, 1));
BuildFace(new Vector3(0, 0, 1), new Vector3(1, 1, 1));
BuildFace(new Vector3(1, 0, 1), new Vector3(1, 1, 0));
BuildFace(new Vector3(1, 0, 0), new Vector3(0, 1, 0));
BuildFaceHorizontal(new Vector3(0, 1, 0), new Vector3(1, 1, 1));
BuildFaceHorizontal(new Vector3(0, 0, 1), new Vector3(1, 0, 0));
cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly);
cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
this.position = position;
}
private void BuildFace(Vector3 p1, Vector3 p2)
{
vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 1, 0));
vertices.Add(BuildVertex(p1.X, p2.Y, p1.Z, 1, 1));
vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 0, 1));
vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 0, 1));
vertices.Add(BuildVertex(p2.X, p1.Y, p2.Z, 0, 0));
vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 1, 0));
}
private void BuildFaceHorizontal(Vector3 p1, Vector3 p2)
{
vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 0, 1));
vertices.Add(BuildVertex(p2.X, p1.Y, p1.Z, 1, 1));
vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 1, 0));
vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 0, 1));
vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 1, 0));
vertices.Add(BuildVertex(p1.X, p1.Y, p2.Z, 0, 0));
}
private VertexPositionTexture BuildVertex(float x, float y, float z, float u, float v)
{
return new VertexPositionTexture(new Vector3(x, y, z), new Vector2(u, v));
}
public void PositionCube(Vector3 playerLocation, float minDistance)
{
location = new Vector3(.5f, .5f, .5f);
}
public void Draw(Camera camera, BasicEffect effect)
{
effect.VertexColorEnabled = false;
effect.TextureEnabled = true;
effect.Texture = texture;
Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
Matrix scale = Matrix.CreateScale(0.05f);
Matrix translate = Matrix.CreateTranslation(location);
effect.World = center * scale * translate;
effect.View = camera.View;
effect.Projection = camera.Projection;
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Apply();
device.SetVertexBuffer(cubeVertexBuffer);
device.DrawPrimitives(PrimitiveType.TriangleList, 0, cubeVertexBuffer.VertexCount / 3);
}
}
}
}
I took @nico-schetler's answer and created the classes for you.
Cube.cs
class Cube
{
private GraphicsDevice device;
private VertexBuffer cubeVertexBuffer;
public Cube(GraphicsDevice graphicsDevice)
{
device = graphicsDevice;
var vertices = new List<VertexPositionTexture>();
BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1));
BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1));
BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0));
BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0));
BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1));
BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0));
cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly);
cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
}
private void BuildFace(List<VertexPositionTexture> vertices, Vector3 p1, Vector3 p2)
{
vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 1, 0));
vertices.Add(BuildVertex(p1.X, p2.Y, p1.Z, 1, 1));
vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 0, 1));
vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 0, 1));
vertices.Add(BuildVertex(p2.X, p1.Y, p2.Z, 0, 0));
vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 1, 0));
}
private void BuildFaceHorizontal(List<VertexPositionTexture> vertices, Vector3 p1, Vector3 p2)
{
vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 0, 1));
vertices.Add(BuildVertex(p2.X, p1.Y, p1.Z, 1, 1));
vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 1, 0));
vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 0, 1));
vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 1, 0));
vertices.Add(BuildVertex(p1.X, p1.Y, p2.Z, 0, 0));
}
private VertexPositionTexture BuildVertex(float x, float y, float z, float u, float v)
{
return new VertexPositionTexture(new Vector3(x, y, z), new Vector2(u, v));
}
public void Draw( BasicEffect effect)
{
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Apply();
device.SetVertexBuffer(cubeVertexBuffer);
device.DrawPrimitives(PrimitiveType.TriangleList, 0, cubeVertexBuffer.VertexCount / 3);
}
}
}
CubeDrawable.cs
public class DrawableList<T> : DrawableGameComponent
{
private BasicEffect effect;
private Camera camera;
private class Entity
{
public Vector3 Position { get; set; }
public Matrix Orientation { get; set; }
public Texture2D Texture { get; set; }
}
private Cube cube;
private List<Entity> entities = new List<Entity>();
public DrawableList (Game game, Camera camera, BasicEffect effect)
: base( game )
{
this.effect = effect;
cube = new Cube (game.GraphicsDevice);
this.camera = camera;
}
public void Add( Vector3 position, Matrix orientation, Texture2D texture )
{
entities.Add (new Entity() {
Position = position,
Orientation = orientation,
Texture = texture
});
}
public override void Draw (GameTime gameTime )
{
base.Draw (gameTime);
foreach (var item in entities) {
effect.VertexColorEnabled = false;
effect.TextureEnabled = true;
effect.Texture = item.Texture;
Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
Matrix scale = Matrix.CreateScale(0.05f);
Matrix translate = Matrix.CreateTranslation(item.Position);
effect.World = center * scale * translate;
effect.View = camera.View;
effect.Projection = camera.Projection;
cube.Draw (effect);
}
}
}
Usage
camera = new Camera (graphics.GraphicsDevice);
effect = new BasicEffect (graphics.GraphicsDevice);
cubes = new DrawableList<Cube> (this, camera, effect);
Components.Add (cubes);
for (int i=0 ; i < 50; i++)
{
cubes.Add (new Vector3( i*0.5f, 50.0f, 50.0f), Matrix.Identity, logoTexture);
}
You should separate the logical representation of the cube from the physical one. The physical representation would be the vertex buffer etc. This is the same for every cube. You can affect the rendering through e.g. world transforms (as you already did).
The logical representation is your cube class. You will need 100 instances of it (each with its own position, scale, etc.). The logical representation can reference the physical one, so there is no need to have more than one vertex buffer.
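A minimal sketch of that idea, reusing the Cube class, effect and camera from the answer above (the grid spacing and direct use of Cube instead of DrawableList are just assumptions for illustration): one shared vertex buffer drawn once per logical placement.

var sharedCube = new Cube(graphics.GraphicsDevice);   // physical representation: one vertex buffer
var positions = new List<Vector3>();
for (int x = 0; x < 10; x++)
    for (int z = 0; z < 10; z++)
        positions.Add(new Vector3(x, 0, z));          // logical representation: 100 placements

foreach (var p in positions)
{
    effect.World = Matrix.CreateTranslation(p);       // per-instance world transform
    effect.View = camera.View;
    effect.Projection = camera.Projection;
    sharedCube.Draw(effect);                          // the same buffer is reused for every cube
}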