Setup of matrix for instance shader - C#

I want to draw instanced cubes.
I can call GL.DrawArraysInstanced(PrimitiveType.Triangles, 0, 36, 2); successfully.
My problem is that all the cubes are drawn at the same position and with the same rotation. How can I change that individually for every cube?
To create different positions and so on, I need a matrix for each cube, right? I created this:
Matrix4[] Matrices = new Matrix4[]{
    Matrix4.Identity,                                      //do nothing
    Matrix4.Identity * Matrix4.CreateTranslation(1, 0, 0)  //move a little bit
};
GL.BindBuffer(BufferTarget.ArrayBuffer, matrixBuffer);
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(sizeof(float) * 16 * Matrices.Length), Matrices, BufferUsageHint.StaticDraw);
This should create a buffer where I can store my matrices. matrixBuffer is the handle to my buffer. I'm not sure if the size is correct; I took float * 4 (for a Vector4) * 4 (for the 4 vectors) * array length.
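As a quick sanity check (not part of the original post), that size is correct: a Matrix4 is 16 floats, i.e. 64 bytes, so the total can also be computed like this:
// 16 floats per Matrix4 = 64 bytes, times the number of matrices.
int bytesPerMatrix = System.Runtime.InteropServices.Marshal.SizeOf(typeof(Matrix4)); // 64
IntPtr totalSize = (IntPtr)(bytesPerMatrix * Matrices.Length);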
Draw Loop:
GL.BindBuffer(BufferTarget.ArrayBuffer, matrixBuffer);
GL.VertexAttribPointer(3, 4, VertexAttribPointerType.Float, false, 0, 0);
//GL.VertexAttribDivisor(3, ?);
GL.EnableVertexAttribArray(3);
GL.DrawArraysInstanced(PrimitiveType.Triangles, 0, 36, 2);
Any number higher than 4 for the size parameter in VertexAttribPointer(..., 4, VertexAttribPointerType.Float, ...) causes a crash. I thought I had to set that value to 16?
I'm not sure whether I need a VertexAttribDivisor; presumably I need this every 36th vertex, so should I call GL.VertexAttribDivisor(3, 36)? But when I do that, I can't see any cube.
My vertex shader:
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texCoord;
layout(location = 3) in mat4 instanceMatrix;
uniform mat4 projMatrix;
out vec4 vColor;
out vec2 texCoords[];
void main(){
gl_Position = instanceMatrix * projMatrix * vec4(position, 1.0);
//gl_Position = projMatrix * vec4(position, 1.0);
texCoords[0] = texCoord;
vColor = color;
}
So my questions:
What is wrong with my code?
Why can I set the size parameter of VertexAttribPointer only up to 4?
What is the correct value for VertexAttribDivisor?
Edit:
Based on the answer from Andon M. Coleman I made these changes:
GL.BindBuffer(BufferTarget.UniformBuffer, matrixBuffer);
GL.BufferData(BufferTarget.UniformBuffer, (IntPtr)(sizeof(float) * 16), IntPtr.Zero, BufferUsageHint.DynamicDraw);
//Bind Buffer to Binding Point
GL.BindBufferBase(BufferRangeTarget.UniformBuffer, matrixUniform, matrixBuffer);
matrixUniform = GL.GetUniformBlockIndex(shaderProgram, "instanceMatrix");
//Bind Uniform Block to Binding Point
GL.UniformBlockBinding(shaderProgram, matrixUniform, 0);
GL.BufferSubData(BufferTarget.UniformBuffer, IntPtr.Zero, (IntPtr)(sizeof(float) * 16 * Matrices.Length), Matrices);
And shader:
#version 330 core
layout(location = 0) in vec4 position; //gets vec3, fills w with 1.0
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texCoord;
uniform mat4 projMatrix;
uniform UniformBlock
{ mat4 instanceMatrix[]; };
out vec4 vColor;
out vec2 texCoords[];
void main(){
gl_Position = projMatrix * instanceMatrix[0] * position;
texCoords[0] = texCoord;
vColor = color;
}

You have discovered the hard way that vertex attribute locations are always 4-component.
The only way to make a 4x4 matrix a per-vertex attribute is if you concede that mat4 is 4x as large as vec4.
Consider the declaration of your mat4 vertex attribute:
layout(location = 3) in mat4 instanceMatrix;
You might naturally think that location 3 stores 16 floating-point values, but you would be wrong. Locations in GLSL are always 4-component. Thus, mat4 instanceMatrix actually occupies 4 different locations.
This is essentially how instanceMatrix actually works:
layout(location = 3) in vec4 instanceMatrix_Column0;
layout(location = 4) in vec4 instanceMatrix_Column1;
layout(location = 5) in vec4 instanceMatrix_Column2;
layout(location = 6) in vec4 instanceMatrix_Column3;
Fortunately, you do not have to write your shader that way; it is perfectly valid to have a mat4 vertex attribute.
However, you do have to write your C# code to behave that way:
GL.BindBuffer(BufferTarget.ArrayBuffer, matrixBuffer);
GL.VertexAttribPointer(3, 4, VertexAttribPointerType.Float, false, 64, 0); // c0
GL.VertexAttribPointer(4, 4, VertexAttribPointerType.Float, false, 64, 16); // c1
GL.VertexAttribPointer(5, 4, VertexAttribPointerType.Float, false, 64, 32); // c2
GL.VertexAttribPointer(6, 4, VertexAttribPointerType.Float, false, 64, 48); // c3
Likewise, you must setup your vertex attribute divisor for all 4 locations:
GL.VertexAttribDivisor (3, 1);
GL.VertexAttribDivisor (4, 1);
GL.VertexAttribDivisor (5, 1);
GL.VertexAttribDivisor (6, 1);
Incidentally, because vertex attributes are always 4-component, you can actually declare:
layout(location = 0) in vec4 position;
And stop writing ugly code like this:
gl_Position = instanceMatrix * projMatrix * vec4(position, 1.0);
This is because missing components in a vertex attribute are automatically expanded by OpenGL to the defaults (0.0, 0.0, 0.0, 1.0).
If you declare a vertex attribute as vec4 in the GLSL shader, but only supply data for XYZ, then W is automatically assigned a value of 1.0.
In actuality, you do not want to store your matrices per-vertex. That is a waste of multiple vertex attribute locations. What you may consider is an array of uniforms, or better yet a uniform buffer. You can index this array using the Vertex Shader pre-declared variable: gl_InstanceID. That is really the most sensible way to approach this, because you may find yourself using more properties per-instance than you have vertex attribute locations (minimum 16 in GL 3.3, only a few GPUs actually support more than 16).
Keep in mind that there is a limit to the number of vec4 uniforms a vertex shader can use in a single invocation, and that a mat4 counts as 4x the size of a vec4. Using a uniform buffer will allow you to draw many more instances than a plain old array of uniforms would.
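For reference, here is a minimal sketch of that uniform-buffer route. It assumes a GLSL 330 block declared as layout(std140) uniform InstanceBlock { mat4 instanceMatrix[128]; }; and indexed with instanceMatrix[gl_InstanceID] in the vertex shader; the block name and the fixed array size are illustrative, not code from the original post.
// Associate the named uniform block with binding point 0 (once, after linking).
int blockIndex = GL.GetUniformBlockIndex(shaderProgram, "InstanceBlock");
GL.UniformBlockBinding(shaderProgram, blockIndex, 0);
// Upload one mat4 per instance (std140 packs a mat4 array at 64 bytes per element).
GL.BindBuffer(BufferTarget.UniformBuffer, matrixBuffer);
GL.BufferData(BufferTarget.UniformBuffer, (IntPtr)(sizeof(float) * 16 * Matrices.Length), Matrices, BufferUsageHint.DynamicDraw);
GL.BindBufferBase(BufferRangeTarget.UniformBuffer, 0, matrixBuffer);
// Draw one instance per matrix.
GL.DrawArraysInstanced(PrimitiveType.Triangles, 0, 36, Matrices.Length);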

Related

Crop the top and bottom part of texture using opentk

I have a slider whose value ranges from zero to one. Using the value of this slider, I want to crop the image from the bottom up to half of the image, and from the top down to half of the image. I have done the first one (the bottom crop) by resizing the height of the GLControl. I am not sure that this is the proper way to achieve it, but it is working fine. I have no idea how to do the second part (cropping from the top to half of the image). Please help me with it.
Attached are the outputs I got when performing the bottom crop with the values 0, 0.4 and 1.0 respectively.
int FramereHeight = (glControl2.Height / 2) / 10; // crop the middle camera up to half of its height
if (NumericUpdownMiddleBottomCropVal != 0.0) // value ranges from zero to one
{
    glControl2.Height = glControl2.Height - Convert.ToInt32(NumericUpdownMiddleBottomCropVal * 10 * FramereHeight);
}
public void CreateShaders()
{
/***********Vert Shader********************/
vertShader = GL.CreateShader(ShaderType.VertexShader);
GL.ShaderSource(vertShader, @"attribute vec3 a_position;
varying vec2 vTexCoord;
void main() {
vTexCoord = (a_position.xy+1)/2 ;
gl_Position =vec4(a_position, 1);
}");
GL.CompileShader(vertShader);
/***********Frag Shader ****************/
fragShader = GL.CreateShader(ShaderType.FragmentShader);
GL.ShaderSource(fragShader, @"precision highp float;
uniform sampler2D sTexture_2;
varying vec2 vTexCoord;
uniform float sSelectedCropVal;
uniform float sSelectedTopCropVal;
uniform float sSelectedBottomCropVal;
void main ()
{
if (abs(vTexCoord.y-0.5) * 2.0 > 1.0 - 0.5*sSelectedCropVal)
discard;
vec4 color = texture2D (sTexture_2, vec2(vTexCoord.x, vTexCoord.y));
gl_FragColor =color;
}"); GL.CompileShader(fragShader);
}
I assume that sSelectedCropVal is in range [0.0, 1.0].
You can discard the fragments depending on the v coordinate:
if ((0.5 - vTexCoord.y) * 2.0 > 1.0 - sSelectedCropVal)
    discard;
if ((vTexCoord.y - 0.5) * 2.0 > 1.0 - sSelectedBottomCropVal)
    discard;
Complete shader:
precision highp float;
uniform sampler2D sTexture_2;
varying vec2 vTexCoord;
uniform float sSelectedCropVal;
uniform float sSelectedTopCropVal;
uniform float sSelectedBottomCropVal;
void main ()
{
    if ((0.5 - vTexCoord.y) * 2.0 > 1.0 - sSelectedCropVal)
        discard;
    if ((vTexCoord.y - 0.5) * 2.0 > 1.0 - sSelectedBottomCropVal)
        discard;
    vec4 color = texture2D(sTexture_2, vTexCoord.xy);
    gl_FragColor = color;
}
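On the C# side the two crop values then just need to be uploaded each frame. A minimal sketch (program, topCropVal and bottomCropVal are illustrative names, not from the original code):
// Both slider values are assumed to be in the range [0.0, 1.0].
GL.UseProgram(program);
GL.Uniform1(GL.GetUniformLocation(program, "sSelectedCropVal"), (float)topCropVal);           // top crop
GL.Uniform1(GL.GetUniformLocation(program, "sSelectedBottomCropVal"), (float)bottomCropVal);  // bottom crop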

Vertex Skinning Looks Messy on Real Android Device using Monodroid

I have no problem doing the "vertex skinning" for three-dimensional animation. All goes well when using the emulator (and Genymotion). However, when run on a real device (such as a Samsung or a Lenovo) it looks messy.
Screenshot (Emulator)
http://1drv.ms/1BzZ9Ib
Screenshot (Real device)
1drv.ms/1BzZ2we
Passing skin transform matrix
int location = ...;
int arrayCount = ...;
float[] skinTransform = ...;
GL.UniformMatrix4(location, arrayCount, false, skinTransform);
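For context, with Bones[20] in the shader the uploaded array has to hold 20 x 16 floats; a minimal sketch of that call (the names here are illustrative, not the poster's actual code):
// One 4x4 matrix (16 floats) per bone, 20 bones as declared in the shader.
float[] skinTransforms = new float[20 * 16];
// ... fill skinTransforms from the current pose ...
int location = GL.GetUniformLocation(program, "Bones");
GL.UniformMatrix4(location, 20, false, skinTransforms);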
GLSL vertex
uniform mat4 World;
uniform mat4 View;
uniform mat4 Projection;
uniform mat4 Bones[20];
attribute vec4 Position;
attribute vec4 BoneIndices;
attribute vec4 BoneWeights;
attribute vec2 UV;
varying vec4 v_Position;
varying vec2 v_UV;
void main()
{
mat4 skinTransform;
int boneIndex = int(BoneIndices.x);
skinTransform += Bones[boneIndex] * BoneWeights.x;
boneIndex = int(BoneIndices.y);
skinTransform += Bones[boneIndex] * BoneWeights.y;
boneIndex = int(BoneIndices.z);
skinTransform += Bones[boneIndex] * BoneWeights.z;
boneIndex = int(BoneIndices.w);
skinTransform += Bones[boneIndex] * BoneWeights.w;
vec4 skinPos = Position * skinTransform;
vec4 worldPosition = skinPos * World;
vec4 viewPosition = worldPosition * View;
v_Position = viewPosition * Projection;
v_UV = UV;
gl_Position = v_Position;
}
APK
http://1drv.ms/1BzYV3Q
Touch the screen to toggle the animation on/off.
Info
Xamarin.Android = 4.10.x.x
Emulator Target = API 16 or 4.1
Real device Target = API 16 or 4.1
App Target = API 10 or 2.3 (tested also in API 14 and API 16), results remain the same
Is there any solution to this problem?
Best regards and thank you.

Pass colors and positions to shader from separate VBO

I am trying to draw a VAO from separate VBOs. My goal is to get a different color for each vertex of my geometry, but with my code everything is still red.
I think my error is in this code fragment. Please help me find it. (I have skipped the program and matrix setup.)
Set up
vao = new int[1];
buffers = new int[2];
GL.GenVertexArrays(1, vao);
GL.GenBuffers(2, buffers);
GL.BindVertexArray(vao[0]);
GL.EnableVertexAttribArray(0);
GL.BindBuffer(BufferTarget.ArrayBuffer, buffers[0]);
unsafe
{
    fixed (void* verts = quad_strip3)
    {
        var prt = new IntPtr(verts);
        GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(quad_strip3.Length * sizeof(float)), prt,
            BufferUsageHint.StaticDraw);
    }
}
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 0, new IntPtr(0));
GL.BindBuffer(BufferTarget.ArrayBuffer, buffers[1]);
var r = new Random();
var colors = new float[quad_strip3.Length];
for (int i = 0; i < colors.Length; i++)
{
    colors[i] = (float)r.NextDouble();
}
unsafe
{
    fixed (void* verts = colors)
    {
        var prt = new IntPtr(verts);
        GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(colors.Length * sizeof(float)), prt,
            BufferUsageHint.StaticDraw);
    }
}
GL.VertexAttribPointer(1, 3, VertexAttribPointerType.Float, false, 0, new IntPtr(0));
Draw code
GL.BindVertexArray(vao[0]);
GL.DrawArrays(PrimitiveType.QuadStrip, 0, 26);
Vertex shader
#version 150 core
in vec3 in_Position;
in vec3 in_color;
out vec3 pass_Color;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
void main(void) {
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);
pass_Color = in_color;
}
Fragment shader
#version 150 core
in vec3 pass_Color;
out vec4 out_Color;
void main(void) {
out_Color = vec4(pass_Color, 1.0);
}
Khm... the solution was really simple. I had just missed calling EnableVertexAttribArray for the colors.
I inserted
GL.EnableVertexAttribArray(1);
before
GL.BindBuffer(BufferTarget.ArrayBuffer, buffers[1]);
and everything works.
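Putting it together, the attribute setup inside the VAO looks like this. This is a sketch: it assumes the linker assigned in_Position to location 0 and in_color to location 1, which is not guaranteed without layout qualifiers or a call to GL.BindAttribLocation before linking.
GL.BindVertexArray(vao[0]);
// Positions from buffers[0] -> attribute 0
GL.EnableVertexAttribArray(0);
GL.BindBuffer(BufferTarget.ArrayBuffer, buffers[0]);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 0, new IntPtr(0));
// Colors from buffers[1] -> attribute 1 (this Enable call was the missing piece)
GL.EnableVertexAttribArray(1);
GL.BindBuffer(BufferTarget.ArrayBuffer, buffers[1]);
GL.VertexAttribPointer(1, 3, VertexAttribPointerType.Float, false, 0, new IntPtr(0));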

OpenGL 2d sprite not rendering (OpenTK)

I have been working on a renderer, and it was working OK for one texture but would not render a second. I seem to have changed something and it stopped rendering anything but the background color. I am not sure what I changed and I cannot get it back to the way it was. I try not to post lots of code at once on here, but I do not know enough OpenGL to isolate the issue. If you can offer any help or hints, I would greatly appreciate it!
My guess is that it is coming either from the way I am binding the texture coordinates or from the shader.
The following is the code:
Shaders:
string vertexShaderSource = @"
#version 330
layout (location = 0) in vec3 Position;
uniform mat4 projectionmatrix;
uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
attribute vec2 texcoord;
varying vec2 f_texcoord;
uniform vec2 pos;
void main()
{
f_texcoord = texcoord;
gl_Position = projectionmatrix * vec4(Position, 1);
//gl_Position = projectionmatrix * vec4(Position.xyz, 1.0);
}
";
string fragmentShaderSource = @"
#version 330
out vec4 FragColor;
varying vec2 f_texcoord;
uniform sampler2D mytexture;
void main()
{
FragColor = texture2D(mytexture, f_texcoord);
//FragColor = Vec4(0,0,0, 1);
}";
Vertexes:
Vector2[] g_vertex_buffer_data ={
new Vector2(-1.0f, 1.0f),
new Vector2(1.0f, 1.0f),
new Vector2(1.0f, -1.0f),
new Vector2(-1.0f, -1.0f)
};
Vector2[] g_texture_coords = {
new Vector2(0.0f, 0.0f),
new Vector2(1.0f, 0.0f),
new Vector2(1.0f, -1.0f),
new Vector2(0.0f, -1.0f)
};
Shader setup:
shaderProgramHandle = GL.CreateProgram();
vertexShaderHandle = GL.CreateShader(ShaderType.VertexShader);
fragmentShaderHandle = GL.CreateShader(ShaderType.FragmentShader);
GL.ShaderSource(vertexShaderHandle, vertexShaderSource);
GL.ShaderSource(fragmentShaderHandle, fragmentShaderSource);
GL.CompileShader(vertexShaderHandle);
GL.CompileShader(fragmentShaderHandle);
GL.AttachShader(shaderProgramHandle, vertexShaderHandle);
GL.AttachShader(shaderProgramHandle, fragmentShaderHandle);
GL.LinkProgram(shaderProgramHandle);
GL.UseProgram(shaderProgramHandle);
Basic setup and binding:
GL.ClearColor(Color4.Red);
//GL.LoadMatrix(ref projectionMatrix);
GL.GenBuffers(2, out vertexbuffer);
GL.BindBuffer(BufferTarget.ArrayBuffer, vertexbuffer);
GL.BufferData<Vector2>(BufferTarget.ArrayBuffer,
new IntPtr(g_vertex_buffer_data.Length * Vector2.SizeInBytes),
g_vertex_buffer_data, BufferUsageHint.StaticDraw);
//Shader Setup
CreateShaders();
Matrix4 projectionMatrix = Matrix4.CreateOrthographic(control.Width, control.Height, -1, 1);
vertexShaderProjectionHandle = GL.GetUniformLocation(shaderProgramHandle, "projectionmatrix");
GL.UniformMatrix4(vertexShaderProjectionHandle, false, ref projectionMatrix);
GL.EnableVertexAttribArray(0);
GL.BindBuffer(BufferTarget.ArrayBuffer, vertexbuffer);
GL.VertexAttribPointer(0, 2, VertexAttribPointerType.Float, false, 0, 0);
Loading and binding the texture:
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.Enable(EnableCap.Blend);
GL.ActiveTexture(TextureUnit.Texture0 + texture.textureID);
GL.BindTexture(TextureTarget.Texture2D, texture.textureID);
textureHandle = GL.GetAttribLocation(shaderProgramHandle, "texcoord");
GL.GenBuffers(1, out textureBufferHandle);
GL.BindBuffer(BufferTarget.ArrayBuffer, textureBufferHandle);
GL.BufferData<Vector2>(BufferTarget.ArrayBuffer, new IntPtr(Vector2.SizeInBytes * 4), g_texture_coords, BufferUsageHint.StaticDraw);
Matrix Setup:
//rotation += MathHelper.DegreesToRadians(1);
float displayRatio = ((float)control.Height / (float)control.Width);
Matrix4 ViewMatrix = Matrix4.Identity;
int ViewMatrixHandle = GL.GetUniformLocation(shaderProgramHandle, "ViewMatrix");
GL.UniformMatrix4(ViewMatrixHandle, true, ref ViewMatrix);
Matrix4 ModelMatrix = Matrix4.Identity;
int modelMatrixHandle = GL.GetUniformLocation(shaderProgramHandle, "ModelMatrix");
GL.UniformMatrix4(modelMatrixHandle, true, ref ModelMatrix);
int posHandle = GL.GetUniformLocation(shaderProgramHandle, "pos");
GL.Uniform2(posHandle, ref offset);
Rendering
GL.Viewport(0, 0, control.Width, control.Height);
//GL.Enable(EnableCap.Texture2D);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.BindVertexArray(0);
GL.EnableVertexAttribArray(textureHandle);
GL.BindBuffer(BufferTarget.ArrayBuffer, textureBufferHandle);
GL.VertexAttribPointer(textureHandle, 2, VertexAttribPointerType.Float, false, 0, 0);
GL.DrawArrays(BeginMode.Quads, 0, 4);
GL.Flush();
control.SwapBuffers();
You are using the old attribute qualifier to declare texcoord in your vertex shader. This is invalid in GLSL 330, and I suspect if you read the program/shader info logs when you compile/link your GLSL program it includes this information in the log.
To correct this, replace attribute vec2 texcoord with in vec2 texcoord. Then you should get a valid location when you query the attribute location, which is required to set your vertex attribute pointer correctly.
varying is also invalid in GLSL 330. You need to declare f_texcoord as out in your vertex shader and in in your fragment shader for your program to properly link.
There is no error-detection code at all in your code listings. You should read the manual pages for glValidateProgram (...), glGetProgramInfoLog (...) and glGetShaderInfoLog (...), because I am pretty sure the GLSL compiler would have told you your exact problem if you had read its output log.
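In OpenTK that check is only a few lines. A minimal sketch (the exact enum names vary slightly between OpenTK versions):
int status;
GL.CompileShader(vertexShaderHandle);
GL.GetShader(vertexShaderHandle, ShaderParameter.CompileStatus, out status);
if (status == 0)
    Console.WriteLine(GL.GetShaderInfoLog(vertexShaderHandle));
// ... do the same for fragmentShaderHandle ...
GL.LinkProgram(shaderProgramHandle);
GL.GetProgram(shaderProgramHandle, GetProgramParameterName.LinkStatus, out status);
if (status == 0)
    Console.WriteLine(GL.GetProgramInfoLog(shaderProgramHandle));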

Modelview Difference between GLSL and HLSL?

I'm wondering if there is a difference between GLSL and HLSL mathematics.
I'm using a self-made engine which works fine with OpenTK. My SharpDX implementation gets a bit further every day.
I'm currently working on the ModelViewProjection matrix.
To see if it works, I use a simple project which works fine with OpenTK.
So I changed the shader code from GLSL to HLSL because the rest of the program uses the engine functions. The program did not work; I couldn't see the geometry. So I changed the ModelView matrix and the projection matrix to the identity matrix, and afterwards it worked and I saw the geometry. Then I changed the GLSL a bit because I wanted the GLSL code to be similar to the HLSL, and I also set those matrices to the identity, but afterwards I did not see anything; it didn't work... So I'm stuck... Does any of you have an idea?
Anyway, long story short:
My HLSL Shader Code
public string Vs = @"cbuffer Variables : register(b0){
float4 Testfarbe;
float4x4 FUSEE_MVP;
}
struct VS_IN
{
float4 pos : POSITION;
float4 tex : TEXCOORD;
float4 normal : NORMAL;
};
struct PS_IN
{
float4 pos : SV_POSITION;
float4 col : COLOR;
float4 tex : TEXCOORD;
float4 normal : NORMAL;
};
PS_IN VS( VS_IN input )
{
PS_IN output = (PS_IN)0;
input.pos.w = 1.0f;
output.pos = mul(input.pos,FUSEE_MVP);
output.col = Testfarbe;
/*output.col = FUSEE_MV._m00_m01_m02_m03;*/
/* output.normal = input.normal;
output.tex = input.tex;*/
/* if (FUSEE_MV._m00 == 4.0f)
output.col = float4(1,0,0,1);
else
output.col = float4(0,0,1,1);*/
return output;
}
";
string Ps = @"
SamplerState pictureSampler;
Texture2D imageFG;
struct PS_IN
{
float4 pos : SV_POSITION;
float4 col : COLOR;
float4 tex : TEXCOORD;
float4 normal : NORMAL;
};
float4 PS( PS_IN input ) : SV_Target
{
return input.col;
/*return imageFG.Sample(pictureSampler,input.tex);*/
}";
So I changed my old working OpenTK project to see where the difference is between OpenTK and SharpDX with regard to the math calculations.
The GLSL code:
public string Vs = @"
/* Copies incoming vertex color without change.
* Applies the transformation matrix to vertex position.
*/
attribute vec4 fuColor;
attribute vec3 fuVertex;
attribute vec3 fuNormal;
attribute vec2 fuUV;
varying vec4 vColor;
varying vec3 vNormal;
varying vec2 vUV;
uniform mat4 FUSEE_MVP;
uniform mat4 FUSEE_ITMV;
void main()
{
gl_Position = FUSEE_MVP * vec4(fuVertex, 1.0);
/*vNormal = mat3(FUSEE_ITMV[0].xyz, FUSEE_ITMV[1].xyz, FUSEE_ITMV[2].xyz) * fuNormal;*/
vUV = fuUV;
}";
public string Ps = @"
/* Copies incoming fragment color without change. */
#ifdef GL_ES
precision highp float;
#endif
uniform vec4 vColor;
varying vec3 vNormal;
void main()
{
gl_FragColor = vColor * dot(vNormal, vec3(0, 0, 1));
}";
In the main code itself I only read an OBJ file and set the identity matrix:
public override void Init()
{
Mesh = MeshReader.LoadMesh(#"Assets/Teapot.obj.model");
//ShaderProgram sp = RC.CreateShader(Vs, Ps);
sp = RC.CreateShader(Vs, Ps);
_vTextureParam = sp.GetShaderParam("Testfarbe");//vColor
}
public override void RenderAFrame()
{
...
var mtxRot = float4x4.CreateRotationY(_angleHorz) * float4x4.CreateRotationX(_angleVert);
var mtxCam = float4x4.LookAt(0, 200, 500, 0, 0, 0, 0, 1, 0);
// first mesh
RC.ModelView = float4x4.CreateTranslation(0, -50, 0) * mtxRot * float4x4.CreateTranslation(-150, 0, 0) * mtxCam;
RC.SetShader(sp);
//mapping
RC.SetShaderParam(_vTextureParam, new float4(0.0f, 1.0f, 0.0f, 1.0f));
RC.Render(Mesh);
Present();
}
public override void Resize()
{
RC.Viewport(0, 0, Width, Height);
float aspectRatio = Width / (float)Height;
RC.Projection = float4x4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspectRatio, 1, 5000);
}
Both programs are shown side by side.
What I should also add: as soon as the values of my ModelView identity matrix are bigger than 1.5, I don't see anything in my window. Does anyone know what might be causing this?
I edited the post and the image so you can see a bigger difference.
Earlier in this post I had the identity matrix. If I use the identity matrix with this OBJ file
v 0.0 0.5 0.5
v 0.5 0.0 0.5
v -0.5 0.0 0.5
vt 1 0 0
vt 0 1 0
vt 0 0 0
f 1/2 2/3 3/1
I saw the triangle in my SharpDX project but not in my OpenTK one. But I think the teapot is better for showing the difference between the two projects where only the shader code is different! I mean, I could have done something wrong in the SharpDX implementation for this engine, but let's assume everything there is right. At least I hope so, unless you guys tell me the shader code is just wrong ;)
I hope I have described my problem clearly enough for you to understand it.
OK, so you have to transpose the matrix...
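In other words (assuming the engine keeps its matrices row-major on the CPU side, like OpenTK's Matrix4): HLSL constant buffers default to column_major packing, so a row-major matrix has to be transposed before it is written into the Direct3D constant buffer, or declared row_major in the cbuffer. A minimal sketch with OpenTK types, purely for illustration:
// Illustrative row-major matrices built on the CPU (row-vector convention: v' = v * M).
Matrix4 modelView = Matrix4.LookAt(0, 200, 500, 0, 0, 0, 0, 1, 0);
Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 1.0f, 1, 5000);
Matrix4 mvp = modelView * projection;
// Transpose before uploading so that mul(input.pos, FUSEE_MVP) in the HLSL vertex shader
// sees the intended layout; alternatively declare "row_major float4x4 FUSEE_MVP;" in the
// cbuffer and skip the transpose.
Matrix4 mvpForHlsl = Matrix4.Transpose(mvp);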
