I have three textures that should be displayed side by side on an OpenGL control so that each occupies an equal third of it: texture1 from 0 to 0.33 of the GLControl, texture2 from 0.33 to 0.66, and texture3 in the remaining third. I have done it as shown below, but the right portion of the middle image gets a blurred area. Please help me correct it.
private void CreateShaders()
{
    /***********Vert Shader********************/
    vertShader = GL.CreateShader(ShaderType.VertexShader);
    GL.ShaderSource(vertShader, @"attribute vec3 a_position;
        varying vec2 vTexCoordIn;
        void main() {
            vTexCoordIn = (a_position.xy + 1) / 2;
            gl_Position = vec4(a_position, 1);
        }");
    GL.CompileShader(vertShader);

    /***********Frag Shader ****************/
    fragShader = GL.CreateShader(ShaderType.FragmentShader);
    GL.ShaderSource(fragShader, @"
        uniform sampler2D sTexture1;
        uniform sampler2D sTexture2;
        uniform sampler2D sTexture3;
        varying vec2 vTexCoordIn;
        void main ()
        {
            vec2 vTexCoord = vec2(vTexCoordIn.x, vTexCoordIn.y);
            if ( vTexCoord.x < 0.3 )
                gl_FragColor = texture2D(sTexture1, vec2(vTexCoord.x*2.0, vTexCoord.y));
            else if ( vTexCoord.x >= 0.3 && vTexCoord.x <= 0.6 )
                gl_FragColor = texture2D(sTexture2, vec2(vTexCoord.x*2.0, vTexCoord.y));
            else
                gl_FragColor = texture2D(sTexture3, vec2(vTexCoord.x*2.0, vTexCoord.y));
        }");
    GL.CompileShader(fragShader);
}
To restate the requirement: texture1 should cover 0 to 0.33 of the GLControl, texture2 should cover 0.33 to 0.66, and texture3 the remaining third.
If the texture coordinates are in range [0, 0.33] then sTexture1 has to be drawn and the texture coordinates have to be mapped from [0, 0.33] to [0, 1]:
if ( vTexCoord.x < 1.0/3.0 )
gl_FragColor = texture2D(sTexture1, vec2(vTexCoord.x * 3.0, vTexCoord.y));
If the texture coordinates are in range [0.33, 0.66] then sTexture2 has to be drawn and the texture coordinates have to be mapped from [0.33, 0.66] to [0, 1]:
else if ( vTexCoord.x >= 1.0/3.0 && vTexCoord.x < 2.0/3.0 )
gl_FragColor = texture2D(sTexture2, vec2(vTexCoord.x * 3.0 - 1.0, vTexCoord.y));
If the texture coordinates are in range [0.66, 1] then sTexture3 has to be drawn and the texture coordinates have to be mapped from [0.66, 1] to [0, 1]:
else if ( vTexCoord.x >= 2.0/3.0 )
gl_FragColor = texture2D(sTexture3, vec2(vTexCoord.x * 3.0 - 2.0, vTexCoord.y));
gl_FragColor = texture2D (sTexture3, vec2(vTexCoord.x*2.0, vTexCoord.y));
^^^^
Multiplying the x-coordinate by 2.0 leads to values exceeding 1.0 once x >= 0.5. If your sampler is set up with CLAMP_TO_EDGE (which seems to be the case), this results in repeatedly sampling the same texel on the edge of the texture, which appears as the smearing / blurring you mentioned.
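The fix is the mapping shown above: multiply by 3.0 and subtract the segment offset instead of multiplying by 2.0. Putting the three branches together, the corrected fragment shader main function would look like this (a sketch assembled from the snippets above):
void main ()
{
    vec2 vTexCoord = vTexCoordIn;
    if ( vTexCoord.x < 1.0/3.0 )
        gl_FragColor = texture2D(sTexture1, vec2(vTexCoord.x * 3.0, vTexCoord.y));
    else if ( vTexCoord.x < 2.0/3.0 )
        gl_FragColor = texture2D(sTexture2, vec2(vTexCoord.x * 3.0 - 1.0, vTexCoord.y));
    else
        gl_FragColor = texture2D(sTexture3, vec2(vTexCoord.x * 3.0 - 2.0, vTexCoord.y));
}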
Related
I have a slider whose value ranges from zero to one. Using this value, I want to crop the image from the bottom up to half of the image, and also from the top down to half of the image. I have done the first one (bottom crop) by resizing the height of the GLControl. I am not sure this is the proper way to achieve it, but it works fine. I have no idea how to do the second part (cropping from the top to half of the image). Please help me with it.
Attached are the outputs I got when performing the bottom crop with the values 0, 0.4 and 1.0 respectively.
int FramereHeight = (glControl2.Height / 2) / 10; // crop the middle camera up to half of its height
if (NumericUpdownMiddleBottomCropVal != 0.0) // value ranges from zero to one
{
    glControl2.Height = glControl2.Height - Convert.ToInt32(NumericUpdownMiddleBottomCropVal * 10 * FramereHeight);
}
public void CreateShaders()
{
    /***********Vert Shader********************/
    vertShader = GL.CreateShader(ShaderType.VertexShader);
    GL.ShaderSource(vertShader, @"attribute vec3 a_position;
        varying vec2 vTexCoord;
        void main() {
            vTexCoord = (a_position.xy + 1) / 2;
            gl_Position = vec4(a_position, 1);
        }");
    GL.CompileShader(vertShader);

    /***********Frag Shader ****************/
    fragShader = GL.CreateShader(ShaderType.FragmentShader);
    GL.ShaderSource(fragShader, @"precision highp float;
        uniform sampler2D sTexture_2;
        varying vec2 vTexCoord;
        uniform float sSelectedCropVal;
        uniform float sSelectedTopCropVal;
        uniform float sSelectedBottomCropVal;
        void main ()
        {
            if (abs(vTexCoord.y - 0.5) * 2.0 > 1.0 - 0.5 * sSelectedCropVal)
                discard;
            vec4 color = texture2D(sTexture_2, vec2(vTexCoord.x, vTexCoord.y));
            gl_FragColor = color;
        }");
    GL.CompileShader(fragShader);
}
I assume that sSelectedCropVal is in range [0.0, 1.0].
You can discard the fragments depending on the v coordinate:
if ((0.5 - vTexCoord.y) * 2.0 > 1.0-sSelectedCropVal)
discard;
if ((vTexCoord.y - 0.5) * 2.0 > 1.0-sSelectedBottomCropVal)
discard;
Complete shader:
precision highp float;
uniform sampler2D sTexture_2;
varying vec2 vTexCoord;
uniform float sSelectedCropVal;
uniform float sSelectedTopCropVal;
uniform float sSelectedBottomCropVal;
void main ()
{
if ((0.5 - vTexCoord.y) * 2.0 > 1.0-sSelectedCropVal)
discard;
if ((vTexCoord.y - 0.5) * 2.0 > 1.0-sSelectedBottomCropVal)
discard;
vec4 color = texture2D(sTexture_2, vTexCoord.xy);
gl_FragColor = color;
}
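On the C# side, the crop values have to be uploaded to those uniforms before drawing. A minimal sketch, assuming `program` is the handle of the linked shader program and both slider values are available as floats (the names here are illustrative, except NumericUpdownMiddleBottomCropVal from the question):
GL.UseProgram(program);
int topCropLoc = GL.GetUniformLocation(program, "sSelectedCropVal");
int bottomCropLoc = GL.GetUniformLocation(program, "sSelectedBottomCropVal");
GL.Uniform1(topCropLoc, (float)NumericUpdownMiddleTopCropVal);       // hypothetical top-crop slider value
GL.Uniform1(bottomCropLoc, (float)NumericUpdownMiddleBottomCropVal); // bottom-crop slider value from the question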
Using the shader code below I can display frames from three cameras on a single OpenGL control. The control should start at the vertical center of the screen and run from the left edge across the full screen width; that is, the control's width is the screen width and its height is half the screen height. The problem is that there is an area not covered by the textures, and it appears as the ClearColor (which is set to blue).
if (uv.y > 1.0)
discard;
Can I remove/delete this extra area from GLControl?
int y = Screen.PrimaryScreen.Bounds.Height - this.PreferredSize.Height;
glControl1.Location = new Point(0, y/2);
private void OpenGL_SizeChanged(object sender, EventArgs e)
{
glControl1.Width = this.Width;
glControl1.Height = this.Height/2;
}
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.ClampToBorder);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.ClampToBorder);
private void CreateShaders()
{
    /***********Vert Shader********************/
    vertShader = GL.CreateShader(ShaderType.VertexShader);
    GL.ShaderSource(vertShader, @"attribute vec3 a_position;
        varying vec2 vTexCoordIn;
        //uniform float aspect;
        void main() {
            vTexCoordIn = (a_position.xy + 1) / 2;
            gl_Position = vec4(a_position, 1);
        }");
    GL.CompileShader(vertShader);

    /***********Frag Shader ****************/
    fragShader = GL.CreateShader(ShaderType.FragmentShader);
    GL.ShaderSource(fragShader, @"
        uniform sampler2D sTexture;
        uniform sampler2D sTexture1;
        uniform sampler2D sTexture2;
        uniform vec2 sTexSize;
        uniform vec2 sTexSize1;
        uniform vec2 sTexSize2;
        varying vec2 vTexCoordIn;
        void main ()
        {
            vec2 vTexCoord = vec2(vTexCoordIn.x, vTexCoordIn.y);
            if ( vTexCoord.x < 1.0/3.0 )
            {
                vec2 uv = vec2(vTexCoord.x * 3.0, vTexCoord.y);
                uv.y *= sTexSize.x / sTexSize.y;
                if (uv.y > 1.0)
                    discard;
                else
                    gl_FragColor = texture2D(sTexture, uv);
            }
            else if ( vTexCoord.x >= 1.0/3.0 && vTexCoord.x < 2.0/3.0 )
            {
                vec2 uv = vec2(vTexCoord.x * 3.0 - 1.0, vTexCoord.y);
                uv.y *= sTexSize1.x / sTexSize1.y;
                if (uv.y > 1.0)
                    discard;
                else
                    gl_FragColor = texture2D(sTexture1, uv);
            }
            else if ( vTexCoord.x >= 2.0/3.0 )
            {
                vec2 uv = vec2(vTexCoord.x * 3.0 - 2.0, vTexCoord.y);
                uv.y *= sTexSize2.x / sTexSize2.y;
                if (uv.y > 1.0)
                    discard;
                else
                    gl_FragColor = texture2D(sTexture2, uv);
            }
        }");
    GL.CompileShader(fragShader);
}
I think you should do the resizing on the CPU side instead of in the shaders:
1. resize all frames to a common height
2. sum the resized frame widths
3. rescale all frames so the summed width equals your window/desktop width
4. resize the window height to match the new common height after #3
Looks like you are doing #1, #2, #3 inside your shaders but ignoring #4, resulting in that empty area. You cannot match both the x and y size of the window without breaking the aspect ratio of the camera images. So leave the x size alone and change the y size of the window to remedy your problem.
Here is a small VCL/C++/legacy GL example (sorry, I do not code in C# and am too lazy to code the new-style GL for this):
//---------------------------------------------------------------------------
#include <vcl.h>
#include <gl\gl.h>
#pragma hdrstop
#include "Unit1.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TForm1 *Form1; // VCL Application window object
int xs,ys; // window resolution
HDC hdc=NULL; // device context for GL
HGLRC hrc=NULL; // rendering context for GL
//---------------------------------------------------------------------------
const int camera_res[]= // (x,y) resolutions of each camera
{
320,200,
640,480,
352,288,
0,0
};
float frame_pos[128]; // (x0,x1) position of each frame
void frame_resize(int xs,int &ys) // position/resize frames and change ys to common height so xs fit without breaking aspect ratio
{
int i,j;
float dx,dy,x,y;
// common height placement
for (x=0.0,i=0,j=0;camera_res[i];)
{
dx=camera_res[i]; i++;
dy=camera_res[i]; i++;
dx*=1000.0/dy; // any non zero common height for example 1000
frame_pos[j]=x; x+=dx; j++;
frame_pos[j]=x-1.0; j++;
}
frame_pos[j]=-1.0; j++;
frame_pos[j]=-1.0; j++;
// rescale summed width x to match xs
x=float(xs)/x; // scale
ys=float(1000.0*x); // common height
for (j=0;frame_pos[j]>-0.1;)
{
frame_pos[j]*=x; j++;
frame_pos[j]*=x; j++;
}
}
//---------------------------------------------------------------------------
void gl_draw()
{
if (hrc==NULL) return;
glClearColor(0.0,0.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
// view in pixel units
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glTranslatef(-1.0,+1.0,0.0);
glScalef(2.0/float(xs),-2.0/float(ys),1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// render rectangle
int i; float x0,y0,x1,y1;
y0=0.0; y1=ys-1;
glColor3f(1.0,1.0,1.0);
for (i=0;frame_pos[i]>-0.1;)
{
x0=frame_pos[i]; i++;
x1=frame_pos[i]; i++;
// here bind (i/2) camera frame as texture (only one texture at a time)
glBegin(GL_LINE_LOOP);
glTexCoord2f(0.0,0.0); glVertex2f(x0,y0);
glTexCoord2f(0.0,1.0); glVertex2f(x0,y1);
glTexCoord2f(1.0,1.0); glVertex2f(x1,y1);
glTexCoord2f(1.0,0.0); glVertex2f(x1,y0);
glEnd();
}
glFlush();
SwapBuffers(hdc);
}
//---------------------------------------------------------------------------
__fastcall TForm1::TForm1(TComponent* Owner):TForm(Owner)
{
// desktop You can hardcode xs,ys instead
TCanvas *scr=new TCanvas();
scr->Handle=GetDC(NULL);
xs=scr->ClipRect.Width(); // desktop width
ys=scr->ClipRect.Height(); // desktop height
delete scr;
// window This is important
int ys0=ys; // remember original height
frame_resize(xs,ys); // compute sizes and placements
SetBounds(0,(ys0-ys)>>1,xs,ys); // resize window and place in the center of screen
// GL init most likely you can ignore this you already got GL
hdc = GetDC(Handle); // get device context for this App window
PIXELFORMATDESCRIPTOR pfd;
ZeroMemory( &pfd, sizeof( pfd ) ); // set the pixel format for the DC
pfd.nSize = sizeof( pfd );
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;
SetPixelFormat(hdc,ChoosePixelFormat(hdc, &pfd),&pfd);
hrc = wglCreateContext(hdc); // create current rendering context
if(hrc == NULL)
{
ShowMessage("Could not initialize OpenGL Rendering context !!!");
Application->Terminate();
}
if(wglMakeCurrent(hdc, hrc) == false)
{
ShowMessage("Could not make current OpenGL Rendering context !!!");
wglDeleteContext(hrc); // destroy rendering context
Application->Terminate();
}
// resize GL framebuffers (this is important)
glViewport(0,0,xs,ys);
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormDestroy(TObject *Sender)
{
// GL exit most likely you can ignore this
wglMakeCurrent(NULL, NULL); // release current rendering context
wglDeleteContext(hrc); // destroy rendering context
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormPaint(TObject *Sender)
{
gl_draw();
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormKeyDown(TObject *Sender, WORD &Key,TShiftState Shift)
{
if (Key==27) Close(); // Escape exits app
}
//---------------------------------------------------------------------------
Ignore the VCL stuff; the only important thing here is the frame_resize function and its use. The gl_draw just renders rectangle outlines instead of your frames, so to remedy that just bind the texture of the corresponding camera and use GL_QUADS instead of GL_LINE_LOOP. Or port it to the new GL stuff with VBO/VAO ...
I coded it so it supports any number of cameras above 0 ... just be sure the frame_pos array is big enough (2 entries per camera).
As you can see, no shaders are needed. Of course, in the new GL style you do need shaders, but in them you just copy the texel from the texture to the fragment ...
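Such a pass-through shader pair could look roughly like this (a sketch, not part of the example above):
// minimal pass-through vertex shader (sketch)
attribute vec3 a_position;
attribute vec2 a_texcoord;
varying vec2 vTexCoord;
void main() {
    vTexCoord = a_texcoord;
    gl_Position = vec4(a_position, 1.0);
}

// minimal pass-through fragment shader (sketch)
uniform sampler2D sTexture;
varying vec2 vTexCoord;
void main() {
    gl_FragColor = texture2D(sTexture, vTexCoord);
}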
I want to draw instanced cubes.
I can call GL.DrawArraysInstanced(PrimitiveType.Triangles, 0, 36, 2); successfully.
My problem is that all the cubes are drawn at the same position and with the same rotation. How can I change that individually for every cube?
To give them different positions and so on, I need a matrix for each cube, right? I created this:
Matrix4[] Matrices = new Matrix4[]{
Matrix4.Identity, //do nothing
Matrix4.Identity * Matrix4.CreateTranslation(1,0,0) //move a little bit
};
GL.BindBuffer(BufferTarget.ArrayBuffer, matrixBuffer);
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(sizeof(float) * 16 * Matrices.Length), Matrices, BufferUsageHint.StaticDraw);
This should create a buffer where I can store my matrices; matrixBuffer is the handle of my buffer. I'm not sure if the size is correct: I took float * 4 (for Vector4) * 4 (for 4 vectors) * array length.
Draw Loop:
GL.BindBuffer(BufferTarget.ArrayBuffer, matrixBuffer);
GL.VertexAttribPointer(3, 4, VertexAttribPointerType.Float, false, 0, 0);
//GL.VertexAttribDivisor(3, ?);
GL.EnableVertexAttribArray(3);
GL.DrawArraysInstanced(PrimitiveType.Triangles, 0, 36, 2);
Any number higher than 4 as the size parameter of VertexAttribPointer(..., 4, VertexAttribPointerType.Float, ...); causes a crash. I thought I had to set that value to 16?
I'm not sure if I need a VertexAttribDivisor; probably I need this every 36th vertex, so I call GL.VertexAttribDivisor(3, 36)? But when I do that, I can't see any cube.
My vertex shader:
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texCoord;
layout(location = 3) in mat4 instanceMatrix;
uniform mat4 projMatrix;
out vec4 vColor;
out vec2 texCoords[];
void main(){
gl_Position = instanceMatrix * projMatrix * vec4(position, 1.0);
//gl_Position = projMatrix * vec4(position, 1.0);
texCoords[0] = texCoord;
vColor = color;
}
So my questions:
What is wrong with my code?
Why can I set the size parameter of VertexAttribPointer only up to 4?
What is the correct value for VertexAttribDivisor?
Edit:
Based on the answer of Andon M. Coleman I made these changes:
GL.BindBuffer(BufferTarget.UniformBuffer, matrixBuffer);
GL.BufferData(BufferTarget.UniformBuffer, (IntPtr)(sizeof(float) * 16), IntPtr.Zero, BufferUsageHint.DynamicDraw);
//Bind Buffer to Binding Point
GL.BindBufferBase(BufferRangeTarget.UniformBuffer, matrixUniform, matrixBuffer);
matrixUniform = GL.GetUniformBlockIndex(shaderProgram, "instanceMatrix");
//Bind Uniform Block to Binding Point
GL.UniformBlockBinding(shaderProgram, matrixUniform, 0);
GL.BufferSubData(BufferTarget.UniformBuffer, IntPtr.Zero, (IntPtr)(sizeof(float) * 16 * Matrices.Length), Matrices);
And shader:
#version 330 core
layout(location = 0) in vec4 position; //gets vec3, fills w with 1.0
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texCoord;
uniform mat4 projMatrix;
uniform UniformBlock
{ mat4 instanceMatrix[]; };
out vec4 vColor;
out vec2 texCoords[];
void main(){
gl_Position = projMatrix * instanceMatrix[0] * position;
texCoords[0] = texCoord;
vColor = color;
}
You have discovered the hard way that vertex attribute locations are always 4-component.
The only way to make a 4x4 matrix a per-vertex attribute is if you concede that mat4 is 4x as large as vec4.
Consider the declaration of your mat4 vertex attribute:
layout(location = 3) in mat4 instanceMatrix;
You might naturally think that location 3 stores 16 floating-point values, but you would be wrong. Locations in GLSL are always 4-component. Thus, mat4 instanceMatrix actually occupies 4 different locations.
This is essentially how instanceMatrix actually works:
layout(location = 3) in vec4 instanceMatrix_Column0;
layout(location = 4) in vec4 instanceMatrix_Column1;
layout(location = 5) in vec4 instanceMatrix_Column2;
layout(location = 6) in vec4 instanceMatrix_Column3;
Fortunately, you do not have to write your shader that way; it is perfectly valid to have a mat4 vertex attribute.
However, you do have to write your C# code to behave that way:
GL.BindBuffer(BufferTarget.ArrayBuffer, matrixBuffer);
GL.VertexAttribPointer(3, 4, VertexAttribPointerType.Float, false, 64, 0); // c0
GL.VertexAttribPointer(4, 4, VertexAttribPointerType.Float, false, 64, 16); // c1
GL.VertexAttribPointer(5, 4, VertexAttribPointerType.Float, false, 64, 32); // c2
GL.VertexAttribPointer(6, 4, VertexAttribPointerType.Float, false, 64, 48); // c3
Likewise, you must setup your vertex attribute divisor for all 4 locations:
GL.VertexAttribDivisor (3, 1);
GL.VertexAttribDivisor (4, 1);
GL.VertexAttribDivisor (5, 1);
GL.VertexAttribDivisor (6, 1);
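Note that each of the four attribute locations also has to be enabled, not just location 3 as in the original code (a small addition, assuming the same locations as above):
GL.EnableVertexAttribArray(3);
GL.EnableVertexAttribArray(4);
GL.EnableVertexAttribArray(5);
GL.EnableVertexAttribArray(6);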
Incidentally, because vertex attributes are always 4-component, you can actually declare:
layout(location = 0) in vec4 position;
And stop writing ugly code like this:
gl_Position = instanceMatrix * projMatrix * vec4(position, 1.0);
This is because missing components in a vertex attribute are automatically expanded by OpenGL.
(0.0, 0.0, 0.0, 1.0)
If you declare a vertex attribute as vec4 in the GLSL shader, but only supply data for XYZ, then W is automatically assigned a value of 1.0.
In actuality, you do not want to store your matrices per-vertex. That is a waste of multiple vertex attribute locations. What you may consider is an array of uniforms, or better yet a uniform buffer. You can index this array using the Vertex Shader pre-declared variable gl_InstanceID. That is really the most sensible way to approach this, because you may find yourself using more properties per-instance than you have vertex attribute locations (minimum 16 in GL 3.3, and only a few GPUs actually support more than 16).
Keep in mind that there is a limit to the number of vec4 uniforms a vertex shader can use in a single invocation, and that a mat4 counts as 4x the size of a vec4. Using a uniform buffer will allow you to draw many more instances than a plain old array of uniforms would.
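A rough sketch of that uniform-buffer approach in the vertex shader (the block name, array size and layout here are illustrative, not taken from the question):
#version 330 core
layout(location = 0) in vec4 position;
uniform mat4 projMatrix;
layout(std140) uniform InstanceData
{
    mat4 instanceMatrix[128]; // fixed size; must cover the maximum number of instances
};
void main()
{
    gl_Position = projMatrix * instanceMatrix[gl_InstanceID] * position;
}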
I'm wondering if there is a difference between GLSL and HLSL mathematics.
I'm using a self-made engine which works fine with OpenTK. My SharpDX implementation gets a bit further every day.
I'm currently working on the ModelViewProjection matrix.
To see if it works I use a simple project which works fine with OpenTK.
So I changed the shader code from GLSL to HLSL, because the rest of the program uses the engine functions. The program did not work, I couldn't see the geometry, so I changed the ModelView matrix and the Projection matrix to the identity matrix. Afterwards it worked, I saw the geometry. Then I changed the GLSL a bit because I wanted GLSL code similar to the HLSL, and I also changed the matrices there to the identity too. Afterwards I did not see anything, it didn't work... So I'm stuck... Does any of you have an idea?
Anyway, long story short:
My HLSL Shader Code
public string Vs = @"cbuffer Variables : register(b0){
float4 Testfarbe;
float4x4 FUSEE_MVP;
}
struct VS_IN
{
float4 pos : POSITION;
float4 tex : TEXCOORD;
float4 normal : NORMAL;
};
struct PS_IN
{
float4 pos : SV_POSITION;
float4 col : COLOR;
float4 tex : TEXCOORD;
float4 normal : NORMAL;
};
PS_IN VS( VS_IN input )
{
PS_IN output = (PS_IN)0;
input.pos.w = 1.0f;
output.pos = mul(input.pos,FUSEE_MVP);
output.col = Testfarbe;
/*output.col = FUSEE_MV._m00_m01_m02_m03;*/
/* output.normal = input.normal;
output.tex = input.tex;*/
/* if (FUSEE_MV._m00 == 4.0f)
output.col = float4(1,0,0,1);
else
output.col = float4(0,0,1,1);*/
return output;
}
";
string Ps = @"
SamplerState pictureSampler;
Texture2D imageFG;
struct PS_IN
{
float4 pos : SV_POSITION;
float4 col : COLOR;
float4 tex : TEXCOORD;
float4 normal : NORMAL;
};
float4 PS( PS_IN input ) : SV_Target
{
return input.col;
/*return imageFG.Sample(pictureSampler,input.tex);*/
}";
So I changed my old working OpenTK project to see where the difference is between OpenTK and SharpDX regarding the math calculations.
The GLSL code:
public string Vs = @"
/* Copies incoming vertex color without change.
* Applies the transformation matrix to vertex position.
*/
attribute vec4 fuColor;
attribute vec3 fuVertex;
attribute vec3 fuNormal;
attribute vec2 fuUV;
varying vec4 vColor;
varying vec3 vNormal;
varying vec2 vUV;
uniform mat4 FUSEE_MVP;
uniform mat4 FUSEE_ITMV;
void main()
{
gl_Position = FUSEE_MVP * vec4(fuVertex, 1.0);
/*vNormal = mat3(FUSEE_ITMV[0].xyz, FUSEE_ITMV[1].xyz, FUSEE_ITMV[2].xyz) * fuNormal;*/
vUV = fuUV;
}";
public string Ps = @"
/* Copies incoming fragment color without change. */
#ifdef GL_ES
precision highp float;
#endif
uniform vec4 vColor;
varying vec3 vNormal;
void main()
{
gl_FragColor = vColor * dot(vNormal, vec3(0, 0, 1));
}";
In the main code itself I only read an OBJ file and set the identity matrix:
public override void Init()
{
Mesh = MeshReader.LoadMesh(@"Assets/Teapot.obj.model");
//ShaderProgram sp = RC.CreateShader(Vs, Ps);
sp = RC.CreateShader(Vs, Ps);
_vTextureParam = sp.GetShaderParam("Testfarbe");//vColor
}
public override void RenderAFrame()
{
...
var mtxRot = float4x4.CreateRotationY(_angleHorz) * float4x4.CreateRotationX(_angleVert);
var mtxCam = float4x4.LookAt(0, 200, 500, 0, 0, 0, 0, 1, 0);
// first mesh
RC.ModelView = float4x4.CreateTranslation(0, -50, 0) * mtxRot * float4x4.CreateTranslation(-150, 0, 0) * mtxCam;
RC.SetShader(sp);
//mapping
RC.SetShaderParam(_vTextureParam, new float4(0.0f, 1.0f, 0.0f, 1.0f));
RC.Render(Mesh);
Present();
}
public override void Resize()
{
RC.Viewport(0, 0, Width, Height);
float aspectRatio = Width / (float)Height;
RC.Projection = float4x4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspectRatio, 1, 5000);
}
The two programs side by side:
What I should also add: as soon as the values in my ModelView identity matrix are bigger than 1.5, I don't see anything in my window. Does anyone know what might be causing this?
I edited the post and the image so you can see a bigger difference.
Earlier in this post I had the identity matrix. If I use the identity matrix with this OBJ file
v 0.0 0.5 0.5
v 0.5 0.0 0.5
v -0.5 0.0 0.5
vt 1 0 0
vt 0 1 0
vt 0 0 0
f 1/2 2/3 3/1
I saw the triangle in my SharpDX project but not in my OpenTK one. But I think the teapot is better for showing the difference between the two projects, where only the shader code differs! I mean, I could have done something wrong in the SharpDX implementation for this engine, but let's assume everything there is right. At least I hope so, if you tell me the shader code is just wrong ;)
I hope I have described my problem clearly enough for you to understand it.
OK, so you have to transpose the matrix...
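In other words, the matrix that works in the GLSL path has to be transposed before it is written to the HLSL constant buffer. Equivalently (a sketch under the assumption that the matrix is uploaded untransposed), the multiplication order in the HLSL vertex shader can be swapped to mirror the GLSL column-vector convention:
output.pos = mul(FUSEE_MVP, input.pos); // column-vector order, mirrors the GLSL FUSEE_MVP * vec4(fuVertex, 1.0)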
I am using XNA with SpriteBatch and custom drawn verticies in parallel. The goal is to have the same coordinate system for both techniques.
That means I need a projection matrix that maps to screen coordinates: (0, 0) is in the top left screen corner, while width and height are determined by the screen resolution.
Matrix.CreateOrthographicOffCenter(0, width, 0, height, -1, 1);
Works well but has the origin in the bottom-left corner.
Matrix.CreateOrthographicOffCenter(0, width, height, 0, -1, 1);
Does not display anything at all.
Trying to combine the first projection matrix with a translation and scaling y by -1 does not display anything at all either. Scaling by positive values works well, translation too. But as soon as I scale by a negative value I do not get any output at all.
Any ideas?
PS: For testing purposes I am drawing vertices far beyond the screen coordinates, so I would at least see something if there were some error in the translation.
I use this code to initialize my 2D camera for drawing lines, and use a basic custom effect to draw.
Vector2 center;
center.X = Game.GraphicsDevice.Viewport.Width * 0.5f;
center.Y = Game.GraphicsDevice.Viewport.Height * 0.5f;
Matrix View = Matrix.CreateLookAt( new Vector3( center, 0 ), new Vector3( center, 1 ), new Vector3( 0, -1, 0 ) );
Matrix Projection = Matrix.CreateOrthographic( center.X * 2, center.Y * 2, -0.5f, 1 );
Effect
uniform float4x4 xWorld;
uniform float4x4 xViewProjection;
void VS_Basico(in float4 inPos : POSITION, in float4 inColor: COLOR0, out float4 outPos: POSITION, out float4 outColor:COLOR0 )
{
float4 tmp = mul (inPos, xWorld);
outPos = mul (tmp, xViewProjection);
outColor = inColor;
}
technique Lines
{
pass Pass0
{
VertexShader = compile vs_2_0 VS_Basico();
FILLMODE = SOLID;
CULLMODE = NONE;
}
}