I am creating a game engine that covers basic game needs. Using glslDevil, it turns out my bind-VBO method throws an InvalidValue error: the calls to glVertexAttribPointer and glEnableVertexAttribArray cause the issue, and the vertex attribute index is the culprit. The index is 4294967295, which is well over the maximum of 15. Everything else works perfectly fine. I am using OpenTK. Here is the bind-to-attribute method:
public void BindToAttribute(ShaderProgram prog, string attribute)
{
int location = GL.GetAttribLocation(prog.ProgramID, attribute);
GL.EnableVertexAttribArray(location);
Bind();
GL.VertexAttribPointer(location, Size, PointerType, true, TSize, 0);
}
public void Bind()
{
GL.BindBuffer(Target, ID);
}
Here are my shaders if required.
Vertex Shader:
uniform mat4 transform;
uniform mat4 projection;
uniform mat4 camera;
in vec3 vertex;
in vec3 normal;
in vec4 color;
in vec2 uv;
out vec3 rnormal;
out vec4 rcolor;
out vec2 ruv;
void main(void)
{
rcolor = color;
rnormal = normal;
ruv = uv;
gl_Position = camera * projection * transform * vec4(vertex, 1);
}
Fragment Shader:
in vec3 rnormal;
in vec4 rcolor;
in vec2 ruv;
uniform sampler2D texture;
void main(void)
{
gl_FragColor = texture2D(texture, ruv) * rcolor;
}
Am I not obtaining the index correctly or is there another issue?
The index that you are getting is the problem: 4294967295 is (GLuint)-1, the index you get back when OpenGL doesn't find an active attribute/uniform with that name.
There are a few things that might be going on:
- you are passing a string that doesn't exist in the shader program (check case sensitivity and the like)
- the attribute exists and you are passing the correct string, but you are not actually using it in the shader, so the driver has removed all occurrences of that attribute from the final code as an optimization (therefore it no longer exists)
In general, though, that number shows that OpenGL can't find the uniform or attribute you were looking for.
EDIT:
One trick is the following: let's assume you have some pixel shader code that returns a value that is the sum of many values:
out vec4 color;
void main()
{
// this shader does many calculations when you are
// using many values, but let's assume you want to debug
// just the diffuse color... how do you do it?
// if you change the output to return just the diffuse color,
// the optimizer might remove code and you might have problems
//
// if you have this
color = various_calculation_1 + various_calculation_2 + ....;
// what you can do is the following
color *= 0.0000001f; // so basically it's still calculated
// but it almost won't show up
color += value_to_debug; // example, the diffuse color
}
When I enable the shader program, the texture doesn't work.
A_andrew is the Texture2D; alias is an alias (a sub-region) of that Texture2D.
C# code:
GL.ClearColor(255, 255, 255, 255);
GL.Clear(ClearBufferMask.DepthBufferBit);
A_andrew.Bind();
//shaderProgram.Use(); when this is enabled, the textures disappear
shaderProgram.GetUniform("texture0").SetVec1(alias.id);
GL.Begin(BeginMode.Quads);
AliasTexture2D tex = Draw.CurrentTexutre;
GL.TexCoord2(tex.GetLeftBottom());
GL.Vertex2(-0.6f, -0.4f);
GL.TexCoord2(tex.GetRightBottom());
GL.Vertex2(0.6f, -0.4f);
GL.TexCoord2(tex.GetRightTop());
GL.Vertex2(0.6f, 0.4f);
GL.TexCoord2(tex.GetLeftTop());
GL.Vertex2(-0.6f, 0.4f);
GL.End();
window.SwapBuffers();
Fragment Shader
#version 330 core
in vec2 texCords;
uniform sampler2D texture0;
out vec4 color;
void main()
{
vec4 texColor = texture(texture0, texCords);
color = texColor;
}
Vertex Shader
#version 330 core
layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec2 inTexCords;
out vec2 texCords;
void main()
{
texCords = inTexCords;
gl_Position = vec4(inPosition.xyz, 1.0);
}
I think the problem is in the fragment shader: it doesn't get the texture and/or the texture coordinates.
You cannot mix fixed function attributes and the fixed function matrix stack with a version 3.30 shader program.
You have to use the built-in attributes such as gl_Vertex and gl_MultiTexCoord0 (see Vertex Attributes).
You also have to use the built-in uniform variables like gl_ModelViewProjectionMatrix. In legacy OpenGL (GLSL 1.20) these built-in uniforms are provided; see OpenGL Shading Language 1.20 Specification, 7.5 Built-In Uniform State.
One of them is gl_ModelViewProjectionMatrix of type mat4, which provides the transformation by the model-view and projection matrices. There are also separate variables gl_ModelViewMatrix and gl_ProjectionMatrix for the model-view and the projection matrix.
Vertex shader:
#version 120
varying vec2 texCords;
void main()
{
texCords = gl_MultiTexCoord0.xy;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
I've been trying to use Shader Storage Buffer Objects (SSBOs) with OpenGL 4.3 core for some time, but I can't even get my shader program to link.
Here is the code for my vertex shader:
#version 430 core
in vec3 vertex;
uniform mat4
perspectiveMatrix,
viewMatrix,
modelMatrix;
in vec3 normal;
out vec3 lightVectors[3];
out vec3 vertexNormal;
out vec3 cameraVector;
layout(std430,binding=1)buffer Lights
{
vec3 positions[3];
vec3 attenuations[3];
vec3 colors[3];
}buf1;
void main()
{
vec4 worldVertex=modelMatrix*vec4(vertex,1.0);
gl_Position=perspectiveMatrix*viewMatrix*worldVertex;
vertexNormal=(modelMatrix*vec4(normal,0)).xyz;
for(int i=0;i<3;i++){
lightVectors[i]=buf1.positions[i]-worldVertex.xyz;
}
cameraVector=(inverse(viewMatrix)*vec4(0,0,0,1)).xyz-worldVertex.xyz;
}
And here is the code for my fragment shader:
#version 430 core
in vec3 vertexNormal;
in vec3 cameraVector;
in vec3 lightVectors[3];
out vec4 pixelColor;
layout(std430,binding=1)buffer Lights
{
vec3 positions[3];
vec3 attenuations[3];
vec3 colors[3];
}buf2;
uniform Material
{
vec3 diffuseMat;
vec3 specularMat;
float reflectivity;
float shininess;
};
float getAtten(int i)
{
float l=length(lightVectors[i]);
float atten=(buf2.attenuations[i].x*l)
+((buf2.attenuations[i].y)*(l*l))
+((buf2.attenuations[i].z)*(l*l*l));
return atten;
}
vec3 computeLightColor()
{
vec3 unitNormal=normalize(vertexNormal);
vec3 unitCam=normalize(cameraVector);
vec3 diffuse,specular,lightColor;
for(int i=0;i<3;i++)
{
vec3 unitLight=normalize(lightVectors[i]);
float atten=getAtten(i);
float nDiffuseDot=dot(unitLight,unitNormal);
float diffuseIntencity=max(0,nDiffuseDot);
diffuse+=(buf2.colors[i]*diffuseMat*diffuseIntencity)/atten;
vec3 reflectedLight=reflect(-unitLight,unitNormal);
float nSpecularDot=dot(reflectedLight,unitCam);
float specularIntencity=pow(max(0,nSpecularDot),reflectivity)*shininess;
specular+=(buf2.colors[i]*specularMat*specularIntencity)/atten;
}
lightColor=diffuse+specular;
return lightColor;
}
void main()
{
pixelColor=vec4(computeLightColor(),1)*0.7;
}
The code common to both shaders is:
layout(std430,binding=1)buffer Lights
{
vec3 positions[3];
vec3 attenuations[3];
vec3 colors[3];
};
This is an SSBO I'm using for my lighting calculations.
I need to use this one single buffer in both the vertex and the fragment shader, because in my vertex shader this line
lightVectors[i]=buf1.positions[i]-worldVertex.xyz;
uses the light positions defined in the SSBO, and in the fragment shader
float getAtten(int i) and
vec3 computeLightColor()
use the buffer's colors[3] and attenuations[3] arrays to compute the diffuse and specular components.
But I cannot link the vertex and fragment shaders using the above SSBO definition, and the shader info log is empty, so I don't know the error either.
Is there a way to use the SSBO defined above in both the vertex and the fragment shader without creating two separate SSBOs, i.e. one for the vertex shader and one for the fragment shader?
This is my shader class; all my shaders extend this class:
public abstract class StaticShader extends Shader3D
{
private int
vertexShaderID=-1,
fragmentShaderID=-1;
private boolean alive=false;
private boolean isActive=false;
public StaticShader(Object vertexShader,Object fragmentShader)
{
programID=GL20.glCreateProgram();
vertexShaderID=loadShader(vertexShader,GL20.GL_VERTEX_SHADER);
fragmentShaderID=loadShader(fragmentShader,GL20.GL_FRAGMENT_SHADER);
GL20.glAttachShader(programID,vertexShaderID);
GL20.glAttachShader(programID,fragmentShaderID);
bindAttributes();
activateShader();
}
private int loadShader(Object src,int shaderType)
{
StringBuilder source=super.loadSource(src);
int shaderID=GL20.glCreateShader(shaderType);
GL20.glShaderSource(shaderID,source);
GL20.glCompileShader(shaderID);
if(GL20.glGetShaderi(shaderID,GL20.GL_COMPILE_STATUS)==GL11.GL_FALSE)
{
infoLogSize=GL20.glGetShaderi(shaderID,GL20.GL_INFO_LOG_LENGTH);
System.err.println(GL20.glGetShaderInfoLog(shaderID,infoLogSize));
System.err.println("COULD NOT COMPILE SHADER");
System.exit(-1);
}
return shaderID;
}
protected void activateShader()
{
GL20.glLinkProgram(programID);
if(GL20.glGetProgrami(programID,GL20.GL_LINK_STATUS)==GL11.GL_FALSE)
{
infoLogSize=GL20.glGetProgrami(programID,GL20.GL_INFO_LOG_LENGTH);
System.err.println(GL20.glGetProgramInfoLog(programID,infoLogSize));
System.err.println("COULD NOT LINK SHADER");
System.exit(-1);
}
GL20.glValidateProgram(programID);
if(GL20.glGetProgrami(programID,GL20.GL_VALIDATE_STATUS)==GL11.GL_FALSE)
{
infoLogSize=GL20.glGetProgrami(programID,GL20.GL_INFO_LOG_LENGTH);
System.err.println(GL20.glGetProgramInfoLog(programID,infoLogSize));
System.err.println("COULD NOT VALIDATE SHADER");
System.exit(-1);
}
}
public void dispose()
{
GL20.glUseProgram(0);
GL20.glDetachShader(programID,vertexShaderID);
GL20.glDetachShader(programID,fragmentShaderID);
GL20.glDeleteShader(vertexShaderID);
GL20.glDeleteShader(fragmentShaderID);
GL20.glDeleteProgram(programID);
}
}
I'm using LWJGL 2.9.3 (Java OpenGL).
The shaders compile fine, but glLinkProgram() returns false and the program info log is empty.
I haven't created a buffer object of any sort yet; I'm just learning the syntax at the moment.
Any help will be greatly appreciated. Thank you in advance, and to those who have helped me in the comments; I'm greatly frustrated at the moment.
With your code I get the linker error:
error: binding mismatch between shaders for SSBO (named Lights)
You have to use the same binding point for identically named buffer objects in the different shader stages.
In your vertex shader:
layout(std430, binding=1) buffer Lights
{
vec3 positions[3];
vec3 attenuations[3];
vec3 colors[3];
} buf1;
Lights is the externally visible name of the buffer and buf1 is the name of the block within the shader.
This means you have to use the same binding point for the buffer object Lights in the fragment shader:
layout(std430,binding=1) buffer Lights // <---- binding=1 instead of binding=2
{
vec3 positions[3];
vec3 attenuations[3];
vec3 colors[3];
}buf2;
See also:
OpenGL Shading Language 4.6 specification - 4.3.7 Buffer Variables
The buffer qualifier is used to declare global variables whose values are stored in the data store of a
buffer object bound through the OpenGL API.
// use buffer to create a buffer block (shader storage block)
buffer BufferName { // externally visible name of buffer
int count; // typed, shared memory...
... // ...
vec4 v[]; // last member may be an array that is not sized
// until after link time (dynamically sized)
} Name; // name of block within the shader
OpenGL Shading Language 4.6 specification - 4.4.5 Uniform and Shader Storage Block Layout Qualifiers
I'm working on my volume rendering application (C# + OpenTK).
The volume is rendered using raycasting; I found a lot of inspiration on this site:
http://graphicsrunner.blogspot.sk/2009/01/volume-rendering-101.html, and even though my application works with OpenGL, the main idea of using a 3D texture and the rest of the approach is the same.
The application works fine, but after I "flow into the volume" (i.e. move inside the bounding box), everything disappears, and I want to prevent this. Is there some easy way to do it, so that I can flow through or move inside the volume?
Here is the code of fragment shader:
#version 330
in vec3 EntryPoint;
in vec4 ExitPointCoord;
uniform sampler2D ExitPoints;
uniform sampler3D VolumeTex;
uniform sampler1D TransferFunc;
uniform float StepSize;
uniform float AlphaReduce;
uniform vec2 ScreenSize;
layout (location = 0) out vec4 FragColor;
void main()
{
//gl_FragCoord --> http://www.txutxi.com/?p=182
vec3 exitPoint = texture(ExitPoints, gl_FragCoord.st/ScreenSize).xyz;
//background need no raycasting
if (EntryPoint == exitPoint)
discard;
vec3 rayDirection = normalize(exitPoint - EntryPoint);
vec4 currentPosition = vec4(EntryPoint, 0.0f);
vec4 colorSum = vec4(.0f,.0f,.0f,.0f);
vec4 color = vec4(0.0f,0.0f,0.0f,0.0f);
vec4 value = vec4(0.0f);
vec3 Step = rayDirection * StepSize;
float stepLength= length(Step);
float LengthSum = 0.0f;
float Length = length(exitPoint - EntryPoint);
for(int i=0; i < 16000; i++)
{
currentPosition.w = 0.0f;
value = texture(VolumeTex, currentPosition.xyz);
color = texture(TransferFunc, value.a);
//reduce the alpha to have a more transparent result
color.a *= AlphaReduce;
//Front to back blending
color.rgb *= color.a;
colorSum = (1.0f - colorSum.a) * color + colorSum;
//accumulate length
LengthSum += stepLength;
//break from the loop when alpha gets high enough
if(colorSum.a >= .95f)
break;
//advance the current position
currentPosition.xyz += Step;
//break if the ray is outside of the bounding box
if(LengthSum >= Length)
break;
}
FragColor = colorSum;
}
The code below is based on https://github.com/toolchainX/Volume_Rendering_Using_GLSL
Display() function:
public void Display()
{
// the color of the vertex in the back face is also the location
// of the vertex
// save the back face to the user defined framebuffer bound
// with a 2D texture named `g_bfTexObj`
// draw the front face of the box
// in the rendering process, i.e. the ray marching process
// loading the volume `g_volTexObj` as well as the `g_bfTexObj`
// after vertex shader processing we get the color as well as the location of
// the vertex (in object coordinates, before transformation),
// and the vertices are assembled into primitives before entering the
// fragment shader processing stage.
// in the fragment shader processing stage we get `g_bfTexObj`
// (corresponding to 'ExitPoints' in GLSL) and `g_volTexObj` (corresponding
// to 'VolumeTex'), as well as the location of the primitives.
// draw the back face of the box
GL.Enable(EnableCap.DepthTest);
//render the front || back face of the volume into the framebuffer --> i.e. into a 2D texture with ID bfTexID
//(using backface.frag & .vert)
GL.BindFramebuffer(FramebufferTarget.Framebuffer, frameBufferID);
GL.Viewport(0, 0, width, height);
LinkShader(spMain.GetProgramHandle(), bfVertShader.GetShaderHandle(), bfFragShader.GetShaderHandle());
spMain.UseProgram();
//cull front face
Render(CullFaceMode.Front);
spMain.UseProgram(0);
//the default framebuffer --> the screen
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
GL.Viewport(0, 0, width, height);
LinkShader(spMain.GetProgramHandle(), rcVertShader.GetShaderHandle(), rcFragShader.GetShaderHandle());
spMain.UseProgram();
SetUniforms();
Render(CullFaceMode.Back);
spMain.UseProgram(0);
GL.Disable(EnableCap.DepthTest);
}
private void DrawBox(CullFaceMode mode)
{
// --> Face culling allows non-visible triangles of closed surfaces to be culled before expensive Rasterization and Fragment Shader operations.
GL.Enable(EnableCap.CullFace);
GL.CullFace(mode);
GL.BindVertexArray(VAO);
GL.DrawElements(PrimitiveType.Triangles, 36, DrawElementsType.UnsignedInt, 0);
GL.BindVertexArray(0);
GL.Disable(EnableCap.CullFace);
spMain.UseProgram(0);//it was enabled in Render(), which called DrawBox
}
private void Render(CullFaceMode mode)
{
GL.ClearColor(0.0f, 0.0f, 0.0f, 1.0f);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
spMain.UseProgram();
spMain.SetUniform("modelViewMatrix", Current);
spMain.SetUniform("projectionMatrix", projectionMatrix);
DrawBox(mode);
}
The problem is (I think) that as I move towards the volume (I don't move the camera, just scale the volume), once the scale factor > 2.7-something I am inside the volume, which means "behind the plane on which the final picture is rendered", so I can't see anything.
A solution that I can think of is something like this:
If I reach the scale factor = 2.7-something:
1.) -> don't scale the volume any further
2.) -> somehow tell the fragment shader to move the EntryPoint along the
RayDirection by some length (probably based on the scale factor).
Now, I tried this "method" and it seems that it can work:
vec3 entryPoint = EntryPoint + some_value * rayDirection;
The some_value has to be clamped to the interval [0,1) (or [0,1]?),
but maybe it doesn't matter thanks to this:
if (EntryPoint == exitPoint)
discard;
So now, maybe (if my solution isn't so bad), I can change my question to this:
How do I compute the some_value (based on the scale factor which I send to the fragment shader)?
if(scale_factor < 2.7something)
work like before;
else
{
compute some_value; //(I need help with this part)
change entry point;
work like before;
}
(I'm not a native English speaker, so if there are big mistakes in the text and you don't understand something, just let me know and I'll try to fix them.)
Thanks.
I solved my problem. It doesn't create the "being surrounded by the volume" illusion, but now I can flow through the volume and nothing disappears.
This is the code of my solution added to fragment shader:
vec3 entryPoint = vec3(0.0f);
if(scaleCoeff >= 2.7f)
{
float tmp = min((scaleCoeff - 2.7f) * 0.1f, 1.0f);
entryPoint = EntryPoint + tmp * (exitPoint - EntryPoint);
}
else
{
entryPoint = EntryPoint;
}
//
But if you know or can think about better solution that makes the "being surrounded by the volume" effect, I'll be glad if you let me know.
Thank you.
If I understand correctly, I think you should use plane clipping to go through the volume. (I could give you a simple example based on your code if you attach this solution; translating the whole C++ project to C# is too time-consuming.)
I have a fairly simple fragment shader that does not work. It appears to have something to do with the textureCube call.
This is the fragment shader:
in vec3 ReflectDir;
in vec3 RefractDir;
uniform samplerCube CubeMapTex;
uniform bool DrawSkyBox;
uniform float MaterialReflectionFactor;
void main()
{
// Access the cube map texture
vec4 reflectColor = textureCube(CubeMapTex, ReflectDir);
vec4 refractColor = textureCube(CubeMapTex, RefractDir);
if( DrawSkyBox )
{
gl_FragColor = reflectColor;
gl_FragColor = vec4(ReflectDir, 1); //This line
}
else
gl_FragColor = vec4(1,0,0,1);
}
ReflectDir and RefractDir come from a vertex shader, but that seems to be in order.
If I comment out the second line in the if statement, the whole screen is black (including the teapot); otherwise it looks like this (ReflectDir seems OK):
http://i.imgur.com/MkHX6kT.png
Also, the cubemap is rendered properly (well, the image order is wrong). This is how the scene looks without the shader program:
http://i.imgur.com/6kKzA2x.jpg
Additional info:
the texture is loaded with GL_TEXTURE_CUBE_MAP on active texture TEXTURE0
uniform CubeMapTex is set to 0
DrawSkyBox is set to true when drawing the skybox, false after that
I used SharpGL
When I bind my buffers to attributes for my shaders, they seem to be getting flipped.
So, I've got a vertex shader:
precision highp float;
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 in_position;
in vec3 in_color;
out vec3 ex_Color;
void main(void)
{
gl_Position = projection_matrix * modelview_matrix * vec4(in_position, 1);
ex_Color = in_color;
}
and a fragment shader
precision highp float;
in vec3 ex_Color;
out vec4 out_frag_color;
void main(void)
{
out_frag_color = vec4(ex_Color, 1.0);
}
Nothing too complicated. There are two inputs: one for vertex locations, and one for colors. (As a newb, I didn't want to deal with textures or light yet.)
Now, in my client code, I put data into two arrays of vectors, positionVboData and colorVboData, and I create the VBOs...
GL.GenBuffers(1, out positionVboHandle);
GL.BindBuffer(BufferTarget.ArrayBuffer, positionVboHandle);
GL.BufferData<Vector3>(BufferTarget.ArrayBuffer,
new IntPtr(positionVboData.Length * Vector3.SizeInBytes),
positionVboData, BufferUsageHint.StaticDraw);
GL.GenBuffers(1, out colorVboHandle);
GL.BindBuffer(BufferTarget.ArrayBuffer, colorVboHandle);
GL.BufferData<Vector3>(BufferTarget.ArrayBuffer,
new IntPtr(colorVboData.Length * Vector3.SizeInBytes),
colorVboData, BufferUsageHint.StaticDraw);
and then, I would expect the following code to work to bind the vbos to the attributes for the shaders:
GL.EnableVertexAttribArray(0);
GL.BindBuffer(BufferTarget.ArrayBuffer, positionVboHandle);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, true, Vector3.SizeInBytes, 0);
GL.BindAttribLocation(shaderProgramHandle, 0, "in_position");
GL.EnableVertexAttribArray(1);
GL.BindBuffer(BufferTarget.ArrayBuffer, colorVboHandle);
GL.VertexAttribPointer(1, 3, VertexAttribPointerType.Float, true, Vector3.SizeInBytes, 0);
GL.BindAttribLocation(shaderProgramHandle, 1, "in_color");
But, in fact I have to swap positionVboHandle and colorVboHandle in the last code sample and then it works perfectly. But that seems backwards to me. What am I missing?
Update
Something weird is going on. If I change the vertex shader to this:
precision highp float;
uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
in vec3 in_position;
in vec3 in_color;
out vec3 ex_Color;
void main(void)
{
gl_Position = projection_matrix * modelview_matrix * vec4(in_position, 1);
//ex_Color = in_color;
ex_Color = vec3(1.0, 1.0, 1.0);
}"
And make no other changes (other than the suggested fix of moving the program link after all the setup), it loads the correct attribute, the vertex positions, into in_position rather than into in_color.
GL.BindAttribLocation must be performed before GL.LinkProgram. Are you calling GL.LinkProgram after this code fragment?
EDIT:
Answering your update: because you don't use in_color, OpenGL simply ignores that input, and your vertex shader takes only in_position as input. Most likely it binds it to location 0. That's why your code works. You should bind locations before linking the program, as described in the link above.
So, with Mārtiņš Možeiko's help, I was able to figure this out. I was correctly calling BindAttribLocation before LinkProgram. However, I wasn't calling GL.CreateProgram() before binding any of the attribute locations.