I'm trying to draw a triangle using an OpenGL (OpenTK) fragment shader, but it is always displayed as a black triangle, even when I change the color in the fragment shader.
Maybe the fragment shader isn't working. How can I fix it?
I've attached my code.
P.S. I'm sorry if I'm doing something wrong with this post; this is my first time on this site.
Render
window.RenderFrame += (FrameEventArgs args) =>
{
GL.UseProgram(shaderProgram.shaderProgramId);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
float[] verts = { -0.5f, -0.5f, 0.0f, 0.5f, -0.5f, 0.0f, 0.0f, 0.5f, 0.0f };
int vao = GL.GenVertexArray();
int vertices = GL.GenBuffer();
GL.BindVertexArray(vao);
GL.BindBuffer(BufferTarget.ArrayBuffer, vertices);
GL.BufferData(BufferTarget.ArrayBuffer, verts.Length * sizeof(float), verts, BufferUsageHint.StaticCopy);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 3 * sizeof(float), 0);
GL.EnableVertexAttribArray(0);
GL.DrawArrays(OpenTK.Graphics.OpenGL4.PrimitiveType.Triangles, 0, 3);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
GL.BindVertexArray(0);
GL.DeleteVertexArray(vao);
GL.DeleteBuffer(vertices);
window.SwapBuffers();
};
Shader Load
public static Shader LoadShader(string shaderLocation, ShaderType shaderType)
{
int shaderId = GL.CreateShader(shaderType);
GL.ShaderSource( shaderId, File.ReadAllText( shaderLocation ) );
GL.CompileShader( shaderId );
string infoLog = GL.GetShaderInfoLog(shaderId);
if (!string.IsNullOrEmpty(infoLog))
{
throw new Exception(infoLog);
}
return new Shader() {shaderId = shaderId};
}
Program binding
public static ShaderProgram LoadShaderProgram(string vertexShaderLocation, string fragmentShaderLocation)
{
int shaderProgramId = GL.CreateProgram();
Shader vertexShader = LoadShader(vertexShaderLocation, ShaderType.VertexShader);
Shader fragShader = LoadShader(fragmentShaderLocation, ShaderType.FragmentShader);
GL.AttachShader(shaderProgramId, vertexShader.shaderId);
GL.AttachShader(shaderProgramId, fragShader.shaderId);
GL.LinkProgram(shaderProgramId);
GL.DetachShader(shaderProgramId, vertexShader.shaderId);
GL.DetachShader(shaderProgramId, fragShader.shaderId);
GL.DeleteShader(vertexShader.shaderId);
GL.DeleteShader(fragShader.shaderId);
string infoLog = GL.GetProgramInfoLog(shaderProgramId);
if (!string.IsNullOrEmpty(infoLog))
{
throw new Exception(infoLog);
}
return new ShaderProgram() {shaderProgramId = shaderProgramId};
}
shaders
vertex
#version 330
layout(location=0) in vec3 vPosition;
out vec4 vertexColor;
void main() {
gl_Position = vec4( vPosition, 1.0);
vertexColor = vec4(0.0,1.0,0.0,1.0);
}
fragment
#version 330
out vec4 FragColor;
in vec4 vertexColor;
void main()
{
FragColor = vertexColor;
}
I see a couple of problems with your code. The main reason you may not see your shader in action is that you do not keep your shaders attached to your shader program (shaderProgramId). Instead, you detach and delete them right after compiling and attaching them. What you are doing there is basically creating your shader program and then immediately throwing it away.
Another issue (which may not be the cause of your main problem, but is still worth fixing) is your usage of the VAO. A VAO is meant to preserve the OpenGL state of the objects bound to it across state switches. It is essentially a container for VBOs, holding their attribute descriptions. So what you want to do is create your VAO, bind it, then create, bind and describe (glVertexAttribPointer) your VBOs. After that you can unbind your VAO. When you bind it again, you don't have to do anything extra (like binding VBOs or calling glVertexAttribPointer again): what you did when first binding your VAO and adding your VBOs is stored in the VAO. Just bind the VAO, bind your shader (glUseProgram) and happily render away.
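For the first point, simply keep the shaders attached to the program (drop the GL.DetachShader/GL.DeleteShader calls, or defer them until you dispose of the program). For the VAO, here is a minimal sketch of the setup/draw split using the calls from your own code (I'm assuming the buffer setup moves into your load/initialization code, and that shaderProgram is the program returned by LoadShaderProgram):
// One-time setup (e.g. in window.Load), not in RenderFrame:
float[] verts = { -0.5f, -0.5f, 0.0f, 0.5f, -0.5f, 0.0f, 0.0f, 0.5f, 0.0f };
int vao = GL.GenVertexArray();
int vbo = GL.GenBuffer();
GL.BindVertexArray(vao);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.BufferData(BufferTarget.ArrayBuffer, verts.Length * sizeof(float), verts, BufferUsageHint.StaticDraw); // StaticDraw: uploaded once, drawn many times
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 3 * sizeof(float), 0);
GL.EnableVertexAttribArray(0);
GL.BindVertexArray(0);

// Every frame: the VAO remembers the VBO binding and the attribute layout.
window.RenderFrame += (FrameEventArgs args) =>
{
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    GL.UseProgram(shaderProgram.shaderProgramId); // shaders stay attached to this program
    GL.BindVertexArray(vao);
    GL.DrawArrays(OpenTK.Graphics.OpenGL4.PrimitiveType.Triangles, 0, 3);
    GL.BindVertexArray(0);
    window.SwapBuffers();
};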
Related
I want to map a rectangular globe texture onto a sphere. I can load the "globe.jpg" texture and display it on the screen. I think I need to retrieve the color of the "globe.jpg" texture at specific texture coordinates and use that to colorize a specific point on the sphere.
I want to map the globe map shown on the middle right onto one of the spheres on the left side (see picture).
Code for loading texture:
int texture;
public Texture() {
texture = LoadTexture("Content/globe.jpg");
}
public int LoadTexture(string file) {
Bitmap bitmap = new Bitmap(file);
int tex;
GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest);
GL.GenTextures(1, out tex);
GL.BindTexture(TextureTarget.Texture2D, tex);
BitmapData data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
bitmap.UnlockBits(data);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
//GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.Repeat);
//GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.Repeat);
return tex;
}
I also already created some code that maps a point on the sphere to a point on the texture, I think (I used the code from the texture-mapping-spheres section of https://www.cs.unc.edu/~rademach/xroads-RT/RTarticle.html).
point is a Vector3 of where a ray intersects the sphere:
vn = new Vector3(0f, 1f, 0f); //should be north pole of sphere, but it isn't based on sphere's position, so I think it's incorrect
ve = new Vector3(1f, 0f, 0f); // should be a point on the equator
float phi = (float) Math.Acos(-1 * Vector3.Dot(vn, point)); //angle from the north pole (polar angle)
float theta = (float) (Math.Acos(Vector3.Dot(point, ve) / Math.Sin(phi))) / (2 * (float) Math.PI); //azimuth, normalized to [0,1]
float v = phi / (float) Math.PI; //latitude mapped to [0,1]
float u = Vector3.Dot(Vector3.Cross(vn, ve), point) > 0 ? theta : 1 - theta; //pick the hemisphere depending on which side of the ve meridian the point lies
I think that I can now use these u and v coordinates on the texture I loaded to find the color of the texture there, but I don't know how. I also think the north pole and equator vectors are not correct.
I don't know if you still need an answer after 4 months, but:
If you have a proper sphere model (like an OBJ file created with Blender) with correct UV information, you just need to import that model (using Assimp or any other importer) and apply the texture during the render pass.
Your question is a bit vague because I do not know whether you use shaders.
My approach would be:
1: Import the model with the Assimp library or any other import library
2: Implement vertex and fragment shaders, and include a sampler2D uniform for the texture in the fragment shader
3: During the render pass, select your shader program [ GL.UseProgram(...) ], then upload the vertex and texture UV data and bind the texture to the sampler uniform (see the sketch after the shaders below).
4: Use a standard vertex shader like this:
#version 330
in vec3 aPosition;
in vec2 aTexture;
out vec2 vTexture;
uniform mat4 uModelViewProjectionMatrix;
void main()
{
vTexture = aTexture;
gl_Position = uModelViewProjectionMatrix * vec4(aPosition, 1.0);
}
5: Use a standard fragment shader like this:
#version 330
in vec2 vTexture;
uniform sampler2D uTexture;
out vec4 fragcolor;
void main()
{
fragcolor = texture(uTexture, vTexture);
}
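For step 3, a minimal sketch of what binding the texture to the uTexture sampler could look like in OpenTK during the render pass (the shaderProgramId variable is an assumption; texture is the id returned by LoadTexture above):
GL.UseProgram(shaderProgramId); // the program the two shaders above are linked into
GL.ActiveTexture(TextureUnit.Texture0); // select texture unit 0
GL.BindTexture(TextureTarget.Texture2D, texture); // the id returned by LoadTexture
GL.Uniform1(GL.GetUniformLocation(shaderProgramId, "uTexture"), 0); // point the sampler at unit 0
// ...then bind the sphere's vertex data and draw it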
If you need a valid OBJ file for a sphere with rectangular UV mapping, feel free to drop a line (or two).
I'm working on my volume rendering application (C# + OpenTK).
The volume is rendered using raycasting. I found a lot of inspiration on this site:
http://graphicsrunner.blogspot.sk/2009/01/volume-rendering-101.html, and even though my application works with OpenGL, the main idea of using a 3D texture and the rest of the approach is the same.
The application works fine, but as soon as I "fly into the volume" (meaning inside the bounding box), everything disappears, and I want to prevent this. Is there an easy way to do this, so that I can fly through or move inside the volume?
Here is the code of the fragment shader:
#version 330
in vec3 EntryPoint;
in vec4 ExitPointCoord;
uniform sampler2D ExitPoints;
uniform sampler3D VolumeTex;
uniform sampler1D TransferFunc;
uniform float StepSize;
uniform float AlphaReduce;
uniform vec2 ScreenSize;
layout (location = 0) out vec4 FragColor;
void main()
{
//gl_FragCoord --> http://www.txutxi.com/?p=182
vec3 exitPoint = texture(ExitPoints, gl_FragCoord.st/ScreenSize).xyz;
//background need no raycasting
if (EntryPoint == exitPoint)
discard;
vec3 rayDirection = normalize(exitPoint - EntryPoint);
vec4 currentPosition = vec4(EntryPoint, 0.0f);
vec4 colorSum = vec4(.0f,.0f,.0f,.0f);
vec4 color = vec4(0.0f,0.0f,0.0f,0.0f);
vec4 value = vec4(0.0f);
vec3 Step = rayDirection * StepSize;
float stepLength= length(Step);
float LengthSum = 0.0f;
float Length = length(exitPoint - EntryPoint);
for(int i=0; i < 16000; i++)
{
currentPosition.w = 0.0f;
value = texture(VolumeTex, currentPosition.xyz);
color = texture(TransferFunc, value.a);
//reduce the alpha to have a more transparent result
color.a *= AlphaReduce;
//Front to back blending
color.rgb *= color.a;
colorSum = (1.0f - colorSum.a) * color + colorSum;
//accumulate length
LengthSum += stepLength;
//break from the loop when alpha gets high enough
if(colorSum.a >= .95f)
break;
//advance the current position
currentPosition.xyz += Step;
//break if the ray is outside of the bounding box
if(LengthSum >= Length)
break;
}
FragColor = colorSum;
}
The code below is based on https://github.com/toolchainX/Volume_Rendering_Using_GLSL
Display() function:
public void Display()
{
// the color of a vertex on the back face is also the location of that vertex
// save the back face to a user-defined framebuffer bound
// to a 2D texture named `g_bfTexObj`
// then draw the front face of the box
// in the rendering process, i.e. the ray marching process,
// load the volume `g_volTexObj` as well as `g_bfTexObj`
// after vertex shader processing we have the color as well as the location of
// the vertex (in object coordinates, before transformation),
// and the vertices are assembled into primitives before entering
// the fragment shader processing stage.
// in the fragment shader processing stage we have `g_bfTexObj`
// (corresponds to 'ExitPoints' in GLSL) and `g_volTexObj` (corresponds to 'VolumeTex')
// as well as the location of the primitives.
// draw the back face of the box
// draw the back face of the box
GL.Enable(EnableCap.DepthTest);
//"vykreslim" front || back face objemu do framebuffru --> teda do 2D textury s ID bfTexID
//(pomocou backface.frag &.vert)
GL.BindFramebuffer(FramebufferTarget.Framebuffer, frameBufferID);
GL.Viewport(0, 0, width, height);
LinkShader(spMain.GetProgramHandle(), bfVertShader.GetShaderHandle(), bfFragShader.GetShaderHandle());
spMain.UseProgram();
//cull front face
Render(CullFaceMode.Front);
spMain.UseProgram(0);
//default framebuffer --> the "screen"
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
GL.Viewport(0, 0, width, height);
LinkShader(spMain.GetProgramHandle(), rcVertShader.GetShaderHandle(), rcFragShader.GetShaderHandle());
spMain.UseProgram();
SetUniforms();
Render(CullFaceMode.Back);
spMain.UseProgram(0);
GL.Disable(EnableCap.DepthTest);
}
private void DrawBox(CullFaceMode mode)
{
// --> Face culling allows non-visible triangles of closed surfaces to be culled before expensive Rasterization and Fragment Shader operations.
GL.Enable(EnableCap.CullFace);
GL.CullFace(mode);
GL.BindVertexArray(VAO);
GL.DrawElements(PrimitiveType.Triangles, 36, DrawElementsType.UnsignedInt, 0);
GL.BindVertexArray(0);
GL.Disable(EnableCap.CullFace);
spMain.UseProgram(0); //it was enabled in Render(), which called DrawBox
}
private void Render(CullFaceMode mode)
{
GL.ClearColor(0.0f, 0.0f, 0.0f, 1.0f);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
spMain.UseProgram();
spMain.SetUniform("modelViewMatrix", Current);
spMain.SetUniform("projectionMatrix", projectionMatrix);
DrawBox(mode);
}
The problem is (I think) that as I move towards the volume (I don't move the camera, I just scale the volume), once the scale factor is greater than about 2.7 I am inside the volume, i.e. behind the plane onto which the final picture is rendered, so I can't see anything.
The solution I can think of (maybe) is something like this:
If I reach a scale factor of about 2.7:
1.) -> don't scale the volume any further
2.) -> somehow tell the fragment shader to move EntryPoint along the
RayDirection by some length (probably based on the scale factor).
Now, I tried this "method" and it seems that it can work:
vec3 entryPoint = EntryPoint + some_value * rayDirection;
The some_value have to be clamped between [0,1[ interval (or [0,1]?)
, but maybe it doesn't matter thank's to that:
if (EntryPoint == exitPoint)
discard;
So now, maybe (if my solution isn't so bad), I can change my answer to this:
How to compute the some_value (based on scale factor which I send to fragment shader)?
if(scale_factor < 2.7something)
work like before;
else
{
compute some_value; //(I need help with this part)
change entry point;
work like before;
}
(I'm not a native English speaker, so if there are big mistakes in the text and you don't understand something, just let me know and I'll try to fix them.)
Thanks.
I solved my problem. It doesn't create the illusion of "being surrounded by the volume", but now I can fly through the volume and nothing disappears.
This is the code of my solution, added to the fragment shader:
vec3 entryPoint = vec3(0.0f);
if(scaleCoeff >= 2.7f)
{
float tmp = min((scaleCoeff - 2.7f) * 0.1f, 1.0f);
entryPoint = EntryPoint + tmp * (exitPoint - EntryPoint);
}
else
{
entryPoint = EntryPoint;
}
//
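On the C# side, the scaleCoeff uniform just needs to be updated before the raycasting pass, for example (a sketch; I'm assuming it is set together with the other uniforms in SetUniforms()):
// current scale factor of the volume; must be set while spMain is in use
int scaleLocation = GL.GetUniformLocation(spMain.GetProgramHandle(), "scaleCoeff");
GL.Uniform1(scaleLocation, scaleCoeff);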
But if you know of or can think of a better solution that creates the "being surrounded by the volume" effect, I'll be glad if you let me know.
Thank you.
If I understand correctly, I think you should use plane clipping to go through the volume. (I could give you a simple example based on your code if you attach this solution; translating the whole C++ project to C# is too time-consuming.)
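For reference, a rough sketch of what the application side of such a clipping plane could look like in OpenTK (the clipPlane uniform name and the plane values are assumptions; the vertex shader would additionally have to write gl_ClipDistance[0] = dot(clipPlane, vec4(position, 1.0))):
GL.Enable(EnableCap.ClipDistance0); // enable the first user-defined clip distance
// plane equation a*x + b*y + c*z + d = 0; fragments with a negative distance are clipped away
int planeLocation = GL.GetUniformLocation(spMain.GetProgramHandle(), "clipPlane");
GL.Uniform4(planeLocation, 0.0f, 0.0f, -1.0f, clipOffset); // clipOffset moves the plane along the view direction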
I'm trying to draw a fullscreen quad with a shader applied to it, but I keep getting the following error when drawing:
An error occurred while preparing to draw. This is probably because the current vertex declaration does not include all the elements required by the current vertex shader. The current vertex declaration includes these elements: SV_Position0, TEXCOORD0.
This is how I declared my vertices for the quad:
_vertices = new VertexPositionTexture[4];
_vertices[0] = new VertexPositionTexture(new Vector3(-1, 1, 0), new Vector2(0, 0));
_vertices[1] = new VertexPositionTexture(new Vector3(1, 1, 0), new Vector2(1, 0));
_vertices[2] = new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1));
_vertices[3] = new VertexPositionTexture(new Vector3(1, -1, 0), new Vector2(1, 1));
And this is how I'm drawing the quad (with unneeded things omitted):
foreach (var pass in _lightEffect1.CurrentTechnique.Passes)
{
pass.Apply();
GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, _vertices, 0, 2, VertexPositionTexture.VertexDeclaration);
}
And here is the shader that is being applied to the quad
// Vertex shader input structure
struct VertexShaderInput
{
float4 Pos : SV_Position;
float2 TexCoord : TEXCOORD0;
};
// Vertex shader output structure
struct VertexShaderOutput
{
float4 Pos : SV_Position;
float2 TexCoord : TEXCOORD0;
};
VertexShaderOutput VertexToPixelShader(VertexShaderInput input)
{
VertexShaderOutput output;
output.Pos = input.Pos;
output.TexCoord = input.TexCoord;
return output;
}
float4 PointLightShader(VertexShaderOutput PSIn) : COLOR0
{
//Pixel shader code here....
return float4(shading.r, shading.g, shading.b, 1.0f);
}
technique DeferredPointLight
{
pass Pass1
{
VertexShader = compile vs_4_0_level_9_1 VertexToPixelShader();
PixelShader = compile ps_4_0_level_9_1 PointLightShader();
}
}
One thing I noticed is that the VertexPositionTexture definition that MonoGame provides uses a Vector3 for the position and a Vector2 for the texture coordinates. However, in the shader it's a float4 for the position and a float2 for the texture coordinates.
I tried changing it to float3, but then the shader does not compile. So I then tried to create my own "VertexPositionTexture" struct with a Vector4 position and my own vertex declaration, but I ended up getting the same error.
I'm not all that good at DirectX, and I tried looking all over Google, but I cannot find anything that might be the cause of the problem.
Did I do something wrong in the shader? Am I missing something?
It turns out this was a really silly fix.
The Content Pipeline tool was not compiling the converted .fx files where I thought it would, and a fix I made (changing POSITION to SV_POSITION) was not actually being used...
The shader now works, now that the correct compiled shader is actually being used.
I've got sprite drawing working with OpenTK in my 2D game engine now. The only problem I'm having is that objects drawn with custom OpenGL calls (anything but sprites, really) show up as the background color. Example:
I'm drawing a black line with a width of 2.4f here. There's also a quad and a point in the example, but they do not overlap anything that's actually visible. The line overlaps the magenta sprite, but the color is just wrong. My question is: am I missing an OpenGL feature, or am I doing something horribly wrong?
These are the relevant drawing samples from my project (you can also find the project at https://github.com/Villermen/HatlessEngine if there are questions about the code):
Initialization:
Window = new GameWindow(windowSize.Width, windowSize.Height);
//OpenGL initialization
GL.Enable(EnableCap.PointSmooth);
GL.Hint(HintTarget.PointSmoothHint, HintMode.Nicest);
GL.Enable(EnableCap.LineSmooth);
GL.Hint(HintTarget.LineSmoothHint, HintMode.Nicest);
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.ClearColor(Color.Gray);
GL.Enable(EnableCap.Texture2D);
GL.Enable(EnableCap.DepthTest);
GL.DepthFunc(DepthFunction.Lequal);
GL.ClearDepth(1d);
GL.DepthRange(1d, 0d); //does not seem right, but it works (see it as duct-tape)
Every draw cycle:
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
//reset depth and color to be consistent over multiple frames
DrawX.Depth = 0;
DrawX.DefaultColor = Color.Black;
foreach(View view in Resources.Views)
{
CurrentDrawArea = view.Area;
GL.Viewport((int)view.Viewport.Left * Window.Width, (int)view.Viewport.Top * Window.Height, (int)view.Viewport.Right * Window.Width, (int)view.Viewport.Bottom * Window.Height);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(view.Area.Left, view.Area.Right, view.Area.Bottom, view.Area.Top, -1f, 1f);
GL.MatrixMode(MatrixMode.Modelview);
//drawing
foreach (LogicalObject obj in Resources.Objects)
{
//set view's coords for clipping?
obj.Draw();
}
}
GL.Flush();
Window.Context.SwapBuffers();
DrawX.Line:
public static void Line(PointF pos1, PointF pos2, Color color, float width = 1)
{
RectangleF lineRectangle = new RectangleF(pos1.X, pos1.Y, pos2.X - pos1.X, pos2.Y - pos1.Y);
if (lineRectangle.IntersectsWith(Game.CurrentDrawArea))
{
GL.LineWidth(width);
GL.Color3(color);
GL.Begin(PrimitiveType.Lines);
GL.Vertex3(pos1.X, pos1.Y, GLDepth);
GL.Vertex3(pos2.X, pos2.Y, GLDepth);
GL.End();
}
}
Edit: If I disable the blend cap before drawing the line and enable it again afterwards, the line does show up with the right color, but I need it to be blended.
I forgot to unbind the texture in the texture-drawing method...
GL.BindTexture(TextureTarget.Texture2D, 0);
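For reference, a sketch of what the end of the texture-drawing method could look like with that fix (the surrounding immediate-mode calls and the spriteTextureId name are assumptions based on the rest of the engine):
GL.BindTexture(TextureTarget.Texture2D, spriteTextureId); // bind the sprite's texture
GL.Begin(PrimitiveType.Quads);
// ...textured quad vertices for the sprite...
GL.End();
GL.BindTexture(TextureTarget.Texture2D, 0); // unbind, so later untextured lines/points/quads keep their own color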
Hello everyone, I'm currently trying to create a deferred renderer for my graphics engine using C# and SlimDX. As a resource I use this tutorial, which is very helpful even though it's intended for XNA.
But right now I'm stuck...
I have my renderer set up to draw all of the geometry's color, normals and depth to separate render target textures. This works. I can draw the resulting textures to the restored backbuffer as sprites and I can see that they contain exactly what they are supposed to. But when I try to pass those textures to another shader, in this case to create a light map, weird things happen. Here's how I draw one frame:
public bool RenderFrame(FrameInfo fInfo){
if(!BeginRender()) //checks Device, resizes buffers, calls BeginScene(), etc.
return false;
foreach(RenderQueue queue in fInfo.GetRenderQueues()){
RenderQueue(queue);
}
EndRender(); //currently only calls EndScene, used to do more
ResolveGBuffer();
DrawDirectionalLight(
new Vector3(1f, -1f, 0),
new Color4(1f,1f,1f,1f),
fInfo.CameraPosition,
SlimMath.Matrix.Invert(fInfo.ViewProjectionMatrix));
return true;
}
private void ResolveGBuffer() {
if(DeviceContext9 == null || DeviceContext9.Device == null)
return;
DeviceContext9.Device.SetRenderTarget(0, _backbuffer);
DeviceContext9.Device.SetRenderTarget(1, null);
DeviceContext9.Device.SetRenderTarget(2, null);
}
private void DrawDirectionalLight(Vector3 lightDirection, Color4 color, SlimMath.Vector3 cameraPosition, SlimMath.Matrix invertedViewProjection) {
if(DeviceContext9 == null || DeviceContext9.Device == null)
return;
DeviceContext9.Device.BeginScene();
_directionalLightShader.Shader.SetTexture(
_directionalLightShader.Parameters["ColorMap"],
_colorTexture);
_directionalLightShader.Shader.SetTexture(
_directionalLightShader.Parameters["NormalMap"],
_normalTexture);
_directionalLightShader.Shader.SetTexture(
_directionalLightShader.Parameters["DepthMap"],
_depthTexture);
_directionalLightShader.Shader.SetValue<Vector3>(
_directionalLightShader.Parameters["lightDirection"],
lightDirection);
_directionalLightShader.Shader.SetValue<Color4>(
_directionalLightShader.Parameters["Color"],
color);
_directionalLightShader.Shader.SetValue<SlimMath.Vector3>(
_directionalLightShader.Parameters["cameraPosition"],
cameraPosition);
_directionalLightShader.Shader.SetValue<SlimMath.Matrix>(
_directionalLightShader.Parameters["InvertViewProjection"],
invertedViewProjection);
_directionalLightShader.Shader.SetValue<Vector2>(
_directionalLightShader.Parameters["halfPixel"],
_halfPixel);
_directionalLightShader.Shader.Technique =
_directionalLightShader.Technique("Technique0");
_directionalLightShader.Shader.Begin();
_directionalLightShader.Shader.BeginPass(0);
RenderQuad(SlimMath.Vector2.One * -1, SlimMath.Vector2.One);
_directionalLightShader.Shader.EndPass();
_directionalLightShader.Shader.End();
DeviceContext9.Device.EndScene();
}
Now when I replace the call to DrawDirectionalLight with some code that draws _colorTexture, _normalTexture and _depthTexture to the screen, everything looks OK, but when I use the DrawDirectionalLight function instead I see wild flickering. From the output of PIX it looks like my textures do not get passed to the shader correctly.
Following the tutorial, the texture parameters and samplers are defined as follows:
float3 lightDirection;
float3 Color;
float3 cameraPosition;
float4x4 InvertViewProjection;
texture ColorMap;
texture NormalMap;
texture DepthMap;
sampler colorSampler = sampler_state{
Texture = ColorMap;
AddressU = CLAMP;
AddressV = CLAMP;
MagFilter= LINEAR;
MinFilter= LINEAR;
MipFilter= LINEAR;
};
sampler depthSampler = sampler_state{
Texture = DepthMap;
AddressU = CLAMP;
AddressV = CLAMP;
MagFilter= POINT;
MinFilter= POINT;
MipFilter= POINT;
};
sampler normalSampler = sampler_state{
Texture = NormalMap;
AddressU = CLAMP;
AddressV = CLAMP;
MagFilter= POINT;
MinFilter= POINT;
MipFilter= POINT;
};
Now my big question is: why? There are no error messages printed to the debug output.
EDIT:
The render targets/textures are created like this:
_colorTexture = new Texture(DeviceContext9.Device,
DeviceContext9.PresentParameters.BackBufferWidth,
DeviceContext9.PresentParameters.BackBufferHeight,
1,
Usage.RenderTarget,
Format.A8R8G8B8,
Pool.Default);
_colorSurface = _colorTexture.GetSurfaceLevel(0);
_normalTexture = new Texture(DeviceContext9.Device,
DeviceContext9.PresentParameters.BackBufferWidth,
DeviceContext9.PresentParameters.BackBufferHeight,
1,
Usage.RenderTarget,
Format.A8R8G8B8,
Pool.Default);
_normalSurface = _normalTexture.GetSurfaceLevel(0);
_depthTexture = new Texture(DeviceContext9.Device,
DeviceContext9.PresentParameters.BackBufferWidth,
DeviceContext9.PresentParameters.BackBufferHeight,
1,
Usage.RenderTarget,
Format.A8R8G8B8,
Pool.Default);
_depthSurface = _depthTexture.GetSurfaceLevel(0);
EDIT 2:
The problem seems to lie in the directionalLightShader itself, since passing other regular textures doesn't work either.
The answer to my problem is as simple as the problem was stupid. The strange behaviour was caused by two different errors:
1. I was just looking at the wrong events in PIX. The textures were passed correctly to the shader, but I didn't see it because it was 'hidden' in the BeginPass event (behind the '+').
2. The pixel shader I was trying to execute never got called, because the vertices of the fullscreen quad I used for rendering were wound in clockwise order... and my CullMode was also set to cull clockwise faces...
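For reference, the second fix boils down to making the cull state match the quad's winding, or disabling culling for the fullscreen pass; in SlimDX that could look like this (a sketch; device is the Direct3D9 device, an assumption):
// Option A: don't cull anything while drawing the fullscreen quad
device.SetRenderState(RenderState.CullMode, Cull.None);
// Option B: keep the clockwise cull state and wind the quad's vertices counter-clockwise instead
// (or switch to Cull.Counterclockwise and keep the clockwise vertex order)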
Thanks to everyone who read this question!