I'm working on my volume rendering application (C# + OpenTK).
The volume is rendered using raycasting. I found a lot of inspiration on this site: http://graphicsrunner.blogspot.sk/2009/01/volume-rendering-101.html, and even though my application works with OpenGL, the main idea of using a 3D texture and the rest of the technique is the same.
The application works fine, but once I "fly into the volume" (i.e. the camera enters the bounding box), everything disappears, and I want to prevent this. Is there some easy way to do it, so that I can fly through the volume or move around inside it?
Here is the code of the fragment shader:
#version 330

in vec3 EntryPoint;
in vec4 ExitPointCoord;

uniform sampler2D ExitPoints;
uniform sampler3D VolumeTex;
uniform sampler1D TransferFunc;
uniform float StepSize;
uniform float AlphaReduce;
uniform vec2 ScreenSize;

layout (location = 0) out vec4 FragColor;

void main()
{
    //gl_FragCoord --> http://www.txutxi.com/?p=182
    vec3 exitPoint = texture(ExitPoints, gl_FragCoord.st / ScreenSize).xyz;

    //background needs no raycasting
    if (EntryPoint == exitPoint)
        discard;

    vec3 rayDirection = normalize(exitPoint - EntryPoint);
    vec4 currentPosition = vec4(EntryPoint, 0.0f);
    vec4 colorSum = vec4(0.0f);
    vec4 color = vec4(0.0f);
    vec4 value = vec4(0.0f);
    vec3 Step = rayDirection * StepSize;
    float stepLength = length(Step);
    float LengthSum = 0.0f;
    float Length = length(exitPoint - EntryPoint);

    for (int i = 0; i < 16000; i++)
    {
        currentPosition.w = 0.0f;
        value = texture(VolumeTex, currentPosition.xyz);
        color = texture(TransferFunc, value.a);
        //reduce the alpha to have a more transparent result
        color.a *= AlphaReduce;

        //front-to-back blending
        color.rgb *= color.a;
        colorSum = (1.0f - colorSum.a) * color + colorSum;

        //accumulate length
        LengthSum += stepLength;

        //break from the loop when alpha gets high enough
        if (colorSum.a >= 0.95f)
            break;

        //advance the current position
        currentPosition.xyz += Step;

        //break if the ray is outside of the bounding box
        if (LengthSum >= Length)
            break;
    }
    FragColor = colorSum;
}
The code below is based on https://github.com/toolchainX/Volume_Rendering_Using_GLSL
Display() function:
public void Display()
{
    // The color of each back-face vertex is also that vertex's location
    // in object coordinates, before transformation.
    // First pass: save the back face of the box to the user-defined framebuffer
    // bound to a 2D texture named `g_bfTexObj`.
    // Second pass: draw the front face of the box and do the ray marching,
    // loading the volume `g_volTexObj` as well as `g_bfTexObj`.
    // After vertex-shader processing we have the color as well as the
    // object-space location of each vertex, and the vertices are assembled
    // into primitives before the fragment-shader stage.
    // In the fragment-shader stage we sample `g_volTexObj` (corresponding to
    // 'VolumeTex' in GLSL) and `g_bfTexObj` (corresponding to 'ExitPoints').
    GL.Enable(EnableCap.DepthTest);

    // render the back face of the volume into the framebuffer, i.e. into the
    // 2D texture with ID bfTexID (using backface.frag & backface.vert)
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, frameBufferID);
    GL.Viewport(0, 0, width, height);
    LinkShader(spMain.GetProgramHandle(), bfVertShader.GetShaderHandle(), bfFragShader.GetShaderHandle());
    spMain.UseProgram();
    //cull the front face, so only the back face is rendered
    Render(CullFaceMode.Front);
    spMain.UseProgram(0);

    // default framebuffer --> the screen
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
    GL.Viewport(0, 0, width, height);
    LinkShader(spMain.GetProgramHandle(), rcVertShader.GetShaderHandle(), rcFragShader.GetShaderHandle());
    spMain.UseProgram();
    SetUniforms();
    Render(CullFaceMode.Back);
    spMain.UseProgram(0);

    GL.Disable(EnableCap.DepthTest);
}
private void DrawBox(CullFaceMode mode)
{
    // Face culling allows non-visible triangles of closed surfaces to be
    // culled before the expensive rasterization and fragment-shader stages.
    GL.Enable(EnableCap.CullFace);
    GL.CullFace(mode);
    GL.BindVertexArray(VAO);
    GL.DrawElements(PrimitiveType.Triangles, 36, DrawElementsType.UnsignedInt, 0);
    GL.BindVertexArray(0);
    GL.Disable(EnableCap.CullFace);
    spMain.UseProgram(0); // it was enabled in Render(), which called DrawBox
}
private void Render(CullFaceMode mode)
{
    GL.ClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    spMain.UseProgram();
    spMain.SetUniform("modelViewMatrix", Current);
    spMain.SetUniform("projectionMatrix", projectionMatrix);
    DrawBox(mode);
}
The problem is (I think) that as I move towards the volume (I don't move the camera, I just scale the volume), once the scale factor is greater than about 2.7 I am inside the volume, i.e. behind the plane onto which the final picture is rendered, so I can't see anything.
A solution I can think of goes something like this. Once I reach a scale factor of about 2.7:
1.) stop scaling the volume;
2.) somehow tell the fragment shader to move EntryPoint along RayDirection by some length (probably based on the scale factor).
Now, I tried this method and it seems it can work:
vec3 entryPoint = EntryPoint + some_value * rayDirection;
The some_value has to be clamped to the interval [0,1) (or [0,1]?), but maybe that doesn't matter, thanks to this check:
if (EntryPoint == exitPoint)
    discard;
So now, maybe (if my solution isn't too bad), I can change my question to this:
How do I compute some_value (based on the scale factor, which I send to the fragment shader)?
if (scale_factor < 2.7something)
    work like before;
else
{
    compute some_value; //(I need help with this part)
    change the entry point;
    work like before;
}
(I'm not a native English speaker, so if there are big mistakes in the text and you don't understand something, just let me know and I'll try to fix them.)
Thanks.
I solved my problem. It doesn't create the illusion of being surrounded by the volume, but now I can fly through the volume and nothing disappears.
This is the code of my solution, added to the fragment shader:
vec3 entryPoint = vec3(0.0f);
if (scaleCoeff >= 2.7f)
{
    // push the entry point along the ray, proportionally to how far past the
    // threshold we have zoomed; min() keeps it from passing the exit point
    float tmp = min((scaleCoeff - 2.7f) * 0.1f, 1.0f);
    entryPoint = EntryPoint + tmp * (exitPoint - EntryPoint);
}
else
{
    entryPoint = EntryPoint;
}
But if you know of, or can think of, a better solution that creates the "being surrounded by the volume" effect, I'll be glad if you let me know.
Thank you.
If I understand correctly, I think you should use plane clipping to go through the volume. (I could give you a simple example based on your code if you attach your solution; translating the whole C++ project to C# is too time-consuming.)
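For what it's worth, here is a minimal sketch of that idea on the OpenTK side. Everything in it is an assumption, since none of it is in the original code: it presumes spMain.SetUniform has a Vector4 overload, and that the raycasting vertex shader is extended with a clipPlane uniform whose signed distance it writes to gl_ClipDistance[0].
// Enable user clip plane 0 for the raycasting pass.
// Assumed vertex-shader addition (GLSL 330):
//   uniform vec4 clipPlane;
//   gl_ClipDistance[0] = dot(clipPlane, modelViewMatrix * vec4(position, 1.0));
// Fragments on the negative side of the plane are clipped away, which
// opens the box so the camera can pass through it.
GL.Enable(EnableCap.ClipDistance0);
spMain.UseProgram();
// hypothetical eye-space plane: keep geometry at least 0.1 units in front of the eye
spMain.SetUniform("clipPlane", new Vector4(0.0f, 0.0f, -1.0f, -0.1f));
Render(CullFaceMode.Back);
spMain.UseProgram(0);
GL.Disable(EnableCap.ClipDistance0);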
Related
I have a C# program using the opengl4csharp library, which creates a 3D cube, movable in space with the mouse and keyboard.
Currently I apply only one texture, uniformly, to the cube. My problem is that I want to apply a different texture to each face.
I tried to initialize a texture array and a texture ID array as follows:
diceTextures = new Texture[6];
diceTextures[0] = new Texture("top.jpg");
diceTextures[1] = new Texture("bottom.jpg");
diceTextures[2] = new Texture("left.jpg");
diceTextures[3] = new Texture("right.jpg");
diceTextures[4] = new Texture("front.jpg");
diceTextures[5] = new Texture("back.jpg");
diceUint = new uint[diceTextures.Length];
for (uint ui = 0; ui < diceTextures.Length; ui++)
{
    diceUint[ui] = diceTextures[ui].TextureID;
}
Then, in the OnRenderFrame method, I bind them with:
Gl.UseProgram(program);
Gl.ActiveTexture(TextureUnit.Texture0);
Gl.BindTextures(0, diceTextures.Length, diceUint);
But nothing changes; only the first texture of the array is displayed on the cube, just as before when I bound only one texture.
How can I get a different texture applied to each face?
Gl.BindTextures(0, diceTextures.Length, diceUint);
This binds 6 textures to 6 separate texture units, 0 through diceTextures.Length - 1. Indeed, if you're going to use glBindTextures, you don't need the glActiveTexture call.
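In other words, the multi-bind call does roughly the same work as this loop (a sketch; it assumes your binding exposes ActiveTexture/BindTexture in the shape the code above suggests):
for (int unit = 0; unit < diceTextures.Length; unit++)
{
    // glActiveTexture(GL_TEXTURE0 + unit), then glBindTexture on that unit
    Gl.ActiveTexture(TextureUnit.Texture0 + unit);
    Gl.BindTexture(TextureTarget.Texture2D, diceUint[unit]);
}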
In any case, if your goal is to give each face a different texture, you first have to be able to identify a specific face from your shader. That means each face needs a per-vertex value that is different from the values given to the vertices of other faces. This also means that faces cannot share vertices with other faces, even at the same position, since one of their attributes now differs from face to face.
So you need a new vertex attribute which contains the index for the texture you want that face to use.
From there, you can employ array textures. These are single textures which contain an array of images. When sampling from array textures, you can specify (as part of the texture coordinate) which index in the array to sample from.
Of course, this changes your texture building code, as you must use GL_TEXTURE_2D_ARRAY for your texture type and allocate multiple array layers at each mipmap level.
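For illustration, the allocation could look roughly like this. This is a sketch only: the 256x256 size, the LoadPixels helper, and the exact overload shapes are assumptions to adapt to your binding.
// allocate a 6-layer 2D array texture (one layer per cube face)
uint arrayTex = Gl.GenTexture();
Gl.BindTexture(TextureTarget.Texture2DArray, arrayTex);
Gl.TexImage3D(TextureTarget.Texture2DArray, 0, PixelInternalFormat.Rgba8,
              256, 256, 6, 0, PixelFormat.Rgba, PixelType.UnsignedByte, IntPtr.Zero);
for (int layer = 0; layer < 6; layer++)
{
    byte[] pixels = LoadPixels(layer); // hypothetical loader for one face image
    Gl.TexSubImage3D(TextureTarget.Texture2DArray, 0, 0, 0, layer,
                     256, 256, 1, PixelFormat.Rgba, PixelType.UnsignedByte, pixels);
}
// set GL_TEXTURE_MIN_FILTER / GL_TEXTURE_MAG_FILTER as usual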
Overall, the shader code would look something like this:
#version 330

layout(location = 0) in vec3 position;
layout(location = 2) in vec2 vertTexCoord;
layout(location = 6) in float textureLayer;

out vec2 texCoord;
flat out float layer;

void main()
{
    gl_Position = //your usual stuff.
    texCoord = vertTexCoord;
    layer = textureLayer;
}
Fragment shader:
#version 330

in vec2 texCoord;
flat in float layer;

uniform sampler2DArray arrayTexture;

out vec4 outColor;

void main()
{
    outColor = texture(arrayTexture, vec3(texCoord, layer));
}
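At draw time you would then bind the array texture once and point the sampler at its unit, along these lines (again a sketch; program stands in for your shader program handle):
Gl.ActiveTexture(TextureUnit.Texture0);
Gl.BindTexture(TextureTarget.Texture2DArray, arrayTex);
// sampler2DArray uniforms are set like any other sampler: to a texture unit index
Gl.Uniform1i(Gl.GetUniformLocation(program, "arrayTexture"), 0);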
I am creating a game engine that includes basic game needs. Using glslDevil, it turns out my bind-VBO method throws an InvalidValue error: a call to glEnableVertexAttribArray and a call to glVertexAttribPointer cause the issue. The vertex attribute index is the problem; the index is 4294967295, which is well over 15. Everything else works perfectly fine. I am using OpenTK. Here is the bind-to-attribute method.
public void BindToAttribute(ShaderProgram prog, string attribute)
{
    int location = GL.GetAttribLocation(prog.ProgramID, attribute);
    GL.EnableVertexAttribArray(location);
    Bind();
    GL.VertexAttribPointer(location, Size, PointerType, true, TSize, 0);
}

public void Bind()
{
    GL.BindBuffer(Target, ID);
}
Here are my shaders if required.
Vertex Shader:
uniform mat4 transform;
uniform mat4 projection;
uniform mat4 camera;

in vec3 vertex;
in vec3 normal;
in vec4 color;
in vec2 uv;

out vec3 rnormal;
out vec4 rcolor;
out vec2 ruv;

void main(void)
{
    rcolor = color;
    rnormal = normal;
    ruv = uv;
    gl_Position = camera * projection * transform * vec4(vertex, 1);
}
Fragment Shader:
in vec3 rnormal;
in vec4 rcolor;
in vec2 ruv;

uniform sampler2D texture;

void main(void)
{
    gl_FragColor = texture2D(texture, ruv) * rcolor;
}
Am I not obtaining the index correctly or is there another issue?
The index you are getting seems to be the problem: 4294967295 is 0xFFFFFFFF, i.e. -1 stored in an unsigned int, which is the value you get when OpenGL doesn't find a valid attribute/uniform with that name.
There are a few things that might be going on:
you are passing a string that doesn't exist in the shader program (check case sensitivity and so on)
the attribute exists and you are passing the correct string, but you are not using it in the shader, so the driver has removed all occurrences of that attribute from the final code as an optimization (therefore it no longer exists)
In general, though, that number shows that OpenGL can't find the uniform or attribute you are looking for.
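A defensive version of the BindToAttribute method from the question makes that failure explicit instead of passing the bad index on to GL:
public void BindToAttribute(ShaderProgram prog, string attribute)
{
    int location = GL.GetAttribLocation(prog.ProgramID, attribute);
    if (location == -1) // comes back as 4294967295 when read as an unsigned int
        throw new ArgumentException("Attribute '" + attribute + "' not found (or optimized away).");
    GL.EnableVertexAttribArray(location);
    Bind();
    GL.VertexAttribPointer(location, Size, PointerType, true, TSize, 0);
}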
EDIT:
One trick is the following: let's assume you have some pixel shader code whose output is the sum of many values:
out vec4 color;

void main()
{
    // this shader does many calculations involving many values,
    // but let's assume you want to debug just the diffuse color...
    // how do you do it?
    // if you change the output to return just the diffuse color,
    // the optimizer might remove code and you might have problems
    //
    // if you have this
    color = various_calculation_1 + various_calculation_2 + ....;
    // what you can do is the following
    color *= 0.0000001f; // so basically it's still calculated,
                         // but it almost won't show up
    color += value_to_debug; // for example, the diffuse color
}
The sample
If you look at the code, I'm interested in refraction.fx, and in the void DrawRefractGlacier(GameTime gameTime) function. There you can see that the function uses a texture to render a water distortion over an image (waterfall.jpg as the "distorter" image, and glacier.jpg as the distorted image).
If you read inside refraction.fx, at the beginning it says:
// Effect uses a scrolling displacement texture to offset the position of the main
// texture. Depending on the contents of the displacement texture, this can give a
// wide range of refraction, rippling, warping, and swirling type effects.
It seems it would be easy to achieve another effect by changing the image. I tried that with an image like this:
I want to achieve the effect of distorting everything around as a rotating whirl, or a spiral. How can I do that?
Here are some sequential screenshots of how it looks with my texture:
Refraction shader:
// Effect uses a scrolling displacement texture to offset the position of the main
// texture. Depending on the contents of the displacement texture, this can give a
// wide range of refraction, rippling, warping, and swirling type effects.

float2 DisplacementScroll;
float2 angle;

sampler TextureSampler : register(s0);
sampler DisplacementSampler : register(s1);

float2x2 RotationMatrix(float rotation)
{
    float c = cos(rotation);
    float s = sin(rotation);
    return float2x2(c, -s, s, c);
}

float4 main(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
{
    float2 rotated_texcoord = texCoord;
    rotated_texcoord -= float2(0.25, 0.25);
    rotated_texcoord = mul(rotated_texcoord, RotationMatrix(angle));
    rotated_texcoord += float2(0.25, 0.25);

    float2 DispScroll = DisplacementScroll;

    // Look up the displacement amount.
    float2 displacement = tex2D(DisplacementSampler, DispScroll + texCoord / 3);

    // Offset the main texture coordinates.
    texCoord += displacement * 0.2 - 0.15;

    // Look up into the main texture.
    return tex2D(TextureSampler, texCoord) * color;
}

technique Refraction
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 main();
    }
}
Its draw call:
void DrawRefractGlacier(GameTime gameTime)
{
    // Set an effect parameter to make the
    // displacement texture scroll in a giant circle.
    refractionEffect.Parameters["DisplacementScroll"].SetValue(
        MoveInCircle(gameTime, 0.2f));

    // Set the displacement texture.
    graphics.GraphicsDevice.Textures[1] = waterfallTexture;

    // Begin the sprite batch.
    spriteBatch.Begin(0, null, null, null, null, refractionEffect);

    // Because the effect will displace the texture coordinates before
    // sampling the main texture, the coordinates could sometimes go right
    // off the edges of the texture, which looks ugly. To prevent this, we
    // adjust our sprite source region to leave a little border around the
    // edge of the texture. The displacement effect will then just move the
    // texture coordinates into this border region, without ever hitting
    // the edge of the texture.
    Rectangle croppedGlacier = new Rectangle(32, 32,
                                             glacierTexture.Width - 64,
                                             glacierTexture.Height - 64);

    spriteBatch.Draw(glacierTexture,
                     GraphicsDevice.Viewport.Bounds,
                     croppedGlacier,
                     Color.White);

    // End the sprite batch.
    spriteBatch.End();
}
I have a fairly simple fragment shader that does not work. It appears to have something to do with the textureCube function.
This is the fragment shader:
in vec3 ReflectDir;
in vec3 RefractDir;

uniform samplerCube CubeMapTex;
uniform bool DrawSkyBox;
uniform float MaterialReflectionFactor;

void main()
{
    // Access the cube map texture
    vec4 reflectColor = textureCube(CubeMapTex, ReflectDir);
    vec4 refractColor = textureCube(CubeMapTex, RefractDir);

    if (DrawSkyBox)
    {
        gl_FragColor = reflectColor;
        gl_FragColor = vec4(ReflectDir, 1); //This line
    }
    else
        gl_FragColor = vec4(1, 0, 0, 1);
}
ReflectDir and RefractDir come from the vertex shader, but that part seems to be in order.
If I comment out the second line in the if statement, the whole screen is black (including the teapot); otherwise it looks like this (ReflectDir seems OK):
http://i.imgur.com/MkHX6kT.png
Also, the cubemap itself renders properly (well, the order of the images is wrong). This is how the scene looks without the shader program:
http://i.imgur.com/6kKzA2x.jpg
Additional info (a rough sketch of this setup follows below):
the texture is loaded with GL_TEXTURE_CUBE_MAP on active texture TEXTURE0
the uniform CubeMapTex is set to 0
DrawSkyBox is set to true when drawing the skybox, and to false after that
I used SharpGL
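For reference, the setup described above would look roughly like this in SharpGL (a sketch only; gl, cubeMapTexture, and shaderProgram stand in for your existing objects):
// bind the cube map to texture unit 0...
gl.ActiveTexture(OpenGL.GL_TEXTURE0);
gl.BindTexture(OpenGL.GL_TEXTURE_CUBE_MAP, cubeMapTexture);
// ...and point the samplerCube uniform at unit 0
gl.Uniform1(gl.GetUniformLocation(shaderProgram, "CubeMapTex"), 0);
// bools are set as ints: 1 for the skybox pass, 0 afterwards
gl.Uniform1(gl.GetUniformLocation(shaderProgram, "DrawSkyBox"), 1);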
I have a pretty annoying problem. I would like to create a drawing program, using a WinForms + XNA combo.
The most important part is transforming the mouse position into the XNA-drawn grid. I was able to make it work for translations, but only as long as I don't zoom in; when I do, the coordinates go horribly wrong.
And I have no idea what I'm doing wrong. I tried transforming with the scaling matrix, with the inverse scaling matrix, and multiplying by the zoom, but none of these seem to work.
In the beginning (with zoom value = 1) the grid goes from (0,0,0) to (Width, Height, 0). I was able to get coordinates based on this grid as long as the zoom value didn't change at all. I'm using a custom shader, with an orthographic projection matrix, an identity view matrix, and the transformed world matrix.
Here are the two main methods:
internal void Update(RenderData data)
{
    KeyboardState keyS = Keyboard.GetState();
    MouseState mouS = Mouse.GetState();

    if (ButtonState.Pressed == mouS.RightButton)
    {
        camTarget.X -= (float)(mouS.X - oldMstate.X) / 2;
        camTarget.Y += (float)(mouS.Y - oldMstate.Y) / 2;
    }
    if (ButtonState.Pressed == mouS.MiddleButton || keyS.IsKeyDown(Keys.Space))
    {
        zVal += (float)(mouS.Y - oldMstate.Y) / 10;
        zoom = (float)Math.Pow(2, zVal);
    }

    oldKState = keyS;
    oldMstate = mouS;

    world = Matrix.CreateTranslation(new Vector3(-camTarget.X, -camTarget.Y, 0)) * Matrix.CreateScale(zoom / 2);
}

internal PointF MousePos
{
    get
    {
        Vector2 mousePos = new Vector2(Mouse.GetState().X, Mouse.GetState().Y);
        Matrix trans = Matrix.CreateTranslation(new Vector3(camTarget.X - (Width / 2), -camTarget.Y + (Height / 2), 0));
        mousePos = Vector2.Transform(mousePos, trans);
        return new PointF(mousePos.X, mousePos.Y);
    }
}
The second method should return the coordinates of the mouse cursor on the grid (where the grid's (0,0) point is the top-left corner).
But it just doesn't work. I removed the zoom transformation from the trans matrix, as I wasn't able to get any useful results with it (most of the time the coordinates were horribly wrong, often in the thousands, while the grid's size is 500x500).
Any ideas or suggestions? I've been trying to solve this simple problem for two days now.
Take a look at the GraphicsDevice.Viewport.Unproject method for converting screen-space locations into world space; it basically runs your world, view, and projection transformations in reverse order.
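For illustration, the MousePos property from the question could be rewritten along these lines (a sketch; it assumes the class can reach GraphicsDevice plus the projectionMatrix and world matrices used for drawing):
internal PointF MousePos
{
    get
    {
        MouseState mouS = Mouse.GetState();
        // run the draw transforms in reverse for the point under the cursor
        Vector3 nearPoint = GraphicsDevice.Viewport.Unproject(
            new Vector3(mouS.X, mouS.Y, 0f),
            projectionMatrix,  // your orthographic projection
            Matrix.Identity,   // your (identity) view matrix
            world);            // the world matrix built in Update()
        // with an orthographic camera, the unprojected X/Y at the near
        // plane are already the grid coordinates under the cursor
        return new PointF(nearPoint.X, nearPoint.Y);
    }
}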
As for your zooming issue: instead of scaling the world transform, why not move the camera closer to the object you're viewing?