Multiple textures on the same object with opengl4csharp - C#

I have a C# program using the opengl4csharp library, which creates a 3D cube, movable in space with the mouse and keyboard.
Currently I apply only one texture to the whole cube. My problem is that I want to apply a different texture to each face.
I tried to initialize a texture array and a texture ID array as follows:
diceTextures = new Texture[6];
diceTextures[0] = new Texture("top.jpg");
diceTextures[1] = new Texture("bottom.jpg");
diceTextures[2] = new Texture("left.jpg");
diceTextures[3] = new Texture("right.jpg");
diceTextures[4] = new Texture("front.jpg");
diceTextures[5] = new Texture("back.jpg");
diceUint = new uint[diceTextures.Length];
for (uint ui = 0; ui < diceTextures.Length; ui++)
{
    diceUint[ui] = diceTextures[ui].TextureID;
}
Then, in the OnRenderFrame method, I bind them with:
Gl.UseProgram(program);
Gl.ActiveTexture(TextureUnit.Texture0);
Gl.BindTextures(0, diceTextures.Length, diceUint);
But nothing changes: only the first texture of the array is displayed on the cube, just as before when I bound a single texture.
How can I get each texture applied to its own face?

Gl.BindTextures(0, diceTextures.Length, diceUint);
This binds 6 textures to 6 separate texture units, 0 through diceTextures.Length - 1. Indeed, if you're going to use glBindTextures, you don't need the glActiveTexture call.
In any case, if your goal is to give each face a different texture, you first have to be able to identify a specific face from your shader. That means each face needs a per-vertex value that is distinct from the values given to the vertices of other faces. It also means faces cannot share vertex positions with other faces, since one of their attributes is not shared from face to face.
So you need a new vertex attribute which contains the index for the texture you want that face to use.
From there, you can employ array textures. These are single textures which contain an array of images. When sampling from array textures, you can specify (as part of the texture coordinate) which index in the array to sample from.
Of course, this changes your texture building code, as you must use GL_TEXTURE_2D_ARRAY for your texture type and allocate all of the array layers when you create each mipmap level.
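For concreteness, the texture creation could look roughly like this. This is a minimal sketch, assuming your binding exposes the raw GL entry points (Gl.GenTexture, Gl.TexImage3D, Gl.TexSubImage3D, Gl.TexParameteri; exact names and signatures vary between bindings); width and height are the common size of the face images, and LoadPixels is a hypothetical helper that decodes one image file into a BGRA byte array:
uint arrayTex = Gl.GenTexture();
Gl.BindTexture(TextureTarget.Texture2DArray, arrayTex);

// Allocate storage for 6 layers at mipmap level 0 (all faces must share one size).
Gl.TexImage3D(TextureTarget.Texture2DArray, 0, PixelInternalFormat.Rgba8,
    width, height, 6, 0, PixelFormat.Bgra, PixelType.UnsignedByte, IntPtr.Zero);

string[] faceFiles = { "top.jpg", "bottom.jpg", "left.jpg", "right.jpg", "front.jpg", "back.jpg" };
for (int layer = 0; layer < 6; layer++)
{
    byte[] pixels = LoadPixels(faceFiles[layer]); // hypothetical image-decoding helper
    // GCHandle is in System.Runtime.InteropServices; pin the managed array for upload.
    GCHandle handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
    // Upload one face image into its layer (zoffset = layer, depth = 1).
    Gl.TexSubImage3D(TextureTarget.Texture2DArray, 0, 0, 0, layer,
        width, height, 1, PixelFormat.Bgra, PixelType.UnsignedByte, handle.AddrOfPinnedObject());
    handle.Free();
}
Gl.TexParameteri(TextureTarget.Texture2DArray, TextureParameterName.TextureMinFilter, TextureParameter.Linear);
Gl.TexParameteri(TextureTarget.Texture2DArray, TextureParameterName.TextureMagFilter, TextureParameter.Linear);
At draw time you then bind this one texture to a single unit and set the sampler2DArray uniform to that unit index, instead of binding six separate textures.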
Overall, the shader code would look something like this:
#version 330
layout(location = 0) in vec3 position;
layout(location = 2) in vec2 vertTexCoord;
layout(location = 6) in float textureLayer;

out vec2 texCoord;
flat out float layer;

void main()
{
    gl_Position = //your usual stuff.
    texCoord = vertTexCoord;
    layer = textureLayer;
}
Fragment shader:
#version 330
in vec2 texCoord;
flat in float layer;

uniform sampler2DArray arrayTexture;

out vec4 outColor;

void main()
{
    outColor = texture(arrayTexture, vec3(texCoord, layer));
}
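On the C# side the new attribute is just one float per vertex. A sketch, assuming the cube uses 24 unshared vertices (4 per face, matching the no-sharing requirement above) and that you use the library's VBO&lt;T&gt; helper and Gl.BindBufferToShaderAttribute the way its tutorials do; adapt it to however you already bind position and vertTexCoord:
// One layer index per vertex; all 4 vertices of face f get the value f.
float[] layers = new float[24];
for (int face = 0; face < 6; face++)
    for (int v = 0; v < 4; v++)
        layers[face * 4 + v] = face;

VBO<float> layerVBO = new VBO<float>(layers);
Gl.BindBufferToShaderAttribute(layerVBO, program, "textureLayer");
With the array texture bound, the arrayTexture uniform set to its unit, and this attribute in place, each face samples its own layer.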

Related

Volume rendering using ray-casting - flowing through the volume

I'm working on my volume rendering application (C# + OpenTK).
The volume is rendered using raycasting; I found a lot of inspiration on this site:
http://graphicsrunner.blogspot.sk/2009/01/volume-rendering-101.html, and even though my application works with OpenGL, the main idea of using a 3D texture and so on is the same.
The application works fine, but after I "flow into the volume" (i.e. move inside its bounding box), everything disappears, and I want to prevent this. Is there an easy way to do it, so that I can fly through the volume or move around inside it?
Here is the fragment shader code:
#version 330

in vec3 EntryPoint;
in vec4 ExitPointCoord;

uniform sampler2D ExitPoints;
uniform sampler3D VolumeTex;
uniform sampler1D TransferFunc;
uniform float StepSize;
uniform float AlphaReduce;
uniform vec2 ScreenSize;

layout (location = 0) out vec4 FragColor;

void main()
{
    //gl_FragCoord --> http://www.txutxi.com/?p=182
    vec3 exitPoint = texture(ExitPoints, gl_FragCoord.st / ScreenSize).xyz;

    //background needs no raycasting
    if (EntryPoint == exitPoint)
        discard;

    vec3 rayDirection = normalize(exitPoint - EntryPoint);
    vec4 currentPosition = vec4(EntryPoint, 0.0f);
    vec4 colorSum = vec4(.0f, .0f, .0f, .0f);
    vec4 color = vec4(0.0f, 0.0f, 0.0f, 0.0f);
    vec4 value = vec4(0.0f);
    vec3 Step = rayDirection * StepSize;
    float stepLength = length(Step);
    float LengthSum = 0.0f;
    float Length = length(exitPoint - EntryPoint);

    for (int i = 0; i < 16000; i++)
    {
        currentPosition.w = 0.0f;
        value = texture(VolumeTex, currentPosition.xyz);
        color = texture(TransferFunc, value.a);

        //reduce the alpha to have a more transparent result
        color.a *= AlphaReduce;

        //front-to-back blending
        color.rgb *= color.a;
        colorSum = (1.0f - colorSum.a) * color + colorSum;

        //accumulate length
        LengthSum += stepLength;

        //break from the loop when alpha gets high enough
        if (colorSum.a >= .95f)
            break;

        //advance the current position
        currentPosition.xyz += Step;

        //break if the ray is outside of the bounding box
        if (LengthSum >= Length)
            break;
    }
    FragColor = colorSum;
}
The code below is based on https://github.com/toolchainX/Volume_Rendering_Using_GLSL
Display() function:
public void Display()
{
    // The color of each vertex on the back face is also that vertex's
    // object-space position. The back face is rendered into a user-defined
    // framebuffer bound to a 2D texture named `g_bfTexObj`.
    // Then the front face of the box is drawn for the ray-marching pass,
    // with the volume `g_volTexObj` (the 'VolumeTex' sampler in GLSL) and
    // `g_bfTexObj` (the 'ExitPoints' sampler) both bound. After vertex
    // shader processing we have the color as well as the location of each
    // vertex (in object coordinates, before transformation), and the
    // vertices are assembled into primitives before the fragment shader
    // stage, where the entry point comes from the rasterized front face
    // and the exit point from the back-face texture.
    GL.Enable(EnableCap.DepthTest);

    // render the front/back face of the volume into the framebuffer,
    // i.e. into the 2D texture with ID bfTexID (via backface.frag & .vert)
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, frameBufferID);
    GL.Viewport(0, 0, width, height);
    LinkShader(spMain.GetProgramHandle(), bfVertShader.GetShaderHandle(), bfFragShader.GetShaderHandle());
    spMain.UseProgram();
    //cull the front face
    Render(CullFaceMode.Front);
    spMain.UseProgram(0);

    // default framebuffer --> the screen
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
    GL.Viewport(0, 0, width, height);
    LinkShader(spMain.GetProgramHandle(), rcVertShader.GetShaderHandle(), rcFragShader.GetShaderHandle());
    spMain.UseProgram();
    SetUniforms();
    Render(CullFaceMode.Back);
    spMain.UseProgram(0);

    GL.Disable(EnableCap.DepthTest);
}
private void DrawBox(CullFaceMode mode)
{
    // Face culling allows non-visible triangles of closed surfaces to be
    // culled before the expensive rasterization and fragment shader stages.
    GL.Enable(EnableCap.CullFace);
    GL.CullFace(mode);
    GL.BindVertexArray(VAO);
    GL.DrawElements(PrimitiveType.Triangles, 36, DrawElementsType.UnsignedInt, 0);
    GL.BindVertexArray(0);
    GL.Disable(EnableCap.CullFace);
    spMain.UseProgram(0); // it was enabled in Render(), which called DrawBox
}
private void Render(CullFaceMode mode)
{
    GL.ClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    spMain.UseProgram();
    spMain.SetUniform("modelViewMatrix", Current);
    spMain.SetUniform("projectionMatrix", projectionMatrix);
    DrawBox(mode);
}
The problem, I think, is that as I move towards the volume (I don't move the camera, I just scale the volume), once the scale factor is greater than roughly 2.7 I am inside the volume, i.e. behind the plane on which the final picture is rendered, so I can't see anything.
The solution I can think of is something like this:
If I reach a scale factor of roughly 2.7:
1.) don't scale the volume any further;
2.) somehow tell the fragment shader to move EntryPoint along
RayDirection by some length (probably based on the scale factor).
Now, I tried this method and it seems it can work:
vec3 entryPoint = EntryPoint + some_value * rayDirection;
some_value has to be clamped to the interval [0,1) (or [0,1]?), but maybe that doesn't matter, thanks to this:
if (EntryPoint == exitPoint)
discard;
So now (if my solution isn't too bad), I can change my question to this:
How do I compute some_value (based on the scale factor, which I send to the fragment shader)?
if (scale_factor < 2.7something)
    work like before;
else
{
    compute some_value; // (I need help with this part)
    change entry point;
    work like before;
}
(I'm not a native English speaker, so if there are big mistakes in the text and you don't understand something, just let me know and I'll try to fix them.)
Thanks.
I solved my problem. It doesn't create the illusion of being surrounded by the volume, but now I can flow through the volume and nothing disappears.
This is the code of my solution, added to the fragment shader:
vec3 entryPoint = vec3(0.0f);
if (scaleCoeff >= 2.7f)
{
    float tmp = min((scaleCoeff - 2.7f) * 0.1f, 1.0f);
    entryPoint = EntryPoint + tmp * (exitPoint - EntryPoint);
}
else
{
    entryPoint = EntryPoint;
}
But if you know of, or can think of, a better solution that produces the "being surrounded by the volume" effect, I'll be glad if you let me know.
Thank you.
If I understand correctly, I think you should use plane clipping to go through the volume. (I could give you a simple example based on your code if you accept this solution; translating the whole C++ project to C# would be too time-consuming.)
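For reference, a minimal sketch of the plane-clipping mechanics in OpenTK, assuming a clipPlane uniform is added to the raycasting shaders (the plane vector and nearOffset value are illustrative, and SetUniform is assumed to have a Vector4 overload):
// C# side: enable the first user clip distance and pass the plane.
GL.Enable(EnableCap.ClipDistance0);
// Plane ax + by + cz + d = 0, in the same space as the shader's position;
// here: discard geometry closer to the camera than z = nearOffset.
spMain.SetUniform("clipPlane", new Vector4(0.0f, 0.0f, -1.0f, nearOffset));

// Vertex shader side (GLSL), shown here as a comment:
//     uniform vec4 clipPlane;
//     gl_ClipDistance[0] = dot(clipPlane, vec4(position, 1.0));
// Vertices with a negative clip distance are clipped away.
Clipping alone only discards geometry; to keep the rays correct you also need to start them at the clip plane (for example by drawing a cap polygon or computing the entry point from the plane), which is the part the answer offers to demonstrate.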

Monogame Shader Porting Issues

OK, so I ported a game I have been working on over to MonoGame, but I'm having a shader issue now that it's ported. It's an odd bug: it works in my old XNA project, and it also works the first time I use it in the new MonoGame project, but not after that unless I restart the game.
The shader is a very simple one that reads a greyscale image and, based on the grey value, picks a color from a lookup texture. Basically I'm using this to randomize the sprite image for an enemy every time a new enemy is placed on the screen. It works the first time an enemy is spawned, but not after that; I just get a completely transparent texture (not a null texture).
Also, I'm only targeting Windows Desktop for now, but I am planning to target Mac and Linux at some point.
Here is the shader code itself.
sampler input : register(s0);
Texture2D colorTable;
float seed; // calculated in program, passed to shader (between 0 and 1)

sampler colorTableSampler =
sampler_state
{
    Texture = <colorTable>;
};

float4 PixelShaderFunction(float2 c : TEXCOORD0) : COLOR0
{
    // get the current pixel of the texture (greyscale)
    float4 color = tex2D(input, c);

    // set the values to compare to
    float hair = 139/255;  float hairless = 140/255;
    float shirt = 181/255; float shirtless = 182/255;

    // var to hold the new color
    float4 swap;

    // pixel coordinate for the lookup
    float2 i;
    i.y = 1;

    // compare and swap
    if (color.r >= hair && color.r <= hairless)
    {
        i.x = ((0.5 + seed + 96) / 128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r >= shirt && color.r <= shirtless)
    {
        i.x = ((0.5 + seed + 64) / 128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 1)
    {
        i.x = ((0.5 + seed + 32) / 128);
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 0)
    {
        i.x = ((0.5 + seed) / 128);
        swap = tex2D(colorTableSampler, i);
    }
    return swap;
}

technique ColorSwap
{
    pass Pass1
    {
        // TODO: set renderstates here.
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
And here is the function that creates the texture. I should also note that the texture generation works fine without the shader; I just get the greyscale base image.
public static Texture2D createEnemyTexture(GraphicsDevice gd, SpriteBatch sb)
{
    //get a random number to pass into the shader.
    Random r = new Random();
    float seed = (float)r.Next(0, 32);
    //create the texture to copy color data into
    Texture2D enemyTex = new Texture2D(gd, CHARACTER_SIDE, CHARACTER_SIDE);
    //create a render target to draw a character to.
    RenderTarget2D rendTarget = new RenderTarget2D(gd, CHARACTER_SIDE, CHARACTER_SIDE,
        false, gd.PresentationParameters.BackBufferFormat, DepthFormat.None);
    gd.SetRenderTarget(rendTarget);
    //set background of new render target to transparent.
    //gd.Clear(Microsoft.Xna.Framework.Color.Black);
    //start drawing to the new render target
    sb.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
        SamplerState.PointClamp, DepthStencilState.None, RasterizerState.CullNone);
    //send the random value to the shader.
    Graphics.GlobalGfx.colorSwapEffect.Parameters["seed"].SetValue(seed);
    //send the palette texture to the shader.
    Graphics.GlobalGfx.colorSwapEffect.Parameters["colorTable"].SetValue(Graphics.GlobalGfx.palette);
    //apply the effect
    Graphics.GlobalGfx.colorSwapEffect.CurrentTechnique.Passes[0].Apply();
    //draw the texture (now with color!)
    sb.Draw(enemyBase, new Microsoft.Xna.Framework.Vector2(0, 0), Microsoft.Xna.Framework.Color.White);
    //end drawing
    sb.End();
    //reset the render target
    gd.SetRenderTarget(null);
    //copy the drawn and colored enemy to a non-volatile texture (instead of the render target)
    //create a color array the size of the texture.
    Color[] cs = new Color[CHARACTER_SIDE * CHARACTER_SIDE];
    //get all color data from the render target
    rendTarget.GetData<Color>(cs);
    //move the color data into the texture.
    enemyTex.SetData<Color>(cs);
    //return the finished texture.
    return enemyTex;
}
And just in case, the code for loading in the shader:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
colorSwapEffect = new Effect(gd, Reader.ReadBytes((int)Reader.BaseStream.Length));
If anyone has ideas to fix this, I'd really appreciate it, and just let me know if you need other info about the problem.
I am not sure why you have an "at" (@) sign in front of the string when you have also escaped the backslashes - unless you actually want \\ in your string, but that looks strange in a file path.
You have written in your code:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
Unless you want \\ inside your string, do
BinaryReader Reader = new BinaryReader(File.Open(@"Content\shaders\test.mgfx", FileMode.Open));
or
BinaryReader Reader = new BinaryReader(File.Open("Content\\shaders\\test.mgfx", FileMode.Open));
but do not use both.
I don't see anything super obvious just from reading through it, but this could be tricky to figure out from the code alone.
I'd recommend doing a graphics profile (via Visual Studio), capturing a frame that renders correctly and a frame that renders incorrectly, and comparing the state of the two.
E.g., is the input texture what you expect it to be, are pixels being output but culled, is the output correct on the render target (in which case the problem could be Get/SetData), etc.
Change ps_2_0 to ps_4_0_level_9_3.
MonoGame cannot use shaders built against HLSL shader model 2.
Also, the built-in sprite batch shader uses ps_4_0_level_9_3 and vs_4_0_level_9_3; you will get issues if you try to replace the pixel portion of a shader with a shader of a different level. (Note that under a 4_0 profile, integer literal division such as 139/255 is true integer division and evaluates to 0, so those constants should be written as 139.0/255.0.)
This is the only issue I can see with your code.

GLSL textureCube fails

I have a fairly simple fragment shader that does not work. It appears to have something to do with the textureCube call.
This is the fragment shader:
in vec3 ReflectDir;
in vec3 RefractDir;

uniform samplerCube CubeMapTex;
uniform bool DrawSkyBox;
uniform float MaterialReflectionFactor;

void main()
{
    // Access the cube map texture
    vec4 reflectColor = textureCube(CubeMapTex, ReflectDir);
    vec4 refractColor = textureCube(CubeMapTex, RefractDir);

    if (DrawSkyBox)
    {
        gl_FragColor = reflectColor;
        gl_FragColor = vec4(ReflectDir, 1); // This line
    }
    else
        gl_FragColor = vec4(1, 0, 0, 1);
}
ReflectDir and RefractDir come from the vertex shader, but that seems to be in order.
If I comment out the second line in the if statement, the whole screen is black (including the teapot); otherwise it looks like this (ReflectDir seems OK):
http://i.imgur.com/MkHX6kT.png
Also, the cubemap itself renders properly (well, the image order is wrong). This is how the scene looks without the shader program:
http://i.imgur.com/6kKzA2x.jpg
Additional info:
the texture is loaded with GL_TEXTURE_CUBE_MAP on active texture TEXTURE0
uniform CubeMapTex is set to 0
DrawSkyBox is set to true when drawing the skybox, false after that
I used SharpGL
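For comparison, the setup described above would look roughly like this in SharpGL (a sketch; gl, cubeMapId and programId stand for whatever instance and handles your code already has):
// Bind the cube map to texture unit 0 and point the sampler uniform at it.
gl.ActiveTexture(OpenGL.GL_TEXTURE0);
gl.BindTexture(OpenGL.GL_TEXTURE_CUBE_MAP, cubeMapId);
int location = gl.GetUniformLocation(programId, "CubeMapTex");
gl.Uniform1(location, 0); // sampler uniforms take the unit index, not the texture id
Two classic causes of exactly this all-black symptom are setting the sampler uniform to the texture id instead of the unit index, and sampling a cube map that is not texture-complete (all six faces must be uploaded at the same size, and GL_TEXTURE_MIN_FILTER must be set to GL_LINEAR or GL_NEAREST if no mipmaps are uploaded).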

How to return a texture from a pixel shader in Unity 3D ShaderLab?

How do I create a simple pixel color shader that, say, takes a texture and applies something like masking:
half4 color = tex2D(_Texture0, i.uv.xy);
if (distance(color, mask) > _CutOff)
{
    return color;
}
else
{
    return static_color;
}
and then returns a texture that can be passed to the next shader from C# code, in a way like mats[1].SetTexture("_MainTex", mats[0].GetTexture("_MainTex"));?
But... you might not want to use a shader only to modify a texture.
Why not? It is a common practice.
Check out Graphics.Blit. It basically draws a quad with a material (including a shader) applied, so you can use your shader to modify a texture. Note that the destination texture has to be a RenderTexture.
It would be like this:
var mat = new Material(Shader.Find("My Shader"));
var output = new RenderTexture(...);
Graphics.Blit(sourceTexture, output, mat);
sourceTexture in this case will be bound to _MainTex of My Shader.
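A slightly fuller usage sketch (the component, the "My Shader" name, and the zero-depth RenderTexture are illustrative; this is a pure 2D pass, so no depth buffer is needed):
using UnityEngine;

public class MaskPass : MonoBehaviour
{
    public Texture sourceTexture; // the texture to run through the shader
    public Material nextMaterial; // whatever consumes the result

    void Start()
    {
        var mat = new Material(Shader.Find("My Shader"));
        var output = new RenderTexture(sourceTexture.width, sourceTexture.height, 0);

        // Draws a fullscreen quad with mat; sourceTexture is bound to _MainTex.
        Graphics.Blit(sourceTexture, output, mat);

        // The RenderTexture can now be fed to the next shader, as in the question.
        nextMaterial.SetTexture("_MainTex", output);
    }
}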

Computing polygon outline vertices on the GPU

I'm trying to draw 2D polygons with wide, colored outlines without using a custom shader.
(If I were to write one, it'd probably be slower than using the CPU, since I'm not well-versed in shaders.)
To do so I plan to draw the polygons as normal, and then use the resulting depth buffer as a stencil when drawing the same, expanded geometry.
The XNA "GraphicsDevice" can draw primitives given any array of IVertexType instances:
DrawUserPrimitives<T>(PrimitiveType primitiveType, T[] vertexData, int vertexOffset, int primitiveCount, VertexDeclaration vertexDeclaration) where T : struct;
I've defined an IVertexType for a 2D coordinate space:
public struct VertexPosition2DColor : IVertexType
{
    public VertexPosition2DColor(Vector2 position, Color color)
    {
        this.position = position;
        this.color = color;
    }
    public Vector2 position;
    public Color color;

    public static VertexDeclaration declaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector2, VertexElementUsage.Position, 0),
        new VertexElement(sizeof(float) * 2, VertexElementFormat.Color, VertexElementUsage.Color, 0)
    );

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return declaration; }
    }
}
I've defined an array class for storing a polygon's vertices, colors, and edge normals, which I hope to pass as the T[] parameter in the GraphicsDevice's DrawUserPrimitives function.
The goal is for the outline vertices to be GPU-calculated, since the GPU is apparently good at such things.
internal class VertexOutlineArray : Array
{
    internal VertexOutlineArray(Vector2[] positions, Vector2[] normals, Color[] colors, Color[] outlineColors, bool outlineDrawMode)
    {
        this.positions = positions;
        this.normals = normals;
        this.colors = colors;
        this.outlineColors = outlineColors;
        this.outlineDrawMode = outlineDrawMode;
    }
    internal Vector2[] positions, normals;
    internal Color[] colors, outlineColors;
    internal float outlineWidth;
    internal bool outlineDrawMode;

    internal void SetVertex(int index, Vector2 position, Vector2 normal, Color color, Color outlineColor)
    {
        positions[index] = position;
        normals[index] = normal;
        colors[index] = color;
        outlineColors[index] = outlineColor;
    }

    internal VertexPosition2DColor this[int i]
    {
        get
        {
            return (outlineDrawMode) ? new VertexPosition2DColor(positions[i] + outlineWidth * normals[i], outlineColors[i])
                                     : new VertexPosition2DColor(positions[i], colors[i]);
        }
    }
}
I want to be able to render the shape and its outline like so (the depth buffer is used as a stencil when drawing the expanded outline geometry):
protected override void RenderLocally(GraphicsDevice device)
{
    // Draw shape
    mVertices.outlineDrawMode = false; // mVertices is a VertexOutlineArray instance
    device.RasterizerState = RasterizerState.CullNone;
    device.PresentationParameters.DepthStencilFormat = DepthFormat.Depth16;
    device.Clear(ClearOptions.DepthBuffer, Color.SkyBlue, 0, 0);
    device.DrawUserPrimitives<VertexPosition2DColor>(PrimitiveType.TriangleList, (VertexPosition2DColor[])mVertices, 0, mVertices.Length - 2, VertexPosition2DColor.declaration);

    // Draw outline
    mVertices.outlineDrawMode = true;
    device.DepthStencilState = new DepthStencilState
    {
        DepthBufferWriteEnable = true,
        DepthBufferFunction = CompareFunction.Greater // keeps the outline from writing over the shape
    };
    device.DrawUserPrimitives(PrimitiveType.TriangleList, mVertices.ToArray(), 0, mVertices.Count - 2);
}
This doesn't work, though, because I'm unable to pass my VertexOutlineArray class as a T[]. How can I amend this, or otherwise accomplish the goal of doing the outline calculations on the GPU without a custom shader?
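(One direct way around the T[] error is to materialize the view into a plain array; a minimal sketch of a hypothetical helper built on the indexer defined above. Note it does the offset math on the CPU on every call, which gives up the GPU calculation the question is aiming for, so the answers below take different routes:)
// Hypothetical helper on VertexOutlineArray: builds the flat array
// that DrawUserPrimitives expects.
internal VertexPosition2DColor[] ToVertexArray()
{
    var result = new VertexPosition2DColor[positions.Length];
    for (int i = 0; i < positions.Length; i++)
        result[i] = this[i]; // applies the outline offset when outlineDrawMode is set
    return result;
}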
I am wondering why you don't simply write a class that draws the outline using pairs of thin triangles as lines. You could create a generalized polyline routine that receives the 2D points and a line width as input and spits out a VertexBuffer; a sketch of such a routine follows below.
I realize this isn't answering your question, but I can't see the advantage of trying to do it your way. Is there a specific effect you want to achieve, or are you going to be manipulating the data very frequently or scaling the polygons a lot?
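A sketch of such a polyline routine, with assumed names and standard XNA types; it emits two triangles per segment, which is fine with CullNone since the winding order then doesn't matter:
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static VertexPositionColor[] BuildPolylineQuads(Vector2[] points, float width, Color color)
{
    var verts = new List<VertexPositionColor>();
    float half = width / 2f;
    for (int i = 0; i < points.Length - 1; i++)
    {
        Vector2 dir = Vector2.Normalize(points[i + 1] - points[i]);
        Vector2 n = new Vector2(-dir.Y, dir.X); // perpendicular to the segment
        Vector3 a = new Vector3(points[i] + n * half, 0);
        Vector3 b = new Vector3(points[i] - n * half, 0);
        Vector3 c = new Vector3(points[i + 1] + n * half, 0);
        Vector3 d = new Vector3(points[i + 1] - n * half, 0);
        // Two triangles per segment: a-b-c and c-b-d.
        verts.Add(new VertexPositionColor(a, color));
        verts.Add(new VertexPositionColor(b, color));
        verts.Add(new VertexPositionColor(c, color));
        verts.Add(new VertexPositionColor(c, color));
        verts.Add(new VertexPositionColor(b, color));
        verts.Add(new VertexPositionColor(d, color));
    }
    return verts.ToArray();
}
The result can be drawn with BasicEffect and DrawUserPrimitives (primitiveCount = array length / 3). For closed polygons, repeat the first point at the end; if corner quality matters, add a miter or a small fan at each joint.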
The problem you are likely having is that XNA 4 for Windows Phone 7 does not support custom shaders at all. In fact, they purposefully limited it to a set of predefined shaders because of the number of permutations that would have had to be tested. The ones currently supported are:
AlphaTestEffect
BasicEffect
EnvironmentMapEffect
DualTextureEffect
SkinnedEffect
You can read about them here:
http://msdn.microsoft.com/en-us/library/bb203872(v=xnagamestudio.40).aspx
I have not tested creating or utilizing an IVertexType with a Vector2 position and normal, so I can't comment on whether it is supported. If I were to do this, I would use just the BasicEffect and VertexPositionNormal for the main polygonal shape rendering and adjust the DiffuseColor for each polygon. For rendering the outline, use the existing VertexBuffer and scale it appropriately by calling GraphicsDevice.Viewport.Unproject() to determine the 3D coordinate distance required to produce an n-pixel 2D screen distance (your outline width).
Remember that when you are using the BasicEffect, or any effect for that matter, you have to loop through the EffectPass array of the CurrentTechnique and call the Apply() method for each pass before you make your draw call:
device.DepthStencilState = DepthStencilState.Default;
device.BlendState = BlendState.AlphaBlend;
device.RasterizerState = RasterizerState.CullCounterClockwise;

// Set the appropriate vertex and index buffers
device.SetVertexBuffer(_polygonVertices);
device.Indices = _polygonIndices;

foreach (EffectPass pass in _worldEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    PApp.Graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _polygonVertices.VertexCount, 0, _polygonIndices.IndexCount / 3);
}
