I want to map a rectangular globe texture onto a sphere. I can load the "globe.jpg" texture and display it on the screen. I think I need to retrieve the color of the "globe.jpg" texture at specific texture coordinates and use that to colorize the corresponding point on the globe.
I want to map the globe texture shown on the middle-right onto one of the spheres on the left (see picture).
Code for loading texture:
int texture;

public Texture() {
    texture = LoadTexture("Content/globe.jpg");
}

public int LoadTexture(string file) {
    Bitmap bitmap = new Bitmap(file);
    int tex;
    GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest);
    GL.GenTextures(1, out tex);
    GL.BindTexture(TextureTarget.Texture2D, tex);
    BitmapData data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
        ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
        OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
    bitmap.UnlockBits(data);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
    //GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.Repeat);
    //GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.Repeat);
    return tex;
}
I have also already created some code that I think maps a point on the sphere to a point on the texture (I used the code from the texture-mapping-spheres section of https://www.cs.unc.edu/~rademach/xroads-RT/RTarticle.html).
point is a Vector3 containing the position where a ray intersects the sphere:
vn = new Vector3(0f, 1f, 0f); //should be north pole of sphere, but it isn't based on sphere's position, so I think it's incorrect
ve = new Vector3(1f, 0f, 0f); // should be a point on the equator
float phi = (float) Math.Acos(-1 * Vector3.Dot(vn, point));
float theta = (float) (Math.Acos(Vector3.Dot(point, ve) / Math.Sin(phi))) / (2 * (float) Math.PI);
float v = phi / (float) Math.PI;
float u = Vector3.Dot(Vector3.Cross(vn, ve), point) > 0 ? theta : 1 - theta;
I think I can now use these u and v coordinates to look up the color of the loaded texture at that position, but I don't know how. I also think the north pole and equator vectors are not correct.
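Conceptually, I think I would need something like the untested sketch below, since for a CPU ray tracer I can sample the image directly instead of reading back the OpenGL texture. It assumes I keep the loaded Bitmap around (called globeBitmap here) and that sphereCenter is the center of the sphere that was hit; both names are placeholders. The mapping is applied to the normalized direction from the sphere's center to the intersection point, so the pole and equator vectors no longer depend on where the sphere sits in the scene.

// Hedged sketch: map a sphere hit point to (u, v) and sample the bitmap on the CPU.
// globeBitmap (System.Drawing.Bitmap) and sphereCenter (Vector3) are assumed names.
Vector3 vn = new Vector3(0f, 1f, 0f);                  // north pole direction of the sphere
Vector3 ve = new Vector3(1f, 0f, 0f);                  // direction to a point on the equator
Vector3 vp = Vector3.Normalize(point - sphereCenter);  // unit vector from center to hit point

float phi = (float)Math.Acos(-Vector3.Dot(vn, vp));
float v = phi / (float)Math.PI;

// Note: the poles (sin(phi) == 0) need a special case to avoid a division by zero.
float theta = (float)(Math.Acos(Vector3.Dot(vp, ve) / Math.Sin(phi)) / (2.0 * Math.PI));
float u = Vector3.Dot(Vector3.Cross(vn, ve), vp) > 0 ? theta : 1f - theta;

// Sample the bitmap at (u, v). GetPixel is slow; cache the pixels in an array for real use.
int x = Math.Min((int)(u * globeBitmap.Width), globeBitmap.Width - 1);
int y = Math.Min((int)(v * globeBitmap.Height), globeBitmap.Height - 1);
Color texel = globeBitmap.GetPixel(x, y);

If the sphere is rendered with OpenGL instead of ray traced on the CPU, the shader-based approach in the answer below is the more direct route.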
I don't know if you still need an answer after 4 months, but:
If you have a proper sphere model (like an obj file created with Blender) with correct UV information, you just need to import that model (using assimp or any other importer) and apply the texture during the render pass.
Your question is a bit vague because I do not know whether you use shaders.
My approach would be:
1: Import the model with the assimp library or any other import library
2: Implement vertex and fragment shaders and include a sampler2D uniform for the texture in the fragment shader
3: During the render pass, select your shader program [ GL.UseProgram(...) ], then upload the vertex positions and texture UVs and bind the texture so the sampler2D uniform can read it (a C# sketch of this follows after the shaders below).
4: Use a standard vertex shader like this:
#version 330
in vec3 aPosition;
in vec2 aTexture;
out vec2 vTexture;
uniform mat4 uModelViewProjectionMatrix;
void main()
{
    vTexture = aTexture;
    gl_Position = uModelViewProjectionMatrix * vec4(aPosition, 1.0);
}
5: Use a standard fragment shader like this:
#version 330
in vec2 vTexture;
uniform sampler2D uTexture;
out vec4 fragcolor;
void main()
{
    fragcolor = texture(uTexture, vTexture);
}
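To make step 3 concrete, here is a hedged C# sketch of the per-frame part with OpenTK. The names shaderProgramId, textureId, vao, indexCount and modelViewProjection are placeholders for whatever your importer and setup code produce; the fixed parts are activating a texture unit, binding the texture, and pointing the sampler2D uniform at that unit.

// Hedged sketch of the render pass for the shaders above; all identifiers except
// the uniform names are assumed to come from your own setup/import code.
GL.UseProgram(shaderProgramId);

GL.ActiveTexture(TextureUnit.Texture0);                    // select texture unit 0
GL.BindTexture(TextureTarget.Texture2D, textureId);        // bind the globe texture to it
int samplerLocation = GL.GetUniformLocation(shaderProgramId, "uTexture");
GL.Uniform1(samplerLocation, 0);                           // tell the sampler2D to read unit 0

int mvpLocation = GL.GetUniformLocation(shaderProgramId, "uModelViewProjectionMatrix");
GL.UniformMatrix4(mvpLocation, false, ref modelViewProjection); // Matrix4 built from your camera

GL.BindVertexArray(vao);
GL.DrawElements(PrimitiveType.Triangles, indexCount, DrawElementsType.UnsignedInt, 0);
GL.BindVertexArray(0);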
If you need a valid obj file for a sphere with rectangular uv mapping, feel free to drop a line (or two).
I made a full-screen-sized square that shows in the window.
But sadly, I am stuck at changing the viewpoint (camera or perspective?) to make the square look small at the center of the window.
As many people suggested on the web, I followed guides on setting up the matrices and a perspective field of view, but it does not work...
I am wondering what I am missing in my code.
private void ImageControl_OnRender(TimeSpan delta)
{
    //Create perspective camera matrix
    //ImageControl is the name of window
    GL.Viewport(0, 0, (int)ImageControl.Width, (int)ImageControl.Height);
    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadIdentity();

    Matrix4 perspectiveMatrix;
    Matrix4.CreatePerspectiveFieldOfView(45.0f * (float)Math.PI / 180, (float)(ImageControl.Width / ImageControl.Height), 0.1f, 100.0f, out perspectiveMatrix);

    //Set perspective camera
    //GL.MatrixMode(MatrixMode.Projection);
    //GL.LoadIdentity();
    GL.LoadMatrix(ref perspectiveMatrix);
    GL.LoadIdentity();

    GL.MatrixMode(MatrixMode.Modelview);
    //GL.MatrixMode(MatrixMode.Projection);
    GL.LoadIdentity();

    //Now starting to draw objects
    //Set the background colour
    GL.ClearColor(Color4.SkyBlue);
    //Clear the colour and depth buffer for next matrix.
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    //Set the scale of object first hand
    //GL.Scale(0.5f, 0.5f, 0.5f);
    //GL.Translate() <<< Set the translation of object first hand
    GL.Translate(0.0f, 0.0f, -2.0f);

    //Set the colour of object first hand
    GL.Color3(0.3f, 0.2f, 0.5f);

    //Tells that we are going to draw a square consisting of vertices. Can be Triangle too!
    GL.Begin(PrimitiveType.Quads);
    GL.Vertex3(-1.0f, -1.0f, 0.0f);
    GL.Vertex3(1.0f, -1.0f, 0.0f);
    GL.Vertex3(1.0f, 1.0f, 0.0f);
    GL.Vertex3(-1.0f, 1.0f, 0.0f);
    //GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.Repeat);
    //GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.Repeat);
    //GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
    GL.End();

    GL.Finish();
}
You load the identity matrix after the projection matrix. This overrides the projection matrix. Do the following:
// 1. Select projection matrix mode
GL.MatrixMode(MatrixMode.Projection); // <--- INSERT
// 2. Load projection matrix
GL.LoadMatrix(ref perspectiveMatrix);
// GL.LoadIdentity(); <--- DELETE
// 3. Select model view matrix mode
GL.MatrixMode(MatrixMode.Modelview);
// 4. Clear model view matrix (load the identity matrix)
GL.LoadIdentity();
// 5. Multiply model view matrix with the translation matrix
GL.Translate(0.0f, 0.0f, -2.0f);
Note that GL.MatrixMode selects the current matrix; all subsequent matrix operations affect the selected matrix. GL.LoadIdentity "clears" the current matrix by loading the identity matrix.
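Applied to the function from the question, the matrix handling would look roughly like this hedged sketch (everything besides the matrix calls is kept as posted, and it is untested):

private void ImageControl_OnRender(TimeSpan delta)
{
    GL.Viewport(0, 0, (int)ImageControl.Width, (int)ImageControl.Height);

    // Build and load the projection matrix (no LoadIdentity afterwards).
    Matrix4 perspectiveMatrix;
    Matrix4.CreatePerspectiveFieldOfView(45.0f * (float)Math.PI / 180, (float)(ImageControl.Width / ImageControl.Height), 0.1f, 100.0f, out perspectiveMatrix);
    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadMatrix(ref perspectiveMatrix);

    // Switch to the model view matrix and apply the translation there.
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();
    GL.Translate(0.0f, 0.0f, -2.0f);

    GL.ClearColor(Color4.SkyBlue);
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    GL.Color3(0.3f, 0.2f, 0.5f);
    GL.Begin(PrimitiveType.Quads);
    GL.Vertex3(-1.0f, -1.0f, 0.0f);
    GL.Vertex3(1.0f, -1.0f, 0.0f);
    GL.Vertex3(1.0f, 1.0f, 0.0f);
    GL.Vertex3(-1.0f, 1.0f, 0.0f);
    GL.End();

    GL.Finish();
}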
I'm trying to draw a triangle using an OpenGL (OpenTK) fragment shader.
But a black triangle is always displayed (even if I change the color in the fragment shader).
Maybe the fragment shader is not working.
How can I fix it?
I have attached my code.
P.S. I'm sorry if I am doing something wrong with this post. This is my first time on this site.
Render
window.RenderFrame += (FrameEventArgs args) =>
{
    GL.UseProgram(shaderProgram.shaderProgramId);
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    float[] verts = { -0.5f, -0.5f, 0.0f, 0.5f, -0.5f, 0.0f, 0.0f, 0.5f, 0.0f };

    int vao = GL.GenVertexArray();
    int vertices = GL.GenBuffer();
    GL.BindVertexArray(vao);
    GL.BindBuffer(BufferTarget.ArrayBuffer, vertices);
    GL.BufferData(BufferTarget.ArrayBuffer, verts.Length * sizeof(float), verts, BufferUsageHint.StaticCopy);
    GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 3 * sizeof(float), 0);
    GL.EnableVertexAttribArray(0);

    GL.DrawArrays(OpenTK.Graphics.OpenGL4.PrimitiveType.Triangles, 0, 3);

    GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
    GL.BindVertexArray(0);
    GL.DeleteVertexArray(vao);
    GL.DeleteBuffer(vertices);

    window.SwapBuffers();
};
Shader Load
public static Shader LoadShader(string shaderLocation, ShaderType shaderType)
{
    int shaderId = GL.CreateShader(shaderType);
    GL.ShaderSource(shaderId, File.ReadAllText(shaderLocation));
    GL.CompileShader(shaderId);

    string infoLog = GL.GetShaderInfoLog(shaderId);
    if (!string.IsNullOrEmpty(infoLog))
    {
        throw new Exception(infoLog);
    }

    return new Shader() { shaderId = shaderId };
}
Program binding
public static ShaderProgram LoadShaderProgram(string vertexShaderLocation, string fragmentShaderLocation)
{
    int shaderProgramId = GL.CreateProgram();

    Shader vertexShader = LoadShader(vertexShaderLocation, ShaderType.VertexShader);
    Shader fragShader = LoadShader(fragmentShaderLocation, ShaderType.FragmentShader);
    GL.AttachShader(shaderProgramId, vertexShader.shaderId);
    GL.AttachShader(shaderProgramId, fragShader.shaderId);
    GL.LinkProgram(shaderProgramId);

    GL.DetachShader(shaderProgramId, vertexShader.shaderId);
    GL.DetachShader(shaderProgramId, fragShader.shaderId);
    GL.DeleteShader(vertexShader.shaderId);
    GL.DeleteShader(fragShader.shaderId);

    string infoLog = GL.GetProgramInfoLog(shaderProgramId);
    if (!string.IsNullOrEmpty(infoLog))
    {
        throw new Exception(infoLog);
    }

    return new ShaderProgram() { shaderProgramId = shaderProgramId };
}
shaders
vertex
#version 330
layout(location = 0) in vec3 vPosition;
out vec4 vertexColor;
void main() {
    gl_Position = vec4(vPosition, 1.0);
    vertexColor = vec4(0.0, 1.0, 0.0, 1.0);
}
fragment
#version 330
out vec4 FragColor;
in vec4 vertexColor;
void main()
{
    FragColor = vertexColor;
}
I see a couple of problems with your code. The main reason you may not see your shader in action is because you do not keep your shaders attached to your shader handle (shaderProgramId). Instead, you detach and delete them right after you compiled and attached them. What you are doing there is basically creating your shader program and then immediately throwing it away.
Another issue (which might not cause your main problem, but still) may be your usage of the VAO. A VAO is actually meant to preserve the OpenGL state of the objects bound to it across state switches. It is essentially a container for VBOs, holding their descriptions. So what you want to do is create your VAO, bind it, then create, bind and describe (glVertexAttribPointer) your VBOs. After that you can unbind your VAO. When you bind it again, you don't have to do anything extra (like binding VBOs or calling glVertexAttribPointer again): what you did when first binding your VAO and adding your VBOs is stored in the VAO. Just bind the VAO, bind your shader (glUseProgram) and happily render away.
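A hedged sketch of that setup/render split, reusing the code and names from the question (treat it as an illustration of the pattern rather than a verified fix):

// One-time setup (run once, e.g. on window load), not every frame.
float[] verts = { -0.5f, -0.5f, 0.0f, 0.5f, -0.5f, 0.0f, 0.0f, 0.5f, 0.0f };

int vao = GL.GenVertexArray();
int vbo = GL.GenBuffer();

GL.BindVertexArray(vao);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.BufferData(BufferTarget.ArrayBuffer, verts.Length * sizeof(float), verts, BufferUsageHint.StaticDraw);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 3 * sizeof(float), 0);
GL.EnableVertexAttribArray(0);
GL.BindVertexArray(0);          // the attribute setup is now stored in the VAO

// Per-frame rendering: just bind the program and the VAO, then draw.
window.RenderFrame += (FrameEventArgs args) =>
{
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    GL.UseProgram(shaderProgram.shaderProgramId);
    GL.BindVertexArray(vao);
    GL.DrawArrays(PrimitiveType.Triangles, 0, 3);
    GL.BindVertexArray(0);
    window.SwapBuffers();
};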
I am creating my own game graphics engine. I have looked into using others like Unity, but they don't fit my needs. Anyway, I am using OpenTK (this is a 2D game), and the issue is that when I draw a texture to the screen and then draw a quad to the screen, the color of the texture darkens. Here is the method I am using to draw a texture:
public void Texture(int ID, Vector2 size, Vector2 pos, Vector2 texSize, Vector2 texPos)
{
    pos.Y = -pos.Y;

    GL.Enable(EnableCap.Texture2D);
    GL.BindTexture(TextureTarget.Texture2D, ID);

    GL.Begin(PrimitiveType.Quads);
    GL.TexCoord2(texPos.X, texPos.Y);
    GL.Vertex2(pos.X, pos.Y);
    GL.TexCoord2(texPos.X + texSize.X, texPos.Y);
    GL.Vertex2(pos.X + size.X, pos.Y);
    GL.TexCoord2(texPos.X + texSize.X, texPos.Y + texSize.Y);
    GL.Vertex2(pos.X + size.X, pos.Y - size.Y);
    GL.TexCoord2(texPos.X, texPos.Y + texSize.Y);
    GL.Vertex2(pos.X, pos.Y - size.Y);
    GL.End();
}
I am inverting the Y because I am used to the Windows Forms coordinate system, where going down is y++. I am calling it like this:
Texture(backdropTextureID, new Vector2(1f, 1f), new Vector2(-0.5f, -0.5f), new Vector2(1f, 1f), new Vector2(0f, 0f));
As expected, if nothing else is being drawn, it draws the texture with the GL id backdropTextureID in the center of the screen. When I also draw a colored quad, though, the texture is darkened. Here is the method I am using for drawing a quad:
public void Quad(Vector2 pos1, Vector2 pos2, Vector2 pos3, Vector2 pos4, Color color1, Color color2, Color color3, Color color4)
{
    GL.Disable(EnableCap.Texture2D);

    pos1.Y = -pos1.Y;
    pos2.Y = -pos2.Y;
    pos3.Y = -pos3.Y;
    pos4.Y = -pos4.Y;

    GL.Begin(PrimitiveType.Quads);
    GL.Color3(color1);
    GL.Vertex2(pos1);
    GL.Color3(color2);
    GL.Vertex2(pos2);
    GL.Color3(color3);
    GL.Vertex2(pos3);
    GL.Color3(color4);
    GL.Vertex2(pos4);
    GL.End();
}
Again, I am inverting the Y for the reason stated above. Also, notice that I am enabling EnableCap.Texture2D in the method for drawing a texture and disabling it when I draw a colored quad. I am calling the quad method like this:
Quad(new Vector2(0.0f, 0.0f), new Vector2(0.5f, 0.0f), new Vector2(0.5f, 0.5f), new Vector2(0.0f, 0.5f), Color.Gray, Color.Gray, Color.Gray, Color.Gray);
If anyone could help me, thank you in advance. Basically: How do I stop a texture from darkening after drawing a colored quad in C# OpenTK?
For anyone who's having this problem, I figured it out. The color I was giving to the colored quad was also being applied to the texture. You just need to add
GL.Color3(Color.Transparent);
to the start of the texture drawing method.
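In other words, GL.Color3 sets a current color state that keeps tinting (multiplying) everything drawn afterwards until it is changed again. A minimal sketch of where the reset fits in the Texture method above; using Color.White instead of Color.Transparent is my suggestion (GL.Color3 ignores alpha, so both end up as white), not part of the original answer:

public void Texture(int ID, Vector2 size, Vector2 pos, Vector2 texSize, Vector2 texPos)
{
    pos.Y = -pos.Y;

    GL.Color3(Color.White);   // reset the current color so it no longer tints the texture
    GL.Enable(EnableCap.Texture2D);
    GL.BindTexture(TextureTarget.Texture2D, ID);

    GL.Begin(PrimitiveType.Quads);
    // ... the same TexCoord2/Vertex2 calls as in the original method ...
    GL.End();
}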
I'm working on my volume rendering application (C# + OpenTK).
The volume is being rendered using raycasting. I found a lot of inspiration on this site:
http://graphicsrunner.blogspot.sk/2009/01/volume-rendering-101.html, and even though my application works with OpenGL, the main idea of using a 3D texture and the other concepts is the same.
The application works fine, but when I "flow into the volume" (meaning the camera is inside the bounding box), everything disappears, and I want to prevent this. Is there some easy way to do that, so that I can fly through or move around inside the volume?
Here is the code of fragment shader:
#version 330
in vec3 EntryPoint;
in vec4 ExitPointCoord;

uniform sampler2D ExitPoints;
uniform sampler3D VolumeTex;
uniform sampler1D TransferFunc;
uniform float StepSize;
uniform float AlphaReduce;
uniform vec2 ScreenSize;

layout (location = 0) out vec4 FragColor;

void main()
{
    //gl_FragCoord --> http://www.txutxi.com/?p=182
    vec3 exitPoint = texture(ExitPoints, gl_FragCoord.st / ScreenSize).xyz;

    //background need no raycasting
    if (EntryPoint == exitPoint)
        discard;

    vec3 rayDirection = normalize(exitPoint - EntryPoint);
    vec4 currentPosition = vec4(EntryPoint, 0.0f);
    vec4 colorSum = vec4(.0f, .0f, .0f, .0f);
    vec4 color = vec4(0.0f, 0.0f, 0.0f, 0.0f);
    vec4 value = vec4(0.0f);
    vec3 Step = rayDirection * StepSize;
    float stepLength = length(Step);
    float LengthSum = 0.0f;
    float Length = length(exitPoint - EntryPoint);

    for (int i = 0; i < 16000; i++)
    {
        currentPosition.w = 0.0f;
        value = texture(VolumeTex, currentPosition.xyz);
        color = texture(TransferFunc, value.a);

        //reduce the alpha to have a more transparent result
        color.a *= AlphaReduce;

        //Front to back blending
        color.rgb *= color.a;
        colorSum = (1.0f - colorSum.a) * color + colorSum;

        //accumulate length
        LengthSum += stepLength;

        //break from the loop when alpha gets high enough
        if (colorSum.a >= .95f)
            break;

        //advance the current position
        currentPosition.xyz += Step;

        //break if the ray is outside of the bounding box
        if (LengthSum >= Length)
            break;
    }

    FragColor = colorSum;
}
The code below is based on https://github.com/toolchainX/Volume_Rendering_Using_GLSL
Display() function:
public void Display()
{
    // The color of each vertex on the back face is also the location of that vertex.
    // Save the back face to the user-defined framebuffer, which is bound to a 2D
    // texture named `g_bfTexObj`, then draw the front face of the box in the
    // rendering (ray-marching) pass, loading the volume `g_volTexObj` as well as
    // `g_bfTexObj`. After vertex shader processing we get the color as well as the
    // location of the vertex (in object coordinates, before transformation), and the
    // vertices are assembled into primitives before entering the fragment shader
    // stage. In the fragment shader stage we have `g_bfTexObj` (corresponding to
    // 'ExitPoints' in GLSL) and `g_volTexObj` (corresponding to 'VolumeTex'), as
    // well as the location of the primitives.

    // draw the back face of the box
    GL.Enable(EnableCap.DepthTest);

    // render the front || back face of the volume into the framebuffer,
    // i.e. into the 2D texture with ID bfTexID (using backface.frag & .vert)
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, frameBufferID);
    GL.Viewport(0, 0, width, height);
    LinkShader(spMain.GetProgramHandle(), bfVertShader.GetShaderHandle(), bfFragShader.GetShaderHandle());
    spMain.UseProgram();
    //cull the front face
    Render(CullFaceMode.Front);
    spMain.UseProgram(0);

    // the regular framebuffer --> the "screen"
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
    GL.Viewport(0, 0, width, height);
    LinkShader(spMain.GetProgramHandle(), rcVertShader.GetShaderHandle(), rcFragShader.GetShaderHandle());
    spMain.UseProgram();
    SetUniforms();
    Render(CullFaceMode.Back);
    spMain.UseProgram(0);

    GL.Disable(EnableCap.DepthTest);
}
private void DrawBox(CullFaceMode mode)
{
    // Face culling allows non-visible triangles of closed surfaces to be culled before expensive rasterization and fragment shader operations.
    GL.Enable(EnableCap.CullFace);
    GL.CullFace(mode);

    GL.BindVertexArray(VAO);
    GL.DrawElements(PrimitiveType.Triangles, 36, DrawElementsType.UnsignedInt, 0);
    GL.BindVertexArray(0);

    GL.Disable(EnableCap.CullFace);
    spMain.UseProgram(0); // the program was enabled in Render(), which called DrawBox
}
private void Render(CullFaceMode mode)
{
    GL.ClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    spMain.UseProgram();
    spMain.SetUniform("modelViewMatrix", Current);
    spMain.SetUniform("projectionMatrix", projectionMatrix);

    DrawBox(mode);
}
The problem is (I think) that as I'm moving towards the volume (I don't move the camera, I just scale the volume), once the scale factor is > 2.7-something I'm inside the volume, meaning "behind the plane onto which the final picture is rendered", so I can't see anything.
The solution (maybe) that I can think of is something like this:
If I reach the scale factor of 2.7-something:
1.) -> don't scale the volume
2.) -> somehow tell the fragment shader to move EntryPoint along the ray direction
by some length (probably based on the scale factor).
Now, I tried this "method" and it seems that it can work:
vec3 entryPoint = EntryPoint + some_value * rayDirection;
The some_value has to be clamped to the [0,1[ interval (or [0,1]?), but maybe it doesn't matter thanks to this check:
if (EntryPoint == exitPoint)
discard;
So now, maybe (if my solution isn't that bad), I can change my question to this:
How do I compute the some_value (based on the scale factor, which I send to the fragment shader)?
if (scale_factor < 2.7something)
    work like before;
else
{
    compute some_value; //(I need help with this part)
    change entry point;
    work like before;
}
(I'm not a native English speaker, so if there are some big mistakes in the text and you don't understand something, just let me know and I'll try to fix them.)
Thanks.
I solved my problem. It doesn't create the "being surrounded by the volume" illusion, but now I can fly through the volume and nothing disappears.
This is the code of my solution added to fragment shader:
vec3 entryPoint = vec3(0.0f);
if (scaleCoeff >= 2.7f)
{
    float tmp = min((scaleCoeff - 2.7f) * 0.1f, 1.0f);
    entryPoint = EntryPoint + tmp * (exitPoint - EntryPoint);
}
else
{
    entryPoint = EntryPoint;
}
But if you know of, or can think of, a better solution that creates the "being surrounded by the volume" effect, I'll be glad if you let me know.
Thank you.
If I understand correctly, I think you should use plane clipping to go through the volume. (I could give you a simple example based on your code if you attach your solution; translating the whole C++ project to C# is too time-consuming.)
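For reference, with a core-profile (GLSL 330) pipeline like the one above, a user clip plane is enabled on the C# side and evaluated in the vertex shader through gl_ClipDistance. The sketch below only illustrates that mechanism; the uniform name clipPlane, the variable nearPlaneOffset, and the choice of plane (for example, one placed just in front of the camera so the volume is cut open as you enter it) are my assumptions, not part of this answer:

// Hedged sketch: enable one user-defined clip plane for the ray-casting pass.
// The vertex shader is assumed to declare `uniform vec4 clipPlane;` and write
//     gl_ClipDistance[0] = dot(clipPlane, vec4(position, 1.0));
GL.Enable(EnableCap.ClipDistance0);

// Upload the plane alongside the other uniforms (i.e. where SetUniforms() runs,
// while the ray-casting program is in use). nearPlaneOffset is a placeholder value.
int clipPlaneLocation = GL.GetUniformLocation(spMain.GetProgramHandle(), "clipPlane");
GL.Uniform4(clipPlaneLocation, 0.0f, 0.0f, -1.0f, nearPlaneOffset); // plane normal (xyz) and offset (w)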
I've got sprite drawing working with OpenTK in my 2D game engine now. The only problem I'm having is that objects drawn with custom OpenGL calls (anything but sprites, really) show up as the background color. Example:
I'm drawing a black line with a width of 2.4f here. There's also a quad and a point in the example, but they do not overlap anything that's actually visible. The line overlaps the magenta sprite, but its color is just wrong. My question is: am I missing an OpenGL feature, or am I doing something horribly wrong?
These are the parts of my project concerned with drawing (you can also find the project at https://github.com/Villermen/HatlessEngine if there are questions about the code):
Initialization:
Window = new GameWindow(windowSize.Width, windowSize.Height);
//OpenGL initialization
GL.Enable(EnableCap.PointSmooth);
GL.Hint(HintTarget.PointSmoothHint, HintMode.Nicest);
GL.Enable(EnableCap.LineSmooth);
GL.Hint(HintTarget.LineSmoothHint, HintMode.Nicest);
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.ClearColor(Color.Gray);
GL.Enable(EnableCap.Texture2D);
GL.Enable(EnableCap.DepthTest);
GL.DepthFunc(DepthFunction.Lequal);
GL.ClearDepth(1d);
GL.DepthRange(1d, 0d); //does not seem right, but it works (see it as duct-tape)
Every draw cycle:
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

//reset depth and color to be consistent over multiple frames
DrawX.Depth = 0;
DrawX.DefaultColor = Color.Black;

foreach (View view in Resources.Views)
{
    CurrentDrawArea = view.Area;
    GL.Viewport((int)view.Viewport.Left * Window.Width, (int)view.Viewport.Top * Window.Height, (int)view.Viewport.Right * Window.Width, (int)view.Viewport.Bottom * Window.Height);

    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadIdentity();
    GL.Ortho(view.Area.Left, view.Area.Right, view.Area.Bottom, view.Area.Top, -1f, 1f);
    GL.MatrixMode(MatrixMode.Modelview);

    //drawing
    foreach (LogicalObject obj in Resources.Objects)
    {
        //set view's coords for clipping?
        obj.Draw();
    }
}

GL.Flush();
Window.Context.SwapBuffers();
DrawX.Line:
public static void Line(PointF pos1, PointF pos2, Color color, float width = 1)
{
    RectangleF lineRectangle = new RectangleF(pos1.X, pos1.Y, pos2.X - pos1.X, pos2.Y - pos1.Y);

    if (lineRectangle.IntersectsWith(Game.CurrentDrawArea))
    {
        GL.LineWidth(width);
        GL.Color3(color);

        GL.Begin(PrimitiveType.Lines);
        GL.Vertex3(pos1.X, pos1.Y, GLDepth);
        GL.Vertex3(pos2.X, pos2.Y, GLDepth);
        GL.End();
    }
}
Edit: If I disable EnableCap.Blend before drawing the line and re-enable it afterwards, the line does show up with the right color, but I need blending to stay on.
I forgot to unbind the texture in the texture-drawing method...
GL.BindTexture(TextureTarget.Texture2D, 0);
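In other words, the sprite texture was still bound while the line was drawn, so the line was being modulated by that texture instead of using plain colors. A hedged sketch of where the unbind belongs; the method name and body are assumed, since the engine's sprite-drawing code is not shown in the question:

public static void Sprite(int textureId, PointF pos)
{
    GL.BindTexture(TextureTarget.Texture2D, textureId);

    GL.Begin(PrimitiveType.Quads);
    // ... TexCoord2/Vertex3 calls for the sprite ...
    GL.End();

    // Unbind so later untextured drawing (lines, quads, points) is not affected.
    GL.BindTexture(TextureTarget.Texture2D, 0);
}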