I'm trying to take a portion of the current texture and turn it to 50% transparent. I send in four values, signifying the rectangle I want to make transparent. It seems every time, however, that coord.x/coord.y are set to (0, 0), resulting in the entire image being transparent when I send in any rectangle that starts at (0, 0).
I'm still new to GLSL and am probably approaching this wrong. Any pointers on the correct approach would be greatly appreciated!
Values being sent in
sprite.Shader.SetParameter("texture", sprite.Texture);
sprite.Shader.SetParameter("x1", 0);
sprite.Shader.SetParameter("x2", 5);
sprite.Shader.SetParameter("y1", 0);
sprite.Shader.SetParameter("y2", sprite.Height - 1); // sprite.Height = 32
transparency.frag
uniform sampler2D texture;
uniform float x1;
uniform float x2;
uniform float y1;
uniform float y2;
void main() {
    vec2 coord = gl_TexCoord[0].xy;
    vec4 pixel_color = texture2D(texture, coord);
    if ((coord.x > x1) && (coord.x < x2) && (coord.y > y1) && (coord.y < y2))
    {
        pixel_color.a -= 0.5;
    }
    gl_FragColor = pixel_color;
}
Texture coordinates are not in pixels, but are instead given between 0.0 and 1.0.
gl_TexCoord[0].xy;
This call gives you back a vec2 whose components range from 0.0 to 1.0, so you are checking pixel coordinates against normalized texture coordinates. To solve this you can either scale your texture coordinates up to pixels, or normalize your pixel coordinates.
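A quick sketch of the second option (Python rather than GLSL, using the values from the question): divide the pixel rectangle by the texture size before handing it to the shader.

```python
def to_normalized(x1, y1, x2, y2, tex_w, tex_h):
    """Convert a pixel-space rectangle to [0, 1] texture coordinates."""
    return (x1 / tex_w, y1 / tex_h, x2 / tex_w, y2 / tex_h)

# The 5px-wide strip on a 32x32 sprite, as in the question.
rect = to_normalized(0, 0, 5, 31, 32, 32)
print(rect)  # (0.0, 0.0, 0.15625, 0.96875)
```

Passing these normalized values as x1..y2 makes the comparison against gl_TexCoord[0].xy consistent.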
Related
I've been banging my head against this problem for quite a while now and finally realized that I need serious help...
So basically I wanted to implement proper shadows into my project, which I'm writing in MonoGame. For this I wrote a deferred shader in HLSL using multiple tutorials, mainly ones for old XNA.
The Problem is, that although my lighting and shadow work for a spotlight, the light on the floor of my scene is very dependent on my camera, as you can see in the images: https://imgur.com/a/TU7y0bs
I tried many different things to solve this problem:
A bigger DepthBias widens the radius that is "shadow free" at the cost of massive peter-panning, and the described issue is not fixed at all.
One paper suggested using an exponential shadow map, but I didn't like the results at all: the light bleeding was unbearable, and smaller shadows (like the one behind the torch at the wall) would not get rendered.
I switched my GBuffer depth map to 1 - z/w to get more precision, but that did not fix the problem either.
I am using a
new RenderTarget2D(device, Width, Height, false,
    SurfaceFormat.Vector2, DepthFormat.Depth24Stencil8)
to store the depth from the light's perspective.
I Calculate the Shadow using this PixelShader Function:
Note that I want to adapt this shader to a point light in the future; that's why I'm simply using length(LightPos - PixelPos).
SpotLight.fx - PixelShader
float4 PS(VSO input) : SV_TARGET0
{
    // Fancy lighting equations
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);

    // Sample depth from the DepthMap
    float Depth = DepthMap.Sample(SampleTypeClamp, UV).x;

    // Getting the pixel position in world space
    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;

    // Transform Position to world space
    Position = mul(Position, InverseViewProjection);
    Position /= Position.w;

    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w;

    // Calculate projected UV from the light's POV -> ScreenPos is in [-1;1] space
    float2 LightUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
    float lightDepth = ShadowMap.Sample(SampleDot, LightUV).r;

    // Linear depth model
    float closestDepth = lightDepth * LightFarplane; // depth is stored in [0, 1]; bring it to [0, farplane]
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;
    float ShadowFactor = step(currentDepth, closestDepth); // 1 when currentDepth <= closestDepth -> lit; 0 -> shadowed

    float4 phong = Phong(...);
    return ShadowFactor * phong;
}
LightViewProjection is simply light.View * light.Projection
InverseViewProjection is Matrix.Invert(camera.View * camera.Projection)
Phong() is a function I call to finalize the lighting
The light's depth map simply stores length(lightPos - Position) / LightFarplane
I'd like to have that artifact shown in the pictures gone to be able to adapt the code to point lights, as well.
Could this be a problem with the way I retrieve the world position from screen space, or could my depth map have too low a resolution?
Help is much appreciated!
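The linear depth model described above (store length(lightPos - pos) / farPlane, rescale on read, compare with a bias) can be sanity-checked outside HLSL. This Python sketch uses hypothetical values, not data from the scene:

```python
def shadow_factor(stored_depth, light_pos, pixel_pos, far_plane, bias):
    """stored_depth is length(lightPos - occluderPos) / farPlane, in [0, 1]."""
    closest = stored_depth * far_plane  # back to world units
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    current = dist(light_pos, pixel_pos) - bias
    # step(current, closest): 1.0 (lit) when current <= closest, else 0.0 (shadowed)
    return 1.0 if current <= closest else 0.0

light = (0.0, 5.0, 0.0)
print(shadow_factor(5.0 / 50.0, light, (0.0, 0.0, 0.0), 50.0, 0.01))  # nothing closer: 1.0
print(shadow_factor(3.0 / 50.0, light, (0.0, 0.0, 0.0), 50.0, 0.01))  # occluder closer: 0.0
```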
--- Update ---
I changed my lighting shader to display the difference between the distance stored in the shadow map and the distance calculated on the spot in the pixel shader:
float4 PixelShaderFct(...) : SV_TARGET0
{
    // Get depth from the texture
    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;
    Position = mul(Position, InverseViewProjection);
    Position /= Position.w;

    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w;

    // Calculate projected UV from the light's POV -> ScreenPos is in [-1;1] space
    float2 LUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
    float lightZ = ShadowMap.Sample(SampleDot, LUV).r;
    float Attenuation = AttenuationMap.Sample(SampleType, LUV).r;
    float ShadowFactor = 1;

    // Linear depth model; lightZ stores length(LightPos - Pos) / LightFarPlane
    float closestDepth = lightZ * LightFarPlane;
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;
    return (closestDepth - currentDepth);
}
As I am basically outputting Length - (Length - Bias), one would expect an image with "DepthBias" as its color. But that is not the result I'm getting here:
https://imgur.com/a/4PXLH7s
Based on this result, I'm assuming that either I've got precision issues (which I find weird, given that I'm working with near and far planes of [0.1, 50]), or something is wrong with the way I'm recovering the world position of a given pixel from my depth map.
I finally found the solution, and I'm posting it here in case someone stumbles across the same issue:
The tutorial I used was for XNA / DX9, but as I'm targeting DX10+, a tiny change needs to be made:
In XNA / DX9 the UV coordinates are not aligned with the actual pixels and need to be offset. That is what the - float2(1.0f / GBufferTextureSize.xy) term in float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy); was for. This is NOT needed in DX10 and above, and keeping it will result in the issue I had.
Solution:
UV Coordinates for a Fullscreen Quad:
For XNA / DX9:
input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);
For MonoGame / DX10+:
input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1)
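To see what that offset does numerically, here is a small Python sketch (hypothetical 800×600 G-buffer): the DX9 variant shifts every UV by a texel-sized amount, which on DX10+ samples the wrong position.

```python
# ndc is the clip-space xy after the divide by w, in [-1, 1].
def uv_dx10(ndc_x, ndc_y):
    return (0.5 * (ndc_x + 1.0), 0.5 * (-ndc_y + 1.0))

def uv_dx9(ndc_x, ndc_y, width, height):
    # Same mapping, minus the XNA/DX9 offset term from the code above.
    u, v = uv_dx10(ndc_x, ndc_y)
    return (u - 1.0 / width, v - 1.0 / height)

print(uv_dx10(0.0, 0.0))           # (0.5, 0.5): screen center
print(uv_dx9(0.0, 0.0, 800, 600))  # ~ (0.49875, 0.49833): shifted, wrong on DX10+
```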
How do you map the points on a normalized sphere (radius of 0.5, center of 0,0,0) to a fish eye texture image? I am using the C# language and OpenGL. The results should be UV coordinates into the image. The sphere is simply a list of 3D coordinates for each vertex of the sphere, so each of these would get a UV coordinate into the image.
The end results would be a full sphere with the fish eye image wrapping all the way around 360 degrees when textured on to the sphere.
Example fish eye image:
There isn't a single way to map to a sphere. The texture in your post looks like the seam would be up in a nominated world plane. Compare that to a typical skydome-type texture, where the sphere usually joins at the bottom (where the camera can't see the join).
You might use shader code like this to map a point based on UV:
float3 PointOnSphere(float phi, float theta, float radius)
{
    float3 pos = float3(0, 0, 0);
    pos.x = radius * cos(phi) * sin(theta);
    pos.y = radius * sin(phi) * sin(theta);
    pos.z = radius * cos(theta);
    return pos;
}
Or reverse that to get UV from a point on the surface:
float2 AngleFromPoint(float3 pos)
{
    // atan2 handles the quadrants; plain atan(y / x) breaks for x <= 0
    float phi = atan2(pos.y, pos.x);
    float theta = atan2(length(pos.xy), pos.z);
    return float2(phi, theta);
}
Which way is up and where the seam joins is something you'll have to work out yourself. If the texture looks compressed because the mapping is nonlinear, you may need to try something like:
texcoord = pow(texcoord, 0.5f);
Edit: and obviously, normalise the angles to 0 - 1 for texture coordinates
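To make the round trip concrete, here is a sketch in Python following the same convention as PointOnSphere above; atan2 is used instead of atan to handle quadrants, and the final normalization to [0, 1] is one possible choice among several:

```python
import math

def point_on_sphere(phi, theta, radius):
    # Same convention as the HLSL PointOnSphere above.
    x = radius * math.cos(phi) * math.sin(theta)
    y = radius * math.sin(phi) * math.sin(theta)
    z = radius * math.cos(theta)
    return (x, y, z)

def angles_from_point(x, y, z):
    # atan2 recovers phi in [-pi, pi] and theta in [0, pi] without quadrant issues.
    phi = math.atan2(y, x)
    theta = math.atan2(math.hypot(x, y), z)
    return (phi, theta)

def uv_from_angles(phi, theta):
    # One possible normalization of the angles to [0, 1] texture coordinates.
    return ((phi + math.pi) / (2 * math.pi), theta / math.pi)

p = point_on_sphere(0.3, 1.1, 0.5)
phi, theta = angles_from_point(*p)
print(round(phi, 6), round(theta, 6))  # 0.3 1.1 (round trip)
```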
So I know that the parametric equation for ONE circle is:
x = cx + r * cos(a)
y = cy + r * sin(a)
From this it's easy to get a point on its circumference...
But what if I want to get the array of points on many intersecting circles' circumferences?
Like this:
So how can I draw similar circle unions with GL lines containing points (vertices; sequence matters) in a coordinate system, if I know each circle's center and radius?
(Best would be if you can iterate through it using a collective parametric equation's parameter, to get each vertex with the desired density.)
Warning! The result is just an array of points (any density) linked with lines as they follow each other (the bold black part), NOT polygons. The shape is not filled.
(I want to draw it in Unity3D using C# and GL.Lines)
Since you know Circle c1:
x1 = cx1 + r1 * cos(a)
y1 = cy1 + r1 * sin(a)
and you want additional condition point P[x1,y1] ∉ any other C.
Simply generate all circles (or check the condition while generating) and remove all points that are closer to any Center[cx, cy] than the corresponding circle radius R.
To calculate the distance (or better, the squared distance, compared against a precalculated squared R to improve performance), simply measure the length of the vector P - Center (Pythagoras):
foreach (Point p in points) {
    foreach (Circle c in otherCircles) {
        float sqDist = (p - c.Center).LengthSquared();
        if (sqDist < c.R * c.R) {
            // p lies inside circle c -> remove p
        }
    }
}
This solution is indeed not optimal (as mentioned in the comments).
Another approach would be to calculate the intersections of every circle with every other (https://math.stackexchange.com/questions/256100/how-can-i-find-the-points-at-which-two-circles-intersect) and remove the RANGE between those points (the right one, of course; it's starting to get complicated). In addition, if you need to maintain the right sequence, it should be possible to keep generating one circle until you reach an intersection, then switch to the new circle, and so on. Careful though: you would need to start on the outside of the shape!
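The brute-force filter above can be sketched like this (Python; the point density and circle layout are arbitrary choices). It yields the outline points per circle, still unstitched into a single sequence:

```python
import math

def union_outline_points(circles, samples=360):
    """circles: list of (cx, cy, r). Returns, per circle, the boundary points
    that are not strictly inside any other circle (squared-distance test)."""
    outline = []
    for i, (cx, cy, r) in enumerate(circles):
        arc = []
        for k in range(samples):
            a = 2 * math.pi * k / samples
            x, y = cx + r * math.cos(a), cy + r * math.sin(a)
            inside_other = any(
                (x - ox) ** 2 + (y - oy) ** 2 < orad ** 2
                for j, (ox, oy, orad) in enumerate(circles) if j != i
            )
            if not inside_other:
                arc.append((x, y))
        outline.append(arc)
    return outline

# Two overlapping unit circles: each loses the arc hidden inside the other.
arcs = union_outline_points([(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)])
print(len(arcs[0]), len(arcs[1]))  # each fewer than the 360 samples
```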
Depending on which OpenGL version you want to use, a simple way would be to rasterize the triangles of each circle into a stencil, then trace the same circles with lines, excluding the region marked in the stencil.
For a shader solution, you can check here:
#ifdef GL_ES
precision mediump float;
#endif
uniform vec3 iResolution; // viewport resolution (in pixels)
uniform float iGlobalTime; // shader playback time (in seconds)
uniform float iChannelTime[4]; // channel playback time (in seconds)
uniform vec3 iChannelResolution[4]; // channel resolution (in pixels)
uniform vec4 iMouse; // mouse pixel coords. xy: current (if MLB down), zw: click
uniform samplerXX iChannel0..3; // input channel. XX = 2D/Cube
uniform vec4 iDate; // (year, month, day, time in seconds)
uniform float iSampleRate; // sound sample rate (i.e., 44100)
bool PixelInsideCircle( vec3 circle )
{
    return length(vec2(gl_FragCoord.xy - circle.xy)) < circle.z;
}

bool PixelOnCircleContour( vec3 circle )
{
    return PixelInsideCircle(circle) && !PixelInsideCircle( vec3(circle.xy, circle.z - 1.0) );
}

void main( void )
{
    float timeFactor = (2.0 + sin(iGlobalTime)) / 2.0;
    const int NB_CIRCLES = 3;
    vec3 c[NB_CIRCLES];
    c[0] = vec3( 0.6, 0.4, 0.07 ) * iResolution;
    c[1] = vec3( 0.45, 0.69, 0.09 ) * iResolution;
    c[2] = vec3( 0.35, 0.58, 0.06 ) * iResolution;
    c[0].z = 0.09 * iResolution.x * timeFactor;
    c[1].z = 0.1 * iResolution.x * timeFactor;
    c[2].z = 0.07 * iResolution.x * timeFactor;
    c[0].xy = iMouse.xy;
    bool keep = false;
    for ( int i = 0; i < NB_CIRCLES; ++i )
    {
        if ( !PixelOnCircleContour(c[i]) )
            continue;
        bool insideOther = false;
        for ( int j = 0; j < NB_CIRCLES; ++j )
        {
            if ( i == j )
                continue;
            if ( PixelInsideCircle(c[j]) )
                insideOther = true;
        }
        keep = keep || !insideOther;
    }
    if ( keep )
        gl_FragColor = vec4(1.0, 1.0, 0.0, 1.0);
}
and tweak it a little
Your question is not really complete, as you don't explain how you want the points to be spread over the outline. I infer that you would like a dense sequence of points ordered along a single curve.
There is no easy solution to this problem, and the resulting shape can be very complex (it can even have holes). You will not be spared the computation of intersections between circular arcs and other geometric issues.
One way to address it is to polygonize the circles with sufficient point density and use a polygon union algorithm. The excellent Clipper library comes to mind (http://www.angusj.com/delphi/clipper.php).
Another, more quick-and-dirty solution is to work in raster space: create a large white image and paint all your circles in black. Then use a contour-following algorithm such as Moore neighborhood tracing (http://www.imageprocessingplace.com/downloads_V3/root_downloads/tutorials/contour_tracing_Abeer_George_Ghuneim/index.html).
Our game uses a 'block atlas', a grid of square textures which correspond to specific faces of blocks in the game. We're aiming to streamline vertex data memory by storing texture data in the vertex as shorts instead of float2s. Here's our Vertex Definition (XNA, C#):
public struct BlockVertex : IVertexType
{
    public Vector3 Position;
    public Vector3 Normal;

    /// <summary>
    /// The texture coordinate of this block in reference to the top left
    /// corner of an ID's square on the atlas
    /// </summary>
    public short TextureID;
    public short DecalID;

    // Describe the layout of this vertex structure.
    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
    (
        new VertexElement(0, VertexElementFormat.Vector3,
            VertexElementUsage.Position, 0),
        new VertexElement(sizeof(float) * 3, VertexElementFormat.Vector3,
            VertexElementUsage.Normal, 0),
        new VertexElement(sizeof(float) * 6, VertexElementFormat.Short2,
            VertexElementUsage.TextureCoordinate, 0)
    );

    public BlockVertex(Vector3 Position, Vector3 Normal, short TexID)
    {
        this.Position = Position;
        this.Normal = Normal;
        this.TextureID = TexID;
        this.DecalID = TexID;
    }

    // Describe the size of this vertex structure.
    public const int SizeInBytes = (sizeof(float) * 6) + (sizeof(short) * 2);

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return VertexDeclaration; }
    }
}
I have little experience in HLSL, but when I tried to adapt our current shader to use this vertex (as opposed to the old one which stored the Texture and Decal as Vector2 coordinates), I got nothing but transparent blue models, which I believe means that the texture coordinates for the faces are all the same?
Here's the HLSL where I try to interpret the vertex data:
int AtlasWidth = 25;
int SquareSize = 32;
float TexturePercent = 0.04; // fraction of the whole texture taken up by one atlas square (1 / AtlasWidth)
[snipped]
struct VSInputTx
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    short2 BlockAndDecalID : TEXCOORD0;
};

struct VSOutputTx
{
    float4 Position : POSITION0;
    float3 Normal : TEXCOORD0;
    float3 CameraView : TEXCOORD1;
    short2 TexCoords : TEXCOORD2;
    short2 DecalCoords : TEXCOORD3;
    float FogAmt : TEXCOORD4;
};

[snipped]

VSOutputTx VSBasicTx( VSInputTx input )
{
    VSOutputTx output;
    float4 worldPosition = mul( input.Position, World );
    float4 viewPosition = mul( worldPosition, View );
    output.Position = mul( viewPosition, Projection );
    output.Normal = mul( input.Normal, World );
    output.CameraView = normalize( CameraPosition - worldPosition );

    // This sets the fog amount (since it needs position data)
    output.FogAmt = 1 - saturate((distance(worldPosition, CameraPosition) - FogBegin) / (FogEnd - FogBegin));

    // Convert texture coordinates to short2 from blockID
    // When they transfer to the pixel shader, they will be interpolated per pixel.
    output.TexCoords = short2((short)(((input.BlockAndDecalID.x) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.x) / (AtlasWidth)) * TexturePercent));
    output.DecalCoords = short2((short)(((input.BlockAndDecalID.y) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.y) / (AtlasWidth)) * TexturePercent));
    return output;
}
I changed nothing else from the original shader which displayed everything fine, but you can see the entire .fx file here.
If I could just debug the thing I might be able to get it, but... well, anyway, I think it has to do with my limited knowledge of how shaders work. I imagine my attempt to use integer arithmetic is less than effective. Also, that's a lot of casts; I could believe it if values got forced to 0 somewhere in there. In case I am way off the mark, here is what I aim to achieve:
Shader gets a vertex which stores two shorts as well as other data. The shorts represent an ID of a certain corner of a grid of square textures. One is for the block face, the other is for a decal which is drawn over that face.
The shader uses these IDs, as well as some constants which define the size of the texture grid (henceforth referred to as "Atlas"), to determine the actual texture coordinate of this particular corner.
The X of the texture coordinate is (ID % AtlasWidth) * TexturePercent, where TexturePercent is a constant representing the fraction of the entire texture covered by one atlas square.
The Y of the texture coordinate is (ID / AtlasWidth) * TexturePercent.
How can I go about this?
Update: I got an error at some point, "vs_1_1 does not support 8- or 16-bit integers". Could this be part of the issue?
output.TexCoords = short2((short)(((input.BlockAndDecalID.x) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.x) / (AtlasWidth)) * TexturePercent));
output.DecalCoords = short2((short)(((input.BlockAndDecalID.y) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.y) / (AtlasWidth)) * TexturePercent));
These two lines contain these following divisions:
(input.BlockAndDecalID.x) / (AtlasWidth)
(input.BlockAndDecalID.y) / (AtlasWidth)
These are both integer divisions. Integer division truncates everything past the decimal point. I'm assuming AtlasWidth is always greater than each of the coordinates; therefore both of these divisions will always result in 0. If this assumption is incorrect, I'm assuming you still want that decimal data in some way?
You probably want a float division, something that returns a decimal result, so you need to cast at least one (or both) of the operands to a float first, e.g.:
((float)input.BlockAndDecalID.x) / ((float)AtlasWidth)
EDIT: I would use NormalizedShort2 instead; it scales your value by the max short value (32767), allowing a "decimal-like" use of short values (in other words, "0.5" = 16383). Of course if your intent was just to halve your tex coord data, you can achieve this while still using floating points by using HalfVector2. If you still insist on Short2 you will probably have to scale it like NormalizedShort2 already does before submitting it as a coordinate.
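Whatever the division semantics, the intended mapping from an ID to its square's top-left corner can be checked outside HLSL. This Python sketch uses the question's constants (AtlasWidth = 25, TexturePercent = 0.04); the integer division for the row is deliberate here:

```python
ATLAS_WIDTH = 25
TEXTURE_PERCENT = 0.04  # one atlas square as a fraction of the whole texture

def atlas_uv(block_id):
    # column = id % width, row = id // width, both scaled to [0, 1].
    col = block_id % ATLAS_WIDTH
    row = block_id // ATLAS_WIDTH
    return (col * TEXTURE_PERCENT, row * TEXTURE_PERCENT)

print(atlas_uv(0))   # (0.0, 0.0): top-left square
print(atlas_uv(27))  # (0.08, 0.04): column 2, row 1
```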
Alright, after I tried to create a short in one of the shader functions and got a compiler error stating that vs_1_1 doesn't support 8/16-bit integers (why didn't you tell me that before??), I changed the function to look like this:
float texX = ((float)((int)input.BlockAndDecalID.x % AtlasWidth) * (float)TexturePercent);
float texY = ((float)((int)input.BlockAndDecalID.x / AtlasWidth) * (float)TexturePercent);
output.TexCoords = float2(texX, texY);
float decX = ((float)((int)input.BlockAndDecalID.y % AtlasWidth) * (float)TexturePercent);
float decY = ((float)((int)input.BlockAndDecalID.y / AtlasWidth) * (float)TexturePercent);
output.DecalCoords = float2(decX, decY);
Which seems to work just fine. I'm not sure if it's the change to ints or the (float) cast at the front. I think it might be the float, which means Scott was definitely on to something. My partner somehow wrote this shader to use a short2 as the texture coordinate; how he did that I do not know.
This is my Transform. I got it from an example of a simple 2D camera.
public Matrix Transform(GraphicsDevice graphicsDevice)
{
float ViewportWidth = graphicsDevice.Viewport.Width;
float ViewportHeight = graphicsDevice.Viewport.Height;
matrixTransform =
Matrix.CreateTranslation(new Vector3(-cameraPosition.X, -cameraPosition.Y, 0)) *
Matrix.CreateRotationZ(Rotation) *
Matrix.CreateScale(new Vector3(Zoom, Zoom, 0)) *
Matrix.CreateTranslation(
new Vector3(ViewportWidth * 0.5f, ViewportHeight * 0.5f, 0));
return matrixTransform;
}
If I understand it correctly, it allows for a roll(rotation), sprite scale change on zoom, and translation between world and camera for simple up, down, left, right controls. However, it does not alter the Z depth.
But what I need is for the game world to zoom, not just the sprites drawn. And I assume in order to do this I need to change the Z distance between the camera and the world matrix.
I am VERY NEW to programming and have only a simple understanding of matrices in general. I have even less understanding of how XNA uses them in the Draw method, and so far I feel like pulling my hair out from a fruitless search for answers... I just need the world coordinates to scale on zoom, so that what was at X.60, Y.60 pre-zoom will be at X.600, Y.600 post-zoom (i.e., zoom level 0.1). But my mouse has not moved; only the world got bigger (or smaller) in view.
I know this question is old, but this is in case anyone comes across this problem and can't find a solution. #RogueDeus was trying to convert scaled input coordinates when he was zooming in or out with his camera. In order to scale the mouse, all you need is to get the inverse matrix of the scale.
So if his scale matrix was created as this:
Matrix.CreateScale(zoom, zoom, 0);
The mouse coordinates should be inverse scaled and shifted by the necessary translation:
float ViewportWidth = graphicsDevice.Viewport.Width;
float ViewportHeight = graphicsDevice.Viewport.Height;

// Use a Z scale of 1, not 0: a zero Z scale makes the matrix singular,
// so Matrix.Invert cannot return a usable result.
Matrix scale = Matrix.CreateScale(zoom, zoom, 1);
Matrix inputScalar = Matrix.Invert(scale);
...
public MouseState transformMouse(MouseState mouse)
{
    /// Shifts the position to 0-relative
    Vector2 newPosition = new Vector2(mouse.X - ViewportWidth,
                                      mouse.Y - ViewportHeight);

    /// Scales the input to a proper size
    newPosition = Vector2.Transform(newPosition, inputScalar);

    return new MouseState((int)newPosition.X, (int)newPosition.Y,
        mouse.ScrollWheelValue, mouse.LeftButton,
        mouse.MiddleButton, mouse.RightButton,
        mouse.XButton1, mouse.XButton2);
}
You are using 2D coordinates, therefore the Z coordinate is of absolutely no importance. In fact, the scale matrix you are using ( Matrix.CreateScale(new Vector3(Zoom, Zoom, 0)) ) multiplies the Z coordinate by 0, effectively setting it to 0.
As this scale matrix is part of the view matrix, it will scale the entire world. I am not sure I really understand your problem. Could you try to explain it a little more, please?
I seem to have figured out how to get the coordinates to scale...
I was assuming that the current mouse state would reflect the world matrix it's clicked on, but apparently it never actually does this. It is always linked to the view matrix (the screen itself), and that value needs to scale along with the world matrix (in the transform).
So, just as the transform is affected by Zoom in Matrix.CreateScale(new Vector3(Zoom, Zoom, 0)), so too do the mouse state X and Y coordinates need to be scaled by it to virtually mirror the world matrix.
//Offsets any cam location by a zoom scaled window bounds
Vector2 CamCenterOffset
{
    get
    {
        return new Vector2((game.Window.ClientBounds.Width / Zoom) * 0.5f,
                           (game.Window.ClientBounds.Height / Zoom) * 0.5f);
    }
}
//Scales the mouse.X and mouse.Y by the same Zoom as everything.
Vector2 MouseCursorInWorld
{
    get
    {
        currMouseState = Mouse.GetState();
        return cameraPosition + new Vector2(currMouseState.X / Zoom,
                                            currMouseState.Y / Zoom);
    }
}