Some simple XNA/HLSL questions - C#

I've been getting into HLSL programming lately and I'm very curious as to HOW some of the things I'm doing actually work.
For example, I've got this very simple shader here that shades any teal colored pixels to a red-ish color.
sampler2D mySampler;

float4 MyPixelShader(float2 texCoords : TEXCOORD0) : COLOR
{
    float4 Color;
    Color = tex2D(mySampler, texCoords.xy);
    if (Color.r == 0 && Color.g == 1.0 && Color.b == 1.0)
    {
        Color.r = 1.0;
        Color.g = 0.5;
        Color.b = 0.5;
    }
    return Color;
}

technique Simple
{
    pass pass1
    {
        PixelShader = compile ps_2_0 MyPixelShader();
    }
}
I understand that the tex2D function grabs the pixel's color at the specified location, but what I don't understand is how mySampler even has any data. I'm not setting it or passing in a texture at all, yet it magically contains my texture's data.
Also, what is the difference between things like:
COLOR and COLOR0
or
TEXCOORD and TEXCOORD0
I can take a logical guess and say that COLOR0 is a register in assembly that holds the pixel color currently being processed on the GPU (that may be completely wrong, I'm just stating what I think it is).
And if so, does that mean specifying something like float2 texCoords : TEXCOORD0 will, by default, grab the current position the GPU is processing?

mySampler is assigned to a sampler register; the first one is s0.
SpriteBatch uses that same register to draw textures, so you have certainly initialized it before.
These registers correspond to the GraphicsDevice.Textures and GraphicsDevice.SamplerStates arrays.
In fact, in your shader you can make the binding explicit:
sampler TextureSampler : register(s0);
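For example, when you draw with SpriteBatch and a custom effect, the texture you pass to Draw is what ends up in that register (a minimal sketch; myEffect and myTexture are assumed names):
// SpriteBatch binds 'myTexture' to GraphicsDevice.Textures[0],
// which is the slot the shader's s0 sampler reads from.
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
    null, null, null, myEffect);
spriteBatch.Draw(myTexture, Vector2.Zero, Color.White);
spriteBatch.End();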
EDIT:
If you need to use a second texture in your shader, you can do this:
HLSL:
sampler MaskTexture : register(s1);
C#:
GraphicsDevice.Textures[1] = MyMaskTexture;
GraphicsDevice.SamplerStates[1].AddressU = TextureAddressMode....
COLOR0 is not a register, and it does not hold the current pixel color. It refers to the vertex structure you are using.
When you define a vertex like VertexPositionColor, the vertex contains a Position and a Color, but if you want to define a custom vertex with two colors, you need a way to discriminate between the two colors: the channels.
The number suffix is the channel you are referring to in the current vertex.
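In XNA terms, the usage index in the vertex declaration is what picks the channel. A custom vertex with two colors might be declared like this (a sketch, not code from the question):
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public struct VertexTwoColors : IVertexType
{
    public Vector3 Position;
    public Color Color0;
    public Color Color1;

    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
    (
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Color, VertexElementUsage.Color, 0), // HLSL sees COLOR0
        new VertexElement(16, VertexElementFormat.Color, VertexElementUsage.Color, 1)  // HLSL sees COLOR1
    );

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return VertexDeclaration; }
    }
}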

Related

Cubemap skybox from the scene of the shader

The problem is perhaps simple:
I can't figure out how to get the skybox from the scene and apply it to my shader.
I think I'm close, but how do I take the skybox from the scene?
mygameobjec.GetComponent<Renderer>().material.SetTexture("_SkyReflection",Skybox.material.Texture??);
Thanks
Try RenderSettings.skybox.mainTexture.
https://docs.unity3d.com/ScriptReference/RenderSettings-skybox.html
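Applied to the line from the question, that would be something like (a sketch reusing the question's own names):
// Pull the skybox material's main texture from the render settings
// and assign it to the shader property.
mygameobjec.GetComponent<Renderer>().material.SetTexture(
    "_SkyReflection", RenderSettings.skybox.mainTexture);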
A tip though: it is also possible to access the current reflection environment inside the shader through a shader global called unity_SpecCube0. Here is a function I often use in my shaders:
// Returns the reflection color given a normal and view direction.
inline half3 SurfaceReflection(half3 viewDir, half3 worldNormal, half roughness) {
    half3 worldRefl = reflect(-viewDir, worldNormal);
    half r = roughness * 1.7 - 0.7 * roughness;
    float4 reflData = UNITY_SAMPLE_TEXCUBE_LOD(
        unity_SpecCube0, worldRefl, r * 6
    );
    return DecodeHDR(reflData, unity_SpecCube0_HDR);
}

(Monogame/HLSL) Problems with ShadowMapping - Shadow dependent on Camera position

I've been banging my head against this problem for quite a while now and finally realized that I need serious help...
Basically, I wanted to implement proper shadows into my project, which I'm writing in MonoGame. For this I wrote a deferred shader in HLSL using multiple tutorials, mainly ones for old XNA.
The problem is that although my lighting and shadows work for a spotlight, the light on the floor of my scene is very dependent on my camera position, as you can see in the images: https://imgur.com/a/TU7y0bs
I tried many different things to solve this problem:
A bigger DepthBias widens the radius that is "shadow free", with massive peter panning, and does not fix the described issue at all.
One paper suggested using an exponential shadow map, but I didn't like the results at all: the light bleeding was unbearable, and smaller shadows (like the one behind the torch at the wall) would not get rendered.
I switched my GBuffer depth map to 1 - z/w to get more precision, but that did not fix the problem either.
I am using a
new RenderTarget2D(device,
    Width, Height, false, SurfaceFormat.Vector2, DepthFormat.Depth24Stencil8)
to store the depth from the light's perspective.
I calculate the shadow using this pixel shader function:
Note that I want to adapt this shader to a point light in the future - that's why I'm simply using length(LightPos - PixelPos).
SpotLight.fx - PixelShader
float4 PS(VSO input) : SV_TARGET0
{
    // Fancy lighting equations
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);

    // Sample depth from the DepthMap
    float Depth = DepthMap.Sample(SampleTypeClamp, UV).x;

    // Get the pixel position in world space
    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;

    // Transform Position to world space
    Position = mul(Position, InverseViewProjection);
    Position /= Position.w;

    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w;

    // Calculate projected UV from the light's POV -> ScreenPos is in [-1;1] space
    float2 LightUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
    float lightDepth = ShadowMap.Sample(SampleDot, LightUV).r;

    // Linear depth model
    float closestDepth = lightDepth * LightFarplane; // Depth is stored in [0, 1]; bring it to [0, farplane]
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;
    ShadowFactor = step(currentDepth, closestDepth); // closestDepth > currentDepth -> occluded, shadow.

    float4 phong = Phong(...);
    return ShadowFactor * phong;
}
LightViewProjection is simply light.View * light.Projection
InverseViewProjection is Matrix.Invert(camera.View * camera.Projection)
Phong() is a function I call to finalize the lighting
The lightDepthMap simply stores length(lightPos - Position)
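For reference, the host-side setup described above might look roughly like this (a sketch; the effect parameter names are assumed to match the .fx file):
// Build the matrices exactly as described and hand them to the effect.
Matrix lightViewProjection = light.View * light.Projection;
Matrix inverseViewProjection = Matrix.Invert(camera.View * camera.Projection);
effect.Parameters["LightViewProjection"].SetValue(lightViewProjection);
effect.Parameters["InverseViewProjection"].SetValue(inverseViewProjection);
effect.Parameters["LightPosition"].SetValue(light.Position);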
I'd like to get rid of the artifact shown in the pictures so that I can adapt the code to point lights as well.
Could this be a problem with the way I retrieve the world position from screen space, or could my depth have too low a resolution?
Help is much appreciated!
--- Update ---
I changed my lighting shader to display the difference between the distance stored in the shadow map and the distance calculated on the spot in the pixel shader:
float4 PixelShaderFct(...) : SV_TARGET0
{
    // Get depth from texture
    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;

    Position = mul(Position, InverseViewProjection);
    Position /= Position.w;

    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w;

    // Calculate projected UV from the light's POV -> ScreenPos is in [-1;1] space
    float2 LUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);

    float lightZ = ShadowMap.Sample(SampleDot, LUV).r;
    float Attenuation = AttenuationMap.Sample(SampleType, LUV).r;
    float ShadowFactor = 1;

    // Linear depth model; lightZ stores (LightPos - Pos) / LightFarPlane
    float closestDepth = lightZ * LightFarPlane;
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;
    return (closestDepth - currentDepth);
}
As I am basically outputting Length - (Length - Bias), one would expect an image with "DepthBias" as its color. But that is not the result I'm getting here:
https://imgur.com/a/4PXLH7s
Based on this result, I'm assuming that I either have precision issues (which I find weird, given that I'm working with near and far planes of [0.1, 50]), or something is wrong with the way I'm recovering the world position of a given pixel from my depth map.
I finally found the solution and I'm posting it here in case someone stumbles across the same issue:
The tutorial I used was for XNA / DX9. But as I'm targeting DX10+, a tiny change needs to be made:
In XNA / DX9, UV coordinates are not aligned with the actual pixels and need to be offset. That is what the - float2(1.0f / GBufferTextureSize.xy) term in float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy); was for. This offset is NOT needed in DX10 and above, and keeping it causes the issue I had.
Solution:
UV coordinates for a fullscreen quad:
For XNA / DX9:
input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);
For MonoGame / DX10+:
input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);

Smoothing noises with different amplitudes (Part 2)

Well, I'm following up on this question, which is still without an answer (Smoothing random noises with different amplitudes), and I have another question.
I have opted to use the contour/shadow of a shape (translating/transforming? a list of points from its center with an offset/distance).
This contour/shadow is bigger than the current path. I used this repository (https://github.com/n-yoda/unity-vertex-effects) to recreate the shadow. And this works pretty well, except for one fact.
To get the height of all points (obtained by the shadow algorithm (line 13 of ModifiedShadow.cs & line 69 of CircleOutline.cs)), I take each point's distance to the center and divide it by the maximum distance to the center:
float dist = orig.Max(v => (v - Center).magnitude);
foreach (Point p in poly)
{
    float d = 1f - (Center - p).magnitude / dist;
}
Where orig is the entire list of points obtained by the shadow algorithm, and d is the height of the shadow.
But the problem is obvious: I get a perfect circle.
In red and black to see the contrast:
And this is not what I want:
As you can see, this is not a perfect gradient. Let's explain what's happening.
I use this library to generate noises: https://github.com/Auburns/FastNoise_CSharp
Note: If you want to know what I use to get noises with different amplitudes, see Smoothing random noises with different amplitudes (first block of code); to see this in action, see this repo.
The green background color represents noises with a mean height of -0.25 and an amplitude of 0.3.
The white background color represents noises with a mean height of 0 and an amplitude of 0.1.
Red means 1 (total interpolation for noises corresponding to white pixels).
Black means 0 (total interpolation for noises corresponding to green pixels).
That's why we have this output:
I have tried comparing the distances of each individual point to the center, but this outputs a weird and unexpected result.
Honestly, I don't know what else to try...
The problem is that the lerp percentage (e.g., from high/low or "red" to "black" in your visualization) is only a function of the point's distance from the center, divided by a constant (which happens to be the maximum distance of any point from the center). That's why it appears circular.
For instance, the centermost point on the left side of the polygon might be 300 pixels away from the center, while the centermost point on the right might be 5 pixels. Both need to be red, but basing it on "0 distance from center = red" won't make either red, and basing it on "min distance from center = red" will only put red on the right side.
The relevant minimum and maximum distances change depending on where the point is.
One alternative method is, for each point: find the closest white pixel and the closest green pixel (or the closest shadow pixel that is adjacent to green/white, as here). Then choose your redness depending on how the distances of those two points compare to the current point.
Therefore, you could do this (pseudo-C#):
foreach pixel p in shadow_region {
    // technically, the closest shadow pixel which is adjacent to an x-colored pixel:
    float closestGreen_distance = +inf;
    float closestWhite_distance = +inf;

    // Possibly: find all shadow-adjacent pixels prior to the outer loop
    // and cache them. Then, you only have to loop through those pixels.
    foreach pixel p2 in shadow {
        float p2Dist = (p - p2).magnitude;
        if (p2 is adjacent to green) {
            if (p2Dist < closestGreen_distance) {
                closestGreen_distance = p2Dist;
            }
        }
        if (p2 is adjacent to white) {
            if (p2Dist < closestWhite_distance) {
                closestWhite_distance = p2Dist;
            }
        }
    }
    float d = 1f - closestWhite_distance / (closestWhite_distance + closestGreen_distance);
}
Using the code you've posted in the comments, this might look like:
foreach (Point p in value)
{
    float minOuterDistance = outerPoints.Min(p2 => (p - p2).magnitude);
    float minInnerDistance = innerPoints.Min(p2 => (p - p2).magnitude);
    float d = 1f - minInnerDistance / (minInnerDistance + minOuterDistance);
    Color32? colorValue = func?.Invoke(p.x, p.y, d);
    if (colorValue.HasValue)
        target[F.P(p.x, p.y, width, height)] = colorValue.Value;
}
The above part was chosen for the solution. The below part, mentioned as another option, turned out to be unnecessary.
If you can't determine if a shadow pixel is adjacent to white/green, here's an alternative that only requires the calculation of the normals of each vertex in your pink (original) outline.
Create outer "yellow" vertices by going to each pink vertex and following its normal outward. Create inner "blue" vertices by going to each pink vertex and following its normal inward.
Then, when looping through each pixel in the shadow, loop through the yellow vertices to get your "closest to green" and through the blue to get "closest to white".
The problem is that since your shapes aren't fully convex, these projected blue and yellow outlines might be inside-out in some places, so you would need to deal with that somehow. I'm having trouble determining an exact method of dealing with that, but here's what I have so far:
One step is to ignore any blues/yellows whose outward normals point towards the current shadow pixel.
However, if the current pixel is inside a region where the yellow/blue shape is inside-out, I'm not sure how to proceed. There might be something to ignoring blue/yellow vertices that are closer to the closest pink vertex than they should be.
extremely rough pseudocode:
list yellow_vertex_list = new list
list blue_vertex_list = new list

foreach pink vertex p:
    given float dist;
    vertex yellowvertex = new vertex(p + normal * dist)
    vertex bluevertex = new vertex(p - normal * dist)
    yellow_vertex_list.add(yellowvertex)
    blue_vertex_list.add(bluevertex)

create shadow

for each pixel p in shadow:
    foreach vertex v in blue_vertex_list:
        if v.normal points towards p: continue;
        if v is on the wrong side of an inside-out region: continue;
        if v is closest so far:
            closest_blue = v
            closest_blue_dist = (v - p).magnitude
    foreach vertex v in yellow_vertex_list:
        if v.normal points towards p: continue;
        if v is on the wrong side of an inside-out region: continue;
        if v is closest so far:
            closest_yellow = v
            closest_yellow_dist = (v - p).magnitude
    float d = 1f - closest_blue_dist / (closest_blue_dist + closest_yellow_dist)

What's the logic behind creating a normal map from a texture?

I have looked on Google, but the only thing I could find was a tutorial on how to create one using Photoshop. No interest! I need the logic behind it.
(And I don't need the logic of how to 'use' a bump map; I want to know how to 'make' one!)
I am writing my own HLSL shader and have come as far as to realize that there is some kind of gradient between two pixels which will show their normals - thus, with the position of the light, pixels can be lit accordingly.
I want to do this in real time, so that when the texture changes, the bump map does too.
Thanks
I realize that I'm way WAY late to this party, but I, too, ran into the same situation recently while attempting to write my own normal map generator for 3ds Max. There are bulky and unnecessary libraries for C#, but nothing in the way of a simple, math-based solution.
So I ran with the math behind the conversion: the Sobel operator. That's what you're looking to employ in the shader script.
The following class is about the simplest implementation I've seen for C#. It does exactly what it's supposed to do and achieves exactly what is desired: a normal map based on either a heightmap, a texture, or even a programmatically-generated procedural that you provide.
As you can see in the code, I've implemented if / else to mitigate exceptions thrown at the edge-detection width and height limits.
What it does: it samples the HSB brightness of each pixel and its adjoining pixels to determine the scale of the output hue / saturation values, which are subsequently converted to RGB for the SetPixel operation.
As an aside: you could implement an input control to scale the intensity of the output hue / saturation values, scaling the effect the output normal map will have on your geometry / lighting.
Screenshot of C# winforms implementation (source / output):
C# Class to achieve a Sobel-based normal map from source image:
using System.Drawing;
using System.Windows.Forms;

namespace heightmap.Class
{
    class Normal
    {
        public void calculate(Bitmap image, PictureBox pic_normal)
        {
            #region Global Variables
            int w = image.Width - 1;
            int h = image.Height - 1;
            float sample_l;
            float sample_r;
            float sample_u;
            float sample_d;
            float x_vector;
            float y_vector;
            Bitmap normal = new Bitmap(image.Width, image.Height);
            #endregion
            // Loop over every pixel; x runs across the width, y down the height.
            for (int y = 0; y < h + 1; y++)
            {
                for (int x = 0; x < w + 1; x++)
                {
                    // Sample the brightness of the four neighbours, clamping at the edges.
                    if (x > 0) { sample_l = image.GetPixel(x - 1, y).GetBrightness(); }
                    else { sample_l = image.GetPixel(x, y).GetBrightness(); }
                    if (x < w) { sample_r = image.GetPixel(x + 1, y).GetBrightness(); }
                    else { sample_r = image.GetPixel(x, y).GetBrightness(); }
                    if (y > 0) { sample_u = image.GetPixel(x, y - 1).GetBrightness(); }
                    else { sample_u = image.GetPixel(x, y).GetBrightness(); }
                    if (y < h) { sample_d = image.GetPixel(x, y + 1).GetBrightness(); }
                    else { sample_d = image.GetPixel(x, y).GetBrightness(); }
                    // Map the horizontal and vertical gradients from [-1, 1] to [0, 255].
                    x_vector = (((sample_l - sample_r) + 1) * .5f) * 255;
                    y_vector = (((sample_u - sample_d) + 1) * .5f) * 255;
                    Color col = Color.FromArgb(255, (int)x_vector, (int)y_vector, 255);
                    normal.SetPixel(x, y, col);
                }
            }
            pic_normal.Image = normal; // set as PictureBox image
        }
    }
}
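Hypothetical usage from a WinForms handler (the file dialog and PictureBox names are placeholders):
// Load a source image and hand it to the generator along with the target PictureBox.
Bitmap source = (Bitmap)Bitmap.FromFile(openFileDialog.FileName);
new Normal().calculate(source, pic_normal);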
A sampler to read your height or depth map:
/// same data as HeightMap, but in a format that the pixel shader can read;
/// the pixel shader dynamically generates the surface normals from this.
extern Texture2D HeightMap;
sampler2D HeightSampler = sampler_state
{
    Texture = (HeightMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    Filter = LINEAR;
};
Note that my input map is a 512x512 single-component grayscale texture. Calculating the normals from that is pretty simple:
#define HALF2 ((float2)0.5)
#define GET_HEIGHT(heightSampler,texCoord) (tex2D(heightSampler,texCoord+HALF2))

/// calculate a normal for the given location from the height map.
/// basically, this calculates the X- and Z- surface derivatives and returns their
/// cross product. Note that this assumes the heightmap is a 512 pixel square for no
/// particular reason other than that my test map is 512x512.
float3 GetNormal(sampler2D heightSampler, float2 texCoord)
{
    /// normalized size of one texel. this would be 1/1024.0 if using a 1024x1024 bitmap.
    float texelSize = 1 / 512.0;

    float n = GET_HEIGHT(heightSampler, texCoord + float2(0, -texelSize));
    float s = GET_HEIGHT(heightSampler, texCoord + float2(0, texelSize));
    float e = GET_HEIGHT(heightSampler, texCoord + float2(-texelSize, 0));
    float w = GET_HEIGHT(heightSampler, texCoord + float2(texelSize, 0));

    float3 ew = normalize(float3(2 * texelSize, e - w, 0));
    float3 ns = normalize(float3(0, s - n, 2 * texelSize));
    float3 result = cross(ew, ns);

    return result;
}
and a pixel shader to call it:
#define LIGHT_POSITION (float3(0,2,0))

float4 SolidPS(float3 worldPosition : NORMAL0, float2 texCoord : TEXCOORD0) : COLOR0
{
    /// calculate a normal from the height map
    float3 normal = GetNormal(HeightSampler, texCoord);
    /// return it as a color. (Since the normal components can range from -1 to +1, this
    /// will probably return a lot of "black" pixels if rendered as-is to screen.)
    return float4(normal, 1);
}
LIGHT_POSITION could (and probably should) be input from your host code, though I've cheated and used a constant here.
Note that this method requires 4 texture lookups per normal, not counting one to get the color. That may not be an issue for you (depending on whatever else you're doing). If it becomes too much of a performance hit, you can instead recalculate only when the texture changes: render to a target and capture the result as a normal map.
An alternative would be to draw a screen-aligned quad textured with the heightmap to a render target and use the ddx/ddy HLSL intrinsics to generate the normals without having to resample the source texture. Obviously you'd do this in a pre-pass step, read the resulting normal map back, and then use it as an input to your later stages.
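Host-side, that pre-pass bake could look roughly like this (an XNA-style sketch; normalGenEffect and heightMap are assumed names):
// Draw the heightmap through the normal-generating effect into a render target;
// the target then serves as the normal map for later passes.
RenderTarget2D normalTarget = new RenderTarget2D(device, 512, 512, false,
    SurfaceFormat.Color, DepthFormat.None);
device.SetRenderTarget(normalTarget);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
    null, null, null, normalGenEffect);
spriteBatch.Draw(heightMap, new Rectangle(0, 0, 512, 512), Color.White);
spriteBatch.End();
device.SetRenderTarget(null);
Texture2D bakedNormalMap = normalTarget; // RenderTarget2D derives from Texture2D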
In any case, this has proved fast enough for me.
The short answer is: there's no way to do this reliably that produces good results, because there's no way to tell the difference between a diffuse texture that has changes in color/brightness due to bumpiness, and a diffuse texture that has changes in color/brightness because the surface is actually a different colour/brightness at that point.
Longer answer:
If you were to assume that the surface were actually a constant colour, then any changes in colour or brightness must be due to shading effects due to bumpiness. Calculate how much brighter/darker each pixel is from the actual surface colour; brighter values indicate parts of the surface that face 'towards' the light source, and darker values indicate parts of the surface that face 'away' from the light source. If you also specify the direction the light is coming from, you can calculate a surface normal at each point on the texture such that it would result in the shading value you calculated.
That's the basic theory. Of course, in reality, the surface is almost never a constant colour, which is why this approach of using purely the diffuse texture as input tends not to work very well. I'm not sure how things like CrazyBump do it but I think they're doing things like averaging the colour over local parts of the image rather than the whole texture.
Ordinarily, normal maps are created from actual 3D models of the surface that are 'projected' onto lower-resolution geometry. Normal maps are just a technique for faking that high-resolution geometry, after all.
Quick answer: it's not possible.
A simple generic (diffuse) texture simply does not contain this information. I haven't looked at exactly how Photoshop does it (I saw it used once by an artist), but I think they simply do something like depth = r + g + b + a, which basically yields a heightmap/gradient, and then convert the heightmap to a normal map using a simple edge-detect effect (to get a tangent-space normal map).
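A rough sketch of that heuristic (my reading of it, not Photoshop's actual code): average the channels into a height value per pixel, then run an edge detect (such as the Sobel class above) over the result.
using System.Drawing;

static float[,] ToHeightmap(Bitmap source)
{
    float[,] height = new float[source.Width, source.Height];
    for (int y = 0; y < source.Height; y++)
    {
        for (int x = 0; x < source.Width; x++)
        {
            Color c = source.GetPixel(x, y);
            // 'depth = r + g + b + a', normalized to [0, 1].
            height[x, y] = (c.R + c.G + c.B + c.A) / (4f * 255f);
        }
    }
    return height;
}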
Just keep in mind that in most cases you use a normal map to simulate a high-resolution 3D geometry mesh, as it fills in the gaps that vertex normals leave behind. If your scene heavily relies on lighting, this is a no-go, but if it's a simple directional light, this 'might' work.
Of course, this is just my experience; you might just as well be working on a completely different type of project.

Calculating texture coordinates from a grid position

Our game uses a 'block atlas', a grid of square textures which correspond to specific faces of blocks in the game. We're aiming to streamline vertex data memory by storing texture data in the vertex as shorts instead of float2s. Here's our Vertex Definition (XNA, C#):
public struct BlockVertex : IVertexType
{
    public Vector3 Position;
    public Vector3 Normal;

    /// <summary>
    /// The texture coordinate of this block in reference to the top left
    /// corner of an ID's square on the atlas
    /// </summary>
    public short TextureID;
    public short DecalID;

    // Describe the layout of this vertex structure.
    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
    (
        new VertexElement(0, VertexElementFormat.Vector3,
            VertexElementUsage.Position, 0),
        new VertexElement(sizeof(float) * 3, VertexElementFormat.Vector3,
            VertexElementUsage.Normal, 0),
        new VertexElement(sizeof(float) * 6, VertexElementFormat.Short2,
            VertexElementUsage.TextureCoordinate, 0)
    );

    public BlockVertex(Vector3 Position, Vector3 Normal, short TexID)
    {
        this.Position = Position;
        this.Normal = Normal;
        this.TextureID = TexID;
        this.DecalID = TexID;
    }

    // Describe the size of this vertex structure.
    public const int SizeInBytes = (sizeof(float) * 6) + (sizeof(short) * 2);

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return VertexDeclaration; }
    }
}
I have little experience in HLSL, but when I tried to adapt our current shader to use this vertex (as opposed to the old one, which stored the texture and decal as Vector2 coordinates), I got nothing but transparent blue models, which I believe means that the texture coordinates for the faces are all the same?
Here's the HLSL where I try to interpret the vertex data:
int AtlasWidth = 25;
int SquareSize = 32;
float TexturePercent = 0.04; // fraction of the atlas taken up by one grid square (1 / AtlasWidth)
[snipped]

struct VSInputTx
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    short2 BlockAndDecalID : TEXCOORD0;
};

struct VSOutputTx
{
    float4 Position : POSITION0;
    float3 Normal : TEXCOORD0;
    float3 CameraView : TEXCOORD1;
    short2 TexCoords : TEXCOORD2;
    short2 DecalCoords : TEXCOORD3;
    float FogAmt : TEXCOORD4;
};

[snipped]

VSOutputTx VSBasicTx( VSInputTx input )
{
    VSOutputTx output;

    float4 worldPosition = mul( input.Position, World );
    float4 viewPosition = mul( worldPosition, View );
    output.Position = mul( viewPosition, Projection );
    output.Normal = mul( input.Normal, World );
    output.CameraView = normalize( CameraPosition - worldPosition );
    // This sets the fog amount (since it needs position data)
    output.FogAmt = 1 - saturate((distance(worldPosition,CameraPosition)-FogBegin)/(FogEnd-FogBegin));

    // Convert texture coordinates to short2 from blockID
    // When they transfer to the pixel shader, they will be interpolated
    // per pixel.
    output.TexCoords = short2((short)(((input.BlockAndDecalID.x) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.x) / (AtlasWidth)) * TexturePercent));
    output.DecalCoords = short2((short)(((input.BlockAndDecalID.y) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.y) / (AtlasWidth)) * TexturePercent));

    return output;
}
I changed nothing else from the original shader which displayed everything fine, but you can see the entire .fx file here.
If I could just debug the thing I might be able to figure it out, but... well, anyway, I think it has to do with my limited knowledge of how shaders work. I imagine my attempt to use integer arithmetic is less than effective. Also, that's a lot of casts; I could believe it if values got forced to 0 somewhere in there. In case I am way off the mark, here is what I aim to achieve:
Shader gets a vertex which stores two shorts as well as other data. The shorts represent an ID of a certain corner of a grid of square textures. One is for the block face, the other is for a decal which is drawn over that face.
The shader uses these IDs, as well as some constants which define the size of the texture grid (henceforth referred to as "Atlas"), to determine the actual texture coordinate of this particular corner.
The X of the texture coordinate is (ID % AtlasWidth) * TexturePercent, where TexturePercent is a constant representing the fraction of the entire texture covered by one grid square.
The Y of the texture coordinate is (ID / AtlasWidth) * TexturePercent.
How can I go about this?
Update: At some point I got the error "vs_1_1 does not support 8- or 16-bit integers" - could this be part of the issue?
output.TexCoords = short2((short)(((input.BlockAndDecalID.x) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.x) / (AtlasWidth)) * TexturePercent));
output.DecalCoords = short2((short)(((input.BlockAndDecalID.y) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.y) / (AtlasWidth)) * TexturePercent));
These two lines contain the following divisions:
(input.BlockAndDecalID.x) / (AtlasWidth)
(input.BlockAndDecalID.y) / (AtlasWidth)
These are both integer divisions. Integer division cuts off everything past the decimal point (e.g. 7 / 25 == 0). I'm assuming AtlasWidth is always greater than each of the coordinates; therefore both of these divisions will always result in 0. If this assumption is incorrect, I'm assuming you still want that decimal data in some way.
You probably want a float division, something that returns a decimal result, so you need to cast at least one (or both) of the operands to a float first, e.g.:
((float)input.BlockAndDecalID.x) / ((float)AtlasWidth)
EDIT: I would use NormalizedShort2 instead; it scales your value by the max short value (32767), allowing a "decimal-like" use of short values (in other words, "0.5" = 16383). Of course, if your intent was just to halve your tex coord data, you can achieve this while still using floating points by using HalfVector2. If you still insist on Short2, you will probably have to scale it like NormalizedShort2 already does before submitting it as a coordinate.
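For instance, packing the two IDs on the CPU might look like this (a sketch; blockId and decalId are placeholders, and 32767 is the max short value mentioned above):
using Microsoft.Xna.Framework.Graphics.PackedVector;

// Each ID is stored as a fraction of 32767, so the GPU reads it back in [-1, 1];
// the shader can recover the integer ID by multiplying by 32767 and rounding.
NormalizedShort2 packed = new NormalizedShort2(
    blockId / 32767f, decalId / 32767f);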
Alright, after I tried to create a short in one of the shader functions and hit a compiler error stating that vs_1_1 doesn't support 8/16-bit integers (why didn't you tell me that before??), I changed the function to look like this:
float texX = ((float)((int)input.BlockAndDecalID.x % AtlasWidth) * (float)TexturePercent);
float texY = ((float)((int)input.BlockAndDecalID.x / AtlasWidth) * (float)TexturePercent);
output.TexCoords = float2(texX, texY);
float decX = ((float)((int)input.BlockAndDecalID.y % AtlasWidth) * (float)TexturePercent);
float decY = ((float)((int)input.BlockAndDecalID.y / AtlasWidth) * (float)TexturePercent);
output.DecalCoords = float2(decX, decY);
This seems to work just fine. I'm not sure if it's the change to ints or the (float) cast on the front; I think it might be the float, which means Scott was definitely on to something. My partner somehow wrote the original shader to use a short2 as the texture coordinate; how he did that I do not know.
