Cubemap skybox from the scene to the shader - C#

The problem is perhaps a simple one:
I can't figure out how to get the skybox and apply it to my shader.
I think I'm close, but how do I get the skybox from the scene?
mygameobjec.GetComponent<Renderer>().material.SetTexture("_SkyReflection",Skybox.material.Texture??);
Thanks

Try RenderSettings.skybox.mainTexture.
https://docs.unity3d.com/ScriptReference/RenderSettings-skybox.html
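Applied to your snippet, a minimal sketch (names taken from your line; note that mainTexture only returns something if the skybox shader exposes a _MainTex-style property, so a cubemap skybox may need GetTexture("_Tex") instead):
var skyboxMat = RenderSettings.skybox;
var skyTex = skyboxMat.mainTexture;            // may be null for cubemap/procedural skyboxes
// var skyTex = skyboxMat.GetTexture("_Tex");  // cubemap slot of Unity's Skybox/Cubemap shader
mygameobjec.GetComponent<Renderer>().material.SetTexture("_SkyReflection", skyTex);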
A tip though: it is also possible to access the current reflection environment inside the shader through a shader global called unity_SpecCube0. Here is a function I often use in my shaders:
// Returns the reflection color given a normal and view direction.
inline half3 SurfaceReflection(half3 viewDir, half3 worldNormal, half roughness) {
    half3 worldRefl = reflect(-viewDir, worldNormal);
    // Unity's perceptual roughness-to-mip remap; note the parentheses.
    half r = roughness * (1.7 - 0.7 * roughness);
    float4 reflData = UNITY_SAMPLE_TEXCUBE_LOD(
        unity_SpecCube0, worldRefl, r * 6
    );
    return DecodeHDR(reflData, unity_SpecCube0_HDR);
}
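A hedged usage sketch from a fragment function; i.worldPos, i.worldNormal, _Roughness and _Reflectivity are assumed interpolators/properties of mine, not something your project necessarily has:
half3 viewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
half3 reflection = SurfaceReflection(viewDir, normalize(i.worldNormal), _Roughness);
col.rgb = lerp(col.rgb, reflection, _Reflectivity);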

Related

Problem with PanGesture and object translation in ARKit scene (Xamarin)

I'm new to ARKit and the Xamarin environment.
I need help with translating an SCNNode in the scene using a PanGesture.
I used this guide for my first approach to PanGesture: Guide
After that...
I used the sample code, but I noticed that, as in the example, when I move an object in the scene it ONLY follows the X and Y axes.
In short, everything works as long as the Cartesian axes of the ARKit scene are framed with the camera's Z axis pointing at the observer.
If the camera position changes (the phone moves), how can I obtain the translation delta within 3D space?
if (sender.State == UIGestureRecognizerState.Changed)
{
    var translate = sender.TranslationInView(areaPanned);
    // Only allow movement vertically or horizontally [OK, but how can I obtain the XYZ delta in the scene from the XY of the viewport?]
    node.LocalTranslate(new SCNVector3((float)translate.X / 10000f, (float)-translate.Y / 10000f, 0.0f));
}
Following the OpenGL convention, I thought of a solution like this:
[Pseudo code]
// scale/offset from 0...1 to -1...1 coordinate space
var vS = new Coordinate(this.Scene.CurrentViewport.X, this.Scene.CurrentViewport.Y, 1.0);
var vWH = new Coordinate(this.Scene.CurrentViewport.Width, this.Scene.CurrentViewport.Height, 1.0);
var screenPos = new Coordinate(translate.X, -translate.Y, 1.0);
var normalized = (screenPos - vS) / vWH;
After that I need the matrix:
var inversePM = (projection * modelView).inverse
where:
=> projection from ARCamera.ProjectionMatrix
=> modelView from ARCamera.Transform
To finish:
var result = normalized * inversePM;
But if I set the SCNNode position with this value, nothing works :(
Thanks
Problem solved!
Here is the Swift code to translate into C#... it works fine!
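For anyone landing here without the linked Swift code, a hedged sketch of one common approach (mine, not the linked one): project the node into view coordinates, apply the 2D pan delta there, then unproject back so the drag follows the camera orientation. sceneView is assumed to be your ARSCNView.
if (sender.State == UIGestureRecognizerState.Changed)
{
    var translate = sender.TranslationInView(sceneView);
    // Project the node's current position into view coordinates (Z keeps its depth).
    var screenPos = sceneView.ProjectPoint(node.Position);
    // Apply the pan delta in screen space; depending on your setup you may need to flip the Y sign.
    var moved = new SCNVector3(screenPos.X + (float)translate.X,
                               screenPos.Y + (float)translate.Y,
                               screenPos.Z);
    // Unproject back: the delta is now expressed in scene coordinates instead of being locked to X/Y.
    node.Position = sceneView.UnprojectPoint(moved);
    // Reset so each Changed event delivers an incremental delta.
    sender.SetTranslation(CGPoint.Empty, sceneView);
}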

(MonoGame/HLSL) Problems with shadow mapping - shadow dependent on camera position

I've been banging my head against this problem for quite a while now and finally realized that I need serious help...
Basically, I wanted to implement proper shadows in my project, which I'm writing in MonoGame. For this I wrote a deferred shader in HLSL using multiple tutorials, mainly ones written for old XNA.
The problem is that although my lighting and shadows work for a spotlight, the light on the floor of my scene is very dependent on my camera position, as you can see in the images: https://imgur.com/a/TU7y0bs
I tried many different things to solve this problem:
A bigger DepthBias widens the radius that is "shadow free", with massive peter panning, and the described issue is not fixed at all.
One paper suggested using an exponential shadow map, but I didn't like the results at all, as the light bleeding was unbearable and smaller shadows (like the one behind the torch on the wall) did not get rendered.
I switched my GBuffer depth map to 1-z/w to get more precision, but that did not fix the problem either.
I am using a
new RenderTarget2D(device,
Width, Height, false, SurfaceFormat.Vector2, DepthFormat.Depth24Stencil8)
to store the depth from the light's perspective.
I calculate the shadow using this pixel shader function:
Note that I want to adapt this shader to a point light in the future - that's why I'm simply using length(LightPos - PixelPos).
SpotLight.fx - PixelShader
float4 PS(VSO input) : SV_TARGET0
{
    // Fancy lighting equations
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);
    // Sample depth from the DepthMap
    float Depth = DepthMap.Sample(SampleTypeClamp, UV).x;
    // Reconstruct the pixel position in world space
    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;
    // Transform Position to world space
    Position = mul(Position, InverseViewProjection);
    Position /= Position.w;
    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w;
    // Calculate projected UV from the light's POV -> ScreenPos is in [-1;1] space
    float2 LightUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
    float lightDepth = ShadowMap.Sample(SampleDot, LightUV).r;
    // Linear depth model
    float closestDepth = lightDepth * LightFarplane; // Depth is stored in [0, 1]; bring it to [0, farplane]
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;
    float ShadowFactor = step(currentDepth, closestDepth); // 1 when closestDepth >= currentDepth (lit), 0 when occluded (in shadow)
    float4 phong = Phong(...);
    return ShadowFactor * phong;
}
LightViewProjection is simply light.View * light.Projection
InverseViewProjection is Matrix.Invert(camera.View * camera.Projection)
Phong() is a function I call to finalize the lighting
The light's depth map simply stores length(lightPos - Position)
I'd like to get rid of the artifact shown in the pictures so that I can adapt the code to point lights as well.
Could this be a problem with the way I retrieve the world position from screen space, or does my depth have too low a resolution?
Help is much appreciated!
--- Update ---
I changed my lighting shader to display the difference between the distance stored in the shadow map and the distance calculated on the spot in the pixel shader:
float4 PixelShaderFct(...) : SV_TARGET0
{
    // Get depth from the GBuffer texture
    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;
    Position = mul(Position, InverseViewProjection);
    Position /= Position.w;
    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w;
    // Calculate projected UV from the light's POV -> ScreenPos is in [-1;1] space
    float2 LUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
    float lightZ = ShadowMap.Sample(SampleDot, LUV).r;
    float Attenuation = AttenuationMap.Sample(SampleType, LUV).r;
    float ShadowFactor = 1;
    // Linear depth model; lightZ stores (LightPos - Pos) / LightFarPlane
    float closestDepth = lightZ * LightFarPlane;
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;
    return (closestDepth - currentDepth);
}
As I am basically outputting Length - (Length - Bias), one would expect an image with "DepthBias" as its color. But that is not the result I'm getting here:
https://imgur.com/a/4PXLH7s
Based on this result, I'm assuming that either I've got precision issues (which I find weird, given that I'm working with near and far planes of [0.1, 50]), or something is wrong with the way I'm recovering the world position of a given pixel from my depth map.
I finally found the solution, and I'm posting it here in case someone stumbles across the same issue:
The tutorial I used was written for XNA / DX9, but as I'm targeting DX10+, a tiny change needs to be made:
In XNA / DX9 the UV coordinates are not aligned with the actual pixels and need a half-pixel offset. That is what the - float2(1.0f / GBufferTextureSize.xy) term in float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy); was for. This is NOT needed in DX10 and above, and keeping it causes the issue I had.
Solution:
UV Coordinates for a Fullscreen Quad:
For XNA / DX9:
input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);
For MonoGame / DX10+:
input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
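If the same effect file has to run on both DX9/XNA and DX10+/MonoGame, one option (my suggestion, not part of the original answer) is to pass the half-pixel offset in from the CPU and set it to zero on DX10+:
float2 HalfPixel; // CPU side: float2(0.5f / width, 0.5f / height) on DX9, float2(0, 0) on DX10+

float2 ScreenToUV(float4 screenPos)
{
    screenPos.xy /= screenPos.w;
    return 0.5f * (float2(screenPos.x, -screenPos.y) + 1.0f) - HalfPixel;
}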

Lerp alpha based on distance between 2 objects

So I am trying to increase an image's alpha channel as an object gets closer and closer to the player. I am using Vector3.Distance()
to get the distance from the player to the object, but I don't know how to convert the distance so that the value of color.a gets bigger as the distance gets smaller.
Please point me in the right direction:
How can I make a number bigger as another number gets smaller?
See this post, which explains how to lerp a color based on the distance between two GameObjects. The only difference is that you want to lerp the alpha instead, so everything written in that post should still be relevant here; only a few modifications need to be made.
You just need to use Mathf.Lerp instead of Color.Lerp. Also, you need to enable fade mode on the material; you can do that from the Editor or from a script. The code below is a modified version of the code from the linked answer that should accomplish what you are after. It also enables fade mode from code, in the Start function.
public GameObject obj1;
public GameObject obj2;
const float MAX_DISTANCE = 200;
Renderer mRenderer;
void Start()
{
    mRenderer = GetComponent<Renderer>();

    // Enable Fade mode on the material if not done already
    mRenderer.material.SetFloat("_Mode", 2);
    mRenderer.material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
    mRenderer.material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
    mRenderer.material.SetInt("_ZWrite", 0);
    mRenderer.material.DisableKeyword("_ALPHATEST_ON");
    mRenderer.material.EnableKeyword("_ALPHABLEND_ON");
    mRenderer.material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
    mRenderer.material.renderQueue = 3000;
}

void Update()
{
    // Get the distance between the two objects.
    // Note: sqrMagnitude returns the squared distance, so MAX_DISTANCE is in squared units;
    // use Vector3.Distance instead if you want MAX_DISTANCE to be a plain distance.
    float distanceApart = getSqrDistance(obj1.transform.position, obj2.transform.position);
    UnityEngine.Debug.Log(distanceApart);

    // Convert the 0..MAX_DISTANCE range to the 0..1 range
    float lerp = mapValue(distanceApart, 0, MAX_DISTANCE, 0f, 1f);

    // Lerp the alpha between the near and far values
    Color lerpColor = mRenderer.material.color;
    lerpColor.a = Mathf.Lerp(1, 0, lerp);
    mRenderer.material.color = lerpColor;
}

public float getSqrDistance(Vector3 v1, Vector3 v2)
{
    return (v1 - v2).sqrMagnitude;
}

float mapValue(float mainValue, float inValueMin, float inValueMax, float outValueMin, float outValueMax)
{
    return (mainValue - inValueMin) * (outValueMax - outValueMin) / (inValueMax - inValueMin) + outValueMin;
}
I don't know how you want the effect to look, but it's basically a math question.
Something like:
color.a = 1.0 / distance
should get you started. Basically, the further from 1 distance gets (increasing), the closer to 0 color.a gets (decreasing). The opposite is also true: if distance decreases, color.a increases.
You have to deal with distance values below 1 (if applicable), since color.a will already be at its maximum and can't increase any further.
You also have to decide what color.a's maximum possible value is: is it 1.0 or 255 (or something else)? Replace the 1.0 in the formula with that max value. You may also have to multiply distance by an arbitrary factor so the effect isn't too fast or too slow.
Sounds like you want a function (in the mathematical sense) f(x) that maps an input (distance) value in the domain [0, infinity) to the output (alpha) range [1, 0]. One simple such function is 1/(1+x) (click the link to see an interactive graph).
You can use the interactive graph and online math resources to play with the equation to find one that looks good to you. Once you have that figured out, implementing it in code should be easy!
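A minimal Unity-flavored sketch of that idea; player, obj, falloff and mRenderer are placeholder names of mine:
// Map distance in [0, infinity) to alpha in (0, 1]: alpha approaches 1 as the object gets close.
float distance = Vector3.Distance(player.position, obj.position);
float alpha = 1f / (1f + falloff * distance); // falloff tunes how quickly alpha drops off

Color c = mRenderer.material.color;
c.a = alpha;
mRenderer.material.color = c;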

What's the logic behind creating a normal map from a texture?

I have looked on Google, but the only thing I could find was a tutorial on how to create one using Photoshop. No interest! I need the logic behind it.
(And I don't need the logic of how to 'use' a bump map; I want to know how to 'make' one!)
I am writing my own HLSL shader and have come as far as to realize that there is some kind of gradient between two pixels which gives the normal - and thus, with the position of the light, the surface can be lit accordingly.
I want to do this in real time, so that when the texture changes, the bump map does too.
Thanks
I realize that I'm way, WAY late to this party, but I, too, ran into the same situation recently while attempting to write my own normal map generator for 3ds Max. There are bulky and unnecessary libraries for C#, but nothing in the way of a simple, math-based solution.
So I went with the math behind the conversion: the Sobel operator. That's what you're looking to employ in your shader script.
The following class is about the simplest implementation I've seen for C#. It does exactly what it's supposed to do and achieves exactly what is desired: a normal map based on a heightmap, a texture, or even a programmatically generated procedural image that you provide.
As you can see in the code, I've used if / else to avoid the exceptions that edge detection would otherwise throw at the width and height limits.
What it does: it samples the HSB brightness of each pixel and its adjoining pixels to determine the scale of the output hue / saturation values, which are subsequently converted to RGB for the SetPixel operation.
As an aside: you could add an input control to scale the intensity of the output hue / saturation values, and thereby scale the effect the resulting normal map has on your geometry / lighting.
And that's it. No more having to deal with that deprecated, tiny-windowed Photoshop plugin. The sky's the limit.
Screenshot of the C# WinForms implementation (source / output):
C# class to produce a Sobel-based normal map from a source image:
using System.Drawing;
using System.Windows.Forms;

namespace heightmap.Class
{
    class Normal
    {
        // 'image' is the source texture/heightmap; the result is shown in 'pic_normal'.
        public void calculate(Bitmap image, PictureBox pic_normal)
        {
            #region Global Variables
            int w = image.Width - 1;
            int h = image.Height - 1;
            float sample_l;
            float sample_r;
            float sample_u;
            float sample_d;
            float x_vector;
            float y_vector;
            Bitmap normal = new Bitmap(image.Width, image.Height);
            #endregion
            for (int y = 0; y < h + 1; y++)
            {
                for (int x = 0; x < w + 1; x++)
                {
                    // Sample the brightness of the left/right/up/down neighbours,
                    // clamping at the borders to avoid out-of-range exceptions.
                    if (x > 0) { sample_l = image.GetPixel(x - 1, y).GetBrightness(); }
                    else { sample_l = image.GetPixel(x, y).GetBrightness(); }
                    if (x < w) { sample_r = image.GetPixel(x + 1, y).GetBrightness(); }
                    else { sample_r = image.GetPixel(x, y).GetBrightness(); }
                    if (y > 0) { sample_u = image.GetPixel(x, y - 1).GetBrightness(); }
                    else { sample_u = image.GetPixel(x, y).GetBrightness(); }
                    if (y < h) { sample_d = image.GetPixel(x, y + 1).GetBrightness(); }
                    else { sample_d = image.GetPixel(x, y).GetBrightness(); }
                    // Map the slope from [-1, 1] to [0, 255] and store it in the R/G channels.
                    x_vector = (((sample_l - sample_r) + 1) * .5f) * 255;
                    y_vector = (((sample_u - sample_d) + 1) * .5f) * 255;
                    Color col = Color.FromArgb(255, (int)x_vector, (int)y_vector, 255);
                    normal.SetPixel(x, y, col);
                }
            }
            pic_normal.Image = normal; // set as the PictureBox image
        }
    }
}
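A hedged usage sketch (form-level code; the bitmap path and pictureBox1 are placeholders of mine):
// Load a source texture and display its generated normal map in a PictureBox.
var source = (Bitmap)Bitmap.FromFile("heightmap.png");
new heightmap.Class.Normal().calculate(source, pictureBox1);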
A sampler to read your height or depth map.
/// same data as HeightMap, but in a format that the pixel shader can read
/// the pixel shader dynamically generates the surface normals from this.
extern Texture2D HeightMap;
sampler2D HeightSampler = sampler_state
{
    Texture = (HeightMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    Filter = LINEAR;
};
Note that my input map is a 512x512 single-component grayscale texture. Calculating the normals from that is pretty simple:
#define HALF2 ((float2)0.5)
#define GET_HEIGHT(heightSampler,texCoord) (tex2D(heightSampler,texCoord+HALF2))
///calculate a normal for the given location from the height map
/// basically, this calculates the X- and Z- surface derivatives and returns their
/// cross product. Note that this assumes the heightmap is a 512 pixel square for no particular
/// reason other than that my test map is 512x512.
float3 GetNormal(sampler2D heightSampler, float2 texCoord)
{
    /// normalized size of one texel. this would be 1/1024.0 if using a 1024x1024 bitmap.
    float texelSize = 1 / 512.0;

    float n = GET_HEIGHT(heightSampler, texCoord + float2(0, -texelSize));
    float s = GET_HEIGHT(heightSampler, texCoord + float2(0, texelSize));
    float e = GET_HEIGHT(heightSampler, texCoord + float2(-texelSize, 0));
    float w = GET_HEIGHT(heightSampler, texCoord + float2(texelSize, 0));

    float3 ew = normalize(float3(2 * texelSize, e - w, 0));
    float3 ns = normalize(float3(0, s - n, 2 * texelSize));
    float3 result = cross(ew, ns);
    return result;
}
and a pixel shader to call it:
#define LIGHT_POSITION (float3(0,2,0))

float4 SolidPS(float3 worldPosition : NORMAL0, float2 texCoord : TEXCOORD0) : COLOR0
{
    /// calculate a normal from the height map
    float3 normal = GetNormal(HeightSampler, texCoord);
    /// return it as a color. (Since the normal components can range from -1 to +1, this
    /// will probably produce a lot of "black" pixels if rendered as-is to the screen.)
    return float4(normal, 1);
}
LIGHT_POSITION could (and probably should) be input from your host code, though I've cheated and used a constant here.
Note that this method requires 4 texture lookups per normal, not counting the one to get the color. That may not be an issue for you (depending on whatever else you're doing). If it becomes too much of a performance hit, you can instead regenerate the normals only when the texture changes: render to a target and capture the result as a normal map.
An alternative would be to draw a screen-aligned quad textured with the heightmap to a render target and use the ddx/ddy HLSL intrinsics to generate the normals without having to resample the source texture. Obviously you'd do this in a pre-pass step, read the resulting normal map back, and then use it as an input to your later stages.
In any case, this has proved fast enough for me.
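A hedged sketch of that ddx/ddy alternative (my code, not the poster's; it assumes the same HeightSampler, a full-screen quad pre-pass into a normal-map render target, and a shader model that supports ddx/ddy, e.g. ps_3_0):
float4 NormalFromDerivativesPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float height = tex2D(HeightSampler, texCoord).r;
    // Screen-space derivatives of the height give the surface slope at this pixel
    // without any extra texture lookups.
    float dhdx = ddx(height);
    float dhdy = ddy(height);
    // Build a tangent-space normal; the 1.0 in Z controls how pronounced the bumps look.
    float3 normal = normalize(float3(-dhdx, -dhdy, 1.0f));
    // Pack from [-1, 1] into [0, 1] so it can be stored in a color render target.
    return float4(normal * 0.5f + 0.5f, 1.0f);
}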
The short answer is: there's no way to do this reliably that produces good results, because there's no way to tell the difference between a diffuse texture that has changes in color/brightness due to bumpiness, and a diffuse texture that has changes in color/brightness because the surface is actually a different colour/brightness at that point.
Longer answer:
If you were to assume that the surface were actually a constant colour, then any changes in colour or brightness must be due to shading effects due to bumpiness. Calculate how much brighter/darker each pixel is from the actual surface colour; brighter values indicate parts of the surface that face 'towards' the light source, and darker values indicate parts of the surface that face 'away' from the light source. If you also specify the direction the light is coming from, you can calculate a surface normal at each point on the texture such that it would result in the shading value you calculated.
That's the basic theory. Of course, in reality, the surface is almost never a constant colour, which is why this approach of using purely the diffuse texture as input tends not to work very well. I'm not sure how things like CrazyBump do it but I think they're doing things like averaging the colour over local parts of the image rather than the whole texture.
Ordinarily, normal maps are created from actual 3D models of the surface that are 'projected' onto lower-resolution geometry. Normal maps are just a technique for faking that high-resolution geometry, after all.
Quick answer: It's not possible.
A simple generic (diffuse) texture simply does not contain this information. I haven't looked at exactly how Photoshop does it (I've seen an artist use it once), but I think they simply do something like depth = r + g + b + a, which basically yields a heightmap/gradient, and then convert that heightmap into a normal map using a simple edge-detect effect to get a tangent-space normal map.
Just keep in mind that in most cases you use a normal map to simulate high-res 3D geometry, since it fills in the blank spots that vertex normals leave behind. If your scene relies heavily on lighting this is a no-go, but if it's a simple directional light, this 'might' work.
Of course, this is just my experience, you might just as well be working on a completely different type of project.

Some simple XNA/HLSL questions

I've been getting into HLSL programming lately and I'm very curious as to HOW some of the things I'm doing actually work.
For example, I've got this very simple shader here that shades any teal-colored pixels a reddish color.
sampler2D mySampler;

float4 MyPixelShader(float2 texCoords : TEXCOORD0) : COLOR
{
    float4 Color;
    Color = tex2D(mySampler, texCoords.xy);
    if (Color.r == 0 && Color.g == 1.0 && Color.b == 1.0)
    {
        Color.r = 1.0;
        Color.g = 0.5;
        Color.b = 0.5;
    }
    return Color;
}

technique Simple
{
    pass pass1
    {
        PixelShader = compile ps_2_0 MyPixelShader();
    }
}
I understand that the tex2D function grabs the pixel's color at the specified location, but what I don't understand is how mySampler even has any data. I'm not setting it or passing in a texture at all, yet it magically contains my texture's data.
Also, what is the difference between things like:
COLOR and COLOR0
or
TEXCOORD and TEXCOORD0
I can take a logical guess and say that COLOR0 is a register in assembly that holds the currently used pixel color on the GPU. (That may be completely wrong; I'm just stating what I think it is.)
And if so, does that mean specifying something like float2 texCoords : TEXCOORD0 will, by default, grab the current position the GPU is processing?
mySampler is assigned to a sampler register; the first one is s0.
SpriteBatch uses the same register to draw textures, so you have already initialized it without realizing it.
These registers correspond to the GraphicsDevice.Textures and GraphicsDevice.SamplerStates arrays.
In fact, in your shader you can write this:
sampler TextureSampler : register(s0);
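A hedged illustration of why that works (my example, using the XNA 4.0 SpriteBatch overload that takes an Effect): the texture passed to Draw ends up in GraphicsDevice.Textures[0], which is exactly what s0 samples from.
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                  null, null, null, myEffect);
spriteBatch.Draw(myTexture, Vector2.Zero, Color.White); // myTexture -> GraphicsDevice.Textures[0] -> s0
spriteBatch.End();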
EDIT:
If you need to use a second texture in your shader, you can do this:
HLSL
sampler MaskTexture : register(s1);
C#:
GraphicsDevice.Textures[1] = MyMaskTexture;
GraphicsDevice.SamplerStates[1].AddressU = TextureAddressMode....
COLOR0 is not a register and does not hold the current pixel color. It refers to the vertex structure you are using.
When you define a vertex like VertexPositionColor, the vertex contains a Position and a Color, but if you want to define a custom vertex with two colors, you need a way to discriminate between the two colors: the channels.
The number suffix is the channel you are referring to in the current vertex.
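For instance, a hedged sketch (my example) of a custom XNA vertex with two color channels, which is where COLOR0 and COLOR1 come from:
public struct VertexPositionTwoColors : IVertexType
{
    public Vector3 Position;
    public Color Color0;
    public Color Color1;

    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Color, VertexElementUsage.Color, 0), // maps to COLOR0
        new VertexElement(16, VertexElementFormat.Color, VertexElementUsage.Color, 1)  // maps to COLOR1
    );

    VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
}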
