Calculating texture coordinates from a grid position - C#

Our game uses a 'block atlas': a grid of square textures which correspond to specific faces of blocks in the game. We're aiming to reduce vertex memory by storing texture data in the vertex as shorts instead of Vector2s. Here's our vertex definition (XNA, C#):
public struct BlockVertex : IVertexType
{
    public Vector3 Position;
    public Vector3 Normal;

    /// <summary>
    /// The texture coordinate of this block in reference to the top left
    /// corner of an ID's square on the atlas.
    /// </summary>
    public short TextureID;
    public short DecalID;

    // Describe the layout of this vertex structure.
    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
    (
        new VertexElement(0, VertexElementFormat.Vector3,
            VertexElementUsage.Position, 0),
        new VertexElement(sizeof(float) * 3, VertexElementFormat.Vector3,
            VertexElementUsage.Normal, 0),
        new VertexElement(sizeof(float) * 6, VertexElementFormat.Short2,
            VertexElementUsage.TextureCoordinate, 0)
    );

    public BlockVertex(Vector3 Position, Vector3 Normal, short TexID)
    {
        this.Position = Position;
        this.Normal = Normal;
        this.TextureID = TexID;
        this.DecalID = TexID;
    }

    // Describe the size of this vertex structure.
    public const int SizeInBytes = (sizeof(float) * 6) + (sizeof(short) * 2);

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return VertexDeclaration; }
    }
}
I have little experience with HLSL, but when I tried to adapt our current shader to use this vertex (as opposed to the old one, which stored the texture and decal coordinates as Vector2s), I got nothing but transparent blue models, which I believe means the texture coordinates for the faces are all the same.
Here's the HLSL where I try to interpret the vertex data:
int AtlasWidth = 25;
int SquareSize = 32;
float TexturePercent = 0.04; // fraction of the whole atlas covered by one block square (1 / AtlasWidth)
[snipped]
struct VSInputTx
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    short2 BlockAndDecalID : TEXCOORD0;
};

struct VSOutputTx
{
    float4 Position : POSITION0;
    float3 Normal : TEXCOORD0;
    float3 CameraView : TEXCOORD1;
    short2 TexCoords : TEXCOORD2;
    short2 DecalCoords : TEXCOORD3;
    float FogAmt : TEXCOORD4;
};
[snipped]
VSOutputTx VSBasicTx( VSInputTx input )
{
    VSOutputTx output;

    float4 worldPosition = mul( input.Position, World );
    float4 viewPosition = mul( worldPosition, View );
    output.Position = mul( viewPosition, Projection );
    output.Normal = mul( input.Normal, World );
    output.CameraView = normalize( CameraPosition - worldPosition );
    // Fog amount is set here since it needs position data.
    output.FogAmt = 1 - saturate((distance(worldPosition, CameraPosition) - FogBegin) / (FogEnd - FogBegin));

    // Convert texture coordinates from the block ID.
    // When they transfer to the pixel shader, they will be interpolated per pixel.
    output.TexCoords = short2((short)(((input.BlockAndDecalID.x) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.x) / (AtlasWidth)) * TexturePercent));
    output.DecalCoords = short2((short)(((input.BlockAndDecalID.y) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.y) / (AtlasWidth)) * TexturePercent));

    return output;
}
I changed nothing else from the original shader, which displayed everything fine, but you can see the entire .fx file here.
If I could just debug the thing I might be able to figure it out, but... well, anyway, I think it has to do with my limited knowledge of how shaders work. I imagine my attempt to use integer arithmetic is less than effective. Also, that's a lot of casts, so I could believe it if values got forced to 0 somewhere in there. In case I am way off the mark, here is what I aim to achieve:
The shader gets a vertex which stores two shorts along with the other data. The shorts represent the ID of a certain corner of a grid of square textures. One is for the block face; the other is for a decal which is drawn over that face.
The shader uses these IDs, together with some constants defining the size of the texture grid (henceforth referred to as the "atlas"), to determine the actual texture coordinate of this particular corner.
The X of the texture coordinate is (ID % AtlasWidth) * TexturePercent, where TexturePercent is a constant representing the fraction of the entire texture covered by one atlas square.
The Y of the texture coordinate is (ID / AtlasWidth) * TexturePercent.
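For example, with AtlasWidth = 25 and TexturePercent = 0.04, ID 27 should map to ((27 % 25) * 0.04, (27 / 25) * 0.04) = (0.08, 0.04), i.e. the third column of the second row of the atlas.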
How can I go about this?
Update: At some point I got the error "vs_1_1 does not support 8- or 16-bit integers". Could this be part of the issue?

output.TexCoords = short2((short)(((input.BlockAndDecalID.x) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.x) / (AtlasWidth)) * TexturePercent));
output.DecalCoords = short2((short)(((input.BlockAndDecalID.y) % (AtlasWidth)) * TexturePercent), (short)(((input.BlockAndDecalID.y) / (AtlasWidth)) * TexturePercent));
These two lines contain the following divisions:
(input.BlockAndDecalID.x) / (AtlasWidth)
(input.BlockAndDecalID.y) / (AtlasWidth)
These are both integer divisions. Integer division discards everything past the decimal point. I'm assuming AtlasWidth is always greater than each of the coordinates, so both of these divisions will always result in 0. If that assumption is incorrect, I'm assuming you still want that fractional data in some way?
You probably want a float division, something that returns a fractional result, so you need to cast at least one (or both) of the operands to float first, e.g.:
((float)input.BlockAndDecalID.x) / ((float)AtlasWidth)
EDIT: I would use NormalizedShort2 instead; it scales your value by the maximum short value (32767), allowing 'decimal-like' use of short values (in other words, a stored 16383 reads back as roughly 0.5). Of course, if your intent was just to halve your texture coordinate data, you can achieve that while still using floating point by using HalfVector2. If you still insist on Short2, you will probably have to scale the value yourself, the way NormalizedShort2 already does, before using it as a coordinate.
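For reference, a minimal sketch of what the NormalizedShort2 route might look like; the Scale constant and packedTexID name are illustrative assumptions, not the asker's code:

// Declaration change: with NormalizedShort2 the GPU divides the raw short
// by 32767, so the shader receives floats in [-1, 1] instead of raw integers.
new VertexElement(sizeof(float) * 6, VertexElementFormat.NormalizedShort2,
    VertexElementUsage.TextureCoordinate, 0)

// Host side (hypothetical): pre-scale the ID so it survives normalization.
// The shader can then recover it as round(value * 32767.0 / Scale).
const int Scale = 32767 / 625; // 625 = 25 * 25 possible IDs on a 25x25 atlas
short packedTexID = (short)(texID * Scale);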

Alright, after I tried to create a short in one of the shader functions and hit a compiler error stating that vs_1_1 doesn't support 8- or 16-bit integers (why didn't you tell me that before??), I changed the function to look like this:
float texX = ((float)((int)input.BlockAndDecalID.x % AtlasWidth) * (float)TexturePercent);
float texY = ((float)((int)input.BlockAndDecalID.x / AtlasWidth) * (float)TexturePercent);
output.TexCoords = float2(texX, texY);
float decX = ((float)((int)input.BlockAndDecalID.y % AtlasWidth) * (float)TexturePercent);
float decY = ((float)((int)input.BlockAndDecalID.y / AtlasWidth) * (float)TexturePercent);
output.DecalCoords = float2(decX, decY);
Which seems to work just fine. I'm not sure whether it was switching the intermediate math to ints or that (float) cast on the front; I think it might be the float, which means Scott was definitely on to something. Presumably the Short2 vertex data is already expanded to float by the time the shader reads it, which would explain why declaring the input as short2 and doing integer arithmetic on it never worked. My partner somehow wrote this shader to use a short2 as the texture coordinate; how he did that I do not know.


(Monogame/HLSL) Problems with ShadowMapping - Shadow dependent on Camera position

I've been banging my head against this problem for quite a while now and finally realized that I need serious help...
Basically, I want to implement proper shadows in my project, which I'm writing in MonoGame. For this I wrote a deferred shader in HLSL using multiple tutorials, mainly ones for old XNA.
The problem is that although my lighting and shadows work for a spotlight, the light on the floor of my scene is very dependent on my camera, as you can see in the images: https://imgur.com/a/TU7y0bs
I tried many different things to solve this problem:
A bigger DepthBias widens the radius that is "shadow free", with massive peter-panning, and the described issue is not fixed at all.
One paper suggested using an exponential shadow map, but I didn't like the results at all; the light bleeding was unbearable, and smaller shadows (like the one behind the torch on the wall) did not get rendered.
I switched my GBuffer depth map to 1 - z/w to get more precision, but that did not fix the problem either.
I am using a
new RenderTarget2D(device, Width, Height, false,
    SurfaceFormat.Vector2, DepthFormat.Depth24Stencil8)
to store the depth from the light's perspective.
I calculate the shadow using this pixel shader function:
Note that I want to adapt this shader to a point light in the future; that's why I'm simply using length(LightPos - PixelPos).
SpotLight.fx - PixelShader
float4 PS(VSO input) : SV_TARGET0
{
    // Fancy lighting equations
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);

    // Sample depth from the depth map
    float Depth = DepthMap.Sample(SampleTypeClamp, UV).x;

    // Getting the pixel position in world space
    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;
    // Transform position to world space
    Position = mul(Position, InverseViewProjection);
    Position /= Position.w;

    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w;

    // Calculate projected UV from the light's POV -> ScreenPos is in [-1;1] space
    float2 LightUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
    float lightDepth = ShadowMap.Sample(SampleDot, LightUV).r;

    // Linear depth model
    float closestDepth = lightDepth * LightFarplane; // depth is stored in [0, 1]; bring it to [0, farplane]
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;
    // step() yields 1 when closestDepth >= currentDepth (pixel is lit), 0 when occluded.
    float ShadowFactor = step(currentDepth, closestDepth);

    float4 phong = Phong(...);
    return ShadowFactor * phong;
}
LightViewProjection is simply light.View * light.Projection.
InverseViewProjection is Matrix.Invert(camera.View * camera.Projection).
Phong() is a function I call to finalize the lighting.
The light's depth map simply stores length(lightPos - Position).
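For context, these parameters would be set from the host roughly as in the following MonoGame sketch; the effect variable and property names here are assumptions based on the shader above, not code from the actual project:

// Hypothetical host-side setup for the spotlight shadow pass.
spotLightEffect.Parameters["LightViewProjection"].SetValue(light.View * light.Projection);
spotLightEffect.Parameters["InverseViewProjection"].SetValue(Matrix.Invert(camera.View * camera.Projection));
spotLightEffect.Parameters["LightPosition"].SetValue(light.Position);
spotLightEffect.Parameters["LightFarPlane"].SetValue(light.FarPlane);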
I'd like to get rid of the artifact shown in the pictures so that I can adapt the code to point lights as well.
Could this be a problem with the way I retrieve the world position from screen space, or does my depth have too low a resolution?
Help is much appreciated!
--- Update ---
I changed my lighting shader to display the difference between the distance stored in the shadow map and the distance calculated on the spot in the pixel shader:
float4 PixelShaderFct(...) : SV_TARGET0
{
    // Get depth from the texture
    float4 Position = 1.0f;
    Position.xy = input.ScreenPosition.xy;
    Position.z = Depth;
    Position = mul(Position, InverseViewProjection);
    Position /= Position.w;

    float4 LightScreenPos = mul(Position, LightViewProjection);
    LightScreenPos /= LightScreenPos.w;

    // Calculate projected UV from the light's POV -> ScreenPos is in [-1;1] space
    float2 LUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);
    float lightZ = ShadowMap.Sample(SampleDot, LUV).r;
    float Attenuation = AttenuationMap.Sample(SampleType, LUV).r;
    float ShadowFactor = 1;

    // Linear depth model; lightZ stores length(LightPos - Pos) / LightFarPlane
    float closestDepth = lightZ * LightFarPlane;
    float currentDepth = length(LightPosition.xyz - Position.xyz) - DepthBias;

    return (closestDepth - currentDepth);
}
Since I am basically outputting length - (length - bias), one would expect an image with "DepthBias" as its uniform color. But that is not the result I'm getting here:
https://imgur.com/a/4PXLH7s
Based on this result, I'm assuming that either I've got precision issues (which I find weird, given that I'm working with near and far planes of [0.1, 50]), or something is wrong with the way I'm recovering the world position of a given pixel from my depth map.
I finally found the solution, and I'm posting it here in case someone stumbles across the same issue:
The tutorial I used was written for XNA / DX9, but since I'm targeting DX10+, one tiny change needs to be made:
In XNA / DX9, UV coordinates are not aligned with the actual pixels and need a half-pixel correction. That is what the trailing - float2(1.0f / GBufferTextureSize.xy) in float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy); was for. This is NOT needed in DX10 and above, and keeping it there produces exactly the issue I had.
Solution:
UV coordinates for a full-screen quad:
For XNA / DX9:
input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1) - float2(1.0f / GBufferTextureSize.xy);
For MonoGame / DX10+:
input.ScreenPosition.xy /= input.ScreenPosition.w;
float2 UV = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
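(The underlying cause: in Direct3D 9, pixel coordinates and texel centers were misaligned by half a pixel, so full-screen passes had to shift their UVs by half a texel to sample texel centers. Direct3D 10 aligned the two conventions, which turns the old correction itself into an error.)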

Converting a texture to a 1d array of float values using a compute shader

I have a fairly simple requirement for a compute shader (DirectCompute through Unity): I have a 128x128 texture, and I'd like to turn the red channel of that texture into a 1D array of floats. I need to do this very often, so just doing a CPU-side for loop over each texel won't cut it.
Initialization:
m_outputBuffer = new ComputeBuffer(m_renderTexture.width * m_renderTexture.height, sizeof(float));
m_kernelIndex = m_computeShader.FindKernel("CSMain");
Here is the C# method:
/// <summary>
/// This method converts the red channel of the given RenderTexture to a
/// one-dimensional array of floats of size width * height.
/// </summary>
private float[] ConvertToFloatArray(RenderTexture renderTexture)
{
    m_computeShader.SetTexture(m_kernelIndex, INPUT_TEXTURE, renderTexture);

    float[] result = new float[renderTexture.width * renderTexture.height];
    m_outputBuffer.SetData(result);
    m_computeShader.SetBuffer(m_kernelIndex, OUTPUT_BUFFER, m_outputBuffer);

    m_computeShader.Dispatch(m_kernelIndex, renderTexture.width / 8, renderTexture.height / 8, 1);

    m_outputBuffer.GetData(result);
    return result;
}
and the entire compute shader:
// Each #kernel tells which function to compile; you can have many kernels.
#pragma kernel CSMain

// Create a RenderTexture with the enableRandomWrite flag and set it
// with cs.SetTexture.
Texture2D<float4> InputTexture;
RWBuffer<float> OutputBuffer;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_GroupThreadID)
{
    OutputBuffer[id.x * id.y] = InputTexture[id.xy].r;
}
The C# method returns an array of the expected size, and it usually sort of corresponds to what I expect. However, even if my input texture is uniformly red, there will still be some zeroes.
I reconsidered and solved my own question. The answer was in two parts: I was combining the x and y coordinates strangely (id.x * id.y collapses many different texels onto the same index, and maps all of row 0 and column 0 to index 0), and I was using the wrong input semantic (SV_GroupThreadID, which only gives a thread's index within its own 8x8 group, instead of SV_DispatchThreadID).
So here's the solution. I also flipped the y axis to match my intuition.
// Each #kernel tells which function to compile; you can have many kernels.
#pragma kernel CSMain

// Create a RenderTexture with the enableRandomWrite flag and set it
// with cs.SetTexture.
Texture2D<float4> InputTexture;
RWBuffer<float> OutputBuffer;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    uint w, h;
    InputTexture.GetDimensions(w, h);
    // Row-major linear index; y is flipped so row 0 is the top of the texture.
    OutputBuffer[id.x + id.y * w] = InputTexture[uint2(id.x, h - 1 - id.y)].r;
}
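For what it's worth, the Dispatch(width / 8, height / 8, 1) call pairs with [numthreads(8, 8, 1)] so that exactly one thread runs per texel (16x16 groups of 8x8 threads for a 128x128 texture); textures whose dimensions are not multiples of 8 would need a ceiling division there plus a bounds check in the kernel. One caveat on the C# side: ComputeBuffer.GetData is a blocking readback, so the CPU stalls until the dispatch finishes (newer Unity versions offer AsyncGPUReadback to avoid this), and the buffer wraps a native resource that must be released explicitly. A minimal cleanup sketch, assuming the fields shown in the question:

// Hypothetical cleanup; ComputeBuffer wraps a native GPU resource and is
// not reclaimed by the garbage collector like an ordinary managed object.
void OnDestroy()
{
    if (m_outputBuffer != null)
    {
        m_outputBuffer.Release();
        m_outputBuffer = null;
    }
}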

SharpDX how to correct vertex positions after matrix rotation

I am computing 2D screen coordinates from a Vector3. I have to correct the positions to get the right results. I do seem to get correct positions this way, but I do not know how to correct the position when the model is rotated.
Here is the worldViewMatrix I build; these operations are not applied to my VertexData, so I try to correct the positions myself:
WorldViewMatrix = Matrix.Scaling(Scale) * Matrix.RotationX(Rotation.X) * Matrix.RotationY(Rotation.Y) * Matrix.RotationZ(Rotation.Z) * Matrix.Translation(Position.X, Position.Y, Position.Z) * viewProj;
I am trying to correct it like this:
public Vector2 Convert_3Dto2D(Vector3 position, Vector3 translation, Vector3 scale, Vector3 rotation, Matrix viewProj, RenderForm_EX form)
{
    position += translation;
    position = Vector3.Multiply(position, scale);
    // ROTATION ?
    var project = Vector3.Project(position, 0, 0, form.ClientSize.Width, form.ClientSize.Height, 0, 1, viewProj);
    Console.WriteLine(project.X + " " + project.Y);
    return new Vector2(project.X, project.Y);
}
What can I do to correct the rotated position?
If you can, post a little more information about what "correct positions" means. I will take a stab at this and assume you want to move your vertex into world space, then work out which pixel it occupies.
Usually you order the multiplication as
Translate * Rotate * Scale;
and if you want the view-projection to apply correctly, I believe it should be at the start: V * (T * R * S).
The following GameDev Stack Exchange link goes into this: matrix order
Also, Vector3.Project takes a matrix that should already be the full world-view-projection, and I don't see you multiplying the world transform into the viewProj you pass in your Convert_3Dto2D function.
Basically, apply a TRS matrix multiply to your original vertex, then multiply by your WVP matrix, then execute your projection. You will then get your screen-space pixel.
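A minimal sketch of that idea, assuming SharpDX's row-vector conventions (the same multiplication order used for WorldViewMatrix above); this is a hypothetical rewrite of Convert_3Dto2D, not the asker's code:

// Let Vector3.Project apply the full world-view-projection in one step,
// instead of translating/scaling/rotating the point by hand.
public Vector2 Convert_3Dto2D(Vector3 position, Matrix world, Matrix viewProj, RenderForm_EX form)
{
    // world = Matrix.Scaling(...) * Matrix.RotationX/Y/Z(...) * Matrix.Translation(...),
    // built exactly like the WorldViewMatrix in the question, minus viewProj.
    Matrix worldViewProj = world * viewProj;
    var projected = Vector3.Project(position, 0, 0,
        form.ClientSize.Width, form.ClientSize.Height, 0, 1, worldViewProj);
    return new Vector2(projected.X, projected.Y);
}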

GLSL/SFML - Only make portion of texture alpha

I'm trying to take a portion of the current texture and make it 50% transparent. I send in four values defining the rectangle I want to make transparent. It seems every time, however, that coord.x/coord.y are set to (0, 0), resulting in the entire image being transparent when I send in any rectangle that starts at (0, 0).
I'm still new to GLSL and am probably approaching this wrong. Any pointers on the correct approach would be greatly appreciated!
Values being sent in
sprite.Shader.SetParameter("texture", sprite.Texture);
sprite.Shader.SetParameter("x1", 0);
sprite.Shader.SetParameter("x2", 5);
sprite.Shader.SetParameter("y1", 0);
sprite.Shader.SetParameter("y2", sprite.Height - 1); // sprite.Height = 32
transparency.frag
uniform sampler2D texture;
uniform float x1;
uniform float x2;
uniform float y1;
uniform float y2;

void main() {
    vec2 coord = gl_TexCoord[0].xy;
    vec4 pixel_color = texture2D(texture, coord);
    if ((coord.x > x1) && (coord.x < x2) && (coord.y > y1) && (coord.y < y2))
    {
        pixel_color.a -= 0.5;
    }
    gl_FragColor = pixel_color;
}
Texture coordinates are not in pixels; they run from 0.0 to 1.0 across the texture.
gl_TexCoord[0].xy;
This gives you a vec2 with values ranging from 0.0 to 1.0, so you are checking normalized texture coordinates against pixel coordinates. To solve this, you can either scale your texture coordinates up to pixel units or normalize your pixel coordinates before sending them in.
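For instance, the normalization can happen on the C# side when the uniforms are set; a sketch based on the question's wrapper, assuming sprite.Texture.Size holds the texture's pixel dimensions:

// Divide the pixel rectangle by the texture size so the uniforms live in
// the same normalized [0, 1] space as gl_TexCoord.
float texW = sprite.Texture.Size.X;
float texH = sprite.Texture.Size.Y;
sprite.Shader.SetParameter("x1", 0f / texW);
sprite.Shader.SetParameter("x2", 5f / texW);
sprite.Shader.SetParameter("y1", 0f / texH);
sprite.Shader.SetParameter("y2", (sprite.Height - 1f) / texH);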

What's the logic behind creating a normal map from a texture?

I have looked on Google, but the only thing I could find was a tutorial on how to create one using Photoshop. No interest! I need the logic behind it.
(And I don't need the logic of how to use a bump map; I want to know how to make one!)
I am writing my own HLSL shader and have come as far as to realize that there is some kind of gradient between two pixels which will show their normal, so that with the position of the light the surface can be lit accordingly.
I want to do this in real time, so that when the texture changes, the bump map does too.
Thanks
I realize that I'm way, WAY late to this party, but I too ran into the same situation recently while attempting to write my own normal map generator for 3ds Max. There are bulky and unnecessary libraries for C#, but nothing in the way of a simple, math-based solution.
So I went with the math behind the conversion: the Sobel operator. That's what you're looking to employ in a shader script.
The following class is about the simplest implementation I've seen for C#. It does exactly what it's supposed to do and achieves exactly what is desired: a normal map based on a heightmap, a texture, or even a programmatically generated procedural image that you provide.
As you can see in the code, I've implemented if/else branches to mitigate exceptions thrown at the edge-detection width and height limits.
What it does: it samples the HSB brightness of each pixel and its adjoining pixels to determine the scale of the output hue/saturation values, which are subsequently converted to RGB for the SetPixel operation.
As an aside: you could implement an input control to scale the intensity of the output hue/saturation values, scaling the effect the output normal map has on your geometry/lighting.
And that's it. No more having to deal with that deprecated, tiny-windowed Photoshop plugin. Sky's the limit.
[Screenshot of the C# WinForms implementation (source / output)]
C# Class to achieve a Sobel-based normal map from source image:
using System.Drawing;
using System.Windows.Forms;

namespace heightmap.Class
{
    class Normal
    {
        public void calculate(Bitmap image, PictureBox pic_normal)
        {
            #region Global Variables
            int w = image.Width - 1;
            int h = image.Height - 1;
            float sample_l;
            float sample_r;
            float sample_u;
            float sample_d;
            float x_vector;
            float y_vector;
            Bitmap normal = new Bitmap(image.Width, image.Height);
            #endregion

            for (int y = 0; y < h + 1; y++)
            {
                for (int x = 0; x < w + 1; x++)
                {
                    // Sample the brightness of the four neighbours, clamping at the
                    // image borders to avoid out-of-range exceptions.
                    if (x > 0) { sample_l = image.GetPixel(x - 1, y).GetBrightness(); }
                    else { sample_l = image.GetPixel(x, y).GetBrightness(); }
                    if (x < w) { sample_r = image.GetPixel(x + 1, y).GetBrightness(); }
                    else { sample_r = image.GetPixel(x, y).GetBrightness(); }
                    if (y > 0) { sample_u = image.GetPixel(x, y - 1).GetBrightness(); }
                    else { sample_u = image.GetPixel(x, y).GetBrightness(); }
                    if (y < h) { sample_d = image.GetPixel(x, y + 1).GetBrightness(); }
                    else { sample_d = image.GetPixel(x, y).GetBrightness(); }

                    // Map the horizontal and vertical gradients from [-1, 1] to [0, 255].
                    x_vector = (((sample_l - sample_r) + 1) * .5f) * 255;
                    y_vector = (((sample_u - sample_d) + 1) * .5f) * 255;

                    Color col = Color.FromArgb(255, (int)x_vector, (int)y_vector, 255);
                    normal.SetPixel(x, y, col);
                }
            }

            pic_normal.Image = normal; // set as PictureBox image
        }
    }
}
A sampler to read your height or depth map:
/// Same data as HeightMap, but in a format that the pixel shader can read;
/// the pixel shader dynamically generates the surface normals from this.
extern Texture2D HeightMap;
sampler2D HeightSampler = sampler_state
{
    Texture = (HeightMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    Filter = LINEAR;
};
Note that my input map is a 512x512 single-component grayscale texture. Calculating the normals from that is pretty simple:
#define HALF2 ((float2)0.5)
#define GET_HEIGHT(heightSampler, texCoord) (tex2D(heightSampler, (texCoord) + HALF2))

/// Calculate a normal for the given location from the height map.
/// Basically, this calculates the X- and Z- surface derivatives and returns their
/// cross product. Note that this assumes the heightmap is a 512-pixel square for no
/// particular reason other than that my test map is 512x512.
float3 GetNormal(sampler2D heightSampler, float2 texCoord)
{
    /// Normalized size of one texel. This would be 1/1024.0 if using a 1024x1024 bitmap.
    float texelSize = 1 / 512.0;

    float n = GET_HEIGHT(heightSampler, texCoord + float2(0, -texelSize));
    float s = GET_HEIGHT(heightSampler, texCoord + float2(0, texelSize));
    float e = GET_HEIGHT(heightSampler, texCoord + float2(-texelSize, 0));
    float w = GET_HEIGHT(heightSampler, texCoord + float2(texelSize, 0));

    float3 ew = normalize(float3(2 * texelSize, e - w, 0));
    float3 ns = normalize(float3(0, s - n, 2 * texelSize));
    float3 result = cross(ew, ns);
    return result;
}
and a pixel shader to call it:
#define LIGHT_POSITION (float3(0, 2, 0))

float4 SolidPS(float3 worldPosition : NORMAL0, float2 texCoord : TEXCOORD0) : COLOR0
{
    /// Calculate a normal from the height map.
    float3 normal = GetNormal(HeightSampler, texCoord);
    /// Return it as a color. (Since the normal components can range from -1 to +1,
    /// this will return a lot of "black" pixels if rendered as-is to screen.)
    return float4(normal, 1);
}
LIGHT_POSITION could (and probably should) be input from your host code, though I've cheated and used a constant here.
Note that this method requires 4 texture lookups per normal, not counting the one to get the color. That may not be an issue for you (depending on whatever else you're doing). If it becomes too much of a performance hit, you can instead recompute the normals only when the texture changes: render them to a target and capture the result as a normal map.
An alternative would be to draw a screen-aligned quad textured with the heightmap to a render target and use the ddx/ddy HLSL intrinsics to generate the normals without having to resample the source texture. Obviously you'd do this in a pre-pass step, read the resulting normal map back, and then use it as an input to your later stages.
In any case, this has proved fast enough for me.
The short answer is: there's no way to do this reliably that produces good results, because there's no way to tell the difference between a diffuse texture whose changes in colour/brightness are due to bumpiness and one whose changes in colour/brightness exist because the surface is actually a different colour/brightness at that point.
Longer answer:
If you were to assume that the surface were actually a constant colour, then any changes in colour or brightness must be due to shading effects due to bumpiness. Calculate how much brighter/darker each pixel is from the actual surface colour; brighter values indicate parts of the surface that face 'towards' the light source, and darker values indicate parts of the surface that face 'away' from the light source. If you also specify the direction the light is coming from, you can calculate a surface normal at each point on the texture such that it would result in the shading value you calculated.
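In equation form: under a simple Lambertian model, the observed brightness is roughly albedo * max(0, dot(N, L)). With the albedo assumed constant and the light direction L given, each pixel's brightness constrains N to a cone around L, and consistency with neighbouring pixels (plus the unit-length requirement) is what pins the normal down.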
That's the basic theory. Of course, in reality, the surface is almost never a constant colour, which is why this approach of using purely the diffuse texture as input tends not to work very well. I'm not sure how things like CrazyBump do it but I think they're doing things like averaging the colour over local parts of the image rather than the whole texture.
Ordinarily, normal maps are created from actual 3D models of the surface that are 'projected' onto lower-resolution geometry. Normal maps are just a technique for faking that high-resolution geometry, after all.
Quick answer: it's not possible.
A simple generic (diffuse) texture simply does not contain this information. I haven't looked at exactly how Photoshop does it (I've seen it used by an artist once), but I think it simply does something like depth = r + g + b + a, which basically yields a heightmap/gradient, and then converts the heightmap to a normal map using a simple edge-detect effect to get a tangent-space normal map.
Just keep in mind that in most cases you use a normal map to simulate high-resolution 3D geometry, as it fills in the blank spots vertex normals leave behind. If your scene relies heavily on lighting, this is a no-go, but if it's a simple directional light, this 'might' work.
Of course, this is just my experience; you might just as well be working on a completely different type of project.
