This is my first time here (with an account). I'm looking to make a height-map editor with XNA 4.0 (somewhat similar to Earth2150's, if you've played it).
I've written a custom Effect File here: http://pastebin.com/CUFtB8Z9
It blends textures just fine, except it blends over the entire map.
What I really want is to be able to have multiple textures on my heightmap (which I'll then blend with the nearest other texture), and I am looking for ways to do this.
I thought about assigning a float in my Vertex Declaration, then using an array of textures to "assign" a texture to a specific vertex. But how would I go about getting my effect file to take in a different texture value for each vertex?
Sorry about not being very clear; here are my Draw code and my Vertex Declaration:
(Excuse the random number changing; it was my attempt to get each vertex to pick a random texture.)
public void Draw(Texture2D[] TextureArray)
{
RasterizerState rs = new RasterizerState();
rs.CullMode = CullMode.None;
//rs.FillMode = FillMode.WireFrame;
EditGame.Instance.GraphicsDevice.RasterizerState = rs;
Random rnd = new Random();
foreach (EffectPass pass in EditGame.Instance.baseEffect.CurrentTechnique.Passes)
{
if (SlowCounter == 60)
{
EditGame.Instance.baseEffect.Parameters["xTexture"].SetValue(TextureArray[rnd.Next(0, 2)]);
EditGame.Instance.baseEffect.Parameters["bTexture"].SetValue(TextureArray[rnd.Next(0, 2)]);
SlowCounter = 0;
}
pass.Apply();
EditGame.Instance.GraphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices, 0, vertices.Length, indices, 0, indices.Length / 3, VP2TC.VertexDeclaration);
}
SlowCounter++;
}
public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
new VertexElement(12, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate,0),
new VertexElement(20, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate,1),
new VertexElement(28, VertexElementFormat.Single, VertexElementUsage.BlendWeight,0),
new VertexElement(32, VertexElementFormat.Vector3,VertexElementUsage.Normal,0),
new VertexElement(44, VertexElementFormat.Color,VertexElementUsage.Color,0)
);
As I said in my comment, I'm not certain this is what you're looking for but I'll go ahead anyway.
I think what you probably want is described here.
Essentially you add a Vector4 to each vertex which stores the weight of each texture, then take a weighted average of all 4 texture samples, using the individual elements of the vector as the 4 blend weights.
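As a CPU-side illustration of that weighted average (the shader does the same per pixel), here is a minimal C# sketch; the method name and the four sample colours are placeholders, and it assumes XNA's Color/Vector4 types:
// Minimal sketch: blend four texture samples by a Vector4 of per-vertex weights.
Color BlendFour(Color snow, Color grass, Color rock, Color sand, Vector4 weights)
{
    float total = weights.X + weights.Y + weights.Z + weights.W; // normalise so the weights sum to 1
    Vector4 result = snow.ToVector4() * (weights.X / total)
                   + grass.ToVector4() * (weights.Y / total)
                   + rock.ToVector4() * (weights.Z / total)
                   + sand.ToVector4() * (weights.W / total);
    return new Color(result);
}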
If you want to blend textures without having to have a blend element for every single texture, things get more fun.
You could have a single blend weight, which essentially picks the blending of 2 adjacent textures in order. So if you have:
Snow
Grass
Rock
Sand
Blend Weight = 0.5
Would pick a blend of Grass and Rock in equal amounts (since 0.5 falls exactly halfway between them).
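If it helps, here is a rough C# sketch of that selection logic (illustrative only, not from the original answer); 'textures' is the ordered set above, 'blendWeight' is the single per-vertex weight, and XNA's Texture2D is assumed:
// Map a single blend weight in [0,1] onto two adjacent textures plus a mix factor.
void PickAdjacentPair(Texture2D[] textures, float blendWeight,
                      out Texture2D a, out Texture2D b, out float mix)
{
    float scaled = blendWeight * (textures.Length - 1);     // 0.5 * 3 = 1.5 for the four textures above
    int index = Math.Min((int)scaled, textures.Length - 2); // 1 -> Grass
    a = textures[index];
    b = textures[index + 1];                                 // Rock
    mix = scaled - index;                                    // 0.5 -> equal parts of each
}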
If you want lots of textures, your shader is going to become very cumbersome with ~50 texture samplers. If you really want this many textures you should consider a texture atlas or just procedurally generated virtual textures with the blending already done at generation time.
var random = new Random();
Canvas.SetLeft(rectangle, random.Next((int)(ImageCanvas.Width - 100)));
Canvas.SetTop(rectangle, random.Next((int)(ImageCanvas.Height - 100)));
return rectangle;
So the above code just randomly sets the Top and Left positions of a rectangle that will appear on the canvas. I can easily reuse this code if I want multiple rectangles to appear on the screen; however, what I was having trouble with is tweaking the code so that the rectangles never overlap each other.
I thought of maybe doing a while loop that keeps running random.Next((int)(ImageCanvas.Height - 100)) continuously until it is not equal to the previous random value, but that isn't perfect. The shapes are quite big, so having slightly different X or Y coordinates doesn't prevent an overlap. They would somehow need to be at least 50 pixels apart, or something like that, to prevent any overlap with the other rectangles.
Assuming your Canvas is reasonably large, i.e. the rectangles will not occupy a large amount of the area, it most likely suffices to simply generate rectangles at random (as in your example code), and then check to make sure they don't overlap with any of the previously selected rectangles.
Note that "overlaps with another rectangle" is really the same as "has a non-empty intersection with another rectangle". And .NET provides that functionality; for WPF, you should use the System.Windows.Rect struct. It even has an IntersectsWith() method, giving the information you need in a single call (otherwise you'd have to get the intersection as one step, and then check to see if the result is empty in a second step).
The whole thing might look something like this:
List<Rectangle> GenerateRectangles(Canvas canvas, int count, Size size)
{
Random random = new Random();
List<Rect> rectangles = new List<Rect>(count);
while (count-- > 0)
{
Rect rect;
do
{
rect = new Rect(random.Next((int)(canvas.Width - size.Width)),
random.Next((int)(canvas.Height - size.Height)),
size.Width, size.Height);
} while (rectangles.Any(r => r.IntersectsWith(rect)));
rectangles.Add(rect);
}
return rectangles.Select(r =>
{
Rectangle rectangle = new Rectangle();
rectangle.Width = r.Width;
rectangle.Height = r.Height;
Canvas.SetLeft(rectangle, r.Left);
Canvas.SetTop(rectangle, r.Top);
return rectangle;
}).ToList();
}
You would want something more sophisticated if you were dealing with a more constrained area and/or a larger number of rectangles. The above won't scale well for large numbers of rectangles, especially if the probability of a collision is high. But for your stated goals, it should work fine.
I have decided to have a go at making a dungeon crawler game with the XNA framework. I am a computer science student and am quite familiar with C# and the .NET Framework. I have some questions about different parts of the development of my engine.
Loading Maps
I have a Tile class that stores the Vector2 position, Texture2D and dimensions of the tile. I have another class called TileMap that has a list of tiles indexed by position. I am reading from a text file in the number format above; each number maps to an index in the tile list, and I create a new tile with the correct texture and position, storing it into another list of tiles.
public List<Tile> tiles = new List<Tile>(); // List of tiles that I have added to the game
public List<TileRow> testTiles = new List<TileRow>(); // TileRow contains a list of tiles along the x axis, along with their Vector2 positions.
Reading and storing the map tiles.
using (StreamReader stream = new StreamReader("TextFile1.txt"))
{
while (stream.EndOfStream != true)
{
line = stream.ReadLine().Trim(' ');
lineArray = line.Split(' ');
TileRow tileRow = new TileRow();
for (int x = 0; x < lineArray.Length; x++)
{
tileXCo = x * tiles[int.Parse(lineArray[x])].width;
tileYCo = yCo * tiles[int.Parse(lineArray[x])].height;
tileRow.tileList.Add(new Tile(tiles[int.Parse(lineArray[x])].titleTexture, new Vector2(tileXCo,tileYCo)));
}
testTiles.Add(tileRow);
yCo++;
}
}
For drawing the map.
public void Draw(SpriteBatch spriteBatch, GameTime gameTime)
{
foreach (TileRow tes in testTiles)
{
foreach (Tile t in tes.tileList)
{
spriteBatch.Draw(t.titleTexture, t.position, Color.White);
}
}
}
Questions:
Is this the correct way I should be doing it, or should I just be storing a list referencing my tiles list?
How would I deal with Multi Layered Maps?
Collision Detection
At the moment I have a method that loops through every tile stored in my testTiles list, checks whether its dimensions intersect the player's dimensions, and then returns a list of all the tiles that do. I have a derived class of my Tile class called CollisionTile that triggers a collision when the player and that rectangle intersect. (public class CollisionTile : Tile)
public List<Tile> playerArrayPosition(TileMap tileMap)
{
List<Tile> list = new List<Tile>();
foreach (TileRow test in tileMap.testTiles)
{
foreach (Tile t in test.tileList)
{
Rectangle rectangle = new Rectangle((int)tempPosition.X, (int)tempPosition.Y, (int)playerImage.Width / 4, (int)playerImage.Height / 4);
Rectangle rectangle2 = new Rectangle((int)t.position.X, (int)t.position.Y, t.width, t.height);
if (rectangle.Intersects(rectangle2))
{
list.Add(t);
}
}
}
return list;
}
Yeah, I am pretty sure this is not the right way to check for tile collision. Any help would be great.
Sorry for the long post, any help would be much appreciated.
You are right. This is a very inefficient way to draw and check for collision on your tiles. What you should be looking into is a Quadtree data structure.
A quadtree will store your tiles in a manner that will allow you to query your world using a Rectangle, and your quadtree will return all tiles that are contained inside of that Rectangle.
List<Tile> tiles = Quadtree.GetObjects(rectangle);
This allows you to select only the tiles that need to be processed. For example, when drawing your tiles, you could specify a Rectangle the size of your viewport, and only those tiles would be drawn (culling).
Another example, is you can query the world with your player's Rectangle and only check for collisions on the tiles that are returned for that portion of your world.
For loading your tiles, you may want to consider loading into a two dimensional array, instead of a List. This would allow you to fetch a tile based on its position, instead of cross referencing it between two lists.
Tile[,] tiles = new Tile[width, height];
Tile tile = tiles[x,y];
Also, in this case, an array data structure would be a lot more efficient than using a List.
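As a rough sketch (not your code, and assuming the same space-separated text format and your existing Tile members), loading straight into a two dimensional array could look something like this:
// 'tilePrototypes' stands in for your existing 'tiles' list of prototype tiles.
string[] lines = File.ReadAllLines("TextFile1.txt");
int mapHeight = lines.Length;
int mapWidth = lines[0].Trim().Split(' ').Length;
Tile[,] map = new Tile[mapWidth, mapHeight];
for (int y = 0; y < mapHeight; y++)
{
    string[] cells = lines[y].Trim().Split(' ');
    for (int x = 0; x < mapWidth; x++)
    {
        Tile prototype = tilePrototypes[int.Parse(cells[x])];
        map[x, y] = new Tile(prototype.titleTexture,
            new Vector2(x * prototype.width, y * prototype.height));
    }
}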
For uniform sets of tiles with standard widths and heights, it is quite easy to calculate which tiles are visible on the screen, and to determine which tile(s) your character is overlapping with. Even though I wrote the QuadTree in Jon's answer, I think it's overkill for this. Generally, the formula is:
tileX = someXCoordinate / tileWidth;
tileY = someYCoordinate / tileHeight;
Then you can just look that up in a 2D array tiles[tileX, tileY]. For drawing, this can be used to figure out which tile is in the upper left corner of the screen, then either do the same again for the bottom right (+1), or add tiles to the upper left to fill the screen. Then your loop will look more like:
leftmostTile = screenX / tileWidth; // screenX is the left edge of the screen in world coords
topmostTile = screenY / tileHeight;
rightmostTile = (screenX + screenWidth) / tileWidth;
bottommostTile = (screenY + screenHeight) / tileHeight;
for(int tileX = leftmostTile; tileX <= rightmostTile; tileX++)
{
for(int tileY = topmostTile; tileY <= bottommostTile; tileY++)
{
Tile t = tiles[tileX, tileY];
// ... more stuff
}
}
The same simple formula can be used to quickly figure out which tile(s) are under rectangular areas.
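For example, a hedged sketch of finding the tiles under the player's bounding rectangle (assuming world coordinates, the 2D array above, and the CollisionTile class from the question; HandleCollision is a placeholder for your own response):
int firstTileX = playerRect.Left / tileWidth;
int lastTileX = (playerRect.Right - 1) / tileWidth;
int firstTileY = playerRect.Top / tileHeight;
int lastTileY = (playerRect.Bottom - 1) / tileHeight;
for (int tx = firstTileX; tx <= lastTileX; tx++)
{
    for (int ty = firstTileY; ty <= lastTileY; ty++)
    {
        if (tiles[tx, ty] is CollisionTile)
            HandleCollision(tiles[tx, ty]); // placeholder for your collision response
    }
}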
If, however, your tiles are non-uniform, or you have an isometric view, or you want the additional functionality that a QuadTree provides, I would consider Jon's answer and make use of a QuadTree. I would try to keep tiles out of the QuadTree if you can, though.
Given a CGImage or UIImage, how can I apply a custom color look-up table (aka LUT, CLUT, Color Map)? That is, how can I map the colors in the image to new colors, given a mapping?
I will describe three approaches that you can take.
Do it manually
Use a CIFilter (available in iOS 5)
Use a shader (GPU program)
Manual Approach
First, get the raw image data from the UIImage. You may do this by creating a byte array of the appropriate size (width * height * components), then drawing into it with CGBitmapContext. Something like this:
using (var colorSpace = CGColorSpace.CreateDeviceRGB())
using (var context = new CGBitmapContext(
bytes, width, height, bitsPerComponent, bytesPerRow,
colorSpace, CGBitmapFlags.ByteOrder32Big | CGBitmapFlags.PremultipliedLast))
{
var drawRect = new RectangleF(-rectangle.X, -rectangle.Y, image.CGImage.Width, image.CGImage.Height);
context.ClipToRect(new RectangleF(0, 0, width, height));
context.DrawImage(drawRect, image.CGImage);
}
Then create an array of bytes for your output image (probably the same size). Iterate over the input image looking up color values in your Look-Up Table and writing them to the output image.
You may convert the output bytes to an image by constructing a CGDataProvider from the bytes, then a CGImage from that, and then a UIImage from the CGImage.
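For the look-up step itself, a minimal sketch (assuming the RGBA byte buffer filled above and one 256-entry table per channel; the table names are placeholders for however your mapping is specified):
byte[] output = new byte[bytes.Length];
for (int i = 0; i < bytes.Length; i += 4)
{
    output[i + 0] = redTable[bytes[i + 0]];   // remap red
    output[i + 1] = greenTable[bytes[i + 1]]; // remap green
    output[i + 2] = blueTable[bytes[i + 2]];  // remap blue
    output[i + 3] = bytes[i + 3];             // keep alpha unchanged
}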
CIFilter Approach
As of iOS 5, Apple provides many built-in image operations. Generally, these are easy to use and faster than doing it manually. However, depending on how your color look-up table is specified, you might not find a perfect fit.
Given a CIFilter, you may set the inputImage, then retrieve the output from the OutputImage property. See the documentation for a list of filters in the CICategoryColorAdjustment and CICategoryColorEffect categories. As of this writing, I would suggest looking at CIToneCurve, CIFalseColor, CIColorMap and CIColorCube. Sadly, at the time of writing, CIColorMap is not available on iOS.
If you are doing scientific imaging, and you only use a linear gradient between two colors, I suggest looking at CIFalseColor.
Here is an example of populating a CIColorCube with a random color look-up function. Note that CIFilters may be created dynamically by name (not type-safe) or in a strongly-typed way. If you know what filter you want to use at code-time, I suggest using the strongly-typed filter (CIColorCube rather than CIFilter.FromName("CIColorCube")). I am using the dynamic approach in the following example, even though it is the more confusing of the two.
static void PopulateColorCubeFilter(CIFilter filter)
{
if (filter.Name != "CIColorCube")
return;
int dimension = 64; // Must be power of 2, max of 128 (max of 64 on ios)
int cubeDataSize = 4 * dimension * dimension * dimension;
filter[new NSString("inputCubeDimension")] = new NSNumber(dimension);
// 2 : 32 /4 = 8 = 2^3
// 4 : 256 /4 = 64 = 4^3
// 8 : 2048 /4 = 512 = 8^3
var cubeData = new byte[cubeDataSize];
var rnd = new Random();
rnd.NextBytes(cubeData);
for (int i = 3; i < cubeDataSize; i += 4)
cubeData[i] = 255;
filter[new NSString("inputCubeData")] = NSData.FromArray(cubeData);
}
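For reference, the two creation styles mentioned above look something like this (exact availability of the strongly-typed class depends on your MonoTouch version):
CIFilter byName = CIFilter.FromName("CIColorCube"); // dynamic, not type-safe
var typed = new CIColorCube();                      // strongly typed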
GPU Shader Approach
Finally, the most general-purpose high-performance approach that remains correct under magnification would be to do the color mapping on the GPU. This is more effort than the first two approaches, so you need to decide if it is worth it.
Load one texture map (aka Sampler2D) with your input image
Load a second texture map with your color map (practically it is a 1D Texture, but with OpenGL ES probably needs to be loaded as a 2D Texture)
Apply the two texture maps and a shader to a quad
In the shader use the texture coordinates to look up the color in the first texture, then use the value in the first texture to look up in the second texture. That is your output color.
I am trying to extract out 3D distance in mm between two known points in a 2D image. I am using square AR markers in order to get the camera coordinates relative to the markers in the scene. The points are the corners of these markers.
An example is shown below:
The code is written in C# and I am using XNA. I am using AForge.NET for the CoPlanar POSIT.
The steps I take in order to work out the distance:
1. Mark corners on screen. Corners are represented in 2D vector form; the image centre is (0,0). Up is positive in the Y direction, right is positive in the X direction.
2. Use AForge.net Co-Planar POSIT algorithm to get pose of each marker:
float focalLength = 640; //Needed for POSIT
float halfCornerSize = 50; //Represents 1/2 an edge i.e. 50mm
AVector3[] modelPoints = new AVector3[]
{
new AVector3( -halfCornerSize, 0, halfCornerSize ),
new AVector3( halfCornerSize, 0, halfCornerSize ),
new AVector3( halfCornerSize, 0, -halfCornerSize ),
new AVector3( -halfCornerSize, 0, -halfCornerSize ),
};
CoplanarPosit coPosit = new CoplanarPosit(modelPoints, focalLength);
coPosit.EstimatePose(cornersToEstimate, out marker1Rot, out marker1Trans);
3. Convert to XNA rotation/translation matrix (AForge uses OpenGL matrix form):
float yaw, pitch, roll;
marker1Rot.ExtractYawPitchRoll(out yaw, out pitch, out roll);
Matrix xnaRot = Matrix.CreateFromYawPitchRoll(-yaw, -pitch, roll);
Matrix xnaTranslation = Matrix.CreateTranslation(marker1Trans.X, marker1Trans.Y, -marker1Trans.Z);
Matrix transform = xnaRot * xnaTranslation;
4. Find 3D coordinates of the corners:
//Model corner points
cornerModel = new Vector3[]
{
new Vector3(halfCornerSize,0,-halfCornerSize),
new Vector3(-halfCornerSize,0,-halfCornerSize),
new Vector3(halfCornerSize,0,halfCornerSize),
new Vector3(-halfCornerSize,0,halfCornerSize)
};
Matrix markerTransform = Matrix.CreateTranslation(cornerModel[i].X, cornerModel[i].Y, cornerModel[i].Z);
cornerPositions3d1[i] = (markerTransform * transform).Translation;
//DEBUG: project corner onto screen - represented by brown dots
Vector3 t3 = viewPort.Project(markerTransform.Translation, projectionMatrix, viewMatrix, transform);
cornersProjected1[i].X = t3.X; cornersProjected1[i].Y = t3.Y;
5. Look at the 3D distance between two corners on a marker, this represents 100mm. Find the scaling factor needed to convert this 3D distance to 100mm. (I actually get the average scaling factor):
for (int i = 0; i < 4; i++)
{
//Distance scale;
distanceScale1 += (halfCornerSize * 2) / Vector3.Distance(cornerPositions3d1[i], cornerPositions3d1[(i + 1) % 4]);
}
distanceScale1 /= 4;
6. Finally I find the 3D distance between related corners and multiply by the scaling factor to get distance in mm:
for(int i = 0; i < 4; i++)
{
distance[i] = Vector3.Distance(cornerPositions3d1[i], cornerPositions3d2[i]) * scalingFactor;
}
The distances acquired are never truly correct. I used the cutting board as it allowed me easy calculation of what the distances should be. The above image calculated a distance of 147mm (expected 150mm) for corner 1 (red to purple). The image below shows 188mm (expected 200mm).
What is also worrying is the fact that when measuring the distance between marker corners sharing an edge on the same marker, the 3D distances obtained are never the same. Another thing I noticed is that the brown dots never seem to exactly match up with the colored dots. The colored dots are the coordinates used as input to the CoPlanar POSIT. The brown dots are the positions calculated from the center of the marker via POSIT.
Does anyone have any idea what might be wrong here? I am pulling out my hair trying to figure it out. The code should be quite simple, I don't think I have made any obvious mistakes with the code. I am not great at maths so please point out where my basic maths might be wrong as well...
You are using way too many black boxes in your question. What is the focal length in the second step? Why go through yaw/pitch/roll in step 3? How do you calibrate? I recommend starting over from scratch without using libraries that you do not understand.
Step 1: Create a camera model. Understand the errors, build a projection. If needed, apply a 2D filter for lens distortion. This might be hard.
Step 2: Find your markers in 2D, after removing lens distortion. Make sure you know the error and that you get the center. Maybe over multiple frames.
Step 3: Un-project to 3D. After 1 and 2 this should be easy (a minimal sketch follows this list).
Step 4: ???
Step 5: Profit! (Measure distance in 3d and know your error)
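As a hedged sketch of step 3 under a simple pinhole model (not the poster's code): given image coordinates (u, v) measured from the image centre, the focal length in pixels, and a known depth z (e.g. taken from the marker pose), the camera-space point is:
Vector3 Unproject(float u, float v, float focalLengthPixels, float z)
{
    // Pinhole model: u = f * X / Z and v = f * Y / Z, so invert for X and Y.
    float x = u * z / focalLengthPixels;
    float y = v * z / focalLengthPixels;
    return new Vector3(x, y, z);
}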
I think you need a stereo pair (two photos taken a set distance apart) so you can get the distance from the parallax between the images.
I have looked on Google, but the only thing I could find was a tutorial on how to create one using Photoshop. Not interested! I need the logic behind it.
(And I don't need the logic of how to 'use' a bump map; I want to know how to 'make' one!)
I am writing my own HLSL shader and have come as far as realizing that there is some kind of gradient between two pixels which gives the normal - and thus, with the position of the light, the surface can be lit accordingly.
I want to do this real time so that when the texture changes, the bumpmap does too.
Thanks
I realize that I'm way, WAY late to this party, but I, too, ran into the same situation recently while attempting to write my own normal map generator for 3ds Max. There are bulky and unnecessary libraries for C#, but nothing in the way of a simple, math-based solution.
So I ran with the math behind the conversion: the Sobel Operator. That's what you're looking to employ in the shader script.
The following class is about the simplest implementation I've seen for C#. It does exactly what it's supposed to do and achieves exactly what is desired: a normal map based on a heightmap, a texture, or even a programmatically generated procedural image that you provide.
As you can see in the code, I've implemented if/else checks to avoid out-of-range exceptions at the image's width and height limits.
What it does: samples the HSB Brightness of each pixel / adjoining pixel to determine the scale of the output Hue / Saturation values that are subsequently converted to RGB for the SetPixel operation.
As an aside: you could implement an input control to scale the intensity of the output Hue / Saturation values to scale the subsequent affect that the output normal map will provide your geometry / lighting.
And that's it. No more having to deal with that deprecated, tiny-windowed Photoshop plugin. Sky's the limit.
Screenshot of C# winforms implementation (source / output):
C# Class to achieve a Sobel-based normal map from source image:
using System.Drawing;
using System.Windows.Forms;
namespace heightmap.Class
{
class Normal
{
public void calculate(Bitmap image, PictureBox pic_normal)
{
// (The source Bitmap is passed in as 'image'; alternatively, load it from disk with Bitmap.FromFile.)
#region Global Variables
int w = image.Width - 1;
int h = image.Height - 1;
float sample_l;
float sample_r;
float sample_u;
float sample_d;
float x_vector;
float y_vector;
Bitmap normal = new Bitmap(image.Width, image.Height);
#endregion
for (int y = 0; y < h + 1; y++)
{
for (int x = 0; x < w + 1; x++)
{
if (x > 0) { sample_l = image.GetPixel(x - 1, y).GetBrightness(); }
else { sample_l = image.GetPixel(x, y).GetBrightness(); }
if (x < w) { sample_r = image.GetPixel(x + 1, y).GetBrightness(); }
else { sample_r = image.GetPixel(x, y).GetBrightness(); }
if (y > 0) { sample_u = image.GetPixel(x, y - 1).GetBrightness(); }
else { sample_u = image.GetPixel(x, y).GetBrightness(); }
if (y < h) { sample_d = image.GetPixel(x, y + 1).GetBrightness(); }
else { sample_d = image.GetPixel(x, y).GetBrightness(); }
x_vector = (((sample_l - sample_r) + 1) * .5f) * 255;
y_vector = (((sample_u - sample_d) + 1) * .5f) * 255;
Color col = Color.FromArgb(255, (int)x_vector, (int)y_vector, 255);
normal.SetPixel(x, y, col);
}
}
pic_normal.Image = normal; // set as PictureBox image
}
}
}
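Hypothetical usage from a WinForms app (the file path and picture box name are placeholders):
var source = (Bitmap)Image.FromFile(@"C:\maps\height.png"); // any heightmap or diffuse texture
new heightmap.Class.Normal().calculate(source, pictureBox1); // pictureBox1 displays the generated normal map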
A sampler to read your height or depth map.
/// same data as HeightMap, but in a format that the pixel shader can read
/// the pixel shader dynamically generates the surface normals from this.
extern Texture2D HeightMap;
sampler2D HeightSampler = sampler_state
{
Texture=(HeightMap);
AddressU=CLAMP;
AddressV=CLAMP;
Filter=LINEAR;
};
Note that my input map is a 512x512 single-component grayscale texture. Calculating the normals from that is pretty simple:
#define HALF2 ((float2)0.5)
#define GET_HEIGHT(heightSampler,texCoord) (tex2D(heightSampler,texCoord+HALF2))
///calculate a normal for the given location from the height map
/// basically, this calculates the X- and Z- surface derivatives and returns their
/// cross product. Note that this assumes the heightmap is a 512 pixel square for no particular
/// reason other than that my test map is 512x512.
float3 GetNormal(sampler2D heightSampler, float2 texCoord)
{
/// normalized size of one texel. this would be 1/1024.0 if using 1024x1024 bitmap.
float texelSize=1/512.0;
float n = GET_HEIGHT(heightSampler,texCoord+float2(0,-texelSize));
float s = GET_HEIGHT(heightSampler,texCoord+float2(0,texelSize));
float e = GET_HEIGHT(heightSampler,texCoord+float2(-texelSize,0));
float w = GET_HEIGHT(heightSampler,texCoord+float2(texelSize,0));
float3 ew = normalize(float3(2*texelSize,e-w,0));
float3 ns = normalize(float3(0,s-n,2*texelSize));
float3 result = cross(ew,ns);
return result;
}
and a pixel shader to call it:
#define LIGHT_POSITION (float3(0,2,0))
float4 SolidPS(float3 worldPosition : NORMAL0, float2 texCoord : TEXCOORD0) : COLOR0
{
/// calculate a normal from the height map
float3 normal = GetNormal(HeightSampler,texCoord);
/// return it as a color. (Since the normal components can range from -1 to +1, this
/// will probably return a lot of "black" pixels if rendered as-is to screen.)
return float4(normal, 1);
}
LIGHT_POSITION could (and probably should) be input from your host code, though I've cheated and used a constant here.
Note that this method requires 4 texture lookups per normal, not counting the one to get the color. That may not be an issue for you (depending on whatever else you're doing). If it becomes too much of a performance hit, you can instead run it only when the texture changes: render to a target and capture the result as a normal map.
An alternative would be to draw a screen-aligned quad textured with the heightmap to a render target and use the ddx/ddy HLSL intrinsics to generate the normals without having to resample the source texture. Obviously you'd do this in a pre-pass step, read the resulting normal map back, and then use it as an input to your later stages.
In any case, this has proved fast enough for me.
The short answer is: there's no way to do this reliably that produces good results, because there's no way to tell the difference between a diffuse texture that has changes in color/brightness due to bumpiness, and a diffuse texture that has changes in color/brightness because the surface is actually a different colour/brightness at that point.
Longer answer:
If you were to assume that the surface were actually a constant colour, then any changes in colour or brightness must be due to shading effects due to bumpiness. Calculate how much brighter/darker each pixel is from the actual surface colour; brighter values indicate parts of the surface that face 'towards' the light source, and darker values indicate parts of the surface that face 'away' from the light source. If you also specify the direction the light is coming from, you can calculate a surface normal at each point on the texture such that it would result in the shading value you calculated.
That's the basic theory. Of course, in reality, the surface is almost never a constant colour, which is why this approach of using purely the diffuse texture as input tends not to work very well. I'm not sure how things like CrazyBump do it but I think they're doing things like averaging the colour over local parts of the image rather than the whole texture.
Ordinarily, normal maps are created from actual 3D models of the surface that are 'projected' onto lower-resolution geometry. Normal maps are just a technique for faking that high-resolution geometry, after all.
Quick answer: It's not possible.
A simple generic (diffuse) texture simply does not contain this information. I haven't looked at exactly how Photoshop does it (I've seen an artist use it once), but I think they simply do something like 'depth = r+g+b+a', which basically gives a heightmap/gradient, and then convert that heightmap to a normal map using a simple edge-detect effect to get a tangent-space normal map.
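A tiny sketch of that 'depth = r+g+b+a' collapse (illustrative only; System.Drawing.Color assumed) - the result can then be fed through an edge-detect/Sobel pass like the class in the other answer on this page:
// Collapse a diffuse pixel to a single pseudo-height value in [0, 1].
float Depth(Color c)
{
    return (c.R + c.G + c.B + c.A) / (4f * 255f);
}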
Just keep in mind that in most cases you use a normal map to simulate a high-res 3D geometry mesh, as it fills in the detail that per-vertex normals leave out. If your scene relies heavily on lighting, this is a no-go, but if it's just a simple directional light, this 'might' work.
Of course, this is just my experience, you might just as well be working on a completely different type of project.