I load my textures using
Texture2D.FromFile()
then draw them using
spriteBatch.Draw()
But here's the point: I want to change some colors of the image to other ones. So my questions:
How do I change a single color of the image to another single color (e.g. blue to red)?
In fact, what I really want to do is change a group of colors to another group of colors, for example red and hues similar to red into blue and hues similar to blue. You can do this, for example, in Corel PHOTO-PAINT ("Replace Color").
Please keep in mind that I'm a beginner in XNA.
Best regards,
Jack
EDIT:
Thank you very much for the help, guys. Callum's answer is very helpful indeed. But I'm wondering whether there is a built-in function to solve my second problem, because writing my own may be time-consuming. And I think that kind of function could be very useful. Something like:
color.SetNewColor(Color color_from, Color color_to, int range)
That kind of function, as I said before, is built into Corel PHOTO-PAINT. To explain it better, here is an example of what I'm talking about:
link text
So I only set color_from, color_to and range. I think it works like this: it checks every color of the image, and if it is within range of color_from, it is changed to the corresponding hue of color_to.
I assume you mean changing individual pixels? In that case, use the GetData() and SetData() methods of the Texture2D class.
For example, you can get an array containing the colours of the individual pixels by doing this:
// Assume you have a Texture2D called texture
Color[] data = new Color[texture.Width * texture.Height];
texture.GetData(data);
// You now have a packed array of Colors.
// So, to change the pixel at column 3, row 4 (both zero-based):
data[4 * texture.Width + 3] = Color.Red;
// Once you have finished changing data, set it back to the texture:
texture.SetData(data);
Note you can use the other overloads of GetData() to select only a section.
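The index arithmetic in the snippet above is plain row-major addressing. Here is the same math as a tiny sketch (Python used only to keep the example runnable; the formula is identical in C#):

```python
def pixel_index(x, y, width):
    """Row-major index of the pixel at column x, row y (both zero-based)."""
    return y * width + x

# For a 64-pixel-wide texture, the pixel at column 3, row 4:
i = pixel_index(3, 4, 64)   # 4 * 64 + 3 = 259

# And back again, recovering column and row from a flat index:
x, y = i % 64, i // 64      # (3, 4)
```

The same two lines recover the coordinates from any flat index, which is handy when debugging GetData results.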
So, to replace each pixel of a specified colour with another colour:
// Assume you have a Texture2D called texture, Colors called colorFrom, colorTo
Color[] data = new Color[texture.Width * texture.Height];
texture.GetData(data);
for (int i = 0; i < data.Length; i++)
{
    if (data[i] == colorFrom)
        data[i] = colorTo;
}
texture.SetData(data);
To see if hues are similar, try this method:
private bool IsSimilar(Color original, Color test, int redDelta, int greenDelta, int blueDelta)
{
    return Math.Abs(original.R - test.R) < redDelta
        && Math.Abs(original.G - test.G) < greenDelta
        && Math.Abs(original.B - test.B) < blueDelta;
}
where each delta is the tolerance of change you want to accept for that colour channel.
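The per-channel tolerance check is easy to verify in isolation; a language-agnostic sketch of the same logic (Python here, with colours as (R, G, B) tuples):

```python
def is_similar(original, test, red_delta, green_delta, blue_delta):
    """True when every channel differs by strictly less than its tolerance."""
    r1, g1, b1 = original
    r2, g2, b2 = test
    return (abs(r1 - r2) < red_delta and
            abs(g1 - g2) < green_delta and
            abs(b1 - b2) < blue_delta)

# Pure red vs. a slightly darker red, tolerance 30 per channel:
print(is_similar((255, 0, 0), (230, 10, 5), 30, 30, 30))  # True
# Pure red vs. pure blue is well outside the tolerance:
print(is_similar((255, 0, 0), (0, 0, 255), 30, 30, 30))   # False
```

Note this is a per-channel box test, not a perceptual hue distance; for stricter "similar hue" matching you would compare in HSV space instead.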
To answer your edit: no, there is no built-in function, but you can combine ideas from the two sections above:
Color[] data = new Color[texture.Width * texture.Height];
texture.GetData(data);
for (int i = 0; i < data.Length; i++)
{
    if (IsSimilar(data[i], colorFrom, range, range, range))
        data[i] = colorTo;
}
texture.SetData(data);
Moving data between the GPU and CPU with GetData and SetData is an expensive operation. If there are only a limited number of colors, you could instead use a pixel shader effect when rendering to the screen. You can pass an effect to SpriteBatch.Begin:
sampler2D input : register(s0);
/// <summary>The color to replace.</summary>
/// <defaultValue>White</defaultValue>
float4 FromColor : register(c0);
/// <summary>The replacement color.</summary>
/// <defaultValue>Red</defaultValue>
float4 ToColor : register(c1);
float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(input, uv.xy);
    if (color.r == FromColor.r && color.g == FromColor.g && color.b == FromColor.b)
        return ToColor;
    return color;
}
technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 main();
    }
}
Create your effect in your LoadContent method:
colorSwapEffect = Content.Load<Effect>(@"Effects\ColorSwap");
colorSwapEffect.Parameters["FromColor"].SetValue(Color.White.ToVector4());
colorSwapEffect.Parameters["ToColor"].SetValue(Color.Red.ToVector4());
And pass the effect to your call to SpriteBatch.Begin():
sprite.Begin(0, BlendState.Opaque, SamplerState.PointWrap,
DepthStencilState.Default, RasterizerState.CullNone, colorSwapEffect);
For what you really want to do, you can swap the red and blue channels even more easily. Change your pixel shader's main() function to this, which swaps b (blue) and r (red):
float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(input, uv.xy);
    return float4(color.b, color.g, color.r, color.a);
}
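The per-channel swizzle above is easy to sanity-check outside the shader; a tiny Python sketch of the same swap, with a pixel as an (R, G, B, A) tuple:

```python
def swap_red_blue(rgba):
    """Exchange the red and blue channels of one pixel,
    mirroring float4(color.b, color.g, color.r, color.a)."""
    r, g, b, a = rgba
    return (b, g, r, a)

print(swap_red_blue((255, 0, 0, 255)))  # a red pixel becomes blue: (0, 0, 255, 255)
```

Applying it twice returns the original pixel, which is a quick way to confirm the swizzle is its own inverse.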
Callum's solution is powerful and flexible.
A more limited solution that is slightly easier to implement is to leverage the spriteBatch color parameter.
The variables
Texture2D sprite; //Assuming you have loaded this somewhere
Color color = Color.Red; //The color you want to use
Vector2 position = new Vector2(0f, 0f); //the position to draw the sprite
The drawing code
//Start the spriteBatch segment, enable alpha blending for transparency
spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
//Draw our sprite at the specified position using a specified color
spriteBatch.Draw(sprite, position, color);
//end the spritebatch
spriteBatch.End();
If your sprite is all white, then using this method will turn your sprite red. Also, make sure you are using a file format with transparency; PNG is a favorite.
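For intuition on why a white sprite takes on the tint colour: the SpriteBatch tint is a per-channel multiply. A small Python model of that operation (XNA does this on the GPU; the integer rounding here is illustrative):

```python
def tint(texel, color):
    """Per-channel multiply of a texel by a tint colour, both as
    (R, G, B, A) tuples with 0-255 components."""
    return tuple(t * c // 255 for t, c in zip(texel, color))

# A pure-white texel tinted red becomes pure red:
print(tint((255, 255, 255, 255), (255, 0, 0, 255)))  # (255, 0, 0, 255)
# A mid-grey texel tinted red becomes a darker red:
print(tint((128, 128, 128, 255), (255, 0, 0, 255)))  # (128, 0, 0, 255)
```

This is why the approach only works well on white (or greyscale) art: the tint can only darken channels, never recolour independent hues.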
Callum hit it on the head if you are changing the color of 2D images, as it seems you are. But as you can see, you actually need to find the specific pixels you want to modify and edit them, rather than simply saying "replace yellow with green", for example.
The same logic could be used to do this replacement (simply loop through the pixels of the image and check the color). Be wary when editing textures like this, though, as it seemed to cause some pretty serious spikes in performance depending on what was done and how often. I didn't fully investigate, but I think it was causing quite a bit of garbage collection.
This works for me:
protected override void Initialize()
{
    sprite = Content.Load<Texture2D>("Parado");
    Color[] data = new Color[sprite.Width * sprite.Height];
    sprite.GetData(data);
    // the new color
    Color novaCor = Color.Blue;
    for (int i = 0; i < data.Length; i++)
    {
        // the purple color in the drawing
        if (data[i].R == 142
            && data[i].G == 24
            && data[i].B == 115)
        {
            data[i] = novaCor;
        }
    }
    sprite.SetData<Color>(data);
    posicaoNinja = new Vector2(0, 200);
    base.Initialize();
}
Ok, so I ported a game I have been working on over to MonoGame; however, I'm having a shader issue now that it's ported. It's an odd bug, since it works in my old XNA project, and it also works the first time I use it in the new MonoGame project, but not after that unless I restart the game.
The shader is a very simple shader that looks at a greyscale image and, based on the grey, picks a color from a lookup texture. Basically I'm using this to randomize a sprite image for an enemy every time a new enemy is placed on the screen. It works the first time an enemy is spawned, but doesn't work after that; it just gives a completely transparent texture (not a null texture).
Also, I'm only targeting Windows Desktop for now, but I am planning to target Mac and Linux at some point.
Here is the shader code itself.
sampler input : register(s0);
Texture2D colorTable;
float seed; //calculate in program, pass to shader (between 0 and 1)
sampler colorTableSampler =
sampler_state
{
Texture = <colorTable>;
};
float4 PixelShaderFunction(float2 c : TEXCOORD0) : COLOR0
{
    // get the current pixel of the texture (greyscale)
    float4 color = tex2D(input, c);

    // the values to compare against
    float hair = 139/255;  float hairless = 140/255;
    float shirt = 181/255; float shirtless = 182/255;

    // variable to hold the new color
    float4 swap;

    // pixel coordinate for the lookup
    float2 i;
    i.y = 1;

    // compare and swap
    if (color.r >= hair && color.r <= hairless)
    {
        i.x = (0.5 + seed + 96) / 128;
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r >= shirt && color.r <= shirtless)
    {
        i.x = (0.5 + seed + 64) / 128;
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 1)
    {
        i.x = (0.5 + seed + 32) / 128;
        swap = tex2D(colorTableSampler, i);
    }
    if (color.r == 0)
    {
        i.x = (0.5 + seed) / 128;
        swap = tex2D(colorTableSampler, i);
    }
    return swap;
}
technique ColorSwap
{
    pass Pass1
    {
        // TODO: set renderstates here.
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
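An aside on the lookup coordinates: the (0.5 + seed + offset) / 128 pattern samples texel centres of a 128-texel-wide palette, with the offset selecting a 32-entry band (128 and the offsets are taken from the shader above). A quick Python check that a band stays inside its range:

```python
def lookup_x(seed, band_offset, table_width=128):
    """Texel-centre u coordinate for a palette entry: band_offset picks
    the 32-texel band, seed (0-31) picks the entry within it."""
    return (0.5 + seed + band_offset) / table_width

# The 'hair' band starts at texel 96, so all 32 entries fall in [0.75, 1.0):
hair_band = [lookup_x(s, 96) for s in range(32)]
print(min(hair_band), max(hair_band))
```

The + 0.5 matters: without it, coordinates land on texel boundaries and point sampling can pick the neighbouring palette entry.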
And here is the function that creates the texture. I should also note that the texture generation works fine without the shader, I just get the greyscale base image.
public static Texture2D createEnemyTexture(GraphicsDevice gd, SpriteBatch sb)
{
//get a random number to pass into the shader.
Random r = new Random();
float seed = (float)r.Next(0, 32);
//create the texture to copy color data into
Texture2D enemyTex = new Texture2D(gd, CHARACTER_SIDE, CHARACTER_SIDE);
//create a render target to draw a character to.
RenderTarget2D rendTarget = new RenderTarget2D(gd, CHARACTER_SIDE, CHARACTER_SIDE,
false, gd.PresentationParameters.BackBufferFormat, DepthFormat.None);
gd.SetRenderTarget(rendTarget);
//set background of new render target to transparent.
//gd.Clear(Microsoft.Xna.Framework.Color.Black);
//start drawing to the new render target
sb.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
SamplerState.PointClamp, DepthStencilState.None, RasterizerState.CullNone);
//send the random value to the shader.
Graphics.GlobalGfx.colorSwapEffect.Parameters["seed"].SetValue(seed);
//send the palette texture to the shader.
Graphics.GlobalGfx.colorSwapEffect.Parameters["colorTable"].SetValue(Graphics.GlobalGfx.palette);
//apply the effect
Graphics.GlobalGfx.colorSwapEffect.CurrentTechnique.Passes[0].Apply();
//draw the texture (now with color!)
sb.Draw(enemyBase, new Microsoft.Xna.Framework.Vector2(0, 0), Microsoft.Xna.Framework.Color.White);
//end drawing
sb.End();
//reset rendertarget
gd.SetRenderTarget(null);
//copy the drawn and colored enemy to a non-volatile texture (instead of a render target)
//create the color array the size of the texture.
Color[] cs = new Color[CHARACTER_SIDE * CHARACTER_SIDE];
//get all color data from the render target
rendTarget.GetData<Color>(cs);
//move the color data into the texture.
enemyTex.SetData<Color>(cs);
//return the finished texture.
return enemyTex;
}
And just in case, the code for loading in the shader:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
colorSwapEffect = new Effect(gd, Reader.ReadBytes((int)Reader.BaseStream.Length));
If anyone has ideas to fix this, I'd really appreciate it, and just let me know if you need other info about the problem.
I am not sure why you have an "at" (@) sign in front of the string when you have also escaped the backslashes - unless you actually want \\ in your string, which looks strange in a file path.
You wrote in your code:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\shaders\\test.mgfx", FileMode.Open));
Unless you want \\ inside your string, do
BinaryReader Reader = new BinaryReader(File.Open(@"Content\shaders\test.mgfx", FileMode.Open));
or
BinaryReader Reader = new BinaryReader(File.Open("Content\\shaders\\test.mgfx", FileMode.Open));
but do not use both.
I don't see anything super obvious just from reading through it, and really this could be tricky for someone to figure out just by looking at your code.
I'd recommend doing a graphics profile (via Visual Studio), capturing a frame that renders correctly and a frame that renders incorrectly, and comparing the state of the two.
E.g., is the input texture what you expect it to be, are pixels being output but culled, is the output correct on the render target (in which case the problem could be Get/SetData), etc.
Change ps_2_0 to ps_4_0_level_9_3.
MonoGame cannot use shaders built against HLSL shader model 2.
Also, the built-in sprite batch shader uses ps_4_0_level_9_3 and vs_4_0_level_9_3; you will get issues if you try to replace the pixel portion of a shader with a shader of a different level.
This is the only issue I can see with your code.
How do I create a simple pixel color shader that takes a texture and applies something like masking:
half4 color = tex2D(_Texture0, i.uv.xy);
if(distance(color, mask) > _CutOff)
{
return color;
}
else
{
return static_color;
}
and then returns a texture that can be passed to the next shader from C# code, in a way like mats[1].SetTexture("_MainTex", mats[0].GetTexture("_MainTex"));?
But... you might not want to use a shader just to modify a texture.
Why not? It is common practice.
Check out Graphics.Blit. It basically draws a quad with a material (including a shader) applied, so you can use your shader to modify a texture. The destination texture has to be a RenderTexture, though.
It would be like this:
var mat = new Material(Shader.Find("My Shader"));
var output = new RenderTexture(...);
Graphics.Blit(sourceTexture, output, mat);
sourceTexture in this case will be bound to _MainTex of My Shader.
I'm trying to draw 2D polygons with wide, colored outlines without using a custom shader.
(if I were to write one it'd probably be slower than using the CPU since I'm not well-versed in shaders)
To do so I plan to draw the polygons like normal, and then use the resulting depth-buffer as a stencil when drawing the same, expanded geometry.
The XNA "GraphicsDevice" can draw primitives given any array of IVertexType instances:
DrawUserPrimitives<T>(PrimitiveType primitiveType, T[] vertexData, int vertexOffset, int primitiveCount, VertexDeclaration vertexDeclaration) where T : struct;
I've defined an IVertexType for 2D coordinate space:
public struct VertexPosition2DColor : IVertexType
{
public VertexPosition2DColor (Vector2 position, Color color) {
this.position = position;
this.color = color;
}
public Vector2 position;
public Color color;
public static VertexDeclaration declaration = new VertexDeclaration (
new VertexElement(0, VertexElementFormat.Vector2, VertexElementUsage.Position, 0),
new VertexElement(sizeof(float)*2, VertexElementFormat.Color, VertexElementUsage.Color, 0)
);
VertexDeclaration IVertexType.VertexDeclaration {
get {return declaration;}
}
}
I've defined an array class for storing a polygon's vertices, colors, and edge normals:
I hope to pass this class as the T[] parameter in the GraphicDevice's DrawPrimitives function.
The goal is for the outline vertices to be GPU-calculated since it's apparently good at such things.
internal class VertexOutlineArray : Array
{
internal VertexOutlineArray (Vector2[] positions, Vector2[] normals, Color[] colors, Color[] outlineColors, bool outlineDrawMode) {
this.positions = positions;
this.normals = normals;
this.colors = colors;
this.outlineColors = outlineColors;
this.outlineDrawMode = outlineDrawMode;
}
internal Vector2[] positions, normals;
internal Color[] colors, outlineColors;
internal float outlineWidth;
internal bool outlineDrawMode;
internal void SetVertex(int index, Vector2 position, Vector2 normal, Color color, Color outlineColor) {
positions[index] = position;
normals[index] = normal;
colors[index] = color;
outlineColors[index] = outlineColor;
}
internal VertexPosition2DColor this[int i] {
get {return (outlineDrawMode)? new VertexPosition2DColor(positions[i] + outlineWidth*normals[i], outlineColors[i])
: new VertexPosition2DColor(positions[i], colors[i]);
}
}
}
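The indexer above does all the outline work: in outline mode each vertex is pushed out along its normal by the outline width. A minimal sketch of that expansion (Python, since the arithmetic is identical to the C# indexer):

```python
def outline_vertex(position, normal, width):
    """Push a vertex out along its (unit-length) normal, as the
    VertexOutlineArray indexer does in outlineDrawMode."""
    px, py = position
    nx, ny = normal
    return (px + width * nx, py + width * ny)

# A vertex at (10, 0) with outward normal (1, 0) and outline width 2
# moves to (12, 0):
print(outline_vertex((10, 0), (1, 0), 2))
```

Note this assumes the normals are normalized; otherwise the outline width varies per edge.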
I want to be able to render the shape and its outline like so:
the depth buffer is used as a stencil when drawing the expanded outline geometry
protected override void RenderLocally (GraphicsDevice device)
{
    // Draw shape
    mVertices.outlineDrawMode = false; // mVertices is a VertexOutlineArray instance
    device.RasterizerState = RasterizerState.CullNone;
    device.PresentationParameters.DepthStencilFormat = DepthFormat.Depth16;
    device.Clear(ClearOptions.DepthBuffer, Color.SkyBlue, 0, 0);
    device.DrawUserPrimitives<VertexPosition2DColor>(PrimitiveType.TriangleList, (VertexPosition2DColor[])mVertices, 0, mVertices.Length - 2, VertexPosition2DColor.declaration);

    // Draw outline
    mVertices.outlineDrawMode = true;
    device.DepthStencilState = new DepthStencilState {
        DepthBufferWriteEnable = true,
        DepthBufferFunction = CompareFunction.Greater // keeps the outline from writing over the shape
    };
    device.DrawUserPrimitives(PrimitiveType.TriangleList, mVertices.ToArray(), 0, mVertices.Count - 2);
}
This doesn't work though, because I'm unable to pass my VertexArray class as a T[]. How can I amend this or otherwise accomplish the goal of doing outline calculations on the GPU without a custom shader?
I am wondering why you don't simply write a class that draws the outline using pairs of thin triangles as lines. You could create a generalized polyline routine that takes the 2D points and a line width as input and spits out a VertexBuffer.
I realize this isn't answering your question, but I can't see what the advantage is of trying to do it your way. Is there a specific effect you want to achieve, or are you going to be manipulating the data very frequently or scaling the polygons a lot?
The problem you are likely having is that XNA 4 for Windows Phone 7 does not support custom shaders at all. In fact, they purposefully limited it to a set of predefined effects because of the number of permutations that would have to be tested. The ones currently supported are:
AlphaTestEffect
BasicEffect
EnvironmentMapEffect
DualTextureEffect
SkinnedEffect
You can read about them here:
http://msdn.microsoft.com/en-us/library/bb203872(v=xnagamestudio.40).aspx
I have not tested creating or utilizing an IVertexType with a Vector2 position and normal, so I can't comment on whether it is supported or not. If I were to do this, I would use just BasicEffect and VertexPositionNormal for the main polygonal shape rendering and adjust the DiffuseColor for each polygon. For rendering the outline, use the existing VertexBuffer and scale it appropriately, calling GraphicsDevice.Viewport.Unproject() to determine the 3D coordinate distance required to produce an n-pixel 2D screen distance (your outline width).
Remember that when you are using BasicEffect, or any effect for that matter, you have to loop through the EffectPass array of the CurrentTechnique and call the Apply() method for each pass before you make your Draw call:
device.DepthStencilState = DepthStencilState.Default;
device.BlendState = BlendState.AlphaBlend;
device.RasterizerState = RasterizerState.CullCounterClockwise;

// Set the appropriate vertex and index buffers
device.SetVertexBuffer(_polygonVertices);
device.Indices = _polygonIndices;

foreach (EffectPass pass in _worldEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    PApp.Graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _polygonVertices.VertexCount, 0, _polygonIndices.IndexCount / 3);
}
I have a folder containing about 2500 PNG images with no transparency. Every image is about 500 x 500 (some are 491 x 433, others 511 x 499, etc.).
I want to programmatically downsize every image to 10% of its original size and set the white background of every image as the transparent color.
To test the functionality of my application without resizing 2500 images every time, I used 15 images of billiard balls as a "test" folder.
Now my problem: with the following code I get a resized and cropped PNG with an almost transparent background. The problem is that a white border appears on the left and top in every image viewer (IrfanView, Paint.NET and GIMP).
How can I avoid this border?
Here is the code I used for this:
void ResizeI(string[] Paths, string OutPut, Methods m, PointF Values, bool TwoCheck, bool Overwrite, float[] CropVals)
{
for (int i = 0; i < Paths.Length; i++)//Paths is the array of all images
{
string Path = Paths[i];//current image
Bitmap Oimg = (Bitmap)Bitmap.FromFile(Path);//original image
Bitmap img = new Bitmap((int)(Oimg.Width - CropVals[0] - CropVals[1]), (int)(Oimg.Height - CropVals[2] - CropVals[3]));//cropped image
Graphics ggg = Graphics.FromImage(img);
ggg.DrawImage(Oimg, new RectangleF(((float)-CropVals[0]), ((float)-CropVals[2]), Oimg.Width - CropVals[1], Oimg.Height - CropVals[3]));
ggg.Flush(System.Drawing.Drawing2D.FlushIntention.Flush);
ggg.Dispose();
PointF scalefactor = GetScaleFactor(img, Values, TwoCheck);//the scale factor equals 0.1 for 10%
Bitmap newimg = new Bitmap((int)(Math.Ceiling(((float)img.Width) * scalefactor.X)), (int)(Math.Ceiling(((float)img.Height) * scalefactor.Y)));
System.Drawing.Imaging.ImageFormat curform = img.RawFormat;
string OutPath = System.IO.Path.Combine(OutPut, System.IO.Path.GetFileName(Path));
OutPath = CheckPath(OutPath, Overwrite);//Delete if it exists
Graphics g = Graphics.FromImage(newimg);
g.InterpolationMode = GetModeFromMethod(m);//Bicubic interpolation
g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
g.ScaleTransform(scalefactor.X, scalefactor.Y);
g.DrawImage(img, new Rectangle(0, 0, (int)Math.Ceiling(((float)newimg.Width) / scalefactor.X) + 1, (int)Math.Ceiling(((float)newimg.Height) / scalefactor.Y) + 1));
//g.Flush(System.Drawing.Drawing2D.FlushIntention.Flush);
newimg.MakeTransparent(Color.White);
newimg.Save(OutPath, curform);
g.Dispose();
img.Dispose();
}
}
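For reference, the output dimensions the code above produces follow from "crop, then ceil the scaled size". A small sketch of that arithmetic (Python; output_size is a hypothetical helper written for illustration, not part of the original code):

```python
import math

def output_size(width, height, crop, scale):
    """Final bitmap size after cropping (left, right, top, bottom) pixels
    and scaling by a uniform factor, rounding each dimension up."""
    left, right, top, bottom = crop
    cw = width - left - right
    ch = height - top - bottom
    return (math.ceil(cw * scale), math.ceil(ch * scale))

# One of the question's odd-sized images, no crop, scaled to 10%:
print(output_size(491, 433, (0, 0, 0, 0), 0.1))  # (50, 44)
```

The ceiling matches the Math.Ceiling calls in the C# code; it guarantees no dimension rounds down to zero, at the cost of being up to one pixel larger than the exact scaled size.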
And here is a example of the white border I mentioned. Download the image or drag it around and put a black background under it to see the border:
-- EDIT --
I managed to write this function instead of newimg.MakeTransparent(...):
void SetTransparent(ref Bitmap b)
{
    for (int i = 0; i < b.Width; i++)
    {
        for (int ii = 0; ii < b.Height; ii++)
        {
            Color cc = b.GetPixel(i, ii);
            int tog = cc.R + cc.G + cc.B;
            float durch = 255f - (((float)tog) / 3f);
            b.SetPixel(i, ii, Color.FromArgb((int)durch, cc.R, cc.G, cc.B));
        }
    }
}
The problem is that my billiard balls now look like this:
I can't help with the specific code, but maybe can explain what's happening.
newimg.MakeTransparent(Color.White);
This will take one color and make it transparent. The catch is that there's a spectrum of colors between the edge of your billiard ball (orange) and the pure white background: the antialiasing of the edge, which is a blend of colors from the pure orange of the ball to the pure white of the background.
By turning only pure white transparent, you are still left with this 'halo' of white-ish colors around the object.
There's perhaps a better way to handle this using white values as an alpha mask, but I'm not sure if .NET's image library can handle that (I'll have to defer until someone with more .NET experience comes along).
In the interim, though, what may help is to set the transparency before you do the resize. It won't be a true fix, but it might reduce the halo effect some.
UPDATE:
So, I've been thinking about this some more, and I'm not entirely sure there's a programmatic solution for creating alpha channel transparency automatically, as I have a hunch there's a lot of subjectivity involved.
Off the top of my head, this is what I came up with:
assuming the top-left pixel is your 100% transparent color (we'll call it pixel X).
assuming the background you want transparent is one solid color (vs. a pattern)
assuming roughly 3px of anti-aliasing
you could then...
check the pixels neighboring X. For each neighbor that matches the color of X, set it 100% transparent.
if a pixel next to X is NOT the same, check its relative hue.
branch from that pixel and check its surrounding pixels.
do this, marking each pixel (a, b, c, etc.) until the relative hue changes by a certain percentage and/or the pixel color is the same as its neighbor's (within a certain margin of variability). If it is, assume we're well into the interior of the object.
now step backwards through the pixels you marked, adjusting the transparency... say c=0%, b=33%, a=66%
But still, that's a large oversimplification of what would really have to happen. It makes a lot of assumptions, doesn't take a patterned background into account, and completely ignores interior areas that also need to be transparent (such as a donut hole).
Normally in a graphics editing app, this is done by selecting blocks of the background color, feathering the edges of the selection, and then turning that into an alpha mask.
It's a really interesting question/problem. I, alas, don't have the answer for you but will be watching this thread with curiosity!
Your edited SetTransparent function is going in the right direction, and you're almost there.
Just try this slight modification:
void SetTransparent(ref Bitmap b)
{
    const float selectivity = 20f; // some number much larger than 1 but less than 255
    for (int i = 0; i < b.Width; i++)
    {
        for (int ii = 0; ii < b.Height; ii++)
        {
            Color cc = b.GetPixel(i, ii);
            float avgg = (cc.R + cc.G + cc.B) / 3f;
            float durch = Math.Min(255f, (255f - avgg) * selectivity);
            b.SetPixel(i, ii, Color.FromArgb((int)durch, cc.R, cc.G, cc.B));
        }
    }
}
The idea is that, to avoid affecting the alpha value of the billiard ball, you only want to reduce the alpha for colors that are very close to white. In other words, it is a function that rises rapidly from 0 to 255 as the color moves away from white.
This will not produce the ideal result, as @DA said, because some information (transparent and non-transparent pixels blended together near the object's edges) is lost unrecoverably. To get perfectly alias-free alpha edges, the source image itself must be generated with transparency.
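The shape of that alpha function is easy to verify in isolation; a Python sketch of the same formula:

```python
def alpha_for(r, g, b, selectivity=20.0):
    """Alpha from the distance to white, clamped to 255, matching the
    modified SetTransparent: min(255, (255 - avg) * selectivity)."""
    avg = (r + g + b) / 3.0
    return min(255.0, (255.0 - avg) * selectivity)

print(alpha_for(255, 255, 255))  # pure white stays fully transparent: 0.0
print(alpha_for(250, 250, 250))  # near-white is mostly transparent: 100.0
print(alpha_for(255, 128, 0))    # a clearly orange pixel saturates: 255.0
```

With selectivity = 20, only pixels within about 12 brightness levels of pure white get any transparency at all; everything else is fully opaque, which is what protects the ball's interior.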
I've run into an issue where SpriteBatch doesn't draw with the modified alpha of the specified "Trail".
What I'm trying to do is a "fade effect" where the alpha of "Item" decreases so that it gets more transparent until it is eventually destroyed. However, the alpha doesn't change it:
the alpha value does decrease, but the drawn color stays the same and then disappears.
Here's what happens:
http://dl.dropbox.com/u/14970061/Untitled.jpg
And this is what I'm trying to do http://dl.dropbox.com/u/14970061/Untitled2.jpg
Here's a cutout of the related code I'm using at the moment.
spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
for (int i = 0; i < Trails.Count; i++)
{
    Trail Item = Trails[i];
    if (Item.alpha < 1)
    {
        Trails.RemoveAt(i);
        i--;
        continue;
    }
    Item.alpha -= 255 * (float)gameTime.ElapsedGameTime.TotalSeconds;
    Color color = new Color(255, 0, 0, Item.alpha);
    spriteBatch.Draw(simpleBullet, Item.position, color);
}
spriteBatch.End();
Don't use NonPremultiplied if you don't have to! Leave it as AlphaBlend. Read up on Premultiplied Alpha and how it was added in XNA 4.0.
The correct solution to your problem is to use the multiply operator on your colour:
Color color = Color.Red * (Item.alpha / 255f);
Or use the equivalent Lerp function to interpolate it to transparency:
Color color = Color.Lerp(Color.Red, Color.Transparent, 1f - Item.alpha / 255f);
(Also, if you did change your blend state to non-premultiplied, to be correct you'd have to change your content import to not premultiply your textures, and ensure your content has blendable data around its transparent edges.)
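For what the multiply operator actually does: with premultiplied alpha, scaling a colour by an opacity scales every channel, alpha included. A rough Python model of that (the exact rounding XNA's Color operator uses may differ slightly):

```python
def premultiply(color, alpha):
    """Scale all four channels of an (R, G, B, A) colour by an
    opacity in [0, 1], as premultiplied-alpha tinting does."""
    return tuple(round(c * alpha) for c in color)

# Fading pure red to half strength scales red AND alpha:
print(premultiply((255, 0, 0, 255), 0.5))  # (128, 0, 0, 128)
# At zero opacity everything collapses to transparent black:
print(premultiply((255, 0, 0, 255), 0.0))  # (0, 0, 0, 0)
```

This is the key difference from the asker's new Color(255, 0, 0, alpha): that changes alpha alone, which under premultiplied AlphaBlend does not fade the colour channels.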
Make sure that your call to spriteBatch.Begin() includes the necessary parameters:
spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
Alpha range is between 1 (fully opaque) and 0 (fully transparent); also, it's a float, I believe. So you are going out of bounds of its range.
Edit: try decreasing it by 0.1, and if it's less than or equal to zero, delete it.
Turns out it did work; I just used the wrong BlendState. I switched to BlendState.NonPremultiplied and now it works.