I need to render a sprite into a Texture2D so that the texture can later be rendered on the screen, but I also need access to the pixels of the modified texture. So if I draw, say, a sprite into the texture and then call a get-pixel function at a coordinate covered by the sprite, it should give me the new pixel values that correspond to the sprite (blended with the Texture2D).
I am using XNA 4.0, not 3.5 or earlier.
Thanks.
In other words, I need the equivalent of Graphics.FromImage(img).DrawImage(...) in GDI+.
I tried this and it failed:
public static Texture2D DrawSomething(Texture2D old, int X, int Y, int radius)
{
    var pp = Res.game.GraphicsDevice.PresentationParameters;
    var r = new RenderTarget2D(Res.game.GraphicsDevice, old.Width, old.Height, false,
        pp.BackBufferFormat, pp.DepthStencilFormat, pp.MultiSampleCount,
        RenderTargetUsage.DiscardContents);

    Res.game.GraphicsDevice.SetRenderTarget(r);

    var s = new SpriteBatch(r.GraphicsDevice);
    s.Begin();
    s.Draw(old, new Vector2(0, 0), Color.White);
    s.Draw(Res.picture, new Rectangle(X - radius / 2, Y - radius / 2, radius, radius), Color.White);
    s.End();

    Res.game.GraphicsDevice.SetRenderTarget(null);
    return r;
}
Res.game is basically a reference to the main game instance, and Res.picture is an arbitrary Texture2D.
Use a RenderTarget2D: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.rendertarget2d.aspx
If possible, avoid creating a new render target every time. Create it outside of the method and reuse it for best performance.
Here is some pseudo-code:
public Texture2D DrawOnTop(RenderTarget2D target, Texture2D oldTexture, Texture2D picture)
{
    SetRenderTarget(target);
    Draw(oldTexture);
    Draw(picture);
    SetRenderTarget(null);
    return target;
}
If the size changes frequently and you cannot reuse the target, at least dispose of the previous one, as anonymously suggested in the comments. Each new target consumes memory unless you release the resource in time. But dispose of it only after you have used it in a shader or done whatever else you wanted to do with it; once disposed, it is gone.
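Putting that together with the question's code, here is a rough, untested sketch of DrawSomething with a reused render target. The cached fields and the GetPixel helper are my own additions; RenderTargetUsage.PreserveContents keeps the blended pixels readable, and Res.game / Res.picture are the helpers from the question.

// Cached objects, created once and reused (these fields are illustrative additions).
static RenderTarget2D _target;
static SpriteBatch _batch;

public static Texture2D DrawSomething(Texture2D old, int x, int y, int radius)
{
    GraphicsDevice device = Res.game.GraphicsDevice;

    // Create the target once; PreserveContents keeps the pixels around so
    // GetData can read them back after rendering.
    if (_target == null || _target.Width != old.Width || _target.Height != old.Height)
    {
        if (_target != null) _target.Dispose();
        _target = new RenderTarget2D(device, old.Width, old.Height, false,
            SurfaceFormat.Color, DepthFormat.None, 0, RenderTargetUsage.PreserveContents);
    }
    if (_batch == null) _batch = new SpriteBatch(device);

    device.SetRenderTarget(_target);
    device.Clear(Color.Transparent);

    _batch.Begin();
    _batch.Draw(old, Vector2.Zero, Color.White);
    _batch.Draw(Res.picture, new Rectangle(x - radius / 2, y - radius / 2, radius, radius), Color.White);
    _batch.End();

    device.SetRenderTarget(null);
    return _target; // RenderTarget2D derives from Texture2D, so it can be drawn directly
}

// Reading back a single blended pixel (slow; avoid calling this every frame).
public static Color GetPixel(Texture2D texture, int x, int y)
{
    Color[] data = new Color[1];
    texture.GetData(0, new Rectangle(x, y, 1, 1), data, 0, 1);
    return data[0];
}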
Related
I'm making a game in C# and XNA 4.0. It uses multiple objects (such as a player character, enemies, platforms, etc.), each with their own texture and hitbox. The objects are created and drawn using code similar to the following:
class Object
{
    Texture2D m_texture;
    Rectangle m_hitbox;

    public Object(Texture2D texture, Vector2 position)
    {
        m_texture = texture;
        m_hitbox = new Rectangle((int)position.X, (int)position.Y, texture.Width, texture.Height);
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        spriteBatch.Draw(m_texture, m_hitbox, Color.White);
    }
}
Everything works properly, but I also want to allow the player to resize the game window. The main game class uses the following code to do so:
private void Update(GameTime gameTime)
{
    if (playerChangedWindowSize == true)
    {
        graphics.PreferredBackBufferHeight = newHeight;
        graphics.PreferredBackBufferWidth = newWidth;
        graphics.ApplyChanges();
    }
}
This will inevitably cause the positions and hitboxes of the objects to become inaccurate whenever the window size is changed. Is there an easy way for me to change the positions and hitboxes based on a new window size? If the new window width was twice as big as it was before I could probably just double the width of every object's hitbox, but I'm sure that's a terrible way of doing it.
Consider normalizing your coordinate system to view space {0...1} and only apply the window dimensions scalar at the point of rendering.
View Space to Screen Space Conversion
Pseudo-code for the coordinates:
x' = x * screenResX
y' = y * screenResY
Similarly for dimensions. Say you have a 32x32 sprite originally designed for a 1920x1080 display and you want it to occupy the same logical space on screen at any resolution (so it doesn't appear unnaturally small). Scale by the ratio of the current resolution to the design resolution:
r = screenResX / 1920
width' = width * r
height' = height * r
Then it won't matter what resolution the user has set.
If you are concerned about the performance this may impose, you can perform the above once, whenever the screen resolution changes. However, you should still always keep the original view-space {0...1} values.
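As a concrete illustration of the idea (not code from the question), here is a sketch in XNA terms with illustrative names: positions and sizes live in normalized {0...1} view space and are multiplied out only at draw time.

// Minimal sketch, assuming objects store normalized view-space coordinates.
struct ViewSpaceSprite
{
    public Vector2 Position; // 0..1 across the screen
    public Vector2 Size;     // 0..1 of the screen dimensions
}

static Rectangle ToScreenSpace(ViewSpaceSprite sprite, Viewport viewport)
{
    // Apply the window dimensions only at the point of rendering.
    return new Rectangle(
        (int)(sprite.Position.X * viewport.Width),
        (int)(sprite.Position.Y * viewport.Height),
        (int)(sprite.Size.X * viewport.Width),
        (int)(sprite.Size.Y * viewport.Height));
}

// Usage at draw time:
// spriteBatch.Draw(texture, ToScreenSpace(sprite, GraphicsDevice.Viewport), Color.White);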
Collision Detection
It's arguably more efficient to perform collision detection on screen-space coordinates.
Hope this helps.
My scene is 2048 x 1152, and the camera never moves. When I create a rectangle with the following:
timeBarRect = new Rect(220, 185, Screen.width / 3, Screen.height / 50);
Its position changes depending on the resolution of my game, so I can't figure out how to get it to always land where I want it on the screen. To clarify: if I set the resolution to 16:9 and change the size of the preview window, the game resizes while keeping the 16:9 ratio, but the bar moves away from where it's supposed to be.
I have two related questions:
Is it possible to place the Rect at a global coordinate? Since the screen is always 2048 x 1152, if I could just place it at a certain coordinate, it'd be perfect.
Is the Rect a UI element? When it's created, I can't find it in the hierarchy. If it's a UI element, I feel like it should be created relative to a canvas/camera, but I can't figure out a way to do that either.
Update:
I am realizing now that I was unclear about what is actually being visualized, so here is that information. Once the Rect is created, I create a texture, update the size of that texture in Update(), and draw it to the Rect in OnGUI():
timeTexture = new Texture2D (1, 1);
timeTexture.SetPixel(0,0, Color.green);
timeTexture.Apply();
The texture size being changed:
void Update()
{
    if (time < timerMax) {
        playerCanAttack = false;
        time = time + (10 * Time.deltaTime);
    } else {
        time = timerMax;
        playerCanAttack = true;
    }
}
The actual visualization of the Rect, which is being drawn in a different spot depending on the size of the screen:
void OnGUI()
{
    float ratio = time / 500;
    float rectWidth = ratio * Screen.width / 1.6f;
    timeBarRect.width = rectWidth;
    GUI.DrawTexture(timeBarRect, timeTexture);
}
I don't know that I completely understand either of the two questions I posed, but I did discover that the way to get the Rect's coordinates to match the screen at any resolution was not to use global coordinates, but to use the camera's coordinates and update the Rect's position in Update():
timeBarRect.x = cam.pixelWidth / timerWidth;
timeBarRect.y = cam.pixelHeight / timerHeight;
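For clarity, a minimal sketch of where those two lines live; cam, timerWidth, timerHeight and timeBarRect are the fields already used above, and the existing OnGUI() drawing code stays as it is.

void Update()
{
    // Recompute the Rect's position from the camera's pixel dimensions every
    // frame, so the bar stays in the same relative spot at any resolution.
    timeBarRect.x = cam.pixelWidth / timerWidth;
    timeBarRect.y = cam.pixelHeight / timerHeight;
}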
I'm creating an XNA game that creates random islands from multiple sprites. It creates them in a separate thread, then compiles them to a single texture using RenderTarget2D.
To create my RenderTarget2D I need a graphics device. If I use the automatically created graphics device, things work okay for the most part, except that draw calls in the main game thread conflict with it. Using lock() on the graphics device causes flickering, and even then the texture is sometimes not created properly.
If I create my own Graphics device, there are no conflicts but the islands never render correctly, instead coming out pure black and white. I have no idea why this happens. Basically I need a way to create a second graphics device that lets me get the same results, instead of the black / white. Anyone got any ideas?
Here's the code I'm using to try and create my second graphics device for exclusive use by the TextureBuilder:
var presParams = game.GraphicsDevice.PresentationParameters.Clone();
// Configure parameters for secondary graphics device
GraphicsDevice2 = new GraphicsDevice(game.GraphicsDevice.Adapter, GraphicsProfile.HiDef, presParams);
Here's the code I'm using to render my islands to a single texture:
public IslandTextureBuilder(List<obj_Island> islands, List<obj_IslandDecor> decorations,
    SeaGame game, Vector2 TL, Vector2 BR, int width, int height)
{
    gDevice = game.Game.GraphicsDevice; // default graphics device
    //gDevice = game.GraphicsDevice2;   // created graphics device
    render = new RenderTarget2D(gDevice, width, height, false, SurfaceFormat.Color, DepthFormat.None);

    this.islands = islands;
    this.decorations = decorations;
    this.game = game;
    this.width = width;
    this.height = height;
    this.TL = TL; // top-left coordinate
    this.BR = BR; // bottom-right coordinate
}
public Texture2D getTexture()
{
    lock (gDevice)
    {
        // Set the render target and clear it.
        gDevice.SetRenderTarget(render);
        gDevice.Clear(Color.Transparent);

        // Point the camera at the island.
        Camera cam = new Camera(gDevice.Viewport);
        cam.Position = TL;
        cam.Update();

        // Draw all of the textures to the render target.
        SpriteBatch batch = new SpriteBatch(gDevice);
        batch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, null, null, null, cam.Transform);
        {
            foreach (obj_Island island in islands)
            {
                island.Draw(batch);
            }
            foreach (obj_IslandDecor decor in decorations)
            {
                decor.Draw(batch);
            }
        }
        batch.End();

        // Clear the render target.
        gDevice.SetRenderTarget(null);

        // Copy to a Texture2D for permanent storage.
        Texture2D texture = new Texture2D(gDevice, render.Width, render.Height);
        Color[] color = new Color[render.Width * render.Height];
        render.GetData<Color>(color);
        texture.SetData<Color>(color);

        Console.WriteLine("done");
        return texture;
    }
}
Here's what should happen, with a transparent background (and usually does if I use the default device)
http://i110.photobucket.com/albums/n81/taumonkey/GoodIsland.png
Here's what happens when the default device conflicts and the main thread manages to call Clear() (even though it's locked too):
NotSoGoodIsland.png (need 10 reputation....)
Here's what happens when I use my own Graphics device
http://i110.photobucket.com/albums/n81/taumonkey/BadIsland.png
Thanks in advance for any help provided!
I may have solved this by moving the RenderToTarget code into the Draw() method and calling it from within the main thread the first time Draw() is called.
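For anyone hitting the same issue, here is a rough sketch of that workaround (the field names are illustrative, not the original code): keep the builder around and let the main thread run it the first time Draw() executes, so it never competes with the game's own use of the GraphicsDevice.

// Set by the worker thread once the island data is ready (illustrative names).
private IslandTextureBuilder pendingBuilder;
private Texture2D islandTexture;

protected override void Draw(GameTime gameTime)
{
    // Run the render-to-target on the main thread the first time Draw() is
    // called, so it never conflicts with the game's own rendering.
    if (islandTexture == null && pendingBuilder != null)
    {
        islandTexture = pendingBuilder.getTexture();
        pendingBuilder = null;
    }

    GraphicsDevice.Clear(Color.CornflowerBlue);
    // ... draw the rest of the scene, using islandTexture once it exists ...

    base.Draw(gameTime);
}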
I'm trying to draw 2D polygons with wide, colored outlines without using a custom shader.
(if I were to write one it'd probably be slower than using the CPU since I'm not well-versed in shaders)
To do so I plan to draw the polygons like normal, and then use the resulting depth-buffer as a stencil when drawing the same, expanded geometry.
The XNA "GraphicsDevice" can draw primitives given any array of IVertexType instances:
DrawUserPrimitives<T>(PrimitiveType primitiveType, T[] vertexData, int vertexOffset, int primitiveCount, VertexDeclaration vertexDeclaration) where T : struct;
I've defined an IVertexType for 2D coordinate space:
public struct VertexPosition2DColor : IVertexType
{
    public VertexPosition2DColor(Vector2 position, Color color) {
        this.position = position;
        this.color = color;
    }

    public Vector2 position;
    public Color color;

    public static VertexDeclaration declaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector2, VertexElementUsage.Position, 0),
        new VertexElement(sizeof(float)*2, VertexElementFormat.Color, VertexElementUsage.Color, 0)
    );

    VertexDeclaration IVertexType.VertexDeclaration {
        get { return declaration; }
    }
}
I've defined an array class for storing a polygon's vertices, colors, and edge normals.
I hope to pass this class as the T[] parameter in the GraphicsDevice's DrawUserPrimitives function.
The goal is for the outline vertices to be calculated on the GPU, since it's apparently good at such things.
internal class VertexOutlineArray : Array
{
    internal VertexOutlineArray(Vector2[] positions, Vector2[] normals, Color[] colors, Color[] outlineColors, bool outlineDrawMode) {
        this.positions = positions;
        this.normals = normals;
        this.colors = colors;
        this.outlineColors = outlineColors;
        this.outlineDrawMode = outlineDrawMode;
    }

    internal Vector2[] positions, normals;
    internal Color[] colors, outlineColors;
    internal float outlineWidth;
    internal bool outlineDrawMode;

    internal void SetVertex(int index, Vector2 position, Vector2 normal, Color color, Color outlineColor) {
        positions[index] = position;
        normals[index] = normal;
        colors[index] = color;
        outlineColors[index] = outlineColor;
    }

    internal VertexPosition2DColor this[int i] {
        get {
            return (outlineDrawMode) ? new VertexPosition2DColor(positions[i] + outlineWidth * normals[i], outlineColors[i])
                                     : new VertexPosition2DColor(positions[i], colors[i]);
        }
    }
}
I want to be able to render the shape and its outline like so:
the depth buffer is used as a stencil when drawing the expanded outline geometry
protected override void RenderLocally(GraphicsDevice device)
{
    // Draw shape
    mVertices.outlineDrawMode = false; // mVertices is a VertexOutlineArray instance
    device.RasterizerState = RasterizerState.CullNone;
    device.PresentationParameters.DepthStencilFormat = DepthFormat.Depth16;
    device.Clear(ClearOptions.DepthBuffer, Color.SkyBlue, 0, 0);
    device.DrawUserPrimitives<VertexPosition2DColor>(PrimitiveType.TriangleList, (VertexPosition2DColor[])mVertices, 0, mVertices.Length - 2, VertexPosition2DColor.declaration);

    // Draw outline
    mVertices.outlineDrawMode = true;
    device.DepthStencilState = new DepthStencilState {
        DepthBufferWriteEnable = true,
        DepthBufferFunction = CompareFunction.Greater // keeps the outline from writing over the shape
    };
    device.DrawUserPrimitives(PrimitiveType.TriangleList, mVertices.ToArray(), 0, mVertices.Count - 2);
}
This doesn't work though, because I'm unable to pass my VertexArray class as a T[]. How can I amend this or otherwise accomplish the goal of doing outline calculations on the GPU without a custom shader?
I am wondering why you don't simply write a class that draws the outline using pairs of thin triangles as lines. You could create a generalized polyline routine that takes the 2D points and a line width as input and spits out a VertexBuffer.
I realize this isn't answering your question, but I can't see what the advantage is of trying to do it your way. Is there a specific effect you want to achieve, or are you going to be manipulating the data very frequently or scaling the polygons a lot?
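For example, here is a rough sketch of such a routine (my own illustration, not the poster's code): it walks a closed polygon and emits two triangles per edge, with no mitering at the corners.

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class OutlineBuilder
{
    public static VertexPositionColor[] Build(Vector2[] points, float width, Color color)
    {
        var vertices = new List<VertexPositionColor>();
        float half = width / 2f;

        for (int i = 0; i < points.Length; i++)
        {
            Vector2 a = points[i];
            Vector2 b = points[(i + 1) % points.Length]; // wrap around to close the loop

            // Unit normal of the segment, used to push the vertices out sideways.
            Vector2 dir = Vector2.Normalize(b - a);
            Vector2 offset = new Vector2(-dir.Y, dir.X) * half;

            Vector3 a0 = new Vector3(a + offset, 0), a1 = new Vector3(a - offset, 0);
            Vector3 b0 = new Vector3(b + offset, 0), b1 = new Vector3(b - offset, 0);

            // Two triangles per segment (simple butt joints, no mitering).
            vertices.Add(new VertexPositionColor(a0, color));
            vertices.Add(new VertexPositionColor(b0, color));
            vertices.Add(new VertexPositionColor(a1, color));

            vertices.Add(new VertexPositionColor(a1, color));
            vertices.Add(new VertexPositionColor(b0, color));
            vertices.Add(new VertexPositionColor(b1, color));
        }

        return vertices.ToArray();
    }
}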
The problem you are likely having is that XNA 4 for Windows Phone 7 does not support custom shaders at all. In fact, they purposefully limited it to a set of predefined shaders because of the number of permutations that would otherwise have to be tested. The ones currently supported are:
AlphaTestEffect
BasicEffect
EnvironmentMapEffect
DualTextureEffect
SkinnedEffect
You can read about them here:
http://msdn.microsoft.com/en-us/library/bb203872(v=xnagamestudio.40).aspx
I have not tested creating or utilizing an IVertexType with a Vector2 position and normal, so I can't comment on whether it is supported or not. If I were to do this, I would use just the BasicEffect and VertexPositionNormal for the main polygonal shape rendering and adjust the DiffuseColor for each polygon. For rendering the outline, you use the existing VertexBuffer and scale it appropriately, calling GraphicsDevice.Viewport.Unproject() to determine the 3D coordinate distance required to produce an n-pixel 2D screen distance (your outline width).
Remember that when you are using the BasicEffect, or any effect for that matter, you have to loop through the EffectPass array of the CurrentTechnique and call the Apply() method for each pass before you make your Draw call.
device.DepthStencilState = DepthStencilState.Default;
device.BlendState = BlendState.AlphaBlend;
device.RasterizerState = RasterizerState.CullCounterClockwise;

// Set the appropriate vertex and index buffers.
device.SetVertexBuffer(_polygonVertices);
device.Indices = _polygonIndices;

foreach (EffectPass pass in _worldEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    PApp.Graphics.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _polygonVertices.VertexCount, 0, _polygonIndices.IndexCount / 3);
}
I am doing custom drawing using GDI+.
Normally if I want to fit whatever I am drawing to the window, I calculate the appropriate ratio and I ScaleTransform everything by that ratio:
e.Graphics.ScaleTransform(ratio, ratio);
The problem with ScaleTransform is that it scales everything including pen strokes and brushes.
How do I scale only the pixel coordinates of what I'm drawing? Every line, rectangle, or path is basically a series of points, so I could multiply all of those points by the ratio manually, but is there an easier way to do this more seamlessly?
Try putting all your objects in a GraphicsPath instance first. It doesn't have a ScaleTransform method, but you can transform the objects with GraphicsPath.Transform; pass it a scaling matrix created via Matrix.Scale.
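A minimal sketch of that approach, assuming the same ratio and Paint-event context as in the question; the rectangle and line are just placeholder geometry.

// Requires System.Drawing and System.Drawing.Drawing2D.
// Inside the Paint handler, with 'ratio' computed as before:
using (var path = new GraphicsPath())
using (var matrix = new Matrix())
using (var pen = new Pen(Color.Black, 2f)) // pen thickness is left untouched
{
    path.AddRectangle(new Rectangle(10, 10, 100, 50)); // placeholder geometry
    path.AddLine(10, 80, 110, 80);

    matrix.Scale(ratio, ratio); // scales only the coordinates,
    path.Transform(matrix);     // so the pen keeps its stroke width

    e.Graphics.DrawPath(pen, path);
}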
You can wrap the GDI+ Graphics object and store the scale factor:
interface IDrawing
{
    void Scale(float sx, float sy);
    void Translate(float dx, float dy);
    void SetPen(Color col, float thickness);
    void DrawLine(Point from, Point to);
    // ... more methods
}

class GdiPlusDrawing : IDrawing
{
    private float scale;
    private Graphics graphics;
    private Pen pen;

    public GdiPlusDrawing(Graphics g)
    {
        graphics = g;
        scale = 1.0f;
    }

    public void Scale(float sx, float sy)
    {
        // Assumes uniform scaling; track it so pen thickness can be compensated.
        scale *= sx;
        graphics.ScaleTransform(sx, sy);
    }

    public void SetPen(Color color, float thickness)
    {
        // Use scale to compensate the pen thickness.
        float penThickness = thickness / scale;
        pen = new Pen(color, penThickness); // Note: needs to be disposed.
    }

    // Implement the rest of IDrawing.
}
I think ScaleTransform works on every numeric value that the GDI context is concerned with, so you can't just use it for coordinates, unfortunately. WPF has a GeometryTransform but I don't know of an equivalent to it in GDI+.
If you're concerned about code duplication you could always write a utility method to draw the shapes with a certain scale level applied to their points.
You could also try manually reversing the ScaleTransform by applying the inverse of it to any objects you don't want scaled; I know some brushes expose this method.
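For instance, a small utility in that spirit (purely illustrative): scale the points yourself and leave the graphics transform alone, so pens and brushes are unaffected.

// Scale a set of points by 'ratio' before drawing them.
static PointF[] ScalePoints(PointF[] points, float ratio)
{
    var scaled = new PointF[points.Length];
    for (int i = 0; i < points.Length; i++)
        scaled[i] = new PointF(points[i].X * ratio, points[i].Y * ratio);
    return scaled;
}

// Usage: e.Graphics.DrawLines(pen, ScalePoints(myPoints, ratio));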
Fortunately, Pen has a local ScaleTransform of its own, which can be used to apply the inverse scaling and compensate for the global transform.
Call Pen.ResetTransform after each such rescaling, before the next one; otherwise the pen's own scaling (which is independent of the graphics context) accumulates, and the stroke can shrink to nearly nothing (actually, one pixel), shoot to the moon, or land anywhere in between.
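A minimal sketch of that compensation, assuming the same ratio and Paint-event context as in the question.

// 'ratio' is the same factor passed to Graphics.ScaleTransform above.
e.Graphics.ScaleTransform(ratio, ratio);

using (var pen = new Pen(Color.Black, 2f))
{
    // Shrink the pen by the inverse factor so its on-screen width stays about 2px.
    pen.ScaleTransform(1f / ratio, 1f / ratio);
    e.Graphics.DrawRectangle(pen, 10, 10, 100, 50); // placeholder geometry

    // Reset before reusing the pen with a different transform.
    pen.ResetTransform();
}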