I'm creating an XNA game that creates random islands from multiple sprites. It creates them in a separate thread, then compiles them to a single texture using RenderTarget2D.
To create my RenderTarget2D I need a graphics device. If I use the automatically created graphics device, things work okay for the most part, except that draw calls in the main game thread conflict with it. Using lock() on the graphics device causes flickering, and even then the texture is sometimes not created properly.
If I create my own GraphicsDevice, there are no conflicts, but the islands never render correctly, instead coming out pure black and white. I have no idea why this happens. Basically, I need a way to create a second graphics device that gives me the same results as the default one, instead of black and white. Anyone got any ideas?
Here's the code I'm using to try and create my second graphics device for exclusive use by the TextureBuilder:
var presParams = game.GraphicsDevice.PresentationParameters.Clone();
// Configure parameters for secondary graphics device
GraphicsDevice2 = new GraphicsDevice(game.GraphicsDevice.Adapter, GraphicsProfile.HiDef, presParams);
Here's the code I'm using to render my islands to a single texture:
public IslandTextureBuilder(List<obj_Island> islands, List<obj_IslandDecor> decorations, SeaGame game, Vector2 TL, Vector2 BR, int width, int height)
{
    gDevice = game.Game.GraphicsDevice; //default graphics device
    //gDevice = game.GraphicsDevice2;   //created graphics device
    render = new RenderTarget2D(gDevice, width, height, false, SurfaceFormat.Color, DepthFormat.None);

    this.islands = islands;
    this.decorations = decorations;
    this.game = game;
    this.width = width;
    this.height = height;
    this.TL = TL; //top left coordinate
    this.BR = BR; //bottom right coordinate
}
public Texture2D getTexture()
{
    lock (gDevice)
    {
        //Set render target and clear it
        gDevice.SetRenderTarget(render);
        gDevice.Clear(Color.Transparent);

        //Point camera at the island
        Camera cam = new Camera(gDevice.Viewport);
        cam.Position = TL;
        cam.Update();

        //Draw all of the textures to the render target
        SpriteBatch batch = new SpriteBatch(gDevice);
        batch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, null, null, null, cam.Transform);
        {
            foreach (obj_Island island in islands)
            {
                island.Draw(batch);
            }
            foreach (obj_IslandDecor decor in decorations)
            {
                decor.Draw(batch);
            }
        }
        batch.End();

        //Unset the render target
        gDevice.SetRenderTarget(null);

        //Copy to a Texture2D for permanent storage
        Texture2D texture = new Texture2D(gDevice, render.Width, render.Height);
        Color[] color = new Color[render.Width * render.Height];
        render.GetData<Color>(color);
        texture.SetData<Color>(color);

        Console.WriteLine("done");
        return texture;
    }
}
Here's what should happen, with a transparent background (and usually does if I use the default device)
http://i110.photobucket.com/albums/n81/taumonkey/GoodIsland.png
Here's what happens when the default device conflicts and the main thread manages to call Clear() (even though it's locked too):
NotSoGoodIsland.png (need 10 reputation....)
Here's what happens when I use my own Graphics device
http://i110.photobucket.com/albums/n81/taumonkey/BadIsland.png
Thanks in advance for any help provided!
I may have solved this by moving the RenderToTarget code into the Draw() method and calling it from within the main thread the first time Draw() is called.
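For reference, a minimal sketch of that workaround (the field and builder names here are illustrative, not from the original code): the render-to-target step runs lazily on the main thread the first time Draw() is called, so it never contends with the device.

```csharp
// Sketch: build the island texture lazily inside Draw(), on the thread
// that owns the GraphicsDevice, instead of from a worker thread.
// 'islandTexture' and 'islandBuilder' are illustrative names.
private Texture2D islandTexture;

protected override void Draw(GameTime gameTime)
{
    if (islandTexture == null)
    {
        // First Draw call: safe to use the GraphicsDevice here,
        // because we are on the thread that owns it.
        islandTexture = islandBuilder.getTexture();
    }

    GraphicsDevice.Clear(Color.CornflowerBlue);
    spriteBatch.Begin();
    spriteBatch.Draw(islandTexture, Vector2.Zero, Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}
```

The worker thread can still do the CPU-heavy island generation; only the final compile-to-texture step moves onto the main thread.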
Related
In Monogame, I'm creating a 2D texture that's a horizontal flip of an existing texture using GetData and SetData like this:
private static Texture2D Flip(Texture2D art)
{
    Color[] oldData = new Color[art.Width * art.Height];
    art.GetData(oldData);

    Texture2D newTexture = new Texture2D(Root.Graphics.GraphicsDevice, art.Width, art.Height);
    Color[] newData = new Color[art.Width * art.Height];
    // Edit newData using oldData
    newTexture.SetData(newData);
    return newTexture;
}
This seemed to work on Windows but when running it on Android, although the flip itself works, some other, seemingly random and unrelated, texture gets corrupted: it becomes an amalgam of several textures and partially vertically flipped.
I suspect that my call to SetData somehow overwrites another region of memory or something.
How can I programmatically create a new texture without this corruption happening?
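For what it's worth, the elided flip step in the Flip method above is pure CPU-side array work; a minimal sketch, independent of any GraphicsDevice (packed pixels are represented here as uints):

```csharp
// Horizontal flip on a row-major pixel array: each output row is the
// corresponding input row reversed. No GraphicsDevice involved.
static uint[] FlipHorizontal(uint[] src, int width, int height)
{
    var dst = new uint[width * height];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            dst[y * width + x] = src[y * width + (width - 1 - x)];
        }
    }
    return dst;
}
```

Keeping the flip itself on plain arrays makes it easy to rule out the index math when hunting for the source of the corruption.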
I'm making a game in C# and XNA 4.0. It uses multiple objects (such as a player character, enemies, platforms, etc.), each with their own texture and hitbox. The objects are created and drawn using code similar to the following:
class Object
{
    Texture2D m_texture;
    Rectangle m_hitbox;

    public Object(Texture2D texture, Vector2 position)
    {
        m_texture = texture;
        m_hitbox = new Rectangle((int)position.X, (int)position.Y, texture.Width, texture.Height);
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        spriteBatch.Draw(m_texture, m_hitbox, Color.White);
    }
}
Everything works properly, but I also want to allow the player to resize the game window. The main game class uses the following code to do so:
protected override void Update(GameTime gameTime)
{
    if (playerChangedWindowSize == true)
    {
        graphics.PreferredBackBufferHeight = newHeight;
        graphics.PreferredBackBufferWidth = newWidth;
        graphics.ApplyChanges();
    }
    base.Update(gameTime);
}
This will inevitably cause the positions and hitboxes of the objects to become inaccurate whenever the window size is changed. Is there an easy way for me to change the positions and hitboxes based on a new window size? If the new window width was twice as big as it was before I could probably just double the width of every object's hitbox, but I'm sure that's a terrible way of doing it.
Consider normalizing your coordinate system to view space {0...1} and applying the window-dimension scaling only at the point of rendering.
View Space to Screen Space Conversion
Pseudo code for co-ordinates:
x' = x * screenResX
y' = y * screenResY
Similarly for dimensions. Let's say you have a 32x32 sprite originally designed for 1920x1080 and wish to scale so that it fits the same logical space on screen (so it doesn't appear unnaturally small):
r = screenResX / designResX   (designResX = 1920 here)
width' = width * r
height' = height * r
Then it won't matter what resolution the user has set.
If you are concerned over performance this may impose, then you can perform the above at screen resolution change time for a one-off computation. However you should still always keep the original viewspace {0...1}.
Collision Detection
It's arguably more efficient to perform collision detection on screen-space coordinates.
Hope this helps
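To make the pseudo code above concrete, here is a small C# sketch; the 1920 design width is an assumption carried over from the example, not something the engine defines:

```csharp
// View space {0..1} -> screen space, plus proportional sprite scaling
// relative to a design resolution (1920 wide in the example above).
static (float X, float Y) ViewToScreen(float x, float y, int screenResX, int screenResY)
{
    return (x * screenResX, y * screenResY);
}

static (float W, float H) ScaleSprite(float width, float height, int screenResX, int designResX = 1920)
{
    // Ratio of the current resolution to the resolution the art was made for.
    float r = (float)screenResX / designResX;
    return (width * r, height * r);
}
```

Object positions and hitboxes stay in view space; only the values handed to SpriteBatch.Draw go through these conversions, so resizing the window never touches game state.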
I'm attempting to draw many textures onto one texture to create a map for an RTS game, and while I can draw an individual texture onscreen, drawing them all to a render target seems to have no effect (the window remains AliceBlue when debugging). I want to determine whether anything is even being drawn to the render target, so I am trying to save it as a JPEG to a file and then view that JPEG from my desktop. How can I access that JPEG from the MemoryStream?
protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    gridImage = new RenderTarget2D(GraphicsDevice, 1000, 1000);
    GraphicsDevice.SetRenderTarget(gridImage);
    GraphicsDevice.Clear(Color.AliceBlue);
    spriteBatch.Begin();
    foreach (tile t in grid.tiles)
    {
        Texture2D dirt = Content.Load<Texture2D>(t.texture);
        spriteBatch.Draw(dirt, t.getVector2(), Color.White);
    }
    test = Content.Load<Texture2D>("dirt");
    GraphicsDevice.SetRenderTarget(null);
    MemoryStream memoryStream = new MemoryStream();
    gridImage.SaveAsJpeg(memoryStream, gridImage.Width, gridImage.Height); //Or SaveAsPng(memoryStream, texture.Width, texture.Height)
    // rt.Dispose();
    spriteBatch.End();
}
I made a simple screenshot method:
void Screenie()
{
    int width = GraphicsDevice.PresentationParameters.BackBufferWidth;
    int height = GraphicsDevice.PresentationParameters.BackBufferHeight;

    //Force a frame to be drawn (otherwise the back buffer is empty)
    Draw(new GameTime());

    //Pull the picture from the back buffer
    int[] backBuffer = new int[width * height];
    GraphicsDevice.GetBackBufferData(backBuffer);

    //Copy into a texture
    Texture2D texture = new Texture2D(GraphicsDevice, width, height, false, GraphicsDevice.PresentationParameters.BackBufferFormat);
    texture.SetData(backBuffer);

    //Build the file name from the current date
    DateTime date = DateTime.Now;
    Stream stream = File.Create(SCREENSHOT_FOLDER + date.ToString("MM-dd-yy H;mm;ss") + ".png");

    //Save as PNG
    texture.SaveAsPng(stream, width, height);
    stream.Dispose();
    texture.Dispose();
}
Also, are you loading Texture2D dirt = Content.Load<Texture2D>(t.texture); every frame? It looks like it... Don't do that! That will cause massive lag, loading hundreds of tiles hundreds of times per second. Instead, make a global texture Texture2D DirtTexture, and in your LoadContent() method do DirtTexture = Content.Load<Texture2D>(t.texture); Now when you draw, you can do spriteBatch.Draw(DirtTexture, ...
Do the same with spriteBatch = new SpriteBatch(GraphicsDevice); and
gridImage = new RenderTarget2D(GraphicsDevice, 1000, 1000);
You don't need to make a new RenderTarget and SpriteBatch each frame! Just do it in the Initialize() method!
Also see RenderTarget2D and the XNA RenderTarget Sample for more information on using render targets.
EDIT: I realize it's all in LoadContent; I didn't see that because the formatting was messed up. Remember to put your foreach over the tiles in your Draw() method.
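As for the original question of accessing the JPEG: once SaveAsJpeg has filled the MemoryStream, the bytes just need to be written to disk so the file can be opened from the desktop. A minimal sketch (the helper name and path are illustrative):

```csharp
using System.IO;

static class JpegDump
{
    // Writes an in-memory image (e.g. filled by RenderTarget2D.SaveAsJpeg)
    // out to a file so it can be inspected with a normal image viewer.
    public static void SaveStreamToFile(MemoryStream memoryStream, string path)
    {
        // SaveAsJpeg leaves the stream position at the end; rewind first.
        memoryStream.Position = 0;
        using (FileStream file = File.Create(path))
        {
            memoryStream.CopyTo(file);
        }
    }
}
```

Usage would be something like JpegDump.SaveStreamToFile(memoryStream, "gridImage.jpg"); right after the SaveAsJpeg call.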
I need to render a sprite into a Texture2D so that this texture can later be rendered on the screen, but at the same time I need access to the pixels of the modified texture: if I add, say, a sprite to the texture and call a get-pixel function at a coordinate where the sprite is, it should give me the new pixel values that correspond to the sprite (blended with the Texture2D).
I am using XNA 4.0, not 3.5 or earlier.
Thanks.
I'm looking for the equivalent of GDI's Graphics.FromImage(img).DrawImage(...).
I tried this and failed:
public static Texture2D DrawSomething(Texture2D old, int X, int Y, int radius) {
    var pp = Res.game.GraphicsDevice.PresentationParameters;
    var r = new RenderTarget2D(Res.game.GraphicsDevice, old.Width, old.Height, false, pp.BackBufferFormat, pp.DepthStencilFormat,
        pp.MultiSampleCount, RenderTargetUsage.DiscardContents);
    Res.game.GraphicsDevice.SetRenderTarget(r);
    var s = new SpriteBatch(r.GraphicsDevice);
    s.Begin();
    s.Draw(old, new Vector2(0, 0), Color.White);
    s.Draw(Res.picture, new Rectangle(X - radius / 2, Y - radius / 2, radius, radius), Color.White);
    s.End();
    Res.game.GraphicsDevice.SetRenderTarget(null);
    return r;
}
Res.game is basically a reference to the main game class, and Res.picture is an arbitrary Texture2D.
Use a RenderTarget2D: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.rendertarget2d.aspx
If possible, avoid creating a new render target every time. Create it outside of the method and reuse it for best performance.
Here is some pseudo-code:
public Texture2D DrawOnTop(RenderTarget2D target, Texture2D oldTexture, Texture2D picture)
{
    SetRenderTarget(target);
    Draw(oldTexture);
    Draw(picture);
    SetRenderTarget(null);
    return target;
}
If the size changes frequently and you cannot reuse the target, at least dispose the previous one, as anonymously suggested in the comments. Each new target consumes memory unless you release the resource in time. But only dispose it after you have used it in a shader or done whatever else you wanted with it; once disposed, it is gone.
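A fleshed-out version of the pseudo-code above in XNA terms might look like the following sketch; it assumes the render target and SpriteBatch are created once at load time and passed in, rather than allocated per call:

```csharp
// Draws 'picture' on top of 'oldTexture' into a reusable render target.
// 'device', 'spriteBatch' and 'target' are assumed to be long-lived
// objects created once, not recreated every call.
public static Texture2D DrawOnTop(GraphicsDevice device, SpriteBatch spriteBatch,
                                  RenderTarget2D target, Texture2D oldTexture, Texture2D picture)
{
    device.SetRenderTarget(target);
    device.Clear(Color.Transparent);

    spriteBatch.Begin();
    spriteBatch.Draw(oldTexture, Vector2.Zero, Color.White);
    spriteBatch.Draw(picture, Vector2.Zero, Color.White);
    spriteBatch.End();

    device.SetRenderTarget(null);
    return target; // RenderTarget2D derives from Texture2D
}
```

If the returned target later appears to lose its contents, creating it with RenderTargetUsage.PreserveContents (instead of DiscardContents, as in the question's code) is worth trying, since XNA 4.0 may discard a target's contents when it is rebound.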
I have a fine, working system in C# that draws with Cairo commands in a render method. However, sometimes I would like to draw into a pixmap, rather than dynamically when the screen needs to be updated. For example, currently I have:
public override void render(Cairo.Context g) {
    g.Save();
    g.Translate(x, y);
    g.Rotate(_rotation);
    g.Scale(_scaleFactor, _scaleFactor);
    g.Scale(1.0, ((double)_yRadius)/((double)_xRadius));
    g.LineWidth = border;
    g.Arc(x1, y2, _xRadius, 0.0, 2.0 * Math.PI);
    g.ClosePath();
}
But I would like, if I choose, to render the Cairo commands to a Gtk.Pixbuf. Something like:
g = GetContextFromPixbuf(pixbuf);
render(g);
Is that possible? It would be great if the Cairo drawing went directly to the pixbuf, rather than my having to turn the context back into a pixbuf afterwards. Any hints on this would be appreciated!
The answer is actually quite easy: when you render the objects, render them to a context created from a saved surface. Then when you render the window, insert a context based on the same saved surface.
Create a surface:
surface = new Cairo.ImageSurface(Cairo.Format.Argb32, width, height);
Render a shape to the surface:
using (Cairo.Context g = new Cairo.Context(surface)) {
    shape.render(g); // Cairo drawing commands
}
Render the window:
g.Save();
g.SetSourceSurface(surface, 0, 0);
g.Paint();
g.Restore();
... // other Cairo drawing commands
That's it!
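If you do end up needing an actual Gdk.Pixbuf rather than a surface (as the question originally asked), one simple if slightly roundabout route is a PNG round-trip; a sketch, assuming Gtk# with Mono.Cairo:

```csharp
// Render shapes to an in-memory Cairo surface, then load the result
// into a Gdk.Pixbuf via a temporary PNG file. A direct pixel copy is
// possible too, but the round-trip is the simplest portable route.
Cairo.ImageSurface surface = new Cairo.ImageSurface(Cairo.Format.Argb32, width, height);
using (Cairo.Context g = new Cairo.Context(surface))
{
    shape.render(g); // Cairo drawing commands
}

string tmp = System.IO.Path.GetTempFileName();
surface.WriteToPng(tmp);
Gdk.Pixbuf pixbuf = new Gdk.Pixbuf(tmp);
```

For repeated rendering, keeping everything as a surface (as above) and only converting to a pixbuf when another API demands one avoids the file I/O on every frame.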