I'm having difficulty figuring how to do something in XNA.
I have something like this:
public void Draw()
{
    spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend);
    DrawFirstObject(); // Depth = 0.5f
    spriteBatch.End();

    spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.Additive);
    DrawSecondObject(); // Depth = 0.2f
    spriteBatch.End();
}
Basically I need two different SpriteBatch Begin calls, one with the AlphaBlend BlendState and one with Additive. The problem is that when I do this, the objects drawn in the second call always end up on top of the ones from the first call instead of behind them, where they need to be. I can't restructure my code so that the second call comes first, and I need to keep the depth order. I would be thankful for any suggestion.
As you are using transparent images in your first Draw call, ideally you should be using SpriteSortMode.BackToFront.
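For illustration, a minimal sketch of how BackToFront interprets the layerDepth parameter (backgroundTexture and foregroundTexture are placeholders, not from the question's code):
// With SpriteSortMode.BackToFront, sprites with a higher layerDepth are drawn
// first, so a sprite at 0.2f ends up in front of one at 0.5f.
spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
spriteBatch.Draw(backgroundTexture, Vector2.Zero, null, Color.White,
                 0f, Vector2.Zero, 1f, SpriteEffects.None, 0.5f); // behind
spriteBatch.Draw(foregroundTexture, Vector2.Zero, null, Color.White,
                 0f, Vector2.Zero, 1f, SpriteEffects.None, 0.2f); // in front
spriteBatch.End();
Note that this sorting only applies to sprites queued within a single Begin/End pair.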
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Media;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
namespace TileEngine
{
    class Renderer : DrawableGameComponent
    {
        SpriteBatch spriteBatch;

        public Renderer(Game game) : base(game)
        {
        }

        protected override void LoadContent()
        {
            // Create the SpriteBatch here; GraphicsDevice is valid by the time LoadContent is called.
            spriteBatch = new SpriteBatch(GraphicsDevice);
            base.LoadContent();
        }

        public override void Draw(GameTime gameTime)
        {
            base.Draw(gameTime);
        }

        public override void Update(GameTime gameTime)
        {
            base.Update(gameTime);
        }

        public override void Initialize()
        {
            base.Initialize();
        }

        public RenderTarget2D new_texture(int width, int height)
        {
            // No separate Texture2D is needed here: in XNA 4.0, RenderTarget2D derives from
            // Texture2D, so the target itself can be drawn with SpriteBatch later.
            RenderTarget2D Mine = new RenderTarget2D(GraphicsDevice, width, height);
            GraphicsDevice.SetRenderTarget(Mine); // redirect rendering to the new target
            // RenderTarget2D is a reference type, so this returns the same instance, not a copy.
            return Mine;
        }

        public void draw_texture(int width, int height, RenderTarget2D Mine)
        {
            GraphicsDevice.SetRenderTarget(null); // render to the back buffer again
            Rectangle drawrect = new Rectangle(0, 0, width, height); // the on-screen size we want
            spriteBatch.Begin(); // draw the render target's texture directly to the screen
            spriteBatch.Draw(Mine, drawrect, Color.White);
            spriteBatch.End();
        }
    }
}
I solved a previous issue where I couldn't access GraphicsDevice outside the default 'main' class (i.e. "Game" or "Game1" etc.).
Now I have a new issue. For what it's worth, no one told me it was possible to get a non-null GraphicsDevice reference by deriving from DrawableGameComponent (hopefully it doesn't still return null once this last bug is solved).
Anyway, at present the problem is that I can't seem to get it to initialize as an instance in my main program, i.e.
Renderer tileClipping;
and I'm unable to use it with calls such as the two shown at the end of this post. It should be noted I haven't even gotten to testing those two calls, but before, the code compiled; when the functions of this class were called it complained that it can't render to a null device, which meant the device wasn't being initialized. I had no idea why, and it took me hours to google this. I finally figured out the search terms I needed, which were "do my rendering in XNA in a separate class".
I haven't used Components.Add because I don't want the component to only run its overridden methods automatically; I also want to be able to call the custom ones.
In a nutshell, what I want is:
* access to render targets and the graphics device OUTSIDE the default class
* passing of RenderTarget2D objects (which contain textures; textures should automatically come along with a render target?)
* the device should be passed to this function as well, OR it should come along as a byproduct of passing the render target (which is already associated with the device it was created on)
* I'm assuming I'm dealing with reference types here, so when I pass a class instance I should receive the SAME object I referenced, not a copy that only lives for the duration of the function call
* the purpose of all these options: I want to initialize new 2D textures on the fly so I can customize tile clipping, and even the X and Y offsets of where a WHOLE texture will be rendered and where tiles will be rendered ON that surface.
This is why. I'll be doing region-based lighting effects per tile, or even per 8x8-pixel block; we'll see.
I'll also be doing sprite rotations on the whole texture, then copying it to a circular masked texture, and making a second copy containing only the solid tiles for masked, rotated collision against sprites. I'll check the masked pixels for my collisions, possibly using raycasting to test for collisions in those areas.
The sprite will stay in the center when this rotation happens.
Here is a detailed diagram:
http://i.stack.imgur.com/INf9K.gif
I'll be using Texture2D for steps 4-6, and I suppose for step 1 as well.
On top of that, the clipping size (i.e. the square that gets rendered) can be shrunk or enlarged on a per-frame basis, so I can't use a fixed size for my main Texture2D and I can't use just the back buffer, or we get the annoying flicker.
I will also have multiple instances of the Renderer class so that I can freely pass textures around as if they were playing cards (in a sense), layering them on top of each other, cropping them how I want, and then using SpriteBatch to simply draw them at the locations I want.
Hopefully this makes sense. Yes, I am planning on using alpha blending, but only after all the tiles have been drawn.
The masked collision is important, and yes, I am avoiding doing math on the tile rendering and instead resorting to image manipulation in video memory, which is WHY I need this to work the way I intend and not in the default way that XNA seems to handle graphics.
Thanks to anyone willing to help.
I dislike the component form offered, because then I have to rely on an Update function always being present. What if I want to kill that Update function, or keep the object in memory but temporarily inactive? I'm assuming the Update function of one of these game components is called automatically?
Anyway, this is as detailed as I can make this post; hopefully someone can help me solve the issue, instead of telling me "don't do it this way", which is what a few people have told me (but they don't understand the actual goal I have in mind).
I'm basically trying to create a library where I can copy images freely no matter the size; I just have to specify the size in the function, and as long as a reference to that object exists it should be kept alive, right?
Anything else? I don't know. I understand object-oriented coding, but I don't understand this XNA. It's beginning to feel impossible to do anything custom in it without putting ALL my rendering code into the Draw function of the main class. For reference, the two calls I want to be able to make (mentioned earlier) are:
tileClipping.new_texture(GraphicsDevice, width, height)
tileClipping.Draw_texture(...)
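For reference, here is a minimal sketch (not the poster's actual code) of how the Renderer component could be wired up from the main Game class, assuming the class shown above. The Components.Add call is only there so XNA calls the component's Initialize/LoadContent and gives it a valid GraphicsDevice; the custom methods can still be called directly, and Game1 is just the standard generated class name:
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    Renderer tileClipping;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";

        // Registering the component lets XNA initialize its GraphicsDevice for us.
        tileClipping = new Renderer(this);
        Components.Add(tileClipping);
    }

    protected override void Draw(GameTime gameTime)
    {
        // Create an off-screen target, draw into it, then present it to the screen.
        RenderTarget2D target = tileClipping.new_texture(256, 256);
        // ... drawing into the target would happen here (e.g. via further custom Renderer methods) ...
        tileClipping.draw_texture(256, 256, target);

        base.Draw(gameTime);
    }
}
If the component's automatic Update/Draw calls aren't wanted, its Enabled and Visible properties can be set to false while still keeping the object alive and callable. In practice the render target should also be created once (for example in LoadContent) and reused, rather than allocated every frame as in this sketch.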
I'm totally new to working with sprites.
I need to draw opaque sprite layers for a game interface. The methods for these draw calls are defined in two different classes:
public class ColorStreamRenderer : Object2D
{
.
.
public override void Draw(GameTime gameTime)
{
.
.
this.Game.GraphicsDevice.Clear(Color.White);
this.SharedSpriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.Additive);
this.SharedSpriteBatch.Draw(this.backBuffer, new Rectangle(0, 0, 1280, 960), null, Color.White);
this.SharedSpriteBatch.End();
.
.
}
and
public class AvateeringXNA : Microsoft.Xna.Framework.Game
{
.
.
private Texture2D recttex;
.
.
protected override void Draw(GameTime gameTime)
{
.
.
GraphicsDevice.Clear(Color.Black);
this.recttex = Content.Load<Texture2D>("doodle");
spriteBatch.Begin(SpriteSortMode.Texture, BlendState.Opaque);
spriteBatch.Draw(recttex, rightMenu[ID], Color.White);
spriteBatch.End();
.
.
}
rightMenu[ID] is a Microsoft.Xna.Framework.Rectangle which lies within the output display window.
These two classes have already inherited one class each, and since C# doesn't allow multiple inheritance, I can't use Content.Load within the ColorStreamRenderer class. So the sprites need to be drawn in different classes.
Now, the problem is, I'm not able to control the depth of the sprites. No matter what parameters I pass, the sprites of class AvateeringXNA are always behind those of the ColorStreamRenderer class.
By tweaking the blending options on the screen, I know that these are actually drawn on the screen since the layers behind are partially visible.
I've tried all the overloads of SharedSpriteBatch.Begin and SharedSpriteBatch.Draw; none of them work. Even setting the depth explicitly with the layerDepth parameter of SharedSpriteBatch.Draw and spriteBatch.Draw doesn't work.
Rather than untangling your code, I'll give you the info that I think you need to solve this:
First: SpriteBatch (except in immediate mode) does all of its drawing inside the End call. It queues up all of its drawing so it can submit sprites to the GPU in batches (for performance). If you use its built-in sorting, it only sorts the sprites within its current queue (between begin and end).
If possible, it's best to just use a single begin/end block for your entire frame, and just Draw() everything in back-to-front order in the first place. But this isn't a requirement.
One neat trick is to treat SpriteBatch like a list of sprites to be rendered. Fill up multiple ones at once, then call End on them in the desired rendering order. (Handy if your objects' logical order doesn't match the desired rendering order.)
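For example, a rough sketch of that trick with two deferred batches and different blend states (alphaBatch, additiveBatch and the textures are placeholder names, not from the question):
// Fill both batches during the frame; nothing is drawn yet in Deferred mode.
alphaBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
additiveBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);

additiveBatch.Draw(glowTexture, new Vector2(10, 10), Color.White); // should appear behind
alphaBatch.Draw(spriteTexture, Vector2.Zero, Color.White);         // should appear in front

// End them in back-to-front order: whichever batch is ended last is drawn on top.
additiveBatch.End(); // drawn first (behind)
alphaBatch.End();    // drawn second (in front)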
By default the only depth-sorting you get with SpriteBatch is if you use its sorting modes. But it can use the depth buffer (like 3D graphics) if you use DepthStencilState.Default. I wouldn't recommend it, though. The layerDepth parameter to Draw sets the Z position of a sprite (valid between 0 and 1).
Second: Content is just a property of Game of type ContentManager. You can take that ContentManager object and pass it into different classes and load things from within them.
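As a small illustration of that (a sketch only; HudRenderer and its constructor parameter are made up for the example, not part of the question's classes):
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public class HudRenderer
{
    private readonly ContentManager content;
    private Texture2D doodle;

    // The Game class passes its Content property (a ContentManager) in here.
    public HudRenderer(ContentManager content)
    {
        this.content = content;
    }

    public void LoadContent()
    {
        // Any class holding a ContentManager reference can load assets;
        // no inheritance from Game is required.
        doodle = content.Load<Texture2D>("doodle");
    }
}
From the Game class this would be constructed as new HudRenderer(this.Content).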
I have to add a second viewport (like a splitscreen) to allow the player to see an event happening somewhere else on the current level.
Is there any way to draw the event area without redrawing everything that has already been drawn?
[EDIT]
RenderTarget2D is the key. Thanks User1459910 for everything.
It almost worked.
New questions:
I've searched for a while and still haven't found a tutorial about an "XNA 2D camera with source and destination rectangles"; if you have a link, I'd like to see it, please ♥
Currently, the drawing code looks like this:
protected override void Draw(GameTime gameTime)
{
/*
...
here is the code to "draw" in the renderTarget2D renderAllScene object
...
*/
//Let's draw into the 2 viewports
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.LinearClamp, null, null, null, camera1.transform);
spriteBatch.Draw(renderAllScene, viewport1.Bounds, Color.White);
spriteBatch.End();
if (EventIsRunning)
{
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.LinearClamp, null, null, null, camera2.transform);
spriteBatch.Draw(renderAllScene, viewport2.Bounds, Color.White);
spriteBatch.End();
}
* Viewport 1 is great. The camera follows the character, but after moving the camera for a short distance the map is cut off at 1280 pixels, I think, so only 1280 pixels of the whole map get drawn. I don't know why. Maybe I made a mistake when I created renderAllScene = new RenderTarget2D. :x
renderAllScene = new RenderTarget2D(GraphicsDevice, GraphicsDevice.PresentationParameters.BackBufferWidth, GraphicsDevice.PresentationParameters.BackBufferHeight);
* For Viewport 2: I need the source rectangle. I'll try it tomorrow.
I'll assume you are making a 2D game with NOTHING 3D at all.
Here is what you could do:
You need to render the whole map, and all the game objects that appear on it, onto a texture. If you don't know how to render to a texture, here is the procedure (a short code sketch follows the list):
1. Create a RenderTarget2D object.
2. In the Draw function, before you render anything, call the GraphicsDevice.SetRenderTarget() method and pass in the RenderTarget2D you created.
3. After you are done rendering, call GraphicsDevice.SetRenderTarget(null) to reset the render target to the default one. You must do this or you'll have problems!
4. To render the RenderTarget2D, simply use SpriteBatch.Draw((Texture2D)renderTarget2D, position, color), where "renderTarget2D" is of course the name of the RenderTarget2D you created.
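A minimal sketch of those four steps, assuming an XNA 4.0 Game class where renderAllScene and spriteBatch are fields (those names come from the question; mapWidthInPixels and mapHeightInPixels are placeholders):
// Step 1 (do this once, e.g. in LoadContent, not every frame):
renderAllScene = new RenderTarget2D(GraphicsDevice, mapWidthInPixels, mapHeightInPixels);

// In Draw:
GraphicsDevice.SetRenderTarget(renderAllScene); // step 2: redirect rendering to the texture
GraphicsDevice.Clear(Color.Black);
// ... draw the whole map and the game objects here ...
GraphicsDevice.SetRenderTarget(null);           // step 3: back to the back buffer

GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin();
spriteBatch.Draw(renderAllScene, Vector2.Zero, Color.White); // step 4: draw the texture
spriteBatch.End();
Making the render target as large as the whole map, rather than back-buffer sized, would likely also avoid the 1280-pixel cutoff mentioned in the edit above.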
Then, you use two 2D Cameras. One will display where the hero is, and the other one will display the event area.
A 2D camera is basically a trick with source and destination rectangles: use one source rectangle to define the area around the hero and the main viewport as its destination rectangle, and use another source rectangle to define the event area with the second viewport as its destination rectangle.
If you have doubts, google about "XNA 2D Camera", and research about Source and Destination rectangles on the MSDN's article for SpriteBatch.Draw().
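And a rough sketch of the source/destination-rectangle idea for the two viewports, reusing the question's renderAllScene, viewport1, viewport2 and EventIsRunning names (camera1Pos and camera2Pos are placeholder world positions, standing in for the camera transforms):
// Source rectangle: the part of the big map texture each "camera" is looking at.
Rectangle source1 = new Rectangle((int)camera1Pos.X, (int)camera1Pos.Y,
                                  viewport1.Width, viewport1.Height);
Rectangle source2 = new Rectangle((int)camera2Pos.X, (int)camera2Pos.Y,
                                  viewport2.Width, viewport2.Height);

spriteBatch.Begin();
// Destination rectangle: where on screen that slice is shown.
spriteBatch.Draw(renderAllScene, viewport1.Bounds, source1, Color.White);
if (EventIsRunning)
    spriteBatch.Draw(renderAllScene, viewport2.Bounds, source2, Color.White);
spriteBatch.End();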
So I'm working on a project where I have a 3D cube-based world. I got all of that to work, and I'm starting on the user interface. The moment I start using SpriteBatch to draw a cursor texture I have, XNA stops layering all the models correctly: some models that are farther away appear in front of nearer ones instead of behind them. When I take out all the SpriteBatch code, which is just this:
spriteBatch.Begin();
cursor.draw(spriteBatch);
spriteBatch.End();
I find that the problem is fixed immediately. The cursor is an object whose draw method just calls spriteBatch.Draw().
The way I see it, there are two solutions: I could find a way to draw my cursor and other interface elements without using SpriteBatch, or maybe there is a parameter to spriteBatch.Begin() that I could pass to fix the issue? I'm not sure how to do either of these. Has anyone else encountered this problem and knows how to fix it?
Thanks in advance.
I'm not sure if you could (or should) draw 2D without a SpriteBatch; however, I've had the same problem with 3D model rendering when using a 2D SpriteBatch, and the solution I found on GameDev helped me solve this:
Your device states are probably wrong. This often happens when mixing
2D and 3D (for example the overload for SpriteBatch.Begin() which
takes no arguments sets some device states that are incompatible with
3D rendering. No worries though, all you have to do is to make sure
that the following device states are set the way you want them:
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
Basically, you first call the SpriteBatch methods for your 2D draws, then the above code (which should ensure proper 3D rendering), and then draw your 3D models. In fact, I only used the first two lines (BlendState and DepthStencilState) and it worked as it should.
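A rough sketch of how those resets can sit in a Draw method. Note that this orders the 3D world first and the cursor last so the cursor stays on top (slightly different from the wording above, and in line with the edit in the answer below); DrawCubeWorld is a placeholder for the question's own 3D drawing code:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    // SpriteBatch.Begin() (this frame or a previous one) changes device states
    // that break 3D rendering, so restore them before drawing any models.
    GraphicsDevice.BlendState = BlendState.Opaque;
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;

    DrawCubeWorld();          // placeholder: the 3D cube-world rendering

    // 2D pass last so the cursor is drawn over the 3D scene.
    spriteBatch.Begin();
    cursor.draw(spriteBatch); // from the question's code
    spriteBatch.End();

    base.Draw(gameTime);
}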
Have you had a look at this overload?
You can use the last parameter, layerDepth, to control in what order sprites are drawn. If you use that, make sure to check out the sprite sort mode (in the SpriteBatch.Begin(...) call) as well. Does this cover what you need to do?
Edit: Also note that this implies using the correct perspective matrix, drawing in 2D (I'm assuming you want your cursor to display in 2D on top of everything else), and after all the 3D stuff (it's quite possible to draw sprites at Z=0 for example, making objects in front of that obstruct the sprite).
I'm attempting to change RenderTargets at runtime, so I can draw some elements at runtime, manipulate them and then finally draw the texture to the screen. Problem is, the screen turns purple if I change the RenderTarget at runtime. Here's the code I've got in Draw:
RenderTarget2D tempTarget = new RenderTarget2D(GraphicsDevice, 128, 128, 1,
GraphicsDevice.DisplayMode.Format, GraphicsDevice.PresentationParameters.MultiSampleType,
GraphicsDevice.PresentationParameters.MultiSampleQuality, RenderTargetUsage.PreserveContents);
GraphicsDevice.SetRenderTarget(0, tempTarget);
GraphicsDevice.Clear(ClearOptions.Target, Color.SpringGreen, 0, 0);
GraphicsDevice.SetRenderTarget(0, null);
It doesn't seem to matter how I create the RenderTarget, if I do it at runtime (and I do need to create in-memory textures at runtime and draw on them with SpriteBatch) it results in an entirely purple screen. What can I do to fix this?
It looks like the best option is to create the RenderTarget somewhere other than Draw, draw to it during Update, save the resulting texture (and manipulate as necessary) then draw that texture during Draw.
I know this is late, but the solution is to write to the RenderTarget BEFORE you clear the screen and begin drawing your other items.
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.SetRenderTarget(_renderTarget);
//...
//Perform Rendering to the specified target
//...
GraphicsDevice.SetRenderTarget(null);
GraphicsDevice.Clear(Color.CornflowerBlue);
//...
//Code that draws to the users screen goes here
//...
}
This avoids having to render in the Update method, as suggested by others, which is counter-intuitive in many respects.
When SpriteBatch.End() is called, the queued sprites are written to the back buffer, or in your case to tempTarget. To build the texture (see the sketch below):
1. change the render target
2. call Begin
3. call all of the Draws
4. end the SpriteBatch
5. set the target back to null
6. then use the RenderTarget2D as a texture
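A minimal sketch of that sequence, written against the XNA 4.0-style API used in the answer above (in the 3.1-era API from the question you would call SetRenderTarget(0, ...) and then tempTarget.GetTexture() instead; someTexture is a placeholder):
// 1. change the render target
GraphicsDevice.SetRenderTarget(tempTarget);
GraphicsDevice.Clear(Color.Transparent);

// 2-4. begin, queue the draws, and flush them with End
spriteBatch.Begin();
spriteBatch.Draw(someTexture, Vector2.Zero, Color.White);
spriteBatch.End();

// 5. set the target back to null (the back buffer)
GraphicsDevice.SetRenderTarget(null);

// 6. tempTarget can now be drawn like any Texture2D
spriteBatch.Begin();
spriteBatch.Draw(tempTarget, new Rectangle(0, 0, 128, 128), Color.White);
spriteBatch.End();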