Animating a sprite - C#

I'm developing a game like "Bubble Bobble". So far I have done physics and collision detection.
Now I want to animate my hero (a rectangle sprite). I would be glad if someone could explain a simple approach to scripting animated characters, or share some good links on animation.

The XNA Documentation includes an entire article on Animating a Sprite. The basic technique is to use an AnimatedTexture class, which is included within the Animated sprite sample code.

The high-level idea is that you load a texture into memory using a graphics API. Since you're using C#, this is most likely done through XNA.
The texture you load contains each frame of animation you need, and an animation may span multiple textures. When you render your 'sprite' object, you pass the XNA API the texture you want to use and a source rectangle whose coordinates enclose the specific frame of animation within that texture.
It's up to you to manage this process. I create tools that assemble these source rectangles and store metadata about each animation a sprite has: which rectangles it uses, the duration of each frame, and so on.
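As a rough sketch of that technique (assuming a horizontal strip of equally sized frames; the class name, frame count and timing are illustrative, not part of the original answer):

    // Minimal sprite-sheet animation sketch (XNA 4.0 style).
    // Assumes a horizontal strip texture of equally sized frames, e.g. "hero.png".
    public class AnimatedSprite
    {
        private readonly Texture2D sheet;
        private readonly int frameWidth, frameHeight, frameCount;
        private readonly float frameDuration;   // seconds per frame
        private float timer;
        private int currentFrame;

        public AnimatedSprite(Texture2D sheet, int frameCount, float frameDuration)
        {
            this.sheet = sheet;
            this.frameCount = frameCount;
            this.frameDuration = frameDuration;
            frameWidth = sheet.Width / frameCount;
            frameHeight = sheet.Height;
        }

        public void Update(GameTime gameTime)
        {
            timer += (float)gameTime.ElapsedGameTime.TotalSeconds;
            if (timer >= frameDuration)
            {
                timer -= frameDuration;
                currentFrame = (currentFrame + 1) % frameCount;   // loop the animation
            }
        }

        public void Draw(SpriteBatch spriteBatch, Vector2 position)
        {
            // The source rectangle selects the current frame within the sheet.
            var source = new Rectangle(currentFrame * frameWidth, 0, frameWidth, frameHeight);
            spriteBatch.Draw(sheet, position, source, Color.White);
        }
    }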

Related

Display graphics on top of OpenGL game

This is not really language-specific (although I am most comfortable in C#). I am basically trying to create an application that would display basic 2D graphics over an OpenGL application (specifically FlightGear). There would be no need to have mouse-interaction (e.g. clicks) with the overlay - it would be for informational purposes only.
What is the simplest way of going about this?
If I understand your question correctly, you just have to render your scene, and afterwards you can switch your camera transformation to a parallel (orthographic) projection sized to your viewport. You can then render simple rectangles with your 2D graphics applied to them as textures.
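If you are driving the rendering yourself from C# (for example through OpenTK's legacy GL bindings), a minimal sketch of that overlay pass might look like the following; viewportWidth, viewportHeight and overlayTexture are assumptions standing in for your own window size and texture handle:

    // Sketch: after rendering the 3D scene, switch to a parallel (orthographic)
    // projection matching the viewport and draw textured 2D quads on top.
    // Uses OpenTK's legacy GL bindings; the variable names are placeholders.
    GL.MatrixMode(MatrixMode.Projection);
    GL.PushMatrix();
    GL.LoadIdentity();
    GL.Ortho(0, viewportWidth, viewportHeight, 0, -1, 1);   // top-left origin

    GL.MatrixMode(MatrixMode.Modelview);
    GL.PushMatrix();
    GL.LoadIdentity();

    GL.Disable(EnableCap.DepthTest);        // the overlay should not be depth-tested
    GL.Enable(EnableCap.Texture2D);
    GL.BindTexture(TextureTarget.Texture2D, overlayTexture);

    GL.Begin(PrimitiveType.Quads);          // one textured rectangle for the overlay
    GL.TexCoord2(0, 0); GL.Vertex2(10, 10);
    GL.TexCoord2(1, 0); GL.Vertex2(210, 10);
    GL.TexCoord2(1, 1); GL.Vertex2(210, 110);
    GL.TexCoord2(0, 1); GL.Vertex2(10, 110);
    GL.End();

    GL.Enable(EnableCap.DepthTest);

    GL.PopMatrix();                         // restore modelview
    GL.MatrixMode(MatrixMode.Projection);
    GL.PopMatrix();                         // restore projection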

Drawing in Unity using C#

I currently have a WinForms C# application. I have a .txt file with GPS coordinates of some points (borders of buildings, roads, etc.), and I draw them on a Panel using the Graphics.FillPolygon method from System.Drawing. However, it occurred to me that for what I am trying to do (it is basically a 2D game), a Unity 2D project would be more suitable and easier to use (mainly because of the easier view handling using cameras). However, I don't know how to do this kind of drawing in Unity. I just need to somehow get these coordinates drawn, but I don't know how.
Note: I already have some minor experience with 3D in Unity, but I am a total newbie to 2D.
Thanks
There is no GDI/canvas implementation built into Unity. Drawing these shapes in rasterized 2D would be quite hard (basically put-pixel level work, or finding some third-party library).
You have an alternative, though - you can draw these shapes in 3D, ignoring the third dimension. You have a few methods for this:
Line renderers - this one will be the easiest (see the sketch after this list), but forget about anything other than drawing the outline; there is no fill.
Make your own mesh - this will let the geometry interact with your scene. It's a retained-mode API (you initialize your geometry once).
Use the GL namespace - this will only be rendered to the screen, with no interaction with the scene. It's an immediate-mode API (you draw everything on every frame).
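A minimal LineRenderer sketch, assuming you have already converted the GPS coordinates of one shape into a list of Vector2 points (the conversion itself is not shown):

    using System.Collections.Generic;
    using UnityEngine;

    // Draws the outline of one polygon (e.g. a building footprint) with a LineRenderer.
    // Assumes "points" already holds this polygon's coordinates converted from GPS.
    public class PolygonOutline : MonoBehaviour
    {
        public List<Vector2> points = new List<Vector2>();

        void Start()
        {
            var lr = gameObject.AddComponent<LineRenderer>();
            lr.material = new Material(Shader.Find("Sprites/Default"));
            lr.positionCount = points.Count;
            lr.loop = true;                         // close the outline
            lr.widthMultiplier = 0.1f;
            lr.useWorldSpace = false;

            for (int i = 0; i < points.Count; i++)
            {
                // Ignore the third dimension: put every vertex on the z = 0 plane.
                lr.SetPosition(i, new Vector3(points[i].x, points[i].y, 0f));
            }
        }
    }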

Ideas for creating software that shows views of cubes?

I'm planning to draw these shapes in WPF. It's for educational software.
What ideas do you have on how to implement these cubes and views?
At first I was planning to draw them on a Canvas, but I suspect that would take forever. Is there a library that could help me draw them?
The ability to draw 3D shapes (such as cubes) and render them from different angles is built right into WPF. From the look of your cubes, you're going to want an orthographic camera, rather than a perspective camera, because the lines composing your cubes are parallel.
You might also find the Petzold.Media3D library helpful, because it has objects like cubes built in (you don't have to write your own algorithms to build them).
Finally, you might find this primer helpful for getting started with WPF 3D.
Once you have some idea of how to use 3D, it will just be a matter of putting cubes in your scene at the proper locations and positioning the cameras properly for viewing the cubes. You will probably want to keep reusing the same four camera positions: one for the "3D view", and one each for the top, side, and front views.
This should be a lot less work than trying to draw the cubes using 2D.
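As a rough illustration of the orthographic setup (a sketch only; the cube size, camera position and material are assumptions, and only one face of the cube is built to keep it short):

    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Media.Media3D;

    // Sketch: a cube face viewed through an OrthographicCamera so its edges stay parallel.
    // e.g. inside a Window constructor; afterwards set this.Content = viewport.
    Viewport3D viewport = new Viewport3D();

    // One of the four camera setups you would reuse ("3D", top, side, front views).
    viewport.Camera = new OrthographicCamera(
        new Point3D(3, 3, 3),            // position
        new Vector3D(-1, -1, -1),        // look direction
        new Vector3D(0, 1, 0),           // up direction
        4);                              // width of the viewing box

    var mesh = new MeshGeometry3D();
    // Front face of a unit cube; the other five faces follow the same pattern.
    mesh.Positions.Add(new Point3D(0, 0, 1));
    mesh.Positions.Add(new Point3D(1, 0, 1));
    mesh.Positions.Add(new Point3D(1, 1, 1));
    mesh.Positions.Add(new Point3D(0, 1, 1));
    mesh.TriangleIndices = new Int32Collection(new[] { 0, 1, 2, 0, 2, 3 });

    var model = new GeometryModel3D(mesh, new DiffuseMaterial(Brushes.LightBlue));
    viewport.Children.Add(new ModelVisual3D { Content = model });
    viewport.Children.Add(new ModelVisual3D
    {
        Content = new DirectionalLight(Colors.White, new Vector3D(-1, -1, -1))
    });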

XNA 2D Shaders and SpriteSortMode

So I've been trying to wrap my head around shaders in 2D in XNA.
http://msdn.microsoft.com/en-us/library/bb313868(v=xnagamestudio.31).aspx
This link above stated that I needed to use SpriteSortMode.Immediate, which is a problem for me because I have developed a parallax system that relies on deferred rendering (SpriteSortMode.BackToFront).
Can someone explain the significance of SpriteSortMode when using shaders, whether Immediate really is mandatory, whether there is anything I can do to keep my current system, and anything else in general that would help me understand this better?
In addition, my game runs off of multiple SpriteBatch.Begin()/End() calls (for drawing background, then the game, then the foreground and HUD and etc). I've noticed that the Bloom example does not work in that case, and I am guessing it has something to do with this.
In general, I've been having some trouble understanding these concepts. I know what a shader is and how it works, but I don't know how it interacts with XNA and what goes on there. I would really appreciate some enlightenment. :)
Thanks SO!
The sort mode will matter if you are attempting to do something non-trivial like render layered transparency or make use of a depth buffer. I'm going to assume you want to do something non-trivial since you want to use a pixel shader to accomplish it.
SpriteSortMode.Immediate will draw things in exactly the order of the draw calls. If you use another mode, SpriteBatch will group the draw calls to the video card by texture if it can. This is for performance reasons.
Keep in mind that every time you call SpriteBatch.Begin you are applying a new pixel shader and discarding the one previously set. (Even if the new one is just SpriteBatch's standard pixel shader that applies a Tint color.) Additionally, remember that by calling SpriteBatch.End you are telling the video card to execute all of the current SpriteBatch commands.
This means that you could potentially keep your existing sorting method, if your fancy pixel shaders are of limited scope. In other words, draw your background with one Effect and then your foreground and characters with another. Each Begin/End call to SpriteBatch can be treated separately.
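For example (a sketch using the XNA 4.0 Begin overload; backgroundEffect and characterEffect are hypothetical placeholders for your own effects):

    // Each Begin/End pair is its own batch with its own Effect (or none).
    spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend,
                      null, null, null, backgroundEffect);
    spriteBatch.Draw(backgroundTexture, Vector2.Zero, Color.White);
    spriteBatch.End();

    spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend,
                      null, null, null, characterEffect);
    spriteBatch.Draw(heroTexture, heroPosition, Color.White);
    spriteBatch.End();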
If your goal is to apply one Effect (such as heat waves or bloom) to everything on the screen you have another option. You could choose to render all of your graphics onto a RenderTarget that you create instead of directly to the video card's backbuffer. If you do this, at the end of your rendering section you can call GraphicsDevice.SetRenderTarget(null) and paint your completed image to the backbuffer with a custom shader that applies to the entire scene.
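A sketch of that render-target approach (XNA 4.0-style API; DrawBackground, DrawGame, DrawHud and fullScreenEffect are placeholders for your own drawing code and full-screen Effect):

    // Created once, e.g. in LoadContent: a render target the size of the backbuffer.
    RenderTarget2D sceneTarget = new RenderTarget2D(GraphicsDevice,
        GraphicsDevice.PresentationParameters.BackBufferWidth,
        GraphicsDevice.PresentationParameters.BackBufferHeight);

    // In Draw: render the whole scene into the target, with all your usual Begin/End calls.
    GraphicsDevice.SetRenderTarget(sceneTarget);
    GraphicsDevice.Clear(Color.CornflowerBlue);
    DrawBackground();   // your existing batches, any sort mode
    DrawGame();
    DrawHud();

    // Switch back to the backbuffer and draw the finished scene once, through the full-screen effect.
    GraphicsDevice.SetRenderTarget(null);
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                      null, null, null, fullScreenEffect);   // e.g. a bloom or heat-wave Effect
    spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
    spriteBatch.End();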
I'm not one hundred percent sure how much the sprite sort mode affects shaders; I would think it varies depending on what you are using the shader for.
As for bloom: if you're using multiple Begin/End calls (which you really want to minimise if you can), you can create a render target the size of the screen and draw everything as you do now. Then, at the very end, switch that render target back (using GraphicsDevice.SetRenderTarget(null)) and draw the full-screen render target (at position 0,0) with the bloom shader. That way you bloom the entire scene, regardless of which sort modes its components use.

Silverlight Collision Detection when controls are in different canvasses/layers

For our game we have to create collision detection.
The problem is that the colliding objects are in different canvases/layers, which makes collision detection by point location impossible.
Does anyone have an idea how to solve this?
It's hard to give a great answer without some more information, but if all your layers are the same size then you can just roll your own collision detection. All you need to know is the location and size of the two things being tested. Then you just check whether one rectangle intersects the other.
There is also a function called TranslatePoint that might be useful. It translates coordinates from one UIElement to another. So if you had a ball bouncing around in a smaller area of the screen with its own local coordinate system, you could get the ball's coordinates relative to the entire screen with this function.
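A minimal sketch of such a rectangle test, using TransformToVisual (the Silverlight counterpart of WPF's TranslatePoint) to bring both elements' bounds into the same root coordinate space; the helper name and parameters are illustrative:

    using System.Windows;
    using System.Windows.Media;

    // Returns true if the bounding boxes of a and b overlap, measured in root's coordinate space.
    // a and b can live on different Canvases/layers; TransformToVisual handles the conversion.
    static bool Intersects(FrameworkElement a, FrameworkElement b, UIElement root)
    {
        Point topLeftA = a.TransformToVisual(root).Transform(new Point(0, 0));
        Point topLeftB = b.TransformToVisual(root).Transform(new Point(0, 0));

        Rect rectA = new Rect(topLeftA.X, topLeftA.Y, a.ActualWidth, a.ActualHeight);
        Rect rectB = new Rect(topLeftB.X, topLeftB.Y, b.ActualWidth, b.ActualHeight);

        rectA.Intersect(rectB);              // becomes Rect.Empty if they do not overlap
        return !rectA.IsEmpty;
    }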
Can I suggest you try using the Farseer physics engine just to save yourself some pain?
http://farseerphysics.codeplex.com/
It's very good and is in use in some WP7 games already.
http://www.farseergames.com/
There's also some Blend behaviors and helpers to make using it even easier:
http://physicshelper.codeplex.com/
