I'm working on a personal project that, like many XNA projects, started with a terrain displacement map that is used to generate a collection of vertices, which are then rendered in a Device.DrawIndexedPrimitives() call.
I've updated to a custom VertexDeclaration, but I don't have access to that code right now, so I will post the slightly older, but paradigmatically identical (?) code.
I'm defining a VertexBuffer as:
VertexBuffer = new VertexBuffer(device, VertexPositionNormalTexture.VertexDeclaration, vertices.Length, BufferUsage.WriteOnly);
VertexBuffer.SetData(vertices);
where 'vertices' is defined as:
VertexPositionNormalTexture[] vertices
I've also got two index buffers that are swapped on each Update() iteration. In the Draw() call, I set the GraphicsDevice buffers:
Device.SetVertexBuffer(_buffers.VertexBuffer);
Device.Indices = _buffers.IndexBuffer;
Ignoring what I hope are irrelevant lines of code, I've got a method that checks within a bounding shape to determine whether a vertex is within a certain radius of the mouse cursor and raises or lowers those vertex positions depending upon which key is pressed. My problem is that the VertexBuffer.SetData() is only called once at initialization of the container class.
Modifying the vertex positions in the VertexPositionNormalTexture[] array isn't reflected on screen, even though the values of the vertex positions do change. I believe this is tied to the VertexBuffer.SetData() call, but you can't simply call SetData() with the vertex array after modifying it.
After re-examining how the IndexBuffer is handled (2 buffers, swapped and passed into SetData() at Update() time), I'm thinking this should be the way to handle VertexBuffer manipulations, but does this work? Is there a more appropriate way? I saw another reference to a similar question on here, but the link to source was on MegaUpload, so...
I'll try my VertexBuffer.Swap() idea out, but I have also seen references to DynamicVertexBuffer and wonder what the gain there is? Performance supposedly suffers, but for a terrain editor, I don't see that as being too huge a trade-off if I can manipulate the vertex data dynamically.
I can post more code, but I think this is probably a lack of understanding of how the device buffers are set or data is streamed to them.
EDIT: The solution proposed below is correct. I will post my code shortly.
First: I am assuming you are not adding or removing vertices from the terrain. If you aren't, you won't need to alter the index buffer at all.
Second: you are correct in recognizing that simply editing your array of vertices will not change what is displayed on screen. A VertexBuffer is entirely separate from the vertices it is created from and does not keep a reference to the original array of them. It is a 'snapshot' of your vertices when you set the data.
I'm not sure about some of what seem to be assumptions you have made. You can, as far as I am aware, call VertexBuffer.SetData() at any time. If you are not changing the number of vertices in your terrain, only their positions, this is good. Simply re-set the data in the buffer every time you change the position of a vertex. [Note: if I am wrong and you can only set the data on a buffer once, then just replace the old instance of the buffer with a new one and set the data on that. I don't think you need to, though, unless you've changed the number of vertices]
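For example, a minimal sketch of that re-upload, assuming 'vertices' is the same VertexPositionNormalTexture[] the buffer was created from (the brush check and heightDelta are placeholders for your own editing logic):

for (int i = 0; i < vertices.Length; i++)
{
    if (IsInsideBrushRadius(vertices[i].Position))   // placeholder radius test around the cursor
        vertices[i].Position.Y += heightDelta;       // raise or lower the vertex
}

// Re-upload the whole array to the existing buffer; no need to recreate it
// as long as the vertex count has not changed.
_buffers.VertexBuffer.SetData(vertices);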
Calling SetData is fairly expensive for a large buffer, though. You may consider 'chunking' your terrain into many smaller buffers to avoid the overhead required to set the data upon changing the terrain.
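A rough sketch of that chunking idea, assuming a hypothetical TerrainChunk type that owns its own vertex array and VertexBuffer, so an edit only re-uploads the chunks it actually touched:

class TerrainChunk
{
    public VertexPositionNormalTexture[] Vertices;
    public VertexBuffer Buffer;
    public bool Dirty;   // set to true when an edit touches this chunk
}

void FlushDirtyChunks(IEnumerable<TerrainChunk> chunks)
{
    foreach (var chunk in chunks)
    {
        if (!chunk.Dirty)
            continue;
        chunk.Buffer.SetData(chunk.Vertices);   // only the edited chunk pays the upload cost
        chunk.Dirty = false;
    }
}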
I do not know much about the DynamicVertexBuffer class, but I don't think it's optimal for this situation (even if it sounds like it is). I think it's more used for particle vertices. I could be wrong, though. Definitely research it.
Out of curiosity, why do you need two index buffers? If your vertices are the same, why would you use different indices per frame?
Edit: Your code for creating the VertexBuffer uses BufferUsage.WriteOnly. Good practice is to make the BufferUsage match that of the GraphicsDevice. If you haven't set the BufferUsage of the device, you probably just want to use BufferUsage.None. Try both and check performance differences if you like.
I'm currently working on my weather system for a 2D Unity game project. What I am trying to accomplish is randomising which layer a particle will spawn on, so that sometimes it will appear behind other sprites such as the player or buildings, sometimes in front, and sometimes collide with them (on the same layer).
I'm relatively new to the particle system in general but after much playing around have accomplished my goals apart from this one.
I basically want to be able to randomise each particle to have, for instance, a Z value of -1, 0 or 1, with the player/other objects potentially being on 0, where the particles will collide with them. To visualise, here is the behaviour I currently have, where every particle collides with the objects, when I want roughly 1/3 to:
https://i.imgur.com/vplFcye.gifv
Perhaps rather than working with the Z positional values, I should be thinking more in terms of layers? That is, randomising the layer each particle spawns on and therefore what it can interact with. Either way, I cannot find a way to script, or to set within the inspector, the ability to change this layer/Z value.
I have considered using three particle systems, which I believe would accomplish my goals, but that is not ideal and, I'd assume, potentially detrimental to performance?
If anyone has any insight, I'd appreciate it; perhaps I've just missed a really obvious variable or setting in the inspector.
Thanks.
Perhaps not the most elegant solution for this, but you could set your emission Shape to Mesh, with Type set to Edge, and use a custom mesh that's 3 separate rectangles, one each for the front/back/middle layer. I tested this and it seems to work well:
The only issue I had is that I originally tried this with a model that was just 3 edges, but found that the Unity importer strips edges not attached to faces, so I had to make them three near-zero-width faces in my mesh instead.
I'm creating a tool that converts hi-poly meshes to low-poly meshes and I have some best practice questions on how I want to approach some of the problems.
I have some experience with C++ and DirectX, but I prefer to use C#/WPF to create this tool. I'm also hoping that C# has some rich libraries for opening, displaying and saving 3D models. This brings me to my first question:
Best approach for reading, viewing and saving 3d models
To display 3D models in my WPF application, I'm thinking about using the Helix 3D toolkit.
To read vertex data from my 3D models I'm going to write my own .OBJ reader, because I'll have to optimize the vertices and write everything back out anyway.
Best approach for optimizing the 3d model
For optimization, things will get tricky, especially when dealing with tons of vertices and tons of changes. I guess I'll keep it simple at the start: try to detect whether an edge lies on the same slope as its adjacent edges, remove that redundant edge, and retriangulate.
In later stages I also want to create LODs to simplify the model by doing the opposite of what a turbosmooth modifier does in Max (inverse interpolation). I have no real clue how to start on this right now but I'll look around online and experiment a little.
And lastly, I have to save the model and make sure everything still works.
For viewing 3D objects you can also consider the Ab3d.PowerToys library - it is not free, but greatly simplifies work with WPF 3D and also comes with many samples.
The OBJ format is good because it is very commonly used and has a very simple structure that is easy to read and write. But it does not support object hierarchies, transformations, animations, bones, etc. If you need any of those, then you will need to use some other data format.
I do not have any experience with optimizing hi-poly meshes, so I cannot give you any advice there. I can only say that you may also consider combining meshes that share the same material into one mesh - this can reduce the number of draw calls and improve performance.
My main advice is on how to write your code to make it perform better in WPF 3D. Because you will need to check and compare many vertices, you need to avoid getting data from the MeshGeometry3D.Positions and MeshGeometry3D.TriangleIndices collections - accessing a single value from those collections is very slow (you may check the .Net source and see how many lines of code are behind each get).
Therefore I would recommend having your own mesh structure with Lists (List<Point3D> and List<int>) for Positions and TriangleIndices. In my observations, Lists of structs are faster than simple arrays of structs (but the lists must be presized - their size needs to be set in the constructor). This way you can access the data much faster. When an extra boost is needed, you may also use unsafe blocks with pointers. You may also add other data to your mesh classes - for example, the adjacent edges you mentioned.
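A minimal sketch of such an intermediate mesh structure (the class and field layout here are just an illustration), with the lists presized in the constructor so they never have to grow:

public class EditableMesh
{
    public readonly List<Point3D> Positions;
    public readonly List<int> TriangleIndices;

    public EditableMesh(int positionCount, int indexCount)
    {
        // Presizing avoids repeated internal reallocations while filling the lists
        Positions = new List<Point3D>(positionCount);
        TriangleIndices = new List<int>(indexCount);
    }
}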
Once you have your positions and triangle indices set, you can create the WPF's MeshGeometry3D object with the following code:
var wpfMesh = new MeshGeometry3D()
{
Positions = new Point3DCollection(optimizedPositions),
TriangleIndices = new Int32Collection(optimizedTriangleIndices)
};
This is faster than adding each Point3D to Positions collection.
Because you will not change that instance of wpfMesh (for each change you will create a new MeshGeometry3D), you can freeze it - call Freeze() on it. This allows WPF to optimize the meshes (combine them into vertex buffers) to reduce the number of draw calls. What is more, after you freeze a MeshGeometry3D (or any other WPF object), you can pass it from one thread to another. This means that you can parallelize your work and create the MeshGeometry3D objects in worker threads and then pass them to UI thread as frozen objects.
The same applies to changing the Positions (and other data) in a MeshGeometry3D object. It is faster to copy the existing positions to an array or List, change the data there, and then recreate the Positions collection from your array, than to change each individual position. Before making any change to a MeshGeometry3D you also need to disconnect it from the parent GeometryModel3D to prevent triggering many change events. This is done with the following:
var mesh = parentGeometryModel3D.Geometry; // Save MeshGeometry3D to mesh
parentGeometryModel3D.Geometry = null; // Disconnect
// modify the mesh here ...
parentGeometryModel3D.Geometry = mesh; // Connect the mesh back
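Putting those two points together, a sketch of the copy-out/modify/recreate approach inside that disconnect/reconnect pattern could look like this (the loop body is placeholder logic, and it assumes the mesh has not been frozen):

var mesh = (MeshGeometry3D)parentGeometryModel3D.Geometry;
parentGeometryModel3D.Geometry = null;                  // disconnect to avoid change events

var positions = new Point3D[mesh.Positions.Count];
mesh.Positions.CopyTo(positions, 0);                    // copy the positions out once

for (int i = 0; i < positions.Length; i++)
    positions[i].Z += 0.1;                              // placeholder modification

mesh.Positions = new Point3DCollection(positions);      // recreate the collection in one go
parentGeometryModel3D.Geometry = mesh;                  // connect the mesh back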
I'm making a game and need to store a fair bit of animation data. For each frame I have about 15 values to store. My initial idea was to have a list of "frame" objects that contain those values. Then I thought I could also just have separate lists for each of the values and skip the objects altogether. The data will be loaded once from an XML file when the game is started.
I'm just looking for advice here, is either approach at all better (speed, memory usage, ease of use, etc) than the other?
Sorry if this is a dumb question, still pretty new and couldn't find any info on stuff like this.
(PS: this is a 2D game with sprites, so 1 frame != 1 frame of screen time. I estimate somewhere around 500-1000 frames total)
If the animation data is not changing, you could use a struct instead of a class, which combines the "namespacing" of objects with the "value-typeness" of primitives. That will make all the values of a single frame reside in the same space, and save you some memory and page faults.
Just make sure that the size of your arrays doesn't get you into the Large Object Heap (LOH) if you intend to allocate and deallocate them often.
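As a minimal sketch, a per-frame struct of this kind (the field names are made up; substitute your ~15 values) kept in a single presized array would look like:

public struct AnimationFrame
{
    public float OffsetX, OffsetY;   // made-up field names, stand-ins for your per-frame values
    public float Rotation;
    public float Scale;
}

// One contiguous block of memory for all frames, filled once from the XML file
AnimationFrame[] frames = new AnimationFrame[800];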
You are creating an animation, so you should keep in mind a few basic things about .NET:
A struct is a value type; a local struct variable lives on the stack, so it is faster to instantiate (and destroy) a struct than a class.
On the other hand, every time you assign a struct or pass it to a function, it gets copied.
You can still pass a struct by reference simply by specifying ref in the function parameter list.
So it is better to keep these things in mind.
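To illustrate the copy-vs-ref point with the AnimationFrame struct sketched above:

static void ApplyByValue(AnimationFrame frame)   { frame.Rotation += 1f; }   // modifies only a local copy
static void ApplyByRef(ref AnimationFrame frame) { frame.Rotation += 1f; }   // modifies the caller's frame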
In OpenGL we could create some polygons and group them together using the 'pushMatrix()' function, and then rotate and move them as one object.
Is there a way to do this with XNA? If I have 3 polygons and I want to rotate and move them all together as a group, how can I do that?
EDIT:
I am using primitive shapes to build a skeleton of a basketball player. The game will only be a shoot-out game at the basket, which means the player will only have to move his arm.
I need full control over the arm parts, and in order to get that, I need to move the arm, which is built from primitive shapes, as one coordinated unit. I've tried implementing a MatrixStack for performing the matrix transformations, but with no success. Any suggestions?
I will answer this in basic terms, as I can't quite glean from your question how well versed you are with XNA or graphics development in general. I'm not even sure where your problem is: is it the code, the structure, or how XNA works compared to OpenGL?
The short answer is that there is no matrix stack built in.
What you do in OpenGL and XNA/DX is the very same thing when working with matrices. What you do with pushMatrix is actually only preserving the matrix (transformation) state on a stack for convenience.
Connecting objects as a group is merely semantics; you don't actually connect them as a group in any real way. What you're doing is setting a render state which is used by the GPU to transform and draw vertices for every draw call thereafter, until that state is changed again. This can be done in XNA/DX in the same way as in OpenGL.
Depending on what you're using to draw your objects, there are different ways of applying transformations. From your description I'm guessing you're using DrawPrimitives (or something like that) on the GraphicsDevice object, but whichever you're using, it'll use whatever transformation has been previously applied, normally on the Effect. The simplest of these is the BasicEffect, which has three members you'd be interested in:
World
View
Projection
If you use the BasicEffect, you merely apply your transform using a matrix in the World member. Anything that you draw after having applied your transforms to your current effect will use those transforms. If you're using a custom Effect, you do something quite like it except for how you set the matrix on the effect (using the parameters collection). Have a look at:
http://msdn.microsoft.com/en-us/library/bb203926(v=xnagamestudio.40).aspx
http://msdn.microsoft.com/en-us/library/bb203872(v=xnagamestudio.40).aspx
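As a small illustration of the difference: with BasicEffect the world transform is a property, while with a custom Effect you go through its parameters collection (the parameter name "World" depends on what your own .fx file declares):

basicEffect.World = groupTransform;                           // BasicEffect exposes World directly

customEffect.Parameters["World"].SetValue(groupTransform);    // custom Effect: set it via the parameters collection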
If what you're after is an actual transform stack, you'll have to implement one yourself, although this is quite simple. Something like this:
Stack<Matrix> matrixStack = new Stack<Matrix>();
...
matrixStack.Push( armMatrix );
...
basicEffect.World = matrixStack.Peek();   // the top of the stack becomes the world transform

foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    graphics.GraphicsDevice.DrawPrimitives(...);
}
...
matrixStack.Pop();
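As a rough example of using that stack for the arm in your question (DrawPart is a hypothetical helper that sets basicEffect.World from the given matrix and issues the draw call), each part is drawn with its local transform multiplied by everything pushed so far, so rotating the shoulder carries the forearm along with it:

matrixStack.Push(Matrix.Identity);

matrixStack.Push(shoulderRotation * matrixStack.Peek());               // enter shoulder space
DrawPart(upperArmVertices, matrixStack.Peek());

matrixStack.Push(elbowRotation * elbowOffset * matrixStack.Peek());    // rotate at the elbow, offset from the shoulder
DrawPart(forearmVertices, matrixStack.Peek());
matrixStack.Pop();                                                     // back to shoulder space

matrixStack.Pop();                                                     // back to identity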
I want to optimize my basic XNA engine. The structure is roughly this: I have a GameWorld instance with a number of GameObjects attached to it. Every frame I loop over the GameObjects and call the draw method on each of them. The downside of this implementation is that the GraphicsDevice draw function is called multiple times, once for every object.
Now I want to reduce the draw calls by implementing a structure that, before the draw method is called, gathers all the geometry into one big array containing all the vertex data and then issues a single draw call to render it all.
Is that an efficient approach? Can someone suggest a good way to optimize this?
Thanks
The first step is to reduce the number of objects you are drawing. There are many ways to do this, most commonly:
Frustum culling - i.e. cull all objects outside of the view frustum
Scene queries - e.g. organise your scene using a BSP tree or a QuadTree - some data structure that gives you the ability to reduce the potentially visible set of objects
Occlusion culling - more advanced topic but in some cases you can determine an object is not visible because it is occluded by other geometry.
There are loads of tutorials on the web covering all these. I would attack them in the order above, and probably ignore occlusion culling for now. The most important optimisation in any graphics engine is that the fastest primitive to draw is the one you don't have to draw.
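For instance, a minimal frustum-culling sketch in XNA could look like this (the Bounds property is an assumption; use whatever bounding volume your objects expose):

var frustum = new BoundingFrustum(viewMatrix * projectionMatrix);

foreach (var obj in gameObjects)
{
    if (frustum.Intersects(obj.Bounds))   // only draw objects that touch the view frustum
        obj.Draw();
}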
Once you have your potentially visible set of objects, it is fine to send them all to the GPU individually, but you must ensure that you do so in a way that minimises state changes on the GPU - e.g. group all objects that use the same texture/material properties together.
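One simple way to approach that grouping, as a sketch (the TextureId property is hypothetical), is to sort the visible set before drawing so that objects sharing a texture are drawn back to back:

visibleObjects.Sort((a, b) => a.TextureId.CompareTo(b.TextureId));

foreach (var obj in visibleObjects)
    obj.Draw();   // the texture/material only needs to be re-set when it actually changes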
Once that is done you should find everything is pretty fast. Of course you can always take it further but the above steps are probably the best way to start.
Just to make the point clear - don't just assume that fewer draw calls = faster. Of course it depends on many factors, including hardware, but generally the XNA/DirectX API is pretty good at queueing geometry through the pipeline - this is what it's for, after all. The key is not minimising calls but minimising the amount of state change (textures/shaders etc.) required across the scene.