Is it a substitute for the deprecated OpenTK.Graphics.Glu from the Tao framework?
Especially with regard to geometry rendering.
OpenTK is an evolution of Tao. And, honestly, I hadn't heard that Glu was becoming deprecated. Glu functions are still widely used in the OpenGL world, as they contain a wide range of very useful functions and calculations that you may want to avoid writing yourself if you're not a math expert but just a programmer.
By the way, everything present in Glu can also be done without it (at least I haven't found anything that can't), but, as I said before, you have to have a very good understanding of the math that stands behind all that stuff.
OpenTK has its own preferred ways of doing geometry rendering and the like. You can, of course, still use the Glu class in the OpenTK.Graphics namespace; however, OpenTK would prefer you (I think) to change this:
Glu.Perspective(MathHelper.PiOver4, AspectRatio, 0.1f, 100f);
into this:
Matrix4 perspectiveMatrix = Matrix4.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    AspectRatio,
    0.1f, 100f);
GL.LoadMatrix(ref perspectiveMatrix);
And you can replace the gluLookAt function like this too. Have a look into the Matrix4 struct. It contains a lot of useful stuff.
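For example, a minimal sketch of the gluLookAt replacement (eyePosition and targetPosition are illustrative placeholders, not from the original):
Matrix4 lookAt = Matrix4.LookAt(eyePosition, targetPosition, Vector3.UnitY);
GL.LoadMatrix(ref lookAt); // assumes the modelview matrix mode is current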
We have a project that currently uses DirectX 11 via SlimDX and would like to move it to SharpDX. However, this project uses the Effect framework from SlimDX, which I understand is no longer properly supported in DirectX 11, and I can't find definitive information about how I should transition the effects.
The effects in use are relatively simple pixels shaders, contained in .fx files.
Should I move away from .fx files? What to? To plain .hlsl files? Or should I use the SharpDX Toolkit? Does this use a different format from .fx files?
I can't find any documentation on this. Is there anyone who has made this transition who could give me some advice, or any documentation on the SharpDX Toolkit effects framework?
The move to SharpDX is pretty simple; there are a couple of changes in naming and in resource descriptions, but apart from the fact that it's relatively cumbersome (depending on the size of your code base), there's nothing too complex.
As for the effects framework, the SharpDX.Direct3D11.Effects library wraps it, so it is of course still supported.
It's pretty much the same as its SlimDX counterpart, so you should not have any major issues moving from it.
If you want to transition away from the fx framework to plain HLSL, you can keep the same fx file; only the compilation steps change: instead of compiling the whole file, you need to compile each shader separately.
So for example, to compile and create a VertexShader:
// Compile the "VS" entry point against shader model 5.0, then create the shader object
CompilationResult result = ShaderBytecode.Compile(content, "VS", "vs_5_0", flags, EffectFlags.None, null, null);
VertexShader shader = new VertexShader(device, result.Bytecode);
You also need to be careful with constant buffer/resource registers; it's generally good to set them explicitly, for example:
cbuffer cbData : register(b0)
{
    float4x4 tW;
    float4x4 tColor;
    float4 cAmb;
};
Of course you no longer have EffectVariable and its get-by-name/semantic lookups, so instead you need to map your cbuffer to a struct in C# (you can also use a DataStream directly) and create constant buffer resources:
[StructLayout(LayoutKind.Sequential, Pack = 16)]
public struct cbData
{
    public Matrix tW;
    public Matrix tColor;
    public Vector4 cAmb;
}
BufferDescription bd = new BufferDescription()
{
    BindFlags = BindFlags.ConstantBuffer,
    CpuAccessFlags = CpuAccessFlags.Write,
    OptionFlags = ResourceOptionFlags.None,
    SizeInBytes = 144, // matches the cbData struct above (two 64-byte matrices + one 16-byte vector)
    Usage = ResourceUsage.Dynamic
};
var cbuffer = new SharpDX.Direct3D11.Buffer(device, bd);
Use either UpdateSubresource or MapSubresource to update the data, and deviceContext.VertexShader.SetConstantBuffer to bind it to the pipeline.
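A hedged sketch of the map/write/bind sequence (context and data are illustrative names):
// Map the dynamic buffer, write the struct, then bind it to register b0
DataStream stream;
context.MapSubresource(cbuffer, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out stream);
stream.Write(data); // 'data' is an instance of the cbData struct above
context.UnmapSubresource(cbuffer, 0);
context.VertexShader.SetConstantBuffer(0, cbuffer); // slot 0 matches register(b0)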
If you need to inspect a shader with reflection, it's done this way (please note that this is actually what the effects framework does; it's just a layer on top of d3dcompiler):
ShaderReflection refl = new ShaderReflection(result.Bytecode);
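From the reflection object you can, for example, read back a constant buffer's layout to check it against your C# struct (a hedged sketch using SharpDX.D3DCompiler types):
ConstantBuffer cb = refl.GetConstantBuffer(0); // first cbuffer in the shader
int sizeInBytes = cb.Description.Size;         // should match SizeInBytes above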
You then need to set up all the API calls manually (which is what Effects does for you when you call EffectPass.Apply).
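As a rough illustration, the manual equivalent of applying a pass and drawing might look like this (a sketch only; layout, shader objects and vertexCount are placeholders):
context.InputAssembler.InputLayout = layout;
context.VertexShader.Set(vertexShader);
context.VertexShader.SetConstantBuffer(0, cbuffer);
context.PixelShader.Set(pixelShader);
context.Draw(vertexCount, 0);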
Also, since you compile shaders individually, there is no more layout validation between stages (the effects compiler giving you: No valid VertexShader-PixelShader combination...). So you need to be careful not to set up your pipeline with non-matching shaders (you can use reflection data to validate manually, or watch a black screen while the debug runtime spams your output window in Visual Studio).
So transitioning can be a bit tedious, but it can also be beneficial, since it's easier to minimize pipeline state changes (in my use case this is not a concern, so the effects framework does just fine, but if you have a high number of draw calls it can become significant).
In OpenGL we could create some polygons and connect them as a group using glPushMatrix(), and then we could rotate and move them as one object.
Is there a way to do this with XNA? If I have 3 polygons and I want to rotate and move them all together as a group, how can I do that?
EDIT:
I am using primitive shapes to build the skeleton of a basketball player. The game will only be a shoot-out game at the basket, which means the player will only have to move his arm. I need full control over the arm parts, and to get that, I need to move the arm, which is built from primitive shapes, as one coordinated unit. I've tried implementing a MatrixStack for performing matrix transformations, but with no success. Any suggestions?
I will answer this in basic terms, as I can't quite glean from your question how well versed you are with XNA or graphics development in general. I'm not even sure where your problem is: is it the code, the structure, or how XNA works compared to OpenGL?
The short answer is that there is no matrix stack built in.
What you do in OpenGL and XNA/DX is the very same thing when working with matrices. What you do with pushMatrix is actually only preserving the matrix (transformation) state on a stack for convenience.
Connecting objects as a group is merely semantics, you don't actually connect them as a group in any real way. What you're doing is setting a render state which is used by the GPU to transform and draw vertices for every draw call thereafter until that state is once again changed. This can be done in XNA/DX in the same way as in OpenGL.
Depending on what you're using to draw your objects, there are different ways of applying transformations. From your description I'm guessing you're using DrawPrimitives (or something like that) on the GraphicsDevice object, but whichever you're using, it'll use whatever transformation has been previously applied, normally on the Effect. The simplest of these is the BasicEffect, which has three members you'd be interested in:
World
View
Projection
If you use the BasicEffect, you merely apply your transform using a matrix in the World member. Anything that you draw after having applied your transforms to your current effect will use those transforms. If you're using a custom Effect, you do something quite like it, except for how you set the matrix on the effect (using the parameters collection; see the sketch after the links below). Have a look at:
http://msdn.microsoft.com/en-us/library/bb203926(v=xnagamestudio.40).aspx
http://msdn.microsoft.com/en-us/library/bb203872(v=xnagamestudio.40).aspx
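With a custom Effect, setting the equivalent of BasicEffect.World goes through the parameters collection (a hedged sketch; the parameter name "World" depends on your .fx file):
myEffect.Parameters["World"].SetValue(worldMatrix);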
If what you're after is an actual transform stack, you'll have to implement one yourself, although this is quite simple. Something like this:
Stack<Matrix> matrixStack = new Stack<Matrix>();
...
matrixStack.Push( armMatrix );
...
basicEffect.World = matrixStack.Peek();
foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    graphics.GraphicsDevice.DrawPrimitives(...);
}
...
matrixStack.Pop();
I am part of a team that has created a tool to view and interact with very large and heavily interconnected graphs in C#/WPF. Viewing and interacting with the graph is done through a custom control that takes in a set of DrawingVisuals and displays them on a canvas. Nodes in the graph may have a custom shape created with our editor. The current control works very well and is fairly tightly coupled with our program, but there are legitimate worries about performance when considering much larger graphs (20,000+ nodes and lots of connections).
After doing a bit of research it seems the two approaches are:
A GDI+ route where graphics are drawn to a WriteableBitmap or InteropBitmap.
SlimDX or DirectX variant (hosted in a D3DImage)
Given these two extremely different approaches which route would be best to take considering:
Interacting with the graph must be fast even while viewing the whole graph.
Updating the visuals should be fast (color or size change)
Hit testing must be fast (point and rectangle).
Development must be completed in a timely manner.
Which method would you use and why?
EDIT:
It looks like a similar question was asked but not answered.
I use GDI+ for my cartographic application. While GDI+ is slower than, say, DirectX, I find that there are a lot of things and tricks that can be used to speed things up. A lot of CPU time is spent preparing the data before the drawing itself, so GDI+ should not be the only bottleneck there.
Things to look out for (and these are general enough to apply to other graphics engines, too):
First of all: measure. Use a profiler to see what the real bottleneck in your code is.
Reuse GDI primitives. This is vital. If you have to draw 100,000 graphics objects that look the same or similar, use the same Pen, Brush, etc. Creating these primitives is expensive.
Cache the rendering data - for example, don't recalculate gfx element positions if you don't need to.
When panning/zooming, draw the scene with lower GDI+ quality (and without expensive GDI operations). There are a number of Graphics object settings to lower the quality. After the user stops panning, redraw the scene at high quality (see the sketch after this list).
Lots and lots of little things improve performance. I've been developing this app for 2-3 years (or is it 4 already, hmm?) now and I still find ways to improve things :). This is why profiling is important - the code changes, and those changes can affect performance, so you need to profile the new code.
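A minimal sketch of the primitive-reuse and quality ideas above (plain System.Drawing; the method and its parameters are illustrative):
// One shared Pen for every edge, and cheaper smoothing while panning
void DrawEdges(Graphics g, Point[][] edges, bool isPanning)
{
    g.SmoothingMode = isPanning
        ? System.Drawing.Drawing2D.SmoothingMode.HighSpeed
        : System.Drawing.Drawing2D.SmoothingMode.HighQuality;
    using (Pen edgePen = new Pen(Color.Gray, 1f))
    {
        foreach (Point[] edge in edges)
            g.DrawLines(edgePen, edge);
    }
}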
One other thing: I haven't used SlimDX, but I did try Direct2D (I'm referring to Microsoft.WindowsAPICodePack.DirectX.Direct2D1). The performance was considerably faster than GDI+ in my case, but I had some issues with rendering bitmaps and never had the time to find the proper solution.
I have recently ported some drawing code over to DirectX and have been very pleased with the results. We were mainly rendering bitmaps using bit-bashing and are seeing render times that could be measured in minutes reduced to around 1-2 seconds.
This can't be directly compared to your usage, as we've gone from bit-bashing in C++ to Direct3D in C# using SlimDX, but I imagine you will see performance benefits, even if they're not the orders of magnitude we're seeing.
I would advise you to take a look at using Direct2D with SlimDX. You will need to use DirectX 10.1, as Direct2D isn't compatible with DirectX 11 for some reason. If you have used the drawing API in WPF then you will already be familiar with Direct2D, as its API is based on the WPF drawing API as far as I can tell. The main problems with Direct2D are a lack of documentation and the fact that it only works on Vista and later.
I've not experimented with DirectX 10/WPF interop, but I believe it is possible (http://stackoverflow.com/questions/1252780/d3dimage-using-dx10)
EDIT: I thought I'd give you a comparison from our code of drawing a simple polygon. First the WPF version:
StreamGeometry geometry = new StreamGeometry();
using (StreamGeometryContext ctx = geometry.Open())
{
    foreach (Polygon polygon in mask.Polygons)
    {
        bool first = true;
        foreach (Vector2 p in polygon.Points)
        {
            Point point = new Point(p.X, p.Y);
            if (first)
            {
                ctx.BeginFigure(point, true, true);
                first = false;
            }
            else
            {
                ctx.LineTo(point, false, false);
            }
        }
    }
}
Now the Direct2D version:
Texture2D maskTexture = helper.CreateRenderTexture(width, height);
RenderTargetProperties props = new RenderTargetProperties
{
    HorizontalDpi = 96,
    PixelFormat = new PixelFormat(SlimDX.DXGI.Format.Unknown, AlphaMode.Premultiplied),
    Type = RenderTargetType.Default,
    Usage = RenderTargetUsage.None,
    VerticalDpi = 96,
};
using (SlimDX.Direct2D.Factory factory = new SlimDX.Direct2D.Factory())
using (SlimDX.DXGI.Surface surface = maskTexture.AsSurface())
using (RenderTarget target = RenderTarget.FromDXGI(factory, surface, props))
using (SlimDX.Direct2D.Brush brush = new SolidColorBrush(target, new SlimDX.Color4(System.Drawing.Color.Red)))
using (PathGeometry geometry = new PathGeometry(factory))
using (SimplifiedGeometrySink sink = geometry.Open())
{
    foreach (Polygon polygon in mask.Polygons)
    {
        PointF[] points = new PointF[polygon.Points.Count()];
        int i = 0;
        foreach (Vector2 p in polygon.Points)
        {
            points[i++] = new PointF(p.X * width, p.Y * height);
        }
        sink.BeginFigure(points[0], FigureBegin.Filled);
        sink.AddLines(points);
        sink.EndFigure(FigureEnd.Closed);
    }
    sink.Close();
    target.BeginDraw();
    target.FillGeometry(geometry, brush);
    target.EndDraw();
}
As you can see, the Direct2D version is slightly more work (and relies on a few helper functions I've written) but it's actually pretty similar.
Let me try and list the pros and cons of each approach - which will perhaps give you some idea about which to use.
GDI Pros
Easy to draw vector shapes with
No need to include extra libraries
GDI Cons
Slower than DX
Need to limit "fancy" drawing (gradients and the like) or it might slow things down
If the diagram needs to be interactive - might not be a great option
SlimDX Pros
Can do some fancy drawing while being faster than GDI
If the drawing is interactive - this approach will be MUCH better
Since you draw the primitives you can control quality at each zoom level
SlimDX Cons
Not very easy to draw simple shapes with - you'll need to write your own abstractions or use a library that helps you draw shapes
Not as simple to use as GDI, especially if you've not used it before
And there's perhaps more I forgot to put in here, but perhaps these will do for starters?
-A.
In my program the source rectangle for drawing can either be a regular rectangle, an empty rectangle, or a rectangle with an X or Y of -1. If the rectangle is normal (example being (0, 0, 64, 64)) then it just draws that from the texture. If it is Rectangle.Empty it draws nothing and just continues with the loop. If the source rectangle has an X or Y of -1 then it is determined to be a collision tile.
The problem with this is that -1 is not intuitive. It is confusing and a bad solution. Also, if more tile types are added, it will start getting ridiculous, like -2 meaning a slow tile or -3 meaning a water tile.
Another problem is that, since I did not know early on that there were going to be collision tiles and regular XNA rectangles were fine, the entire system (thankfully only around 1,000 lines of code at the moment) uses XNA rectangles. I figure I'm going to have to make a separate class at this point and update everything, but I'm not sure.
What would be a good solution to this? I have not really dabbled in extension methods at all. Could they be applied to the Rectangle class and be given methods like IsCollisionTile() or IsBlankTile()? Initially I was hoping I could derive from the Rectangle class to make a Tile class but unfortunately the class is sealed. I suppose another simple solution could be just making a global constants class with -1 being CollisionTile, 0 being BlankTile, et cetera. This would at least make it slightly more understandable but this still looks ugly to me:
if (tiles[y, x].X == Constants.BlankTile)
    continue;
if (tiles[y, x].X == Constants.CollisionTile)
{
    Utility.DrawRectangle(spriteBatch, new Rectangle(x * TileSize, y * TileSize, TileSize, TileSize), collisionTileColor);
    continue;
}
spriteBatch.Draw(tileset, new Rectangle(x * TileSize, y * TileSize, TileSize, TileSize), tiles[y, x], Color.White);
If only there was a property I could use like Tag with controls. I'd really like to not abandon using the Rectangle class because it is so embedded in to the system and the program is purely functional, just not aesthetic in this regard. Ideally, I'd prefer a solution that just allows the Rectangle class to be extended to somehow be able to communicate with its clients what kind of tile it is supposed to be.
Well then, that took a lot more typing than I had originally hoped for. Sorry for the long read x_x
I would recommend setting global constants. The problem with extension methods in this case arises because Rectangle is a struct, a value type. That means that your extension method is working with a copy of the rectangle, not the original.
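Note that the copy only matters if the method mutates the rectangle; for read-only checks it still works, so the two ideas can be combined. A hedged sketch (all names are illustrative, not from the question):
public static class TileConstants
{
    public const int Blank = 0;      // X value marking a blank tile
    public const int Collision = -1; // X value marking a collision tile
}

public static class RectangleTileExtensions
{
    // Rectangle is a struct, so 'this' is a copy; fine for read-only checks
    public static bool IsBlankTile(this Rectangle r) { return r.X == TileConstants.Blank; }
    public static bool IsCollisionTile(this Rectangle r) { return r.X == TileConstants.Collision; }
}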
If the class can't be inherited from (which would usually be the appropriate solution here; shame on you, Microsoft!), then extension methods could definitely work as you described.
The problem is that, IMO, it's less in keeping with the style of C# and OOP in general to use methods that function like getters. That's what getters are for.
Because of that, I think the global constants option is more in line with the general style, but that's just my opinion.
From a purely programmatic POV, both methods are valid, though the global constants class might be slightly faster (I'm not sure of this).
To begin with, you should consider where you would use your methods IsCollisionTile() and IsBlankTile().
You have two choices:
*) If you want to use them globally, you should write a utility class so the methods are right there where you need them:
public static class CollisionHelper
{
    public static Boolean IsCollisionTile(ITile tileToCheck)
    {
        ...
    }
}
*) Second, if you want to use them only in connection with your tiles, you should definitely write an extension method, e.g. one that accepts every ITile object. Extension methods are a great way to widely EXTEND the capabilities of classes. A sample could be:
// Extension methods must live in a non-generic static class
public static class TileExtensions
{
    public static Boolean IsCollisionTile(this ITile tileToCheck)
    {
        ...
    }
}
I hope you now have an idea of extension methods and how you could use them to solve your problem very easily ;)
I want to optimize my basic XNA engine. The structure is somewhat like this: I have a GameWorld instance and a number of GameObjects attached to it. Every frame I loop over the GameObjects and call each one's draw method. The downside of this implementation is that the GraphicsDevice draw function is called multiple times, once for every object.
Now I want to reduce the draw calls by implementing a structure that, before the drawing method is called, transfers all the geometry into one big array containing all the vertex data, and performs a single draw call to draw it all.
Is that an efficient approach? Can someone suggest a way to optimize?
Thanks
The first step is to reduce the number of objects you are drawing. There are many ways to do this, most commonly:
Frustum culling - i.e. cull all objects outside of the view frustum
Scene queries - e.g. organise your scene using a BSP tree or a QuadTree - some data structure that gives you the ability to reduce the potentially visible set of objects
Occlusion culling - a more advanced topic, but in some cases you can determine that an object is not visible because it is occluded by other geometry.
There are loads of tutorials on the web covering all these. I would attack them in the order above, and probably ignore occlusion culling for now. The most important optimisation in any graphics engine is that the fastest primitive to draw is the one you don't have to draw.
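As an illustration, frustum culling in XNA can be as small as this (a hedged sketch; it assumes each object exposes a BoundingSphere, and the surrounding names are placeholders):
BoundingFrustum frustum = new BoundingFrustum(view * projection);
foreach (GameObject obj in gameWorld.Objects)
{
    // Keep only objects whose bounds touch the view frustum
    if (frustum.Intersects(obj.BoundingSphere))
        visibleSet.Add(obj);
}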
Once you have your potentially visible set of objects, it is fine to send them all to the GPU individually, but you must ensure that you do so in a way that minimises the state changes on the GPU - e.g. group all objects that use the same texture/material properties together.
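For example, a sketch of grouping the visible set by texture so each texture is bound only once per frame (uses LINQ GroupBy; the Texture property and Draw method are illustrative):
foreach (var group in visibleSet.GroupBy(o => o.Texture))
{
    device.Textures[0] = group.Key; // bind the shared texture once
    foreach (GameObject obj in group)
        obj.Draw();                 // subsequent draws reuse the bound state
}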
Once that is done you should find everything is pretty fast. Of course you can always take it further but the above steps are probably the best way to start.
Just to make the point clear: don't just assume that fewer draw calls = faster. Of course it depends on many factors, including hardware, but generally the XNA/DirectX API is pretty good at queueing geometry through the pipeline - that's what it's for, after all. The key is not minimising calls but minimising the amount of state changes (textures/shaders etc.) required across the scene.