We have a project that currently uses DirectX 11 with SlimDX and would like to move it to SharpDX. However, this project uses the Effect framework from SlimDX, which I understand is no longer properly supported in DirectX 11, and I can't find definitive information about how I should transition the effects.
The effects in use are relatively simple pixels shaders, contained in .fx files.
Should I move away from .fx files? What to? To plain .hlsl files? Or should I use the SharpDX Toolkit? Does this use a different format from .fx files?
I can't find any documentation on this. Is there anyone who has made this transition who could give me some advice, or any documentation on the SharpDX Toolkit effects framework?
The move to SharpDX is pretty simple: there are a couple of changes in naming and resource description, but apart from the fact that it's relatively cumbersome (depending on the size of your code base), there's nothing too complex.
As for the Effects framework, SharpDX provides the SharpDX.Direct3D11.Effects library that wraps it, so it is of course still supported.
It's pretty much the same as its SlimDX counterpart, so you should not have any major issues moving over.
If you want to transition away from the Effects framework to plain HLSL, you can keep the same .fx file; only the compilation steps change: instead of compiling the whole file, you compile each shader entry point separately.
So for example, to compile and create a VertexShader:
CompilationResult result = ShaderBytecode.Compile(content, "VS", "vs_5_0", flags, EffectFlags.None, null, null);
VertexShader shader = new VertexShader(device, result.Bytecode);
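Pixel shaders from the .fx file are compiled the same way; for example (assuming an entry point named "PS" in the same source):
CompilationResult psResult = ShaderBytecode.Compile(content, "PS", "ps_5_0", flags, EffectFlags.None, null, null);
PixelShader pixelShader = new PixelShader(device, psResult.Bytecode);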
Also you need to be careful with all constant buffer/resource registers; it's generally good to set them explicitly, for example:
cbuffer cbData : register(b0)
{
    float4x4 tW;
    float4x4 tColor;
    float4 cAmb;
};
Of course you no longer have the EffectVariable get-by-name/semantic lookups, so instead you need to map your cbuffer to a struct in C# (you can also use a DataStream directly) and create the constant buffer resources yourself.
[StructLayout(LayoutKind.Sequential, Pack = 16)]
public struct cbData
{
    public Matrix tW;
    public Matrix tColor;
    public Vector4 cAmb;
}
BufferDescription bd = new BufferDescription()
{
    BindFlags = BindFlags.ConstantBuffer,
    CpuAccessFlags = CpuAccessFlags.Write,
    OptionFlags = ResourceOptionFlags.None,
    SizeInBytes = 144, // matches the struct (2 * 64 + 16 bytes)
    Usage = ResourceUsage.Dynamic
};
var cbuffer = new SharpDX.Direct3D11.Buffer(device, bd);
Use either UpdateSubresource or MapSubresource to update the data, and deviceContext.VertexShader.SetConstantBuffer to bind it to the pipeline.
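For example, with the dynamic buffer created above, a per-frame update and bind could look roughly like this (a sketch; deviceContext is your immediate context and data is a filled cbData instance):
DataStream stream;
deviceContext.MapSubresource(cbuffer, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out stream);
stream.Write(data); // 'data' holds the current tW/tColor/cAmb values
deviceContext.UnmapSubresource(cbuffer, 0);
deviceContext.VertexShader.SetConstantBuffer(0, cbuffer); // slot 0 matches register(b0)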
If you need to inspect a shader with reflection, it is done this way (note that this is actually what the Effects framework does; it's just a layer on top of d3dcompiler):
ShaderReflection refl = new ShaderReflection(result.Bytecode);
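From the reflection object you can then, for example, enumerate the constant buffers and the input signature (a small sketch; the types come from SharpDX.D3DCompiler):
for (int i = 0; i < refl.Description.ConstantBuffers; i++)
{
    ConstantBuffer cb = refl.GetConstantBuffer(i);
    Console.WriteLine("cbuffer {0}, size {1}", cb.Description.Name, cb.Description.Size);
}
for (int i = 0; i < refl.Description.InputParameters; i++)
{
    ShaderParameterDescription input = refl.GetInputParameterDescription(i);
    Console.WriteLine("input {0}{1}", input.SemanticName, input.SemanticIndex);
}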
You then need to set up all API calls manually (which is what Effects does for you when you call EffectPass.Apply).
Also, since you compile shaders individually, there is no more layout validation between stages (the effects compiler would give you: No valid VertexShader-PixelShader combination...). So you need to be careful not to set the pipeline with non-matching shaders (you can use the reflection data to validate manually, or watch a black screen while the debug runtime spams your output window in Visual Studio).
So transitioning can be a bit tedious, but it can also be beneficial, since it's easier to minimize pipeline state changes (in my use case this is not a concern, so the Effects framework does just fine, but if you have a high number of draw calls it can become significant).
I'm creating a tool that converts hi-poly meshes to low-poly meshes and I have some best practice questions on how I want to approach some of the problems.
I have some experience with C++ and DirectX but I prefer to use C#/WPF to create this tool; I'm also hoping that C# has some rich libraries for opening, displaying and saving 3D models. This brings me to my first question:
Best approach for reading, viewing and saving 3d models
To display 3D models in my WPF application, I'm thinking about using the Helix 3D toolkit.
To read vertex data from my 3D models I'm going to write my own .OBJ reader, because I'll have to optimize the vertices and write everything back out myself.
Best approach for optimizing the 3d model
For optimization things will get tricky, especially when dealing with tons of vertices and tons of changes. I guess I'll keep it simple at the start: try to detect whether an edge lies on the same slope as its adjacent edges, then remove that redundant edge and retriangulate everything.
In later stages I also want to create LODs to simplify the model by doing the opposite of what a turbosmooth modifier does in Max (inverse interpolation). I have no real clue how to start on this right now but I'll look around online and experiment a little.
And at last I have to save the model, and make sure everything still works.
For viewing 3D objects you can also consider the Ab3d.PowerToys library - it is not free, but it greatly simplifies working with WPF 3D and also comes with many samples.
The OBJ format is a good choice because it is very commonly used and has a very simple structure that is easy to read and write. But it does not support object hierarchies, transformations, animations, bones, etc. If you will need any of those, then you will need to use some other data format.
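Just to illustrate how simple the format is, a very rough sketch of a minimal OBJ reader (positions and triangular faces only; it ignores normals, texture coordinates, materials and negative indices) could look like this:
// requires System.IO, System.Globalization and System.Windows.Media.Media3D
var positions = new List<Point3D>();
var triangleIndices = new List<int>();
foreach (string line in File.ReadLines("model.obj"))
{
    string[] parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    if (parts.Length == 0)
        continue;
    if (parts[0] == "v") // vertex position: "v x y z"
    {
        positions.Add(new Point3D(
            double.Parse(parts[1], CultureInfo.InvariantCulture),
            double.Parse(parts[2], CultureInfo.InvariantCulture),
            double.Parse(parts[3], CultureInfo.InvariantCulture)));
    }
    else if (parts[0] == "f") // triangular face: "f v1[/vt/vn] v2[...] v3[...]"
    {
        for (int i = 1; i <= 3; i++)
            triangleIndices.Add(int.Parse(parts[i].Split('/')[0]) - 1); // OBJ indices are 1-based
    }
}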
I do not have any experience in optimizing hi-poly meshes, so I cannot give you any advice there. I can only say that you may also consider combining meshes that share the same material into one mesh - this reduces the number of draw calls and improves performance.
My main advice is on how to write your code to make it perform better in WPF 3D. Because you will need to check and compare many vertices, you need to avoid getting data from the MeshGeometry3D.Positions and MeshGeometry3D.TriangleIndices collections - accessing a single value from those collections is very slow (you may check the .Net source and see how many lines of code are behind each get).
Therefore I would recommend having your own mesh structure with Lists (e.g. List<Point3D> for Positions and List<int> for TriangleIndices). In my observations, Lists of structs are faster than plain arrays of structs, but the lists must be presized - their capacity needs to be set in the constructor. This way you can access the data much faster. When an extra boost is needed, you may also use unsafe blocks with pointers. You may also add other data to your mesh classes - for example the adjacent edges you mentioned.
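A minimal version of such a mesh class could look like this (just a sketch; the presized Lists are the important part):
public class EditableMesh
{
    public List<Point3D> Positions;    // struct elements, fast to index
    public List<int> TriangleIndices;

    public EditableMesh(int vertexCount, int indexCount)
    {
        // presize the lists so they never have to grow while being filled
        Positions = new List<Point3D>(vertexCount);
        TriangleIndices = new List<int>(indexCount);
    }

    // add whatever extra data you need here, e.g. adjacency information per edge
}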
Once you have your positions and triangle indices set, you can create the WPF's MeshGeometry3D object with the following code:
var wpfMesh = new MeshGeometry3D()
{
    Positions = new Point3DCollection(optimizedPositions),
    TriangleIndices = new Int32Collection(optimizedTriangleIndices)
};
This is faster than adding each Point3D to the Positions collection one by one.
Because you will not change that instance of wpfMesh (for each change you will create a new MeshGeometry3D), you can freeze it - call Freeze() on it. This allows WPF to optimize the meshes (combine them into vertex buffers) to reduce the number of draw calls. What is more, after you freeze a MeshGeometry3D (or any other WPF object), you can pass it from one thread to another. This means that you can parallelize your work and create the MeshGeometry3D objects in worker threads and then pass them to UI thread as frozen objects.
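In code, this hand-over pattern could look roughly like this (a sketch; BuildOptimizedMesh is a hypothetical helper that fills and returns a MeshGeometry3D as shown above):
Task.Run(() =>
{
    MeshGeometry3D mesh = BuildOptimizedMesh(); // runs on a worker thread
    mesh.Freeze();                              // a frozen Freezable may be used from any thread
    Application.Current.Dispatcher.Invoke(() =>
    {
        parentGeometryModel3D.Geometry = mesh;  // assign on the UI thread
    });
});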
The same applies to changing the Positions (and other data) of a MeshGeometry3D object. It is faster to copy the existing positions to an array or List, change the data there and then recreate the Positions collection from your array, than to change each individual position. Before doing any change to a MeshGeometry3D you also need to disconnect it from its parent GeometryModel3D to prevent triggering many change events. This is done with the following:
var mesh = parentGeometryModel3D.Geometry; // Save MeshGeometry3D to mesh
parentGeometryModel3D.Geometry = null; // Disconnect
// modify the mesh here ...
parentGeometryModel3D.Geometry = mesh; // Connect the mesh back
I'm working on a simple .NET renderer now, and have been pushing the same stone for a month already:
I'm using the Effects framework, compiled by the SlimDX developers for DX11, and I keep having trouble updating EffectVariables inside the rendering loop (I need a vector value updated after every DrawIndexed call). The code looks like this (filtered):
public Vector4 wireframeColor = new Vector4();
public Vector4 gridColor = new Vector4();
wireColorVar = effect.GetVariableByName("wireframeColor").AsVector();
wireColorVar.Set(gridColor);
DrawIndexed(GridDomain);          // simplified: draws the grid
foreach (var model in scene)
{
    wireColorVar.Set(wireframeColor);
    DrawIndexed(modelDomain);     // simplified: draws each model
}
The .fx file looks like:
cbuffer ColorBuffer
{
    float4 wireframeColor;
    float4 diffuseColor;
};
float4 PVertex_Wire_PShader(PVertex_PInput input) : SV_TARGET
{
    return wireframeColor;
}
The problem is that every time those rendering passes apply, the shader only ever sees the last value that was set - i.e. wireframeColor - and never the value of gridColor. I've had this problem for a while, so I can say for sure that it applies to every type of EffectVariable, from buffers to ShaderResources (UAVs, for example), and this is really frustrating. Effect variables always end up with the value of the last Set call. DeviceContext.Flush() gives nothing, and it really looks like some sort of queueing of GPU commands.
The only source of info I've found so far didn't help.
It looks like applying effect passes doesn't flush variable changes. When I needed a compute shader to work, I had to manually bind resource views to explicitly placed variables (by register index) on the shader stage.
Is this a problem with the Effects implementation? It's not insurmountable - I can still use low-level constant buffer assignments - but then there is no point in using Effects at all.
P.S. Oh, and please don't suggest using the diffuseColor field, or other ways of simply multiplying the variable count. I need to change the value of one and the same variable many times between swapChain.Present() calls.
Thank you for your attention.
I switched to the native DX11 shader mechanism; the Microsoft Effects framework seems to follow strange logic when applying GPU commands.
Is it a substitute for the deprecated OpenTK.Graphics.Glu from the Tao framework?
Especially with regard to geometry rendering.
OpenTK is an evolution of Tao. And, honestly, I hadn't heard that GLU was becoming deprecated. GLU functions are still widely used in the OpenGL world, as they contain a wide range of very useful functions and calculations that you may want to avoid writing yourself if you're not a math expert but just a programmer.
By the way, everything in GLU can also be done without it (at least I haven't found anything that can't), but, as I said before, you have to have a very good understanding of the math that stands behind all that stuff.
OpenTK has its own preferred ways of doing geometry rendering and such. You can, of course, still use the Glu class in the OpenTK.Graphics namespace; however, OpenTK would prefer you (I think) to change this:
Glu.Perspective(MathHelper.PiOver4, AspectRatio, 0.1f, 100f);
into this:
Matrix4 perspectiveMatrix = Matrix4.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    AspectRatio,
    .1f, 100f);
GL.LoadMatrix(ref perspectiveMatrix);
And you can replace the gluLookAt function like this too. Have a look into the Matrix4 struct. It contains a lot of useful stuff.
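For example, a gluLookAt replacement could look something like this (a sketch; the eye/target/up vectors are placeholders for your own camera values):
Matrix4 lookAtMatrix = Matrix4.LookAt(
    new Vector3(0f, 0f, 5f), // eye
    Vector3.Zero,            // target
    Vector3.UnitY);          // up
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadMatrix(ref lookAtMatrix);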
I am part of a team that has created a tool to view and interact with very large and heavily interconnected graphs in C#/WPF. Viewing and interacting with the graph is done through a custom control that takes in a set of DrawingVisuals and displays them on a canvas. Nodes in the graph may have a custom shape created with our editor. The current control works very well and is fairly coupled with our program but there are legitimate worries about performance when considering much larger graphs (20,000+ nodes and lots of connection).
After doing a bit of research it seems the two approaches are:
A GDI+ route where graphics are drawn to a WriteableBitmap or InteropBitmap.
SlimDX or DirectX variant (hosted in a D3DImage)
Given these two extremely different approaches, which route would be best to take considering:
Interacting with the graph must be fast even while viewing the whole graph.
Updating the visuals should be fast (color or size change)
Hit testing must be fast (point and rectangle).
Development must be completed in a timely manner.
Which method would you use and why?
EDIT:
It looks like a similar question was asked but not answered.
I use GDI+ for my cartographic application. While GDI+ is slower than, say, DirectX, I find that there are a lot of tricks that can be used to speed things up. A lot of CPU time is spent preparing the data before the drawing itself, so GDI+ should not be the only bottleneck there.
Things to look out for (and these are general enough to apply to other graphics engines, too):
First of all: measure. Use a profiler to see what is the real bottleneck in your code.
Reuse GDI primitives. This is vital. If you have to draw 100,000 graphics objects that look the same or similar, use the same Pen, Brush etc. Creating these primitives is expensive.
Cache the rendering data - for example: don't recalculate gfx element positions if you don't need to.
When panning/zooming, draw the scene with lower GDI+ quality (and without expensive GDI operations). There are a number of Graphics object settings to lower the quality. After the user stops panning, draw the scene with the high quality.
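For example, the "reuse primitives" and "lower quality while panning" tips could look roughly like this (a sketch using System.Drawing and System.Drawing.Drawing2D; nodeBounds stands in for your own collection of node rectangles):
// create pens/brushes once and reuse them for every node instead of per draw call
static readonly Pen nodePen = new Pen(Color.Black, 1f);
static readonly Brush nodeBrush = new SolidBrush(Color.LightSteelBlue);

void DrawScene(Graphics g, bool isPanning)
{
    // drop quality while the user pans/zooms, restore it afterwards
    g.SmoothingMode = isPanning ? SmoothingMode.HighSpeed : SmoothingMode.AntiAlias;
    foreach (Rectangle node in nodeBounds)
    {
        g.FillRectangle(nodeBrush, node);
        g.DrawRectangle(nodePen, node);
    }
}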
There are lots and lots of little things that improve performance. I've been developing this app for 2-3 years now (or is it 4 already?) and I still find ways to improve things :). This is why profiling is important - the code changes, and that can affect performance, so you need to profile the new code.
One other thing: I haven't used SlimDX, but I did try Direct2D (I'm referring to Microsoft.WindowsAPICodePack.DirectX.Direct2D1). The performance was considerably faster than GDI+ in my case, but I had some issues with rendering bitmaps and never had the time to find the proper solution.
I have recently ported some drawing code over to DirectX and have been very pleased with the results. We were mainly rendering bitmaps using bit-bashing and are seeing render times that could be measured in minutes reduced to around 1-2 seconds.
This can't be directly compared to your usage, as we've gone from bit-bashing in C++ to Direct3D in C# using SlimDX, but I imagine you will see performance benefits, even if they're not the orders of magnitude we're seeing.
I would advise you to take a look at using Direct2D with SlimDX. You will need to use DirectX 10.1 as Direct2D isn't compatible with DirectX 11 for some reason. If you have used the drawing API in WPF then you will already be familiar with Direct2D as its API is based on the WPF drawing API as far as I can tell. The main problems with Direct2D are a lack of documentation and the fact it only works in Vista onwards.
I've not experimented with DirectX 10/WPF interop, but I believe it is possible (http://stackoverflow.com/questions/1252780/d3dimage-using-dx10)
EDIT: I thought I'd give you a comparison from our code of drawing a simple polygon. First the WPF version:
StreamGeometry geometry = new StreamGeometry();
using (StreamGeometryContext ctx = geometry.Open())
{
    foreach (Polygon polygon in mask.Polygons)
    {
        bool first = true;
        foreach (Vector2 p in polygon.Points)
        {
            Point point = new Point(p.X, p.Y);
            if (first)
            {
                ctx.BeginFigure(point, true, true);
                first = false;
            }
            else
            {
                ctx.LineTo(point, false, false);
            }
        }
    }
}
Now the Direct2D version:
Texture2D maskTexture = helper.CreateRenderTexture(width, height);
RenderTargetProperties props = new RenderTargetProperties
{
    HorizontalDpi = 96,
    PixelFormat = new PixelFormat(SlimDX.DXGI.Format.Unknown, AlphaMode.Premultiplied),
    Type = RenderTargetType.Default,
    Usage = RenderTargetUsage.None,
    VerticalDpi = 96,
};
using (SlimDX.Direct2D.Factory factory = new SlimDX.Direct2D.Factory())
using (SlimDX.DXGI.Surface surface = maskTexture.AsSurface())
using (RenderTarget target = RenderTarget.FromDXGI(factory, surface, props))
using (SlimDX.Direct2D.Brush brush = new SolidColorBrush(target, new SlimDX.Color4(System.Drawing.Color.Red)))
using (PathGeometry geometry = new PathGeometry(factory))
using (SimplifiedGeometrySink sink = geometry.Open())
{
    foreach (Polygon polygon in mask.Polygons)
    {
        PointF[] points = new PointF[polygon.Points.Count()];
        int i = 0;
        foreach (Vector2 p in polygon.Points)
        {
            points[i++] = new PointF(p.X * width, p.Y * height);
        }
        sink.BeginFigure(points[0], FigureBegin.Filled);
        sink.AddLines(points);
        sink.EndFigure(FigureEnd.Closed);
    }
    sink.Close();
    target.BeginDraw();
    target.FillGeometry(geometry, brush);
    target.EndDraw();
}
As you can see, the Direct2D version is slightly more work (and relies on a few helper functions I've written) but it's actually pretty similar.
Let me try and list the pros and cons of each approach - which will perhaps give you some idea about which to use.
GDI Pros
Easy to draw vector shapes with
No need to include extra libraries
GDI Cons
Slower than DX
Need to limit "fancy" drawing (gradients and the like) or it might slow things down
If the diagram needs to be interactive - might not be a great option
SlimDX Pros
Can do some fancy drawing while being faster than GDI
If the drawing is interactive - this approach will be MUCH better
Since you draw the primitives you can control quality at each zoom level
SlimDX Cons
Not very easy to draw simple shapes with - you'll need to write your own abstractions or use a library that helps you draw shapes
Not as simple to use as GDI, especially if you've not used it before
And there's perhaps more I forgot to put in here, but perhaps these will do for starters?
-A.
I'm working on a personal project that, like many XNA projects, started with a terrain displacement map which is used to generate a collection of vertices which are rendered in a Device.DrawIndexedPrimitives() call.
I've updated to a custom VertexDeclaration, but I don't have access to that code right now, so I will post the slightly older, but paradigmatically identical (?) code.
I'm defining a VertexBuffer as:
VertexBuffer = new VertexBuffer(device, VertexPositionNormalTexture.VertexDeclaration, vertices.Length, BufferUsage.WriteOnly);
VertexBuffer.SetData(vertices);
where 'vertices' is defined as:
VertexPositionNormalTexture[] vertices
I've also got two index buffers that are swapped on each Update() iteration. In the Draw() call, I set the GraphicsDevice buffers:
Device.SetVertexBuffer(_buffers.VertexBuffer);
Device.Indices = _buffers.IndexBuffer;
Ignoring what I hope are irrelevant lines of code, I've got a method that checks within a bounding shape to determine whether a vertex is within a certain radius of the mouse cursor and raises or lowers those vertex positions depending upon which key is pressed. My problem is that the VertexBuffer.SetData() is only called once at initialization of the container class.
Modifying the VertexPositionNormalTexture[] array's vertex positions doesn't get reflected to the screen, though the values of the vertex positions are changed. I believe this to be tied to the VertexBuffer.SetData() call, but you can't simply call SetData() with the vertex array after modifying it.
After re-examining how the IndexBuffer is handled (2 buffers, swapped and passed into SetData() at Update() time), I'm thinking this should be the way to handle VertexBuffer manipulations, but does this work? Is there a more appropriate way? I saw another reference to a similar question on here, but the link to source was on MegaUpload, so...
I'll try my VertexBuffer.Swap() idea out, but I have also seen references to DynamicVertexBuffer and wonder what the gain there is? Performance supposedly suffers, but for a terrain editor, I don't see that as being too huge a trade-off if I can manipulate the vertex data dynamically.
I can post more code, but I think this is probably a lack of understanding of how the device buffers are set or data is streamed to them.
EDIT: The solution proposed below is correct. I will post my code shortly.
First: I am assuming you are not adding or subtracting vertices from the terrain. If you aren't, you won't need to alter the indexbuffer at all.
Second: you are correct in recognizing that simply editing your array of vertices will not change what is displayed on screen. A VertexBuffer is entirely separate from the vertices it is created from and does not keep a reference to the original array of them. It is a 'snapshot' of your vertices when you set the data.
I'm not sure about some of what seem to be assumptions you have made. You can, as far as I am aware, call VertexBuffer.SetData() at any time. If you are not changing the number of vertices in your terrain, only their positions, this is good. Simply re-set the data in the buffer every time you change the position of a vertex. [Note: if I am wrong and you can only set the data on a buffer once, then just replace the old instance of the buffer with a new one and set the data on that. I don't think you need to, though, unless you've changed the number of vertices]
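In other words, the edit-and-reupload loop can be as simple as this (a sketch; IsUnderBrush and raiseAmount stand in for your own cursor test and edit amount):
// modify the CPU-side copy of the vertices...
for (int i = 0; i < vertices.Length; i++)
{
    if (IsUnderBrush(vertices[i].Position)) // hypothetical radius test around the cursor
        vertices[i].Position.Y += raiseAmount;
}
// ...then push the whole array to the GPU again
VertexBuffer.SetData(vertices);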
Calling SetData is fairly expensive for a large buffer, though. You may consider 'chunking' your terrain into many smaller buffers to avoid the overhead required to set the data upon changing the terrain.
I do not know much about the DynamicVertexBuffer class, but I don't think it's optimal for this situation (even if it sounds like it is). I think it's more used for particle vertices. I could be wrong, though. Definitely research it.
Out of curiosity, why do you need two index buffers? If your vertices are the same, why would you use different indices per frame?
Edit: Your code for creating the VertexBuffer uses BufferUsage.WriteOnly. Good practice is to make the BufferUsage match that of the GraphicsDevice. If you haven't set the BufferUsage of the device, you probably just want to use BufferUsage.None. Try both and check performance differences if you like.