2D Drawing Performance (GDI+ vs SlimDX) - C#

I am part of a team that has created a tool to view and interact with very large, heavily interconnected graphs in C#/WPF. Viewing and interacting with the graph is done through a custom control that takes in a set of DrawingVisuals and displays them on a canvas. Nodes in the graph may have a custom shape created with our editor. The current control works very well and is fairly coupled to our program, but there are legitimate worries about performance with much larger graphs (20,000+ nodes and lots of connections).
After doing a bit of research it seems the two approaches are:
A GDI+ route where graphics are drawn to a WriteableBitmap or InteropBitmap.
A SlimDX or DirectX variant (hosted in a D3DImage).
Given these two extremely different approaches which route would be best to take considering:
Interacting with the graph must be fast even while viewing the whole graph.
Updating the visuals should be fast (color or size change)
Hit testing must be fast (point and rectangle).
Development must be completed in a timely manner.
Which method would you use and why?
EDIT:
It looks like a similar question was asked but not answered.

I use GDI+ for my cartographic application. While GDI+ is slower than, say, DirectX, I find that there are a lot of tricks that can be used to speed things up. A lot of CPU time goes into preparing the data before the drawing itself, so GDI+ is not the only potential bottleneck.
Things to look out for (and these are general enough to apply to other graphics engines, too):
First of all: measure. Use a profiler to see where the real bottleneck in your code is.
Reuse GDI+ primitives. This is vital. If you have to draw 100,000 graphics objects that look the same or similar, use the same Pen, Brush, etc. Creating these primitives is expensive.
Cache the rendering data - for example, don't recalculate graphics element positions if you don't need to.
When panning/zooming, draw the scene at lower GDI+ quality (and without expensive GDI+ operations). The Graphics object has a number of settings for lowering rendering quality. After the user stops panning, redraw the scene at high quality (a sketch combining this with primitive reuse follows this list).
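To make the reuse and quality tips concrete, here is a minimal sketch. It assumes a WinForms-style OnPaint; the isPanning flag, nodes collection, and node.Bounds property are hypothetical names, but the same Graphics settings apply to any GDI+ surface, including one wrapped around a WriteableBitmap's backing buffer:

using System.Drawing;
using System.Drawing.Drawing2D;

// Created once and shared by every node - creating pens/brushes per node is expensive.
private static readonly Pen NodePen = new Pen(Color.Black, 1f);
private static readonly Brush NodeBrush = new SolidBrush(Color.LightSteelBlue);

protected override void OnPaint(PaintEventArgs e)
{
    Graphics g = e.Graphics;

    // Lower quality while the user pans/zooms; repaint at full quality afterwards.
    g.SmoothingMode = isPanning ? SmoothingMode.HighSpeed : SmoothingMode.AntiAlias;

    foreach (var node in nodes)
    {
        g.FillEllipse(NodeBrush, node.Bounds);  // same Brush for every node
        g.DrawEllipse(NodePen, node.Bounds);    // same Pen for every node
    }
}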
Lots of little things add up to improve performance. I've been developing this app for 2-3 years (or is it 4 already, hmm?) and I still find ways to improve things :). This is why profiling is important - the code changes, and those changes can affect performance, so you need to profile the new code.
One other thing: I haven't used SlimDX, but I did try Direct2D (I'm referring to Microsoft.WindowsAPICodePack.DirectX.Direct2D1). The performance was considerably faster than GDI+ in my case, but I had some issues with rendering bitmaps and never had the time to find the proper solution.

I have recently ported some drawing code over to DirectX and have been very pleased with the results. We were mainly rendering bitmaps using bit-bashing and are seeing render times that could be measured in minutes reduced to around 1-2 seconds.
This can't be directly compared to your usage, as we've gone from bit-bashing in C++ to Direct3D in C# using SlimDX, but I imagine you will see performance benefits, even if they're not the orders of magnitude we're seeing.
I would advise you to take a look at using Direct2D with SlimDX. You will need to use DirectX 10.1, as Direct2D isn't compatible with DirectX 11 for some reason. If you have used the drawing API in WPF then you will already be familiar with Direct2D, as its API is based on the WPF drawing API as far as I can tell. The main problems with Direct2D are a lack of documentation and the fact that it only works on Vista onwards.
I've not experimented with DirectX 10/WPF interop, but I believe it is possible (http://stackoverflow.com/questions/1252780/d3dimage-using-dx10)
EDIT: I thought I'd give you a comparison from our code of drawing a simple polygon. First the WPF version:
StreamGeometry geometry = new StreamGeometry();
using (StreamGeometryContext ctx = geometry.Open())
{
    foreach (Polygon polygon in mask.Polygons)
    {
        bool first = true;
        foreach (Vector2 p in polygon.Points)
        {
            Point point = new Point(p.X, p.Y);
            if (first)
            {
                // start a filled, closed figure at the first point
                ctx.BeginFigure(point, true, true);
                first = false;
            }
            else
            {
                ctx.LineTo(point, false, false);
            }
        }
    }
}
Now the Direct2D version:
Texture2D maskTexture = helper.CreateRenderTexture(width, height);
RenderTargetProperties props = new RenderTargetProperties
{
    HorizontalDpi = 96,
    PixelFormat = new PixelFormat(SlimDX.DXGI.Format.Unknown, AlphaMode.Premultiplied),
    Type = RenderTargetType.Default,
    Usage = RenderTargetUsage.None,
    VerticalDpi = 96,
};
using (SlimDX.Direct2D.Factory factory = new SlimDX.Direct2D.Factory())
using (SlimDX.DXGI.Surface surface = maskTexture.AsSurface())
using (RenderTarget target = RenderTarget.FromDXGI(factory, surface, props))
using (SlimDX.Direct2D.Brush brush = new SolidColorBrush(target, new SlimDX.Color4(System.Drawing.Color.Red)))
using (PathGeometry geometry = new PathGeometry(factory))
using (SimplifiedGeometrySink sink = geometry.Open())
{
    foreach (Polygon polygon in mask.Polygons)
    {
        PointF[] points = new PointF[polygon.Points.Count()];
        int i = 0;
        foreach (Vector2 p in polygon.Points)
        {
            points[i++] = new PointF(p.X * width, p.Y * height);
        }
        sink.BeginFigure(points[0], FigureBegin.Filled);
        sink.AddLines(points);
        sink.EndFigure(FigureEnd.Closed);
    }
    sink.Close();

    target.BeginDraw();
    target.FillGeometry(geometry, brush);
    target.EndDraw();
}
As you can see, the Direct2D version is slightly more work (and relies on a few helper functions I've written) but it's actually pretty similar.

Let me try and list the pros and cons of each approach - which will perhaps give you some idea about which to use.
GDI Pros
Easy to draw vector shapes with
No need to include extra libraries
GDI Cons
Slower than DX
Need to limit "fancy" drawing (gradients and the like) or it might slow things down
If the diagram needs to be interactive - might not be a great option
SlimDX Pros
Can do some fancy drawing while being faster than GDI
If the drawing is interactive - this approach will be MUCH better
Since you draw the primitives you can control quality at each zoom level
SlimDX Cons
Not very easy to draw simple shapes with - you'll need to write your own abstractions or use a library that helps you draw shapes
Not as simple to use as GDI, especially if you've not used it before
And there are probably more I forgot to put in here, but perhaps these will do for starters?
-A.

Related

C# WPF Combining multiple bitmaps into one [duplicate]

Hello, I'm working on a WPF program to automate the process of producing cards (I feed it information from a database, and it spits out image files of the correct dimensions).
These cards are made up of three effective "layers" placed on top of each other and should produce an output like the example image from the original post (omitted here; it was just a placeholder with the right aspect ratio).
Now I can get the separate "layers" as their own bitmaps with something like
//Get the filepath and store it in a variable named 'FilePath' before this
BitmapImage image = new BitmapImage();
image.BeginInit();
image.UriSource = new Uri(FilePath);
image.EndInit();
(Simplified, but you get the idea.)
So the question is: how do I add these three bitmaps together into a single bitmap that can then be saved as, say, a .png?
I know WinForms has a lot more options built in for image and bitmap manipulation but I am doing this in WPF.
I was thinking of doing this with byte arrays and using loops to copy values from one to the other, but any better suggestions are highly appreciated.
I think it's important to understand here what WinForms and WPF actually are.
WPF did not "replace" all the stuff in WinForms. WinForms is essentially a wrapper to the underlying Windows GDI API, which is itself still very much a current technology and likely to remain so in the foreseeable future.
WPF replaces the rendering of GUI elements with an engine based on DirectX. In order to do this it has to provide its own image classes, but this is solely for the purpose of display within the hardware-accelerated DirectX environment.
This is an important distinction: WPF is not, in and of itself, a part of the Windows operating system. It uses DirectX for rendering, but DirectX itself is designed for interfacing with graphics hardware, not for direct image manipulation (with some rare exceptions like GPU processing). The GDI, however, is still very much a part of Windows and was specifically designed for this kind of thing, all the way back to the days of software rendering.
So in other words, unless you have a very specific requirement that involves hardware accelerated display you may as well use the GDI. WPF and WinForms can co-exist alongside each other just fine because they do completely different things. Just because one of the things WinForms happens to do is expose an older rendering technology that you don't want to use yourself doesn't mean that WinForms as a whole is obsolete.
UPDATE: to use GDI functions you'll need to add a reference to System.Drawing; normally this is done for you when you create a Windows project, but if you've created a console application etc. then you'll need to do it manually. The Graphics class provides many functions for rendering, but from what you've described this will probably cover most of what you're trying to do:
using System.Drawing;
using System.Drawing.Imaging;

namespace yournamespace
{
    class Program
    {
        private static void Main(string[] args)
        {
            // load an image
            var source = new Bitmap("source.png");

            // create a target image to draw into
            var target = new Bitmap(1000, 1000, PixelFormat.Format32bppRgb);

            // get a context
            using (var graphics = Graphics.FromImage(target))
            {
                // draw an image into it, scaled to a different size
                graphics.DrawImage(source, new Rectangle(250, 250, 500, 500));

                // draw primitives
                using (var pen = new Pen(Brushes.Blue, 10))
                    graphics.DrawEllipse(pen, 100, 100, 800, 800);
            }

            // save the target to a file
            target.Save("target.png", ImageFormat.Png);
        }
    }
}
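Applied to the three-layer card in the question specifically, a minimal sketch (reusing the usings above; the file names, and the assumption that all layers share the background's dimensions, are mine):

using (var background = new Bitmap("layer1.png"))
using (var artwork = new Bitmap("layer2.png"))
using (var frame = new Bitmap("layer3.png"))
using (var card = new Bitmap(background.Width, background.Height, PixelFormat.Format32bppArgb))
using (var graphics = Graphics.FromImage(card))
{
    var bounds = new Rectangle(0, 0, card.Width, card.Height);

    // draw the layers bottom-up; transparent regions let lower layers show through
    graphics.DrawImage(background, bounds);
    graphics.DrawImage(artwork, bounds);
    graphics.DrawImage(frame, bounds);

    card.Save("card.png", ImageFormat.Png);
}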

Best approach to read, optimize (polygon crunch) and display 3D model in C#/WPF

I'm creating a tool that converts hi-poly meshes to low-poly meshes and I have some best practice questions on how I want to approach some of the problems.
I have some experience with C++ and DirectX, but I prefer to use C#/WPF to create this tool; I'm also hoping that C# has some rich libraries for opening, displaying, and saving 3D models. This brings me to my first question:
Best approach for reading, viewing and saving 3D models
To display 3D models in my WPF application, I'm thinking about using the Helix 3D toolkit.
To read vertex data from my 3D models, I'm going to write my own .OBJ reader, because I'll have to optimize the vertices and write everything back out.
Best approach for optimizing the 3d model
For optimization, things will get tricky, especially when dealing with tons of vertices and tons of changes. I guess I'll keep it simple at the start: try to detect whether an edge has the same slope as its adjacent edges, then remove that redundant edge and retriangulate everything (a rough sketch of that test follows).
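As one hedged illustration of that slope test (using WPF's Media3D types; the tolerance is arbitrary):

using System.Windows.Media.Media3D;

// The middle vertex b is redundant if a->b and b->c are collinear,
// i.e. their cross product is (near) zero.
static bool IsRedundant(Point3D a, Point3D b, Point3D c, double tolerance = 1e-9)
{
    Vector3D ab = b - a;
    Vector3D bc = c - b;
    return Vector3D.CrossProduct(ab, bc).Length < tolerance;
}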
In later stages I also want to create LODs to simplify the model by doing the opposite of what a TurboSmooth modifier does in Max (inverse interpolation). I have no real clue how to start on this right now, but I'll look around online and experiment a little.
And at last I have to save the model, and make sure everything still works.
For viewing 3D objects you can also consider the Ab3d.PowerToys library - it is not free, but greatly simplifies work with WPF 3D and also comes with many samples.
The OBJ format is good because it is very commonly used and has a very simple structure that is easy to read and write. But it does not support object hierarchies, transformations, animations, bones, etc. If you need any of those, then you will need to use some other data format.
I do not have any experience in optimizing hi-poly meshes, so I cannot give you any advice there. I can only say that you may also consider combining meshes that share the same material into one mesh - this reduces the number of draw calls and improves performance.
My main advice is on how to write your code to make it perform better in WPF 3D. Because you will need to check and compare many vertices, you need to avoid getting data from the MeshGeometry3D.Positions and MeshGeometry3D.TriangleIndices collections - accessing a single value from those collections is very slow (you may check the .Net source and see how many lines of code are behind each get).
Therefore I would recommend having your own mesh structure with presized Lists (List&lt;Point3D&gt; for Positions and List&lt;int&gt; for TriangleIndices). In my observations, Lists of structs are faster than simple arrays of structs, but the lists must be presized - their size needs to be set in the constructor. This way you can access the data much faster. When an extra boost is needed, you may also use unsafe blocks with pointers. You may also add other data to your mesh classes - for example, the adjacent edges you mentioned.
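For example (vertexCount and triangleCount are assumptions here - they would come from whatever your OBJ reader knows up front):

// Presize the lists - growing them element by element is what costs time.
var positions = new List&lt;Point3D&gt;(vertexCount);
var triangleIndices = new List&lt;int&gt;(triangleCount * 3);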
Once you have your positions and triangle indices set, you can create the WPF's MeshGeometry3D object with the following code:
var wpfMesh = new MeshGeometry3D()
{
    Positions = new Point3DCollection(optimizedPositions),
    TriangleIndices = new Int32Collection(optimizedTriangleIndices)
};
This is faster than adding each Point3D to Positions collection.
Because you will not change that instance of wpfMesh (for each change you will create a new MeshGeometry3D), you can freeze it - call Freeze() on it. This allows WPF to optimize the meshes (combine them into vertex buffers) to reduce the number of draw calls. What is more, after you freeze a MeshGeometry3D (or any other freezable WPF object), you can pass it from one thread to another. This means you can parallelize your work: create the MeshGeometry3D objects in worker threads and pass them to the UI thread as frozen objects.
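A minimal sketch of that hand-off (BuildMyMesh and geometryModel3D are placeholder names):

Task.Run(() =&gt;
{
    MeshGeometry3D mesh = BuildMyMesh();  // heavy work on a worker thread
    mesh.Freeze();                        // frozen objects may cross threads
    Application.Current.Dispatcher.BeginInvoke(
        new Action(() =&gt; geometryModel3D.Geometry = mesh));
});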
Bulk operations also win when changing the Positions (and other data) of an existing MeshGeometry3D object. It is faster to copy the existing positions into an array or List, change the data there, and then recreate the Positions collection from your array than to change each position individually. Before making any change to a MeshGeometry3D, you also need to disconnect it from its parent GeometryModel3D to prevent triggering many change events. This is done with the following:
var mesh = parentGeometryModel3D.Geometry; // Save MeshGeometry3D to mesh
parentGeometryModel3D.Geometry = null; // Disconnect
// modify the mesh here ...
parentGeometryModel3D.Geometry = mesh; // Connect the mesh back
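For the "modify the mesh here" step, a minimal sketch of the copy-modify-recreate pattern described above (the offset is just an example change):

var meshGeometry = (MeshGeometry3D)mesh;                   // the mesh saved above
var positions = new Point3D[meshGeometry.Positions.Count];
meshGeometry.Positions.CopyTo(positions, 0);               // one bulk read instead of many slow gets
for (int i = 0; i &lt; positions.Length; i++)
{
    positions[i].Offset(0, 0.1, 0);                        // example change only
}
meshGeometry.Positions = new Point3DCollection(positions); // one bulk write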

Moving from SlimDX to SharpDX - Effects

We have a project that currently uses SlimDX with DirectX 11, and we would like to move it to SharpDX. However, this project uses the Effects framework from SlimDX, which I understand is no longer properly supported in DirectX 11, and I can't find definitive information about how I should transition the effects.
The effects in use are relatively simple pixels shaders, contained in .fx files.
Should I move away from .fx files? What to? To plain .hlsl files? Or should I use the SharpDX Toolkit? Does this use a different format from .fx files?
I can't find any documentation on this. Is there anyone who has made this transition who could give me some advice, or any documentation on the SharpDX Toolkit effects framework?
The move to SharpDX is pretty simple; there are a couple of changes in naming and resource descriptions, but apart from the fact that it's relatively cumbersome (depending on the size of your code base), there's nothing too complex.
As for the Effects framework, the SharpDX.Direct3D11.Effects library wraps it, so it is of course still supported.
It's pretty much the same as its SlimDX counterpart, so you should not have any major issues moving over.
If you want to transition away from the fx framework to plain HLSL, you can keep the same .fx file, but the compilation steps change: instead of compiling the whole file, you need to compile each shader separately.
So for example, to compile and create a VertexShader:
CompilationResult result = ShaderBytecode.Compile(content, "VS", "vs_5_0", flags, EffectFlags.None, null, null);
VertexShader shader = new VertexShader(device, result.Bytecode);
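A pixel shader from the same file is compiled the same way (the "PS" entry point name here is an assumption):

CompilationResult psResult = ShaderBytecode.Compile(content, "PS", "ps_5_0", flags, EffectFlags.None, null, null);
PixelShader pixelShader = new PixelShader(device, psResult.Bytecode);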
Also, you need to be careful with all constant buffer/resource registers; it's generally good to set them explicitly, for example:
cbuffer cbData : register(b0)
{
float4x4 tW;
float4x4 tColor;
float4 cAmb;
};
Of course you no longer have EffectVariable and get-by-name/semantic, so instead you need to map your cbuffer to a struct in C# (you can also use a DataStream directly) and create constant buffer resources.
[StructLayout(LayoutKind.Sequential, Pack = 16)]
public struct cbData
{
    public Matrix tW;
    public Matrix tColor;
    public Vector4 cAmb;
}

BufferDescription bd = new BufferDescription()
{
    BindFlags = BindFlags.ConstantBuffer,
    CpuAccessFlags = CpuAccessFlags.Write,
    OptionFlags = ResourceOptionFlags.None,
    SizeInBytes = 144, // matches the struct: two 64-byte matrices + one 16-byte vector
    Usage = ResourceUsage.Dynamic
};
var cbuffer = new SharpDX.Direct3D11.Buffer(device, bd);
Use either UpdateSubresource or MapSubresource to update the data, and deviceContext.VertexShader.SetConstantBuffer to bind it to the pipeline.
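For example, with the Dynamic buffer above, updating and binding could look like this (a minimal sketch; data is a cbData instance and deviceContext a DeviceContext, both assumed):

DataStream stream;
deviceContext.MapSubresource(cbuffer, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out stream);
stream.Write(data);                                        // copy the struct into the mapped buffer
deviceContext.UnmapSubresource(cbuffer, 0);
deviceContext.VertexShader.SetConstantBuffer(0, cbuffer);  // slot 0 matches register(b0) in the HLSL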
If you need to inspect a shader with reflection, it is done this way (note that this is actually what the Effects framework does; it's just a layer on top of d3dcompiler):
ShaderReflection refl = new ShaderReflection(result.Bytecode);
You then need to set up all the API calls manually (which is what Effects does for you when you call EffectPass.Apply).
Also, since you compile shaders individually, there is no more layout validation between stages (the effects compiler would give you: "No valid VertexShader-PixelShader combination..."). So you need to be careful when setting up your pipeline with non-matching shaders (you can use the reflection data to validate manually, or watch a black screen while the debug runtime spams your output window in Visual Studio).
So transitioning can be a bit tedious, but it can also be beneficial, since it's easier to minimize pipeline state changes (in my use case this is not a concern, so the Effects framework does just fine, but if you have a high number of draw calls it can become significant).

Displaying camera preview using DirectX Texture2D causes oscillation on Windows Phone 8

I am currently writing a small app which shows the preview from the phone camera using a SharpDX sprite batch. For those who have a Nokia developer account, the code is mainly from this article.
Problem
Occasionally, it seems like previous frames are drawn to the screen (the "video" jumps back and forth) for a fraction of a second, which looks like oscillation/flicker.
I thought of a threading problem (since the PreviewFrameAvailable event handler is called by a different thread than the method responsible for rendering), but inserting a lock statement into both methods makes the code too slow (the frame rate drops below 15 frames/sec).
Does anyone have an idea how to resolve this issue, or how to accomplish thread synchronization in this case without losing too much performance?
Code
First, all resources are created, whereas device is a valid instance of GraphicsDevice:
spriteBatch = new SpriteBatch(device);
photoDevice = await PhotoCaptureDevice.OpenAsync(CameraSensorLocation.Back, captureSize);
photoDevice.FocusRegion = null;
width = (int)photoDevice.PreviewResolution.Width;
height = (int)photoDevice.PreviewResolution.Height;
previewData = new int[width * height];
cameraTexture = Texture2D.New(device, width, height, PixelFormat.B8G8R8A8.UNorm);
photoDevice.PreviewFrameAvailable += photoDevice_PreviewFrameAvailable;
Then, whenever the preview frame changes, I set the data to the texture:
void photoDevice_PreviewFrameAvailable(ICameraCaptureDevice sender, object args)
{
sender.GetPreviewBufferArgb(previewData);
cameraTexture.SetData(previewData);
}
Finally, the texture is drawn using a SpriteBatch, where the parameters backBufferCenter, textureCenter, textureScaling and Math.PI / 2 are used to center and adjust the texture in landscape orientation:
spriteBatch.Begin();
spriteBatch.Draw(cameraTexture, backBufferCenter, null, Color.White, (float)Math.PI / 2, textureCenter, textureScaling, SpriteEffects.None, 1.0f);
spriteBatch.End();
The render method is called by the SharpDX game class, which basically uses the IDrawingSurfaceBackgroundContentProvider interface, which is called by the DrawingSurfaceBackgroundGrid component of the Windows Phone 8 runtime.
Solution
In addition to Olydis' solution (see below), I also had to set Game.IsFixedTimeStep to false, due to a SharpDX bug (see this issue on GitHub for details).
Furthermore, it is not safe to call sender.GetPreviewBufferArgb(previewData) inside the handler for PreviewFrameAvailable, due to cross thread access. See the corresponding thread in the windows phone developer community.
My Guess
As you guessed, I'm also pretty sure this is due to threading. I suspect that, for example, the relatively lengthy SetData call may be interleaved with the Draw call, leading to unexpected output.
Solution
The following solution does not use synchronization, but instead moves the "critical" parts (access to the texture) into the same context.
Also, let's allocate two int[] buffers instead of one, which we will use in an alternating fashion.
Code Fragments
void photoDevice_PreviewFrameAvailable(ICameraCaptureDevice sender, object args)
{
    sender.GetPreviewBufferArgb(previewData2);

    // swap buffers
    var previewDataTemp = previewData1;
    previewData1 = previewData2;
    previewData2 = previewDataTemp;
}
Then add this to your Draw call (or an equivalent context):
cameraTexture.SetData(previewData1);
Conclusion
This should practically prevent your problem, since only "fully updated" textures are drawn and there is no concurrent access to them. The use of two int[] buffers reduces the risk of SetData and GetPreviewBufferArgb accessing the same array concurrently - however, it does not eliminate the risk (though I have no idea whether concurrent access to the int[] can result in weird behaviour in the first place).

Efficient image manipulation in C#

I'm using the System.Drawing classes to generate thumbnails and watermarked images from user-uploaded photos. The users are also able to crop the images using jCrop after uploading the original. I've taken over this code from someone else, and am looking to simplify and optimize it (it's being used on a high-traffic website).
The previous developer had static methods that received a bitmap as a parameter and returned one as well, internally allocating and disposing a Graphics object. My understanding is that a Bitmap instance contains the entire image in memory, while a Graphics object is basically a queue of draw operations, and that it is idempotent.
The process currently works as follows:
Receive the image and store it in a temporary file.
Receive crop coordinates.
Load the original bitmap into memory.
Create a new bitmap from the original, applying the cropping.
Do some crazy-ass brightness adjusting on the new bitmap, maybe (?) returning a new bitmap (I'd rather not touch this; pointer arithmetic abounds!); let's call this A.
Create another bitmap from the resulting one, applying the watermark (let's call this B1).
Create a 175x175 thumbnail bitmap from A.
Create a 45x45 thumbnail bitmap from A.
This seems like a lot of memory allocations. My question is this: is it a good idea to rewrite portions of the code and reuse the Graphics instances, in effect creating a pipeline? I only need one image in memory (the original upload), while the rest can be written directly to disk. All the generated images need the crop and brightness transformations, plus a single transformation unique to each version, effectively creating a tree of operations.
Any thoughts or ideas?
Oh, and I should probably mention that this is the first time I'm really working with .NET, so if something I say seems confused, please bear with me and give me some hints.
Reusing Graphics objects will probably not result in significant performance gain.
The underlying GDI code simply creates a device context for the bitmap you have loaded in RAM (a memory DC).
The bottleneck of your operation appears to be in loading the image from disk.
Why reload the image from disk? If it is already in a byte array in RAM - which it should be when it is uploaded - you can just create a memory stream over the byte array and then create a bitmap from that memory stream.
In other words: save it to disk, but don't reload it; just operate on it from RAM.
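A minimal sketch of that, assuming the upload is available as a byte[] named uploadBytes:

using (var stream = new MemoryStream(uploadBytes))
using (var original = new Bitmap(stream))
{
    // GDI+ needs the stream to stay open for the lifetime of the Bitmap,
    // so do all the cropping/thumbnailing inside this block and write the results to disk.
}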
Also, you shouldn't need to create a new bitmap to apply the watermark (depending on how it's done).
You should profile the operation to see where it needs improvement (or even if it needs to be improved.)
The process seems reasonable. Each image has to exist in memory before it is saved to disk - so each version of your thumbnails will be in memory first. The key to making sure this works efficiently is to Dispose of your Graphics and Bitmap objects. The easiest way to do that is with the using statement.
using (Bitmap b = new Bitmap(175, 175))
using (Graphics g = Graphics.FromImage(b))
{
    ...
}
I completed a similar project a while ago and did some practical testing to see if there was a difference in performance if I reused the Graphics object rather than spin up a new one for every image. In my case, I was working on a steady stream of large numbers of images (>10,000 in a "batch"). I found that I did get a slight performance increase by reusing the Graphics object.
I also found I got a slight increase by using GraphicsContainers within the Graphics object to easily swap different states into and out of it as it was used to perform various actions. (Specifically, I had to apply a crop and draw some text and a box (rectangle) on each image.) I don't know if this makes sense for what you need to do, but you might want to look at the BeginContainer and EndContainer methods on the Graphics object (a small sketch follows the links below).
In my case, the difference was slight. I don't know if you would get more or less improvement in your implementation. But since you will incur a cost in rewriting your code, you might want to consider finishing the current design and doing some perf tests before rewriting. Just a thought.
Some links you might find useful:
Using Nested Graphics Containers
GraphicsContainer Class
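A minimal sketch of that container pattern (the clip rectangle and caption are illustrative; GraphicsContainer lives in System.Drawing.Drawing2D):

using (var g = Graphics.FromImage(bitmap))
{
    GraphicsContainer state = g.BeginContainer();
    g.SetClip(new Rectangle(10, 10, 100, 100));  // state changes stay local to the container
    // ... crop-sensitive drawing here ...
    g.EndContainer(state);                       // restores the previous graphics state
    g.DrawString("Caption", SystemFonts.DefaultFont, Brushes.Black, 5f, 5f);
}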
I am only going to throw this out there casually, but if you want a quick guide to best practices for working with images, look at the Paint.NET project. For free, high-performance tools for image manipulation, look at AForge.NET.
The benefit of AForge is that it allows you to do a lot of these steps without creating a new bitmap every time. If this is for a website, I can almost guarantee that the code you are working with will be the performance bottleneck for the application.
