IIS & service-based generation of web images - GDI, GDI+ or Direct2D? (C#)

It is May 2017. My boss has asked me to produce some code to make some custom web images on our website based on text that the user enters into their browser.
The server environment is Windows Server 2012 running IIS, and I am familiar with C#. From what I read, I should be able to use GDI+ to create images, draw smoothly antialiased text into them, and so on.
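For illustration, this is roughly the kind of System.Drawing (GDI+) code I have in mind - a minimal sketch; the font, sizes and output path are placeholders:

    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Drawing.Text;

    // Render user-entered text into a PNG with GDI+ (System.Drawing).
    using (var bmp = new Bitmap(400, 120))
    using (var g = Graphics.FromImage(bmp))
    using (var font = new Font("Segoe UI", 24f))
    {
        g.TextRenderingHint = TextRenderingHint.AntiAlias; // smooth text
        g.Clear(Color.White);
        g.DrawString("user-entered text", font, Brushes.Black, new PointF(20f, 40f));
        bmp.Save(@"C:\temp\out.png", ImageFormat.Png);
    }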
However, one of my colleagues suggested GDI+ may not work on Windows Server, that GDI+ is based on the older 32-bit GDI and will therefore be scrapped one day soon, and that I should use DirectX instead. I feel that introducing another layer would make matters more complex to write and support.
There are a lot of web pages discussing these subjects, as well as the performance of each, but it all feels inconclusive, so I am asking for experience from the SO community.
So, question: will GDI+ work on Windows Server?
EDIT: Thanks for the responses. I see from them that I was a tad vague on a couple of points. Specifically, we intend the render-to-image process to be queue-based, with a service running the GDI+ graphics code. I have just read this from 2013, which suggests that GDI+ should not be run within a service, and that Direct2D is the MS-preferred way to go.
EDIT 2: Further research has found this page. It says the options are GDI, GDI+ or Direct2D. I copy the key paragraphs here, though the entire page is a quick read, so view it at source for context if you can.
Options for Available APIs
There are three options for server-side rendering: GDI, GDI+ and
Direct2D. Like GDI and GDI+, Direct2D is a native 2D rendering API
that gives applications more control over the use of graphics devices.
In addition, Direct2D uniquely supports a single-threaded and a
multithreaded factory. The following sections compare each API in
terms of drawing qualities and multithreaded server-side rendering.
GDI
Unlike Direct2D and GDI+, GDI does not support high-quality
drawing features. For instance, GDI does not support antialiasing for
creating smooth lines and has only limited support for transparency.
Based on the graphics performance test results on Windows 7 and
Windows Server 2008 R2, Direct2D scales more efficiently than GDI,
despite the redesign of locks in GDI. For more information about these
test results, see Engineering Windows 7 Graphics Performance. In
addition, applications using GDI are limited to 10240 GDI handles per
process and 65536 GDI handles per session. The reason is that
internally Windows uses a 16-bit WORD to store the index of handles
for each session.
GDI+
While GDI+ supports antialiasing and alpha
blending for high-quality drawing, the main problem with GDI+ for
server scenarios is that it does not support running in Session 0.
Since Session 0 only supports non-interactive functionality, functions
that directly or indirectly interact with display devices will
therefore receive errors. Specific examples of functions include not
only those dealing with display devices, but also those indirectly
dealing with device drivers. Similar to GDI, GDI+ is limited by its
locking mechanism. The locking mechanisms in GDI+ are the same in
Windows 7 and Windows Server 2008 R2 as in previous versions.
Direct2D
Direct2D is a hardware-accelerated, immediate-mode, 2-D graphics API
that provides high performance and high-quality rendering. It offers a
single-threaded and a multithreaded factory and the linear scaling of
coarse-grained software rendering. To do this, Direct2D defines a root
factory interface. As a rule, an object created on a factory can only
be used with other objects created from the same factory. The caller
can request either a single-threaded or a multithreaded factory when
it is created. If a single-threaded factory is requested, then no
locking of threads is performed. If the caller requests a
multithreaded factory, then, a factory-wide thread lock is acquired
whenever a call is made into Direct2D. In addition, the locking of
threads in Direct2D is more granular than in GDI and GDI+, so that the
increase of the number of threads has minimal impact on the
performance.
After some discussion of threading and some sample code, it concludes...
Conclusion
As seen from the above, using Direct2D for server-side rendering is simple and straightforward. In addition, it provides high quality and highly parallelizable rendering that can run in low-privilege environments of the server.
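For reference, in C# the factory choice the article describes might look roughly like this via a managed wrapper such as SharpDX - a hedged sketch, not code we have committed to:

    using SharpDX.Direct2D1;

    // A single-threaded factory performs no locking; a multithreaded factory
    // takes a factory-wide lock on each call into Direct2D.
    using (var factory = new Factory(FactoryType.MultiThreaded))
    {
        // As the article notes, objects created from this factory should only
        // be used with other objects created from the same factory.
    }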
Whilst I interpret the slant of the piece as pro-Direct2D, the points on locking and Session 0 for GDI+ are concerning. Arguably, since we propose a queue-based process, the locking issue is less severe; but if there are immediate and practical restrictions on what a service can do with GDI+, then it looks like Direct2D is the only viable route for my project.
Did I interpret this correctly, or has the SO community more recent and relevant experience?
EDIT: With the initial batch of responses slowing up and no sign of a definitive answer, I add this edit. The team here has selected SharpDX as a wrapping library for MS DirectWrite, which is itself part of the DirectX family of APIs. We are not 100% certain that SharpDX will be required, and we will be comparing it to a DirectWrite-only implementation as we go along, looking out for the benefit or hindrance the extra layer represents. We believe at this point that this follows the direction MS were trying to suggest in the article sampled above, and that we will be free of GDI/GDI+ shortcomings in a service environment and able to benefit from the performance and feature gains in DirectWrite. We shall see.
EDIT: Having delved into SharpDX, we are making progress, and something mentioned by Mgetz about 'WARP' now makes sense. Direct3D is the underpinning tech we access via the SharpDX API. As with all low-level graphics work, we request a device context (aka DC), then a drawing surface, then we draw. The device context part is where WARP comes in. A DC usually fronts a hardware device, but in my project I am targeting a service on a server where there is unlikely to be a graphics processor, and maybe not even a video card. If it is a virtual server, the video processor may be shared, etc. So I don't want to be tied to a 'physical' hardware device. Enter WARP (a good time to view the link for full context), which is an entirely software realisation of a DC with no hardware dependency. Sweet. Here is an extract from the linked page, with a sketch of the device-creation call after it:
Enabling Rendering When Direct3D 10 Hardware is Not Available
WARP allows fast rendering in a variety of situations where hardware
implementations are unavailable, including:
When the user does not have any Direct3D-capable hardware
When an application runs as a service or in a server environment
When a video card is not installed
When a video driver is not available, or is not working correctly
When a video card is out of memory, hangs, or would take too many system resources to initialize
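To make that concrete, here is a minimal sketch of requesting a WARP-backed Direct2D device context through SharpDX (assuming SharpDX's Direct3D11/Direct2D1 namespaces; device setup only, no error handling):

    using SharpDX.Direct3D;
    using SharpDX.Direct2D1;
    using D3D11 = SharpDX.Direct3D11;
    using DXGI = SharpDX.DXGI;

    // Ask Direct3D 11 for a WARP (software) device - no GPU or video card needed.
    using (var d3dDevice = new D3D11.Device(DriverType.Warp,
               D3D11.DeviceCreationFlags.BgraSupport))
    using (var dxgiDevice = d3dDevice.QueryInterface<DXGI.Device>())
    using (var d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice))
    using (var d2dContext = new SharpDX.Direct2D1.DeviceContext(
               d2dDevice, DeviceContextOptions.None))
    {
        // Create a bitmap target on d2dContext and draw with Direct2D/DirectWrite here.
    }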

In your case, I would probably try to go with SkiaSharp (https://github.com/mono/SkiaSharp) to abstract away a bit from the platform/API details.
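For example, a minimal text-to-PNG sketch with SkiaSharp might look like this (API names as of the current library; the text, sizes and output path are placeholders):

    using System.IO;
    using SkiaSharp;

    using (var surface = SKSurface.Create(new SKImageInfo(400, 120)))
    {
        var canvas = surface.Canvas;
        canvas.Clear(SKColors.White);
        using (var paint = new SKPaint { Color = SKColors.Black, IsAntialias = true, TextSize = 32 })
        {
            canvas.DrawText("user-entered text", 20, 60, paint);
        }
        using (var image = surface.Snapshot())
        using (var data = image.Encode(SKEncodedImageFormat.Png, 90))
        using (var stream = File.OpenWrite("out.png"))
        {
            data.SaveTo(stream); // write the encoded PNG bytes
        }
    }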

Related

Can C# .NET be used for hard real-time?

Given that the familiar form of .NET runs on Windows, which is not a real-time O/S, and Mono runs on Linux (the standard kernel is also not a real-time O/S).
Given also that any memory allocation scheme offering garbage collection (as in "managed" .NET), and indeed any heap memory scheme, will introduce non-deterministic, potentially non-trivial delays into an application's execution behavior.
Is there any combination of alternate host O/S and coding paradigm in which one can leverage all of the power and conveniences of C# .NET while implementing a solution which can execute designated portions of code within tightly specified time constraints? e.g. start a C# method every 10ms to a tolerance of less than 1ms, with completion time determined only by the work performed in the method itself?
Obviously, the application would have to be carefully written; time-critical code would have to avoid memory allocations; the application would have to have completed all its memory allocation etc. work and have no other threads active once the hard real-time loop is started. Also, the host O/S would have to support real-time scheduling.
Is this possible within the .NET / MONO framework, or is it precluded by the design of the .NET runtime, framework, and O/Ss on which it (or compatible equivalent) is supported?
For example: is it possible to do reliable fine-grained (~1ms) machine control purely in C# with something like Netduino, or do such devices have limits or require alternate strategies for these kinds of applications?
Short Answer: No.
Longer answer: The closest you can get is running the .NET Micro Framework directly on hardware, but the TinyCLR still doesn't give you deterministic timings. Microsoft has Windows CE/Windows Embedded Compact as their real-time offering, but even that is only real-time for slower tasks (I believe somewhere in the range of 50 microseconds or more; I'm not sure whether that qualifies as hard real-time).
I do not know whether it would be technically possible to create a real-time C# implementation, but no one has done one, and even .NET Native isn't made for that.
Can C# be used for hard real-time? Yes
When we talk about real-time, it's most often (if not always) about robotics and IoT. For that we almost always go with one of these options (forget Windows CE and Windows 10 IoT):
Microcontrollers (example: Arduino, RPi Pico, NodeMCU)
Linux based SBCs (example: Raspberry Pi, BeagleBone, Rock Pi)
Microcontrollers are by nature real-time. Basically, the device just runs a loop forever (though there are interrupts and multi-threading on some chips). The top languages in this category are C/C++ and MicroPython. But C# can also be used:
Wilderness Labs (Netduino and Meadow F7)
.NET nanoFramework (several boards)
The second option (Linux-based SBCs) is a bit more tricky. The OS has complete control over the hardware, and it has a scheduler, so many processes can run on just one CPU. The OS itself has a lot of housekeeping to do as well.
Linux has a set of scheduling APIs that can be used to tell the OS to favor our process over others. The OS will do its best to comply, but there are no guarantees. This is usually called soft real-time. In .NET you can use Process.PriorityClass to change your process's priority (its nice value on Linux). Depending on how busy the OS is and the amount of resources available (CPUs and memory), you might get satisfying results.
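A minimal sketch of that soft real-time hint (exactly which nice value each priority class maps to on Linux is a runtime implementation detail):

    using System.Diagnostics;

    // Raise our own scheduling priority; on Linux the runtime translates the
    // priority class into a nice value, so this is a hint, not a guarantee.
    var me = Process.GetCurrentProcess();
    me.PriorityClass = ProcessPriorityClass.High;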
Other than that, Linux also provides hard real-time capabilities with the PREEMPT_RT patch, and there is a feature that lets you isolate a CPU core for your selected processes. But to my knowledge .NET does not have any API to use these capabilities (P/Invoke may work).

C++ AMP calculations and WPF rendering graphics card dual use performance

Situation:
In an application that needs both calculation and image rendering (image preprocessing and then display), I want to use both AMP and WPF, with AMP doing some filters on the images and WPF doing not much more than displaying scaled/rotated images and some simple overlays, both running at roughly 30 fps, with new images continuously streaming in.
Question:
Is there any way to find out how the 2 will influence each other?
I am wondering whether I will see the hopefully nice speed-up from an isolated AMP-only environment in the actual application later on as well.
Additional Info:
I will be able to, and am going to, measure the AMP performance separately, since it is low-level, new functionality that I am setting up in a separate project anyway. The WPF rendering part already exists in a complex application, though, so it would be difficult to isolate.
I am not planning to do the filters etc. for rendering only, since the results will be needed at intermediate stages as well (other algorithms, e.g. edge detection, saving, ...).
There are a couple of things you should consider here:
Is there any way to find out how the 2 will influence each other?
Directly no, but indirectly yes. Both WPF and AMP make use of the GPU for rendering. If the AMP portion of your application uses too much of the GPU's resources, it will interfere with your frame rate. The Cartoonizer case study from the C++ AMP book uses MFC and C++ AMP to do exactly what you describe. On slower hardware with high image-processing loads, you can see the application's responsiveness suffer. However, in almost all cases cartoonizing images on the GPU is much faster and can achieve video frame rates.
I am wondering on whether I will see the hopefully nice speed-up
With any GPU application, the key to seeing performance improvements is that the speedup from running compute on the GPU, rather than the CPU, must make up for the additional overhead of copying data to and from the GPU.
In this case there is additional overhead, as you must also marshal data from the native (C++ AMP) environment to the managed (WPF) one. You need to take care to do this efficiently by ensuring that your data types are blittable and do not require explicit marshaling. I implemented an N-body modeling application that used WPF and native code.
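For instance, a blittable type has the same layout in managed and native memory, so an array of them can be pinned and handed to the native side without per-element copying (a hypothetical sketch; the struct name and fields are illustrative):

    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential)]
    struct Body            // blittable: only primitive value-type fields, no references
    {
        public float X, Y, Z;      // position
        public float Vx, Vy, Vz;   // velocity
    }

    // A Body[] can be pinned (e.g. GCHandle.Alloc(bodies, GCHandleType.Pinned))
    // and passed to native C++ AMP code as a raw pointer, with no marshaling cost.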
Ideally you would want to render the results of the GPU calculation without moving them through the CPU. This is possible, but not if you explicitly involve WPF. The N-body example achieves this by embedding a DirectX render area directly into the WPF application and then rendering data directly from the AMP arrays. This was largely because WPF's Viewport3D really didn't meet my performance needs. For rendering images, WPF may be fine.
Unless things have changed with VS 2013, you definitely want your C++ AMP code in a separate DLL, as there are some limitations when using native code in a C++/CLI project.
As @stijn suggests, I would build a small prototype to make sure that the gains you get by moving some of the compute to the GPU are not lost to the overhead of moving data to and from the GPU, and also into WPF.

GPU access on Windows Mobile

I am building an app for Windows Mobile 6.5 and I was wondering if there is any way to hardware accelerate various calculations. I would like to have the GPU do some of the work for the app, instead of relying on the CPU to do everything.
I would like to use C#, but if that is not possible, then C++ is just fine.
Thanks for any guidance!
EDIT:
An example of the types of calculations I want to offload to the GPU would be things like calculating the locations of 25-100 different rectangles so they can be placed on the screen. This is just a simple example, but I've currently been doing these kinds of calculations on a separate thread, so I figured (since it's geometry calculation) it would be a prime candidate for the GPU.
To fully answer your question I would need more details about what calculations you are trying to perform, but the short answer is no: the GPUs in Windows Mobile devices, and the SDK Microsoft exposes, are not suitable for GPGPU (general-purpose computation on graphics hardware).
GPGPU really only became practical when GPUs started providing programmable vertex and pixel shaders with DirectX 9 (with limited support in 8). The GPUs used in Windows Mobile 6.5 devices are much closer to the DirectX 8 era and do not have programmable vertex and pixel shaders:
http://msdn.microsoft.com/en-us/library/aa920048.aspx
Even on modern desktop graphics cards with GPGPU libraries such as CUDA, getting performance increases when offloading calculations to the GPU is not a trivial task. The calculations must be inherently suited to GPUs (i.e. able to run massively in parallel, with enough computation performed per unit of data to offset the cost of transferring it to the GPU and back).
That does not mean it is impossible to speed up calculations with the GPU on Windows Mobile 6.5, however. There is a small set of problems that can be mapped to a fixed-function pipeline without shaders. If you can figure out how to solve your problem by rendering polygons and reading back the resulting image, then you can use the GPU to do it, but it is unlikely that the calculations you need to do would be suitable, or that it would be worth the effort of attempting.

Most efficient way of .NET IPC on Windows Mobile

I'm going to split a program into two parts, because I'm running out of process memory. One part takes a picture and stores it on the file system (GUI), and the other part analyzes the picture (OCR) and reports the results back to the main part.
The communication between the two processes will look like this:
Is the OCR process responding?
If not, start the OCR process.
Tell the OCR process that there is a new picture.
Wait until the OCR process returns the result (most likely less than 1 KB of characters).
The three most important things, in order of priority for me are:
High performance
High stability
Low complexity - I've only got around three days to finish and test the program.
The GUI is written in .NET/C#, so the solution must be compatible with that. Which method of IPC would you recommend me to use?
I'd probably use point-to-point queues for this. They perform very well and are stable - the kernel uses them for its own notification system. The MSDN article already has managed classes built for using them, so complexity is also low.
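For context, the managed classes in that article wrap a small native API in coredll.dll; here is a hedged P/Invoke sketch of those calls (struct layout per the MSGQUEUEOPTIONS documentation; flag constants omitted):

    using System;
    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential)]
    struct MSGQUEUEOPTIONS
    {
        public uint dwSize;        // sizeof(MSGQUEUEOPTIONS)
        public uint dwFlags;
        public uint dwMaxMessages;
        public uint cbMaxMessage;  // max size of a single message, in bytes
        public bool bReadAccess;   // true = open for reading, false = for writing
    }

    static class NativeMsgQueue
    {
        [DllImport("coredll.dll", CharSet = CharSet.Unicode)]
        public static extern IntPtr CreateMsgQueue(string lpszName, ref MSGQUEUEOPTIONS lpOptions);

        [DllImport("coredll.dll")]
        public static extern bool WriteMsgQueue(IntPtr hMsgQ, byte[] lpBuffer,
            uint cbDataSize, int dwTimeout, uint dwWriteFlags);

        [DllImport("coredll.dll")]
        public static extern bool ReadMsgQueue(IntPtr hMsgQ, byte[] lpBuffer,
            uint cbBufferSize, out uint lpNumberOfBytesRead, int dwTimeout, out uint pdwFlags);
    }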
You could use WCF on Windows Mobile. Microsoft has released guidelines and sample projects showing how to do this. If you set it up to use message-queue endpoints (I'm not sure whether named pipes are available), performance should be very good. Apart from that, WCF is a very easy technology to get started with. Good luck!

Is it reasonable to write a server application in C# in my case?

I want it to work on Windows servers.
It will be a cloud-type server: it'll consist of modules/parts running on different machines all over the world, using HTTP/TCP + UPnP to connect to each other.
There are going to be controlling/monitoring/observing modules on each machine to provide stats on performance.
This net is going to be working with large amounts of live VIDEO/AUDIO streaming/broadcasting data.
It is going to use FFmpeg for re-encoding, and OpenGL, OpenCV and such for filtering (.NET wrappers exist and work, BTW).
It will not use any WCF or IIS.
I want to develop it with a team of 2-4 developers, smart students.
So is it OK to create this in C#/.NET, or should I not waste my time on the promises of ease it could provide to a developer, and go C/C++?
So is it reasonable to write a server application in C# in my case?
Off-topic: why not WCF
Warning: it gets way too subjective in here.
WCF is great when you are a big corporation with relatively small data exchange per service session.
When you have video - LIVE video - it all gets complicated: large amounts of data, lots of users streaming into and out of your service at the same time.
Try to do live video streaming over the HTTP binding, then try it with the other bindings, and you'll see why I do not like the idea of live streaming with WCF: it is slow, with way too much overhead that live streaming doesn't need. And after all, have you ever seen a live video streaming app built on WCF? No, you haven't. Maybe you have seen more-or-less live video from a Silverlight + IIS pair, which I do not like because it is a Silverlight/WindowsMediaPlayer-only streaming solution, while I want more than that.
I love having cross-platform clients with rich UIs, and I do not like (this is all my personal opinion, so it is subjective) the Silverlight+IIS+WCF combination. So what shall I do? Right: go to sockets and streams, in such old and simple formats as FLV, with Flash as the back-end client. Simpler to develop in some parts, and a more conservative way of doing live video over the web than the one you get from MS today.
I love Flash FLV live streaming because you just open a socket and start sending live FLV video data onto it (for each user, an FLV header and then FLV "tags", one by one: video tag, audio tag, video tag, audio tag, etc.) and Flash plays it! With no special/unusual code. It is fast, easy to support, and does not require the client to have anything new/unusual. And on the server side you can make great use of that "tag" form of video/audio data representation.
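A sketch of that "open a socket and push FLV" idea (the 9-byte FLV header is "FLV", version 1, a flags byte where 0x04 = audio and 0x01 = video, and the header size, followed by a 4-byte PreviousTagSize of zero; the method name is mine):

    using System.Net.Sockets;

    static void StartFlvStream(NetworkStream stream, bool hasAudio, bool hasVideo)
    {
        byte flags = (byte)((hasAudio ? 0x04 : 0x00) | (hasVideo ? 0x01 : 0x00));
        byte[] header =
        {
            (byte)'F', (byte)'L', (byte)'V', 0x01, flags,
            0x00, 0x00, 0x00, 0x09,   // header size = 9
            0x00, 0x00, 0x00, 0x00    // PreviousTagSize0
        };
        stream.Write(header, 0, header.Length);
        // ...then write FLV tags as they are produced: video tag, audio tag, ...
    }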
So that, in short, is why I just do not want to use WCF: it's hard to get live video playing out of it on the client side, and it brings no general benefits to a live video server.
And when most of the live data goes through sockets, why bother with WCF for service management?
During the last half of 2009 and the first half of 2010 I was getting into WCF, live video streaming, Silverlight and Flash, comparing the client/server creation process and reading different formats, with a team of very interesting developers. By the end of the project we had lots of mini servers streaming live data and lots of different clients receiving it. Comparing all we had done, we came to conclusions close to the one I present to you here.
That is why I do not want to use WCF in my next project: I do not want to think about how to deliver media data, I want to focus on filtering/editing it.
Why the question appeared
We started playing with FFmpeg\OpenCV in C, and it is pretty simple to manipulate data using them... in C... on Linux...
But when we started to play with their .NET bindings (we are now playing with Tao.FFmpeg), we found that in most cases we end up using the C# Marshal class a lot, and keeping two variables for each C analog (the problem of pointers), and so on. I hope we will not see such problems with Emgu CV, but still it makes me a little bit afraid...
I think it's entirely reasonable. The benefits of C# with regard to ease of development will greatly outweigh any performance drawbacks of not using C++.
C# is generally more cross-platform than C++. True, C++ is a cross-platform language, but there are large differences between the APIs that C++ programs use to interact with the system. C# and .NET/Mono have a much more standardized interface to the socket layer.
Finally, with ambitious projects like this, getting the project into a usable form is a much more important goal than getting the highest performance possible. Performance only matters if the project is complete. Write it in C# because that will give you the greatest odds of completion. Then worry about performance.
I'm not exactly sure why people have brought up cross-platform concerns, as the OP has clearly stated the app will run on Windows.
As to the actual questions:
Can you build a server application in C# that communicates via TCP/HTTP and does not have to run in IIS? -> Yes.
Can you build a server application in C# that is performant and scales? -> Yes.
Can you do so with students? -> Maybe. It depends on the students... ;) But that is irrespective of the language in use.
Is this something I would do? Yes. We've done it. We have a C# app running on approximately 20,000 machines right now that are communicating effectively over TCP. We aren't using WCF, but we did decide to use RESTful-style services over HTTP for the data transfer.
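In the same spirit, a minimal self-hosted HTTP endpoint needs neither IIS nor WCF (a sketch only; the port, path and payload are made up, and error handling is omitted):

    using System;
    using System.Net;
    using System.Text;

    class MiniRestServer
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:8080/data/"); // placeholder endpoint
            listener.Start();
            while (true)
            {
                HttpListenerContext ctx = listener.GetContext(); // blocks until a request arrives
                byte[] body = Encoding.UTF8.GetBytes("{\"status\":\"ok\"}");
                ctx.Response.ContentType = "application/json";
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close();
            }
        }
    }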
Our biggest issue was simply tuning the app to transfer the "right" amount of data over the wire at a time. This network is for data collection and storage. It's averaging around 200 GB of data collected a day.
UPDATE
I wanted to clarify a bit about the above app. The 20,000 machines at the above installation are clients (XP, Vista, 7, 2003 Server, and 2008 Server). There's only one data-collection server in the mix. The clients post data to the server, when connected to a network, once every 45 seconds. Roughly 97% of the machines stay connected in this manner; the rest connect a couple of times a week.
This works out to the server processing about 37 million requests a day.
Now, to be sure, each request is relatively small, at around 5 KB to 6 KB each. However, the sheer number of requests shows that a C# application can handle managing those connections, which is the bigger part of the OP's problem.
Because the OP's files are large (video), the real issue is simply data transfer, which will be limited more by hard drive speed, network speed, and latency. Those issues are independent of which language you are working in, and will cap the number of connections per server based on available bandwidth.
Working this out, let's limit it to one server as an example. If you have a video rate of 400 kb/s and a 25 Mb/s connection to the internet, then that box could physically only handle around 62 simultaneous connections (25,000 kb/s / 400 kb/s ≈ 62). That is so FAR below the number of connections our app is doing as to be a rounding error.
Assuming perfect network conditions (which don't exist), pumping that internet connection up to 100 Mb/s (which can be expensive) means a 4x increase in simultaneous connections, to roughly 250; still completely manageable.
However, the network is only one side of the equation. Drive speed on the servers matters a lot. You'd better have a good disk array capable of continuously delivering that amount of data. I know drive interfaces claim 3 Gb/s transfer rates, but a drive that can actually saturate the channel has never been built, which means serious planning and money in the server setup.
The point of all of this is that the language doesn't matter one bit in your situation. You have other, much larger contention issues. That being the case, go with the language that will help you get the project done faster.
Why stop at C#? If you (possibly) want cross-platform, write it in Python or similar; you'll find that the networking aspects of a scripting language are far better than C#'s (as that's pretty much the role scripting languages are put to nowadays: running web-based servers).
You'll find developer productivity is much improved over C# (just as C#'s productivity is better than C++'s), and there are lots of people who know and want to work on these systems. It sounds like the performance of the servers themselves is less important than the networking, so it appears that a scripting language would be your best choice. Plus, the ffmpeg libraries are more tightly integrated with Python through pyffmpeg than with C# (well, mostly).
And it'd be a lot cooler, more fun, and very much cross-platform!
If you want C# and also cross-platform ability, your development will have to target the Mono platform (or another cross-platform .NET runtime, if you can find one). You might have to give up Visual Studio, and maybe some Microsoft-specific libraries and tools, but you can still have C# on multiple platforms. Just make sure you start the multi-platform building and testing EARLY in the process, or it will be hell to change things later.
If the application is targeted to run only on Windows platforms, I'm completely sure I would write it in C#. Many applications like that are running right now without us even knowing it.
If the target is to run on multiple platforms, you should first encapsulate all the problems that a non-Windows platform can bring to your application.
Why would you have to write it in C++ if, in this case, C# is capable of doing everything that C++ does? I would use C++ for hardware-level things, like a robot or something else. For writing a server application, C# will fit what you want very well; it was designed for these things.
And C# is cross-platform; you just need the right tool to make it work on a specific platform.
