I've embedded the LuaInterface project into an application written in C# using .NET Framework 4.0. After compiling LuaInterface and Lua 5.1 I've referenced them in my application and created a Lua VM and exposed a few .NET classes. When the Lua VM doesn't make many calls, performance is not affected at all; but when it starts to call a larger number of .NET functions the entire application becomes slow and unresponsive.
In response to this, I've created an additional thread to run the Lua VM on. For some reason, though, the thread on which the GUI is updated will not update while Lua is executing a function call, resulting in stuttering in the GUI: when moving the window around you can clearly see that it doesn't respond for a little while, then moves, doesn't respond again, and so on.
How can I solve this issue? I was under the impression that by giving Lua its own thread, other threads shouldn't be affected! Is this purely related to my own code in some way? Does LuaInterface have some serious performance issues when calling .NET functions? What else could I use?
I haven't tried compiling LuaInterface against .NET 4; so far I have only used the precompiled DLLs. I know that you can speed up mixed-mode assemblies in .NET 4 by setting a certain flag to zero. According to Microsoft: "In the .NET Framework 4, a streamlined interop marshalling architecture provides a significant performance improvement for transitions from managed code to unmanaged code."
http://msdn.microsoft.com/en-us/library/ff361650.aspx
Keep us updated in case you find a trick that works for you. In Visual Studio 2010 you can actually build against .NET 2, so if I were you I would create a dummy app and compile it against multiple targets. That might help you quantify the speed degradation when you are using .NET 4.
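As a starting point, here is a minimal sketch of such a dummy benchmark, assuming the classic LuaInterface API (Lua, RegisterFunction, DoString); the Add method and the iteration count are illustrative only. Compiled against different target frameworks, it should expose how expensive the Lua-to-.NET transitions are:

```csharp
using System;
using System.Diagnostics;
using LuaInterface;

class InteropBenchmark
{
    // A trivial .NET method for Lua to call across the managed boundary.
    public int Add(int a, int b) { return a + b; }

    static void Main()
    {
        var bench = new InteropBenchmark();
        Lua lua = new Lua();

        // Expose the .NET method to Lua under the global name "add".
        lua.RegisterFunction("add", bench, bench.GetType().GetMethod("Add"));

        var sw = Stopwatch.StartNew();
        // 100,000 Lua-to-.NET transitions in a tight loop.
        lua.DoString("for i = 1, 100000 do add(i, i) end");
        sw.Stop();

        Console.WriteLine("100k cross-boundary calls: {0} ms", sw.ElapsedMilliseconds);
    }
}
```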
If you give us some code, maybe I could play with it a bit. I am really interested in LuaInterface and keen to figure out what is wrong.
Since I don't have a code sample I am just speculating, but it is possible that the issue is related to your UI code not being thread-safe. Locking issues are pretty common with Windows Forms controls, for example.
How to: Make Thread-Safe Calls to Windows Forms Controls
http://msdn.microsoft.com/en-us/library/ms171728(v=vs.80).aspx
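In concrete terms, the pattern from that article looks roughly like the sketch below; MainForm, statusLabel, and RunLuaVm are illustrative names. The point is that the Lua worker thread never touches a control directly:

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly Label statusLabel = new Label();  // illustrative control

    public MainForm()
    {
        Controls.Add(statusLabel);
        // Run the Lua VM on its own thread, as described in the question.
        new Thread(RunLuaVm) { IsBackground = true }.Start();
    }

    private void RunLuaVm()
    {
        // ... execute Lua code here ...
        ReportStatus("Lua call completed");
    }

    private void ReportStatus(string text)
    {
        if (statusLabel.InvokeRequired)
        {
            // Marshal the update to the UI thread; BeginInvoke does not
            // block the worker, so Lua keeps running while the GUI repaints.
            statusLabel.BeginInvoke(new Action<string>(ReportStatus), text);
        }
        else
        {
            statusLabel.Text = text;
        }
    }
}
```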
Related
My understanding is that in Linux, in order to run a truly hard real-time application, it needs to be compiled as a Linux kernel module and called directly by the kernel. Is that correct? If so, does anyone have any good reading material on the subject (something that is easy for a non-C developer to understand)? If not, how are such applications interfaced with the OS to provide deterministic timing?
Is it possible to compile a C# program ahead of time with, say, Mono or .NET Native, and have it run as a hard real-time application? The code would, of course, have to be written so that it is fast and completes in the allotted time, so that it does not get preempted (if I understand how RT works). The idea is that there would be a hard real-time main thread (with unsafe memory) that interfaces, via shared memory, with one or more managed, non-real-time C# threads.
If running C# code as hard real-time is not an option, would running C code as HRT and sharing memory with a .NET application be an option?
I found this, but it is 4 years old and there was only one answer; I wanted to know if anyone has any more insight since then:
Can C# .NET be used for hard real-time?
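On the shared-memory half of the question, here is a minimal sketch of what the non-real-time .NET side could look like, assuming a Windows-style named mapping; the map name "rt_shared" and the polled counter layout are invented for illustration, and the real-time C side would open the same mapping and write into it on its own schedule:

```csharp
using System;
using System.IO.MemoryMappedFiles;
using System.Threading;

class RtSharedMemoryReader
{
    static void Main()
    {
        // "rt_shared" is an invented map name; the real-time (C) side would
        // open the same named mapping and write into it.
        using (var map = MemoryMappedFile.CreateOrOpen("rt_shared", 4096))
        using (var view = map.CreateViewAccessor())
        {
            while (true)
            {
                // Non-real-time side: poll a counter the RT side updates at
                // offset 0. Only this side has no timing guarantees.
                long counter = view.ReadInt64(0);
                Console.WriteLine("RT counter: {0}", counter);
                Thread.Sleep(100);
            }
        }
    }
}
```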
Is there a way/system to debug/monitor code without stopping execution?
In industrial automation control programming (PLC/PAC/DCS) it is possible to connect the debugger while the program is running, and see in the code editor the value of variables and expressions, without setting breakpoints or tracepoints.
As an example, let's take a multithreaded F# application where code is executed in a continuous loop or triggered by timers. Is there a way to attach a debugger like the Visual Studio debugger and see the values of variables and expressions (in the code editor or in a watch pane) WITHOUT interrupting the execution?
It doesn't matter if it's not synchronous, it's acceptable if the debugger/monitor does not capture all the code scans.
I am tasked with creating a high-level controller for a process plant, and I would like to use C# or F#, or even C++ in a managed or native application, instead of a PAC system. But being forced to interrupt execution in order to debug is a huge disadvantage in this kind of application.
UPDATE
First of all, thanks to everyone for their answers.
Based on those answers, though, I realized that I probably need to reformulate my question as follows:
Is anyone aware of any library/framework/package/extension that allows one to work with a native or managed application on Windows or Linux (C#, F#, or C++) the exact same way as with a PAC development platform? Specifically:
1) Put the dev platform in "status" mode, where it automatically shows the runtime values of variables and expressions present in the code excerpt currently visible, without interrupting execution?
2) Create watch windows that show the runtime values of variables and expressions, again without interrupting execution?
Also, what I am looking for is something that (like any PAC platform) offers these features OUT OF THE BOX, without requiring any change in the application code (like adding log instructions).
Thank you in advance
UPDATE 2
It looks like there is something (see http://vsdevaids.webs.com/); does anyone know whether it is still available somewhere?
UPDATE 3
For those interested: I managed to download the last available release of VSDevAids. I installed it and it seems to work, but it's pointless without a licence, and I couldn't find any information on how to reach the author.
http://www.mediafire.com/file/vvdk2e0g6091r4h/VSDevAidsInstaller.msi
If somebody has better luck, please let me know.
This is a normal requirement: needing instrumentation/diagnostic data from a production system. It's not really a debugger; it's usually one of the first things you should establish in your system design.
Not knowing your system at all, it's hard to say what you need, but generally these fall into two categories:
human-readable trace: something like log4net is what I would recommend (see the sketch after this list)
machine-readable counters, e.g. "number of widget shavings in the last pass". This one is harder to generalize; you could layer it onto log4net too, or invent your own pipe.
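A minimal sketch of both categories layered onto log4net follows; the class and counter names are invented, and a production system would configure file or remote appenders via app.config rather than BasicConfigurator:

```csharp
using log4net;
using log4net.Config;

class WidgetPass
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(WidgetPass));

    static void Main()
    {
        // Simplest possible configuration: log to the console.
        BasicConfigurator.Configure();

        int widgetShavings = 0;  // the machine-readable counter
        for (int pass = 0; pass < 3; pass++)
        {
            widgetShavings += DoPass();
            // Human-readable trace line that also carries the counter,
            // so it can be parsed out later if needed.
            Log.InfoFormat("pass={0} widgetShavings={1}", pass, widgetShavings);
        }
    }

    static int DoPass() { return 7; }  // placeholder for the real work
}
```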
With regard to your edited question, I can almost guarantee you that what you are looking for does not exist. Consequence-free debugging/monitoring of even moderate usefulness for production code, with no prior effort? I'd have heard of it. Consider that both C++ and C# are extremely cross-platform. There are a few caveats:
There are almost certainly C++ compilers built for very specific hardware that do what you require. This hardware is likely to have very limited capabilities, and the compilers are likely to otherwise be inferior to their larger counterparts, such as gcc, clang, MSVC, to name a few.
Compile-time instrumentation can do what you require, although it affects speed and memory usage, and even stability, in my experience.
There ARE also frameworks that do what you require, but not without affecting your code. For example, if you are using WPF as your UI, it's possible to monitor anything directly related to the UI of your application. But...that's hardly a better solution than log4net.
Lastly, there are tools that can monitor EVERY system call your application makes for both Windows (procmon.exe/"Process Monitor" from SysInternals) and Linux (strace). There's very little you can't find out using these. That said, the ease of use is hardly what you're looking for, and strictly internal variables are still not going to be visible. Still might be something to consider if you know you'll be making system calls with the variables you're interested in and can set up adequate filtering.
Also, you should reconsider your "no impact on the code" requirement. There are .NET frameworks that allow you to monitor an entire class merely by making a single function call during construction, or by deriving from a class in the framework. Many modern UIs are predicated on being notified of any change to the data they monitor. Extensive effort has gone into making this as powerful and easy as possible, but it does require you to at least consider it when writing your code.
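For instance, the standard .NET change-notification pattern is INotifyPropertyChanged; the sketch below (with an invented PlantValue class) shows the small up-front concession it asks of your code, after which any bound UI is notified of every change:

```csharp
using System.ComponentModel;

// The up-front concession: implement INotifyPropertyChanged once, and any
// bound UI (WPF, WinForms data binding, ...) is notified of every change
// to the value without further effort.
public class PlantValue : INotifyPropertyChanged
{
    private double _current;

    public event PropertyChangedEventHandler PropertyChanged;

    public double Current
    {
        get { return _current; }
        set
        {
            if (_current == value) return;
            _current = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Current"));
        }
    }
}
```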
Many years ago (think 8-bit 6502/6809 days) you could buy (or usually rent; I seem to remember a figure of £40K to purchase one in the late 80s) a processor simulator that would let you replace the processor in your design with a pin-compatible device that had a flying lead to the simulator box. This would allow things like capturing the instructions/data leading up to a processor interrupt, or some other way of stopping the processor (even a "push button to stop code" was possible). You could even step backwards, allowing you to see why an instruction or branch happened.
In these days of multi-core, nanometre-scale technology, I doubt such a thing exists.
I have been searching for this kind of feature for quite a long time, with no luck, unfortunately. Submitting the question to the Stack Overflow community was sort of a last resort, so now I'm ready to conclude that it doesn't exist.
VSDevAids (as @zzxyz pointed out) is not a solution, as it requires significant support from the application itself.
Pod CPU emulators (mentioned by @Neil), a.k.a. in-circuit emulators (ICE), and their evolutions are designed to thoroughly test the interaction between firmware and hardware; they are not so useful for high-level programming (especially managed code like .NET).
Thanks for all the contributions.
I have two applications. The main app is written in ML4 (a programming language which compiles to machine language; I do not know much about this technology); the tool app is written in C#/.NET.
The main-app calls the .NET assembly via COM, invokes a delegate which does some work, shows a window, etc. So far it works pretty acceptable.
In the .NET app, I seem to hit a pretty strict thread limit. The complete application can have around 18 threads; starting another one results in an OutOfMemoryException in Thread.StartInternal, without any further information.
The question is obviously: why? Both apps run in the ML4 app's process, but I have never heard of such a thread limit. Could the COM interface be causing it? Or can a process be configured to have such a limitation?
Typically I wouldn't post such a question here, since it reads like a no-effort question. The problem is that I have very limited knowledge about processes and threads at the operating-system level, so I cannot really tell what the possible causes could be.
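For what it's worth, a minimal repro sketch is below. One hedged hypothesis (not confirmed by anything in the question) is address-space exhaustion rather than an actual thread-count limit: each CLR thread reserves stack address space up front (1 MB by default), and a constrained or fragmented 32-bit host process can run out early, which surfaces exactly as an OutOfMemoryException from Thread.Start:

```csharp
using System;
using System.Threading;

class ThreadLimitRepro
{
    static void Main()
    {
        int started = 0;
        var block = new ManualResetEvent(false);
        try
        {
            while (true)
            {
                // Each new thread reserves stack address space up front.
                var t = new Thread(() => block.WaitOne());
                t.IsBackground = true;
                t.Start();  // OutOfMemoryException surfaces here when it fails
                started++;
            }
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("OutOfMemoryException after {0} threads", started);
        }

        // If address space is the culprit, the overload taking a maximum
        // stack size is a common mitigation, e.g.:
        //   new Thread(work, 256 * 1024);
    }
}
```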
While using SlimTune to profile a C# application, I found that when profiling of native functions is enabled, there are lots of entries for a function called "CoUninitializeE". CoUninitialize seems to be related to COM objects; however, I'm not directly using any COM objects, and Google has no information about a version ending with an E.
Does anyone have knowledge of what this function is/how to reduce the amount of time spent on it? (For instance, is it related to memory management, so that reducing memory allocations or deallocations would help?)
Edit
It appears the function's name is actually "CoUninitializeEx" and that SlimTune is just chopping off a letter for some reason. I would still appreciate knowing what leads to this function being called.
CoInitializeEx() and CoUninitialize() are pretty core in Windows programming. They respectively initialize and shut down COM on a thread. The CLR calls these functions automatically before and after a Thread runs. It is pretty hard to avoid using COM in a .NET program; it is the basic extensibility model for native Windows code. It is quite invisible, thanks to the many wrapper classes in the .NET Framework that hide the plumbing.
The generic diagnosis is that you are using a lot of threads. Yes, that's expensive. The thread pool is a workaround.
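A minimal sketch of that workaround (the work items and counts are invented): pool threads are created once and reused, so the CLR's per-thread COM setup and teardown is paid once per pool thread rather than once per short-lived Thread:

```csharp
using System;
using System.Threading;

class PoolInsteadOfThreads
{
    static void Main()
    {
        using (var done = new CountdownEvent(100))
        {
            for (int i = 0; i < 100; i++)
            {
                int id = i;  // capture a stable copy for the closure
                // Pool threads are reused, so the per-thread COM
                // setup/teardown (CoInitializeEx / CoUninitialize) is paid
                // per pool thread, not per work item.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    DoWork(id);
                    done.Signal();
                });
            }
            done.Wait();
        }
    }

    static void DoWork(int id)
    {
        // ... the work previously done on a dedicated Thread ...
    }
}
```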
In my Mono (C#) project, which is meant to be cross-platform, I am using GTK for the UI. One thing I noticed is that on my netbook under Arch Linux the performance is really speedy: events such as mouse hover and redrawing of widgets are really fast.
Compared to that, on Windows 7 with a dual-core CPU the performance is really, really weak, which perplexes me.
Am I doing something wrong that would cause this difference in performance between OSes?
What can I do to optimize GTK on Windows? It's really bad for a hover event to take around 0.5 seconds to kick in, whereas it's almost immediate on a weaker netbook running Linux.
My code is here for the GUI layer: http://code.google.com/p/subsynct/source/browse/branches/dev/subsync#subsync/GUI
Thanks!
The real problem is with the graphics library GTK uses: Cairo. You are right in saying that GTK performs a lot better on Linux and other operating systems than on Windows. That suggests the problem isn't with the entire Cairo library; it is in Cairo's Win32 backend. According to the backend information in the Cairo docs, Cairo uses xlib (and in some cases cairo-gl, think customized OpenGL) on Linux and other platforms, while on Windows it uses Win32 GDI, which after all is a bit slow and outdated (not to mention completely software-rendered).
Still, even this doesn't completely account for the poor performance of GTK on Windows. Another problem may be that instead of using native widgets, GTK prefers to draw its own widgets, which look almost the same on all platforms. On Windows, however, it also tries to emulate the native widgets using LibWimp to further increase the native look and feel. This extra Windows-only step may also account for some performance overhead. To see this for yourself, try deleting (or renaming) libwimp.dll in the GIMP directory; GIMP runs a lot faster after that (though it looks a little less native).
There are also other, smaller factors that may or may not affect GTK's performance on Windows, like the fact that the GTK runtime ships some 12-15 extra DLLs, compared to the one or two of other toolkits; dynamically linking the entire GTK runtime may greatly increase startup time. There is also the fact that GTK uses a lot of other libraries, like GLib, Pango and, of course, Cairo. The glue code for these libraries adds overhead as well, and sometimes even an extra library like Gdk.
To optimize GTK you may try changing Cairo's backend (difficult, not recommended, and requiring another ton of glue code) or stop using LibWimp (this will make GTK look less native). But overall I don't think GTK is that slow; I have never personally needed any optimizations, even though I have used WinAPI in the past too.
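Neither of those options is code-level. If you want a code-level mitigation to experiment with, one speculative approach (not from the original answer) is to coalesce bursts of redraw requests into a single QueueDraw via an idle handler; the sketch below assumes Gtk# and an invented RedrawCoalescer helper:

```csharp
using Gtk;

// Invented helper: call RequestRedraw from (UI-thread) event handlers
// instead of calling QueueDraw directly on every event.
class RedrawCoalescer
{
    private readonly Widget _target;
    private bool _redrawPending;

    public RedrawCoalescer(Widget target) { _target = target; }

    public void RequestRedraw()
    {
        if (_redrawPending) return;
        _redrawPending = true;
        // Defer to idle time so a burst of events causes a single redraw.
        GLib.Idle.Add(() =>
        {
            _target.QueueDraw();
            _redrawPending = false;
            return false;  // one-shot: do not call this handler again
        });
    }
}
```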
I would guess that the performance problems are in Cairo. I suggest you use gtkparasite on Linux to see where and when parts of your app are being redrawn, and optimize that.
You could also use the free CLR Profiler from Microsoft on Windows to find the hotspots in your app.