My understanding is that in Linux, in order to run a truly hard real-time application, it needs to be compiled as a Linux kernel module and called directly by the kernel. Is that correct? If so, does anyone have any good reading material on the subject (something that is easy for a non-C developer to understand)? If not, how are such applications interfaced with the OS to provide deterministic timing?
Is it possible to compile a C# program ahead-of-time with, say, Mono or .NET Native and have it run as a hard real-time application? The code would, of course, have to be written so that it is fast and completes in the allotted time so that it does not get preempted (if I understand how RT works). The idea is that there would be a hard real-time main thread (with unsafe memory) that interfaces via shared memory with one or more non-real-time managed C# threads.

If running C# code as hard real-time is not an option, would running C code as HRT be an option, sharing memory with a .NET application?
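For the shared-memory half of that idea, here is a minimal sketch of the .NET side, assuming the hard real-time C process publishes its readings through a file under /dev/shm (the path, the 4 KB size, and the layout are all made up for illustration):

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class Program
{
    static void Main()
    {
        // The real-time C process creates and updates /dev/shm/rt_telemetry
        // (via shm_open/mmap); path, size, and layout are assumptions here.
        const string path = "/dev/shm/rt_telemetry";

        using var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite);
        using var mmf = MemoryMappedFile.CreateFromFile(
            fs, null, 4096, MemoryMappedFileAccess.ReadWrite,
            HandleInheritability.None, leaveOpen: false);
        using var view = mmf.CreateViewAccessor();

        double latestReading = view.ReadDouble(0); // slot 0, written by the C side
        Console.WriteLine($"Latest sensor value: {latestReading}");
    }
}
```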
I found this question, but it is four years old and has only one answer, and I wanted to know if anyone has any more insight since then:
Can C# .NET be used for hard real-time?
I have two applications: the main-app is written in ML4 (a programming language which compiles to machine language; I do not know much about this technology), and the tool-app is written in C#/.NET.

The main-app calls the .NET assembly via COM, invokes a delegate which does some work, shows a window, and so on. So far it works acceptably.
In the .NET app, I seem to hit a fairly strict thread limit: the complete application can have around 18 threads, and starting another one results in an OutOfMemoryException in Thread.StartInternal without any further information.

The question is obviously: why? Both apps run in the process of the ML4 app, but I have never heard of such a thread limit. Could the COM interface be causing it?

Or can a process be configured to have such a limitation?

Typically I wouldn't post such a question here, since it reads like a no-effort question. The problem is that I have very limited knowledge of processes and threads in the operating system, so I cannot really tell what the possible causes could be.
Edit: To narrow the scope of this question a bit:
When the .NET framework invokes an external method, do I still incur the cost of all the .NET overhead during the execution of that method?
I am dipping my toes into the world of 'physical' computing. I have a Raspberry Pi, and I've been able to do a number of very cool things with it, such as executing C# code via Mono, creating a custom network connection to my Windows machine, and accessing GPIO pins via Python/C#.
However, now I am up against something I have never had to contend with in the .NET world of productivity applications: the speed of sound and electricity.
I have an ultrasonic range sensor. I am able to get a reading using Python, but the variance is too great for my liking. I also tried it from a C# wrapper, but that was even worse, to the point that my code could not execute fast enough even to see the echo pin turn on and off.

So here is my question: if I write a small C function that performs all the necessary state changes and readings on the board, will I reap the speed benefits when calling it from C#? That is to say, when execution of an extern method begins from within the .NET framework, what is actually happening?
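As a rough illustration of the mechanics: C# declares the native entry point with DllImport, and once the call crosses into native code, the tight polling loop runs as plain compiled C; the managed-to-native marshalling cost is paid once per call rather than once per pin read. The library name echosensor and the function read_echo_micros below are made up for this sketch:

```csharp
using System;
using System.Runtime.InteropServices;

class RangeFinder
{
    // Hypothetical native function compiled from C into libechosensor.so:
    // it fires the trigger pin, busy-waits on the echo pin, and returns
    // the pulse width in microseconds (or -1 on timeout).
    [DllImport("echosensor", EntryPoint = "read_echo_micros")]
    static extern long ReadEchoMicros();

    static void Main()
    {
        long micros = ReadEchoMicros();     // a single managed-to-native transition
        double distanceCm = micros / 58.0;  // standard HC-SR04 conversion
        Console.WriteLine($"Distance: {distanceCm:F1} cm");
    }
}
```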
Given that the familiar form of .NET runs on Windows, which is not a real-time O/S, and Mono runs on Linux (the standard kernel is also not a real-time O/S).

Given also that any memory-allocation scheme offering garbage collection (as in "managed" .NET), and indeed any heap-based memory scheme, will introduce non-deterministic, potentially non-trivial delays into an application's execution behavior.
Is there any combination of alternate host O/S and coding paradigm in which one can leverage all of the power and conveniences of C# .NET while implementing a solution which can execute designated portions of code within tightly specified time constraints? e.g. start a C# method every 10ms to a tolerance of less than 1ms, with completion time determined only by the work performed in the method itself?
Obviously, the application would have to be carefully written; time-critical code would have to avoid memory allocations; the application would have to have completed all its memory allocation etc. work and have no other threads active once the hard real-time loop is started. Also, the host O/S would have to support real-time scheduling.
Is this possible within the .NET / MONO framework, or is it precluded by the design of the .NET runtime, framework, and O/Ss on which it (or compatible equivalent) is supported?
For example: is it possible to do reliable fine-grained (~1ms) machine control purely in C# with something like a Netduino, or do such boards have limits or require alternate strategies for these applications?
Short Answer: No.
Longer answer: The closest you can get is running the .NET Micro Framework directly on hardware, but the TinyCLR still doesn't give you deterministic timings. Microsoft has Windows CE/Windows Embedded Compact as its real-time offering, but even that is only real-time for slower tasks (I believe somewhere in the range of 50 microseconds or more; I'm not sure whether that qualifies as hard real-time).

I do not know if it is technically possible to create a real-time C# implementation, but no one has done one, and even .NET Native isn't made for that.
Can C# be used for hard real-time? Yes
When we talk about real-time it's most often (if not always) about robotics and IoT. And for that we almost always go with one of these options (forget Windows CE and Windows 10 IoT):
Microcontrollers (example: Arduino, RPi Pico, NodeMCU)
Linux based SBCs (example: Raspberry Pi, BeagleBone, Rock Pi)
Microcontrollers are by nature real-time. Basically the device just runs a loop forever (though there are interrupts and multi-threading on some chips). The top languages in this category are C/C++ and MicroPython. But C# can also be used (see the sketch after this list):
Wilderness Labs (Netduino and Meadow F7)
.NET nanoFramework (several boards)
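As an illustration of what that looks like, here is a minimal blink sketch against the System.Device.Gpio API that .NET nanoFramework implements; the pin number is board-specific and assumed here:

```csharp
using System.Device.Gpio;
using System.Threading;

public class Program
{
    public static void Main()
    {
        // The device runs only this loop, so timing jitter comes from the
        // code itself (plus interrupts) rather than from an OS scheduler.
        const int LedPin = 16; // board-specific; an assumption for this sketch

        var controller = new GpioController();
        controller.OpenPin(LedPin, PinMode.Output);

        while (true)
        {
            controller.Write(LedPin, PinValue.High);
            Thread.Sleep(100);
            controller.Write(LedPin, PinValue.Low);
            Thread.Sleep(100);
        }
    }
}
```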
The second option (Linux-based SBCs) is a bit more tricky. The OS has complete control over the hardware and it has a scheduler, which is how many processes can share just one CPU. The OS itself also does a lot of housekeeping.
Linux has a set of scheduling APIs that can be used to tell the OS to favor our process over others. The OS will do its best to comply, but there are no guarantees; this is usually called soft real-time. In .NET you can use Process.PriorityClass to change your process's priority (its nice value on Linux). Depending on how busy the OS is and the amount of resources available (CPUs and memory), you might get satisfying results.
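For example, a minimal sketch of raising the process priority from .NET (soft real-time only; the scheduler still makes no guarantees):

```csharp
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Ask the OS to favor this process. On Windows this sets the
        // priority class; on Linux it maps to the nice value.
        var self = Process.GetCurrentProcess();
        self.PriorityClass = ProcessPriorityClass.High;
    }
}
```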
Beyond that, Linux also provides hard real-time capabilities with the PREEMPT_RT patch, and you can also isolate a CPU core for your selected processes (the isolcpus boot parameter). But to my knowledge .NET does not expose any API for these capabilities (P/Invoke may work).
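If you are willing to P/Invoke, here is a sketch of what requesting the SCHED_FIFO policy from libc might look like (assumes Linux, root or CAP_SYS_NICE, and a PREEMPT_RT kernel if you want hard guarantees):

```csharp
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class RealtimeScheduling
{
    const int SCHED_FIFO = 1; // from <sched.h> on Linux

    [StructLayout(LayoutKind.Sequential)]
    struct SchedParam
    {
        public int sched_priority;
    }

    [DllImport("libc", SetLastError = true)]
    static extern int sched_setscheduler(int pid, int policy, ref SchedParam param);

    // Request the real-time SCHED_FIFO policy for the calling process.
    public static void EnterRealtime(int priority = 80)
    {
        var p = new SchedParam { sched_priority = priority };
        if (sched_setscheduler(0, SCHED_FIFO, ref p) != 0) // pid 0 = current process
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}
```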
I have different applications written in C# and C++ that communicate between each other. I would like to test this environment with some scenarios that I wrote.
Each scenario runs for a different number of hours.

Is there a way to accelerate everything so that the scenarios run in minutes?
The applications may contain some Thread.Sleep(...) calls, or the equivalent in C++.

My idea is that a Thread.Sleep(2000), which normally waits for 2 seconds, should only wait for 200 ms when accelerated.
Unfortunately I cannot change or refactor the code of the applications.
A first idea I have is to run the applications in an "accelerated Windows" system: a sort of wrapper around the OS where time runs faster. But I have no idea how to achieve this.
So any new idea or solution would be great.
Thanx
I don't think "making Windows run faster" is practical, but I would consider intercepting the Sleep() method calls and short-circuiting them. I haven't looked, but Thread.Sleep() will probably thunk down somewhere to a Win32 API or an NT API (http://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx).
You can use depends.exe from the Windows SDK and a debugger to see what your code uses, but I suspect (unless you know otherwise) that it won't simply be a Sleep() call, it'll more likely be a call waiting on some system object, I/O or a trigger, using WaitForMultipleObjectsEx() and friends.
Have a look at Detours from Microsoft Research for API interception: https://github.com/microsoft/Detours
There are also various writings by Jeffrey Richter and Matt Pietrek on the subject. In fact, here's a CodeProject article inspired by one of those:

http://www.codeproject.com/Articles/5178/DLL-Injection-and-function-interception-tutorial
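Whichever interception mechanism you end up using, the replacement logic boils down to dividing the requested delay by a scale factor. Here is that logic sketched in C# (the TimeScale of 10 matches the 2000 ms to 200 ms example in the question; a real Detours hook would be native code doing the same arithmetic):

```csharp
using System;
using System.Threading;

static class ScaledTime
{
    // Hypothetical acceleration factor: 10 turns a 2000 ms sleep into 200 ms.
    public const int TimeScale = 10;

    // What an intercepted Sleep() would do in place of the real call.
    public static void Sleep(int milliseconds)
    {
        Thread.Sleep(Math.Max(1, milliseconds / TimeScale));
    }
}
```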
I've embedded the LuaInterface project into an application written in C# on .NET Framework 4.0. After compiling LuaInterface and Lua 5.1, I referenced them in my application, created a Lua VM, and exposed a few .NET classes. When the Lua VM doesn't make many calls, performance is not affected at all; but when it starts to call a larger number of .NET functions, the entire application becomes slow and unresponsive.

In response to this, I've moved the Lua VM onto its own thread. For some reason, though, the thread on which the GUI is updated will not update while Lua is doing a function call, resulting in stuttering in the GUI: when moving a window around, you can clearly see that it doesn't respond for a little while, then moves, doesn't respond, and so on.

How can I solve this issue? I was under the impression that if I gave Lua its own thread, the other threads shouldn't be affected! Is this purely related to my own code in some way? Does LuaInterface have some serious performance issues when calling .NET functions? What else could I use?
I didn't try to compile LuaInterface against .NET 4; so far I have used only the precompiled DLLs. I know that you can speed up mixed-mode assemblies in .NET 4 by setting the … to zero. According to MS: "In the .NET Framework 4, a streamlined interop marshalling architecture provides a significant performance improvement for transitions from managed code to unmanaged code."
http://msdn.microsoft.com/en-us/library/ff361650.aspx
Keep us updated in case you find a trick that works for you. In Visual Studio 2010 you can actually build against .NET 2, so if I were you I would create a dummy app and compile it against multiple targets; it might help you quantify the speed degradation when you are using .NET 4.

If you give us some code, maybe I could play with it a bit and figure out what is wrong. I am really interested in LuaInterface and keen to get to the bottom of this.
Since I don't have a code sample, I am just speculating, but it is possible that the issue is related to your UI not being thread-safe. It is pretty common to have locking issues, for example with Windows Forms controls:
How to: Make Thread-Safe Calls to Windows Forms Controls
http://msdn.microsoft.com/en-us/library/ms171728(v=vs.80).aspx
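If that is the cause, the standard fix is to marshal UI updates back to the thread that owns the control rather than touching it from the Lua thread. A minimal sketch (the statusLabel and SetStatus names are made up for illustration):

```csharp
using System;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly Label statusLabel = new Label { Dock = DockStyle.Top };

    public MainForm()
    {
        Controls.Add(statusLabel);
    }

    // Safe to call from the Lua worker thread: if we are not on the UI
    // thread, marshal the update over to it with Control.Invoke.
    public void SetStatus(string text)
    {
        if (statusLabel.InvokeRequired)
            statusLabel.Invoke(new Action(() => statusLabel.Text = text));
        else
            statusLabel.Text = text;
    }
}
```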