Edit: To narrow the scope of this question a bit:
When the .NET framework invokes an external method, do I still incur the cost of all the .NET overhead during the execution of that method?
I am dipping my toes into the world of 'physical' computing. I have a Raspberry Pi, and I've been able to do a number of very cool things with it, such as executing C# code via Mono, creating a custom network connection to my Windows machine, and accessing GPIO pins via Python/C#.
However, now I am up against something I have never had to contend with in the .NET world of productivity applications: the speed of sound and electricity.
I have an ultrasonic range sensor. I am able to get a reading using Python, but the variance is too great for my liking. I also tried it from a C# wrapper, but that is even worse, to the point that my code cannot execute fast enough to even see the echo pin turn on and off.
So here is my question: if I write a small C function that will perform all the necessary state changes and readings on the board, will I reap the speed benefits while calling from C#? That is to say, when execution of an extern method begins from within the .NET Framework... what is happening?
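For reference, a minimal sketch of what such an extern declaration might look like from the C# side (the library name and function here are hypothetical placeholders for the C routine described above):

    using System.Runtime.InteropServices;

    class RangeSensor
    {
        // Hypothetical native routine: performs the trigger/echo timing
        // loop entirely in C and returns the echo duration in microseconds.
        [DllImport("librangefinder.so", CallingConvention = CallingConvention.Cdecl)]
        static extern int read_echo_microseconds(int triggerPin, int echoPin);

        public static double DistanceCentimeters(int triggerPin, int echoPin)
        {
            int us = read_echo_microseconds(triggerPin, echoPin);
            // HC-SR04 style conversion: microseconds / 58 ~= centimeters.
            return us / 58.0;
        }
    }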
Related
My understanding is that in Linux, in order to run a truly hard real-time application, it needs to be compiled as a Linux kernel module and called directly by the kernel. Is that correct? If so, does anyone have any good reading material on the subject (something that is easy for a non-C developer to understand)? If not, how are such applications interfaced with the OS to provide deterministic timing?
Is it possible to compile a C# program ahead-of-time with, say, Mono or .NET Native and have it run as a hard real-time application? The code would, of course, have to be written so that it is fast and completes in the allotted time so that it does not get preempted (if I understand how RT works). The idea is that there would be a hard real-time main thread (with unsafe memory) that interfaced via shared memory with one or more managed, non-real-time C# threads.
If running C# code as hard real-time is not an option, would running C code as HRT and then sharing memory with a .NET application be an option?
I found the question below, but it is 4 years old with only one answer, and I wanted to know if anyone has any more insight since then:
Can C# .NET be used for hard real-time?
Is there a way/system to debug/monitor code without stopping execution?
In industrial automation control programming (PLC/PAC/DCS) it is possible to connect the debugger while the program is running, and see in the code editor the value of variables and expressions, without setting breakpoints or tracepoints.
As an example, take an F# multithreaded application, where code is executed in a continuous loop or triggered by timers. Is there a way to attach a debugger like the Visual Studio Debugger and see the values of variables and expressions (in the code editor or in a watch pane) WITHOUT interrupting the execution?
It doesn't matter if it's not synchronous; it's acceptable if the debugger/monitor does not capture every code scan.
I am tasked with creating a high-level controller for a process plant, and I would like to use C#, F#, or even C++ in a managed or native application instead of a PAC system. But being forced to interrupt execution in order to debug is a huge disadvantage in this kind of application.
UPDATE
First of all, thanks to everyone for their answers.
Based on those answers, though, I realized that I probably need to reformulate my question as follows:
Is anyone aware of any library/framework/package/extension that allows working with a native or managed application on Windows or Linux (C#, F#, or C++) in exactly the same way as a PAC development platform, specifically:
1) Putting the dev platform in "status" mode, where it automatically shows the runtime value of variables and expressions present in the code excerpt currently visible, without interrupting execution?
2) Creating watch windows that show the runtime value of variables and expressions, again without interrupting execution?
Also, what I am looking for is something that (like any PAC platform) offers these features OUT OF THE BOX, without requiring any change in the application code (like adding log instructions).
Thank you in advance
UPDATE 2
It looks like something did exist (see http://vsdevaids.webs.com/); does anyone know whether it is still available somewhere?
UPDATE 3
For those interested, I managed to download the last available release of VSDevAids. I installed it and it appears to work, but it's pointless without a license, and I couldn't find any information on how to reach the author.
http://www.mediafire.com/file/vvdk2e0g6091r4h/VSDevAidsInstaller.msi
If somebody has better luck, please let me know.
This is a normal requirement: needing instrumentation/diagnostic data from a production system. It's not really a debugger; it's usually one of the first things you should establish in your system design.
Not knowing your system at all, it's hard to say what you need, but generally these fall into two categories:
human-readable trace - something like log4net is what I would recommend (see the sketch after this list)
machine-readable counters, etc. - say, "number of widget shavings in the last pass". This one is harder to generalize; you could layer it onto log4net too, or invent your own pipe.
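For the human-readable trace, a minimal log4net sketch (this assumes the log4net package is referenced and an appender is configured in App.config):

    using log4net;
    using log4net.Config;

    class Widget
    {
        static readonly ILog Log = LogManager.GetLogger(typeof(Widget));

        static void Main()
        {
            XmlConfigurator.Configure(); // read appenders from App.config

            // Human-readable trace of what the system is doing.
            Log.Info("Pass started");
            Log.DebugFormat("Widget shavings in last pass: {0}", 42);
        }
    }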
With regards to your edited question, I can almost guarantee you that what you are looking for does not exist. Consequence-free debugging/monitoring of even moderate usefulness for production code with no prior effort? I'd have heard of it. Consider that both C++ and C# are extremely cross-platform. There are a few caveats:
There are almost certainly C++ compilers built for very specific hardware that do what you require. This hardware is likely to have very limited capabilities, and the compilers are likely to otherwise be inferior to their larger counterparts, such as gcc, clang, MSVC, to name a few.
Compile-time instrumentation can do what you require, although it affects speed and memory usage, and even stability, in my experience.
There ARE also frameworks that do what you require, but not without affecting your code. For example, if you are using WPF as your UI, it's possible to monitor anything directly related to the UI of your application. But...that's hardly a better solution than log4net.
Lastly, there are tools that can monitor EVERY system call your application makes for both Windows (procmon.exe/"Process Monitor" from SysInternals) and Linux (strace). There's very little you can't find out using these. That said, the ease of use is hardly what you're looking for, and strictly internal variables are still not going to be visible. Still might be something to consider if you know you'll be making system calls with the variables you're interested in and can set up adequate filtering.
Also, you should reconsider your "no impact on the code" requirement. There are .NET frameworks that can allow you to monitor an entire class merely by making a single function call during construction, or by deriving from a class in the framework. Many modern UIs are predicated on being notified of any change to the data they are monitoring. Extensive effort has gone into making this as powerful and easy as possible. But it does require you to at least consider it when writing your code.
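For instance, the standard INotifyPropertyChanged pattern is enough for a UI binding or logger to observe values as they change, but only because the class opts in (a minimal sketch):

    using System.ComponentModel;

    class PlantState : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        double _temperature;
        public double Temperature
        {
            get { return _temperature; }
            set
            {
                _temperature = value;
                // Any attached observer (UI binding, logger, watch pane)
                // is notified without execution ever stopping.
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Temperature"));
            }
        }
    }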
Many years ago (think 8-bit 6502/6809 days) you could buy (or usually rent; I seem to remember a figure of £40K to purchase one in the late 80s) a processor simulator that would allow you to replace the processor in your design with a pin-compatible device that had a flying lead to the simulator box. This would allow things like capturing the instructions/data leading up to a processor interrupt, or some other way of stopping the processor (even a "push button to stop code" was possible). You could even step backwards, allowing you to see why an instruction or branch happened.
In these days of multi-core, nanometer-scale technology, I doubt such a thing exists.
I have been searching for this kind of feature for quite a long time with no luck, unfortunately. Submitting the question to the Stack Overflow community was sort of a "last resort", so now I'm ready to conclude that it doesn't exist.
VSDevAids (as @zzxyz pointed out) is not a solution, as it requires significant support from the application itself.
Pod CPU emulators (mentioned by @Neil), aka in-circuit emulators (ICE), and their evolutions are designed to thoroughly test the interaction between firmware and hardware; they are not so useful in high-level programming (especially if managed, like .NET).
Thanks for all contributions.
Given that the familiar form of .NET runs on Windows, which is not a real-time O/S, and Mono runs on Linux (the standard kernel is also not a real-time O/S);
given also that any memory allocation scheme offering garbage collection (as in "managed" .NET), and indeed any heap memory scheme, will introduce non-deterministic, potentially non-trivial delays into an application's execution behavior:
Is there any combination of alternate host O/S and coding paradigm in which one can leverage all of the power and conveniences of C#/.NET while implementing a solution that can execute designated portions of code within tightly specified time constraints? For example: start a C# method every 10 ms with a tolerance of less than 1 ms, with completion time determined only by the work performed in the method itself?
Obviously, the application would have to be carefully written; time-critical code would have to avoid memory allocations; the application would have to have completed all its memory allocation etc. work and have no other threads active once the hard real-time loop is started. Also, the host O/S would have to support real-time scheduling.
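To make those constraints concrete, here is a sketch of the closest managed-side setup I'm aware of; GCSettings.LatencyMode (available since .NET 4.5) reduces GC pauses but does not guarantee them away:

    using System.Runtime;

    class ControlLoop
    {
        // Pre-allocate everything the time-critical loop will touch.
        static readonly double[] Samples = new double[4096];

        static void Main()
        {
            // Ask the GC to avoid blocking collections where possible.
            // This is mitigation, not a hard real-time guarantee.
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

            while (true)
            {
                // Time-critical work: no allocations, no boxing, no LINQ.
                for (int i = 0; i < Samples.Length; i++)
                    Samples[i] *= 0.5;
            }
        }
    }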
Is this possible within the .NET / MONO framework, or is it precluded by the design of the .NET runtime, framework, and O/Ss on which it (or compatible equivalent) is supported?
For example: is it possible to do reliable fine-grained (~1ms) machine control purely in C# with something like NETduino, or do they have limits or require alternate strategies for such applications?
Short Answer: No.
Longer answer: The closest you can get is running the .NET Micro Framework directly on hardware, but the TinyCLR still doesn't give you deterministic timings. Microsoft has Windows CE/Windows Embedded Compact as its real-time offering, but even that is only real-time for slower tasks (I believe somewhere in the range of 50 microseconds or more; not sure whether that qualifies as hard real-time).
I do not know whether it would be technically possible to create a real-time C# implementation, but no one has done so, and even .NET Native isn't made for that.
Can C# be used for hard real-time? Yes
When we talk about real-time it's most often (if not always) about robotics and IoT. And for that we almost always go with one of these options (forget Windows CE and Windows 10 IoT):
Microcontrollers (example: Arduino, RPi Pico, NodeMCU)
Linux based SBCs (example: Raspberry Pi, BeagleBone, Rock Pi)
Microcontrollers are by nature real-time. Basically the device will just run a loop forever (there are interrupts and multi-threading on some chips though). Top languages in this category are C/C++ and MicroPython. But C# can also be used:
Wilderness Labs (Netduino and Meadow F7)
.NET nanoFramework (several boards)
The second option (Linux-based SBCs) is a bit trickier. The OS has complete control over the hardware, and it has a scheduler; that way many processes can run on just one CPU. The OS itself does a lot of housekeeping as well.
Linux has a set of scheduling APIs that can be used to ask the OS to favor our process over others. The OS will do its best to comply, but there are no guarantees; this is usually called soft real-time. In .NET you can use Process.PriorityClass to change your process's nice value. Depending on how busy the OS is and the amount of resources available (CPUs and memory), you might get satisfying results.
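For example (this is a soft request; the scheduler still makes no guarantees):

    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
            // Ask the OS to favor this process over others.
            // On Linux this maps onto the process's nice value.
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
        }
    }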
Other than that, Linux also provides hard real-time capabilities with the PREEMPT_RT patch, and there is also a feature that lets you isolate a CPU core for selected processes. But to my knowledge .NET does not have any API to use these capabilities (P/Invoke may work).
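A rough sketch of what that P/Invoke could look like, calling glibc's sched_setscheduler to request the SCHED_FIFO policy (the constant is taken from the Linux headers; this needs root or CAP_SYS_NICE, and is untested territory for .NET):

    using System;
    using System.Runtime.InteropServices;

    class RealtimeScheduler
    {
        [StructLayout(LayoutKind.Sequential)]
        struct SchedParam
        {
            public int sched_priority;
        }

        const int SCHED_FIFO = 1; // from <sched.h>

        [DllImport("libc", SetLastError = true)]
        static extern int sched_setscheduler(int pid, int policy, ref SchedParam param);

        static void Main()
        {
            var p = new SchedParam { sched_priority = 50 };
            // pid 0 means "the calling process".
            if (sched_setscheduler(0, SCHED_FIFO, ref p) != 0)
                Console.Error.WriteLine("sched_setscheduler failed, errno " +
                    Marshal.GetLastWin32Error());
        }
    }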
Background:
I'm writing an application in C# using .NET 4.0. It prints a bunch of documents in a certain order. The documents are of all different types and are actually printed using ShellExecute with the "print" verb.
To make sure the order doesn't get jumbled, I'd like to examine the print queue for the printer involved. My main loop would look like:
Invoke "print" action on the document
Wait for document to show up in print queue
Repeat until done
How Can I Monitor The Print Queue Using Managed Code?
I found some great examples of doing similar things using unmanaged calls (Like: http://blogs.msdn.com/b/martijnh/archive/2009/08/05/printmonitor-a-c-print-spooler-monitor.aspx). Also, I know how to look at the spooled files under c:\windows\system32\spool... and figure things out that way.
However, none of those solutions is very satisfying... with the amount of unmanaged code I'm calling, I feel like I should just be writing the app in C++ (and not have the .NET dependency/overhead).
Main Question: Is there really no way to monitor a print queue using only managed calls?
More general question: I come from the Java world, and typically only use .NET languages when I want to do something OS-specific or something that needs to interact with other things in the MS world (for example, SSIS components).
It seems like every time I start a project I end up in this same mess: all kinds of calls to native functions, COM stuff, etc, etc.
Secondary Question: Is there something I'm missing about the .NET philosophy or implementation? (Am I just not looking hard enough for managed libraries to do things? Is .NET the wrong choice for anything that needs to do Windows-specific things like manipulate the print queue?) I get (or think I get) that .NET is theoretically supposed to be OS-independent... but surely most modern operating systems have printers and print queues and things like that. (So if you had generic calls for doing these kinds of things, they could be implemented on each platform's version of the framework.)
Main Question: Take a look at the PrintQueue and LocalPrintServer classes in the System.Printing namespace.
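A minimal sketch of reading the queue (this requires a reference to the System.Printing assembly, which ships with WPF):

    using System;
    using System.Printing;

    class QueueWatcher
    {
        static void Main()
        {
            var server = new LocalPrintServer();
            PrintQueue queue = server.DefaultPrintQueue;

            // Take a snapshot of the jobs currently in the queue.
            queue.Refresh();
            foreach (PrintSystemJobInfo job in queue.GetPrintJobInfoCollection())
            {
                Console.WriteLine("{0}: {1} ({2})",
                    job.JobIdentifier, job.Name, job.JobStatus);
            }
        }
    }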
Secondary Question: .NET was not written to be OS-independent (sans Mono); it was written to be Windows-version independent. While it would be nice to deal only with managed objects and managed calls, I see this as a somewhat unrealistic expectation. The sheer size and volume of the existing C and COM functions exposed by Windows make wrapping everything a daunting task. While I'm sure Microsoft has tons of developers on the payroll, I would say the return on investment is quite low for such an undertaking, considering the relatively easy-to-use COM and P/Invoke support available.
I'm writing a C# application that calls a C++ dll. This dll is a device driver for an imaging system; when the image is being acquired, a preview of the image is available from the library on a line-by-line basis. The C++ dll takes a callback to fill in the preview, and that callback consists basically of the size of the final image, the currently scanned line, and the line of data itself.
Problem is, there's a pretty serious delay from the time when scanning stops and the C# callback stops getting information. The flow of the program goes something like:
Assign callback to C++ dll from within C#
User starts to get data
Device starts up
dll starts to call the callback after a few seconds (normal)
Device finishes image formation
dll is still calling the callback for double the time of image formation.
This same dll worked with a C++ application just fine; there does not appear to be that last step delay. In C#, however, if I have the callback immediately return, the delay still exists; no matter what I do inside the callback, it's there.
Is this delay an inherent limitation of calling managed code from unmanaged code, or is there something either side could do to make this go faster? I am in contact with the C++ library writer, so it's possible to implement a fix from the C++ side.
Edit: Could doing something simple like a named pipe work? Could an application read from its own pipe?
It may be that the Managed Debug Assistant that checks native callbacks for garbage-collected targets is the culprit (are you running in debug mode under the debugger?).
See the PSA: Pinvokes may be 100x slower under the debugger blog entry by Mike Stall.
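Relatedly, make sure the delegate handed to the native side stays rooted for the whole scan; otherwise the GC can collect it and trip the CallbackOnCollectedDelegate MDA. A sketch (the native import is a hypothetical stand-in for your driver's entry point):

    using System;
    using System.Runtime.InteropServices;

    class Scanner
    {
        delegate void LineCallback(int line, IntPtr data, int length);

        [DllImport("imagingdriver.dll")] // hypothetical native entry point
        static extern void SetPreviewCallback(LineCallback callback);

        // Held in a static field so the GC cannot collect the delegate
        // while native code still holds a function pointer to it.
        static LineCallback _callback;

        static void Main()
        {
            _callback = OnLine;
            SetPreviewCallback(_callback);
            Console.ReadLine(); // keep the process alive while scanning
        }

        static void OnLine(int line, IntPtr data, int length)
        {
            // Handle one preview line here.
        }
    }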
Are you doing any funky data marshalling across the interop layer? If so, you may have a huge delay while it's basically marshalling all your image data by converting it. You can easily test this: the larger the image data, the longer it will take.
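One way to keep that cost down is to declare the callback so only a pointer crosses the boundary, copying on demand rather than letting the marshaller copy every line (names here are hypothetical):

    using System;
    using System.Runtime.InteropServices;

    class PreviewHandlers
    {
        // Expensive: the marshaller allocates and copies a managed array
        // for every single line the native side delivers.
        delegate void CopyingCallback(int line,
            [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 2)] byte[] data,
            int length);

        // Cheap: only the raw pointer crosses the interop boundary.
        delegate void PointerCallback(int line, IntPtr data, int length);

        static void OnLine(int line, IntPtr data, int length)
        {
            // Copy explicitly only if the line must outlive the callback.
            var buffer = new byte[length];
            Marshal.Copy(data, buffer, 0, length);
        }
    }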
A few possible alternatives that spring to mind:
1. Use a memory-mapped file, though you'd need to implement a simple semaphore or signalling system to say "I have data ready" and "I have consumed the data" (see the sketch after this list).
2. Compile the C++ dll in mixed mode (any C++ code can be compiled into .NET with the /clr flag), then use C++/CLI.
3. Use Remoting and IPC channels - maybe a bit of overkill, but worth a look.
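For option 1, a minimal sketch using MemoryMappedFile (available since .NET 4.0) with a named EventWaitHandle as the "I have data ready" signal; a second handle going the other way would complete the handshake:

    using System.IO.MemoryMappedFiles;
    using System.Threading;

    class SharedPreviewBuffer
    {
        public static void Producer()
        {
            var mmf = MemoryMappedFile.CreateNew("preview_buffer", 4096);
            var ready = new EventWaitHandle(false, EventResetMode.AutoReset,
                "preview_ready");
            using (var accessor = mmf.CreateViewAccessor())
            {
                accessor.Write(0, 42); // write a line of preview data
                ready.Set();           // signal "I have data ready"
            }
        }

        public static void Consumer()
        {
            var mmf = MemoryMappedFile.OpenExisting("preview_buffer");
            var ready = EventWaitHandle.OpenExisting("preview_ready");
            using (var accessor = mmf.CreateViewAccessor())
            {
                ready.WaitOne();                   // wait for the signal
                int value = accessor.ReadInt32(0); // consume the data
            }
        }
    }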
Hope that helps
Turns out the delay is in the C++ side, by a developer who swore up and down it wasn't.