I have read a few articles about the Windows thread pool. It looks similar to the CLR thread pool.
The CLR is built on Windows, so is the CLR thread pool based on the Windows thread pool?
I know that each .NET process has one thread pool. What is the situation with the Windows thread pool: does the OS have one thread pool or many?
In C#, can a developer control the Windows thread pool from code?
This is one of those CLR implementation questions that doesn't have a straight answer. It is not up to the CLR to determine how the ThreadPool is implemented; that is the job of the CLR host, a layer of software that integrates the CLR with the operating system. The core interface that the CLR uses to get thread pool work done is IHostThreadPoolManager. It is an unmanaged COM interface, but you'll have little trouble recognizing the almost one-to-one mapping with the ThreadPool class members.
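To make the mapping concrete, here is a minimal sketch using the managed ThreadPool members that have near-identical counterparts on IHostThreadPoolManager (the queued work item is just a placeholder):

    using System;
    using System.Threading;

    class ThreadPoolDemo
    {
        static void Main()
        {
            // These managed calls are routed through the CLR host;
            // IHostThreadPoolManager exposes near-identical methods
            // (QueueUserWorkItem, GetMaxThreads, SetMaxThreads, ...).
            int workerThreads, completionPortThreads;
            ThreadPool.GetMaxThreads(out workerThreads, out completionPortThreads);
            Console.WriteLine("Max worker threads: " + workerThreads);

            ThreadPool.QueueUserWorkItem(state => Console.WriteLine("Running on a pool thread"));
            Thread.Sleep(100);   // give the work item a chance to run before exiting
        }
    }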
There are many implementations of the CLR host. The more recognizable ones are the default CLR host for desktop apps, implemented by mscoree.dll (there are different versions of it for different Windows versions), and the hosts used by ASP.NET, SQL Server, the Visual Studio hosting process, and the custom hosts for Silverlight, Windows Phone and Xbox. Among the less recognizable ones, large unmanaged apps can host the CLR themselves in order to support scripting implemented in a .NET language; CAD programs like AutoCAD are standard examples.
The core notion of a thread is virtualized in the CLR; ICLRTask and ICLRTaskManager are the hosting interfaces for that. They allow a host to implement a thread on something other than an operating system thread, like a fiber. Nobody actually does this, by the way.
Sure, Windows has its own API for a thread pool; the CreateThreadpool() winapi function gets that ball rolling. However, poking around the mscor*.dll files on my machine with dumpbin.exe /imports, I do not see it being used. At least part of the reason might be that CreateThreadpool() is a later winapi function, available only since Vista; XP and earlier Windows versions had a much simpler implementation. So, no, at least for the desktop version of .NET 4.5.2, the Windows thread pool does not appear to be relevant.
My understanding is that on Linux, in order to run a truly hard real-time application, it needs to be compiled as a Linux kernel module and called directly by the kernel. Is that correct? If so, does anyone have any good reading material on the subject? (Something that is easy for a non-C developer to understand.) If not, how are such applications interfaced with the OS to provide deterministic timing?
Is it possible to compile a C# program ahead-of-time with, say, Mono or .NET Native and have it run as a hard real-time application? The code would, of course, have to be written so that it is fast and completes in the allotted time so that it does not get preempted (if I understand how RT works). The idea is that there would be a hard real-time main thread (with unsafe memory) that interfaced via shared memory with one or more managed, non-real-time C# threads.
If running C# code as hard real-time is not an option, would running C code as HRT be an option and then sharing memory with a .NET application?
I found this question, but it is 4 years old and there was only one answer, and I wanted to know if anyone has any more insight since then:
Can C# .NET be used for hard real-time?
Given that the familiar form of .NET runs on Windows, which is not a real-time O/S, and Mono runs on Linux (whose standard kernel is also not a real-time O/S).
Given also that any memory allocation scheme offering garbage collection (as in "managed" .NET), and indeed any heap memory scheme, will introduce non-deterministic, potentially non-trivial delays into an application's execution behavior.
Is there any combination of alternate host O/S and coding paradigm in which one can leverage all of the power and conveniences of C# .NET while implementing a solution that can execute designated portions of code within tightly specified time constraints? E.g. start a C# method every 10 ms to a tolerance of less than 1 ms, with completion time determined only by the work performed in the method itself?
Obviously, the application would have to be carefully written; time-critical code would have to avoid memory allocations; the application would have to have completed all its memory allocation etc. work and have no other threads active once the hard real-time loop is started. Also, the host O/S would have to support real-time scheduling.
Is this possible within the .NET / MONO framework, or is it precluded by the design of the .NET runtime, framework, and O/Ss on which it (or compatible equivalent) is supported?
For example: is it possible to do reliable fine-grained (~1ms) machine control purely in C# with something like NETduino, or do they have limits or require alternate strategies for such applications?
Short Answer: No.
Longer answer: The closest you can get is running the .NET Micro Framework directly on hardware, but the TinyCLR still doesn't give you deterministic timings. Microsoft has Windows CE/Windows Embedded Compact as its real-time offering, but even that is only real time for slower tasks (I believe somewhere in the range of 50 microseconds or more; I'm not sure whether that qualifies as hard real time).
I do not know whether it would be technically possible to create a real-time C# implementation, but no one has done so, and even .NET Native isn't designed for that.
Can C# be used for hard real-time? Yes
When we talk about real-time, it's most often (if not always) about robotics and IoT, and for that we almost always go with one of these options (forget Windows CE and Windows 10 IoT):
Microcontrollers (example: Arduino, RPi Pico, NodeMCU)
Linux based SBCs (example: Raspberry Pi, BeagleBone, Rock Pi)
Microcontrollers are by nature real-time: basically, the device just runs a loop forever (though some chips do have interrupts and multi-threading). The top languages in this category are C/C++ and MicroPython, but C# can also be used:
Wilderness Labs (Netduino and Meadow F7)
.NET nanoFramework (several boards)
The second option (Linux-based SBCs) is a bit more tricky. The OS has complete control over the hardware and has a scheduler, which lets many processes run on just one CPU. The OS itself does a lot of housekeeping as well.
Linux has a set of scheduling APIs that can be used to tell the OS that we want it to favor our process over others, and the OS will do its best to comply, but there are no guarantees. This is usually called soft real-time. In .NET you can use Process.PriorityClass to change your process's priority (its nice value on Linux). Depending on how busy the OS is and the amount of resources available (CPUs and memory), you might get satisfying results.
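As a minimal sketch (no hard guarantees, and raising the priority may require elevated permissions):

    using System.Diagnostics;

    class PriorityDemo
    {
        static void Main()
        {
            using (Process current = Process.GetCurrentProcess())
            {
                // Ask the scheduler to favor this process; this is soft real-time
                // at best, the OS is free to preempt us anyway.
                current.PriorityClass = ProcessPriorityClass.High;
            }
        }
    }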
Other than that, Linux also provides hard real-time capabilities with the PREEMPT_RT patch, and there is also a feature that lets you isolate a CPU core for selected processes. But to my knowledge .NET does not have any built-in API for these capabilities (P/Invoke may work).
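As an illustration only, here is a hedged P/Invoke sketch of requesting SCHED_FIFO from managed code; it assumes a Linux host, only gives hard real-time behaviour on a PREEMPT_RT kernel, and needs root or CAP_SYS_NICE:

    using System;
    using System.Runtime.InteropServices;

    static class RealtimeScheduler
    {
        private const int SCHED_FIFO = 1;   // Linux scheduling policy constant

        [StructLayout(LayoutKind.Sequential)]
        private struct SchedParam { public int sched_priority; }

        [DllImport("libc", SetLastError = true)]
        private static extern int sched_setscheduler(int pid, int policy, ref SchedParam param);

        public static void UseFifo(int priority)
        {
            // pid 0 = the calling process; valid SCHED_FIFO priorities are 1..99.
            var param = new SchedParam { sched_priority = priority };
            if (sched_setscheduler(0, SCHED_FIFO, ref param) != 0)
                throw new InvalidOperationException(
                    "sched_setscheduler failed, errno = " + Marshal.GetLastWin32Error());
        }
    }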
Related to these questions:
How do I get the _real_ thread id in a CLR "friendly" way?
How often does a managed thread switch OS threads?
I would like to be able to actually test the Thread.BeginThreadAffinity() methods and verify how they work and that they work.
Is there some .NET functionality that will force an OS thread switch?
There is not much to test with Thread.BeginThreadAffinity(). It calls a function in the CLR host, IHostTaskManager::BeginThreadAffinity(). IHostTaskManager is an optional interface that a custom CLR host can implement to provide a custom thread implementation, one that doesn't necessarily use an operating system thread. The ICLRTaskManager and ICLRTask interfaces provide the core services for such a custom thread.
These interfaces were added in .NET 2.0 at the request of the SQL Server team. SQL Server has had a custom threading option built in for a long time, based on fibers. Fibers were popular in the olden days, when machines with multiple processor cores were still rare. Other names for a fiber are "green thread" and "coroutine". They have been put out to pasture by the multi-core revolution of the previous decade.
The SQL Server project was a bust. They could not get it reliable enough and abandoned the project. Unfortunately, we are left with the consequences: there is no simple way to map a .NET thread to an OS thread, the subject of your first link, as well as the considerable FUD shown in its accepted answer.
While the CLR still has the basic support for this feature, I do not know of a single example where a custom host implements its own threading. The massive failure of the SQL Server team's project was certainly a major signpost that this is difficult to implement, considering the resources that team had available to make it work. And it just doesn't make sense in general: mapping a single thread to a single processor core, as the operating system does by default and as the default CLR host does, is incredibly hard to beat for efficiency. Processor cores are very cheap to buy these days.
Long story short: Thread.BeginThreadAffinity() does nothing. CLR threads are already affine to OS threads by default. The odds that you'll ever run into a custom CLR host where it does anything at all are sufficiently close to zero to ignore the method.
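If you want to observe this, here is a minimal sketch that reads the OS thread id via P/Invoke around a BeginThreadAffinity/EndThreadAffinity pair; on the default CLR host the two ids are always the same:

    using System;
    using System.Runtime.InteropServices;
    using System.Threading;

    class AffinityProbe
    {
        [DllImport("kernel32.dll")]
        static extern uint GetCurrentThreadId();   // the real OS thread id

        static void Main()
        {
            Thread.BeginThreadAffinity();
            uint before = GetCurrentThreadId();
            Thread.Sleep(100);                     // stand-in for real work
            uint after = GetCurrentThreadId();
            Thread.EndThreadAffinity();

            // On the default CLR host a managed thread never migrates to
            // another OS thread, so this always prints equal ids.
            Console.WriteLine("{0} -> {1}", before, after);
        }
    }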
A simple way to invoke an OS thread context switch is by using one of the WaitHandle.WaitXxx() methods or Thread.Sleep() with a non-zero wait.
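A minimal sketch of both variants:

    using System;
    using System.Threading;

    class ContextSwitchDemo
    {
        static void Main()
        {
            // A non-zero sleep takes the thread off the ready queue, so the
            // OS will schedule another runnable thread if there is one.
            Thread.Sleep(1);

            // Any WaitHandle.WaitXxx() call with a timeout has the same effect.
            using (var evt = new AutoResetEvent(false))
                evt.WaitOne(1);

            Console.WriteLine("Back after yielding the processor");
        }
    }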
I created a multithreaded service to perform image processing. Everything worked fine until one of our clients installed the product on a 16-processor server with lots of memory. Now the service throws lots of out-of-memory errors, which is understandable because a 32-bit process can only get about 1.5 GB of memory regardless of how much is installed.
What is the accepted solution for this situation? Should this service instead spawn off a separate worker process? Should I have one worker process per CPU talking via named pipes to the main service?
EDIT: We are running on a 64-bit server, but we can't target x64 because of limitations in the imaging libraries.
Thank you
There are multiple solutions for this. These are some of the options:
1. Link your .exe with the /LARGEADDRESSAWARE option. That will give your app up to 3 GB of address space (4 GB when running on a 64-bit OS), and no other changes are required.
2. Ask the software vendor who provided the 32-bit binaries for a 64-bit version.
3. Move your 32-bit dependencies out of proc (e.g. communicating via COM or WCF), and change your EXE architecture to 64-bit.
4. Spawn a new process for each execution action, rather than a thread (see the sketch after this list).
5. Convert your code to use Address Windowing Extensions.
Options #1 and #2 are the easiest to implement; #5 is the most difficult.
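A minimal sketch of option 4, assuming a hypothetical 32-bit ImageWorker.exe that takes the input and output file names on its command line:

    using System.Diagnostics;

    class JobRunner
    {
        public static void RunJob(string inputFile, string outputFile)
        {
            // Each job runs in its own 32-bit worker process, so every job gets
            // its own address space. "ImageWorker.exe" and the argument format
            // are placeholders for whatever your worker expects.
            var psi = new ProcessStartInfo("ImageWorker.exe",
                string.Format("\"{0}\" \"{1}\"", inputFile, outputFile))
            {
                UseShellExecute = false,
                CreateNoWindow = true
            };

            using (Process worker = Process.Start(psi))
            {
                worker.WaitForExit();   // the parent service only coordinates; it never
                                        // loads the memory-hungry imaging library itself
            }
        }
    }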
EDIT
I noticed the C# tag in your question. For managed apps you can still set the Large Address Aware flag on the compiled executable using the EditBin.exe tool.
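For example (the executable name is just a placeholder):

    editbin /LARGEADDRESSAWARE MyImageService.exe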
The frequency with which I come across situations where I have to call native 32-bit code from a managed 64-bit process is increasing as 64-bit machines and applications become prevalent. I don't want to mark my application as 32-bit, and I cannot obtain 64-bit versions of the code that is being called.
The solution that I currently use is to create C++ COM shims that are loaded out of process to make the 32-bit calls from the 64-bit process.
This COM shim solution works well and the cross process calls are handled behind the scenes by COM, which minimises the overhead of this approach.
I would, however, like to keep all the new development that we undertake in C#, and I wondered if there are any frameworks that minimise the overhead of doing this. I have looked at IpcChannel but I feel that this approach is not as neat as the COM shim solution.
thanks,
Ed
I had the same problem and my solution was to use remoting. Basically the project consisted of:
A platform-independent CalculatorRemote.dll library with:
    a CalculatorNative internal static class with the x32 P/Invoke methods;
    a RemoteCalculator class derived from MarshalByRefObject which used the native methods from CalculatorNative;
The main platform-independent C# library (e.g. Calculator.dll), referencing CalculatorRemote.dll, with a Calculator class which privately used a singleton of the RemoteCalculator class to invoke the x32 functions where needed;
An x32 console application which hosted RemoteCalculator from CalculatorRemote.dll for consumption by Calculator.dll via IpcChannel.
So if the main application started in x64 mode, it spawned the RemoteCalculator host application and used a remoted RemoteCalculator instance. (When in x32 it just used a local instance of RemoteCalculator.) The tricky part was telling the calculator-host application to shut down.
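A minimal sketch of the wiring, assuming a hypothetical legacy32.dll and CalculatorNative.Compute wrapper (in the real project these pieces live in the separate assemblies described above):

    using System;
    using System.Runtime.InteropServices;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Ipc;

    // Hypothetical wrapper around the 32-bit-only native library.
    internal static class CalculatorNative
    {
        [DllImport("legacy32.dll")]   // placeholder name for the x32-only DLL
        internal static extern double Compute(double input);
    }

    // Lives in CalculatorRemote.dll; calls are marshalled across the IPC boundary.
    public class RemoteCalculator : MarshalByRefObject
    {
        public double Compute(double input)
        {
            return CalculatorNative.Compute(input);
        }
    }

    // The x32 console host: publishes RemoteCalculator over an IPC channel.
    class CalculatorHost
    {
        static void Main()
        {
            ChannelServices.RegisterChannel(new IpcChannel("CalculatorHost"), false);
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(RemoteCalculator), "calculator", WellKnownObjectMode.Singleton);

            Console.ReadLine();   // keep the host alive until the x64 side is done
        }
    }

    // On the x64 side (inside Calculator.dll) the proxy is obtained with:
    //   var calc = (RemoteCalculator)Activator.GetObject(
    //       typeof(RemoteCalculator), "ipc://CalculatorHost/calculator");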
I think this is better than using COM because:
You don't have to register COM classes anywhere;
Interoperating with COM should be slower than .NET remoting;
Sometimes, if something goes wrong on the COM side, you need to restart your application to recover from it (possibly I'm just not very familiar with COM);
When running in x32 mode there won't be any performance penalty with remoting -- all methods will be invoked in the same AppDomain.
Pretty much the only answer is out-of-process communication. You could create a 32-bit .NET executable that makes all of the needed 32-bit calls and communicate with it via Windows messages, WCF, named pipes, memory-mapped files (.NET 4.0), etc. I am pretty sure this is how Paint.NET does its WIA (Windows Image Acquisition) from a 64-bit process.
In the case of PDN, they simply pass the name of the file they expect as the output, but more complex communication isn't difficult. It could be a better way to go depending on what you're doing.
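A minimal named-pipe sketch of that file-name handshake, with hypothetical pipe and file names:

    using System;
    using System.IO;
    using System.IO.Pipes;

    // The 32-bit worker process: receives an input file name, does the
    // 32-bit-only work, and replies with the path of the result.
    class Worker32
    {
        static void Main()
        {
            using (var pipe = new NamedPipeServerStream("imaging32", PipeDirection.InOut))
            {
                pipe.WaitForConnection();
                using (var reader = new StreamReader(pipe))
                using (var writer = new StreamWriter(pipe) { AutoFlush = true })
                {
                    string inputFile = reader.ReadLine();      // request from the 64-bit side
                    string outputFile = inputFile + ".out";    // placeholder for the real 32-bit call
                    writer.WriteLine(outputFile);              // reply with the result path
                }
            }
        }
    }

    // The 64-bit side connects with new NamedPipeClientStream(".", "imaging32", PipeDirection.InOut),
    // writes the input file name and reads back the output file name.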