I created a multithreaded service to perform image processing. Everything worked fine until one of our clients installed the product on a 16-processor server with lots of memory. Now the service throws lots of out-of-memory errors, which is understandable because a 32-bit process can only use around 1.5 GB of memory in practice, regardless of how much is installed.
What is the accepted solution for this situation? Should this service instead spawn off a separate worker process? Should I have one worker process per CPU talking via named pipes to the main service?
EDIT: We are running on a 64-bit server, but we can't target x64 because of limitations in the imaging libraries.
Thank you
There are multiple solutions for this. These are some of the options:
1. Link your .exe with the /LARGEADDRESSAWARE option. That gives your app up to 3 GB of address space on 32-bit Windows (with the /3GB boot option) and 4 GB on 64-bit Windows, with no other changes required.
2. Ask the vendor who provided the 32-bit binaries for a 64-bit version.
3. Move your 32-bit dependencies out of process (e.g. communicating via COM or WCF) and change your EXE architecture to 64-bit.
4. Spawn a new process for each processing action rather than a thread (see the sketch below).
5. Convert your code to use Address Windowing Extensions.
Options #1 and #2 are the easiest to implement; #5 is the most difficult.
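For option #4, here is a rough sketch of what the hand-off to a 32-bit worker process could look like. ImageWorker.exe and its command-line convention are made-up names for this sketch; the real worker would be your imaging code compiled as x86.

    using System;
    using System.Diagnostics;

    class WorkerLauncher
    {
        // Hands one image to a separate 32-bit worker process, so each job
        // gets its own 2-4 GB address space instead of sharing the service's.
        public static void ProcessImage(string inputPath, string outputPath)
        {
            var startInfo = new ProcessStartInfo
            {
                FileName = "ImageWorker.exe",   // hypothetical x86 worker executable
                Arguments = "\"" + inputPath + "\" \"" + outputPath + "\"",
                UseShellExecute = false,
                CreateNoWindow = true
            };

            using (var worker = Process.Start(startInfo))
            {
                worker.WaitForExit();
                if (worker.ExitCode != 0)
                    throw new InvalidOperationException(
                        "Worker failed with exit code " + worker.ExitCode);
            }
        }
    }

Since the service is already multithreaded, calling this from a handful of threads (for example one per CPU) gives you one worker per core without any extra plumbing.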
EDIT
I noticed the C# tag on your question. For managed apps you can still set the Large Address Aware flag using the EditBin.exe tool (e.g. editbin /LARGEADDRESSAWARE YourApp.exe, run from a Visual Studio command prompt or as a post-build step).
We have a self-hosted WCF application which analyses text. The installation we're discussing involves processing batches of small fragments of text (social media posts) and longer ones (like newspaper articles). The longer fragments take 5-6 seconds on average to process in one WCF instance, while the shorter ones take under 1 second. There are millions of items of each kind to be processed every day.
Several questions:
What is the recommended configuration? Windows Azure / any kind of IaaS like Amazon / cluster managed by a load balancer?
Is there built-in support for load balancing in WCF that does not require writing a wrapper?
For some reason, when a long task is running and another task is submitted to an instance deployed on a multicore machine, they both run in parallel on the same core instead of the new one starting on another core which is free. Is this some kind of conservative allocation? Can it be managed more efficiently?
The easy answer is Azure (because it's a PaaS by Microsoft), but that isn't really a technical question; it depends on costs and growth predictions.
Not really. WCF supports being load balanced, but WCF itself runs in your process and can't load balance itself. That's usually a feature of your hosting platform.
If those are two different processes, then the OS schedules the CPU time, and I wouldn't recommend messing with that. If both ran on the same core it's probably because they can (which makes sense, as WCF does a lot of IO).
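On the third point: if the two requests are actually being serialized inside the same service instance rather than by the OS scheduler, the service's concurrency and throttling settings are worth checking first. Here is a minimal self-hosted sketch, with a made-up ITextAnalyzer contract and endpoint address, that lets long and short calls run side by side:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    [ServiceContract]
    public interface ITextAnalyzer
    {
        [OperationContract]
        string Analyze(string text);
    }

    // PerCall instancing plus Multiple concurrency lets WCF dispatch requests
    // on separate threads, which the OS can then schedule onto separate cores.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class TextAnalyzer : ITextAnalyzer
    {
        public string Analyze(string text)
        {
            // CPU-bound analysis would go here.
            return text;
        }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(TextAnalyzer),
                                       new Uri("net.tcp://localhost:9000/analyzer"));
            host.AddServiceEndpoint(typeof(ITextAnalyzer), new NetTcpBinding(), "");

            // Make the throttle explicit so several long and short requests can
            // be in flight at once; tune the numbers for your workload.
            host.Description.Behaviors.Add(new ServiceThrottlingBehavior
            {
                MaxConcurrentCalls = Environment.ProcessorCount * 16,
                MaxConcurrentInstances = Environment.ProcessorCount * 16
            });

            host.Open();
            Console.WriteLine("Listening. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }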
I was working on a Windows 8 app in C#/XAML, using the MVVM pattern, a SQLite DB, multiple language support, etc. I did not pay much attention to how much memory the app used when it ran; it ran reasonably fast. At a certain point I felt that the app ran significantly slower. I was also going through the submission process -- creating the upload packages (for x64, x86, ARM) and running the certification test. The app passed the certification test, but I was a little concerned about the speed, so I checked its memory usage and found that it was using several hundred MB of memory, and at some points the memory went up to 1 GB (based on the numbers reported in Task Manager). So I did some debugging and found that before it even reaches the second line of code in App.xaml.cs, it is already using around 150 MB. I loaded the project onto a different machine and ran it there: the memory usage was usually less than 100 MB and the speed was what I experienced before the slowdown, so that looks normal to me.
So do any of you have similar experience? Do you have any idea how to make the app work normally on my original machine? My impression is that it has nothing to do with what I have in the code; it could be related to some setting I had for the project on my original machine, but I don't know which setting. I tried restarting the machine and that did not solve the problem.
After the app became slower, it also crashed more often. In the Event Viewer I saw messages mentioning vrfcore.dll. I did some searching on that and saw that it is related to Application Verifier, and I do remember trying to run Application Verifier before. I also tried the Debug Location toolbar and tried simulating suspension. But the memory is high even when I am not aware of doing any of those things. This problem seems to affect only my app on my machine, not all apps.
I am using a native COM server from a .NET C# application.
Everything works fine except that callbacks from the COM server into the .NET application gradually become slower. The server and the .NET application always run on the same machine.
Calling from .NET to the COM server is always fast.
The strange thing is that it doesn't happen on all computers even if they run the same binaries.
I have spent a lot of time on this issue and have compared environments where the callbacks are fast with ones where they are slow, without finding anything special.
The callbacks start fast but get exponentially slower over time.
It doesn't matter whether the callbacks are assigned to .NET methods or not.
(There is a server switch to turn all callbacks off; that is how I know the problem is with the callbacks.)
The slow computers run Windows 7 64-bit, but the same configuration is fast on other computers.
There are slow and fast computers on the same domain and network.
It doesn't matter whether the user is a local admin or a standard user.
I have monitored disk/network activity, but there is no difference between slow and fast machines.
There is no noticeable difference in memory consumption.
I have looked at CLR memory with WinDbg but found nothing strange.
Some things I have noted:
The server process uses 100% CPU when the callbacks are slow.
Looking at the call stack with Process Explorer, the server spends most of its time in one of the RPC Ndr* functions, e.g. NdrClientCall2.
I am now out of ideas and need some help to solve this.
Short question: is it possible (on an x64 OS, of course)? If not, why exactly?
I have developed a C# plugin DLL for 32-bit Excel.
When compiled in x86 it works fine.
When compiled in x64 the COM call fails.
Do I need a 64-bit version of Excel?
I thought COM was agnostic of the compilation architecture and made communication possible between DLLs developed in different technologies and with different architectures, but I guess the latter is wrong.
I guess an x64 DLL obviously cannot be called via COM (or anything else) from a 32-bit app.
COM supports two kinds of servers, in-process and out-of-process. Office extensions are in-process components: a DLL that gets loaded into the host process. A hard rule for 32-bit processes is that they cannot load 64-bit DLLs, and the other way around. This is enforced by the registry itself: a 32-bit process cannot directly access the registration information for 64-bit COM servers; it is redirected to the HKLM\Software\Wow6432Node keys. In other words, it cannot even see components of the wrong bitness.
Out-of-process components don't have that restriction, they run in their own process. COM marshals the calls between the two processes using RPC and papers over the bitness difference. This is also a way to get an in-process 64-bit server to work with a 32-bit host: you can run the component in a surrogate process. This is tricky to get going and almost never worth the hassle; out-of-process calls are much more expensive than in-process calls due to the required marshaling and context switching. Not just a little more expensive either, it is about 10,000 times slower, mostly because an in-process function call is so very fast. It is only ever used to keep a legacy 32-bit server working with a 64-bit program. Look at COM+ hosting if you want to try this, I don't know much about it.
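You can see that registry split for yourself by looking the same server up in both registry views. A small sketch; the CLSID below is just a placeholder for whatever your add-in registers:

    using System;
    using Microsoft.Win32;

    class Program
    {
        static void Main()
        {
            // Placeholder CLSID -- substitute the one your add-in registers.
            const string subKey =
                @"CLSID\{00000000-0000-0000-0000-000000000000}\InprocServer32";

            // The 32-bit view is what a 32-bit process (like 32-bit Excel) sees;
            // the 64-bit view is what a 64-bit process sees.
            foreach (var view in new[] { RegistryView.Registry32, RegistryView.Registry64 })
            {
                using (var root = RegistryKey.OpenBaseKey(RegistryHive.ClassesRoot, view))
                using (var key = root.OpenSubKey(subKey))
                {
                    Console.WriteLine("{0}: {1}", view,
                        key == null ? "<not registered>" : key.GetValue(null));
                }
            }
        }
    }

If your x64 build only shows up in the 64-bit view, 32-bit Excel simply never sees it, which is exactly the behaviour described above.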
The frequency with which I come across situations where I have to call native 32-bit code from a managed 64-bit process is increasing as 64-bit machines and applications become prevalent. I don't want to mark my application as 32-bit, and I cannot obtain 64-bit versions of the code that is being called.
The solution that I currently use is to create C++ COM shims that are loaded out of process to make the 32-bit calls from the 64-bit process.
This COM shim solution works well and the cross process calls are handled behind the scenes by COM, which minimises the overhead of this approach.
I would however like to keep all the new development that we undertake in C#, and I wondered if there are any frameworks that minimise the overhead of doing this. I have looked at IpcChannel, but I feel that this approach is not as neat as the COM shim solution.
thanks,
Ed
I had the same problem and my solution was to use remoting. Basically the project consisted of:
A platform-independent CalculatorRemote.dll library with:
  a CalculatorNative internal static class with the x32 P/Invoke methods;
  a RemoteCalculator class derived from MarshalByRefObject, which used the native methods from CalculatorNative;
The main platform-independent C# library (e.g. Calculator.dll), referencing CalculatorRemote.dll, with a Calculator class which privately used a singleton of the RemoteCalculator class to invoke the x32 functions where needed;
An x32 console application which hosted RemoteCalculator from CalculatorRemote.dll for consumption by Calculator.dll via IpcChannel.
So if the main application started in x64 mode, it spawned the RemoteCalculator host application and used the remoted RemoteCalculator instance. (When in x32 it just used a local instance of RemoteCalculator.) The tricky part was telling the calculator-host application to shut down.
I think this is better than using COM because:
You don't have to register COM classes anywhere;
Interoperating with COM should be slower than .NET remoting;
Sometimes, if something goes wrong on the COM side, you need to restart your application to recover from it (possibly I'm just not very familiar with COM);
When running in x32 mode there won't be any performance penalty with remoting -- all methods will be invoked in the same AppDomain.
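A rough sketch of the remoting wiring described above. The channel and object names are made up, the RemoteCalculator stub stands in for the real class from CalculatorRemote.dll, and the real project also needed logic to shut the host down:

    using System;
    using System.Runtime.Remoting;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Ipc;

    // Stand-in for the RemoteCalculator described above (the real one wraps
    // the x32 P/Invoke methods in CalculatorNative).
    public class RemoteCalculator : MarshalByRefObject
    {
        public int Add(int a, int b) { return a + b; }
    }

    // x32 console host (e.g. CalculatorHost.exe): exposes RemoteCalculator over IPC.
    class HostProgram
    {
        static void Main()
        {
            // "CalculatorHost" is an arbitrary IPC port name.
            ChannelServices.RegisterChannel(new IpcServerChannel("CalculatorHost"), false);
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(RemoteCalculator), "Calculator", WellKnownObjectMode.Singleton);

            Console.WriteLine("Calculator host running. Press Enter to exit.");
            Console.ReadLine();
        }
    }

    // Client side, inside Calculator.dll when running as x64:
    //
    //     ChannelServices.RegisterChannel(new IpcClientChannel(), false);
    //     var calc = (RemoteCalculator)Activator.GetObject(
    //         typeof(RemoteCalculator), "ipc://CalculatorHost/Calculator");
    //
    // Calls on calc are then marshalled across to the x32 host process.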
Pretty much the only answer is out-of-process communication. You could create a .NET project that is a 32-bit executable, have it make all of the 32-bit calls needed, and communicate with it via Windows messages, WCF, named pipes, memory-mapped files (.NET 4.0), etc. I am pretty sure this is how Paint.NET does its WIA (Windows Image Acquisition) work from a 64-bit process.
In the case of PDN, they simply pass the name of the file they expect as the output, but more complex communication isn't difficult. Depending on what you're doing, it could be a better way to go.
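If all you need back from the 32-bit helper is a file name, a named pipe is enough. A minimal sketch of the 64-bit side; the pipe name and ScanHelper32.exe are made up for illustration:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.IO.Pipes;

    class Program
    {
        static void Main()
        {
            // 64-bit side: create the pipe, start the 32-bit helper, and wait
            // for it to report the name of the file it produced.
            using (var pipe = new NamedPipeServerStream("ScanResultPipe", PipeDirection.In))
            {
                Process.Start("ScanHelper32.exe", "ScanResultPipe"); // hypothetical x86 helper
                pipe.WaitForConnection();

                using (var reader = new StreamReader(pipe))
                {
                    string resultFile = reader.ReadLine();
                    Console.WriteLine("Helper produced: " + resultFile);
                }
            }
        }
    }

    // The 32-bit helper would open a NamedPipeClientStream("ScanResultPipe"),
    // call Connect(), and write the output file name with a StreamWriter.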