.NET 4.5 self-host application issues - C#

I have a Windows service written in C# using self-host technology that acts as a small server handling some web service requests.
This server needs to run continuously and currently has no policy for automatic restart or explicit memory cleanup. In some tests I noticed that after a few days (about a dozen) it becomes "unstable" (what I mean is that it starts failing accesses to the DB and the memory allocated by the process increases dramatically).
What I would like to ask is whether it is a known issue of this technology that it needs to be restarted or have its memory cleaned up after a certain time, or whether I need to investigate my own code more closely.
It smells like a memory leak, but that seems too strange because I only use managed objects.

Related

.NET 4.5 Memory Leak

I have a problem with an application that I wrote in .NET/C#. It consists of a server which manages a few other machines and runs tests on them. It is a Windows Forms application. In order to run tests with proper error handling, I have two threads for each machine: one for running tests and one that pings it continuously. Each machine has a running queue in which tasks are stored - tasks that will be run on that particular machine.
The issue is that after some time, when more than a few tasks are present in the queue, the memory the process consumes (Process Explorer, Task Manager) gradually increases from about 50-100 MB to 1.6-1.8 GB. At about this limit almost every transaction with the remote machines (file copy to a share, remote WMI access) fails with either "Not enough storage" or "Out of memory". I tried some tools in order to localize the leak, and the closest I got was .NET Memory Profiler. That wasn't of great help, because the largest amount of memory was residing in "Private Data - Unidentified". I'm guessing this is unmanaged data, because I can account for every other piece of data (from my program) down to each string and int, and every instance of it.
Can anyone suggest a tool I can use to properly localize the leak and maybe fix it? It would help me a lot to know which DLL (from my app) or thread uses that memory, or at least to be able to view somehow what is in that memory.
Note: a lot of posts are out there about the two exceptions, "Not enough storage" and "Out of memory". Most of them suggest increasing IRPStackSize on the 'server' machine (in my case, the clients). I have IRPStackSize set to 50 (0x32) on all of the machines, including the server.
EDIT
Regarding the comments: yes, I do maintain a log, but nothing strange shows up. Using a memory profiler I discovered that the .NET side of my application uses about 20 MB of memory while the unmanaged part is well over 1 GB. With the help of WinDbg I found out what resides in that extra memory (most of it, anyway). In order to access the machines and run different tests on them I use WMI, for which I have a wrapper. Everything I use is being disposed (using statements, and in some cases explicit calls to the Dispose method). Strangely though, the memory is filled with clones of this class. Does anyone know why a class would clone itself in memory?
Note: the rate at which the memory usage increases is about 5 MB/s, so it's not really over a long period of time. I also wonder why it is not being freed by the garbage collector. I am using C# classes to work with WMI, not COM or unmanaged code. Also, among the objects on the heap I see a lot of data belonging to wmiutils and CWbemError. Oddly enough, Google doesn't even know the word (no results for CWbemError).
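Since the question mentions a WMI wrapper that is disposed via using statements, here is a minimal sketch (not the poster's actual wrapper; the class and query below are placeholders) of what disposing all of the System.Management objects looks like. The searcher, the collection it returns, and each item in it hold COM references of their own, which is a common source of exactly the kind of unmanaged growth described above.

    using System;
    using System.Management;

    static class WmiQuery
    {
        public static void PrintProcesses(string machine)
        {
            var scope = new ManagementScope(@"\\" + machine + @"\root\cimv2");
            var query = new ObjectQuery("SELECT Name FROM Win32_Process");

            // The searcher, the collection it returns, and every item in it
            // each wrap an unmanaged WMI object - dispose all of them.
            using (var searcher = new ManagementObjectSearcher(scope, query))
            using (ManagementObjectCollection results = searcher.Get())
            {
                foreach (ManagementObject item in results)
                {
                    using (item)
                    {
                        Console.WriteLine(item["Name"]);
                    }
                }
            }
        }
    }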

WCF client not releasing reserved memory

I have created a WCF service hosted in an exe process, and instantiated the client through a class library which makes calls to the service. The class library is for a COM add-in for Excel 2007, and the reason for the WCF service is so we don't use up Excel's in-process memory when retrieving large amounts of data.
I've created the WCF client by deriving from ClientBase, using WSHttpBinding. I'm currently testing with a bare-bones project, and the only function is to return a message from the WCF service.
My question is regarding the memory usage of creating the WCF client and why it doesn't get released once the client has been disposed. I've used Address Space Monitor to watch the memory usage, and creating the binding and client uses around 70 MB of committed memory.
Any information on WCF memory usage or GC for COM DLLs would be useful.
Thanks
Here's a write-up:
http://www.danrigsby.com/blog/index.php/2008/02/26/dont-wrap-wcf-service-hosts-or-clients-in-a-using-statement/
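The gist of that article is to close or abort the channel explicitly rather than rely on a using block, because Dispose on a faulted channel throws and can leave the connection (and its memory) behind. A minimal sketch of that pattern; "MyServiceClient" and "GetMessage" are placeholder names for a generated proxy, not anything from the original question:

    var client = new MyServiceClient();
    try
    {
        string result = client.GetMessage();
        client.Close();              // normal shutdown; throws if the channel is faulted
    }
    catch (System.ServiceModel.CommunicationException)
    {
        client.Abort();              // Close() would throw again, so tear the channel down
    }
    catch (System.TimeoutException)
    {
        client.Abort();
    }
    catch
    {
        client.Abort();
        throw;
    }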
Also, below is a thread similar to yours that was posted a while ago. It was answered by Igor Zevaka. Hopefully it adds more knowledge.
this.Dispose() doesn't release memory used by Form after closing it.
That's the way garbage collection in .NET works. In all sorts of places it gives advantages, but in some it seems to be a hindrance. You may find - and I am stretching things a bit - that when you dispose of one form and create a new instance, it reuses that memory space. Although I doubt it.
Anyway ... garbage collection in .NET is kind of interesting, in my opinion.
It will get cleaned up eventually ... just in an indeterminate amount of time.
I believe there is a command to force garbage collection:
Best Practice for Forcing Garbage Collection in C#
Of course it's a bit like Fight Club - don't talk about it, and if you do find it, you'll probably wish you hadn't.
GC.Collect();
If I recall correctly.
Also, Dispose has an overload that takes a bool. When you pass true, it also goes through all its parts and disposes them. There are several dispose patterns that are easily searchable; Juval Lowy goes into them in great depth in his components book.
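For reference, the bool overload mentioned here is the protected Dispose(bool disposing) method of the standard dispose pattern, not a second public Dispose. A minimal sketch of that pattern, with an illustrative Stream field standing in for a real resource:

    using System;
    using System.IO;

    public class ResourceHolder : IDisposable
    {
        private Stream _stream = new MemoryStream();   // example managed resource
        private bool _disposed;

        public void Dispose()
        {
            Dispose(true);                  // release managed and unmanaged resources
            GC.SuppressFinalize(this);      // the finalizer no longer needs to run
        }

        protected virtual void Dispose(bool disposing)
        {
            if (_disposed) return;
            if (disposing && _stream != null)
            {
                _stream.Dispose();          // touch managed members only when disposing == true
                _stream = null;
            }
            // unmanaged resources (none in this sketch) would be released on both paths
            _disposed = true;
        }

        ~ResourceHolder()
        {
            Dispose(false);                 // finalizer path: skip managed members
        }
    }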

How do I sandbox calling an external unmanaged application from managed code?

We are developing an online test application for XSLT processors in ASP.NET; however, I'm a bit worried about how to limit the vulnerabilities of our system. Is it possible with .NET to sandbox a third-party unmanaged or managed application? It should:
not be allowed to start any other process by any means or vulnerability;
have no access to other existing processes on the system;
be killed when it takes too much processing power or memory;
work with both managed and unmanaged external applications;
not be able to access system calls.
Some applications have a managed API, but that doesn't suffice, because then I would need to run them in the same process space as ASP.NET, with all the potential risks (stack overflow, memory exceptions, buffer overflow). I'm not aware whether .NET offers sandboxing of unmanaged applications.
We currently execute the external program in a console with a specific processor affinity and monitor it, but that doesn't feel like a correct or even remotely safe approach.
You can execute managed code within an AppDomain, which can be configured to provide some level of protection; however, as soon as you allow unmanaged code to run, it pretty much has access to everything the user it is running under has access to.
I'm pretty sure you can prevent unmanaged/unsafe code from being executed within an AppDomain, though.
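As a rough sketch of the AppDomain approach for the managed case only (it does nothing for unmanaged code, as noted above), you can create a domain with a minimal grant set so that loaded code can execute but not touch files, the registry, or other processes. The paths and assembly name below are placeholders:

    using System;
    using System.Security;
    using System.Security.Permissions;

    static class Sandbox
    {
        public static void Run()
        {
            var setup = new AppDomainSetup { ApplicationBase = @"C:\sandbox" };   // placeholder path

            // Grant only the right to execute managed code - no file, registry or UI permissions.
            var grantSet = new PermissionSet(PermissionState.None);
            grantSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

            AppDomain domain = AppDomain.CreateDomain("Sandbox", null, setup, grantSet);
            try
            {
                // Code loaded into this domain runs with the restricted grant set.
                domain.ExecuteAssembly(@"C:\sandbox\ThirdPartyProcessor.exe");    // placeholder assembly
            }
            finally
            {
                AppDomain.Unload(domain);
            }
        }
    }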

Targetting memory leak in a .NET production service

I have a C#/.NET service running in production. The service functions as a TCP server to which clients register and make requests. Looking at Task Manager, it appears to be leaking about 10 MB/day. I don't seem to notice this in dev (perhaps because of far less traffic and client activity). In searching around I've read that Task Manager can be seriously wrong, but I'm not sure how accurate that claim is or in what circumstances Task Manager would display incorrect information.
To solve this problem I need to monitor memory consumption more closely. The problem is that the leak only seems to appear in production, where the deployed service was built for Release. Also, since it's a service, it can't be run directly by VS with an attached profiler/debugger, so I'm not sure how best to pinpoint the problem with something more precise than Task Manager.
Any group wisdom would be much appreciated, thanks.
EDIT:
I've added perfmon counters for the private bytes of the service (7 MB to start) as well as CLR memory in all heaps (30 MB to start).
Task Manager puts the total memory at ~37 MB, so this seems to make sense.
The first step is to let the service run for a day and check the counters again.
If my private bytes grow large but CLR memory stays roughly static, that would indicate an unmanaged leak. If both grow large, then it's a managed leak.
Thanks guys.
Your first task is figuring out whether the process is actually leaking memory. You can do this with perfmon, measuring Private Bytes:
http://www.goldstarsoftware.com/papers/CapturingVirtualBytesToALogFile.pdf
If the graph rises consistently (for, say, half an hour), you have a memory leak. You can then use other counters to figure out whether this is a .NET leak (.NET memory counters), though this is unlikely. I find that in most of these cases there is a COM component that is being invoked but not released.
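If you prefer to track the same numbers from inside the service (or a small watchdog), this sketch reads the two counters mentioned above via PerformanceCounter; the instance-name handling is simplified and assumes a unique process name:

    using System;
    using System.Diagnostics;

    static class MemoryCounters
    {
        public static void Report()
        {
            string instance = Process.GetCurrentProcess().ProcessName;

            using (var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance))
            using (var clrHeaps = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance))
            {
                // Private Bytes climbing while "# Bytes in all Heaps" stays flat points at
                // an unmanaged leak; both climbing together points at a managed leak.
                Console.WriteLine("Private Bytes:      {0:N0}", privateBytes.NextValue());
                Console.WriteLine("CLR bytes in heaps: {0:N0}", clrHeaps.NextValue());
            }
        }
    }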
If you truly have a memory leak (and this isn't just variable memory usage), the process will eventually shut down with an out-of-memory exception after running for a while.
You need one of the memory profilers below in order to monitor it:
http://www.jetbrains.com/profiler/
http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/
There are other choices, but these two are very capable, and you can profile a remote application's memory with them (at least JetBrains's solution handles that).
Follow this guide: http://blogs.msdn.com/b/tess/archive/2008/03/25/net-debugging-demos-lab-7-memory-leak.aspx
It goes over exactly what you're describing: a memory leak in production. As was mentioned, you first have to determine whether it's unmanaged or managed code that's leaking, using perfmon and Private Bytes.
In general, make sure you wrap networking objects in using statements so that they're properly disposed, as in the sketch below.
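Purely illustrative (the host name and port are placeholders), this is the shape of the using pattern meant here:

    using System;
    using System.IO;
    using System.Net.Sockets;

    static class Example
    {
        static void ReadOneLine()
        {
            // All three objects are disposed when the block exits, even if an exception is thrown.
            using (var client = new TcpClient("server01", 9000))
            using (NetworkStream stream = client.GetStream())
            using (var reader = new StreamReader(stream))
            {
                Console.WriteLine(reader.ReadLine());
            }
        }
    }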
A workflow I often use for managed memory leaks is to start the server on a test machine and hit it with a known number of connections (say 123,456 connections). Then take a memory snapshot: in Task Manager, right-click on the process name and select 'Create dump file'. Open this dump with WinDbg and SOS and run the command !dumpheap -stat. Look for objects whose instance count is a multiple of 123,456. Should these objects still be in memory? If not, run !gcroot on an instance of those objects to find out why it is still in memory.
Get a dump of the memory when it's in a leaking state: in Task Manager, right-click on the process and select 'Create dump file'. You can also use ProcDump, which gives you more options.
Use the SOS extension in either WinDbg or Visual Studio to inspect the memory.

Long time running applications

I'm going to design an application (C# or VB.NET) using the .NET Framework that will run for a very long time. It may be restarted every year or even less often...
Is there anything (special design patterns or the like) which I must take care of when designing long-running applications in .NET?
Is .NET even a good platform for this kind of application, or should I use another platform such as J2SE?
(It's not a web application.)
I would actually say that .NET is well suited to long-running applications. Managed code in general tends to do fairly well in this type of scenario, as a compacting GC helps prevent issues that can arise from memory fragmentation over time.
That being said, it's difficult to give much guidance, as there's very little information in the question itself. The "every year or more" run time is not enough information to say that a particular framework or language choice would help - any language can work, as the issues that arise in long-running applications tend to be design issues rather than framework/language/toolset issues.
I've written some .NET-based applications which run as services and stay running continuously for very long times, and I've never had any issues with the application (at least none related to the technology itself).
I'd worry less about keeping an app running and more about what happens when it inevitably stops - and make no mistake, it WILL stop.
There are many things that can go wrong: a crash, a server fault, a network failure, or someone simply stopping the app. The real work will be resuming the application's tasks after it restarts.
.NET's garbage collector is very good, so as long as you don't have any non-obvious memory leaks, you should be OK. "Non-obvious" includes failing to release event handlers when you're truly done with them, using lambda expressions as event handlers in other classes, and that sort of thing.
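As an illustration of that kind of non-obvious leak (the type names here are made up), a long-lived publisher keeps every subscriber alive through the event delegate until the handler is removed:

    using System;

    public class Publisher
    {
        public event EventHandler DataReceived;
    }

    public class Subscriber : IDisposable
    {
        private readonly Publisher _publisher;
        private readonly EventHandler _handler;

        public Subscriber(Publisher publisher)
        {
            _publisher = publisher;
            // Keep the lambda in a field: a lambda subscribed inline cannot be removed later,
            // and the publisher's event would keep this object reachable forever.
            _handler = (sender, args) => Console.WriteLine("data arrived");
            _publisher.DataReceived += _handler;
        }

        public void Dispose()
        {
            // Without this line, the Subscriber never becomes eligible for collection
            // while the Publisher is alive.
            _publisher.DataReceived -= _handler;
        }
    }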
Be sure that you're catching and logging all unhandled exceptions. If it does die, you'll want to know why.
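A minimal sketch of hooking and logging unhandled exceptions at the AppDomain level (the log path is a placeholder):

    using System;
    using System.IO;

    static class CrashLogging
    {
        public static void Install()
        {
            AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
            {
                // args.ExceptionObject carries the exception that is about to kill the process.
                File.AppendAllText(@"C:\logs\crash.log",
                    DateTime.Now + " " + args.ExceptionObject + Environment.NewLine);
            };
        }
    }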
Also, take a look at the application restart support in Windows 7. It can restart your app if it does fail. Although it's written for unmanaged code, it's accessible from .NET via the Windows 7 API Code Pack.
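If you'd rather not take the Code Pack dependency, the underlying native call can be reached directly with a P/Invoke. This is a sketch of that alternative (the restart command-line argument is a placeholder), not the Code Pack's own API:

    using System.Runtime.InteropServices;

    static class RestartSupport
    {
        // Native Windows Error Reporting call; flags = 0 requests a restart
        // after a crash, hang, patch, or reboot.
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
        private static extern uint RegisterApplicationRestart(string commandLine, uint flags);

        public static void Register()
        {
            RegisterApplicationRestart("/restarted", 0);
        }
    }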
