C# ServerManager leaking memory on Windows 7

I have been trying to diagnose a memory leak in a service which only appears on Windows 7/Server 2008 R2. I narrowed it down to where we are using Microsoft.Web.Administration.ServerManager to gather info about the apps in our site. I whittled it down to the console app below, which exhibits the same behavior. It might still be more complex than it needs to be, but I wanted to emulate the behavior of the service as much as possible.
I found a previous question here that was very similar and made the changes suggested in its answers. This appeared to reduce the rate of growth, but it still leaks significantly. (Under the "Original Test" comments I have commented out the code I changed based on those answers; the "Modified Test" comments mark the changes I made.) I didn't initially have the GC.Collect call, and when I ran this on a Windows 10 system it grew for quite some time before garbage collection kicked in. With the GC.Collect call in place it ran without growing on Windows 10, but on Windows 7 it made no difference.
I ran it under a profiler that indicated the memory being leaked was native, and that the leak was coming from nativerd.dll.
Has anyone encountered a problem like this? I'm new to C# and am still learning how garbage collection works, so I'm wondering whether there is something I'm doing wrong.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Web.Administration;

namespace ServerManagerLeakTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.Write("Working.");
            var me = new MyClass();
            me.Run();
        }
    }

    internal class MyClass
    {
        ServerManagerWrapper _smw = new ServerManagerWrapper();

        public void Run()
        {
            while (true)
            {
                var t = Task.Run(async delegate
                {
                    DoWork();
                    await Task.Delay(1000);
                });
                try
                {
                    t.Wait();
                }
                catch (Exception e)
                {
                    Console.Write("Main Exception: " + e.Message);
                }
                Console.Write(".");
            }
        }

        public void DoWork()
        {
            try
            {
                var data = _smw.GetWebApps().ToList();
                data.Clear();
            }
            catch (Exception e)
            {
                Console.Write("DoWork Exception: " + e.Message);
            }
        }
    }

    internal class ServerManagerWrapper
    {
        public List<int> GetWebApps()
        {
            List<int> result = new List<int>() { };

            // Original Test
            //
            // using (var serverManager = new ServerManager())
            // {
            //     foreach (var site in serverManager.Sites)
            //     {
            //         result.AddRange(GetWebApps(site));
            //     }
            // }

            // Modified Test
            var serverManager = new ServerManager();
            foreach (var site in serverManager.Sites)
            {
                result.AddRange(GetWebApps(site));
            }
            serverManager.Dispose();
            serverManager = null;
            System.GC.Collect();
            return result;
        }

        private IEnumerable<int> GetWebApps(Site site)
        {
            // Original Test
            //
            // foreach (var application in site.Applications)
            // {
            //     yield return application.GetHashCode();
            // }

            // Modified Test
            List<int> result = new List<int>() { };
            for (int i = 0; i < site.Applications.Count; i++)
            {
                result.Add(site.Applications[i].GetHashCode());
            }
            return result;
        }
    }
}
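For anyone reproducing this, a quick way to confirm that the growth is native rather than managed (a diagnostic sketch, not part of the original repro) is to compare the GC heap size with the process's private bytes after each iteration:

using System;
using System.Diagnostics;

internal static class MemoryProbe
{
    // Prints the managed heap size next to the process's private bytes; if only
    // the latter keeps climbing, the leak is in native memory, as the profiler suggested.
    public static void Report(string label)
    {
        long managed = GC.GetTotalMemory(forceFullCollection: true);
        long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;
        Console.WriteLine("{0}: managed={1:N0} bytes, private={2:N0} bytes",
            label, managed, privateBytes);
    }
}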

The answer was provided in comments by @Lex Li:
Move the check to a separate process: call the IIS REST API, PowerShell, or even appcmd, and parse the result. Let the leak live outside your own service.
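A minimal sketch of that approach, assuming appcmd.exe sits in the default inetsrv location and that parsing its plain-text output is acceptable (the helper name and parsing are illustrative, not part of the answer):

using System;
using System.Diagnostics;
using System.IO;

internal static class AppCmdProbe
{
    // Runs "appcmd list app" in a short-lived child process, so any native
    // memory it allocates is released when that process exits.
    public static string[] ListApplications()
    {
        var appCmd = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.System),
            @"inetsrv\appcmd.exe"); // assumed default IIS install path

        var psi = new ProcessStartInfo(appCmd, "list app")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (var process = Process.Start(psi))
        {
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            // Each line looks roughly like: APP "Default Web Site/" (applicationPool:DefaultAppPool)
            return output.Split(new[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
        }
    }
}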

Related

Error compiling C# dynamically; after rebooting the machine it works OK

I created the following method (it is only a test method, to isolate the problem from a very large system):
private void CompileHalloWorld()
{
    System.Threading.Thread.Sleep(30000);
    if (!Directory.Exists(_workingDir))
    {
        Directory.CreateDirectory(_workingDir);
    }
    Directory.SetCurrentDirectory(_workingDir);

    var csc = new CSharpCodeProvider(new Dictionary<string, string>() { { "CompilerVersion", "v4.0" } });
    var parameters = new CompilerParameters(new[] { "mscorlib.dll", "System.Core.dll" }, "foo.exe", true);
    parameters.GenerateExecutable = true;

    CompilerResults results = null;
    try
    {
        results = csc.CompileAssemblyFromSource(parameters,
            @"using System;
            class Program {
                public static void Main(string[] args) {
                    Console.WriteLine(""Hallo World!"");
                }
            }");
    }
    catch (Exception e)
    {
        int a = 2;
    }
    results.Errors.Cast<CompilerError>().ToList().ForEach(error => Console.WriteLine(error.ErrorText));
}
On 99% of machines this method works fine, but on one machine there is a problem. After the server has been running for a while (a few days), something in Windows seems to break. At that point, when I run CompileHalloWorld() from an exe app everything works, but when I run the same method from a simple empty service, the call to CompileAssemblyFromSource puts no errors in the results collection, yet csc.exe returns exit code -1073741502... After rebooting the server everything works again, but I can't restart the server every day...
I tried to find a solution on SO. I checked Task Manager: no csc.exe process is hung, Visual Studio is not running, and no VBCSCompiler.exe is hanging...
Please help me.
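One way to get more detail than the empty Errors collection is to dump the compiler's raw console output and exit code from CompilerResults (a diagnostic sketch only, assuming the same csc and parameters objects as above; source stands for the test program string):

// Diagnostic only: log what csc.exe actually reported.
CompilerResults results = csc.CompileAssemblyFromSource(parameters, source);

Console.WriteLine("csc exit code: " + results.NativeCompilerReturnValue);
foreach (string line in results.Output)        // raw compiler console output
{
    Console.WriteLine(line);
}
foreach (CompilerError error in results.Errors)
{
    Console.WriteLine(error.ErrorText);
}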

Cannot get ReceiveReady pub-sub to work using NetMQ 4.x

I created two simple C# console projects (.NET 4.5.2), added the v4.0.0.1 NetMQ NuGet package to each, loaded each program into a separate Visual Studio 2017 Community instance, put a breakpoint on the one line inside the OnReceiveReady callback method, started the subscriber program first, then started the publisher program. The ReceiveReady event is not being triggered in the subscriber. What am I doing wrong? Even if I choose subSocket.Subscribe("") I still don't get any messages. Also, removing/modifying the Send/Receive high watermarks didn't change things either. Thanks for your help!
Here's the Publisher code:
using System;
using NetMQ;
using NetMQ.Sockets;
using System.Threading;

namespace SampleNQPub
{
    class Program
    {
        static void Main(string[] args)
        {
            var addr = "tcp://127.0.0.1:3004";
            using (var pubSocket = new PublisherSocket())
            {
                Console.WriteLine("Publisher socket binding.");
                pubSocket.Options.SendHighWatermark = 10;
                pubSocket.Bind(addr);
                for (int i = 0; i < 30; i++)
                {
                    pubSocket.SendMoreFrame("NQ").SendFrame(i.ToString());
                    Thread.Sleep(1000);
                }
                pubSocket.Disconnect(addr);
            }
        }
    }
}
Here's the Subscriber code:
using System.Threading;
using NetMQ;
using NetMQ.Sockets;

namespace SampleNQSub
{
    class Program
    {
        static void Main(string[] args)
        {
            var addr = "tcp://127.0.0.1:3004";
            using (var subSocket = new SubscriberSocket())
            {
                subSocket.ReceiveReady += OnReceiveReady;
                subSocket.Options.ReceiveHighWatermark = 10;
                subSocket.Connect(addr);
                subSocket.Subscribe("NQ");
                for (int i = 0; i < 20; i++)
                {
                    Thread.Sleep(1000);
                }
                subSocket.Disconnect(addr);
            }
        }

        static void OnReceiveReady(object sender, NetMQSocketEventArgs e)
        {
            var str = e.Socket.ReceiveFrameString();
        }
    }
}
OK, this is a gotcha in the NetMQ world and I just figured it out. You MUST set up a NetMQPoller, which is what ends up calling every ReceiveReady callback you have added to it.
Here is the corrected code, which at least gets the ReceiveReady event triggered (ReceiveFrameString still only gets the "NQ" part, but that is just another method call to fix):
using System.Threading;
using System.Threading.Tasks;
using NetMQ;
using NetMQ.Sockets;

namespace SampleNQSub
{
    class Program
    {
        static void Main(string[] args)
        {
            var addr = "tcp://127.0.0.1:3004";
            NetMQPoller poller = new NetMQPoller();
            using (var subSocket = new SubscriberSocket())
            {
                subSocket.ReceiveReady += OnReceiveReady;
                subSocket.Options.ReceiveHighWatermark = 10;
                subSocket.Connect(addr);
                subSocket.Subscribe("NQ");
                poller.Add(subSocket);
                poller.RunAsync();
                for (int i = 0; i < 20; i++)
                {
                    Thread.Sleep(1000);
                }
                subSocket.Disconnect(addr);
            }
        }

        static void OnReceiveReady(object sender, NetMQSocketEventArgs e)
        {
            var str = e.Socket.ReceiveFrameString();
            e.Socket.ReceiveMultipartStrings();
        }
    }
}
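For the remaining frame issue, a minimal sketch of a callback that reads the whole multipart message (topic frame plus payload), assuming NetMQ 4.x's ReceiveMultipartStrings extension behaves as shown above:

static void OnReceiveReady(object sender, NetMQSocketEventArgs e)
{
    // Reads every frame of the message: [0] = the "NQ" topic, [1] = the counter value.
    var frames = e.Socket.ReceiveMultipartStrings();
    Console.WriteLine("topic={0} payload={1}", frames[0], frames[1]);
}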
I noticed that the authors of NetMQ decided in 4.x to take care of the Context object internally so the user wouldn't have to bear the burden of managing it. It would also be nice if they could hide this "polling pump" code from the user for the simplest use case.
As a comparison, take a look at the subscriber using NodeJS (with the zmq library) utilizing the Publisher console app I posted above (save this code to sub.js and, in a Windows console, type 'node sub.js'):
var zmq = require('zmq'), sock = zmq.socket('sub');

sock.connect('tcp://127.0.0.1:3004');
sock.subscribe('NQ');
console.log('Subscriber connected to port 3004');

sock.on('message', function () {
    var msg = [];
    Array.prototype.slice.call(arguments).forEach(function (arg) {
        msg.push(arg.toString());
    });
    console.log(msg);
});
So where's the poller pump mechanism in this? (Answer: I don't care! I just want the messages supplied to me in a callback that I register. [Obviously, tongue-in-cheek. I get that a NetMQPoller is versatile and handles more complex issues, but for basic "give me a message in a callback when it arrives", it would be nice if it were handled internally by the library.])

Bing geocoder sometimes return the correct number of records and sometimes none

I'm using the Bing Maps API in my WPF application to pin a list of addresses in my WPF UI, but sometimes this block of code returns no results, and sometimes it returns the correct number of records.
How do I fix this issue? Why are the Bing Maps responses sometimes inconsistent?
using Microsoft.Maps.MapControl.WPF;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Windows;
using Ravager.BingMapsService;
using Ravager.Shell.UI;

namespace Ravager.MappingPlugins.MapViewModel
{
    public class GeoCodeResultByAddress
    {
        public async static Task<List<GeocodeResult>> GeocodeAddress(List<string> addres)
        {
            List<GeocodeResult> GeocodeResult = new List<GeocodeResult>();
            foreach (var address in addres)
            {
                try
                {
                    GeocodeResult res = null;
                    if (!(string.IsNullOrEmpty(address) && string.IsNullOrWhiteSpace(address)))
                    {
                        int Counts = 0;
                    again:
                        using (GeocodeServiceClient client = new GeocodeServiceClient("CustomBinding_IGeocodeService"))
                        {
                            GeocodeRequest request = new GeocodeRequest();
                            request.Credentials = new Credentials() { ApplicationId = (App.Current.Resources["MyCredentials"] as ApplicationIdCredentialsProvider).ApplicationId };
                            request.Query = address;
                            GeocodeResponse respons = await Task.Run(() => client.Geocode(request));
                            res = respons.Results.Count > 0 ? respons.Results[0] : null;
                            if (res != null)
                            {
                                GeocodeResult.Add(res);
                            }
                            else
                            {
                                if (Counts < 3)
                                {
                                    Counts++;
                                    goto again;
                                }
                                MessageBox.Show("Unable to Generate GeocodeAddress for " + address);
                            }
                        }
                    }
                    else
                    {
                        MessageBox.Show("Unable to Generate GeocodeAddress");
                    }
                }
                catch (Exception ex)
                {
                }
            }
            return GeocodeResult;
        }
    }
}
This has been answered several times before. You are more than likely being rate limited. This happens with trial and basic Bing Maps keys when your account is consuming transactions at a rate that would exceed the free terms of use, or when the service is under heavy load from others using Bing Maps under the free terms of use. The only way to reduce the chance of being rate limited is to upgrade to an Enterprise key.
When a request is rate limited, a flag is added to the response headers to indicate this. It is documented here: https://msdn.microsoft.com/en-us/library/ff701703.aspx
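If you stay on a free key, a retry with a short exponential backoff (in place of the goto loop above) gives a rate-limited request a chance to succeed on a later attempt. A minimal sketch, assuming the same GeocodeServiceClient and request setup as in the question (the helper name and attempt count are illustrative):

private static async Task<GeocodeResult> GeocodeWithRetry(GeocodeRequest request, int maxAttempts = 3)
{
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        using (var client = new GeocodeServiceClient("CustomBinding_IGeocodeService"))
        {
            GeocodeResponse response = await Task.Run(() => client.Geocode(request));
            if (response.Results.Count > 0)
            {
                return response.Results[0];
            }
        }
        // Back off before retrying: 1s, 2s, 4s ... to ride out transient rate limiting.
        await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
    }
    return null; // still empty after all attempts
}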

C# all pipe instances are busy

The following code creates a new thread that acts first as a named-pipe client to send parameters and then as a server to retrieve results. After that it executes a function in another AppDomain that acts first as a named-pipe server and then as a client to send the results back.
public OrderPrice DoAction()
{
    Task<OrderPrice> t = Task<OrderPrice>.Factory.StartNew(NamedPipeClient, parameters);
    if (domain == null)
    {
        domain = AppDomain.CreateDomain(DOMAINNAME);
    }
    domain.DoCallBack(AppDomainCallback);
    return t.Result;
}

static OrderPrice NamedPipeClient(object parameters)
{
    OrderPrice price = null;
    using (NamedPipeClientStream stream = new NamedPipeClientStream(PIPE_TO))
    {
        stream.Connect();
        SerializeToStream(stream, parameters);
    }
    using (NamedPipeServerStream stream = new NamedPipeServerStream(PIPE_BACK))
    {
        stream.WaitForConnection();
        price = (OrderPrice)DeserializeFromStream(stream);
    }
    return price;
}

void AppDomainCallback()
{
    OrderPrice price = null;
    using (NamedPipeServerStream stream = new NamedPipeServerStream(PIPE_TO))
    {
        stream.WaitForConnection();
        List<object> parameters = (List<object>)DeserializeFromStream(stream);
        if (mi != null)
            price = (OrderPrice)mi.Invoke(action, parameters.ToArray());
    }
    using (NamedPipeClientStream stream = new NamedPipeClientStream(PIPE_BACK))
    {
        stream.Connect();
        SerializeToStream(stream, price);
    }
}
The code is called about once per second and it worked fine for 7+ hours, but at some point "System.IO.IOException: All pipe instances are busy" is thrown and the pipes won't reconnect after that. Browsing here, it seems this could be caused by not properly disposing the pipe objects, but I think that's covered since they are all inside using statements.
Does anyone have any clue what could be wrong here? The code is in .NET 4.0 running on Windows Server 2008.
Sounds like it should be a mutex instead of a simple lock:
Lock, mutex, semaphore... what's the difference?
As for the occasional halting, it could be starvation or a deadlock.
This is good reading material for an abstract view of what may be happening:
http://en.wikipedia.org/wiki/Dining_philosophers_problem
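A minimal sketch of that suggestion, assuming a system-wide named Mutex is used to serialize access to the pipe pair (the mutex name and wrapper method are illustrative, not the poster's code):

using System.Threading;

private static readonly Mutex PipeMutex = new Mutex(false, @"Global\OrderPricePipeMutex");

static OrderPrice NamedPipeClientSerialized(object parameters)
{
    PipeMutex.WaitOne();              // only one caller may use PIPE_TO/PIPE_BACK at a time
    try
    {
        return NamedPipeClient(parameters);   // the existing method from the question
    }
    finally
    {
        PipeMutex.ReleaseMutex();     // always release, even if the pipe throws
    }
}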

HttpWebRequest Limitations? Or bad implementation

I am trying to build a C# console app that will monitor about 3000 URLs (I just need to know that a HEAD request returns 200, not the content, etc.).
My attempt was to build a routine that checks the web URLs, looping and creating threads that each execute the routine. What happens is that with <20 threads it executes OK most of the time, but with >20 threads some of the URLs time out. I tried increasing the timeout to 30 seconds; the same thing occurs. The network I am running this on is more than capable of executing 50 HTTP HEAD requests (10 Mbit connection at the ISP), and both CPU and network usage stay very low while the routine runs.
When a timeout occurs, I test the same IP in a browser and it works fine. I tested this repeatedly, and there was never a case during testing where a "timed out" URL was actually timing out.
The reason I want to run >20 threads is that I want to perform this test every 5 minutes; with some of the URLs taking a full 10 seconds (or more if the timeout is set higher), I want to make sure it can run through all the URLs within 2-3 minutes.
Is there a better way to check whether a URL is available, or should I be looking at the system/network for an issue?
MAIN
while (rdr.Read())
{
    Thread t = new Thread(new ParameterizedThreadStart(check_web));
    t.Start(rdr[0]);
}

static void check_web(object weburl)
{
    bool isok;
    isok = ConnectionAvailable(weburl.ToString());
}

public static bool ConnectionAvailable(string strServer)
{
    try
    {
        strServer = "http://" + strServer;
        HttpWebRequest reqFP = (HttpWebRequest)HttpWebRequest.Create(strServer);
        reqFP.Timeout = 10000;
        reqFP.Method = "HEAD";
        HttpWebResponse rspFP = (HttpWebResponse)reqFP.GetResponse();
        if (HttpStatusCode.OK == rspFP.StatusCode)
        {
            Console.WriteLine(strServer + " - OK");
            rspFP.Close();
            return true;
        }
        else
        {
            Console.WriteLine(strServer + " Server returned error..");
            rspFP.Close();
            return false;
        }
    }
    catch (WebException x)
    {
        if (x.ToString().Contains("timed out"))
        {
            Console.WriteLine(strServer + " - Timed out");
        }
        else
        {
            Console.WriteLine(x.Message.ToString());
        }
        return false;
    }
}
Just remember, you asked.
Very bad implementation.
Do not go creating threads like that. It does very little good to have more threads than processor cores. The extra threads will pretty much just compete with each other, especially since they're all running the same code.
You need to implement using blocks. If you throw an exception (and chances are you will), then you will be leaking resources.
What is the purpose in returning a bool? Do you check it somewhere? In any case, your error and exception processing are a mess.
When you get a non-200 response, you don't display the error code.
You're comparing against the Message property to decide if it's a timeout. Microsoft should put a space between the "time" and "out" just to spite you.
When it's not a timeout, you display only the Message property, not the entire exception, and the Message property is already a string and doesn't need you to call ToString() on it.
Next Batch of Changes
This isn't finished, I don't think, but try this one:
public static void Main()
{
    // Don't mind the interpretation. I needed an excuse to define "rdr"
    using (var conn = new SqlConnection())
    {
        conn.Open();
        using (var cmd = new SqlCommand("SELECT Url FROM UrlsToCheck", conn))
        {
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    // Use the thread pool. Please.
                    ThreadPool.QueueUserWorkItem(
                        delegate(object weburl)
                        {
                            // I invented a reason for you to return bool
                            if (!ConnectionAvailable(weburl.ToString()))
                            {
                                // Console would be getting pretty busy with all
                                // those threads
                                Debug.WriteLine(
                                    String.Format(
                                        "{0} was not available",
                                        weburl));
                            }
                        },
                        rdr[0]);
                }
            }
        }
    }
}
public static bool ConnectionAvailable(string strServer)
{
    try
    {
        strServer = "http://" + strServer;
        var reqFp = (HttpWebRequest)WebRequest.Create(strServer);
        reqFp.Timeout = 10000;
        reqFp.Method = "HEAD";

        // BTW, what's an "FP"?
        using (var rspFp = (HttpWebResponse)reqFp.GetResponse()) // IDisposable
        {
            if (HttpStatusCode.OK == rspFp.StatusCode)
            {
                Debug.WriteLine(string.Format("{0} - OK", strServer));
                return true; // Dispose called when using is exited
            }

            // Include the error because it's nice to know these things
            Debug.WriteLine(String.Format(
                "{0} Server returned error: {1}",
                strServer, rspFp.StatusCode));
            return false;
        }
    }
    catch (WebException x)
    {
        // Don't tempt fate and don't let programs read human-readable messages
        if (x.Status == WebExceptionStatus.Timeout)
        {
            Debug.WriteLine(string.Format("{0} - Timed out", strServer));
        }
        else
        {
            // The FULL exception, please
            Debug.WriteLine(x.ToString());
        }
        return false;
    }
}
Almost Done - Not Tested Late Night Code
public static void Main()
{
    using (var conn = new SqlConnection())
    {
        conn.Open();
        using (var cmd = new SqlCommand("", conn))
        {
            using (var rdr = cmd.ExecuteReader())
            {
                if (rdr == null)
                {
                    return;
                }
                while (rdr.Read())
                {
                    ThreadPool.QueueUserWorkItem(
                        CheckConnectionAvailable, rdr[0]);
                }
            }
        }
    }
}

private static void CheckConnectionAvailable(object weburl)
{
    try
    {
        // If this works, it's a lot simpler
        var strServer = new Uri("http://" + weburl);
        using (var client = new WebClient())
        {
            client.UploadDataCompleted += ClientOnUploadDataCompleted;
            client.UploadDataAsync(
                strServer, "HEAD", new byte[] { }, strServer);
        }
    }
    catch (WebException x)
    {
        Debug.WriteLine(x);
    }
}

private static void ClientOnUploadDataCompleted(
    object sender, UploadDataCompletedEventArgs args)
{
    if (args.Error == null)
    {
        Debug.WriteLine(string.Format("{0} - OK", args.UserState));
    }
    else
    {
        Debug.WriteLine(string.Format("{0} - Error", args.Error));
    }
}
Use the ThreadPool class. Don't spawn hundreds of threads like this. Threads have a huge overhead, and what happens in your case is that your CPU will spend 99% of its time on context switching and 1% doing real work.
Don't use threads.
Async callbacks and queues. Why create a thread when the resource they all want is access to the outside world? Limit your threads to about 5, and implement a class that uses a queue. Split the code into two parts, the fetch and the process: one controls the flow of data while the other controls access to the outside world.
Use whatever language you like, but you won't go wrong if you think of threads as being for processing and number crunching, and async callbacks as being for resource management.
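A minimal sketch of that queue-plus-limited-workers idea, using a BlockingCollection and a small fixed number of worker threads (the worker count and the checkUrl delegate are illustrative, not part of the answer):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

internal static class UrlCheckQueue
{
    public static void CheckAll(string[] urls, Func<string, bool> checkUrl)
    {
        // All URLs go into one queue; only a handful of workers drain it.
        var queue = new BlockingCollection<string>();
        foreach (var url in urls)
        {
            queue.Add(url);
        }
        queue.CompleteAdding();

        const int workerCount = 5; // keep only a few requests in flight at once
        var workers = new Task[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            workers[i] = Task.Factory.StartNew(() =>
            {
                foreach (var url in queue.GetConsumingEnumerable())
                {
                    if (!checkUrl(url))
                    {
                        Console.WriteLine("{0} was not available", url);
                    }
                }
            });
        }
        Task.WaitAll(workers);
    }
}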
