When I initialize my client to connect to AppFabric's cache, it seems to inconsistently take up to 30 seconds to connect on the following line:
factory = new DataCacheFactory(configuration);
See full Init() code below - mostly taken from here.
I say inconsistently because sometimes it takes 1 second and other times 27 or 28 seconds. I have an ASP.NET site using the AppFabric cache, which lives on a different box (on the same domain). Everything is working great except for the inconsistent connection time. Once it connects, it's all good; I just need it to consistently connect in ~1 second :) ... Thoughts?
public static void Init()
{
    if (cache == null)
    {
        Stopwatch sw = new Stopwatch();
        sw.Start();
        try
        {
            // Define array for 1 cache host
            List<DataCacheServerEndpoint> servers = new List<DataCacheServerEndpoint>(1);

            var appFabricHost = ConfigurationManager.AppSettings["AppFabricHost"];
            var appFabricPort = ConfigurationManager.AppSettings["AppFabricPort"].ParseAs<int>();

            // Specify cache host details
            //   Parameter 1 = host name
            //   Parameter 2 = cache port number
            servers.Add(new DataCacheServerEndpoint(appFabricHost, appFabricPort));
            TraceHelper.TraceVerbose("Init", string.Format("Defined AppFabric - Host: {0}, Port: {1}", appFabricHost, appFabricPort));

            // Create cache configuration
            DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration();

            // Set the cache host(s)
            configuration.Servers = servers;

            // Set default properties for local cache (local cache disabled)
            configuration.LocalCacheProperties = new DataCacheLocalCacheProperties();

            // Disable tracing to avoid informational/verbose messages on the web page
            DataCacheClientLogManager.ChangeLogLevel(System.Diagnostics.TraceLevel.Off);

            // Pass configuration settings to the DataCacheFactory constructor
            factory = new DataCacheFactory(configuration);

            // Get a reference to the named cache
            cache = factory.GetCache(cacheName);
            TraceHelper.TraceVerbose("Init", "Defined AppFabric - CacheName: " + cacheName);
        }
        catch (Exception ex)
        {
            TraceHelper.TraceError("Init", ex);
        }
        finally
        {
            // Use TotalSeconds; Elapsed.Seconds is only the seconds component and wraps at 60
            TraceHelper.TraceInfo("Init", string.Format("AppFabric init took {0} seconds", sw.Elapsed.TotalSeconds));
        }

        if (cache == null)
        {
            TraceHelper.TraceError("Init", string.Format("First init cycle took {0} seconds and failed, retrying", sw.Elapsed.TotalSeconds));
            UrlShortener.Init(); // if at first you don't succeed, try try again ...
        }
    }
}
Is it any faster and/or more consistent if you keep all the configuration info in a .config file rather than creating your configuration programmatically? See here for details. I would always use this method rather than programmatic configuration, as it's much easier to update when something changes.
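For reference, a minimal sketch of what that client configuration section looks like; the host name is a placeholder, and 22233 is AppFabric's default cache port:

<configSections>
  <section name="dataCacheClient"
           type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core"
           allowLocation="true"
           allowDefinition="Everywhere"/>
</configSections>
<dataCacheClient>
  <hosts>
    <host name="CacheServer1" cachePort="22233"/>
  </hosts>
  <localCache isEnabled="false"/>
</dataCacheClient>

With that in place, the parameterless new DataCacheFactory() picks its settings up from the config file.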
Otherwise, I think the general advice is that DataCacheFactory is an expensive object to create because of what it does, i.e. it makes a network connection to each server in the cluster. You definitely don't want to be creating a DataCacheFactory every time you need to get something from the cache; instead you might want to think about creating it in Application_Start, perhaps as a singleton, and then reusing that one throughout your application (which, granted, doesn't solve the problem, but it might mitigate it).
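A minimal sketch of that singleton approach (the class and method names here are illustrative; Lazy<T> gives thread-safe one-time construction on .NET 4):

public static class CacheFactorySingleton
{
    // Created once, on first use; Lazy<T> makes the initialization thread-safe.
    private static readonly Lazy<DataCacheFactory> factory =
        new Lazy<DataCacheFactory>(() => new DataCacheFactory()); // reads settings from the .config file

    public static DataCache GetCache(string cacheName)
    {
        return factory.Value.GetCache(cacheName);
    }
}

Every caller then goes through CacheFactorySingleton.GetCache(...), so the expensive factory construction happens at most once per process.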
I am working on a WCF service that is hosted in a Windows service, using netTcpBinding.
When I tried to load test the service, I built a simple client that calls it about 1000 times per second. At first the calls take about 2 to 8 seconds to return; after leaving the simple client running for about half an hour, the response time increases, and some clients get timeout exceptions on the send timeout, which was configured as 2 minutes.
These are the steps I tried:
Revised the service throttling configuration:
<serviceThrottling maxConcurrentCalls="2147483647" maxConcurrentInstances="2147483647" maxConcurrentSessions="2147483647"/>
I was working on a Windows 7 machine, so I moved to Server 2008, but got the same result.
Updated the TCP binding configuration to the following:
NetTcpBinding baseBinding = new NetTcpBinding(SecurityMode.None, true);
baseBinding.MaxBufferSize = int.MaxValue;
baseBinding.MaxConnections = int.MaxValue;
baseBinding.ListenBacklog = int.MaxValue;
baseBinding.MaxBufferPoolSize = long.MaxValue;
baseBinding.TransferMode = TransferMode.Buffered;
baseBinding.MaxReceivedMessageSize = int.MaxValue;
baseBinding.PortSharingEnabled = true;
baseBinding.ReaderQuotas.MaxDepth = int.MaxValue;
baseBinding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
baseBinding.ReaderQuotas.MaxArrayLength = int.MaxValue;
baseBinding.ReaderQuotas.MaxBytesPerRead = int.MaxValue;
baseBinding.ReaderQuotas.MaxNameTableCharCount = int.MaxValue;
baseBinding.ReliableSession.Enabled = true;
baseBinding.ReliableSession.Ordered = true;
baseBinding.ReliableSession.InactivityTimeout = new TimeSpan(23, 23, 59, 59);

BindingElementCollection elements = baseBinding.CreateBindingElements();
ReliableSessionBindingElement reliableSessionElement = elements.Find<ReliableSessionBindingElement>();
if (reliableSessionElement != null)
{
    reliableSessionElement.MaxPendingChannels = 128;

    TcpTransportBindingElement transport = elements.Find<TcpTransportBindingElement>();
    transport.ConnectionPoolSettings.MaxOutboundConnectionsPerEndpoint = 1000;

    CustomBinding newBinding = new CustomBinding(elements);
    newBinding.CloseTimeout = new TimeSpan(0, 20, 9);
    newBinding.OpenTimeout = new TimeSpan(0, 25, 0);
    newBinding.ReceiveTimeout = new TimeSpan(23, 23, 59, 59);
    newBinding.SendTimeout = new TimeSpan(0, 20, 0);
    newBinding.Name = "netTcpServiceBinding";
    return newBinding;
}
else
{
    throw new Exception("the base binding does not have a ReliableSessionBindingElement");
}
Changed my service operations to use async and await:
public async Task<ReturnObj> Connect(ClientInfo clientInfo)
{
    // The lambda must return a value for 'await task' to produce a result
    var task = Task.Factory.StartNew(() =>
    {
        // do the needed work:
        // insert into database,
        // query some tables to return information to the client
        return new ReturnObj();
    });
    var res = await task;
    return res;
}
and updated the client to use async and await in its calls to the service.
Applied the worker-thread solution proposed in this link:
https://support.microsoft.com/en-us/kb/2538826
although I am using .NET 4.5.1; I set MinThreads to 1000 worker threads and 1000 IOCP threads.
After all this, the service started to handle more requests, but the delay still exists, and the simple client takes about 4 hours to hit a timeout.
The strange thing is that I found the service handles about 8 to 16 calls within 100 ms, depending on the number of threads currently alive in the service.
I found a lot of articles about configuration changes to machine.config and Aspnet.config. I think this is not related to my case, as I am using netTcp in a Windows service, not IIS, but I implemented these changes anyway and found no change in the results.
Could someone point me to what I am missing, or am I asking the service for something it can't support?
It could be how your test client is written. With netTcp, when you create a channel, WCF tries to get one from the idle connection pool. If the pool is empty, it opens a new socket connection. When you close a client channel, the connection is returned to the idle pool. The default size of the idle connection pool is 10, which means that once there are 10 connections in the idle pool, any subsequent close actually closes the TCP socket. If your test code is creating and disposing of channels quickly, you could be discarding connections from the pool, and you could then be hitting a problem of too many sockets in the TIME_WAIT state.
Here is a blog post describing how to modify the pooling behavior.
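In outline, the change looks something like this; TcpConnectionPoolSettings is the standard WCF type, and the values below are illustrative:

// A hedged sketch: enlarge the client-side idle connection pool so rapid
// open/close cycles reuse sockets instead of pushing them into TIME_WAIT.
NetTcpBinding binding = new NetTcpBinding(SecurityMode.None);
BindingElementCollection elements = binding.CreateBindingElements();
TcpTransportBindingElement transport = elements.Find<TcpTransportBindingElement>();
transport.ConnectionPoolSettings.MaxOutboundConnectionsPerEndpoint = 100; // default is 10
transport.ConnectionPoolSettings.IdleTimeout = TimeSpan.FromMinutes(2);   // default is 2 minutes
Binding pooledBinding = new CustomBinding(elements);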
This is most likely due to the concurrency mode being set to Single (the default value). Try setting ConcurrencyMode to Multiple by adding a ServiceBehaviorAttribute to your service implementation.
Be sure to check the documentation: https://msdn.microsoft.com/en-us/library/system.servicemodel.concurrencymode(v=vs.110).aspx
Example:
// With ConcurrencyMode.Multiple, threads can call an operation at any time.
// It is your responsibility to guard your state with locks. If
// you always guarantee you leave state consistent when you leave
// the lock, you can assume it is valid when you enter the lock.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MultipleCachingHttpFetcher : IContract
You may also be interested in the Sessions, Instancing, and Concurrency article, which describes concurrency problems.
For Windows Azure queues, the scalability target per storage account is supposed to be around 500 messages/second (http://msdn.microsoft.com/en-us/library/windowsazure/hh697709.aspx). I have the following simple program that just writes a few messages to a queue. The program takes 10 seconds to complete (4 messages/second). I am running the program from inside a virtual machine (in West Europe) and my storage account is also located in West Europe. I don't have geo-replication set up for my storage. My connection string is set up to use the HTTP protocol.
// http://blogs.msdn.com/b/windowsazurestorage/archive/2010/06/25/nagle-s-algorithm-is-not-friendly-towards-small-requests.aspx
ServicePointManager.UseNagleAlgorithm = false;

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["DataConnectionString"]);
var cloudQueueClient = storageAccount.CreateCloudQueueClient();
var queue = cloudQueueClient.GetQueueReference(Guid.NewGuid().ToString());
queue.CreateIfNotExist();

var w = new Stopwatch();
w.Start();
for (int i = 0; i < 50; i++)
{
    Console.WriteLine("nr {0}", i);
    queue.AddMessage(new CloudQueueMessage("hello " + i));
}
w.Stop();

Console.WriteLine("elapsed: {0}", w.ElapsedMilliseconds);
queue.Delete();
Any idea how I can get better performance?
EDIT:
Based on Sandrino Di Mattia's answer, I re-analyzed the code I originally posted and found that it was not complete enough to reproduce the issue. In fact, I had created a queue just before the call to ServicePointManager.UseNagleAlgorithm = false; the code to reproduce my problem looks more like this:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["DataConnectionString"]);
var cloudQueueClient = storageAccount.CreateCloudQueueClient();
var queue = cloudQueueClient.GetQueueReference(Guid.NewGuid().ToString());
//ServicePointManager.UseNagleAlgorithm = false; // If you disable the Nagle algorithm here, the performance will be okay.
queue.CreateIfNotExist();
ServicePointManager.UseNagleAlgorithm = false; // TOO LATE: the ServicePoint was already created (by CreateIfNotExist) with Nagle enabled

var w = new Stopwatch();
w.Start();
for (int i = 0; i < 50; i++)
{
    Console.WriteLine("nr {0}", i);
    queue.AddMessage(new CloudQueueMessage("hello " + i));
}
w.Stop();

Console.WriteLine("elapsed: {0}", w.ElapsedMilliseconds);
queue.Delete();
The solution suggested by Sandrino, configuring the ServicePointManager via the app.config file, has the advantage that the ServicePointManager is initialized when the application starts up, so you don't have to worry about timing dependencies.
I answered a similar question a few days ago: How to achieve more than 10 inserts per second with Azure storage tables.
For adding 1000 items to table storage it took over 3 minutes, and with the changes I described in my answer it dropped to 4 seconds (250 requests/sec). In the end, table storage and storage queues aren't all that different: the backend is the same, the data is simply stored in a different way. And both table storage and queues are exposed through a REST API, so if you improve the way you handle your requests, you'll get better performance.
The most important changes:
expect100Continue: false
useNagleAlgorithm: false (you're already doing this)
Parallel requests combined with connectionManagement/maxconnection
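If you prefer to set these through configuration, a minimal app.config sketch would look like this (the connection limit value is illustrative):

<system.net>
  <settings>
    <servicePointManager expect100Continue="false" useNagleAlgorithm="false" />
  </settings>
  <connectionManagement>
    <add address="*" maxconnection="100" />
  </connectionManagement>
</system.net>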
Also, ServicePointManager.DefaultConnectionLimit should be increased before any service point is created. Actually, Sandrino's answer says the same thing, but via config.
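Programmatically, that's a few lines that must run early in startup, before the first request creates a ServicePoint (the limit value is illustrative):

// Run this before the first storage request so the settings apply to new ServicePoints.
ServicePointManager.DefaultConnectionLimit = 100;
ServicePointManager.Expect100Continue = false;
ServicePointManager.UseNagleAlgorithm = false;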
Turn off proxy detection, even in the cloud (the "auto detect" setting in the proxy configuration element); it slows down initialization.
Choose distributed partition keys.
Colocate your storage account near your compute, and near your customers.
Design to add more storage accounts as needed.
Microsoft set the SLA at 2,000 tps on queues and tables as of July 2012.
I didn't read Sandrino's linked answer, sorry; I just came to this question as I was watching a Build 2012 session on exactly this.
I have a library that uses WCF to call an HTTP service to get settings. Normally the first call takes ~100 milliseconds and subsequent calls take only a few milliseconds. But I have found that when I create a new AppDomain, the first WCF call from that AppDomain takes over 2.5 seconds.
Does anyone have an explanation or fix for why the first creation of a WCF channel in a new AppDomain takes so long?
These are the benchmark results (when running without the debugger attached, in Release, 64-bit). Notice how in the second set of numbers the first connection takes over 25x longer:
Running in initial AppDomain
First Connection: 92.5018 ms
Second Connection: 2.6393 ms
Running in new AppDomain
First Connection: 2457.8653 ms
Second Connection: 4.2627 ms
This isn't a complete example but shows most of how I produced these numbers:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Running in initial AppDomain");
        new DomainRunner().Run();

        Console.WriteLine();
        Console.WriteLine("Running in new thread and AppDomain");
        DomainRunner.RunInNewAppDomain("test");

        Console.ReadLine();
    }
}

class DomainRunner : MarshalByRefObject
{
    public static void RunInNewAppDomain(string runnerName)
    {
        var newAppDomain = AppDomain.CreateDomain(runnerName);
        var runnerProxy = (DomainRunner)newAppDomain.CreateInstanceAndUnwrap(
            typeof(DomainRunner).Assembly.FullName,
            typeof(DomainRunner).FullName);
        runnerProxy.Run();
    }

    public void Run()
    {
        AppServSettings.InitSettingLevel(SettingLevel.Production);

        var test = string.Empty;
        var sw = Stopwatch.StartNew();
        test += AppServSettings.ServiceBaseUrlBatch;
        Console.WriteLine("First Connection: {0}", sw.Elapsed.TotalMilliseconds);

        sw = Stopwatch.StartNew();
        test += AppServSettings.ServiceBaseUrlBatch;
        Console.WriteLine("Second Connection: {0}", sw.Elapsed.TotalMilliseconds);
    }
}
The call to AppServSettings.ServiceBaseUrlBatch creates a channel to a service and calls a single method. I have used Wireshark to watch the call, and it only takes a few milliseconds to get a response from the service. It creates the channel with the following code:
public static ISettingsChannel GetClient()
{
    EndpointAddress address = new EndpointAddress(SETTINGS_SERVICE_URL);
    BasicHttpBinding binding = new BasicHttpBinding
    {
        MaxReceivedMessageSize = 1024,
        OpenTimeout = TimeSpan.FromSeconds(2),
        SendTimeout = TimeSpan.FromSeconds(5),
        ReceiveTimeout = TimeSpan.FromSeconds(5),
        ReaderQuotas = { MaxStringContentLength = 1024 },
        UseDefaultWebProxy = false,
    };

    cf = new ChannelFactory<ISettingsChannel>(binding, address);
    return cf.CreateChannel();
}
Profiling the app shows that, in the first case, constructing the channel factory, creating the channel, and calling the method take less than 100 milliseconds in total.
In the new AppDomain, constructing the channel factory took 763 milliseconds, creating the channel took 521 milliseconds, and calling the method on the interface took 1,098 milliseconds:
TestSettingsRepoInAppDomain.DomainRunner.Run() 2,660.00
TestSettingsRepoInAppDomain.AppServSettings.get_ServiceBaseUrlBatch() 2,543.47
Tps.Core.Settings.Retriever.GetSetting(string,!!0,!!0,!!0) 2,542.66
Tps.Core.Settings.Retriever.TryGetSetting(string,!!0&) 2,522.03
Tps.Core.Settings.ServiceModel.WcfHelper.GetClient() 1,371.21
Tps.Core.Settings.ServiceModel.IClientChannelExtensions.CallWithRetry(class System.ServiceModel.IClientChannel) 1,098.83
EDIT
After using perfmon with the .NET CLR Loading object, I can see that when it loads the second AppDomain it loads far more classes into memory than it does initially. The first flat line is a pause I put in after the first AppDomain; at that point 218 classes are loaded. The second AppDomain causes 1,944 total classes to be loaded.
I assume it's the loading of all these classes that is taking up all of the time, so now the question is: which classes is it loading, and why?
UPDATE
The answer turns out to be that only one AppDomain is able to take advantage of the native image system DLLs. The slowness in the second AppDomain was it having to re-JIT all of the System.* DLLs used by WCF. The first AppDomain could use the pre-NGENed native versions of those DLLs, so it didn't have the same startup cost.
After investigating the LoaderOptimizationAttribute that Petar suggested, that indeed seemed to fix the issue: using either MultiDomain or MultiDomainHost results in the second AppDomain taking the same amount of time as the first to access things over WCF.
Here you can see the default option. Note how in the second AppDomain none of the assemblies say Native, meaning they all had to be re-JITted, which is what was taking all of the time.
Here is after adding LoaderOptimization(LoaderOptimization.MultiDomain) to Main. You can see that everything is loaded into the shared AppDomain.
Here is after using LoaderOptimization(LoaderOptimization.MultiDomainHost) on Main. You can see that all system DLLs are shared, but my own DLLs and any not in the GAC are loaded separately into each AppDomain.
So for the service that prompted this question, MultiDomainHost is the answer, because it has fast startup time and I can still unload AppDomains to remove the dynamically built assemblies that the service uses.
You can decorate your Main with the LoaderOptimization attribute to tell the CLR loader how to load classes:
[LoaderOptimization(LoaderOptimization.MultiDomain)]
MultiDomain - indicates that the application will probably have many domains that use the same code, and the loader must share maximal internal resources across application domains.
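A minimal sketch of where the attribute goes; it only takes effect on the entry point of the executable, and the body here is illustrative:

class Program
{
    // Share JIT-compiled code and native images across all AppDomains in this process.
    [LoaderOptimization(LoaderOptimization.MultiDomain)] // or MultiDomainHost
    static void Main(string[] args)
    {
        DomainRunner.RunInNewAppDomain("test"); // from the example above
    }
}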
Do you have an HTTP proxy defined in IE (maybe an auto-configuration script)? This can be a cause.
Otherwise, I would guess it is the time it takes to load all the DLLs. Try to separate the proxy creation from the actual call to the service, to see what's taking the time.
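For example, a sketch of timing each step separately, reusing the binding and address from the question's GetClient(); GetSetting is a hypothetical operation on the ISettingsChannel contract:

var sw = Stopwatch.StartNew();
var factory = new ChannelFactory<ISettingsChannel>(binding, address);
Console.WriteLine("Factory construction: {0} ms", sw.Elapsed.TotalMilliseconds);

sw = Stopwatch.StartNew();
var channel = factory.CreateChannel();
Console.WriteLine("Channel creation: {0} ms", sw.Elapsed.TotalMilliseconds);

sw = Stopwatch.StartNew();
var value = channel.GetSetting("ServiceBaseUrlBatch"); // hypothetical operation
Console.WriteLine("First call: {0} ms", sw.Elapsed.TotalMilliseconds);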
I found the following article that talks about how only the first AppDomain can use native image DLLs, so a child AppDomain will always be forced to JIT lots of code that the initial AppDomain doesn't have to. This could explain the performance impact I am seeing, but would it be possible to somehow avoid this penalty?
If there is a native image for the assembly, only the first AppDomain
can use the native image. All other AppDomains will have to
JIT-compile the code which can result in a significant CPU cost.
I have the following class that returns the number of current requests per second of IIS. I call RefreshCounters every minute in order to keep the requests-per-second value fresh (because it is an average, and if I keep the counter too long, old values influence the result too much). When I need to display the current RequestsPerSecond, I read that property.
public class Counters
{
    private static PerformanceCounter pcReqsPerSec;
    private const string counterKey = "Requests_Sec";

    public static object RequestsPerSecond
    {
        get
        {
            lock (counterKey)
            {
                if (pcReqsPerSec != null)
                    return pcReqsPerSec.NextValue().ToString("N2"); // EXCEPTION
                else
                    return "0";
            }
        }
    }

    internal static string RefreshCounters()
    {
        lock (counterKey)
        {
            try
            {
                if (pcReqsPerSec != null)
                {
                    pcReqsPerSec.Dispose();
                    pcReqsPerSec = null;
                }
                pcReqsPerSec = new PerformanceCounter("W3SVC_W3WP", "Requests / Sec", "_Total", true);
                pcReqsPerSec.NextValue();
                PerformanceCounter.CloseSharedResources();
                return null;
            }
            catch (Exception ex)
            {
                return ex.ToString();
            }
        }
    }
}
The problem is that following Exception is sometimes thrown:
System.InvalidOperationException: Category does not exist.
at System.Diagnostics.PerformanceCounterLib.GetCategorySample(String machine,\ String category)
at System.Diagnostics.PerformanceCounter.NextSample()
at System.Diagnostics.PerformanceCounter.NextValue()
at BidBop.Admin.PerfCounter.Counters.get_RequestsPerSecond() in [[[pcReqsPerSec.NextValue().ToString("N2");]]]
Am I not closing previous instances of PerformanceCounter properly? What am I doing wrong so that I end up with that exception sometimes?
EDIT:
And just for the record: I am hosting this class in an IIS website (in an app pool that has administrative privileges) and invoking its methods from an ASMX service. The site that uses the counter values (displays them) calls RefreshCounters every 1 minute and RequestsPerSecond every 5 seconds; RequestsPerSecond values are cached between calls.
I am calling RefreshCounters every 1 minute because the values tend to become "stale", too influenced by older values (values that were actual 1 minute ago, for example).
Antenka has led you in a good direction here. You should not be disposing and re-creating the performance counter on every update/request for its value. There is a cost to instantiating performance counters, and the first read can be inaccurate, as indicated in the quote below. Also, your lock() { ... } statements are very broad (they cover a lot of statements) and will be slow. It's better to keep your locks as small as possible. I'm giving Antenka an upvote for the quality reference and good advice!
However, I think I can provide a better answer for you. I have a fair bit of experience with monitoring server performance and understand exactly what you need. One problem your code doesn't take into account is that whatever code displays your performance counter (.aspx, .asmx, console app, winform app, etc.) could be requesting this statistic at any rate; it could be requested once every 10 seconds, or maybe 5 times per second, you don't know and shouldn't care. So you need to separate the PerformanceCounter code that does the monitoring from the code that reports the current Requests / Sec value. And for performance reasons, I'm also going to show you how to set up the performance counter on first request and then keep it going until nobody has made any requests for 5 seconds, then close/dispose the PerformanceCounter properly.
public class RequestsPerSecondCollector
{
    #region General Declaration
    // Static stuff for the polling timer
    private static System.Threading.Timer pollingTimer;
    private static int stateCounter = 0;
    private static int lockTimerCounter = 0;

    // Instance stuff for our performance counter
    private static System.Diagnostics.PerformanceCounter pcReqsPerSec;
    private static readonly object threadLock = new object();
    private static decimal CurrentRequestsPerSecondValue;
    private static int LastRequestTicks;
    #endregion

    #region Singleton Implementation
    /// <summary>
    /// Static members are 'eagerly initialized', that is,
    /// immediately when the class is loaded for the first time.
    /// .NET guarantees thread safety for static initialization.
    /// </summary>
    private static readonly RequestsPerSecondCollector _instance = new RequestsPerSecondCollector();
    #endregion

    #region Constructor/Finalizer
    /// <summary>
    /// Private constructor for static singleton instance construction; you won't be able to instantiate this class outside of itself.
    /// </summary>
    private RequestsPerSecondCollector()
    {
        LastRequestTicks = System.Environment.TickCount;

        // Start things up by making the first request.
        GetRequestsPerSecond();
    }
    #endregion

    #region Getter for current requests per second measure
    public static decimal GetRequestsPerSecond()
    {
        if (pollingTimer == null)
        {
            Console.WriteLine("Starting Poll Timer");

            // Let's check the performance counter every 1 second, and don't do the first time until after 1 second.
            pollingTimer = new System.Threading.Timer(OnTimerCallback, null, 1000, 1000);

            // The first read from a performance counter is notoriously inaccurate, so do one read now and discard it.
            OnTimerCallback(null);
        }

        LastRequestTicks = System.Environment.TickCount;
        lock (threadLock)
        {
            return CurrentRequestsPerSecondValue;
        }
    }
    #endregion

    #region Polling Timer
    static void OnTimerCallback(object state)
    {
        if (System.Threading.Interlocked.CompareExchange(ref lockTimerCounter, 1, 0) == 0)
        {
            if (pcReqsPerSec == null)
                pcReqsPerSec = new System.Diagnostics.PerformanceCounter("W3SVC_W3WP", "Requests / Sec", "_Total", true);

            if (pcReqsPerSec != null)
            {
                try
                {
                    lock (threadLock)
                    {
                        CurrentRequestsPerSecondValue = Convert.ToDecimal(pcReqsPerSec.NextValue().ToString("N2"));
                    }
                }
                catch (Exception)
                {
                    // We had a problem; just get rid of the performance counter and we'll rebuild it next revision.
                    if (pcReqsPerSec != null)
                    {
                        pcReqsPerSec.Close();
                        pcReqsPerSec.Dispose();
                        pcReqsPerSec = null;
                    }
                }
            }

            stateCounter++;

            // Check every 5 seconds or so if anybody is still monitoring the server PerformanceCounter; if not, shut down our PerformanceCounter.
            if (stateCounter % 5 == 0)
            {
                if (System.Environment.TickCount - LastRequestTicks > 5000)
                {
                    Console.WriteLine("Stopping Poll Timer");

                    pollingTimer.Dispose();
                    pollingTimer = null;

                    if (pcReqsPerSec != null)
                    {
                        pcReqsPerSec.Close();
                        pcReqsPerSec.Dispose();
                        pcReqsPerSec = null;
                    }
                }
            }

            System.Threading.Interlocked.Add(ref lockTimerCounter, -1);
        }
    }
    #endregion
}
OK, now for some explanation.
First, you'll notice this class is designed to be a static singleton. You can't load multiple copies of it; it has a private constructor and an eagerly initialized internal instance of itself. This makes sure you don't accidentally create multiple copies of the same PerformanceCounter.
Next, you'll notice that in the private constructor (which will only run once, when the class is first accessed) we create both the PerformanceCounter and a timer which will be used to poll the PerformanceCounter.
The timer's callback method will create the PerformanceCounter if needed and get its next value. Also, every 5 iterations we check how long it's been since your last request for the PerformanceCounter's value. If it's been more than 5 seconds, we shut down the polling timer, as it's unneeded at the moment. We can always start it up again later if we need it again.
Now we have a static method called GetRequestsPerSecond() for you to call, which will return the current value of the requests-per-second PerformanceCounter.
The benefits of this implementation are that you only create the performance counter once and then keep using it until you are finished with it. It's easy to use because you simply call RequestsPerSecondCollector.GetRequestsPerSecond() from wherever you need it (.aspx, .asmx, console app, winforms app, etc.). There will always be only one PerformanceCounter, and it will always be polled exactly once per second regardless of how quickly you call RequestsPerSecondCollector.GetRequestsPerSecond(). It will also automatically close and dispose of the PerformanceCounter if you haven't requested its value in more than 5 seconds. Of course, you can adjust both the timer interval and the timeout milliseconds to suit your needs; you could poll faster and time out after, say, 60 seconds instead of 5. I chose 5 seconds because it proves that it works very quickly while debugging in Visual Studio. Once you test it and know it works, you might want a longer timeout.
Hopefully this helps you not only use PerformanceCounters better, but also feel safe reusing this class, which is separate from whatever you use to display the statistics. Reusable code is always a plus!
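A hypothetical caller, just to show the shape of it (the label name is made up):

// e.g. inside a page or service method that reports server load:
decimal rps = RequestsPerSecondCollector.GetRequestsPerSecond();
lblRequestsPerSecond.Text = rps.ToString("N2"); // 'lblRequestsPerSecond' is illustrative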
EDIT: As a follow-up question: what if you want to perform some cleanup or babysitting task every 60 seconds while this performance counter is running? Well, we already have the timer running every 1 second and a variable tracking our loop iterations, called stateCounter, which is incremented on each timer callback. So you could add in some code like this:
// Every 60 seconds I want to close/dispose my PerformanceCounter
if (stateCounter % 60 == 0)
{
    if (pcReqsPerSec != null)
    {
        pcReqsPerSec.Close();
        pcReqsPerSec.Dispose();
        pcReqsPerSec = null;
    }
}
I should point out that the performance counter in this example should not "go stale". I believe "Requests / Sec" is an average, not a moving-average statistic. But this sample just illustrates a way you could do any type of cleanup or "babysitting" of your PerformanceCounter on a regular time interval. In this case we are closing and disposing the performance counter, which causes it to be recreated on the next timer callback. You could modify this for your use case and according to the specific PerformanceCounter you are using. Most people reading this question/answer should not need to do this. Check the documentation for your desired PerformanceCounter to see whether it is a continuous count, an average, a moving average, etc., and adjust your implementation appropriately.
I don't know if this applies to you, but I've read the article on the PerformanceCounter.NextValue method,
and there was a comment:
// If the category does not exist, create the category and exit.
// Performance counters should not be created and immediately used.
// There is a latency time to enable the counters, they should be created
// prior to executing the application that uses the counters.
// Execute this sample a second time to use the category.
So I have a question which may lead to the answer: isn't the call to the RequestsPerSecond method happening too early?
Also, I would suggest you check whether the category exists and log the info somewhere, so we can analyze it and determine under which conditions and how often it happens.
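For instance, a sketch of such a guard; PerformanceCounterCategory.Exists is the standard API, and the logging call stands in for whatever you already use:

// Only create the counter if its category is already registered; otherwise log and retry later.
if (System.Diagnostics.PerformanceCounterCategory.Exists("W3SVC_W3WP"))
{
    pcReqsPerSec = new PerformanceCounter("W3SVC_W3WP", "Requests / Sec", "_Total", true);
}
else
{
    Trace.TraceWarning("Category W3SVC_W3WP does not exist yet (IIS not warmed up?)");
}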
I just solved this type of error/exception by using
new PerformanceCounter("Processor Information", "% Processor Time", "_Total");
instead of
new PerformanceCounter("Processor", "% Processor Time", "_Total");
I had an issue retrieving requests per second on IIS using code similar to the following
var pc = new PerformanceCounter();
pc.CategoryName = @"W3SVC_W3WP";
pc.InstanceName = @"_Total";
pc.CounterName = @"Requests / Sec";
Console.WriteLine(pc.NextValue());
This would sometimes throw InvalidOperationException, and I was able to reproduce the exception by restarting IIS. If I run against a non-warmed-up IIS, e.g. after a laptop reboot or an IIS restart, then I get this exception. Hit the website first, make any HTTP request beforehand, wait a second or two, and I don't get the exception. This smells like the performance counters are cached, and when idle they get dumped and take a while to re-cache (or similar).
Update 1: Initially, when I manually browsed to the website and warmed it up, it solved the problem. I've tried programmatically warming up the server with new WebClient().DownloadString(); and Thread.Sleep() of up to 3000 ms, and this has not worked. So my result from manually warming up the server might somehow be a false positive. I'm leaving my answer here because it might be the cause (i.e. manual warming up), and maybe someone else can elaborate further.
Update 2: Ah, OK, here are some unit tests that summarise what I learned from further experimenting yesterday. (There's not a lot on Google on this subject, by the way.)
As far as I can reason, the following statements might be true (and I submit the unit tests underneath as evidence). I may have misinterpreted the results, so please double-check ;-D
1. Creating a performance counter and calling NextValue before the category exists, e.g. querying an IIS counter while IIS is cold and no worker process is running, will throw an InvalidOperationException: "category does not exist". (I assume this is true for all counters, not just IIS ones.)
2. From within a Visual Studio unit test, once your counter throws an exception, if you subsequently warm up the server after the first exception and create a new PerformanceCounter and query it again, it will still throw the exception! (This one was a surprise; I assume it's because of some singleton action. My apologies, I have not had enough time to decompile the sources to investigate further before posting this reply.)
3. In 2 above, if you mark the unit test with [STAThread], then I was able to create a new PerformanceCounter after one had failed. (This might have something to do with performance counters possibly being singletons? Needs further testing.)
4. No pause was required for me between creating the counter and using it, despite some warnings in the MSDN sample code documentation, other than the time it takes to create the performance counter itself before calling NextValue(). In my case, the way to warm up the counter and bring the "category" into existence was to fire one shot across the bow of IIS, i.e. make a single GET request, and voilà, I no longer get the InvalidOperationException; this seems to be a reliable fix for me, for now, at least when querying IIS performance counters.
[Test, Ignore("Run manually AFTER restarting IIS with 'iisreset' at cmd prompt.")]
public void CreatingPerformanceCounterBeforeWarmingUpServerThrowsException()
{
    Console.WriteLine("Given a webserver that is cold");
    Console.WriteLine("When I create a performance counter and read next value");
    using (var pc1 = new PerformanceCounter())
    {
        pc1.CategoryName = @"W3SVC_W3WP";
        pc1.InstanceName = @"_Total";
        pc1.CounterName = @"Requests / Sec";
        Action action1 = () => pc1.NextValue();

        Console.WriteLine("Then InvalidOperationException will be thrown");
        action1.ShouldThrow<InvalidOperationException>();
    }
}

[Test, Ignore("Run manually AFTER restarting IIS with 'iisreset' at cmd prompt.")]
public void CreatingPerformanceCounterAfterWarmingUpServerDoesNotThrowException()
{
    Console.WriteLine("Given a webserver that has been warmed up");
    using (var client = new WebClient())
    {
        client.DownloadString("http://localhost:8082/small1.json");
    }

    Console.WriteLine("When I create a performance counter and read next value");
    using (var pc2 = new PerformanceCounter())
    {
        pc2.CategoryName = @"W3SVC_W3WP";
        pc2.InstanceName = @"_Total";
        pc2.CounterName = @"Requests / Sec";

        float? result = null;
        Action action2 = () => result = pc2.NextValue();

        Console.WriteLine("Then InvalidOperationException will not be thrown");
        action2.ShouldNotThrow();

        Console.WriteLine("And the counter value will be returned");
        result.HasValue.Should().BeTrue();
    }
}
Just out of curiosity: what do you have set for the platform target in Visual Studio? In VS, go to Project Properties, Build, Platform target and change it to AnyCPU. I have seen it before where performance counters aren't always retrieved when it is set to x86, and changing it to AnyCPU could fix it.
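For reference, a sketch of the equivalent .csproj setting (assuming an old-style project file):

<PropertyGroup>
  <!-- AnyCPU lets the process run with the bitness of the OS, matching the counter data. -->
  <PlatformTarget>AnyCPU</PlatformTarget>
</PropertyGroup>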
I have the following code to cache the results of some expensive code:
private MyViewModel GetVM(Params myParams)
{
    string cacheKey = myParams.runDate.ToString();
    var cacheResults = HttpContext.Cache[cacheKey] as MyViewModel;
    if (cacheResults == null)
    {
        cacheResults = RunExpensiveCodeToGenerateVM(myParams);
        HttpContext.Cache[cacheKey] = cacheResults;
    }
    return cacheResults;
}
Will this stay in the cache forever, until the server reboots or runs out of memory?
Will this stay in the cache forever?
This will depend on the particular cache provider you are using. For example, if you are using the default in-memory cache, the item might be evicted if the server starts running low on memory or if the application pool is recycled. But if you are using some other cache provider, for example a distributed cache like memcached or AppFabric, this will depend on the particular implementation.
The rule of thumb is to never assume that something is in the cache just because you previously stored it. Always check for the presence of the item in the cache first, and if it's not present, fetch it and store it in the cache again.
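If you want an explicit lifetime rather than leaving eviction entirely to memory pressure, you can store the item with an expiration. A minimal sketch; the one-hour value is illustrative:

// Cache.Insert lets you state the lifetime explicitly; the indexer does not.
HttpContext.Cache.Insert(
    cacheKey,
    cacheResults,
    null,                                         // no CacheDependency
    DateTime.UtcNow.AddHours(1),                  // absolute expiration (illustrative)
    System.Web.Caching.Cache.NoSlidingExpiration);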