Keeping variables aside from a while loop - C#

I am receiving data from different sources (say, cars) over a serial port (VHF receiver). The whole program is currently a while loop: while the port is open I receive bytes into arrays and do some decoding. Each cycle of the while loop receives and decodes one message. In one case, however, a message is split into two parts, an odd part and an even part. I need both to decode the message, but only one part is received per while cycle, say the odd part from car1. The next while cycle may contain the odd part from car2, or the even part from car1, or the even part from car3, etc. So I need a way to store the odd and even parts of each car until both have been received. Another important requirement: there is a 10 second window from the moment I receive the first part to the moment I receive the second part. If 10 seconds pass, the first part should be discarded. I guess arrays won't work because I need something dynamic like a List or a Dictionary.
while (serialportopen)
{
// some decoding; I end up with
string hexid;
string oddpart;
string evenpart;
}
I tried creating a class with properties to hold the message parts, but I can't create an object with the name hexid because it is already declared. Any ideas involving lists, dictionaries, or classes?

The way most systems handle variable-length data segments is that there is something unique about the first packet that tells the receiving driver it is a two-part message. For example, it might simply be the MSB (most significant bit) of the first packet being set or unset, where 0 could denote that there is no second packet and 1 could denote that there is a second packet. Then it's simply a matter of masking it off when you get it, and either calling a function to receive the second part or setting a flag that the second part still needs to be received and is NOT a new packet.
As for the scope, I'm kind of foggy on what you're asking, but based on the code above, if you declare the variables outside the loop, i.e.:
string hexid;
string oddpart;
string evenpart;
while (serialportopen)
{
}
they'll keep their values across iterations of the loop. It's better to do it that way regardless, as the variables won't be re-declared on each pass (though that may or may not be optimized out by the compiler anyway).
Otherwise there's something unique about the bit pattern of that packet that notifies the driver that another packet is needed. There may be an opcode or command field in that packet; it may simply be an issue of masking that and calling a function to retrieve and return the second part, i.e.
if ((command & CERTAIN_PACKET) == CERTAIN_PACKET)
oddpart = GetSecondPart();
I hope that helps you some!
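For instance, here is a minimal sketch of that idea, assuming the two-part indicator is the most significant bit of the first byte; the 0x80/0x7F masks and the ReceiveSecondPart/HandleSinglePacket helpers are placeholders, not anything from your actual protocol:
// Sketch only: 0x80 as the "second part follows" flag is an assumption.
void HandleFirstByte(byte first)
{
    bool hasSecondPart = (first & 0x80) != 0;  // MSB set => a continuation packet follows
    byte payload = (byte)(first & 0x7F);       // mask the indicator off to recover the value
    if (hasSecondPart)
    {
        byte second = ReceiveSecondPart();     // hypothetical helper: reads the continuation, NOT a new packet
        // combine payload and second here
    }
    else
    {
        HandleSinglePacket(payload);           // hypothetical helper for the single-packet case
    }
}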

I understand that you have a way to tell an odd part from an even part, and from a complete id, and that you have a way to tell the origin of your message. In this case I would store the odd parts (if they are always the ones received first) in a dictionary. When I receive an even part, I would check whether its corresponding odd part is stored; if so, I would process the whole thing, otherwise I would discard it. Furthermore, when I receive an odd part I would start a timer responsible for deleting it from the dictionary after 10 seconds have passed. Since the timer has to be stopped, it will be stored in a dictionary too. Something like this (it's only an example; you'd probably want to create some instance variables to avoid passing parameters everywhere):
Dictionary<string, string> oddParts = new Dictionary<string, string>();
Dictionary<string, Timer> timers = new Dictionary<string, Timer>();
while (serialportopen)
{
// some decoding; I end up with
string hexid = null;
string oddpart;
string evenpart;
// you need something to identify where the message came from and that is present in both the odd and even part (the BF... of your previous question)
string msgId;
if (!String.IsNullOrEmpty(oddpart))
ProcessOddPart(msgId, oddpart, oddParts, timers);
else if (!String.IsNullOrEmpty(evenpart))
hexid = HexIdFromEvenPart(msgId, evenpart, oddParts, timers);
if (!String.IsNullOrEmpty(hexid))
Process(hexid);
}
void Process(string hexid)
{
// your stuff here
}
void ProcessOddPart(string msgId, string oddPart, Dictionary<string, string> oddParts, Dictionary<string, Timer> timers)
{
oddParts[msgId] = oddPart;
System.Timers.Timer tmr = new Timer();
timers[msgId] = tmr;
ElapsedEventHandler timePassed = delegate(object anObj, ElapsedEventArgs args) {
tmr.Stop();
oddParts.Remove(msgId);
timers.Remove(msgId);
};
tmr.Interval = 10000;
tmr.Elapsed += timePassed;
tmr.Start();
}
string HexIdFromEvenPart(string msgId, string evenPart, Dictionary<string, string> oddParts, Dictionary<string, Timer> timers)
{
string oddPart;
Timer tmr;
if (timers.TryGetValue(msgId, out tmr))
{
tmr.Stop();
timers.Remove(msgId);
}
if (!oddParts.TryGetValue(msgId, out oddPart))
return null;
return oddPart + evenPart;
}

Related

Is there a way to avoid using side effects to process this data

I have an application I'm writing that runs script plugins to automate what a user used to have to do manually through a serial terminal. So, I am basically implementing the serial terminal's functionality in code. One of the functions of the terminal was to send a command which kicked off the terminal receiving continuously streamed data from a device until the user pressed space bar, which would then stop the streaming of the data. While the data was streaming, the user would then set some values in another application on some other devices and watch the data streamed in the terminal change.
Now, the streamed data can take different shapes, depending on the particular command that's sent. For instance, one response may look like:
---RESPONSE HEADER---
HERE: 1
ARE: 2 SOME:3
VALUES: 4
---RESPONSE HEADER---
HERE: 5
ARE: 6 SOME:7
VALUES: 8
....
another may look like:
here are some values
in cols and rows
....
So, my idea is to have a different parser based on the command I send. So, I have done the following:
public class Terminal
{
private SerialPort port;
private IResponseHandler pollingResponseHandler;
private object locker = new object();
private List<Response1Clazz> response1;
private List<Response2Clazz> response2;
// setter omitted for brevity
//get snapshot of data at any point in time while response is polling.
public List<Response1Clazz> Response1 { get { lock (locker) return new List<Response1Clazz>(response1); } }
// setter omitted for brevity
public List<Response2Clazz> Response2 { get { lock (locker) return new List<Response2Clazz>(response2); } }
public Terminal()
{
port = new SerialPort(){/*initialize data*/}; //open port etc etc
}
void StartResponse1Polling()
{
Response1 = new List<Response1Clazz>();
Parser<List<Response1Clazz>> parser = new KeyValueParser(Response1); //parser is of type T
pollingResponseHandler = new PollingResponseHandler(parser);
//write command to start polling response 1 in a task
}
void StartResponse2Polling()
{
Response2 = new List<Response2Clazz>();
Parser<List<Response2Clazz>> parser = new RowColumnParser(Response2); //parser is of type T
pollingResponseHandler = new PollingResponseHandler(parser); // this accepts a parser of type T
//write command to start polling response 2
}
void OnSerialDataReceived(object sender, Args a)
{
lock(locker){
//do some processing yada yada
//we pass in the serial data to the handler, which in turn delegates to the parser.
pollingResponseHandler.Handle(processedSerialData);
}
}
}
the caller of the class would then be something like
public class Plugin : BasePlugin
{
public override void PluginMain()
{
Terminal terminal = new Terminal();
terminal.StartResponse1Polling();
//update some other data;
List<Response1Clazz> response = terminal.Response1;
//process response
//update more data
response = terminal.Response1;
//process response
//terminal1.StopPolling();
}
}
My question is quite general, but I'm wondering if this is the best way to handle the situation. Right now I am required to pass in an object/List that I want modified, and it's modified via a side effect. For some reason this feels a little ugly because there is really no indication in code that this is what is happening. I am purely doing it because the "Start" method is the location that knows which parser to create and which data to update. Maybe this is Kosher, but I figured it is worth asking if there is another/better way. Or at least a better way to indicate that the "Handle" method produces side effects.
Thanks!
I don't see problems in modifying List<>s that are received as a parameter. It isn't the most beautiful thing in the world but it is quite common. Sadly C# doesn't have a const modifier for parameters (compare this with C/C++, where unless you declare a parameter to be const, it is ok for the method to modify it). You only have to give the parameter a self-explaining name (like outputList), and put a comment on the method (you know, an xml-comment block, like /// <param name="outputList">This list will receive...</param>).
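For example, a quick sketch of that naming/commenting suggestion (ParseInto is a made-up method name, not from your code):
/// <summary>Parses the raw response and appends the results to the supplied list.</summary>
/// <param name="outputList">This list will receive the parsed rows; it is modified by this call.</param>
public void ParseInto(string rawResponse, List<Response1Clazz> outputList)
{
    // parse rawResponse and Add(...) the results to outputList
}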
To give a more complete response, I would need to see the whole code. You have omitted an example of Parser and an example of Handler.
On the other hand, I do see a problem with your lock in { lock (locker) return new List<Response1Clazz>(response1); }. It seems to be nonsense, considering that you then do Response1 = new List<Response1Clazz>();, even though Response1 only has a getter.
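If a thread-safe snapshot is what you're after, one way to make that part consistent (this is my guess at the intent, not something your code states) is to reset the private backing field and keep the public property as a read-only snapshot:
private List<Response1Clazz> response1 = new List<Response1Clazz>();
// snapshot copy taken under the same lock that OnSerialDataReceived uses
public List<Response1Clazz> Response1
{
    get { lock (locker) return new List<Response1Clazz>(response1); }
}
void StartResponse1Polling()
{
    lock (locker) response1 = new List<Response1Clazz>();  // reset the backing field, not the property
    // ... create the parser/handler as before
}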

Trying to find a lock-less solution for a C# concurrent queue

I have the following code in C#:
(_StoreQueue is a ConcurrentQueue)
var S = _StoreQueue.FirstOrDefault(_ => _.TimeStamp == T);
if (S == null)
{
lock (_QueueLock)
{
// try again
S = _StoreQueue.FirstOrDefault(_ => _.TimeStamp == T);
if (S == null)
{
S = new Store(T);
_StoreQueue.Enqueue(S);
}
}
}
The system is collecting data in real time (fairly high frequency, around 300-400 calls / second) and puts it in bins (Store objects) that represent a 5 second interval. These bins are in a queue as they get written and the queue gets emptied as data is processed and written.
So, when data is arriving, a check is done to see if there is a bin for that timestamp (rounded by 5 seconds), if not, one is created.
Since this is quite heavily multi-threaded, the system goes with the following logic:
If there is a bin, it is used to put data.
If there is no bin, a lock gets initiated and within that lock, the check is done again to make sure it wasn't created by another thread in the meantime. and if there is still no bin, one gets created.
With this system, the lock is used roughly once every 2k calls.
I am trying to see if there is a way to remove the lock, mostly because I'm thinking there has to be a better solution than the double check.
An alternative I have been thinking about is to create empty bins ahead of time, which would entirely remove the need for any locks, but the search for the right bin would become slower as it would have to scan the list of pre-built bins to find the proper one.
Using a ConcurrentDictionary can fix the issue you are having. Here I assumed a type of double for your TimeStamp property, but it can be anything, as long as you make the ConcurrentDictionary key match the type.
class Program
{
static ConcurrentDictionary<double, Store> _StoreQueue = new ConcurrentDictionary<double, Store>();
static void Main(string[] args)
{
var T = 17d;
// add the Store with timestamp 17 if it does not already exist
_StoreQueue.GetOrAdd(T, new Store(T));
}
public class Store
{
public double TimeStamp { get; set; }
public Store(double timeStamp)
{
TimeStamp = timeStamp;
}
}
}
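One small note on GetOrAdd: the overload used above constructs the Store even when the key already exists. If the constructor ever becomes expensive, the factory overload only builds the value when the key is missing (the factory can still run more than once under contention, but only one result is kept):
// Builds a new Store only if no bin exists for this timestamp yet.
var bin = _StoreQueue.GetOrAdd(T, ts => new Store(ts));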

How can I specify which value an enum can have based on the current value?

I am looking for a way to specify a - let's call it - Decision Tree or a Flow.
I have a Start value 1 or REQUESTED and this enum can have multiple following values like 2 or IN_PROGRESS or 3 or DECLINED.
And now it should only be possible to go from the value 2 to a higher value like 4 or FINISHED.
What is the most practical way to define the possible paths a process or flow can have?
What's practical is often what's easiest to read and understand. To that end I recommend being explicit about which states can lead to which other states. The enum is just a list of possible values. Using the int values of the enum might seem more concise, but it's harder to read and can lead to other problems.
First, here's an enum and a simple class that changes from one state to another if that change is allowed. (I didn't cover every state.)
enum RequestState
{
Requested,
InProgress,
Declined,
Finished
}
public class Request
{
private RequestState _state = RequestState.Requested;
public void BeginWork()
{
if (_state == RequestState.Declined || _state == RequestState.Finished)
throw new InvalidOperationException("You can only begin work on a new request.");
_state = RequestState.InProgress;
}
public void Decline()
{
if (_state == RequestState.Finished)
throw new InvalidOperationException("Too late - it's finished!");
_state = RequestState.Declined;
}
// etc.
}
If we base it on the numeric value of _state and determine that the number can only go up, a few things can go wrong:
Someone can rearrange the enums or add a new one, not knowing that the numeric value or position has logical significance. That's an easy mistake because that value usually isn't significant.
You might need to implement logic that isn't quite so simple. You might need a state that can be preceded by some of the values before it but not all of them.
You might realize that there's a valid reason for going backwards. What if a request is declined, and in the future you determine that you want to reopen requests, effectively sending them back to Requested?
If the way this is implemented starts out a little bit weird, those changes could make it even harder to change and follow. But if you just describe clearly what changes are possible given any state then it will be easy to read and modify.
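If you would rather have the allowed transitions collected in one place instead of spread across methods, a table-driven variant reads much the same way (the AllowedTransitions name and the exact transition set below are just a sketch):
// requires using System.Collections.Generic; and using System.Linq;
private static readonly Dictionary<RequestState, RequestState[]> AllowedTransitions =
    new Dictionary<RequestState, RequestState[]>
    {
        { RequestState.Requested,  new[] { RequestState.InProgress, RequestState.Declined } },
        { RequestState.InProgress, new[] { RequestState.Declined, RequestState.Finished } },
        { RequestState.Declined,   new RequestState[0] },
        { RequestState.Finished,   new RequestState[0] }
    };
private void ChangeState(RequestState next)
{
    if (!AllowedTransitions[_state].Contains(next))
        throw new InvalidOperationException(string.Format("Cannot go from {0} to {1}.", _state, next));
    _state = next;
}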
You could do something to leverage the fact that enums are basically just integers:
private static Status NextState(Status status)
{
var intOfStatus = ((int)status) + 1;
return (Status)intOfStatus;
}
And some sample logic based on this approach:
public enum Status
{
NotStarted = 0,
Started = 1,
InProgress = 2,
Declined = 3
}
public static void Main()
{
var curStatus = Status.NotStarted;
Console.WriteLine(curStatus.ToString()); //writes 'NotStarted'
if ((int)curStatus++ == (int)Status.Started)
{
curStatus = Status.Started;
}
Console.WriteLine(NextState(curStatus)); //writes 'InProgress'
}

Chance of hitting the same function at the same time by two Threads/Tasks

Assuming the following case:
public Hashtable map = new Hashtable();
public void Cache(String fileName) {
if (!map.ContainsKey(fileName))
{
map.Add(fileName, new Object());
_Cache(fileName);
}
}
private void _Cache(String fileName) {
lock (map[fileName])
{
if (File Already Cached)
return;
else {
cache file
}
}
}
When having the following consumers:
Task.Run(()=> {
Cache("A");
});
Task.Run(()=> {
Cache("A");
});
Would it be possible in any way for the Cache method to throw a duplicate key exception, meaning that both tasks hit the map.Add method and try to add the same key?
Edit:
Would using the following data structure solve this concurrency problem?
public class HashMap<Key, Value>
{
private HashSet<Key> Keys = new HashSet<Key>();
private List<Value> Values = new List<Value>();
public int Count => Keys.Count;
public Boolean Add(Key key, Value value) {
int oldCount = Keys.Count;
Keys.Add(key);
if (oldCount != Keys.Count) {
Values.Add(value);
return true;
}
return false;
}
}
Yes, of course it would be possible. Consider the following fragment:
if (!map.ContainsKey(fileName))
{
map.Add(fileName, new Object());
Thread 1 may execute if (!map.ContainsKey(fileName)) and find that the map does not contain the key, so it will proceed to add it, but before it gets the chance to add it, Thread 2 may also execute if (!map.ContainsKey(fileName)), at which point it will also find that the map does not contain the key, so it will also proceed to add it. Of course, that will fail.
EDIT (after clarifications)
So, the problem seems to be how to keep the main map locked for as little time as possible, and how to prevent cached objects from being initialized twice.
This is a complex problem, so I cannot give you a ready-to-run answer that will work, (especially since I do not currently even have a C# development environment handy,) but generally speaking, I think that you should proceed as follows:
Fully guard your map with lock().
Keep your map locked as little as possible; when an object is not found to be in the map, add an empty object to the map and exit the lock immediately. This will ensure that this map will not become a point of contention for all requests coming in to the web server.
After the check-if-present-and-add-if-not fragment, you are holding an object which is guaranteed to be in the map. However, this object may or may not be initialized at this point. That's fine. We will take care of that next.
Repeat the lock-and-check idiom, this time with the cached object: every single incoming request interested in that specific object will need to lock it, check whether it is initialized, and if not, initialize it. Of course, only the first request will suffer the penalty of initialization. Also, any requests that arrive before the object has been fully initialized will have to wait on their lock until the object is initialized. But that's all very fine, that's exactly what you want.
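Pulling those points together, a rough, untested sketch of that shape (CachedFile, IsLoaded and Load are names I made up for illustration):
private readonly Dictionary<string, CachedFile> map = new Dictionary<string, CachedFile>();
private readonly object mapLock = new object();
public void Cache(string fileName)
{
    CachedFile entry;
    // Steps 1-2: hold the map lock only long enough to find-or-add the placeholder.
    lock (mapLock)
    {
        if (!map.TryGetValue(fileName, out entry))
        {
            entry = new CachedFile();
            map.Add(fileName, entry);
        }
    }
    // Steps 3-4: lock the individual entry; only the first caller pays the initialization cost,
    // and later callers wait here until the entry has been initialized.
    lock (entry)
    {
        if (!entry.IsLoaded)
            entry.Load(fileName);
    }
}
private sealed class CachedFile
{
    public bool IsLoaded { get; private set; }
    public void Load(string fileName) { /* read and cache the file */ IsLoaded = true; }
}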

Performance Counter - Instances; Create/Update without error but not visible in PerfMon

When invoking UpdatePerformanceCounters: in this updater, all the counter names for the category and instance counters are the same - they are always derived from an Enum. The updater is passed a "profile", typically with content such as:
{saTrilogy.Core.Instrumentation.PerformanceCounterProfile}
_disposed: false
CategoryDescription: "Timed function for a Data Access process"
CategoryName: "saTrilogy<Core> DataAccess Span"
Duration: 405414
EndTicks: 212442328815
InstanceName: "saTrilogy.Core.DataAccess.ParameterCatalogue..ctor::[dbo].[sp_KernelProcedures]"
LogFormattedEntry: "{\"CategoryName\":\"saTrilogy<Core> DataAccess ...
StartTicks: 212441923401
Note the "complexity" of the Instance name.
The toUpdate.AddRange() of the VerifyCounterExistence method always succeeds and produces the "expected" output so the UpdatePerformanceCounters method continues through to the "successful" incrementing of the counters.
Despite the "catch" this never "fails" - except, when viewing the Category in PerfMon, it shows no instances or, therefore, any "successful" update of an instance counter.
I suspect my problem may be that my instance name is being rejected, without exception, because of its "complexity" - when I run this through a console tester via PerfView it does not show any exception stack and the ETW events associated with counter updates are successfully recorded in an out-of-process sink. Also, there are no entries in the Windows Logs.
This is all being run "locally" via VS2012 on a Windows 2008R2 server with .NET 4.5.
Does anyone have any ideas of how else I may try this - or even test if the "update" is being accepted by PerfMon?
public sealed class Performance {
private enum ProcessCounterNames {
[Description("Total Process Invocation Count")]
TotalProcessInvocationCount,
[Description("Average Process Invocation Rate per second")]
AverageProcessInvocationRate,
[Description("Average Duration per Process Invocation")]
AverageProcessInvocationDuration,
[Description("Average Time per Process Invocation - Base")]
AverageProcessTimeBase
}
private readonly static CounterCreationDataCollection ProcessCounterCollection = new CounterCreationDataCollection{
new CounterCreationData(
Enum<ProcessCounterNames>.GetName(ProcessCounterNames.TotalProcessInvocationCount),
Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.TotalProcessInvocationCount),
PerformanceCounterType.NumberOfItems32),
new CounterCreationData(
Enum<ProcessCounterNames>.GetName(ProcessCounterNames.AverageProcessInvocationRate),
Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.AverageProcessInvocationRate),
PerformanceCounterType.RateOfCountsPerSecond32),
new CounterCreationData(
Enum<ProcessCounterNames>.GetName(ProcessCounterNames.AverageProcessInvocationDuration),
Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.AverageProcessInvocationDuration),
PerformanceCounterType.AverageTimer32),
new CounterCreationData(
Enum<ProcessCounterNames>.GetName(ProcessCounterNames.AverageProcessTimeBase),
Enum<ProcessCounterNames>.GetDescription(ProcessCounterNames.AverageProcessTimeBase),
PerformanceCounterType.AverageBase),
};
private static bool VerifyCounterExistence(PerformanceCounterProfile profile, out List<PerformanceCounter> toUpdate) {
toUpdate = new List<PerformanceCounter>();
bool willUpdate = true;
try {
if (!PerformanceCounterCategory.Exists(profile.CategoryName)) {
PerformanceCounterCategory.Create(profile.CategoryName, profile.CategoryDescription, PerformanceCounterCategoryType.MultiInstance, ProcessCounterCollection);
}
toUpdate.AddRange(Enum<ProcessCounterNames>.GetNames().Select(counterName => new PerformanceCounter(profile.CategoryName, counterName, profile.InstanceName, false) { MachineName = "." }));
}
catch (Exception error) {
Kernel.Log.Trace(Reflector.ResolveCaller<Performance>(), EventSourceMethods.Kernel_Error, new PacketUpdater {
Message = StandardMessage.PerformanceCounterError,
Data = new Dictionary<string, object> { { "Instance", profile.LogFormattedEntry } },
Error = error
});
willUpdate = false;
}
return willUpdate;
}
public static void UpdatePerformanceCounters(PerformanceCounterProfile profile) {
List<PerformanceCounter> toUpdate;
if (profile.Duration <= 0 || !VerifyCounterExistence(profile, out toUpdate)) {
return;
}
foreach (PerformanceCounter counter in toUpdate) {
if (Equals(PerformanceCounterType.RateOfCountsPerSecond32, counter.CounterType)) {
counter.IncrementBy(profile.Duration);
}
else {
counter.Increment();
}
}
}
}
From MSDN .Net 4.5 PerformanceCounter.InstanceName Property (http://msdn.microsoft.com/en-us/library/system.diagnostics.performancecounter.instancename.aspx)...
Note: Instance names must be shorter than 128 characters in length.
Note: Do not use the characters "(", ")", "#", "\", or "/" in the instance name. If any of these characters are used, the Performance Console (see Runtime Profiling) may not correctly display the instance values.
The instance name of 79 characters that I use above satisfies these conditions so, unless ".", ":", "[" and "]" are also "reserved" the name would not appear to be the issue. I also tried a 64 character sub-string of the instance name - just in case, as well as a plain "test" string all to no avail.
Changes...
Apart from the Enum and the ProcessCounterCollection I have replaced the class body with the following:
private static readonly Dictionary<string, List<PerformanceCounter>> definedInstanceCounters = new Dictionary<string, List<PerformanceCounter>>();
private static void UpdateDefinedInstanceCounterDictionary(string dictionaryKey, string categoryName, string instanceName = null) {
definedInstanceCounters.Add(
dictionaryKey,
!PerformanceCounterCategory.InstanceExists(instanceName ?? "Total", categoryName)
? Enum<ProcessCounterNames>.GetNames().Select(counterName => new PerformanceCounter(categoryName, counterName, instanceName ?? "Total", false) { RawValue = 0, MachineName = "." }).ToList()
: PerformanceCounterCategory.GetCategories().First(category => category.CategoryName == categoryName).GetCounters().Where(counter => counter.InstanceName == (instanceName ?? "Total")).ToList());
}
public static void InitialisationCategoryVerify(IReadOnlyCollection<PerformanceCounterProfile> etwProfiles){
foreach (PerformanceCounterProfile profile in etwProfiles){
if (!PerformanceCounterCategory.Exists(profile.CategoryName)){
PerformanceCounterCategory.Create(profile.CategoryName, profile.CategoryDescription, PerformanceCounterCategoryType.MultiInstance, ProcessCounterCollection);
}
UpdateDefinedInstanceCounterDictionary(profile.DictionaryKey, profile.CategoryName);
}
}
public static void UpdatePerformanceCounters(PerformanceCounterProfile profile) {
if (!definedInstanceCounters.ContainsKey(profile.DictionaryKey)) {
UpdateDefinedInstanceCounterDictionary(profile.DictionaryKey, profile.CategoryName, profile.InstanceName);
}
definedInstanceCounters[profile.DictionaryKey].ForEach(c => c.IncrementBy(c.CounterType == PerformanceCounterType.AverageTimer32 ? profile.Duration : 1));
definedInstanceCounters[profile.TotalInstanceKey].ForEach(c => c.IncrementBy(c.CounterType == PerformanceCounterType.AverageTimer32 ? profile.Duration : 1));
}
}
In the PerformanceCounter Profile I've added:
internal string DictionaryKey {
get {
return String.Concat(CategoryName, " - ", InstanceName ?? "Total");
}
}
internal string TotalInstanceKey {
get {
return String.Concat(CategoryName, " - Total");
}
}
The ETW EventSource now does the initialisation for the "pre-defined" performance categories whilst also creating an instance called "Total".
PerformanceCategoryProfile = Enum<EventSourceMethods>.GetValues().ToDictionary(esm => esm, esm => new PerformanceCounterProfile(String.Concat("saTrilogy<Core> ", Enum<EventSourceMethods>.GetName(esm).Replace("_", " ")), Enum<EventSourceMethods>.GetDescription(esm)));
Performance.InitialisationCategoryVerify(PerformanceCategoryProfile.Values.Where(v => !v.CategoryName.EndsWith("Trace")).ToArray());
This creates all of the categories, as expected, but in PerfMon I still cannot see any instances - even the "Total" instance and the update always, apparently, runs without error.
I don't know what else I can change - I'm probably "too close" to the problem and would appreciate comments/corrections.
These are my conclusions and the "answer", insofar as it explains, to the best of my ability, what I believe is happening, posted by myself - given my recent helpful use of Stack Overflow this, I hope, will be of use to others...
Firstly, there is essentially nothing wrong with the code displayed, excepting one proviso mentioned later. If you put a Console.ReadKey() before program termination, after having done a PerformanceCounterCategory(categoryKey).ReadCategory(), it is quite clear that not only are the registry entries correct (for this is where ReadCategory sources its results) but also that the instance counters have all been incremented by the appropriate values. If one looks at PerfMon before the program terminates, the instance counters are there and they do contain the appropriate Raw Values.
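For reference, that check amounts to something like the following, run just before the process exits (categoryKey being whichever category name was created):
// Dump the raw values PerfMon would see for every instance of the category,
// then hold the process open so the transient instances stay alive.
var data = new PerformanceCounterCategory(categoryKey).ReadCategory();
foreach (System.Diagnostics.InstanceDataCollection counterData in data.Values)
    foreach (System.Diagnostics.InstanceData instance in counterData.Values)
        Console.WriteLine("{0} [{1}] = {2}", counterData.CounterName, instance.InstanceName, instance.Sample.RawValue);
Console.ReadKey();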
This is the crux of my "problem" - or, rather, my incomplete understanding of the architecture: INSTANCE COUNTERS ARE TRANSIENT - INSTANCES ARE NOT PERSISTED BEYOND THE TERMINATION OF A PROGRAM/PROCESS. This, once it dawned on me, is "obvious" - for example, try using PerfMon to look at an instance counter of one of your IIS AppPools - then stop the AppPool and you will see, in PerfMon, that the Instance for the stopped AppPool is no longer visible.
Given this axiom about instance counters, the code above has another completely irrelevant section: when trying the method UpdateDefinedInstanceCounterDictionary, assigning the list from an existing counter set is pointless. Firstly, the "else" code shown will fail, since we are attempting to return a collection of (instance) counters for which this approach will not work, and, secondly, the GetCategories() followed by GetCounters() and/or GetInstanceNames() is an extraordinarily expensive and time-consuming process - even if it were to work. The appropriate method to use is the one mentioned earlier - PerformanceCounterCategory(categoryKey).ReadCategory(). However, this returns an InstanceDataCollectionCollection which is effectively read-only, so as a provider (as opposed to a consumer) of counters it is pointless. In fact, it doesn't matter if you just use the Enum-generated new PerformanceCounter list - it works regardless of whether the counters already exist or not.
Anyway, the InstanceDataCollectionCollection (this is essentially what is demonstrated by the Win32 SDK for .NET 3.5 "Usermode Counter Sample") uses a "Sample" counter which is populated and returned - as per the usage of the System.Diagnostics.PerformanceData namespace, which looks like part of the Version 2.0 usage - a usage that is "incompatible" with the System.Diagnostics.PerformanceCounterCategory usage shown.
Admittedly, the fact of non-persistence may seem obvious and may well be stated in documentation but, if I were to read all the documentation about everything I need to use beforehand, I'd probably end up not actually writing any code! Furthermore, even if such pertinent documentation were easy to find (as opposed to experiences posted on, for example, Stack Overflow), I'm not sure I trust all of it. For example, I noted above that the instance name in the MSDN documentation has a 128 character limit - wrong; it is actually 127, since the underlying string must be null-terminated. Also, for example, for ETW, I wish it were made more obvious that keyword values must be powers of 2 and that opcodes with a value of less than 12 are used by the system - at least PerfView was able to show me this.
Ultimately this question has no "answer" other than a better understanding of instance counters - especially their persistence. Since my code is intended for use in a Windows Service based Web API, persistence is not an issue (especially with daily use of LogMan etc.) - the confusing thing is that the damn things didn't appear until I paused the code and checked PerfMon, and I could have saved myself a lot of time and hassle if I had known this beforehand. In any event, my ETW event source logs all elapsed execution times and instances of what the performance counters "monitor" anyway.
