I am trying to monitor interface bandwidth on a remote Windows machine. So far I have used SNMP with the Cisco bandwidth formula, but that requires retrieving two samples at two different times, and the value I record over SNMP also seems quite wrong. Since I have WMI support I'd like to use it instead, but the only value I've found that seems close to what I'm looking for is BytesTotalPersec of Win32_PerfRawData_Tcpip_NetworkInterface, and that value looks more like a running total counter (just like the SNMP one). Is there a way to retrieve the instantaneous bandwidth through WMI? To clarify, the CurrentBandwidth field always returns 1000000000 (which is the maximum bandwidth), so as you can imagine it is not helpful.
Performance counter data is exposed in two places: the Win32_PerfRawData* classes and the Win32_PerfFormattedData* classes. The former contain raw counter data; the latter contain derived statistics, which is what you're after.
What you typically see in perfmon, for example, is the Win32_PerfFormattedData* data.
Try this:
' Connect to WMI and set up a refresher over the formatted (derived) network counters
Set objWMI = GetObject("winmgmts://./root\cimv2")
Set objRefresher = CreateObject("WbemScripting.SWbemRefresher")
Set objInterfaces = objRefresher.AddEnum(objWMI, _
    "Win32_PerfFormattedData_Tcpip_NetworkInterface").ObjectSet

' Poll once per second; each Refresh recomputes the per-second values
While (True)
    objRefresher.Refresh
    For Each RefreshItem In objRefresher
        For Each objInstance In RefreshItem.ObjectSet
            WScript.Echo objInstance.Name & ";" _
                & objInstance.BytesReceivedPersec & ";" _
                & objInstance.BytesSentPersec
        Next
    Next
    WScript.Echo
    WScript.Sleep 1000
Wend
From experience, taking a measurement for a given second is pretty useless unless you're collecting the metric every second.
If you wanted, say, the per-minute bandwidth, you could derive it yourself from the raw data by taking two samples a known interval apart and dividing the counter delta by the elapsed time (you have to do this on Windows 2000 anyway); a rough sketch is below.
See the Windows 2000 section here if that makes more sense:
Derived stats on Windows 2000
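For illustration, a rough C# sketch of that two-sample approach via System.Management (my own example, not from the original answer; on newer systems the formatted class already does this for you, and for full accuracy you would scale by the counter's Timestamp_PerfTime/Frequency_PerfTime fields rather than wall-clock time):

using System;
using System.Linq;
using System.Management;   // add a reference to System.Management.dll
using System.Threading;

class RawRateSample
{
    // One snapshot of the raw BytesTotalPersec counter (really a running byte count) per interface.
    static ulong[] Sample()
    {
        return new ManagementObjectSearcher(
                "SELECT BytesTotalPersec FROM Win32_PerfRawData_Tcpip_NetworkInterface")
            .Get().Cast<ManagementObject>()
            .Select(mo => Convert.ToUInt64(mo["BytesTotalPersec"]))
            .ToArray();
    }

    static void Main()
    {
        var first = Sample();
        Thread.Sleep(60000);            // 60 s window for a per-minute figure
        var second = Sample();          // assumes interface ordering is stable between queries

        for (int i = 0; i < first.Length && i < second.Length; i++)
        {
            double bytesPerSec = (second[i] - first[i]) / 60.0;
            Console.WriteLine("Interface " + i + ": " + bytesPerSec.ToString("F0") + " bytes/sec average");
        }
    }
}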
There's an excellent article here, Make your own Formatted Performance Data Provider, if you want to delve into collecting more statistical information over a longer sampling interval.
John
I recently started working on some testing for Azure Table Storage, and one thing I noticed is that they are apparently doing some weird timestamping for their ETags.
I would like to test something related to the exact byte limits of data upload, but this is hardly reliable, since the entity can sporadically end up too big or too small.
The problem is:
ETags are apparently timestamped, but they are not timestamped consistently.
Depending on whether you hit a certain precision in the millisecond range, you might end up with a tag like this:
W/\"datetime'2018-11-27T07%3A58%3A29.5163963Z'\"
or like this
W/\"datetime'2018-11-27T07%3A58%3A29.65348Z'\"
Considering that the calculation formula for the ETag property's size is:
BaseSize (8) + PropertyNameLength * 2 ("ETag" -> 8) + 4 + ETag.Length * 2
This can cause differences in the size of the whole entity.
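To make that concrete (my own arithmetic, not from the original post): the two tags above differ by two characters, seven fractional digits versus five, so with the ETag.Length * 2 term the ETag property alone differs by 4 bytes between otherwise identical entities, which is exactly the kind of sporadic size drift described.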
Setting a constant ETag myself before performing a TableOperation is also not possible, as the storage endpoint seems to overwrite it anyways.
Is there any way to force the precision on these values to be constant?
My goal is to get data (pulse) from the Torntisc T1 fitness bracelet in my own application and process that data independently.
To implement this I use Xamarin and found the Bluetooth LE plugin for Xamarin to connect to the device and receive data from it. However, every characteristic I obtain is called "Unknown characteristic" and has a value of 0 bytes, even though the device exposes 5 services, each with 3 characteristics. Only one service has differently named characteristics: "Device Name", "Appearance" and "Peripheral Preferred Connection Parameters", but the value is still 0 bytes everywhere. How do I read the characteristics? How do I get the pulse?
The bracelet ships with an application, H Band 2.0, which exposes a fairly large number of settings for the bracelet, so the question arises: where does all of that come from?
I attempted to decompile the native H Band 2.0 app. I found the classes responsible for the connection under sources\no\nordicsemi\android\dfu and can see that it is done via BluetoothGatt. Unfortunately I am not an expert in Java or Android and am unfamiliar with this library. I didn't find any methods or anything related to "pulse", just a large number of magic characteristic-parsing calls: parse(characteristic).
foreach (var TestService in Services)
{
    var characteristics = await TestService.GetCharacteristicsAsync();
    foreach (var Characteristic in characteristics)
    {
        var properties = Characteristic.Properties;
        var name = Characteristic.Name;      // "Unknown characteristic" for vendor-specific UUIDs
        var serv = Characteristic.Service;

        // Value is only populated after a read or a notification; right after discovery it is empty
        var value = Characteristic.Value;
        string result = "";
        if (value != null && value.Length != 0)
            result = System.Text.Encoding.UTF8.GetString(value, 0, value.Length);
    }
}
To start with, you can use the following app to get a better overview of the services and characteristics you are working with, without having to write code just to read the values you need.
Having said that, you will need documentation to be able to communicate with the device: what data you send, what the acceptable responses are, how they map to meaningful data, and so on. The core of BLE is the "low energy" part, which means exchanging as little data as possible, e.g. mapping integers to enum values that you cannot know without the documentation. You can work your way back from the decompiled source, but it will be orders of magnitude more difficult.
One more thing: BLE is notoriously unreliable (you will understand if you run into GATT 133 errors on Samsungs :) ), so most implementations also add a sort of network layer on top to handle drops and graceful degradation, as well as to send larger pieces of data. This is custom developed per app/device, and you also need extensive documentation for it, which is no trivial matter.
The library you've chosen is quite good and wraps most of what you need quite well, but it does not handle the instability, so you have to take care of that part yourself.
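For what it's worth, here is a minimal sketch of subscribing to notifications with that plugin (my own example, not from the original answer). It assumes an already-connected IDevice called device and the standard Heart Rate service 0x180D / Heart Rate Measurement characteristic 0x2A37; many bands use proprietary UUIDs instead, which you would have to get from documentation or the decompiled app:

// Standard Bluetooth SIG UUIDs; a proprietary band may use different ones.
var heartRateService = Guid.Parse("0000180d-0000-1000-8000-00805f9b34fb");
var heartRateMeasurement = Guid.Parse("00002a37-0000-1000-8000-00805f9b34fb");

var service = await device.GetServiceAsync(heartRateService);
var characteristic = await service.GetCharacteristicAsync(heartRateMeasurement);

// The value only arrives via notifications; reading it right after discovery gives 0 bytes.
characteristic.ValueUpdated += (s, e) =>
{
    var data = e.Characteristic.Value;
    // Heart Rate Measurement layout: flags byte first, then the pulse as UInt8 or UInt16.
    int pulse = (data[0] & 0x01) == 0 ? data[1] : BitConverter.ToUInt16(data, 1);
    Console.WriteLine("Pulse: " + pulse + " bpm");
};
await characteristic.StartUpdatesAsync();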
Cheers :)
I have a giant data set in a C# Windows service that uses about 12 GB of RAM.
Dictionary<DateTime,List<List<Item>>>
There is a constant stream of new data being added, about 1GB per hour. Old data is occasionally removed. This is a high speed buffer for web pages.
I have a parameter in the config file called "MaxSizeMB". I would like to allow the user to enter, say "11000", and my app will delete some old data every time the app exceeds 11GB of ram usage.
This has proved to be frustratingly difficult.
You would think that you could just call GC.GetTotalMemory(false). This would give you the memory usage of .NET managed objects (let's pretend it says 10.8 GB). Then you just add a constant 200 MB as a safety net for all the other stuff allocated in the app.
This doesn't work. In fact, the more data that is loaded, the bigger the difference between GC.GetTotalMemory and Task Manager. I even tried to work out a constant multiplier instead of a constant additive value, but I cannot get consistent results. The best I have done so far is to count the total number of items in the data structure, multiply by 96, and pretend that number is the RAM usage. This is also confusing because the Item object is a 32-byte struct. This pretend RAM usage is also too unstable: sometimes the app will delete old data at 11 GB, but sometimes it will delete data at 8 GB of actual usage, because my pretend number falsely calculates 11 GB.
So I can either use this conservative fake RAM calculation and often not use all the RAM I am allowed to use (losing maybe 2 GB), or I can use GC.GetTotalMemory and the customer will freak out that the app occasionally goes over the RAM setting.
Is there any way I can use as much ram as possible without going over a limit, as it appears in task manager? I don't care if the math is a multiplier, constant add value, power, whatever. I want to stuff data into a data structure and delete data when I hit the max setting.
Note: I already use some memory-shrinking techniques, such as using a struct for Item, list.Capacity = list.Count, and GC.Collect(GC.MaxGeneration). Those seem like a separate issue though.
Use System.Diagnostics.PerformanceCounter to monitor your current process's memory usage and the machine's available memory; based on that, your application can decide whether or not to delete something.
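A minimal sketch of that idea (my own illustration; the counter category and names are the standard ones, but they are worth verifying on the target machine):

using System;
using System.Diagnostics;

class MemoryWatch
{
    static void Main()
    {
        string name = Process.GetCurrentProcess().ProcessName;

        // Private bytes of this process and free physical memory on the machine.
        var privateBytes = new PerformanceCounter("Process", "Private Bytes", name);
        var availableMb  = new PerformanceCounter("Memory", "Available MBytes");

        double usedMb = privateBytes.NextValue() / (1024.0 * 1024.0);
        double freeMb = availableMb.NextValue();

        Console.WriteLine("process private: " + usedMb.ToString("F0") + " MB, machine available: " + freeMb.ToString("F0") + " MB");

        // e.g. if usedMb exceeds the MaxSizeMB value from the config file, start evicting the oldest entries.
    }
}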
Several problems here: garbage collection, getting a good measure of memory, and what the maximum actually is.
You assume there is a hard maximum, but a single object needs contiguous memory, so in practice it is a soft maximum.
As for an accurate size measure, you could record the size of each list as you add it and keep a running total; then, when you purge, read that size back and subtract it from the running total (sketched below).
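A rough sketch of that bookkeeping (my own illustration; Item stands in for the question's 32-byte struct, and ItemSize is an assumed per-item estimate, not a measured value):

using System;
using System.Collections.Generic;

struct Item { public long A, B, C, D; }        // stand-in for the question's 32-byte struct

class HighSpeedBuffer
{
    const long ItemSize = 96;                  // assumed bytes per item including list/reference overhead
    readonly Dictionary<DateTime, List<List<Item>>> data =
        new Dictionary<DateTime, List<List<Item>>>();
    long estimatedBytes;                       // running total, maintained on every add and purge

    public long EstimatedBytes { get { return estimatedBytes; } }

    public void Add(DateTime key, List<Item> block)
    {
        List<List<Item>> lists;
        if (!data.TryGetValue(key, out lists))
            data[key] = lists = new List<List<Item>>();
        lists.Add(block);
        estimatedBytes += (long)block.Count * ItemSize;
    }

    public void Purge(DateTime key)
    {
        List<List<Item>> lists;
        if (!data.TryGetValue(key, out lists)) return;
        foreach (var block in lists)
            estimatedBytes -= (long)block.Count * ItemSize;   // subtract exactly what was recorded on add
        data.Remove(key);
    }
}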
Why fight .NET memory limitations and physical memory limitations at all? I would just go with a database on an SSD. If the data is read-only and you have known classes, then you could use something like RavenDB.
Reconsider your design. OK, so I am not getting very far with that: you are trying to manage a .NET memory limitation that you are never going to tame. Still, reconsider your design. If your key is a DateTime and you assume you only need 24 hours of data, use one dictionary per hour, so each hour is just one object. When an hour ages out of the window, drop that dictionary and let the GC collect the whole thing (sketched below).
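A rough sketch of that per-hour layout (my own illustration of the suggestion, not code from the answer; Item is the question's struct):

using System;
using System.Collections.Generic;

class HourlyBuffer
{
    // One inner dictionary per hour; evicting an hour drops a single object graph.
    readonly Dictionary<DateTime, Dictionary<DateTime, List<List<Item>>>> hours =
        new Dictionary<DateTime, Dictionary<DateTime, List<List<Item>>>>();

    static DateTime HourOf(DateTime t)
    {
        return new DateTime(t.Year, t.Month, t.Day, t.Hour, 0, 0, t.Kind);
    }

    public void Add(DateTime key, List<List<Item>> value)
    {
        Dictionary<DateTime, List<List<Item>>> bucket;
        if (!hours.TryGetValue(HourOf(key), out bucket))
            hours[HourOf(key)] = bucket = new Dictionary<DateTime, List<List<Item>>>();
        bucket[key] = value;
    }

    public void EvictOlderThan(DateTime cutoff)
    {
        var stale = new List<DateTime>();
        foreach (var hour in hours.Keys)
            if (hour < HourOf(cutoff)) stale.Add(hour);
        foreach (var hour in stale)
            hours.Remove(hour);                // the GC reclaims the whole hour's graph in one go
    }
}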
The answer is super simple.
// Query the current process once and read its memory figures directly.
var proc = System.Diagnostics.Process.GetCurrentProcess();
long n0 = proc.PrivateMemorySize64;   // private bytes
long n1 = proc.WorkingSet64;          // working set
long n2 = proc.VirtualMemorySize64;   // virtual size

float f0 = n0 / (1000f * 1000f);
float f1 = n1 / (1000f * 1000f);
float f2 = n2 / (1000f * 1000f);

Console.WriteLine("private = " + f0 + " MB");
Console.WriteLine("working = " + f1 + " MB");
Console.WriteLine("virtual = " + f2 + " MB");
results:
private = 931.9096 MB
working = 722.0756 MB
virtual = 1767.146 MB
All this moaning and fussing about Task Manager and .NET object sizes, and the answer is built into .NET in one line of code.
I gave the answer to Sarvesh because he got me started down the right path with PerformanceCounter, but GetCurrentProcess() turned out to be a nice shortcut to simply inspect your own process.
I made a program that is, at its heart, a keyboard hook: I press a specific button and it performs a specific action. Since there is a fairly large list of options that I can select from using a ComboBox, I decided to make a Dictionary called ECCMDS (it stands for embedded controller commands). I can then set my ComboBox items to ECCMDS.Keys and select a command by name. It makes for easy saving too: because the key is a string, I just save it to an XML file. The program monitors anywhere from 4 to 8 buttons. The problem comes at runtime. The program uses about 53 MB of memory (of course, I look over at it now and it says 16 MB :/). The tablet this runs on has 3 GB of memory and an Atom processor. Normally I'd scoff at 53 MB, but with a huge switch statement the program used about 2 or 3 MB (it's been some time since I actually looked at its usage, so I can't remember exactly).
So although the Dictionary greatly reduces the complexity of my RunCommand method, I'm wondering about the memory usage. This tablet at idle is using 80% of its memory, so I'd like to have as little impact on that as possible. Is there another solution to this problem? Here is a small example of the dictionary:
ECCMDS = new Dictionary<string, Action>()
{
    {"Decrease Backlight", EC.DescreaseBrightness},
    {"Increase Backlight", EC.IncreaseBrightness},
    {"Toggle WiFi", new Action(delegate{EC.WirelessState = GetToggledState(EC.WirelessState);})},
    {"Enable WiFi", new Action(delegate{EC.WirelessState = ObjectState.Enabled;})},
    {"Disable WiFi", new Action(delegate{EC.WirelessState = ObjectState.Disabled;})},
    {"{PRINTSCRN}", new Action(delegate{VKeys.User32Input.DoPressRawKey(0x2C);})},
};
Is it possible to use reflection or something similar to achieve this?
EDIT
So after the nice suggestion of making a new program and comparing the two methods, I've determined that it is not my Dictionary. I didn't think WPF made that big of a difference compared to WinForms, but it must. The new program hardly has any pictures (unlike before; most of my graphics are generated now), but the results are as follows:
Main Entry Point:32356 kb
Before Huge Dictionary:33724 kb
After Initialization:35732 kb
After 10000 runs:37824 kb
That took 932ms to run
After Huge Dictionary:38444 kb
Before Huge Switch Statement:39060 kb
After Initialization:39696 kb
After 10000 runs:40076 kb
That took 1136ms to run
After Huge Switch Statement:40388 kb
I suggest you extract the Dictionary into a separate program and measure how much space it actually occupies before you worry about how much space it is taking and whether it is your problem.
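A minimal sketch of that kind of comparison (my own illustration; GC.GetTotalMemory gives a rough managed-heap figure, which is good enough for comparing the two approaches):

using System;
using System.Collections.Generic;

class DictionaryFootprint
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);

        // Build a dictionary comparable in size to ECCMDS (placeholder actions).
        var eccmds = new Dictionary<string, Action>();
        for (int i = 0; i < 100; i++)
            eccmds["Command " + i] = () => { };

        long after = GC.GetTotalMemory(true);
        Console.WriteLine("dictionary cost roughly " + (after - before) + " bytes");

        GC.KeepAlive(eccmds);   // keep the dictionary reachable until after the measurement
    }
}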
I'm trying to parse through e-mails in Outlook 2007. I need this to be as fast as possible and seem to be having some trouble.
Basically it's:
foreach (Folder fld in outlookApp.Session.Folders)
{
    foreach (MailItem mailItem in fld.Items)
    {
        string body = mailItem.Body;
    }
}
and for 5000 e-mails, this takes over 100 seconds. It doesn't seem to me like this should be taking anywhere near this long.
If I add:
string entry = mailItem.EntryID;
It ends up being an extra 30 seconds.
I'm doing all sorts of string manipulation, including regular expressions, with these strings, and writing out to a database, and still those two lines take 50% of my runtime.
I'm using Visual Studio 2008
Doing this kind of thing will take a long time, as you have to pull the data from the Exchange store for each item.
I think you have a couple of options here.
Process this information out of band, using CDO/RDO in some other process.
Or
Use MAPI tables, as this is the fastest way to get properties. There are caveats with this, though you may find that what you're doing in your processing can be brought into a table.
Redemption wrapper - http://www.dimastr.com/redemption/mapitable.htm
MAPI Tables http://msdn.microsoft.com/en-us/library/cc842056.aspx
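As an illustration of the table-style approach using only the stock Outlook 2007 object model (my own sketch, not from the original answer; Redemption's MAPITable is the more capable route), Folder.GetTable returns just the columns you ask for in one batch instead of one COM round trip per property:

using Outlook = Microsoft.Office.Interop.Outlook;

// folder is an Outlook.Folder obtained elsewhere, e.g. from outlookApp.Session.Folders.
// (With C# 3.0 / VS2008, pass Type.Missing for GetTable's optional arguments.)
Outlook.Table table = folder.GetTable();
table.Columns.RemoveAll();
table.Columns.Add("EntryID");
table.Columns.Add("Subject");

while (!table.EndOfTable)
{
    Outlook.Row row = table.GetNextRow();
    string entryId = (string)row["EntryID"];
    string subject = (string)row["Subject"];

    // Only open the full MailItem (and its Body) for the rows that actually need it:
    // var item = (Outlook.MailItem)outlookApp.Session.GetItemFromID(entryId);
}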
I do not know if this will address your specific issue, but the latest Office 2007 service pack made a significant performance improvement for Outlook with large numbers of messages.
Are you just reading in those strings in this loop, or are you reading in a string, processing it, and then moving on to the next? You could try reading all the messages into a Hashtable inside your loop and then processing them after they've been loaded -- it might buy you some gains.
Any kind of UI updates are extremely expensive; if you're writing out text or incrementing a progress bar it's best to do so sparingly.
We had exactly the same problem even when the folders were local and there was no network delay.
We got 10x speedup by storing a copy of every email in a local Sql Server CE table tuned for the search we needed. We also used update events to make sure the local database remains in sync with the Outlook/Exchange folders.
To totally eliminate user lag we took the search out of the Outlook thread and put it in its own thread. The perception of lagging was worse than the actual delay it seems.
I encountered a similar situation while trying to access Outlook mails via VBA (in Excel).
However, it was far slower in my case: one e-mail per second! (Maybe it was slower for me than in your case because I had implemented it in VBA.)
Anyway, I managed to improve the speed by using SetColumns (e.g. https://learn.microsoft.com/en-us/office/vba/api/Outlook.Items.SetColumns).
I know, I know, this only works for a few properties, like "Subject" and "ReceivedTime", and not for the body!
But think again: do you really want to read through the body of all your e-mails, or just a subset, maybe based on the 'Subject' line or 'ReceivedTime'?
My requirement was to only go into the body of an e-mail if its subject matched a specific string.
Hence, I did the following:
I added a second 'Outlook.Items' object called 'myFilterItemCopyForBody' and applied the same filter I had on the other 'Outlook.Items'.
So now I have two 'Outlook.Items' collections, 'myFilterItem' and 'myFilterItemCopyForBody', both holding the same e-mail items since the same Restrict conditions are applied to both:
'myFilterItem'- to hold only 'Subject' and 'ReceivedTime' properties of the relevant mails (done by using SetColumns)
'myFilterItemCopyForBody'- to hold all the properties of the mail(including Body)
Now, both 'myFilterItem' and 'myFilterItemCopyForBody' are sorted with 'ReceivedTime' to have them in the same order.
Once sorted, both are looped over simultaneously in a nested For Each loop, picking corresponding properties (with the help of a counter), as in the code below.
Dim myFilterItem As Outlook.Items
Dim myFilterItemCopyForBody As Outlook.Items   ' this declaration was missing
Dim myItems As Outlook.Items

Set myItems = olFldr.Items

' Same date-range Restrict on both collections, so they hold the same items
Set myFilterItemCopyForBody = myItems.Restrict("@SQL=""urn:schemas:httpmail:datereceived"" > '" & startTime & "' AND ""urn:schemas:httpmail:datereceived"" < '" & endTime & "'")
Set myFilterItem = myItems.Restrict("@SQL=""urn:schemas:httpmail:datereceived"" > '" & startTime & "' AND ""urn:schemas:httpmail:datereceived"" < '" & endTime & "'")

' Sort both by ReceivedTime so the two collections line up index for index
myFilterItemCopyForBody.Sort ("ReceivedTime")
myFilterItem.Sort ("ReceivedTime")

' Only Subject and ReceivedTime are fetched for the "light" collection
myFilterItem.SetColumns ("Subject, ReceivedTime")

For Each myItem1 In myFilterItem
    iCount = iCount + 1
    For Each myItem2 In myFilterItemCopyForBody
        jCount = jCount + 1
        If iCount = jCount Then
            'Display myItem2.Body if myItem1.Subject contains a specific string
            'MsgBox myItem2.Body
            jCount = 0
            Exit For
        End If
    Next myItem2
Next myItem1
Note1: Notice that the Body property is accessed using the 'myItem2' corresponding to 'myFilterItemCopyForBody'.
Note2: The fewer times the code has to enter the inner loop and touch the Body property, the better! You can further improve efficiency by tightening the Restrict filter and the logic so that the loop runs as few times as possible.
Hope this helps, even though this is not something new!