I've got the following code that is supposed to measure the current download and upload speed. The issue I'm facing is that it often records usage that my network and/or internet connection can't even handle (above my bandwidth).
public static IStatistics GetNetworkStatistics(string interfaceName)
{
    var networkStats = _interfaces[interfaceName];
    var dataSentCounter = new PerformanceCounter("Network Interface", "Bytes Sent/sec", interfaceName);
    var dataReceivedCounter = new PerformanceCounter("Network Interface", "Bytes Received/sec", interfaceName);

    float sentSum = 0;
    float receiveSum = 0;

    var sw = new Stopwatch();
    sw.Start();
    while (sw.ElapsedMilliseconds < 1000)
    {
        sentSum += dataSentCounter.NextValue();
        receiveSum += dataReceivedCounter.NextValue();
    }
    sw.Stop();

    Console.WriteLine("Download:\t{0} KBytes/s", receiveSum / 1024);
    Console.WriteLine("Upload:\t\t{0} KBytes/s\n", sentSum / 1024);

    networkStats.AddSentData(sentSum);
    networkStats.AddReceivedData(receiveSum);
    return networkStats;
}
Sample output:
As you can see, most of these entries indicate a pretty heavily used network, up to an excessive amount of almost 160MB/s. I realize that you can't measure transfer speed with just one record (this is test data; in the actual application I use the mean of the latest 3), but even so: how could I ever receive 160MB in one second? I believe it's safe to say that I must have made an error somewhere, but I can't find where.
One thought I had was to keep a counter in the loop to show me how many times PerformanceCounter.NextValue() was accessed (generally between 46 and 48), but in the end I believed this shouldn't matter: in the grand picture I'm still getting too large a number for just one second of bandwidth usage. I can't shake the feeling that the performance counter might be the cause, though.
Sidenote: the 160MB/s number was recorded the moment I loaded a new YouTube video, and the other (1000+ KB/s) recordings usually happen when I refresh a tab, so it should be a (relative) reflection of my network usage.
Have I overlooked something in my approach?
Edit:
Upon following @Sam's advice and checking my results against the built-in perfmon.exe, I noticed that my bursts in bandwidth usage generally occur at the same time as those shown in Perfmon, but mine are way larger. I have tried to link the simultaneous bursts and find something in common, but it seems rather random (possibly because Perfmon might combine several results to get its current speed, whereas I'm only using the latest second).
Same goes for the lower numbers: Perfmon usually shows < 10 kbps whereas I'm constantly around 50 kbps.
Edit2:
This is the code I used in reference to @Hans' comment:
var initSent = dataSentCounter.NextValue();
var initReceived = dataReceivedCounter.NextValue();
Thread.Sleep(1000);
sentSum = dataSentCounter.NextValue() - initSent;
receiveSum = dataReceivedCounter.NextValue() - initReceived;
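For what it's worth, rate counters like "Bytes Sent/sec" are documented to return the average rate since the previous NextValue() call (the very first call only primes the baseline sample and returns 0), so the second reading is already the per-second figure and no subtraction is needed. A minimal sketch of sampling that way, reusing the counters from above and assuming a one-second interval:

// The first NextValue() on a rate counter returns 0; it only establishes the baseline sample.
dataSentCounter.NextValue();
dataReceivedCounter.NextValue();

// Wait one sampling interval.
Thread.Sleep(1000);

// The second reading is already the average bytes/sec over the interval - no summing or subtraction needed.
float sentBytesPerSec = dataSentCounter.NextValue();
float receivedBytesPerSec = dataReceivedCounter.NextValue();

Console.WriteLine("Download:\t{0} KBytes/s", receivedBytesPerSec / 1024);
Console.WriteLine("Upload:\t\t{0} KBytes/s", sentBytesPerSec / 1024);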
So, my brother was playing Fortnite and it was lagging quite a bit, so I offered to make an application that limits the CPU usage of other apps. However, I'm having trouble getting the limit to apply to the other application.
Here's the code I've tried:
public void ThrottledLoop(Action action, int cpuPercentageLimit)
{
    Stopwatch stopwatch = new Stopwatch();

    while (true)
    {
        stopwatch.Restart();

        long actionStart = stopwatch.ElapsedTicks;
        action.Invoke();
        long actionEnd = stopwatch.ElapsedTicks;
        long actionDuration = actionEnd - actionStart;

        long relativeWaitTime = (int)(
            (1 / (double)cpuPercentageLimit) * actionDuration);

        Thread.Sleep((int)((relativeWaitTime / (double)Stopwatch.Frequency) * 1000));
    }
}
Please help; if there is any other information you need, just let me know.
Thanks
Summary from comments above.
You can specify which processors ("affinity") a process is allowed to run on. This offers more fine-grained control than setting process priority.
e.g. limit certain processes to, say, the last 4 cores on a system with 16 logical cores, while allowing Fortnite to use whatever it wants. Be aware that some apps might not take too kindly to it.
Anti-virus programs sometimes play nice by keeping themselves at the end of the list of cores in a system.
For an example take a look at the Windows Task Manager Details tab.
See also
How can I set processor affinity to a thread or a Task in .NET?
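As a rough illustration of setting affinity from .NET: the process name below is a placeholder, and the mask assumes the 16-logical-core example above.

using System;
using System.Diagnostics;

class AffinityExample
{
    static void Main()
    {
        // Bits 12-15 set: restrict the process to the last 4 of 16 logical cores
        // (bit 0 corresponds to the first logical processor).
        var mask = (IntPtr)0b1111_0000_0000_0000;

        // "SomeBackgroundApp" is a placeholder process name.
        foreach (var process in Process.GetProcessesByName("SomeBackgroundApp"))
        {
            process.ProcessorAffinity = mask;
        }
    }
}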
To keep track of performance in our software we measure the duration of calls we are interested in.
For example:
using (var performanceTrack = new PerformanceTracker("pt-1"))
{
    // do some stuff
    CallAnotherMethod();

    using (var anotherPerformanceTrack = new PerformanceTracker("pt-1a"))
    {
        // do stuff
        // .. do something
    }

    using (var anotherPerformanceTrackb = new PerformanceTracker("pt-1b"))
    {
        // do stuff
        // .. do something
    }

    // do more stuff
}
This will result in something like:
pt-1 [----------------------------] 28ms
[--] 2ms from another method
pt-1a [-----------] 11ms
pt-1b [-------------] 13ms
In the constructor of PerformanceTracker I start a stopwatch (as far as I know it's the most reliable way to measure a duration). In the Dispose method I stop the stopwatch and save the results to Application Insights.
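A minimal sketch of what the tracker looks like (the Console.WriteLine stands in for the actual Application Insights call):

using System;
using System.Diagnostics;

public sealed class PerformanceTracker : IDisposable
{
    private readonly string _name;
    private readonly Stopwatch _stopwatch;

    public PerformanceTracker(string name)
    {
        _name = name;
        _stopwatch = Stopwatch.StartNew();
    }

    public void Dispose()
    {
        _stopwatch.Stop();
        // Placeholder: the real implementation sends the duration to Application Insights.
        Console.WriteLine("{0}: {1}ms", _name, _stopwatch.ElapsedMilliseconds);
    }
}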
I have noticed a lot of fluctuation between the results. To address this I've already done the following:
Run in a release build, outside of Visual Studio.
Warm-up call first, not included in the statistics.
Before every call (75 calls in total) I invoke the garbage collector (see the sketch below).
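The garbage collector step looks roughly like this (the blocking double-collect is a common pattern, simplified from the actual code):

// Force a full, blocking collection so a pending GC is less likely
// to fire in the middle of the measured call.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();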
After these steps the fluctuation is less, but the results are still not very accurate. For example, I have run my test set twice; see the results of both runs in milliseconds here:
Avg: 782.95 vs 981.68
Min: 489 vs 513
Max: 2600 vs 4875
StdDev: 305.85 vs 652.34
Sample size: 75 vs 75
Why is the performance measurement with the stopwatch still giving a lot of variation in the results? I found on SO (https://stackoverflow.com/a/16157458/1408786) that I should maybe add the following to my code:
//prevent the JIT Compiler from optimizing Fkt calls away
long seed = Environment.TickCount;
//use the second Core/Processor for the test
Process.GetCurrentProcess().ProcessorAffinity = new IntPtr(2);
//prevent "Normal" Processes from interrupting Threads
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
//prevent "Normal" Threads from interrupting this thread
Thread.CurrentThread.Priority = ThreadPriority.Highest;
But the problem is, we have a lot of async code. How can I get reliable performance tracking in the code? My aim is to detect performance degradation when, for example, a method is 10ms slower after a check-in than before...
I have a progress bar on the front-end of my web app which gets its current percentage by listening for messages sent from a SignalR hub in the back-end. It tracks a long running process which has several stages with many iterations.
My initial set-up was to simply send a message every iteration. This caused issues, however, as the rate of iteration (and therefore the message rate) was too fast for the front-end, and the bar became very jumpy and buggy.
So instead I decided to use a Stopwatch object in the following way (where SendProgress() is the procedure that tells the hub to message the client):
int progressCount = 1;
var stopWatch = new System.Diagnostics.Stopwatch();
stopWatch.Start();

for (int i = 0; i < taskCount; i++)
{
    // PROCESS DONE HERE

    if (stopWatch.Elapsed.TotalMilliseconds >= 500.0)
    {
        SendProgress(taskCount, progressCount, 0, 40);
        stopWatch.Restart();
    }
    progressCount++;
}
Thus preventing messages from being sent faster than once every 500ms.
This worked well in terms of limiting the message rate; however, I noticed a drop in performance, which after a little research I gather is due to using Stopwatch, which is apparently inefficient.
What would be a better approach here? I also thought of using Thread.Sleep(), but that would just be adding artificial slowness to the algorithm, which is obviously bad. Is there a way I can accurately control the message rate without slowing things down too badly?
Run this in a console app to see if the logic applies to what you want to do.
Check the comments to track what's happening (and let me know if you need me to break it down further). Happy coding! ;)
static void Main(string[] args)
{
    // set the percentage you want to show progress for (e.g., every 20%).
    const int updatePercentage = 20;

    // this is just the total of the for loop for this example.
    const int loopMax = 1000000;

    // calculate what 20% of the total is, to use as the check point for showing progress.
    decimal loopCheck = loopMax * (Convert.ToDecimal(updatePercentage) / 100);

    for (int i = 1; i <= loopMax; i++)
    {
        // check if the mod of the current position hits the check point.
        if ((i % loopCheck) == 0)
        {
            // show the progress...
            Console.WriteLine($"You've reached the next {updatePercentage}% of the for loop ({i:#,##0})");
        }
    }

    Console.ReadLine();
}
I'm planning to write a NES emulator. But first, to understand how emulation works, I'll write a Chip-8 emulator.
The emulator is nearly finished. I have some bugs in a few games, but those will be fixed soon.
My problem number 1 is synchronizing the emulator with the clock speed of the Chip-8.
On the internet I've often read that the general clock speed should be ~540Hz, and that the chip's timers should tick at a frequency of 60Hz.
To synchronize my emulator with the Chip-8, I've written the following logic:
private void GameTick()
{
    Stopwatch watch = new Stopwatch();
    var instructionCount = 0;
    _gameIsRunning = true;

    while (_gameIsRunning)
    {
        watch.Restart();
        EmulateCycle();

        // Updates the internal timers at a 60Hz frequency:
        // 540Hz (game tick) divided by 9 equals 60Hz (timer tick)
        instructionCount++;
        if (instructionCount == 9)
        {
            UpdateSoundAndDelay();
            instructionCount = 0;
        }

        if (_readyToDraw)
        {
            DrawGraphics();
            _readyToDraw = false;
        }

        SetKeys();

        // Busy-wait to pad each cycle out to 1852µs, for a virtual clock speed of ca. 540Hz
        var elapsedMicroseconds = watch.ElapsedTicks / (Stopwatch.Frequency / (1000L * 1000L));
        while (elapsedMicroseconds < 1852)
        {
            elapsedMicroseconds = watch.ElapsedTicks / (Stopwatch.Frequency / (1000L * 1000L));
        }
    }
}
For more detailed information look at my repo: https://github.com/Marcel-Hoffmann/Chip-8-Emulator
As you can see, for each CPU cycle I wait 1852 microseconds (1,000,000µs / 540 ≈ 1852), which works out to ~540 cycles per second, i.e. 540Hz.
But I'm not very happy with this logic.
Does someone have a better idea of how to synchronize the clock speed?
This is the typical approach, and it has many drawbacks - most notably unnecessary CPU usage and potential scheduling issues (your application will be seen as a 100% CPU beast, so under load other applications might get their thread quanta before you).
A better approach would use a sleep instead - however, by default, the system timer has nowhere near the frequency to accommodate a wait that's less than 2ms. So if you want to use a sleep, you'll need to change the system timer. This is a bit tricky on older Windows (it's a system-wide setting and has noticeable impact on other applications and general CPU usage), but even in that case, it's better than a "busy loop" - as long as you restore the system settings afterwards. On Windows 8 (and to some extent, 7 and Vista), the timer is asynchronous and no longer requires a busy loop, so it's a lot easier to have higher timer resolution.
The system timer APIs are not exposed by .NET, so you'll need to use P/Invokes (timeBeginPeriod and timeEndPeriod for the old-style API). If this isn't available, you can always fall back to your busy loop :)
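A sketch of the old-style API via P/Invoke (winmm.dll), wrapping the sleep so the system timer resolution is always restored:

using System;
using System.Runtime.InteropServices;
using System.Threading;

static class HighResSleep
{
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uMilliseconds);

    public static void Sleep(int milliseconds)
    {
        // Raise the system timer resolution to 1ms for the duration of the sleep.
        timeBeginPeriod(1);
        try
        {
            Thread.Sleep(milliseconds);
        }
        finally
        {
            // Restore the previous resolution - this is a system-wide setting.
            timeEndPeriod(1);
        }
    }
}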
Background: I must call a web service 1500 times, and each call takes roughly 1.3 seconds to complete. (No control over this 3rd-party API.) Total time = 1500 × 1.3 s = 1950 s / 60 ≈ 32 minutes.
I came up with what I thought was a good solution; however, it did not pan out that great.
So I changed the calls to async web calls, thinking this would dramatically help my results. It did not.
Example Code:
Pre-Optimizations:
foreach (var elmKeyDataElementNamed in findResponse.Keys)
{
    var getRequest = new ElementMasterGetRequest
    {
        Key = new elmFullKey
        {
            CmpCode = CodaServiceSettings.CompanyCode,
            Code = elmKeyDataElementNamed.Code,
            Level = filterLevel
        }
    };

    ElementMasterGetResponse getResponse;
    _elementMasterServiceClient.Get(new MasterOptions(), getRequest, out getResponse);
    elementList.Add(new CodaElement { Element = getResponse.Element, SearchCode = filterCode });
}
With Optimizations:
var tasks = findResponse.Keys
    .Select(elmKeyDataElementNamed => new ElementMasterGetRequest
    {
        Key = new elmFullKey
        {
            CmpCode = CodaServiceSettings.CompanyCode,
            Code = elmKeyDataElementNamed.Code,
            Level = filterLevel
        }
    })
    .Select(getRequest => _elementMasterServiceClient.GetAsync(new MasterOptions(), getRequest))
    .ToList();

Task.WaitAll(tasks.ToArray());

elementList.AddRange(tasks.Select(p => new CodaElement
{
    Element = p.Result.GetResponse.Element,
    SearchCode = filterCode
}));
Smaller Sampling Example:
So, to test easily, I did a smaller sampling of 40 records: this took 60 seconds with no optimizations, and with the optimizations it only took 50 seconds. I would have thought it would be closer to 30 or better.
I used Wireshark to watch the transactions come through and realized the async version was not sending requests as fast as I assumed it would.
Async requests captured
Normal no optimization
You can see that the async version pushes a few requests very fast and then drops off...
Also note that between requests 10 and 11 it took nearly 3 seconds.
Is the overhead of creating threads for the tasks so large that it takes seconds?
Note: the tasks I am referring to are from the .NET 4.5 TAP (Task-based Asynchronous Pattern) library.
Why wouldn't the requests come through faster than that?
I was told the Apache web server I was hitting can hold 200 threads max, so I don't see an issue there.
Am I not thinking about this clearly?
When calling web services are there little advantages from async requests?
Do I have a code mistake?
Any ideas would be great.
After many days of searching I found this post that solved my problem:
Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry)
The reason the requests were not hitting the server quicker was due to my client-side code and had nothing to do with the server. By default, .NET only allows 2 concurrent connections per host.
see here: http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.defaultconnectionlimit.aspx
I simply added this line of code and then all requests came through in milliseconds:
System.Net.ServicePointManager.DefaultConnectionLimit = 50;
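For context, a self-contained sketch of the fix in action; the URL is a placeholder, and the limit must be set before the first request to the host:

using System;
using System.Linq;
using System.Net;
using System.Threading.Tasks;

class ConnectionLimitDemo
{
    static void Main()
    {
        // The default of 2 concurrent connections per host is what was
        // serializing the "parallel" calls; raise it before any requests start.
        ServicePointManager.DefaultConnectionLimit = 50;

        // Placeholder endpoint - fire 40 requests in parallel, as in the
        // smaller sampling above. A new WebClient per request, since one
        // instance can't run concurrent operations.
        var tasks = Enumerable.Range(0, 40)
            .Select(_ => new WebClient().DownloadStringTaskAsync("http://example.com/"))
            .ToArray();

        Task.WaitAll(tasks);
        Console.WriteLine("All {0} requests completed.", tasks.Length);
    }
}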