Ideally I would like to monitor the signal strength of a wireless network in near real-time, say every 100ms, but such a high frequency is probably overkill.
I'm using the Managed Wifi library to poll RSSI. I instantiate a WlanClient once (_client = new WlanClient();) and re-use that client to measure signal strengths every second or so (but I'd like to do it more often):
foreach (WlanClient.WlanInterface wlanInterface in _client.Interfaces)
{
    Wlan.WlanBssEntry[] wlanBssEntries = wlanInterface.GetNetworkBssList();
    foreach (Wlan.WlanBssEntry wlanBssEntry in wlanBssEntries)
    {
        int sigStr = wlanBssEntry.rssi; // signal strength in dBm
        // ...
    }
}
What is the fastest practical polling delay and is this the best way to measure signal strength?
I'm afraid the smallest practical polling delay will vary with your driver stack, and I suspect also with the number of access points around. WiFi is a protocol based on time slots.
From my (limited) experience a 1-second interval is about right; you will already see that the list of stations isn't always complete (i.e. stations missing on one scan and back on the next).
is this the best way to measure signal strength?
Depends, but how fast do you expect it to change? When walking around, the signal won't vary much over a second.
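As a minimal sketch, here is how the poll from the question could be driven by a timer, reusing one WlanClient for every tick; the NativeWifi namespace name and the one-second interval are assumptions to adjust for your setup:

using System;
using System.Threading;
using NativeWifi; // namespace of the Managed Wifi library (assumed)

class RssiMonitor : IDisposable
{
    private readonly WlanClient _client = new WlanClient(); // created once, reused on every poll
    private readonly Timer _timer;

    public RssiMonitor(TimeSpan interval)
    {
        // Threadpool timer; the callback runs off the UI thread.
        _timer = new Timer(_ => Poll(), null, TimeSpan.Zero, interval);
    }

    private void Poll()
    {
        foreach (WlanClient.WlanInterface wlanInterface in _client.Interfaces)
        {
            foreach (Wlan.WlanBssEntry entry in wlanInterface.GetNetworkBssList())
            {
                int sigStr = entry.rssi; // signal strength in dBm
                Console.WriteLine(sigStr);
            }
        }
    }

    public void Dispose() => _timer.Dispose();
}

// Usage: var monitor = new RssiMonitor(TimeSpan.FromSeconds(1));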
For most monitoring cases, a reasonable guideline is to work out the lowest frequency that fulfils your purpose, then increase it a bit beyond that to catch delays and unexpected spikes.
If, for example, you were going to display this to a user, updating much more often than once per half second means changes too quick for the user to meaningfully make sense of, so polling around every quarter of a second should be more than enough to be sure you're catching everything you need.
If you are logging, it depends on how long your logging period will be. Once every few minutes is likely to catch any serious problem times, so once a minute should do fine.
In all, while there is often some practical maximum frequency, it's not worth worrying about unless the maximum useful frequency is higher, and that depends on your purposes.
Related
I am modifying a C# based UI that interfaces to a small PIC microcontroller tester device.
The UI consists of a couple of buttons that initiate a test by sending a "command" to the microcontroller via a serial port connection. Every 250 milliseconds, the UI polls the serial interface looking for a brief message comprised of test results from the PIC. The message is displayed in a text box.
The code I inherited is as follows:
try
{
    btr = serialPort1.BytesToRead;
    if (btr > 0)
        Thread.Sleep(300);
    btr = serialPort1.BytesToRead;
    if (btr > 0)
    {
        Thread.Sleep(300);
        btr = serialPort1.BytesToRead;
        numbytes = serialPort1.Read(stuffchar, 0, btr);
        for (x = 0; x < (numbytes); x++)
        {
            cc = (stuffchar[x]);
            stuff += Convert.ToString(Convert.ToChar((stuffchar[x])));
        }
What would be the rationale for the first several lines consisting of three calls to BytesToRead and two 300 millisecond sleep calls before finally reading the serial port? Unless I am interpreting the code incorrectly, any successful read from the serial port will take more than 600 milliseconds, which seems peculiar to me.
It is a dreadful hack around the behavior of SerialPort.Read(), which returns only the number of bytes actually received. Usually just 1 or 2; serial ports are slow and modern PCs are very fast. So by calling Thread.Sleep(), the code is delaying the UI thread long enough for the Read() call to return more bytes. Hopefully all of them, whatever the protocol looks like. Usually works, not always. And in the posted code it didn't work, so the programmer just arbitrarily delayed twice as long. Ugh.
The great misery of course is that the UI thread is pretty catatonic when it is forced to sleep. That is quite noticeable: it gets very slow to paint and to respond to user input.
This needs to be repaired by first paying attention to the protocol. The PIC needs to either send a fixed number of bytes in its response, so you can simply count them off, or give the PC a way to detect that the full response is received. Usually done by sending a unique byte as the last byte of a response (SerialPort.NewLine) or by including the length of the response as a byte value at the start of the message. Specific advice is hard to give, you didn't describe the protocol at all.
You can keep the hacky code and move it into a worker thread so it won't affect the UI so badly. You get one for free from the SerialPort.DataReceived event. But that tends to produce two problems instead of solving the core issue.
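As a rough sketch of that event-driven route, assuming the PIC terminates each response with a newline (the terminator, port name and baud rate below are assumptions to adjust to the actual protocol):

using System;
using System.IO.Ports;

class PicLink
{
    private readonly SerialPort _port;

    public PicLink()
    {
        _port = new SerialPort("COM1", 9600);  // port name and baud rate are assumptions
        _port.NewLine = "\n";                  // assumed response terminator
        _port.DataReceived += OnDataReceived;
        _port.Open();
    }

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        // Runs on a threadpool thread, so the UI thread never sleeps.
        string response = _port.ReadLine();    // returns once the terminator arrives
        // Marshal 'response' back to the UI thread (e.g. Control.BeginInvoke) before updating the text box.
        Console.WriteLine(response);
    }
}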
If that code was initially in a loop, it might have been a way to wait for the PIC to collect data.
If you have real hardware to test on, I would suggest you remove both Sleeps.
@TomWr you are right; from what I'm reading this is the case.
Your snippet below with my comments:
try
{
    // Let's check how many bytes are available on the Serial Port
    btr = serialPort1.BytesToRead;

    // Something? Alright then, let's wait 300 ms.
    if (btr > 0)
        Thread.Sleep(300);

    // Let's check again whether there are bytes available on the Serial Port...
    btr = serialPort1.BytesToRead;

    // ... and if so, wait (maybe again) for 300 ms.
    // Note that at this point we may have waited about 600 ms in total
    // (if we actually already waited previously).
    if (btr > 0)
    {
        Thread.Sleep(300);
        btr = serialPort1.BytesToRead;
        numbytes = serialPort1.Read(stuffchar, 0, btr);
        for (x = 0; x < (numbytes); x++)
        {
            // Seems like useless overhead; could directly use
            // an Encoding and the SerialPort's ReadExisting() method.
            cc = (stuffchar[x]);
            stuff += Convert.ToString(Convert.ToChar((stuffchar[x])));
        }
My guess is the same as what idstam mentioned above: basically, it is probably there to check whether data has been sent by your device and to fetch it.
You can easily refactor this code with the appropriate SerialPort methods, because there are much better and more concise ways to check whether data is available on the Serial Port.
Instead of "I'm checking how many bytes are on the port, if there is something then I wait 300 ms and later same thing again." that is miserably ending up with
"So yep 2 times 300 ms = 600ms, or just once (depending on whether there was a first time", or maybe nothing at all (depending on the device you are communicating to through this UI which can be really slacky since the Thread.Sleep is blocking the UI...). "
First let's considering for a while that you are trying to keep as much of the same codebase, why not just wait for 600ms?
Or why not use the ReadTimeout property and catch the timeout exception? Not that clean, but at least better in terms of readability, and you also get your string directly instead of using Convert.ToChar() calls...
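A minimal sketch of that ReadTimeout idea, keeping the original 600 ms budget and again assuming a newline-terminated response (serialPort1 is the already-open SerialPort from the original code):

serialPort1.ReadTimeout = 600;   // wait at most 600 ms instead of sleeping unconditionally
serialPort1.NewLine = "\n";      // assumed terminator; adjust to your protocol

try
{
    string stuff = serialPort1.ReadLine();   // returns as soon as a full response arrives
    // display 'stuff' in the text box ...
}
catch (TimeoutException)
{
    // No complete response within 600 ms; retry or report an error.
}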
I sense that the code has been ported from C or C++ (or at least follows that rationale) by someone with mostly an embedded-software background.
Anyway, back to the checks on the number of available bytes: unless the Serial Port data is consumed in another Thread / BackgroundWorker / Task handler, I don't see any reason to check it twice, especially in the way it is coded.
To make it faster? Not really, because there is an additional delay whenever there actually is data on the Serial Port. It does not make much sense to me.
Another way to make your snippet slightly better is to poll using ReadExisting().
Otherwise, you can also consider asynchronous methods using the SerialPort's BaseStream.
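A rough sketch of the asynchronous BaseStream route, which reads whatever bytes arrive without blocking any thread (buffer size and encoding are assumptions):

using System.IO.Ports;
using System.Text;
using System.Threading.Tasks;

static class SerialHelpers
{
    // Awaits the next chunk of bytes from the port and decodes it.
    public static async Task<string> ReadChunkAsync(SerialPort port)
    {
        var buffer = new byte[256];                        // assumed buffer size
        int read = await port.BaseStream.ReadAsync(buffer, 0, buffer.Length);
        return Encoding.ASCII.GetString(buffer, 0, read);  // assumed encoding
    }
}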
All in all, it's pretty hard to say more without access to the rest of your codebase, i.e. the context.
If you gave more information about the objectives and the protocol, it would give some hints about what to do. Otherwise, all I can say is that this seems poorly coded; once again, that is out of context.
I second (and even third) what Hans mentioned about UI responsiveness: I really hope your snippet is running on a thread that is not the UI thread (although you mentioned in your post that the UI is polling, I still hope the snippet belongs to another worker).
If it really is the UI thread, then it will be blocked on every Thread.Sleep call, which makes the UI unresponsive to user interactions and may frustrate your end user.
It might also be worth subscribing to the DataReceived event and doing whatever you need in the handler (e.g. using a buffer and comparing values, etc.).
Please note that Mono still does not raise this event, but if you are running against a plain MS .NET implementation it works perfectly fine without the hassles of manual multi-threading.
In short:
Check which thread(s) run your snippet and mind the UI responsiveness.
If it is the UI thread, then use another thread via Thread, BackgroundWorker (ThreadPool) or Task.
Consider the asynchronous BaseStream methods (sketched above) to avoid the hassles of UI-thread synchronization.
Consider whether the objectives really warrant two 300 ms Thread.Sleep calls.
You can fetch the string directly (e.g. via ReadExisting() or ReadLine()) rather than gathering the bytes yourself, if the chosen encoding fulfills your needs.
When going through a really long array, or doing complicated calculations for each index, is there a way to yield after iterating through the array for a maximum amount of time? The maximum amount of time is the maximum time per frame.
For example:
for (int i = 0; i < 100000; i++)
{
    // do something complicated;
    if (maximum amount of time /* right before the user feels lag */)
        yield; // come back and resume i where it last yielded
}
// order does not matter
So basically, what I want to achieve is high CPU usage; however, I do not want to go so far that the user experiences lag.
edit:
Sorry for the confusion. A clearer example might be 3D rendering in a program such as Blender. When the user hits render, it calculates each pixel to determine what color it needs to be. When one looks at the CPU usage, it is close to 100%; however, the program does not freeze while it calculates the pixels, even though it calculates as much as possible.
If you are running your code on multiple CPUs (as implied by the multithreading tag), there should (in the usual case) be no need to stop executing the loop in order for your user interface to remain responsive. Perform the calculation on one or more background threads, and have those background threads update the UI thread as appropriate.
is there a way to yield after iterating through the array for the maximum amount of time
If by yield you mean just stop (and restart from the beginning next frame), then sure. You can pass a CancellationToken to your thread, and have it periodically check for a cancellation request. You can use a timer at the start of each frame to fire off that request, or more likely, use an existing mechanism that already does end-of-frame processing to trigger the thread to stop work.
If by yield you mean stop where I am and resume at that place at the start of the next frame, I would ask why stop given that you have multiple CPUs. If you must stop, you can use the CancellationToken as before, but just keep track of where you are in the loop, resuming from there instead of at the start.
So basically, what I want to achieve is high CPU usage; however, I do not want to go so far that the user experiences lag.
You can never go over 100% CPU usage by definition. To avoid the feeling of lag when the CPU utilization is high, use thread priorities to ensure that the foreground thread has a higher priority than your background threads.
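A rough sketch of that background approach, assuming a per-frame CancellationTokenSource; the names (FrameWorker, _resumeIndex) are illustrative, and the worker runs at below-normal priority so the UI thread stays ahead:

using System;
using System.Threading;

class FrameWorker
{
    private int _resumeIndex;                       // where to pick up again next frame
    private readonly int[] _data = new int[100000]; // illustrative workload

    // Called at the start of a frame; runs until the token is cancelled.
    public void DoWork(CancellationToken token)
    {
        var worker = new Thread(() =>
        {
            for (int i = _resumeIndex; i < _data.Length; i++)
            {
                // ... do something complicated with _data[i] ...
                if (token.IsCancellationRequested)
                {
                    _resumeIndex = i;               // resume here next frame
                    return;
                }
            }
            _resumeIndex = 0;                       // finished the whole array
        });
        worker.Priority = ThreadPriority.BelowNormal; // keep the UI thread responsive
        worker.IsBackground = true;
        worker.Start();
    }
}

At the end of each frame, cancel the token from your existing end-of-frame processing and hand a fresh token to the next call.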
Unless I'm missing something....
double MAX_PROCESSTIME = 50.0; // per-frame budget in milliseconds
DateTime loopStart = DateTime.Now;
for (int i = 0; i < 100000; i++)
{
    // do something complicated;
    double timePassed = (DateTime.Now - loopStart).TotalMilliseconds;
    if (timePassed > MAX_PROCESSTIME)
    {
        break;
    }
}
How about considering a push model instead: iterate in parallel and raise an event, so the consumer just treats each item as it comes?
Usually the solution to this problem is to move the work to a separate thread that can't interrupt the UI, and let the UI or a controller thread cancel the work when called for.
Another option: I've read somewhere that typical humans have a perception threshold of about 25 milliseconds; two events are perceived to occur at the same time as long as they are less than 25 milliseconds apart. Sadly, I can no longer find the original reference, but I did at least find a corroborating article. You can use this fact to set a timer for about that long and let the process run as much as it wants until the timer goes off. You may also want to account for the atypical human, especially if your app caters to people who may have above-average reflexes.
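A small sketch of that time-budgeted idea, using Stopwatch for the timing and remembering the loop index so the next slice resumes where this one stopped; the 25 ms budget and the ProcessIndex placeholder are illustrative:

using System.Diagnostics;

class SlicedLoop
{
    private int _nextIndex;                      // resume point for the next frame
    private const int Length = 100000;
    private const long BudgetMs = 25;            // roughly the human perception threshold

    // Call once per frame; returns true when the whole array has been processed.
    public bool RunSlice()
    {
        var sw = Stopwatch.StartNew();
        for (int i = _nextIndex; i < Length; i++)
        {
            ProcessIndex(i);                     // your complicated per-index work
            if (sw.ElapsedMilliseconds >= BudgetMs)
            {
                _nextIndex = i + 1;              // come back here next frame
                return false;
            }
        }
        _nextIndex = 0;
        return true;
    }

    private void ProcessIndex(int i) { /* ... */ }
}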
I'm getting the following error while streaming data:
Google.Apis.Requests.RequestError
Internal Error [500]
Errors [
Message[Internal Error] Location[ - ] Reason[internalError] Domain[global]
]
My code:
public bool InsertAll(BigqueryService s, String datasetId, String tableId, List<TableDataInsertAllRequest.RowsData> data)
{
    try
    {
        TabledataResource t = s.Tabledata;
        TableDataInsertAllRequest req = new TableDataInsertAllRequest()
        {
            Kind = "bigquery#tableDataInsertAllRequest",
            Rows = data
        };
        TableDataInsertAllResponse response = t.InsertAll(req, projectId, datasetId, tableId).Execute();
        if (response.InsertErrors != null)
        {
            return true;
        }
    }
    catch (Exception e)
    {
        throw e;
    }
    return false;
}
I'm streaming data constantly and many times a day I have this error. How can I fix this?
We have seen several problems:
the request randomly fails with type 'Backend error'
the request randomly fails with type 'Connection error'
the request randomly fails with type 'timeout' (watch out here, as only some rows are failing and not the whole payload)
some other error messages are non-descriptive and so vague that they don't help you; just retry.
we see hundreds of such failures each day, so they are pretty much constant, and not related to Cloud health.
For all of these we opened cases with paid Google Enterprise Support, but unfortunately they didn't resolve them. It seems the recommended option is exponential backoff with retry; even the support team told us to do so. Also, the failure rate fits within the 99.9% uptime we have in the SLA, so there is no ground for objection.
There's something to keep in mind in regards to the SLA: it's a very strictly defined structure, the details are here. The 99.9% is uptime, not directly translated into failure rate. What this means is that if BQ has a 30-minute downtime one month, and you then do 10,000 inserts within that period but didn't do any inserts at other times of the month, the numbers will be skewed. This is why we suggest an exponential backoff algorithm. The SLA is explicitly based on uptime and not error rate, but logically the two correlate closely if you do streaming inserts throughout the month at different times with a backoff-retry setup. Technically, you should on average experience about 1 failed insert in 1,000 if you are doing inserts throughout the month and have set up the proper retry mechanism.
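A minimal sketch of such a backoff, wrapping the InsertAll call from the question; the attempt count, base delay and doubling factor are assumptions (production code usually adds jitter):

using System;
using System.Threading;

static class StreamingRetry
{
    // insertAll is assumed to wrap the InsertAll(...) call from the question.
    public static bool InsertWithBackoff(Func<bool> insertAll, int maxAttempts = 5)
    {
        TimeSpan delay = TimeSpan.FromSeconds(1);                 // assumed base delay
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return insertAll();                               // success
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                Thread.Sleep(delay);                              // back off before retrying
                delay = TimeSpan.FromTicks(delay.Ticks * 2);      // double the delay each time
            }
        }
    }
}

Usage would look something like StreamingRetry.InsertWithBackoff(() => InsertAll(s, datasetId, tableId, data));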
You can check out this chart about your project health:
https://console.developers.google.com/project/YOUR-APP-ID/apiui/apiview/bigquery?tabId=usage&duration=P1D
About timing: since streaming has a limited payload size (see the Quota policy), it's easier to talk about times, as the payload is limited in the same way for both of us, but I will mention other side effects too.
We measure between 1200-2500 ms for each streaming request, and this was consistent over the last month as you can see in the chart.
If the approach you've chosen takes hours, that means it does not scale and won't scale. You need to rethink the approach with asynchronous processes that can retry.
Processing IO-bound or CPU-bound tasks in the background is now common practice in most web applications. There's plenty of software to help build background jobs, some based on a messaging system like Beanstalkd.
Basically, you need to distribute insert jobs across a closed network, prioritize them, and consume (run) them. Well, that's exactly what Beanstalkd provides.
Beanstalkd gives the possibility to organize jobs in tubes, each tube corresponding to a job type.
You need an API/producer which can put jobs on a tube, let's say a JSON representation of the row. This was a killer feature for our use case. So we have an API which gets the rows and places them on a tube; this takes just a few milliseconds, so you can achieve fast response times.
On the other side, you now have a bunch of jobs on some tubes. You need an agent. An agent/consumer can reserve a job.
It helps you also with job management and retries: When a job is successfully processed, a consumer can delete the job from the tube. In the case of failure, the consumer can bury the job. This job will not be pushed back to the tube, but will be available for further inspection.
A consumer can release a job, Beanstalkd will push this job back in the tube, and make it available for another client.
Beanstalkd clients are available for most common languages, and a web interface can be useful for debugging.
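To illustrate the put/reserve/delete/bury flow described above, here is a sketch against a hypothetical IBeanstalkClient interface; it is a stand-in, not the API of any particular C# Beanstalkd client library:

using System;

// Hypothetical client interface; real Beanstalkd client libraries differ in naming.
interface IBeanstalkClient
{
    void Use(string tube);                 // select the tube for produced jobs
    void Put(string jobBody);              // producer side: enqueue a job
    (long id, string body) Reserve();      // consumer side: take the next job
    void Delete(long jobId);               // job done, remove it
    void Bury(long jobId);                 // job failed, keep it for inspection
}

class InsertWorker
{
    public void Run(IBeanstalkClient client, Func<string, bool> insertRow)
    {
        client.Use("bigquery-inserts");    // illustrative tube name
        while (true)
        {
            var (id, body) = client.Reserve();          // body is the JSON row
            if (insertRow(body)) client.Delete(id);     // inserted, drop the job
            else client.Bury(id);                       // failed, keep for inspection
        }
    }
}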
Let me explain my situation.
I have a 1-producer-to-N-consumers pattern. I'm using blocking collections and everything is working well. Doing some tests I noticed this strange behavior:
I was testing how long my manipulation of data took in my consumers.
I noticed something strange; below you'll find the code, stripped of my data manipulation, which still produces the strange behavior.
I have 4 consumers for 1 producer.
For most of the data, the Console doesn't print anything, because ts = 0 (it's under a tick), but randomly (every 1 to 5 seconds) it prints something like this (not in this specific order, but of the same kind):
10000
20001
10000
30002
10000
40003
10000
10000
It is on the order of 10,000 ticks, so around 1 ms. Always a number of the form (N)000(N-1).
Note that the BlockingCollection I consume is filled in response to network events which occur at completely random times. Nothing regular there.
The timing is almost perfect, always a multiple of 10,000 ticks.
What could be behind this? Thanks!
while (IsAlive)
{
    DataToFieldMapping item;
    try
    {
        _CollectionToConsume.TryTake(out item, -1);
    }
    catch
    {
        item = null;
    }

    if (item != null)
    {
        long ts = (DateTime.Now.Ticks - item.TimeStamp.Ticks);
        if (ts > 10)
            Console.WriteLine(ts);
    }
}
What's going on here is that DateTime.Now has a fairly limited precision. It's not giving you the time to the nearest tick. It is only updated every 10,000 ticks or so, which is why you generally see multiples of 10k ticks in your prints.
If you really want to get a better feel for the duration of those events, use the Stopwatch class, which has a much higher precision. That said, Stopwatch is simply a diagnostic tool (hence why it's in the Diagnostics namespace). You should only be using it to help you diagnose what's going on, and shouldn't be using it in production code.
On a side note, there really isn't any need to use a timer here at all. It appears that you're creating several consumers that are polling the BlockingCollection for new content. There is no reason to do this. They can simply block until the collection has items. (Hence the name, BlockingCollection.)
The easiest way is for the consumers to simply do this:
foreach (var item in _CollectionToConsume.GetConsumingEnumerable())
    ProcessItem(item);
Then just run that code in a background thread.
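For example, a sketch of starting a few such consumers on background tasks (the consumer count and ProcessItem are placeholders, and _CollectionToConsume is the collection from the question):

using System.Threading.Tasks;

// Start 4 consumers; each blocks inside GetConsumingEnumerable until items arrive.
for (int i = 0; i < 4; i++)
{
    Task.Run(() =>
    {
        foreach (var item in _CollectionToConsume.GetConsumingEnumerable())
            ProcessItem(item);
    });
}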
If you write the following and run it, you'll see that the ticks do not advance one by one but in relatively large chunks, because the effective resolution of DateTime.Now is much coarser than a single tick.
for(int i =0; i< 100; i++)
{
Console.WriteLine(DateTime.Now.Ticks);
}
Use the Stopwatch class to measure performance, as it uses a high-resolution timer which is much more suitable for the purpose.
I'm a total newbie, but I was writing a little program that worked on strings in C# and I noticed that if I did a few things differently, the code executed significantly faster.
So it had me wondering: how do you go about clocking your code's execution speed? Are there any (free) utilities? Do you go about it the old-fashioned way with a System.Timer and do it yourself?
What you are describing is known as performance profiling. There are many programs you can get to do this, such as the JetBrains profiler (dotTrace) or ANTS Profiler, although most will slow down your application while measuring its performance.
To hand-roll your own performance profiling, you can use System.Diagnostics.Stopwatch and a simple Console.WriteLine, like you described.
Also keep in mind that the C# JIT compiler optimizes code depending on the type and frequency it is called, so play around with loops of differing sizes and methods such as recursive calls to get a feel of what works best.
ANTS Profiler from RedGate is a really nice performance profiler. dotTrace Profiler from JetBrains is also great. These tools will allow you to see performance metrics that can be drilled down to each individual line.
Screenshot of ANTS Profiler:
ANTS http://www.red-gate.com/products/ants_profiler/images/app/timeline_calltree3.gif
If you want to ensure that a specific method stays within a specific performance threshold during unit testing, I would use the Stopwatch class to monitor the execution time of a method one or many times in a loop, calculate the average, and then Assert against the result.
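A small sketch of that idea as an NUnit-style test; the 50 ms threshold, iteration count and DoWork method are illustrative:

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class PerformanceTests
{
    [Test]
    public void DoWork_StaysUnderThreshold()
    {
        const int iterations = 100;            // illustrative sample size
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            DoWork();                          // the method under test (hypothetical)
        }
        sw.Stop();

        double averageMs = sw.Elapsed.TotalMilliseconds / iterations;
        Assert.Less(averageMs, 50.0, "DoWork is slower than the 50 ms budget");
    }

    private static void DoWork() { /* ... */ }
}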
Just a reminder: make sure to compile in Release, not Debug! (I've seen this mistake made by seasoned developers; it's easy to forget.)
What you are describing is 'performance tuning'. When we talk about performance tuning there are two angles to it: (a) response time, how long it takes to execute a particular request/program, and (b) throughput, how many requests it can execute in a second. When we 'optimize' by eliminating unnecessary processing, both response time and throughput improve. However, if you have wait events in your code (like Thread.Sleep(), I/O waits, etc.) your response time is affected but throughput is not. By adopting parallel processing (spawning multiple threads) we can improve response time, but throughput will not be improved. Typically for server-side applications both response time and throughput are important. For desktop applications (like an IDE) throughput is not important, only response time.
You can measure response time by 'performance testing': you just note down the response time for all key transactions. You can measure throughput by 'load testing': you need to pump requests continuously from a sufficiently large number of threads/clients such that the CPU usage of the server machine is 80-90%. When we pump requests we need to maintain the ratio between different transactions (called the transaction mix); for example, in a reservation system there will be 10 bookings for every 100 searches and one cancellation for every 10 bookings, etc.
After identifying the transactions that require tuning for response time (from performance testing), you can identify the hot spots by using a profiler.
You can identify the hot spots for throughput by comparing response time * fraction of that transaction. Assume in a search/booking/cancellation scenario the ratio is 89:10:1.
Response times are 0.1 s, 10 s and 15 s.
load for search = 0.1 * 0.89 = 0.089
load for booking = 10 * 0.10 = 1
load for cancellation = 15 * 0.01 = 0.15
Here, tuning booking will yield the maximum impact on throughput.
You can also identify hot spots for throughput by taking thread dumps (in the case of java based applications) repeatedly.
Use a profiler.
Ants (http://www.red-gate.com/Products/ants_profiler/index.htm)
dotTrace (http://www.jetbrains.com/profiler/)
If you need to time one specific method only, the Stopwatch class might be a good choice.
I do the following things:
1) I use ticks (e.g. in VB.NET, Now.Ticks) to measure the current time. I subtract the starting ticks from the finished ticks value and divide by TimeSpan.TicksPerSecond to get how many seconds it took.
2) I avoid UI operations (like console.writeline).
3) I run the code over a substantial loop (like 100,000 iterations) to factor out usage / OS variables as best as I can.
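A sketch of that pattern translated to C# (DoSomething is a placeholder for the code being measured):

using System;

class TickTiming
{
    static void Main()
    {
        const int iterations = 100000;          // large loop to smooth out OS noise
        long startTicks = DateTime.Now.Ticks;

        for (int i = 0; i < iterations; i++)
        {
            DoSomething();                      // the code being measured (placeholder)
        }

        long elapsedTicks = DateTime.Now.Ticks - startTicks;
        double seconds = (double)elapsedTicks / TimeSpan.TicksPerSecond;
        Console.WriteLine(seconds);             // single UI call, after the loop
    }

    static void DoSomething() { /* ... */ }
}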
You can use the Stopwatch class to time methods. Remember that the first run is often slow because the code has to be JIT-compiled.
There is a native .NET option (Team Edition for Software Developers) that might address some performance analysis needs. From the Visual Studio 2005 IDE menu, select Tools -> Performance Tools -> Performance Wizard...
[GSS is probably correct that you must have Team Edition]
This is a simple example of testing code speed. I hope it helps.
using System;
using System.Collections;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        const int steps = 10000;
        Stopwatch sw = new Stopwatch();

        ArrayList list1 = new ArrayList();
        sw.Start();
        for (int i = 0; i < steps; i++)
        {
            list1.Add(i);
        }
        sw.Stop();
        Console.WriteLine("ArrayList:\tMilliseconds = {0},\tTicks = {1}", sw.ElapsedMilliseconds, sw.ElapsedTicks);

        MyList list2 = new MyList();   // MyList stands for your own list implementation being compared
        sw.Restart();                  // reset before timing the second case, or the times accumulate
        for (int i = 0; i < steps; i++)
        {
            list2.Add(i);
        }
        sw.Stop();
        Console.WriteLine("MyList: \tMilliseconds = {0},\tTicks = {1}", sw.ElapsedMilliseconds, sw.ElapsedTicks);
    }
}