I am modifying a C#-based UI that interfaces to a small PIC microcontroller tester device.
The UI consists of a couple of buttons that initiate a test by sending a "command" to the microcontroller over a serial port connection. Every 250 milliseconds, the UI polls the serial interface looking for a brief message comprising test results from the PIC. The message is displayed in a text box.
The code I inherited is as follows:
try
{
    btr = serialPort1.BytesToRead;
    if (btr > 0)
        Thread.Sleep(300);

    btr = serialPort1.BytesToRead;
    if (btr > 0)
    {
        Thread.Sleep(300);
        btr = serialPort1.BytesToRead;
        numbytes = serialPort1.Read(stuffchar, 0, btr);
        for (x = 0; x < (numbytes); x++)
        {
            cc = (stuffchar[x]);
            stuff += Convert.ToString(Convert.ToChar((stuffchar[x])));
        }
What would be the rationale for the first several lines consisting of three calls to BytesToRead and two 300 millisecond sleep calls before finally reading the serial port? Unless I am interpreting the code incorrectly, any successful read from the serial port will take more than 600 milliseconds, which seems peculiar to me.
It is a dreadful hack around the behavior of SerialPort.Read(), which returns only the number of bytes actually received so far. Usually just 1 or 2; serial ports are slow and modern PCs are very fast. So by calling Thread.Sleep(), the code is delaying the UI thread long enough to get the Read() call to return more bytes. Hopefully all of them, whatever the protocol looks like. Usually works, not always. And in the posted code it didn't work, and the programmer just arbitrarily delayed twice as long. Ugh.
The great misery, of course, is that the UI thread is pretty catatonic when it is forced to sleep. Pretty noticeable: it gets very slow to paint and to respond to user input.
This needs to be repaired by first paying attention to the protocol. The PIC needs to either send a fixed number of bytes in its response, so you can simply count them off, or give the PC a way to detect that the full response has been received. That is usually done by sending a unique byte as the last byte of a response (SerialPort.NewLine) or by including the length of the response as a byte value at the start of the message. Specific advice is hard to give since you didn't describe the protocol at all.
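For example, if the PIC were to terminate each response with a linefeed, the whole Sleep dance collapses into one blocking call. A minimal sketch, assuming a newline-terminated protocol (which I can't verify from your post):

    // Assumption: the PIC ends each response with '\n'. ReadLine() blocks
    // until the terminator arrives or ReadTimeout elapses, so no Sleep() needed.
    serialPort1.NewLine = "\n";       // must match whatever terminator the PIC sends
    serialPort1.ReadTimeout = 1000;   // fail instead of hanging forever

    try
    {
        string response = serialPort1.ReadLine(); // complete message, terminator stripped
        textBox1.Text = response;
    }
    catch (TimeoutException)
    {
        // no complete response within a second; treat it as a failed test
    }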
You can keep the hacky code and move it into a worker thread so it won't affect the UI so badly. You get one for free from the SerialPort.DataReceived event. But that tends to produce two problems instead of solving the core issue.
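A minimal sketch of that event-driven route, assuming a WinForms form and the same hypothetical newline-terminated protocol as above:

    serialPort1.NewLine = "\n";
    serialPort1.DataReceived += (s, e) =>
    {
        // DataReceived fires on a thread-pool thread, not the UI thread,
        // so the UI update has to be marshalled back with BeginInvoke.
        string response = serialPort1.ReadLine();
        BeginInvoke(new Action(() => textBox1.Text = response));
    };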
If that code was initially in a loop, it might have been a way to wait for the PIC to collect data.
If you have real hardware to test on, I would suggest you remove both Sleeps.
@TomWr, you are right: from what I'm reading, this is the case.
Your snippet below with my comments:
try
{
    // Check how many bytes are available on the serial port.
    btr = serialPort1.BytesToRead;

    // Something there? Alright then, wait 300 ms.
    if (btr > 0)
        Thread.Sleep(300);

    // Check again whether bytes are available on the serial port,
    // and if so wait (maybe again) for 300 ms.
    // Note that by this point roughly 600 ms may have accumulated
    // (if we actually already waited above).
    btr = serialPort1.BytesToRead;
    if (btr > 0)
    {
        Thread.Sleep(300);
        btr = serialPort1.BytesToRead;
        numbytes = serialPort1.Read(stuffchar, 0, btr);
        for (x = 0; x < (numbytes); x++)
        {
            // Seems like useless overhead: an Encoding, or the
            // SerialPort's ReadExisting() method, would do this directly.
            cc = (stuffchar[x]);
            stuff += Convert.ToString(Convert.ToChar((stuffchar[x])));
        }
My guess is the same as what idstam already mentioned above: it is basically there to check whether data has been sent by your device, and to fetch it.
You can easily refactor this code with the appropriate SerialPort methods, because there are actually much better and more concise ways to check whether data is available on the serial port.
Instead of "I'm checking how many bytes are on the port; if there is something, I wait 300 ms and then do the same thing again," which miserably ends up as
"so yep, 2 times 300 ms = 600 ms, or just once (depending on whether anything was there the first time), or maybe nothing at all (depending on the device you are communicating with through this UI, which can feel really sluggish since Thread.Sleep blocks the UI...)."
First, let's consider for a moment that you want to keep as much of the same codebase as possible: why not just wait 600 ms once?
Or why not use the ReadTimeout property and catch the timeout exception? Not that clean, but at least better in terms of readability; plus you get your string directly instead of using Convert.ToChar() calls...
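A sketch of that idea, assuming (hypothetically) a fixed-length response so the loop knows when to stop, and ASCII encoding:

    const int ResponseLength = 8;   // placeholder; whatever your protocol defines
    serialPort1.ReadTimeout = 600;  // roughly the delay the old code imposed

    byte[] buffer = new byte[ResponseLength];
    int offset = 0;
    try
    {
        // Read() returns as soon as *some* bytes are available, so loop
        // until the full response is in (or the timeout exception fires).
        while (offset < ResponseLength)
            offset += serialPort1.Read(buffer, offset, ResponseLength - offset);
        stuff = System.Text.Encoding.ASCII.GetString(buffer);
    }
    catch (TimeoutException)
    {
        // the full response never arrived in time
    }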
I sense that the code has been ported from C or C++ (or at least the rationale behind it has) by someone who mostly has an embedded-software background.
Anyway, back to the checks on the number of available bytes: unless the serial port data is flushed by another Thread / BackgroundWorker / Task handler, I don't see any reason to check it twice, especially in the way it is coded.
To make it faster? Not really, because there is an additional delay whenever there actually is data on the serial port. It does not make much sense to me.
Another way to make your snippet slightly better is to poll using ReadExisting().
Otherwise you can also consider asynchronous methods using the SerialPort's BaseStream, for example:
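A rough sketch of that async route (the handler name, buffer size, and encoding are my assumptions, not from your code):

    // Await the read so the UI thread stays free while the bytes trickle in;
    // the continuation resumes on the UI thread thanks to async/await.
    private async void PollTimer_Tick(object sender, EventArgs e)
    {
        byte[] buffer = new byte[256];
        int n = await serialPort1.BaseStream.ReadAsync(buffer, 0, buffer.Length);
        if (n > 0)
            textBox1.Text = System.Text.Encoding.ASCII.GetString(buffer, 0, n);
    }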
All in all, it's pretty hard to say without access to the rest of your codebase, i.e. the context.
If you had more information about the objectives / protocol, it could give some hints about what to do. Otherwise, I can only say that this seems to be poorly coded; once again, out of context.
I double and even triple what Hans mentioned about UI responsiveness, in the sense that I really hope your snippet is not running on the UI thread (although you mentioned in your post that the UI is polling, I still hope the snippet belongs to another worker).
If it really is the UI thread, then it will be blocked on every Thread.Sleep call, which makes the UI unresponsive to user interactions and may leave your end user feeling frustrated.
It might also be worth subscribing to the DataReceived event and performing what you want/need in the handler (e.g. using a buffer and comparing values, etc.).
Please note that Mono still does not implement the raising of this event, but if you are running against a plain MS .NET implementation this works perfectly well without the hassles of manual multi-threading.
In short:
Check which thread(s) are running your snippet, and mind UI responsiveness.
If it is the UI thread, use another thread via Thread, BackgroundWorker (ThreadPool), or Task.
Consider the BaseStream asynchronous methods in order to avoid the hassles of UI-thread synchronization.
Ask whether the objective really deserves a double 300 ms Thread.Sleep call.
You can fetch the String directly instead of gathering the bytes yourself (if the chosen encoding can fulfill your needs).
So I have a kind of strange problem. I'm using LAN for the communication with a microcontroller. Everything was working perfectly, meaning I can send and receive data. For receiving data I'm using a simple method: Thread.Sleep(1) in a for loop in which I keep checking client.GetStream().DataAvailable for true, where client is a TcpClient.
Now, with one process I have to send to and receive from the microcontroller at a higher baud rate. I was using 9600 for all other operations and everything was fine. Now, with 115200, client.GetStream().DataAvailable always seems to have the value false.
What could be the problem?
PS: Another way to communicate with the microcontroller (all chosen by user) is serial communication. This is still working fine with the higher Baud rate.
Here is a code snippet:
using (client = new TcpClient(IP_String, LAN_Port))
{
    client.SendTimeout = 200;
    client.ReceiveTimeout = 200;
    stream = client.GetStream();
    // ...
    bool OK = false;
    stream.Write(ToSend, 0, ToSend.Length);
    for (int j = 0; j < 1000; j++)
    {
        if (stream.DataAvailable)
        {
            OK = true;
            break;
        }
        Thread.Sleep(1);
    }
    // ...
}
EDIT:
While monitoring the communication with a listening device, I realized that the bits actually arrive and that the device actually answers. The one and only problem seems to be that the DataAvailable flag is not being raised. I should probably find another way to check data availability. Any ideas?
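For instance, maybe something like this (untested sketch): skip the flag entirely and issue a blocking Read, letting the ReceiveTimeout I already set bound the wait:

    bool OK = false;
    byte[] reply = new byte[256];
    try
    {
        // Read() blocks until data arrives or ReceiveTimeout (200 ms above) elapses.
        int n = stream.Read(reply, 0, reply.Length);
        OK = n > 0;
    }
    catch (IOException)
    {
        // timeout: no data arrived within ReceiveTimeout
    }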
I've been trying to think of things I've seen that act this way...
I've seen serial chips that say they'll do 115,200, but actually won't. See what happens if you drop the baud rate one notch. Either way you'll learn something.
Some microcontrollers "bit-bang" the serial port by having the CPU raise and lower the data pin and essentially go through the bits, banging 1 or 0 onto the serial pin. When a byte comes in, they read it, and do the same thing.
This does save money (no serial chip) but it is an absolute hellish nightmare to actually get working reliably. 115,200 may push a bit-banger too hard.
This might be a subtle microcontroller problem. Say you have a receiving serial chip which asserts a pin when a byte has come in, usually something like DRQ* for "Data Request" (the * in DRQ* means it is asserted at 0 volts: 0 V is the "we have a byte" condition) (c'mon, people, a * isn't always a pointer :-). Well, DRQ* requests an interrupt, the firmware & CPU take the interrupt, read the serial chip's byte, and stash it into some handy memory buffer. Then they return from the interrupt.
A problem can emerge if you're getting data very fast. Let's assume data has come in, serial chip got a byte ("#1" in this example), asserted DRQ*, we interrupted, the firmware grabs and stashes byte #1, and returns from interrupt. All well and good. But think what happens if another byte comes winging in while that first interrupt is still running. The serial chip now has byte #2 in it, so it again asserts the already-asserted DRQ* pin. The interrupt of the first byte completes. What happens?
You hang.
This is because it's the -edge- of DRQ*, physically going from 5 V to 0 V, that actually causes the CPU interrupt. On the second byte, DRQ* started at 0 V and was set to 0 V again. So DRQ* is (still) asserted, but there's no -edge- to tell the interrupt hardware/CPU that another byte is waiting. And now, of course, all the rest of the incoming data is also dropped.
See why it gets worse at higher speeds? The interrupt routine is fielding data more and more quickly, and typically doing circular I/O buffer calculations within the interrupt handler, and it must be fast and efficient, because fast input can push the interrupt handler to where a full new byte comes in before the interrupt finishes.
This is why it's a good idea to check DRQ* during the interrupt handler to see if another byte (#2) is already waiting (if so, just read it in, to clear the serial chip's DRQ*, and stash the byte in memory right then), or use "level triggering" for interrupts, not "edge triggering". Edge triggering definitely has good uses, but you need to watch out for this.
I hope this is helpful. It sure took me long enough to figure it out the first time. Now I take great care on stuff like this.
Good luck, let me know how it goes.
thanks,
Dave Small
I have a list of files: List<string> Files in my C#-based WPF application.
Files contains ~1,000,000 unique file paths.
I ran a profiler on my application. When I try to do parallel operations, it's REALLY laggy because it's IO bound. It even lags my UI threads, despite not having dispatchers going to them (note the two lines I've marked below):
Files.AsParallel().ForAll(x =>
{
    char[] buffer = new char[0x100000];
    using (FileStream stream = new FileStream(x, FileMode.Open, FileAccess.Read)) // EXTREMELY SLOW
    using (StreamReader reader = new StreamReader(stream, true))
    {
        while (true)
        {
            int charsRead = reader.Read(buffer, 0, buffer.Length); // EXTREMELY SLOW
            if (charsRead <= 0)
            {
                break;
            }
        }
    }
});
These two lines of code take up ~70% of my entire profile test runs. I want to achieve maximum parallelization for IO, while keeping performance such that it doesn't cripple my app's UI entirely. There is nothing else affecting my performance. Proof: Using Files.ForEach doesn't cripple my UI, and WithDegreeOfParallelism helps out too (but, I am writing an application that is supposed to be used on any PC, so I cannot assume a specific degree of parallelism for this computation); also, the PC I am on has a solid-state hard disk. I have searched on StackOverflow, and have found links that talk about using asynchronous IO read methods. I'm not sure how they apply in this case, though. Perhaps someone can shed some light? Also; how can you tune down the constructor time of a new FileStream; is that even possible?
Edit: Well, here's something strange that I've noticed... the UI doesn't get crushed so badly when I swap Read for ReadAsync while still using AsParallel. Simply awaiting the task created by ReadAsync causes my UI thread to maintain some degree of usability. I think ReadAsync does some sort of asynchronous scheduling internally to maintain optimal disk usage while not crushing existing threads. And on that note, is there ever a chance that the operating system contends for existing threads to do IO, such as my application's UI thread? I seriously don't understand why it's slowing my UI thread. Is the OS scheduling IO work on my thread or something? Did they do something to the CLR to eat threads that haven't been explicitly affinitized using Thread.BeginThreadAffinity or something? Memory is not an issue; I am looking at Task Manager and there is plenty.
I don't agree with your assertion that you can't use WithDegreeOfParallelism because the app will be used on any PC. You can base it on the number of CPUs. By not using WithDegreeOfParallelism, you are going to get crushed on some PCs.
You optimized for a solid-state disc, where heads don't have to move. I don't think this unrestricted parallel design will hold up on a regular disc (any PC).
I would try a BlockingCollection with 3 queues: FileStream, StreamReader, and ObservableCollection. Limit the FileStream queue to something like 4; it just has to stay ahead of the StreamReader. And no parallelism.
A single head is a single head. It cannot read from 5 or 5000 files faster than it can read from 1. On solid state there is no penalty for switching from file to file; on a regular disc there is a significant penalty, and if your files are fragmented there is a significant penalty (on a regular disc).
You don't show what you do with the data, but the next step would be to put the writes in another queue backed by a BlockingCollection, e.g. sb.Append(text); in a separate queue. But that may be more overhead than it is worth.
Keeping that head as close to 100% busy as possible on a single contiguous file is the best you are going to do.
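A rough sketch of that pipeline, under my own assumptions (4 readers, File.ReadAllText standing in for whatever per-file work you actually do, and the UI draining the results queue on its own schedule):

    var paths = new BlockingCollection<string>(boundedCapacity: 1000);
    var results = new BlockingCollection<string>();

    // Producer: feed the file paths.
    Task.Run(() =>
    {
        foreach (string f in Files) paths.Add(f);
        paths.CompleteAdding();
    });

    // A small, fixed number of readers; no unbounded parallelism
    // fighting over the disk head.
    for (int i = 0; i < 4; i++)
    {
        Task.Run(() =>
        {
            foreach (string f in paths.GetConsumingEnumerable())
                results.Add(File.ReadAllText(f));
        });
    }
    // The UI side drains `results` (e.g. on a timer) into the ObservableCollection.

For the read itself, an async method like the one below keeps a thread from blocking while the disk works: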
private async Task<string> ReadTextAsync(string filePath)
{
using (FileStream sourceStream = new FileStream(filePath,
FileMode.Open, FileAccess.Read, FileShare.Read,
bufferSize: 4096, useAsync: true))
{
StringBuilder sb = new StringBuilder();
byte[] buffer = new byte[0x1000];
int numRead;
while ((numRead = await sourceStream.ReadAsync(buffer, 0, buffer.Length)) != 0)
{
string text = Encoding.Unicode.GetString(buffer, 0, numRead);
sb.Append(text);
}
return sb.ToString();
}
}
File access is inherently not parallel. You can only benefit from parallelism if you process some files while reading others. It makes no sense to wait for the disk in parallel.
Instead of waiting 100,000 times for 1 ms of disk access, your program waits once for 100,000 ms = 100 s.
Unfortunately, it's a vague question without a reproducible code example. So it's impossible to offer specific advice. But my two recommendations are:
Pass a ParallelOptions instance where you've set the MaxDegreeOfParallelism property to something reasonably low. Something like the number of cores in your system, or even that number minus one.
Make sure you aren't expecting too much from the disk. You should start with the known speed of the disk and controller, and compare that with the data throughput you're getting. Adjust the degree of parallelism even lower if it looks like you're already at or near the maximum theoretical throughput.
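A sketch of the first recommendation, in both the PLINQ shape the question uses and the Parallel.ForEach shape (ProcessFile is a hypothetical stand-in for the per-file work):

    int maxDop = Math.Max(1, Environment.ProcessorCount - 1);

    // PLINQ, matching the original AsParallel() code:
    Files.AsParallel()
         .WithDegreeOfParallelism(maxDop)
         .ForAll(ProcessFile);

    // Or Parallel.ForEach with ParallelOptions:
    var options = new ParallelOptions { MaxDegreeOfParallelism = maxDop };
    Parallel.ForEach(Files, options, ProcessFile);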
Performance optimization is all about setting realistic goals based on known limitations of the hardware, measuring your actual performance, and then looking at how you can improve the costliest elements of your algorithm. If you haven't done the first two steps yet, you really should start there. :)
I got it working; the problem was me trying to use an ExtendedObservableCollection with AddRange instead of calling Add multiple times in every UI dispatch... for some reason, the performance of the methods people list in here is actually slower in my situation: ObservableCollection doesn't support an AddRange method, so I get notified for each item added; besides, what about INotifyCollectionChanging?
I think because it forces you to call change notifications with .Reset (reload) instead of .Add (a diff), there is some sort of logic in place that causes bottlenecks.
I apologize for not posting the rest of the code; I was really thrown off by this, and I'll explain why in a moment. Also, a note for others who come across the same issue, this might help. The main problem with profiling tools in this scenario is that they don't help much here. Most of your app's time will be spent reading files regardless. So you have to unit test all dispatchers separately.
If I have a port that I need to read from quite a lot, and the returning amount of data varies from one line to many lines, what is the best way to read / sift through what I am reading?
After some research I found out what I was supposed to do.
string Xposition = "";

threeAxisPort.ReadExisting();             // flush any leftover bytes first
threeAxisPort.WriteLine("rx");            // ask for the X position
Thread.Sleep(50);                         // give the controller time to reply
Xposition = threeAxisPort.ReadExisting(); // grab the reply
threeAxisPort.WriteLine("end");
What I ended up doing was clearing the port of everything using ReadExisting, then waiting 50 ms so as not to flood the port, and doing another ReadExisting. With the additional 50 ms wait there was no leftover from the motors, and the returned block of text was exactly what I needed. However, I will still be thinking up a more dynamic way to handle that string, because in a worst-case scenario the ReadExisting will pick up something I won't want.
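One such possibility, as an untested sketch: if the controller terminates each reply with a known character (I'm assuming '\r' here, which I haven't verified), ReadTo() returns exactly one reply without a fixed Sleep:

    threeAxisPort.ReadTimeout = 500;               // don't hang forever if the reply is lost
    threeAxisPort.DiscardInBuffer();               // drop stale leftovers instead of ReadExisting()
    threeAxisPort.WriteLine("rx");
    string Xposition = threeAxisPort.ReadTo("\r"); // blocks until the terminator arrives
    threeAxisPort.WriteLine("end");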
In a WPF Window, I've got a line chart that plots real-time data (Quinn-Curtis RealTime chart for WPF). In short, for each new value, I call a SetCurrentValue(x, y) method, and then the UpdateDraw() method to update the chart.
The data comes in via a TCP connection in another thread. Every new value that comes in causes an DataReceived event, and its handler should plot the value to the chart and then update it. Logically, I can't call UpdateDraw() directly, since my chart is in the UI thread which is not the same thread as where the data comes in.
So I call Dispatcher.Invoke(new Action(UpdateDraw)) - and this works fine, well, as long as I update at most 30 times/sec. When updating more often, the Dispatcher can't keep up and the chart updates more slowly than the data comes in. I tested this in a single-threaded situation with simulated data, and without the Dispatcher there are no problems.
So, my conclusion is that the Dispatcher is too slow for this situation. I actually need to update 100-200 times/sec!
Is there a way to put a turbo on the Dispatcher, or are there other ways to solve this? Any suggestions are welcome.
An option would be to use a shared queue to communicate the data.
Where the data comes on, you push the data to the end of the queue:
lock (sharedQueue)
{
sharedQueue.Enqueue(data);
}
On the UI thread, you find a way to read this data, e.g. using a timer:
var incomingData = new List<DataObject>();
lock (sharedQueue)
{
while (sharedQueue.Count > 0)
incomingData.Add(sharedQueue.Dequeue());
}
// Use the data in the incomingData list to plot.
The idea here is that you're not signalling each time a piece of data comes in. Because you have a constant stream of data, I suspect that's not a problem. I'm not saying that the exact implementation given above is the best, but this is about the general idea.
I'm not sure how you should check for new data, because I do not have enough insight into the details of the application; but this may be a start for you.
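Putting the two halves together as a sketch: a DispatcherTimer on the UI thread drains the queue in batches, so the chart redraws once per tick instead of once per sample. DataObject with X and Y fields, and the `chart` variable, are hypothetical placeholders for your sample type and your Quinn-Curtis control:

    var sharedQueue = new Queue<DataObject>();

    var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(33) }; // ~30 Hz
    timer.Tick += (s, e) =>
    {
        var incomingData = new List<DataObject>();
        lock (sharedQueue)
        {
            while (sharedQueue.Count > 0)
                incomingData.Add(sharedQueue.Dequeue());
        }

        foreach (var d in incomingData)
            chart.SetCurrentValue(d.X, d.Y); // the chart API from the question
        chart.UpdateDraw();                  // one redraw per batch
    };
    timer.Start();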
Your requirements are bonkers. You seriously do NOT need 100-200 updates per second, especially as the screen normally runs at 60 updates per second. People won't see them anyway.
Enter new data into a queue.
Trigger a pull event on / for the dispatcher.
Sanitize the data in the queue (throw out doubles; the last valid value wins) and put the rest in.
30 updates per second are enough; people won't see a difference. I had performance issues on some financial data under high load with a T&S until I did that; now the graph looks better.
Keep Dispatcher moves as few as you can.
I'd still like to know why you want to update a chart 200 times per second when your monitor can't even display it that fast. (Remember, normal flat-screen monitors have an update rate of 60 fps.)
What's the use of updating something 200 times per second when you can only SEE updates 60 times per second ?
You might as well batch incoming data and update the chart at 60 fps since you won't be able to see the difference anyway.
If it's not just about displaying the data but you're doing something else with it - say you are monitoring it to see if it reaches a certain threshold - than I recommend splitting the system in 2 parts : one part monitoring at full speed, the other independently displaying at the maximum speed your monitor can handle : 60 fps.
So please, tell us why you want to update a ui-control more often than it can be displayed to the user.
WPF drawing occurs in a separate thread. Depending on your chart's complexity, your PC must have a mega-decent video card to keep up with 100 frames per second. WPF uses Direct3D to draw everything on screen, and video-driver optimization for this was added in Vista (and improved in Windows 7). So on XP you might have trouble just because of your high data-output rate on a poorly suited OS.
Despite all that, I see no reason to print information to the screen at a rate of more than 30-60 frames per second. Come on! Even FPS shooters do not require such strong reflexes from the player. Do you want to tell me that your poor chart does? :) If by this output you produce some side effects, which are what you actually need, then it's a completely different story. Tell us more about the problem then.
Ideally I would like to monitor the signal strength of a wireless network in near real-time, say every 100ms, but such a high frequency is probably overkill.
I'm using the Managed Wifi library to poll RSSI. I instantiate a WlanClient once (_client = new WlanClient();) and re-use that client to measure signal strengths every second or so (but I'd like to do it more often):
foreach (WlanClient.WlanInterface wlanInterface in _client.Interfaces)
{
Wlan.WlanBssEntry[] wlanBssEntries = wlanInterface.GetNetworkBssList();
foreach (Wlan.WlanBssEntry wlanBssEntry in wlanBssEntries)
{
int sigStr = wlanBssEntry.rssi; // signal strength in dBm
// ...
}
}
What is the fastest practical polling delay and is this the best way to measure signal strength?
I'm afraid the smallest polling delay will vary with your driver stack, but I suspect also with the number of access points around. WiFi is a protocol based on time slots.
From my (limited) experience, a 1-second interval is about right; you will already see that the list of stations isn't always complete (i.e. stations missing on one scan are back on the next).
is this the best way to measure signal strength?
Depends, but how fast do you expect it to change? When walking around, the signal won't vary much over a second.
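As a sketch of that one-second cadence, reusing only the API already shown in the question (the handler body is a placeholder for whatever you do with the reading):

    var client = new WlanClient();
    var timer = new System.Threading.Timer(_ =>
    {
        foreach (WlanClient.WlanInterface wlanInterface in client.Interfaces)
        {
            foreach (Wlan.WlanBssEntry entry in wlanInterface.GetNetworkBssList())
            {
                int sigStr = entry.rssi; // signal strength in dBm
                // record or display sigStr here
            }
        }
    }, null, dueTime: 0, period: 1000); // poll once per second, per the advice above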
For most cases where you want to monitor anything, a reasonable guideline is to work out the slowest rate that fulfils your purpose, then increase the frequency a bit beyond that to catch delays and unexpected spikes.
If, for example, you were going to display this to a user, then updating much more than once per half-second means changes too quick for the user to meaningfully make sense of, so polling around every quarter of a second should be more than enough to be sure you're catching everything you need.
If you are logging, then it depends on how long your log period will be. Once every few minutes is likely to catch any serious problem times, so once a minute should do fine.
In all, while there is often some practical maximum frequency, it's not worth considering unless the maximum useful frequency is higher, and that depends on your purposes.