Has anyone had any luck getting good performance out of the built-in C# chart running in real time?
I have made a chart with 3 chart areas and 9 series, all of the FastLine type. I add points to the series in real time and shift the graphs over once 7 seconds of data has been graphed. All of this works, but the rate at which my graphs update is horribly slow. Sometimes it can take almost a second for incoming data to show up on the graph (and I often wonder whether the graph is accurately reflecting my data at all, since it is so slow and the data can change so fast).
I have tried using mychart.Series.SuspendUpdates(), Series.ResumeUpdates(), and Series.Invalidate(), as I saw suggested in various posts, with no noticeable results.
If anyone could share some insight into ways to optimize this I would be truly grateful. (Cutting the number of data points is not a valid optimization.)
Thanks in advance
OCV
If external libraries are an option, ZedGraph worked great for me when displaying data at 10 ms intervals (up to 8 series).
If you really must use the built-in C# chart, I think you might prevent blocking by separating data acquisition and drawing into separate threads.
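Here is a minimal sketch of that idea, assuming the System.Windows.Forms.DataVisualization Chart control: the acquisition thread only queues samples, and a UI timer drains the queue in one batch between SuspendUpdates/ResumeUpdates. The Sample struct, the 50 ms interval, and the single-series setup are just for illustration.

    using System.Collections.Concurrent;
    using System.Windows.Forms;
    using System.Windows.Forms.DataVisualization.Charting;

    public class ChartForm : Form
    {
        private struct Sample { public int Series; public double X, Y; }

        private readonly Chart chart = new Chart();
        private readonly ConcurrentQueue<Sample> pending = new ConcurrentQueue<Sample>();
        private readonly Timer uiTimer = new Timer { Interval = 50 };   // repaint ~20x per second

        public ChartForm()
        {
            chart.Dock = DockStyle.Fill;
            chart.ChartAreas.Add(new ChartArea());
            chart.Series.Add(new Series { ChartType = SeriesChartType.FastLine });
            Controls.Add(chart);

            uiTimer.Tick += (s, e) =>
            {
                chart.Series.SuspendUpdates();                  // no repaint per point
                Sample p;
                while (pending.TryDequeue(out p))
                    chart.Series[p.Series].Points.AddXY(p.X, p.Y);
                chart.Series.ResumeUpdates();                   // one repaint for the whole batch
            };
            uiTimer.Start();
        }

        // Called from the data-acquisition thread; it never touches the UI directly.
        public void Enqueue(int series, double x, double y)
        {
            pending.Enqueue(new Sample { Series = series, X = x, Y = y });
        }
    }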
Related
I've recently made a desktop application that communicates with a device at my job.
The general idea of the app is to give the device (an oven) a "set temperature" command and then, every 10 seconds, check the current temperature and display it on a graph using LiveCharts.
This application is required to run for multiple days at a time, and I think I have a memory leak problem.
What I experience is the application not responding for a while; then it becomes responsive and adds a single "log" entry (as in the LogTemp function) roughly every 1-2 minutes. It should be once every 10 seconds.
Edit: Just reread the lines before this edit and I think I was not clear enough. It works as it is supposed to for the first few hours; I only noticed the performance hit after 24 hours.
After 24 hours of running I came back to find it using over 800 MB of RAM, and it kept growing by the second.
I suspect it MAY have something to do with LiveCharts, but I am not sure by any means. (It ends up with 8,640 data points after 24 hours.)
I have no issue disclosing my code, and I have even trimmed the amount of code that needs to be shown to around 200 lines in total, split across a few functions. If anyone has heard of such an issue with LiveCharts, and/or can suggest a different graphing library, I'd be more than happy to swap it out.
Actually, here's the code, lmao:
https://pastebin.com/YBn5CuD6
Another thing I thought of: could it be that I am adding too many rows to the ListView? We're talking about 8,640 rows in 24 hours. Might that have something to do with it?
Sorry for the long post, thank you in advance.
For anyone interested: I lowered the sample rate, which controls how many points end up on the LiveCharts chart, and it now runs very smoothly, even after 3 days of running.
(RAM usage was also around 90 MB, which is what I'd expect from my application.)
I believe that was the issue, so I'll mark this as the answer for the next person googling a LiveCharts memory leak.
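A related trick (a sketch, not what was actually done here) is to cap how many points the ChartValues collection ever holds, so the chart stays light no matter how long the app runs; the cap value below is arbitrary.

    using LiveCharts;

    public class TemperatureLog
    {
        private const int MaxPoints = 360;   // e.g. one hour of data at a 10-second sample rate

        public ChartValues<double> Values { get; } = new ChartValues<double>();

        public void Add(double temperature)
        {
            Values.Add(temperature);
            if (Values.Count > MaxPoints)
                Values.RemoveAt(0);          // drop the oldest point so the series stays bounded
        }
    }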
I'm currently working on a C# WinForms application in which I want to visualize a 2D matrix as a heat map. The matrix is altered by another thread roughly 50 times per second.
I do not want the visualization to slow down the worker thread, so I suppose using a lock would be counter-productive. Moreover, it is fine if the visualization misses some matrix changes; updating it a few times per second is sufficient.
I'm unsure how I should proceed and hope someone can point me in the right direction.
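For concreteness, one lock-free pattern that would fit these constraints (a sketch only, assuming the worker can build a fresh matrix per update): the worker publishes each new matrix by swapping a reference, and a UI timer renders whatever snapshot is current a few times per second.

    using System.Threading;

    public class HeatMapSource
    {
        private double[,] latest = new double[0, 0];

        // Worker thread: build a fresh matrix for each update, then publish it atomically.
        public void Publish(double[,] newMatrix)
        {
            Interlocked.Exchange(ref latest, newMatrix);
        }

        // UI thread (e.g. a WinForms Timer at a few Hz): grab whatever snapshot is current.
        public double[,] Snapshot()
        {
            return Volatile.Read(ref latest);
        }
    }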
We are currently using ZedGraph to draw a line chart of some data. The input data comes from a file of arbitrary size, so we do not know the maximum number of data points in advance. However, by opening the file and reading the header, we can find out how many data points are in the file.
The file format is essentially [time (double), value (double)]. However, the entries are not uniform along the time axis. There may not be any points between, say, t = 0 s and t = 10 s, but there might be 100K entries between t = 10 s and t = 11 s, and so on.
As an example, our test dataset is ~2.6 GB and has 324M points. We'd like to show the entire graph to the user and let her navigate through the chart. However, loading 324M points into ZedGraph is not only impossible (we're on a 32-bit machine) but also useless, since there is no point in having so many points on the screen.
Using the FilteredPointList feature of ZedGraph also appears to be out of the question, since it requires loading the entire data set first and then filtering it.
So, unless we're missing something, it appears that our only solution is to somehow decimate the data; however, as we keep working on it, we keep running into problems:
1- How do we decimate data that is not arriving uniformly in time?
2- Since the entire data can't be loaded into memory, any algorithm needs to work on the disk and so needs to be designed carefully.
3- How do we handle zooming in and out, especially when the data is not uniform on the x-axis?
If the data were uniform, then on the initial load of the graph we could Seek() a predefined number of entries into the file and feed every Nth sample to ZedGraph. However, since the data is not uniform, we have to be more intelligent about choosing which samples to display, and we can't come up with any intelligent algorithm that would not have to read the entire file.
I apologize since the question does not have razor-sharp specificity, but I hope I could explain the nature and scope of our problem.
We're on Windows 32-bit, .NET 4.0.
I've needed this before, and it's not easy to do. I ended up writing my own graph component because of this requirement. It turned out better in the end because I put in all the features we needed.
Basically, you need to get the range of data (min and max possible/needed index values), subdivide it into segments (let's say 100 segments), and then determine a value for each segment by some algorithm (average value, median value, etc.). Then you plot based on those summarized 100 elements. This is much faster than trying to plot millions of points :-).
So what I am saying is similar to what you are saying. You mention that you do not want to plot every Xth element because there might be a long stretch of time (index values on the x-axis) between elements. What I am saying is that for each subdivision of the data you determine the best representative value and take that as the data point. My method is index-value based, so in your example of no data between the 0 s and 10 s index values I would still put data points there; they would just all share the same value.
The point is to summarize the data before you plot it. Think through your algorithms for doing that carefully; there are lots of ways to do it, so choose the one that works for your application.
You might get away with not writing your own graph component and just write the data summarization algorithm.
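Here is a rough sketch of that segment-summarization idea; the Sample type, the choice of averaging, and the handling of empty buckets are only illustrative.

    using System;
    using System.Collections.Generic;

    public struct Sample
    {
        public double X, Y;
        public Sample(double x, double y) { X = x; Y = y; }
    }

    public static class SeriesSummarizer
    {
        // Split [xMin, xMax] into a fixed number of buckets and keep one averaged value per
        // bucket; empty buckets repeat the previous value, as described above.
        public static Sample[] Summarize(IEnumerable<Sample> points, double xMin, double xMax, int segments)
        {
            var sums = new double[segments];
            var counts = new int[segments];
            double width = (xMax - xMin) / segments;

            foreach (var p in points)
            {
                int i = Math.Min(segments - 1, Math.Max(0, (int)((p.X - xMin) / width)));
                sums[i] += p.Y;
                counts[i]++;
            }

            var result = new Sample[segments];
            for (int i = 0; i < segments; i++)
            {
                double y = counts[i] > 0 ? sums[i] / counts[i]
                                         : (i > 0 ? result[i - 1].Y : 0.0);
                result[i] = new Sample(xMin + (i + 0.5) * width, y);
            }
            return result;
        }
    }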
I would approach this in two steps:
Pre-processing the data
Displaying the data
Step 1
The file should be preprocessed into a binary file with a fixed record format.
Adding an index to the format, each record would be int, double, double.
See this article for speed comparisons:
http://www.codeproject.com/KB/files/fastbinaryfileinput.aspx
You can then either break the file up into time intervals, say one per hour or per day, which gives you an easy way to access different time ranges. Alternatively, you could keep one big file and have an index file that tells you where to find specific times, e.g.:
1,1/27/2011 8:30:00
13456,1/27/2011 9:30:00
By using one of these methods you will be able to quickly find any block of data, either by time (via the index or the file name) or by number of entries (thanks to the fixed byte format).
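For illustration, a sketch of what the fixed-size record layout could look like in code (each record assumed to be int, double, double, i.e. 20 bytes; the class and method names are made up):

    using System.IO;

    public static class FixedRecordFile
    {
        // Each record: int index, double time, double value -> 20 bytes.
        private const int RecordSize = sizeof(int) + 2 * sizeof(double);

        public static void Append(BinaryWriter w, int index, double time, double value)
        {
            w.Write(index);
            w.Write(time);
            w.Write(value);
        }

        // Because records are fixed width, the Nth record is a single Seek away.
        public static void ReadAt(string path, long recordNumber,
                                  out int index, out double time, out double value)
        {
            using (var fs = File.OpenRead(path))
            using (var r = new BinaryReader(fs))
            {
                fs.Seek(recordNumber * RecordSize, SeekOrigin.Begin);
                index = r.ReadInt32();
                time = r.ReadDouble();
                value = r.ReadDouble();
            }
        }
    }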
Step 2
Ways to show data
1. Just display each record by index.
2. Normalize data and create aggregate data bars with open, high, low, close values.
a. By Time
b. By record count
c. By Diff between value
For more possible ways to aggregate non-uniform data sets, you may want to look at the methods used to aggregate trade data in the financial markets. Of course, for speed in real-time rendering you would want to create files with this data already aggregated.
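As an illustration of option 2a, here is a rough sketch of aggregating time-sorted (time, value) records into open/high/low/close bars per fixed interval; the types and bucket handling are assumptions, not a prescribed format:

    using System;
    using System.Collections.Generic;

    public struct TimeValue
    {
        public double Time, Value;
        public TimeValue(double time, double value) { Time = time; Value = value; }
    }

    public class Bar
    {
        public double Start;                     // bucket start time (seconds)
        public double Open, High, Low, Close;
    }

    public static class BarAggregator
    {
        // Samples are assumed to be sorted by time.
        public static List<Bar> ByTime(IEnumerable<TimeValue> samples, double intervalSeconds)
        {
            var bars = new List<Bar>();
            Bar current = null;

            foreach (var s in samples)
            {
                double bucketStart = Math.Floor(s.Time / intervalSeconds) * intervalSeconds;
                if (current == null || bucketStart > current.Start)
                {
                    current = new Bar { Start = bucketStart, Open = s.Value, High = s.Value,
                                        Low = s.Value, Close = s.Value };
                    bars.Add(current);
                }
                current.High = Math.Max(current.High, s.Value);
                current.Low = Math.Min(current.Low, s.Value);
                current.Close = s.Value;
            }
            return bars;
        }
    }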
1- How do we decimate data that is not arriving uniformly in time?
(Note - I'm assuming your loader datafile is in text format.)
On a similar project, I had to read data files that were more than 5 GB in size. The only way I could parse them was by reading them into an RDBMS table. We chose MySQL because it makes importing text files into data tables drop-dead simple. (An interesting aside: I was on a 32-bit Windows machine and couldn't even open the text file for viewing, but MySQL read it without a problem.) The other perk was that MySQL is screaming, screaming fast.
Once the data was in the database, we could easily sort it and condense large amounts of it into summary queries (using built-in SQL aggregate functions like SUM). MySQL could even write its query results back out to a text file for use as loader data.
Long story short, consuming that much data mandates the use of a tool that can summarize the data. MySQL fits the bill (pun intended...it's free).
A relatively easy alternative I've found is the following:
Iterate through the data in small point groupings (say 3 to 5 points at a time - the larger the group, the faster the algorithm will work but the less accurate the aggregation will be).
Compute the min & max of the small group.
Remove all points that are not the min or max from that group (i.e. you only keep 2 points from each group and omit the rest).
Keep looping through the data (repeating this process) from start to end removing points until the aggregated data set has a sufficiently small number of points where it can be charted without choking the PC.
I've used this algorithm in the past to take datasets of ~10 million points down to the order of ~5K points without any obvious visible distortion to the graph.
The idea here is that, while throwing out points, you're preserving the peaks and valleys so the "signal" viewed in the final graph isn't "averaged down" (normally, if averaging, you'll see the peaks and the valleys become less prominent).
The other advantage is that you're always seeing "real" datapoints on the final graph (it's missing a bunch of points, but the points that are there were actually in the original dataset so, if you mouse over something, you can show the actual x & y values because they're real, not averaged).
Lastly, this also helps with the problem of not having consistent x-axis spacing (again, you'll have real points instead of averaging X-Axis positions).
I'm not sure how well this approach would work w/ 100s of millions of datapoints like you have, but it might be worth a try.
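For reference, a sketch of the grouping steps above in code (points are represented as [x, y] arrays and the group size is arbitrary; run it repeatedly until the result is small enough to chart):

    using System;
    using System.Collections.Generic;

    public static class MinMaxDecimator
    {
        public static List<double[]> Decimate(IList<double[]> points, int groupSize)
        {
            var kept = new List<double[]>();               // each point is [x, y]
            for (int start = 0; start < points.Count; start += groupSize)
            {
                int end = Math.Min(start + groupSize, points.Count);
                int minIdx = start, maxIdx = start;
                for (int i = start; i < end; i++)
                {
                    if (points[i][1] < points[minIdx][1]) minIdx = i;
                    if (points[i][1] > points[maxIdx][1]) maxIdx = i;
                }
                // Keep only the two extreme points, in their original x order; both are real samples.
                kept.Add(points[Math.Min(minIdx, maxIdx)]);
                if (minIdx != maxIdx)
                    kept.Add(points[Math.Max(minIdx, maxIdx)]);
            }
            return kept;
        }
    }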
I am developing a program in C#, and thanks to MATLAB .NET Builder I am using the MATLAB Mapping Toolbox function "polybool", which among other options can calculate the difference of two 2-D polygons.
The problem is that the function takes about 0.01 seconds to finish, which is bad for me because I call it a lot.
This doesn't make sense at all, because the polygons are only 5 points each, so there is no way it should take 0.01 seconds to find the result.
Does anyone have any ideas?
How are you measuring the 0.01 seconds? If this is the total operation time, it may very well be the marshaling into and out of the toolbox functionality, which takes some time. The actual routine may be running quickly, but getting your data from C# into the routine, and the results back out, carries some overhead.
Granted, this overhead probably scales well: since it is most likely (mostly) constant, if you start dealing with larger polygons you'll probably see your overall efficiency scale very well.
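One way to check this (a sketch, nothing more) is to time a warmed-up loop with Stopwatch and compare the average per-call time for small and large polygons; if it barely changes with polygon size, a fixed per-call marshaling overhead is the likely culprit.

    using System;
    using System.Diagnostics;

    public static class TimingProbe
    {
        // Pass the polybool call (or any delegate) and look at the average per-call time.
        public static void Measure(Action call, int iterations)
        {
            call();                                    // warm-up: JIT, MCR start-up, etc.

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                call();
            sw.Stop();

            Console.WriteLine("Average per call: {0:F4} ms",
                              sw.Elapsed.TotalMilliseconds / iterations);
        }
    }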
I need to build a high-performance WinForms data grid using Visual Studio 2005, and I'm at a loss as to where to start. I've built plenty of data grid applications, but none of them coped well when the data was constantly refreshing.
The grid is going to be roughly 100 rows by 40 columns, and each cell in the grid is going to update between 1 and 2 times a second (some cells possibly more). To me, this is the biggest drawback of the out-of-the-box data grid: the repainting isn't very efficient.
A couple of caveats:
1) No third-party vendors. This grid is the backbone of all our applications, so while XCeed or Syncfusion or whatever might get us up and running faster, we'd slam into its limitations and be hosed. I'd rather put in the extra work up front and have a grid that does exactly what we need.
2) I have access to Visual Studio 2008, so if it would be much better to start this in 2008, I can do that. If it's a toss-up, I'd like to stick with 2005.
So what's the best approach here?
I would recommend the following approach if you have many cells updating at different rates: rather than trying to invalidate each cell every time its value changes, you are better off limiting the refresh rate.
Have a timer that fires at a predefined rate, such as 4 times per second, and each time it fires repaint the cells that have changed since the last tick. You can then tweak the update rate to find the best compromise between performance and usability with some simple testing.
This has the advantage of not updating so often that you kill CPU performance. It batches up changes between refresh cycles, so two quick changes to a value that occur fractions of a second apart do not cause two refreshes when only the latest value is actually worth drawing.
Note that this delayed drawing only applies to rapid value updates and not to general drawing, such as when the user moves the scroll bar. In that case you should draw as fast as the scroll events occur, to give a nice smooth experience.
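A minimal sketch of that approach, assuming an owner-drawn grid control; the 250 ms interval, the dirty-cell set, and the fixed cell layout are illustrative only.

    using System.Collections.Generic;
    using System.Drawing;
    using System.Windows.Forms;

    public class FastGrid : Control
    {
        private readonly HashSet<Point> dirtyCells = new HashSet<Point>();     // (col, row)
        private readonly Timer refreshTimer = new Timer { Interval = 250 };    // ~4 refreshes per second

        public FastGrid()
        {
            DoubleBuffered = true;
            refreshTimer.Tick += delegate
            {
                foreach (Point cell in dirtyCells)
                    Invalidate(GetCellBounds(cell.X, cell.Y));  // repaint only the changed cells
                dirtyCells.Clear();
            };
            refreshTimer.Start();
        }

        // Called on the UI thread whenever a value changes; no immediate repaint.
        public void MarkDirty(int col, int row)
        {
            dirtyCells.Add(new Point(col, row));
        }

        private Rectangle GetCellBounds(int col, int row)
        {
            return new Rectangle(col * 80, row * 20, 80, 20);   // placeholder cell layout
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            // Draw only the cells that intersect e.ClipRectangle here.
        }
    }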
We use the Syncfusion grid control, and from what I've seen it's pretty flexible if you take the time to modify it. I don't work with the control myself; one of my co-workers does all of the grid work, but we've extended it to our needs pretty well, including custom painting.
I know this isn't exactly answering your question, but writing a control like this from scratch is always going to be much more complicated than you anticipate, regardless of your anticipations. Since it'll be constantly updating, I assume it's going to be data-bound, which will be a chore in itself, especially to get it to be highly performant. Then there's debugging it.
Try the grid from DevExpress or ComponentOne. I know from experience that the built-in grids are never going to be fast enough for anything but the most trivial of applications.
I have been planning to build a grid control that does the same thing as a pastime, but still haven't found the time. Most of the commercial grid controls have a big memory footprint, and updating is typically an issue.
My tips would be (if you go the custom-control route):
1. Extend a Control (not UserControl or something similar). It will give you speed, without losing much.
2. In my case I was targeting a grid that holds a lot of data, say a million rows with some 20-100-odd columns. In such scenarios it usually makes more sense to draw it yourself. Do not try to represent each cell with a Control (such as a Label, TextBox, etc.); they eat up a lot of resources (window handles, memory, etc.).
3. Go MVC.
The idea is simple: at any given time you can only display a limited amount of data, due to screen size, the limits of the human eye, and so on.
So your viewport is very small even if you have a gazillion rows and columns, and the number of updates you have to do is no more than about 5 per second for the data to be readable, even if the data behind the grid is being updated a gazillion times per second. Also remember that even if the text/image to be displayed per cell is huge, the user is still limited by the cell size.
Caching styles (a generic word for text sizes, fonts, colors, etc.) also helps in such scenarios, depending on how many of them you will use in your grid.
There will be a lot more work in getting the basic drawing (highlights, grid lines, boundaries, borders, etc.) done to achieve the various effects.
I don't recall exactly, but there was a C# .NET grid on SourceForge which can give you a good idea of how to start. That grid offered two options: a VirtualGrid, where the model data is not held by the grid, making it very lightweight, and a real (traditional) grid, where the data storage is owned by the grid itself (mostly creating a duplicate, but that depends on the application).
For a grid that is super-agile in terms of updates, it might just be better to have a "VirtualGrid".
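As a rough sketch of the "VirtualGrid" idea (not the SourceForge grid itself): the control owns no data and asks a model callback only for the cells currently in the viewport when painting; the delegate and fixed cell size are assumptions.

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class VirtualGrid : Control
    {
        public Func<int, int, string> CellProvider;   // (row, col) -> text to display
        public int RowCount, ColumnCount;
        private const int CellWidth = 80, CellHeight = 20;

        public VirtualGrid() { DoubleBuffered = true; }

        protected override void OnPaint(PaintEventArgs e)
        {
            if (CellProvider == null) return;

            // Only the rows/columns that intersect the clip rectangle are drawn,
            // so the cost depends on the viewport, not on the total number of cells.
            int firstRow = Math.Max(0, e.ClipRectangle.Top / CellHeight);
            int lastRow  = Math.Min(RowCount - 1, e.ClipRectangle.Bottom / CellHeight);
            int firstCol = Math.Max(0, e.ClipRectangle.Left / CellWidth);
            int lastCol  = Math.Min(ColumnCount - 1, e.ClipRectangle.Right / CellWidth);

            for (int r = firstRow; r <= lastRow; r++)
                for (int c = firstCol; c <= lastCol; c++)
                    e.Graphics.DrawString(CellProvider(r, c), Font, Brushes.Black,
                                          c * CellWidth, r * CellHeight);
        }
    }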
Just my thoughts