I have a C# WinForms app which displays a chart. Every time the source data changes, I do:
foreach (var s in chart.Series)
    s.Points.Clear();

foreach (var item in sourceData)
    chart.Series["Prsr"].Points.AddY(item.Prsr);
The problem is that sourceData may have over 30k items, and in my case the above method blocks the UI for over 1 second.
Does anyone know any better method to avoid the problem?
sourceData is a List<obj>
I also tried with the DataBindY:
chartMain.Series["Prsr"].Points.DataBindY(sourceData, "Prsr");
but it didn't help.
Thank you all very much.
First I tested @TnTinMn's suggestion (Points.SuspendUpdates and clearing all points in a loop instead of Points.Clear). It shortened the time to clear the example chart from 400 ms to 15 ms.
A very big improvement, but that was just half of the total time the UI was blocked.
Then @Flydog57's suggestion (SuspendUpdates() while adding points) shortened the time by almost 300 ms. Another big improvement.
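Combining the two suggestions, a minimal sketch (assuming the standard System.Windows.Forms.DataVisualization Chart control and the series name from the question):

var points = chartMain.Series["Prsr"].Points;

points.SuspendUpdates();   // no per-point invalidation while we work
try
{
    // Removing points one by one sidesteps the slow path in Points.Clear()
    while (points.Count > 0)
        points.RemoveAt(points.Count - 1);

    foreach (var item in sourceData)
        points.AddY(item.Prsr);
}
finally
{
    points.ResumeUpdates();   // a single redraw for the whole batch
}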
Now the next step will be, as suggested by @LarsTech, to reduce the number of samples to display (divide the data into ranges and calculate the average for each range). Why didn't I think of that?
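A hypothetical downsampling helper along those lines (the name, the bucket size, and the use of double are illustrative, not from the question):

// Average every 'bucketSize' consecutive samples, so 30k points
// collapse to a few hundred that can actually be distinguished on screen.
static IEnumerable<double> Downsample(IReadOnlyList<double> values, int bucketSize)
{
    for (int i = 0; i < values.Count; i += bucketSize)
    {
        int count = Math.Min(bucketSize, values.Count - i);
        double sum = 0;
        for (int j = 0; j < count; j++)
            sum += values[i + j];
        yield return sum / count;
    }
}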
Thank you all once again.
Currently I am using LocalReport.Render to create PDFs for 90K records. Using a normal 'for' loop, it takes around 4 hours just to create the PDFs. I have tried many options.
I tried Parallel.ForEach with and without setting MaxDegreeOfParallelism to different values. There are 2 processors in my system. With MaxDegreeOfParallelism (MDP) = 4, it takes the same time as the normal 'for' loop. I thought increasing MDP to 40 would speed up the process, but I didn't get the expected results since it took 900 minutes.
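For reference, a bounded Parallel.ForEach over the records would look roughly like this (GeneratePDF stands in for the poster's rendering call; for CPU-bound work, more workers than cores usually just adds contention):

var options = new ParallelOptions
{
    MaxDegreeOfParallelism = Environment.ProcessorCount
};

Parallel.ForEach(records, options, record =>
{
    GeneratePDF();   // assumed to render a single record's report
});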
I also tried starting one thread per record:
var list = new List<Thread>();
foreach (var record in records)
{
    var thread = new Thread(() => GeneratePDF());
    thread.Start();
    list.Add(thread);
}
foreach (var thread in list)
{
    thread.Join();
}
But it ended up creating too many threads and took even longer.
I need help using Parallel.ForEach to speed up the process of creating PDFs for 90K records. Suggestions to change the code are also welcome.
Any help would be much appreciated.
Thanks
I don't know any PDF generators, so I can only assume there is a lot of overhead in initializing and finalizing things. Here's what I'd do:
Find an open source pdf generator.
Let it generate a few separate pieces of a pdf - header, footer, etc.
Dig through the code to find where the header/footer is done, and try to work around them so you can reuse generator state without running through the entire process.
Try to stitch together a PDF from the stored states, with the generator writing only the parts that differ.
I'm working on a background-changing application. Part of the application is a slideshow with 3 image previews (3 picture boxes): previous, current, and next image. The problem is that each time the timer ticks, the application takes about another 8 MB of memory. I know it's most likely caused by the image drawing class, but I have no idea how to dispose of the images that I'm no longer using.
UPDATE:
Thank you so much. I needed to adjust the code you provided a little bit, but it works now. When I tried using the Dispose method before, I was calling it on a completely different object.
Thank you.
It works in the following order.
Load multiple images
retrieve image path
set time interval in which the images will be changed
start the timer
on each tick, the timer does the following:
pictureBoxCurr.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum));
pictureBoxPrev.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum - 1));
pictureBoxNext.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum + 1));
Each time new previews are shown, memory usage grows by another 8 MB or so. I have no idea what exactly is taking that space.
Please let me know if you know what is causing the problem or have any clues.
I would recommend calling the following code at every timer tick, prior to changing the images.
pictureBoxCurr.BackgroundImage?.Dispose();   // '?.' guards against a tick before any image is assigned
pictureBoxPrev.BackgroundImage?.Dispose();
pictureBoxNext.BackgroundImage?.Dispose();
This will free the unmanaged image resources immediately, rather than waiting for the Garbage Collector.
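Put together with the loading code from the question, the tick handler might look like this sketch (the handler name is illustrative):

private void Timer_Tick(object sender, EventArgs e)
{
    // Release the old images first...
    pictureBoxPrev.BackgroundImage?.Dispose();
    pictureBoxCurr.BackgroundImage?.Dispose();
    pictureBoxNext.BackgroundImage?.Dispose();

    // ...then load the new ones.
    pictureBoxPrev.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum - 1));
    pictureBoxCurr.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum));
    pictureBoxNext.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum + 1));
}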
In a WPF Window, I've got a line chart that plots real-time data (Quinn-Curtis RealTime chart for WPF). In short, for each new value, I call a SetCurrentValue(x, y) method, and then the UpdateDraw() method to update the chart.
The data comes in via a TCP connection in another thread. Every new value that comes in raises a DataReceived event, and its handler should plot the value on the chart and then update it. Logically, I can't call UpdateDraw() directly, since my chart lives on the UI thread, which is not the thread the data comes in on.
So I call Dispatcher.Invoke(new Action(UpdateDraw)), and this works fine as long as I update at most 30 times/sec. When updating more often, the Dispatcher can't keep up and the chart updates slower than the data comes in. I tested this in a single-threaded situation with simulated data, and without the Dispatcher there are no problems.
So, my conclusion is that the Dispatcher is too slow for this situation. I actually need to update 100-200 times/sec!
Is there a way to put a turbo on the Dispatcher, or are there other ways to solve this? Any suggestions are welcome.
An option would be to use a shared queue to communicate the data.
Where the data comes in, you push it onto the end of the queue:
lock (sharedQueue)
{
    sharedQueue.Enqueue(data);
}
On the UI thread, you find a way to read this data, e.g. using a timer:
var incomingData = new List<DataObject>();
lock (sharedQueue)
{
    while (sharedQueue.Count > 0)
        incomingData.Add(sharedQueue.Dequeue());
}
// Use the data in the incomingData list to plot.
The idea here is that you're not signaling the UI thread for every piece of data that comes in. Because you have a constant stream of data, I suspect that's not a problem. I'm not saying the exact implementation given above is the best, but this is the general idea.
I'm not sure how you should check for new data, because I do not have enough insight into the details of the application; but this may be a start for you.
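A self-contained sketch of the idea, assuming a Queue&lt;DataObject&gt; shared between the TCP thread and a DispatcherTimer on the UI side (SetCurrentValue/UpdateDraw are the chart calls from the question; the X/Y properties are assumptions, and ConcurrentQueue&lt;T&gt; would remove the explicit locks):

private readonly Queue<DataObject> sharedQueue = new Queue<DataObject>();

// TCP thread: producer
private void OnDataReceived(DataObject data)
{
    lock (sharedQueue)
    {
        sharedQueue.Enqueue(data);
    }
}

// UI thread: consumer, e.g. a DispatcherTimer ticking ~30 times/sec
private void PollTimer_Tick(object sender, EventArgs e)
{
    var incomingData = new List<DataObject>();
    lock (sharedQueue)
    {
        while (sharedQueue.Count > 0)
            incomingData.Add(sharedQueue.Dequeue());
    }

    foreach (var d in incomingData)
        chart.SetCurrentValue(d.X, d.Y);   // plot the whole batch...

    if (incomingData.Count > 0)
        chart.UpdateDraw();                // ...but redraw only once
}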
Your requirements are bonkers. You seriously do NOT need 100-200 updates per second, especially as the screen normally runs at 60 updates per second. People won't see them anyway.
Enter new data into a queue.
Trigger a pull event for the dispatcher.
Sanitize the data in the queue (throw out duplicates, last valid wins) and put it in.
30 updates per second are enough - people won't see a difference. I had performance issues on some financial data under high load with a T&S until I did that - now the graph looks better.
Keep Dispatcher calls as few as you can.
I'd still like to know why you'd want to update a chart 200 times per second when your monitor can't even display it that fast. (Remember, normal flat-screen monitors have a refresh rate of 60 fps.)
What's the use of updating something 200 times per second when you can only SEE updates 60 times per second?
You might as well batch incoming data and update the chart at 60 fps since you won't be able to see the difference anyway.
If it's not just about displaying the data but you're doing something else with it - say you are monitoring it to see if it reaches a certain threshold - then I recommend splitting the system into two parts: one part monitoring at full speed, the other independently displaying at the maximum speed your monitor can handle: 60 fps.
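A rough shape for that split, reusing the shared-queue idea from the earlier answer (the threshold check and the names are illustrative):

// Data thread: runs for every incoming value at full rate, no UI involved
private void OnValue(double value)
{
    if (value >= _threshold)          // hypothetical threshold to monitor
        OnThresholdExceeded(value);   // react immediately, at full speed

    lock (sharedQueue)
        sharedQueue.Enqueue(value);   // display path, drained at ~60 fps
}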
So please, tell us why you want to update a ui-control more often than it can be displayed to the user.
WPF drawing occurs in a separate thread. Depending on your chart's complexity, your PC must have a mega-decent video card to keep up with 100 frames per second. WPF uses Direct3D to draw everything on screen, and video driver optimization for this was added in Vista (and improved in Windows 7). So on XP you might have trouble just because of your high data-output rate on a poorly optimized OS.
Despite all that, I see no reason to print information to the screen at a rate of more than 30-60 frames per second. Come on! Even FPS shooters don't require such strong reflexes from the player. Do you want to tell me that your poor chart does? :) If this output produces side effects which are what you actually need, then it's a completely different story. Tell us more about the problem in that case.
This is going to be a long post. I would like suggestions, if any, on the procedure I am following. I want the best method to print line numbers next to each CRLF-terminated line in a RichTextBox. I am using C# with .NET. I have tried using a ListView, but it is inefficient when the number of lines grows. I have been successful using Graphics in a custom control to paint the line numbers, and so far I am happy with the performance.
But as the number of lines grows to 50K-100K, scrolling is affected badly. I have overridden the WndProc method and handle all the messages so that the line-number painting is called only when required. (Overriding OnContentsResized and OnVScroll makes redundant calls to the painting method.)
The line-number painting is fine when the number of lines is small, say up to 10K (which I could live with, as it is rarely necessary to edit a file with more than 10,000 lines), but I want to remove the limitation.
A few observations:
The number of lines displayed in the RichTextBox is constant ±1, so the performance difference should be due to the large text, not the Graphics painting.
Painting line numbers for large text is slower compared to small files.
Now the Pseudo Code
FIRST_LINE_NUMBER = _textBox.GetFirstVisibleLineNumber();
LAST_LINE_NUMBER = _textBox.GetLastVisibleLineNumber();
for (line = FIRST_LINE_NUMBER; line <= LAST_LINE_NUMBER; line++)
{
    Y = _textBox.GetYPositionOfLineNumber(line);
    graphics_paint_line_number(line, Y);
}
In both of the functions that get line numbers, I use GetCharIndexFromPosition and then loop through RichTextBox.Lines to find the line number. To get the Y position, I use GetPositionFromCharIndex to get the Point struct.
All of the above RichTextBox methods seem to be O(n), which eats up the performance. (Correct me if I am wrong.)
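For what it's worth, TextBoxBase already exposes native-backed lookups that avoid scanning the managed Lines array on every call (the control methods below are real; the wrapper names mirror the pseudocode and are illustrative):

int GetFirstVisibleLineNumber(RichTextBox box)
{
    // Character index at the top-left corner of the client area
    int charIndex = box.GetCharIndexFromPosition(new Point(0, 0));
    return box.GetLineFromCharIndex(charIndex);
}

int GetLastVisibleLineNumber(RichTextBox box)
{
    // Character index at the bottom-left corner of the client area
    int charIndex = box.GetCharIndexFromPosition(new Point(0, box.ClientSize.Height - 1));
    return box.GetLineFromCharIndex(charIndex);
}

int GetYPositionOfLineNumber(RichTextBox box, int lineNumber)
{
    int charIndex = box.GetFirstCharIndexFromLine(lineNumber);
    return box.GetPositionFromCharIndex(charIndex).Y;
}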
I have decided to use a binary tree to store the line numbers, to improve the performance of searching for a line number by char index. I have an idea for a data structure with O(n) construction time, O(n log n) worst-case update, and O(log n) search.
Is this approach worth the effort?
Is there any other approach to solve the problem? If required, I am ready to write the control from scratch; I just want it to be lightweight and fast.
Before deciding on the best way forward, we need to make sure we understand the bottleneck.
First of all, it is important to know how RichTextBox (which I assume you are using, as you mentioned it) handles large files. So I would recommend removing all the line-painting stuff and seeing how it performs with large text. If it is poor, there is your problem.
The second step would be to add some profiling statements, or just use a profiler (one comes with VS 2010), to find the bottleneck. It might turn out to be the method for finding the line number, or something else.
At this point, I would only suggest more investigation. If you have finished the investigation and have more info, update your question and I will get back to you accordingly.
I just got some code handed over to me. The code is written in C# and inserts real-time data into a database every second. The data accumulates over time, which makes the numbers big.
The data is updated many times within the second; at the end of the second, the result is taken and inserted.
We used to address the dataset rows directly within the second, through the properties. For example, many operations like datavaluerow.meanvalue += mean; could take place.
After running the profiler, we figured out that this degrades performance because of the internal casting, so we created a 2D array of decimals on which the updates are carried out; the values are assigned to the DataRows only at the end of the second.
I ran a profiler and found that this is still taking a lot of time (although, added up, less than the time previously spent accessing the DataRows frequently).
The code that is executed at the end of the second is as follows:
public void UpdateDataRows(int tick)
{
    // _table1Values is of type decimal[][]
    for (int i = 0; i < _table1Values.Length; i++)
    {
        _table1Values[i][(int)table1Enum.barDateTime] = tick;
        table1Row[i].ItemArray = _table1Values[i].Cast<object>().ToArray();
    }
    // this process is repeated for 10 other tables
}
Is there a way to further improve this approach?
One obvious question: why do you have a 2D array of decimals when you're only updating them with integers? Could you get away with an int[][] instead?
Next, why are you accessing (int)table1Enum.barDateTime on each iteration? Given that there's a conversion involved there, you may find it helps if you extract that out of the loop.
However, I suspect the majority of the time is going to be spent in _table1Values[i].Cast<object>().ToArray(). Do you really need to do that? Taking a copy of the decimal[] (or int[]) would be faster than boxing every value on every iteration on every call - and then creating another array.
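A sketch of those suggestions combined, assuming the types from the question (the enum cast is hoisted out of the loop, and the Cast<object>().ToArray() pair is replaced by one pre-sized object[] per row):

public void UpdateDataRows(int tick)
{
    int dateTimeIndex = (int)table1Enum.barDateTime;   // hoisted out of the loop

    for (int i = 0; i < _table1Values.Length; i++)
    {
        decimal[] source = _table1Values[i];
        source[dateTimeIndex] = tick;

        // Boxing each value is unavoidable for ItemArray, but a pre-sized
        // object[] avoids the LINQ iterator and the dynamically grown array.
        var boxed = new object[source.Length];
        for (int j = 0; j < source.Length; j++)
            boxed[j] = source[j];

        table1Row[i].ItemArray = boxed;
    }
}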