I'm working on a background changing application. Part of the application is a slideshow with 3 image previews (3 image boxes): previous, current and next image. The problem is that each time the timer ticks, the application's memory usage grows by about 8 MB. I know it's most likely caused by the image drawing class, but I have no idea how to dispose of the images that I'm not using.
UPDATE:
Thank you so much. I needed to adjust the code you provided a little bit, but it works now. When I tried using the Dispose method before, I called it on a completely different object.
Thank you.
It works in the following order.
Load multiple images
Retrieve the image paths
Set the time interval at which the images will be changed
Start the timer
With each timer tick, the following runs:
pictureBoxCurr.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum));
pictureBoxPrev.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum - 1));
pictureBoxNext.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum + 1));
Each time new previews are shown, the memory usage grows by another 8 MB or so. I have no idea what exactly is taking that space.
Please let me know if you know what is causing the problem or have any clues.
I would recommend calling the following code at every timer tick, prior to changing the images.
pictureBoxCurr.BackgroundImage.Dispose();
pictureBoxPrev.BackgroundImage.Dispose();
pictureBoxNext.BackgroundImage.Dispose();
This will free the unmanaged image resources immediately, rather than waiting for the Garbage Collector.
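Putting it together, the tick handler would then look roughly like this (a sketch based on the snippets above; the null checks just guard the very first tick, when the picture boxes have no image yet):

// Dispose the old previews first, then load the new ones.
if (pictureBoxCurr.BackgroundImage != null) pictureBoxCurr.BackgroundImage.Dispose();
if (pictureBoxPrev.BackgroundImage != null) pictureBoxPrev.BackgroundImage.Dispose();
if (pictureBoxNext.BackgroundImage != null) pictureBoxNext.BackgroundImage.Dispose();

pictureBoxCurr.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum));
pictureBoxPrev.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum - 1));
pictureBoxNext.BackgroundImage = Image.FromFile(_filenames.ElementAt(_currNum + 1));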
I made a program that at its heart is a keyboard hook: I press a specific button and it performs a specific action. Since there is a fairly large list of options I can select from using a ComboBox, I decided to make a Dictionary called ECCMDS (stands for embedded controller commands). I can then set my ComboBox items to ECCMDS.Keys and select a command by name. It makes for easy saving too: because it's a string, I just save it to an XML file. The program monitors anywhere from 4-8 buttons. The problem comes at runtime: the program uses about 53 megs of memory (of course, I look over at it now and it says 16 megs :/). The tablet this runs on has 3 GB of memory and an Atom processor. Normally I'd scoff at 53 megs, but with a huge switch statement the program used about 2 or 3 megs (it's been some time since I actually looked at its usage, so I can't remember exactly).
So although the Dictionary greatly reduces the complexity of my RunCommand method, I'm wondering about the memory usage. This tablet at idle is using 80% of its memory, so I'd like to make as little of an impact on that as possible. Is there another solution to this problem? Here is a small example of the dictionary:
ECCMDS = new Dictionary<string, Action>()
{
{"Decrease Backlight", EC.DescreaseBrightness},
{"Increase Backlight", EC.IncreaseBrightness},
{"Toggle WiFi", new Action(delegate{EC.WirelessState = GetToggledState(EC.WirelessState);})},
{"Enable WiFi", new Action(delegate{EC.WirelessState = ObjectState.Enabled;})},
{"Disable WiFi", new Action(delegate{EC.WirelessState = ObjectState.Disabled;})},
{"{PRINTSCRN}", new Action(delegate{VKeys.User32Input.DoPressRawKey(0x2C);})},
};
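For reference, the lookup-and-invoke itself is trivial; a simplified sketch of what a RunCommand built on this dictionary could look like (the actual RunCommand is mentioned above but not shown, so this is only an assumption about its shape):

private void RunCommand(string commandName)
{
    Action action;
    if (ECCMDS.TryGetValue(commandName, out action))
    {
        action();   // invoke the stored delegate, e.g. EC.IncreaseBrightness
    }
}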
Is it possible to use reflection or something to achieve this?
EDIT
So after the nice suggestion of making a new program and comparing the two methods, I've determined that it is not my Dictionary. I didn't think WPF made that big of a difference compared to WinForms, but it must. The new program hardly has any pictures (unlike it used to; most of my graphics are generated now), but the results are as follows:
Main Entry Point: 32356 kb
Before Huge Dictionary: 33724 kb
After Initialization: 35732 kb
After 10000 runs: 37824 kb
That took 932 ms to run
After Huge Dictionary: 38444 kb
Before Huge Switch Statement: 39060 kb
After Initialization: 39696 kb
After 10000 runs: 40076 kb
That took 1136 ms to run
After Huge Switch Statement: 40388 kb
I suggest you extract the Dictionary to a separate program and measure how much memory it actually occupies before you worry about it being your problem.
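A quick way to do that comparison is to snapshot GC.GetTotalMemory around building the dictionary in a small console app. This is only a sketch: it measures the managed heap rather than the working set Task Manager shows, and BuildEccmds is a hypothetical helper standing in for whatever builds your dictionary.

long before = GC.GetTotalMemory(true);                 // force a collection, then read managed heap usage
Dictionary<string, Action> eccmds = BuildEccmds();     // hypothetical helper that builds the big dictionary
long after = GC.GetTotalMemory(true);
Console.WriteLine("Dictionary costs roughly " + (after - before) + " bytes on the managed heap");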
Update: The answers from Andrew and Conrad were both equally helpful. The easy fix for the timing issue fixed the problem, and caching the bigger object references instead of re-building them every time removed the source of the problem. Thanks for the input, guys.
I'm working with a C# .NET API, and for some reason the following code executes extremely slowly, as far as I can tell.
This is the handler for a System.Timers.Timer that triggers its elapsed event every 5 seconds.
private static void TimerGo(object source, System.Timers.ElapsedEventArgs e)
{
tagList = reader.GetData(); // This is a collection of 10 objects.
storeData(tagList); // This calls the 'storeData' method below
}
And the storeData method:
private static void storeData(List<obj> tagList)
{
TimeSpan t = (DateTime.UtcNow - new DateTime(1970, 1, 1));
long timestamp = (long)t.TotalSeconds;
foreach (var tag in tagList)
{
string file = @"path\to\file" + tag.name + ".rrd";
RRD dbase = RRD.load(file);
// Update the rrd with the current timestamp and data.
dbase.update(timestamp, new object[1] { tag.data });
}
}
Am I missing some glaring resource sink? The RRD stuff you see is from the NHawk C# wrapper for rrdtool; in this case I update 10 different files with it, but I see no reason why it should take so long.
When I say 'so long', I mean the timer was triggering a second time before the first update was done, so eventually "update 2" would happen before "update 1", which breaks things because "update 1" has a timestamp that's earlier than "update 2".
I increased the timer length to 10 seconds, and it ran for longer, but still eventually out-raced itself and tried to update a file with an earlier timestamp. What can I do differently to make this more efficient, because obviously I'm doing something drastically wrong...
This doesn't really answer your perf question, but if you want to fix the reentrancy bit, set your timer's AutoReset to false and then call Start() at the end of the method, e.g.:
private static void TimerGo(object source, System.Timers.ElapsedEventArgs e)
{
tagList = reader.GetData(); // This is a collection of 10 objects.
storeData(tagList); // This calls the 'storeData' method below
timer.Start();
}
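The one-off setup for that would be roughly the following (assuming a static System.Timers.Timer field named timer, matching the handler above):

timer = new System.Timers.Timer(5000);   // 5 second interval
timer.AutoReset = false;                 // do not re-fire automatically; TimerGo calls Start() when it's done
timer.Elapsed += TimerGo;
timer.Start();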
Is there a different RRD file for each tag in your tagList? In your pseudo-code you open each file N times. (You stated there are only 10 objects in the list, though.) Then you perform an update. I can only assume that you dispose your RRD file after you have updated it; if you do not, you are keeping references to an open file.
If the RRD is the same but you are just putting different types of plot data into a single file then you only need to keep it open for as long as you want exclusive write access to it.
Without profiling the code you have a few options (I recommend profiling, by the way):
Keep the RRD files open
Cache the opened files so you don't have to open, write, and close every 5 seconds for each file. Just cache the 10 opened file references and write to them every 5 seconds.
Separate the data collection from the data writing
It appears you are taking metric samples from some object every 5 seconds. If you do not have something 'tailing' your file, separate the collection from the writing: take your data sample and throw it into a queue to be processed. The processor dequeues each tagList and writes it as fast as it can, going back to the queue for more lists (see the sketch below).
This way you can always be sure you are getting ~5 second samples even if the writing mechanism is slowed down.
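A rough sketch of that separation (this assumes .NET 4's ConcurrentQueue from System.Collections.Concurrent; on an older framework a Queue<T> guarded by a lock works the same way, and obj stands for whatever your tag type really is):

// Shared between the timer callback (producer) and a dedicated writer thread (consumer).
private static readonly ConcurrentQueue<List<obj>> pending = new ConcurrentQueue<List<obj>>();

private static void TimerGo(object source, System.Timers.ElapsedEventArgs e)
{
    pending.Enqueue(reader.GetData());    // sampling stays cheap and on schedule
}

private static void WriterLoop()          // run this on a background thread started at startup
{
    while (true)
    {
        List<obj> tagList;
        while (pending.TryDequeue(out tagList))
            storeData(tagList);           // the slow RRD writes happen here, off the timer
        Thread.Sleep(100);                // brief idle wait when the queue is empty
    }
}

One caveat: storeData currently computes the timestamp when it writes, so if the exact sample moment matters you would want to capture the timestamp at enqueue time and carry it along with the tagList.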
Use a profiler. JetBrains is my personal recommendation. Run the profiler with your program and look for the threads / methods taking the longest time to run. This sounds very much like an IO or data issue, but that's not immediately obvious from your example code.
In a WPF Window, I've got a line chart that plots real-time data (Quinn-Curtis RealTime chart for WPF). In short, for each new value, I call a SetCurrentValue(x, y) method, and then the UpdateDraw() method to update the chart.
The data comes in via a TCP connection in another thread. Every new value that comes in causes an DataReceived event, and its handler should plot the value to the chart and then update it. Logically, I can't call UpdateDraw() directly, since my chart is in the UI thread which is not the same thread as where the data comes in.
So I call Dispatcher.Invoke(new Action(UpdateDraw)) - and this works fine, as long as I update at most 30 times/sec. When updating more often, the Dispatcher can't keep up and the chart updates slower than the data comes in. I tested this in a single-threaded situation with simulated data, and without the Dispatcher there are no problems.
So, my conclusion is that the Dispatcher is too slow for this situation. I actually need to update 100-200 times/sec!
Is there a way to put a turbo on the Dispatcher, or are there other ways to solve this? Any suggestions are welcome.
An option would be to use a shared queue to communicate the data.
Where the data comes on, you push the data to the end of the queue:
lock (sharedQueue)
{
sharedQueue.Enqueue(data);
}
On the UI thread, you find a way to read this data, e.g. using a timer:
var incomingData = new List<DataObject>();
lock (sharedQueue)
{
while (sharedQueue.Count > 0)
incomingData.Add(sharedQueue.Dequeue());
}
// Use the data in the incomingData list to plot.
The idea here is that you're not signalling the UI for every single piece of data that comes in. Because you have a constant stream of data, I suspect that's not a problem. I'm not saying the exact implementation given above is the best, but this is the general idea.
I'm not sure how you should check for new data, because I do not have enough insight into the details of the application; but this may be a start for you.
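On the UI side, that reading could be done with a DispatcherTimer, for example. This is only a sketch: sharedQueue and DataObject are the names used above, chart stands for your chart control, and I'm assuming DataObject exposes the X and Y you plot.

var uiTimer = new System.Windows.Threading.DispatcherTimer();
uiTimer.Interval = TimeSpan.FromMilliseconds(33);    // roughly 30 updates per second
uiTimer.Tick += (sender, args) =>
{
    var incomingData = new List<DataObject>();
    lock (sharedQueue)
    {
        while (sharedQueue.Count > 0)
            incomingData.Add(sharedQueue.Dequeue());
    }
    foreach (var point in incomingData)
        chart.SetCurrentValue(point.X, point.Y);
    if (incomingData.Count > 0)
        chart.UpdateDraw();                           // one redraw per batch instead of per point
};
uiTimer.Start();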
Your requirements are bonkers - you seriously do NOT need 100-200 updates per second, especially as the screen normally runs at 60 updates per second. People won't see them anyway.
Enter new data into a queue.
Trigger a pull event on / for the dispatcher.
Sanitize the data in the queue (throw out duplicates, last valid value wins) and put the result in (see the sketch below).
30 updates per second are enough - people won't see a difference. I had performance issues on some financial data under high load with a T&S until I did that - now the graph looks better.
Keep Dispatcher calls as few as you can.
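The sanitize step can be as simple as collapsing the drained batch to the last value per x before handing it to the chart. A sketch only: 'batch' is whatever list you pulled off the queue, chart stands for your chart control, and the key depends on what counts as a duplicate for your data.

var latest = new Dictionary<double, double>();   // x -> last y seen in this batch
foreach (var point in batch)
    latest[point.X] = point.Y;                   // later values overwrite earlier ones
foreach (var pair in latest)
    chart.SetCurrentValue(pair.Key, pair.Value);
chart.UpdateDraw();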
I'd still like to know why you want to update a chart 200 times per second when your monitor can't even display it that fast. (Remember, normal flat-screen monitors have an update rate of 60 fps.)
What's the use of updating something 200 times per second when you can only SEE updates 60 times per second?
You might as well batch incoming data and update the chart at 60 fps, since you won't be able to see the difference anyway.
If it's not just about displaying the data and you're doing something else with it - say you are monitoring it to see if it reaches a certain threshold - then I recommend splitting the system into two parts: one part monitoring at full speed, the other independently displaying at the maximum speed your monitor can handle: 60 fps.
So please, tell us why you want to update a UI control more often than it can be displayed to the user.
WPF drawing occurs in a separate thread. Depending on your chart's complexity, your PC would need a very decent video card to keep up with 100 frames per second. WPF uses Direct3D to draw everything on screen, and video-driver optimization for this was only added in Vista (and improved in Windows 7). So on XP you might have trouble just because of your high data-output rate on a poorly designed OS.
Despite all that, I see no reason to print information to the screen at a rate of more than 30-60 frames per second. Come on! Even FPS shooters don't require such strong reflexes from the player. Do you want to tell me that your poor chart does? :) If by this output you produce some side effects which are what you actually need, then it's a completely different story. Tell us more about the problem then.
I'm programming a Netduino board using the .NET Micro Framework 4.1 and want to get a higher time resolution than milliseconds. This is because I'm attempting to dim an LED by blinking it really fast.
The issue is that the sample code uses Thread.Sleep(..) which takes a number of milliseconds.
Sample code from http://netduino.com/projects/ showing the issue in question:
OutputPort ledOnboard = new OutputPort(Pins.ONBOARD_LED, false);
while (true)
{
ledOnboard.Write(true);
Thread.Sleep(1); // << PROBLEM: Can only get as low as 1 millisecond
ledOnboard.Write(false);
Thread.Sleep(1);
}
Even if there's another way to accomplish dimming by not using a greater time resolution, I'm game.
This doesn't answer your question about getting a better time resolution, but it does solve your problem with changing the brightness on an LED. You should be using the PWM module for the Netduino.
Netduino Basics: Using Pulse Width Modulation (PWM) is a great article on how to use it.
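A minimal sketch of that approach, assuming the SecretLabs PWM wrapper that ships with the Netduino SDK and a PWM-capable pin such as D5 (check which pins support PWM on your board, and see the article above for the exact API on your firmware version):

var led = new SecretLabs.NETMF.Hardware.PWM(SecretLabs.NETMF.Hardware.Netduino.Pins.GPIO_PIN_D5);
led.SetDutyCycle(25);   // roughly 25% brightness: the hardware toggles the pin for you, no Thread.Sleep needed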
I have had a similar problem in the past and used the following method to time in the microsecond range. The first member determines how many ticks are in a millisecond (it's been a while since I used this, but I think 1 tick was 10 microseconds). The second gets the amount of time the system has been on, in ticks. I hope this helps.
public const Int64 ticks_per_millisecond = System.TimeSpan.TicksPerMillisecond;
public static long GetCurrentTimeInTicks()
{
return Microsoft.SPOT.Hardware.Utility.GetMachineTime().Ticks;
}
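For example, to time a section of code in microseconds with those helpers (a small usage sketch):

long start = GetCurrentTimeInTicks();
// ... code being timed ...
long elapsedTicks = GetCurrentTimeInTicks() - start;
long elapsedMicroseconds = (elapsedTicks * 1000) / ticks_per_millisecond;   // ticks_per_millisecond ticks make up one millisecond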
You can use a timer to raise an event instead of using sleep.
The Interval property on a timer is a double so you can have less than a millisecond on it.
http://msdn.microsoft.com/en-us/library/0tcs6ww8(v=VS.90).aspx
In his comment to Seidleroni's answer BrainSlugs83 suggests "sit in a busy loop and wait for the desired number of ticks to elapse. See the function I added in the edit". But I cannot see the function added to the edit. I assume it would be something like this:
using System;
using Microsoft.SPOT.Hardware;
private static long _TicksPerMicroSecond = TimeSpan.TicksPerMillisecond/1000;
private void Wait(long microseconds)
{
var then = Utility.GetMachineTime().Ticks;
var ticksToWait = microseconds * _TicksPerMicroSecond;
while (true)
{
var now = Utility.GetMachineTime().Ticks;
if ((now - then) > ticksToWait) break;
}
}
A point that you might not be thinking about is that your code is relying on the .NET System namespace, which is based on the real time clock in your PC. Notice that the answers rely on the timer in the device.
Moving forward, I would suggest that you take a moment to qualify the source of the timing information you are using in your code -- is it .NET proper (which is fundamentally based on your PC), or the device the code is running on (which will have a namespace other than System, for example)?
PWM is a good way to control DC current artificially (by varying the pulse width), but varying the PWM frequency will still be a function of time at the end of the day.
Rather than use delays like Sleep, you might want to spawn a thread and have it manage the brightness. Using Sleep is still basically a straight-line procedural approach, and your code will only be able to do this one thing if you use a single thread.
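A rough sketch of that idea - software PWM on its own thread, reusing the busy-wait Wait() helper from the answer above (made static here); the on/off microsecond values are arbitrary, tune them for the brightness you want:

static OutputPort ledOnboard = new OutputPort(Pins.ONBOARD_LED, false);

static void DimLoop()
{
    while (true)
    {
        ledOnboard.Write(true);
        Wait(200);              // on for ~200 microseconds
        ledOnboard.Write(false);
        Wait(800);              // off for ~800 microseconds -> roughly 20% perceived brightness
    }
}

// started once, e.g. from Main():
// new Thread(DimLoop).Start();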
I'm loading a List<Image> from a folder of about 250 images. I did a DateTime comparison and it takes a full 11 seconds to load those 250 images. That's slow as hell, and I'd very much like to speed that up.
The images are on my local harddrive, not even an external one.
The code:
DialogResult dr = imageFolderBrowser.ShowDialog();
if(dr == DialogResult.OK) {
DateTime start = DateTime.Now;
//Get all images in the folder and place them in a List<>
files = Directory.GetFiles(imageFolderBrowser.SelectedPath);
foreach(string file in files) {
sourceImages.Add(Image.FromFile(file));
}
DateTime end = DateTime.Now;
timeLabel.Text = end.Subtract(start).TotalMilliseconds.ToString();
}
EDIT: yes, I need all the pictures. The plan is to take the center 30 pixel columns of each and make a new image out of that. Kind of like a 360-degree picture. Only right now, I'm just testing with random images.
I know there are probably way better frameworks out there to do this, but I need this to work first.
EDIT2: Switched to a Stopwatch; the difference is just a few milliseconds. Also tried it with Directory.EnumerateFiles, but no difference at all.
EDIT3: I am running .NET 4, on a 32-bit Win7 client.
Do you actually need to load all the images? Can you get away with loading them lazily? Alternatively, can you load them on a separate thread?
You cannot speed up your HDD access and decoding speed. However a good idea would be to load the images in a background thread.
Perhaps you should consider showing a placeholder until the image is actually loaded.
Caution: you'll need to insert the loaded images in your UI thread anyway!
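A rough sketch of that, using .NET 4's Task to load off the UI thread and BeginInvoke to hand each image back (sourceImages and imageFolderBrowser are the names from your snippet; the placeholder handling is up to you):

var paths = Directory.GetFiles(imageFolderBrowser.SelectedPath);
Task.Factory.StartNew(() =>
{
    foreach (string file in paths)
    {
        var img = Image.FromFile(file);        // the slow part now runs off the UI thread
        BeginInvoke(new Action(() =>
        {
            sourceImages.Add(img);             // hand the loaded image back to the UI thread
            // swap a placeholder for the real image / update progress here
        }));
    }
});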
You could use Directory.EnumerateFiles along with PLINQ (AsParallel) to spread the work over as many CPUs as you have.
var directory = "C:\\foo";
var files = Directory.EnumerateFiles(directory, "*.jpg");
var images = files.AsParallel().Select(file => Image.FromFile(file)).ToList();
As loading an image does both file IO and CPU work, you should get some speedup by using more than one thread.
If you are using .NET 4, using Tasks would be the way to go.
Given that you likely already know the path (from the dialog box?), you might be better off using Directory.EnumerateFiles and then working with the collection it returns instead of a list.
http://msdn.microsoft.com/en-us/library/dd383458.aspx
[edit]
Just noticed you're also loading the files into your app within the loop - how big are they? Depending on their size, 11 seconds might actually be a pretty reasonable time!
Do you need to load them at this point? Can you change some display code elsewhere to load on demand?
You probably can't speed things up much, as the bottleneck is reading the files themselves from disk and perhaps decoding them as images.
What you can do, though, is cache the list after it's loaded; any subsequent calls to your code will then be a lot faster.