Windows Forms app slows down after about 12 hours - c#

A programmer in my office has written an incredibly large Windows Forms application, and he keeps having trouble with it slowing down after about 12 hours. We have confirmed that it is the actual event loop that ends up running slowly, not the code that runs after the events fire; for instance, even typing into a textbox becomes extremely slow. He has several socket communication threads, which we have confirmed are running at normal speed. The only thing I can think of is that he has several System.Timers.Timer instances throughout the application. Could they be the problem? The slowdown usually appears after no one has been using the program for about 5 or 6 hours.
I know there could be a long list of possible issues. We just need some advice on where to start looking. I have tried all of the obvious.
One other thing to mention: his architecture consists of a base form, which includes a panel of controls that every page has along with 3 timers, and all other forms inherit from this base form. There are probably 15 or so of these forms, all of which are loaded into memory at startup. We did this because the client complained that switching between the forms took a few seconds the first time. Each form has potentially fifty to one hundred instances of a control we wrote for him which does all of his back-end work. There is a static timer in this control and one static thread as well; since there is only one instance of each regardless of how many instances of the control are in memory, I can't imagine those are the issue. The base form's timers are also static.
I cannot vouch for the efficiency of his code, but it does run really well at our office, and for 5 to 6 hours on site.
Any ideas?
Edit:
I just talked to the guy on site and asked. First, the event handler for one of the static timers is not static; it seems weird to me that a static timer can even reference an instance method. Second, the timers' AutoReset is set to true.
Update:
Ok, I finally got with the guy today to look at some of the code.
He had several static members in his class, i.e. the timers, some buttons, and some user controls. Then, in the constructor, he was using the new operator on each of those static members, without any static bool isInit guard flag.
In other words, the static members were being re-initialized every time a new form was created, but only the last one initialized was being referenced. However, I would imagine that the form's container was holding references to the old objects, so the old objects would never get collected. Also, wouldn't this be bad aliasing for the containers if the object were deleted when the static member's reference was changed? Either way, a bad leak or a bad alias would cause problems. I am hoping that is the only problem. I am having him fix all of that, and then we will test again.
To add insult to injury, he was calling GC.KeepAlive on the static timer that had just been given a new reference inside the constructor. So he had 21 timers running.
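Roughly, the pattern looked like this (a reconstruction with made-up names, not his actual code):
using System;
using System.Windows.Forms;
using Timer = System.Timers.Timer; // disambiguate from Forms.Timer

public class BaseForm : Form
{
    static Timer pollTimer; // static: one field shared by every form

    public BaseForm()
    {
        // Re-initialized in EVERY constructor, with no static bool isInit
        // guard. The previous Timer is never stopped or disposed, and its
        // Elapsed subscription keeps the previous form instance alive too.
        pollTimer = new Timer(1000);
        pollTimer.AutoReset = true;
        pollTimer.Elapsed += OnPoll; // instance handler on a static timer
        pollTimer.Start();
        GC.KeepAlive(pollTimer);     // the insult-to-injury part
    }

    void OnPoll(object sender, System.Timers.ElapsedEventArgs e)
    {
        // back-end work...
    }
}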

Are you disposing? Are you holding objects in memory? Are you holding on to some other unmanaged resource?
Leave it running, wait till it gets slow, attach a debugger, step through and see which lines are slow in your problem area.
EDIT:
If it only goes slow on site, and the product is for a specific client, you should construct a reference environment that matches the client's as closely as possible. This would be useful both for the future and right now for identifying the differences between your systems, which are likely the cause of the problem.
I did have a similar-sounding issue where we performed some remoting over sockets on background threads between several services on different machines. Unfortunately I can't remember the exact details (sigh). As I recall, we kept issuing requests at a set interval, but the response time from the service got slower over time until it eventually exceeded that interval. This was fine for the first 1000 or so calls; .NET kept a nicely growing stack of the callbacks we were expecting. Eventually, however, this list reached some internal limit and the message pump froze, including all the painting on client GUIs. This was resolved by ensuring that we would not issue a new call until we had received a response. This kind of race condition may or may not be what you are experiencing, but I thought it worth mentioning.

Related

How can I make sure a C# console program ALWAYS exits?

I've written a small C# console app that is used by many users on a shared storage server. Its runtime should always be < 3 seconds or so, and it is run automatically in the background to assist another GUI app the user is actually trying to use. Because of this, I want to make sure the program ALWAYS exits completely, no matter if it throws an error or what not.
In the Application_Startup, I have the basic structure of:
try
{
    // Calls real code here
}
catch
{
    // Log any errors (the logging itself has a try with an empty catch
    // around it so that there's no way it can cause problems)
}
finally
{
    Application.Shutdown();
}
I figured that with this structure, it was impossible for my app to become a zombie process. However, when trying to push new versions of this app, I repeatedly find that I cannot delete and replace the executable because the "file is in use", meaning that it's hanging on someone's computer out there, even though it should only run for a few seconds and always shutdown.
So, how is it that my app is seemingly becoming a hanging process on peoples' computers with the code structure I have? What am I missing?
Edit: added "Application." to resolve Shutdown() for clarity.
There are two options here:
Your console application doesn't really finish in 3 seconds, but rather takes a lot longer. You need to debug it and see what takes it that long.
Your console application takes 3 seconds to exit, but it is run every minute by the GUI, and you have more than 40 users, so the probability of finding the executable unused is slim.
If it's the first one, and you don't want to debug it, you can always start a second thread, wait for 3 seconds and then kill the entire process.
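A minimal sketch of that watchdog idea, assuming it is acceptable to hard-kill the process (the 10-second limit and names are illustrative):
using System;
using System.Threading;

static class Program
{
    static void Main()
    {
        // Watchdog: if Main hasn't finished after 10 seconds, tear the
        // whole process down (note: finally blocks will not run).
        var watchdog = new Thread(() =>
        {
            Thread.Sleep(TimeSpan.FromSeconds(10));
            Environment.Exit(1);
        });
        watchdog.IsBackground = true; // won't keep the process alive by itself
        watchdog.Start();

        // ... the real (should-be-3-second) work goes here ...
    }
}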
Maybe the code inside the try block is still executing for at least one of the clients and is not really limited to 3 seconds or so. To prevent such a case, you would need a multithreaded application: one thread for processing and one in the background killing the working thread after a timeout. Before going there, you should ask yourself whether such infrastructure is really needed.
Another thing that comes to mind is that one of the users simply had the application running at that moment; the probability depends on the number of your users.
Maybe designing your support app as an always-running multithreaded service would be a much better idea than instantiating one running application for each client request.

Pre-instantiate prototypes in spring.net

Context: I have a set of View/Presenter pairs, and I've noticed that for complex views I get some performance issues at the time of the InitializeComponent() call.
Is there any way to instruct the spring container to pre-instantiate objects scoped as prototype? Something similar to a queue with the objects ready when the application requests them?
We had exactly the same problem. We also found that this performance overhead occurred only the first time we requested a form from the container. We didn't find a clean solution, so we decided to write an initialization routine that runs in the background and requests all objects of type Form from the container. When this routine is finished, all forms open quickly.
Looking forward to a better solution, but this worked for us. The main disadvantage of this workaround is that during the initialization routine, users might still experience some slow-loading forms.
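Our routine looked roughly like this (a sketch from memory; it assumes Spring.NET's ContextRegistry/IApplicationContext API and that every form is registered with the container):
using System.Threading;
using System.Windows.Forms;
using Spring.Context;
using Spring.Context.Support;

static class FormWarmUp
{
    // Call once at startup: resolves every Form-typed object definition in
    // the background so the first real request doesn't pay the wiring cost.
    public static void Run()
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            IApplicationContext ctx = ContextRegistry.GetContext();
            foreach (string name in ctx.GetObjectNamesForType(typeof(Form)))
            {
                var form = (Form)ctx.GetObject(name); // prototype: fresh instance
                form.Dispose(); // we only wanted the one-time initialization cost
            }
        });
    }
}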

Very High Memory Usage in .NET 4.0

I have a C# Windows Service that I recently moved from .NET 3.5 to .NET 4.0. No other code changes were made.
When running on 3.5, memory utilization for a given workload was roughly 1.5 GB and throughput was 20 X per second. (The X doesn't matter in the context of this question.)
The exact same service running on 4.0 uses between 3 GB and 5 GB+ of memory and gets less than 4 X per second. In fact, the service will typically end up stalling out as memory usage continues to climb until my system is sitting at 99% utilization and page-file swapping goes nuts.
I'm not sure if this has to do with garbage collection or what, but I'm having trouble figuring it out. My Windows service uses the "Server" GC via the config file switch seen below:
<runtime>
  <gcServer enabled="true"/>
</runtime>
Changing this option to false didn't seem to make a difference. Furthermore, from the reading I've done on the new GC in 4.0, the big changes only affect the workstation GC mode, not server GC mode. So perhaps GC has nothing to do with the issue.
Ideas?
Well this was an interesting one.
The root cause turns out to be a change in the behavior of SQL Server Reporting Services' LocalReport class (v2010) when running this on top of .NET 4.0.
Basically, Microsoft altered the behavior of RDLC processing so that each report is processed in a separate application domain. This was done specifically to address a memory leak caused by the inability to unload assemblies from app domains: when the LocalReport class processes an RDLC file, it actually creates an assembly on the fly and loads it into the app domain.
In my case, due to the large volume of reports I was processing, this resulted in very large numbers of System.Runtime.Remoting.ServerIdentity objects being created. That was my tip-off to the cause, as I was confused about why processing an RDLC required remoting.
Of course, to call a method on a class in another app domain, remoting is exactly what you use. In .NET 3.5 this wasn't necessary, as the RDLC assembly was loaded into the same app domain by default. In .NET 4.0, however, a new app domain is created by default.
The fix was fairly easy. First, I needed to enable the legacy security policy using the following config:
<runtime>
  <NetFx40_LegacySecurityPolicy enabled="true"/>
</runtime>
Next, I needed to force the RDLCs to be processed in the same app domain as my service by calling the following:
myLocalReport.ExecuteReportInCurrentAppDomain(AppDomain.CurrentDomain.Evidence);
This resolved the issue.
I ran into this exact issue, and it is true that app domains are created and not cleaned up. However, I wouldn't recommend reverting to the legacy policy; they can be cleaned up by calling ReleaseSandboxAppDomain().
LocalReport report = new LocalReport();
...
report.ReleaseSandboxAppDomain();
Some other things I also do to clean up:
Unsubscribe from any SubreportProcessing events,
Clear Data Sources,
Dispose the report.
Our Windows service processes several reports a second and there are no leaks.
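Put together, the cleanup looks roughly like this (a sketch assuming Microsoft.Reporting.WinForms; the handler name is illustrative):
using Microsoft.Reporting.WinForms;

class ReportRunner
{
    void RenderAndCleanUp(LocalReport report)
    {
        byte[] pdf = report.Render("PDF");
        // ... write the bytes out ...

        report.SubreportProcessing -= OnSubreportProcessing; // unsubscribe handlers
        report.DataSources.Clear();                          // drop data references
        report.ReleaseSandboxAppDomain();                    // unload the sandbox domain
        report.Dispose();
    }

    void OnSubreportProcessing(object sender, SubreportProcessingEventArgs e)
    {
        // supply subreport data here
    }
}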
I'm pretty late to this, but I have a real solution and can explain why!
It turns out that LocalReport uses .NET Remoting to dynamically create a sub-appdomain and run the report, in order to avoid a leak somewhere internally. We then noticed that the report would eventually release all the memory, but only after 10 to 20 minutes. For people generating a lot of PDFs, this isn't going to work. The key is that it uses .NET Remoting, and one of the key parts of Remoting is something called "leasing". Leasing means the marshalled object is kept around for a while, since Remoting is usually expensive to set up and the object is probably going to be used more than once. LocalReport's RDLC processing is abusing this.
By default, the lease time is... 10 minutes! Also, each call into the object adds another 2 minutes to the wait time, so it can be anywhere between 10 and 20 minutes depending on how the calls line up. Luckily, you can change how long this lease lasts. Unluckily, you can only set it once per app domain, so if you need Remoting for anything other than PDF generation, you will probably need to run the reports in a separate service so you can change the defaults. To do this, all you need to do is run these four lines of code at startup:
// These are static properties on System.Runtime.Remoting.Lifetime.LifetimeServices:
LifetimeServices.LeaseTime = TimeSpan.FromSeconds(5);
LifetimeServices.LeaseManagerPollTime = TimeSpan.FromSeconds(5);
LifetimeServices.RenewOnCallTime = TimeSpan.FromSeconds(1);
LifetimeServices.SponsorshipTimeout = TimeSpan.FromSeconds(5);
You'll see the memory use start to rise, and then within a few seconds you should see it start coming back down. It took me days with a memory profiler to really track this down and realize what was happening.
You can't wrap ReportViewer in a using statement (Dispose crashes), but you should be able to if you use LocalReport directly. After it is disposed, you can call GC.Collect() if you want to be doubly sure you are doing everything you can to free up that memory.
Hope this helps!
Edit
Apparently, you should call GC.Collect(0) after generating a PDF report, or else memory use can still get high for some reason.
You might want to
profile the heap
use WinDbg + SOS.dll to establish what resource is being leaked and from where the reference is held
Perhaps some API has changed semantics or there might even be a bug in the 4.0 version of the framework
Just for completeness: if anyone is looking for the equivalent ASP.NET web.config setting, it is:
<system.web>
  <trust legacyCasModel="true" level="Full"/>
</system.web>
ExecuteReportInCurrentAppDomain works the same.
Thanks to this Social MSDN reference.
It seems as though Microsoft tried putting the report into its own separate memory space to work around all of the memory leaks rather than fixing them. In doing so, they introduced some hard crashes and ended up with more memory leaks anyway. They seem to cache the report definition but never use it and never clean it up, and every new report creates a new report definition, taking up more and more memory.
I played around with doing the same thing: use a separate app domain and marshal the report over to it. I think that is a terrible solution and makes a mess very quickly.
What I did instead is similar: split the reporting part of your program out into its own separate reports program. This turns out to be a good way to organize your code anyway.
The tricky part is passing information to the separate program. Use the Process class to start a new instance of the reports program and pass any parameters it needs on the command line. The first parameter should be an enum or similar value indicating the report that should be printed. My code for this in the main program looks something like:
const string sReportsProgram = "SomethingReports.exe";

public static void RunReport1(DateTime pDate, int pSomeID, int pSomeOtherID) {
    RunWithArgs(ReportType.Report1, pDate, pSomeID, pSomeOtherID);
}

public static void RunReport2(int pSomeID) {
    RunWithArgs(ReportType.Report2, pSomeID);
}

// TODO: currently no support for quoted args
static void RunWithArgs(params object[] pArgs) {
    // .Join here is my own extension method which calls string.Join
    RunWithArgs(pArgs.Select(arg => arg.ToString()).Join(" "));
}

static void RunWithArgs(string pArgs) {
    Console.WriteLine("Running Report Program: {0} {1}", sReportsProgram, pArgs);
    var process = new Process();
    process.StartInfo.FileName = sReportsProgram;
    process.StartInfo.Arguments = pArgs;
    process.Start();
}
And the reports program looks something like:
[STAThread]
static void Main(string[] pArgs) {
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    var reportType = (ReportType)Enum.Parse(typeof(ReportType), pArgs[0]);
    using (var reportForm = GetReportForm(reportType, pArgs))
        Application.Run(reportForm);
}

static Form GetReportForm(ReportType pReportType, string[] pArgs) {
    switch (pReportType) {
        case ReportType.Report1: return GetReport1Form(pArgs);
        case ReportType.Report2: return GetReport2Form(pArgs);
        default: throw new ArgumentOutOfRangeException("pReportType", pReportType, null);
    }
}
Your GetReportForm methods should pull the report definition, make use of relevant arguments to obtain the dataset, pass the data and any other arguments to the report, and then place the report in a report viewer on a form and return a reference to the form. Note that it is possible to extract much of this process so that you can basically say 'give me a form for this report from this assembly using this data and these arguments'.
Also note that both programs must be able to see your data types that are relevant to this project, so hopefully you have extracted your data classes into their own library, which both of these programs can share a reference to. It would not work to have all of the data classes in the main program, because you would have a circular dependency between the main program and the report program.
Don't overdo it with the arguments, either. Do any database querying you need in the reports program; don't pass a huge list of objects (which probably wouldn't work anyway). You should just be passing simple things like database ID fields, date ranges, etc. If you have particularly complex parameters, you might need to push that part of the UI to the reports program too and not pass them as arguments on the command line.
You can also put a reference to the reports program in your main program, and the resulting .exe and any related .dlls will be copied to the same output folder. You can then run it without specifying a path and just use the executable filename by itself (ie: "SomethingReports.exe"). You can also remove the reporting dlls from the main program.
One issue with this is that you will get a manifest error if you've never actually published the reports program. Just dummy-publish it once to generate a manifest, and then it will work.
Once you have this working, it's very nice to see your regular program's memory stay constant when printing a report. The reports program appears, taking up more memory than your main program, and then disappears, cleaning up completely, with your main program taking up no more memory than it already had.
Another issue might be that each report instance will now take up more memory than before, since they are now entire separate programs. If the user prints a lot of reports and never closes them, it will use up a lot of memory very fast. But I think this is still much better since that memory can easily be reclaimed simply by closing the reports.
This also makes your reports independent of your main program. They can stay open even after closing the main program, and you can generate them from the command line manually, or from other sources as well.

Some questions coming from application programming (C#/Visual C++) to ASP.NET (C#)

At the new place I am working, I've been tasked with developing a web-application framework. I am new (6 months or so) to the ASP.NET framework, and things seem pretty straightforward, but I have a few questions that I'd like to ask you ASP.NET professionals. I'll note that I am no stranger to C#.
Long life objects/Caching
What is the preferred method of dealing with objects that you don't want to re-initialize every time a page is hit? I noticed that there is a cache manager that can be used, but are there any caveats to using it? For example, I might want to cache various things, and I was thinking about writing a wrapper around the cache that prefixes cache names so that I could implement different caches using the same underlying .NET cache manager.
1) Are there any design considerations I need to think about regarding the objects that I want to cache?
2) If I want to implement a manager of some kind that is around for the lifetime of the web application (thread-safe, obviously), is it enough to initialize it during app_start and kill it in app_end? Or is this practice frowned upon, with any managers created uniquely in the constructor/init method of the page being served?
3) If I have a long-term object initialized at app start, is it likely to be affected when the app pool is recycled? If it is destroyed at app end, is it a case of it simply getting destroyed and then recreated again? I am fine with this restriction, I just want to get a little clearer :)
Long Life Threads
I've done a bit of research on this, and this question is probably redundant. It seems it is not safe to start a worker thread in the ASP.NET environment; instead, use a Windows service for long-running tasks. The latter isn't exactly a problem, since the target environments will have the facility to install services, but I just wanted to double-check that this is absolutely necessary. I understand threads can throw exceptions and die, but I do not understand the reasoning behind prohibiting them. If .NET provided a thread framework that encompassed System.Threading.Thread but also provided notifications for when the application server was going to recycle the app pool, we could actually do something about it rather than just keel over and die at the point we were stopped.
Are there any solutions to threading in ASP.NET or is it basically "service"?
I am sure I'll have more queries, but this is it for now.
EDIT: Thankyou for all the responses!
So here's the main thing you're going to want to keep in mind: IIS may get reset, or may reset itself (based on criteria), while you're working. You can never know when that will happen unless it stops rendering your page while you're waiting on the response (in which case you'll eventually get a browser notice that the page stopped responding).
Threads
This is why you shouldn't use threads in ASP.NET apps. However, that's not to say you can't. Once again, you'll need to configure the IIS engine properly (I've had it hang when spawning a lot of threads, but that may have been machine-dependent). If you can trust that nobody will cause ASP.NET to recompile your code or restart your application (by saving the web.config, for instance), then you will have fewer issues than you might otherwise.
Instead of running a Windows service, you could use an ASMX or WCF service, which also runs on IIS/.NET. That's up to you, but with multiple service pools it allows you to keep everything "in the same environment" as far as installations and builds are concerned. They obviously don't share the same process pool/memory space.
"You're Wrong!"
I'm sure someone will read this far and go "but you can't thread in ASP.NET!!!" so here's the link that shows you how to do it from that venerable MSDN http://msdn.microsoft.com/en-us/magazine/cc164128.aspx
Now onto Long life objects/Caching
Caching
So it depends on what you mean by caching. Is this per user, per system, per application, per database, or per page? Each is possible, but takes some contrivance and complexity, depending on needs.
The simplest way to do it per page is with static variables. This is also highly dangerous if you're using it for user data, because there's no indication to the end user that the variable is going to change if more than one user uses the page. Instead, if you need something to live with the user while they work with a particular page, you can either stuff it into Session (server-side caching that stays with the user and works across multiple pages) or stick it into ViewState.
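For example, in a WebForms code-behind, the two per-user options look like this (the key name is illustrative):
using System;
using System.Web.UI;

public partial class EditorPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Server-side: lives in the user's session and is visible across pages.
        Session["LastQuery"] = "some expensive-to-rebuild value";

        // Page-bound: serialized into the page and round-trips on postbacks only.
        ViewState["LastQuery"] = "some expensive-to-rebuild value";
    }
}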
The cache manager you reference above would be good for application-style caching, where everyone using the web app can use the same data store. That might be good for intensive queries where you want to get the values back as quickly as possible, so long as they're not stale. That's up to you to decide. Also, things like application settings could be stored there, if you use a database layer for storage.
Long term cache objects
You can initialize it in Application_Start with no problem, and the same goes for destroying it at the end if you feel the need, but yes, you do need to watch out for what I described at first about the system throwing all your code out and restarting.
Keel over and die
But you don't get notified when you (the app pool here) are going to be restarted (as far as I know), so you can pretty much keel over and die on anything. Always assume the app is going to go down before your request, and that every request is the first one.
Really though, that just leads back to web design in the first place. You don't know whether this is the first visitor or the fifty-millionth (unless you're storing that information in memory, of course), so just as the app is stateless, you also need to plan your architecture to be as stateless as possible. That's where web apps are great.
If you need state on a regular basis, consider sticking with desktop apps. If you can live with stateless-ness, welcome to ASP.NET and web development.
1) The main thing about caching is understanding the lifetime of the cache and the effects of caching (particularly large) objects. Consider caching a 1 MB object that is generated each time your default.aspx page is hit: after a year in production you're getting 10,000 hits an hour, with an object lifetime of 2 hours. You can easily chew up TONS of memory, which can affect performance and may also cause things to be prematurely expired from the cache, which in turn can cause other issues. As long as you understand the effects of all of this, you're fine.
2) Starting it up in Application_Start and shutting it down in Application_End is fine (see the sketch after this list). You can also implement a custom HttpApplication with an HTTP module.
3) Yes, when your app pool is recycled it calls Application_End and everything is shut down and destroyed.
4) (Threads) The issue with threads comes up in relation to scaling. If that default.aspx page fires up a thread, and the page gets hit 10,000 times in 2 minutes, you could potentially have a ton of threads running in your application pool. Again, as long as you understand the ramifications of firing up a thread, you can do it. The ThreadPool is another story: the ASP.NET runtime uses the ThreadPool to process requests, so if you tie up all the ThreadPool threads, your application can hang because there isn't a thread available to process a request.
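A bare-bones sketch of the Application_Start/Application_End approach from point 2; CacheManager here is a hypothetical stand-in for whatever thread-safe manager you write:
using System;
using System.Web;

// Stand-in for your own manager; it must be safe to call from many requests.
public sealed class CacheManager : IDisposable
{
    public void Dispose() { /* release whatever the manager holds */ }
}

public class Global : HttpApplication
{
    // One instance for the whole application lifetime.
    public static CacheManager Manager { get; private set; }

    protected void Application_Start(object sender, EventArgs e)
    {
        Manager = new CacheManager();
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Also runs when the app pool recycles the application.
        if (Manager != null)
            Manager.Dispose();
    }
}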
1) Are there any design considerations I need to think about regarding the objects that I want to cache?
2) If I want to implement a manager of some kind that is around for the lifetime of the web application (thread-safe, obviously), is it enough to initialize it during app_start and kill it in app_end? Or is this practice frowned upon, with any managers created uniquely in the constructor/init method of the page being served?
There's a difference between data caching and output caching. I think you're looking for data caching, which means caching some object for use in the application. This can be done via HttpContext.Current.Cache. You can also cache page output and vary it on conditions so the page logic doesn't have to run at all; this functionality is also built into ASP.NET. Something to keep in mind when doing data caching is that you need to be careful about the scope of the things you cache. For example, when using Entity Framework, you might be tempted to cache some object that's been retrieved from the DB. However, if your DB context is scoped per request (a new one for every user visiting your site, probably the correct way), then your cached object will rely on this DB context for lazy loading, but the DB context will be disposed of after the first request ends.
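As a sketch, data caching through HttpContext.Current.Cache can be wrapped like this (the key, timeout, and helper name are illustrative; the loader should return something detached from per-request state):
using System;
using System.Web;
using System.Web.Caching;

public static class DataCache
{
    public static T GetOrLoad<T>(string key, Func<T> load) where T : class
    {
        Cache cache = HttpContext.Current.Cache;
        T value = cache[key] as T;
        if (value == null)
        {
            value = load(); // e.g. a DB query, fully materialized before caching
            cache.Insert(key, value, null,
                         DateTime.UtcNow.AddMinutes(10),  // absolute expiration
                         Cache.NoSlidingExpiration);
        }
        return value;
    }
}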
3) If I have a long-term object initialized at app start, is it likely to be affected when the app pool is recycled? If it is destroyed at app end, is it a case of it simply getting destroyed and then recreated again? I am fine with this restriction, I just want to get a little clearer :)
Perhaps the biggest issue with threading in ASP.NET is that it runs in the same process as all your requests. Even if this weren't an issue in and of itself, IIS can be configured (and if you don't own the servers, almost certainly will be configured) to shut down the app if it's inactive (which you mentioned), and that can cause issues for these threads. I have seen solutions to this ranging from making sure IIS never recycles the app pool to spawning a thread that hits the site periodically to keep it alive, even on hosted servers.

Not enough memory or not enough handles?

I am working on a large-scale project where a custom (pretty good and robust) framework has been provided, and we have to use it for showing forms and views.
There is an abstract class StrategyEditor (derived from some class in the framework) which is instantiated whenever a new StrategyForm is opened.
StrategyForm (a customized window frame) contains StrategyEditor.
StrategyEditor contains StrategyTab.
StrategyTab contains StrategyCanvas.
This is a small portion of the class hierarchy, to clarify that many objects will be created whenever one StrategyForm object is allocated in memory at run-time. My component owns all the classes mentioned above except StrategyForm, whose code is not in my control.
Now, at run-time, the user opens up many strategy objects (each of which triggers the creation of a new StrategyForm object). After creating approx. 44 strategy objects, we see that the USER OBJECT HANDLES (I'll use UOH from here onwards) created by the application reach 20k+, while the registry default for these handles is 10k. Read more about User Objects here. Testing on different machines made it clear that the number of strategy objects it takes for the message to pop up varies: if it is 44 on one machine, it can be 40 on another.
When we see the message pop up, it means the application is about to start responding slowly. It gets worse with a few more objects, and then creation of window frames and subsequent objects fails.
We first thought that it was a not-enough-memory issue. But then reading more about new in C# helped us understand that an exception would be thrown if the app ran out of memory. So I feel this is not a memory issue (Task Manager also showed 1.5 GB+ of available memory).
M/C specs
Core 2 Duo 2GHz+
4GB RAM
80GB+ free disk space for page file
Virtual Memory set: 4000 - 6000
My questions
Q1. Does this look like a memory issue, and am I wrong that it is not?
Q2. Does this point to exhaustion of free UOHs (as I'm thinking), which is resulting in the failure to create window handles?
Q3. How can we avoid loading a StrategyEditor object beyond a threshold, keeping an eye on the current usage of UOHs? (We already know how to fetch the number of UOHs in use, so don't go there.) Keep in mind that the call to new StrategyForm() is outside the control of my component.
Q4. I am a bit confused: what are handles to user objects, exactly? Is MSDN talking about any object we create, or only some specific objects like window handles, cursor handles, and icon handles?
Q5. What exactly uses up a UOH? (almost the same as Q4)
I would be really thankful to anyone who can give me some knowledgeable answers. Thanks much! :)
[Update]
Based on stakx's answer, please note that the windows being opened will be closed only by the user. This is a kind of MDI situation where far too many child windows are opened, so Dispose cannot be called whenever we want.
Q1
Sounds like you're trying to create far too many UI controls at the same time. Even if there's memory left, you're running out of handles. See below for a brief but fairly technical explanation.
Q4
I understand a user object to be any object that is part of the GUI. At least until Windows XP, the Windows UI API resided in USER.DLL (user32.dll in its Win32 incarnation), one of the core DLLs making up Windows. Basically, the UI is made up of "windows": all controls, such as buttons, textboxes, and checkboxes, are internally the same thing, namely "windows". To create one, you call the Win32 API function CreateWindow, which returns a handle to the created "window" (UI element, or "user object").
So I assume that a user object handle is a handle as returned by this function. (Winforms is based on the old Win32 API and would therefore use the CreateWindow function.)
Q2
Indeed you cannot create as many UI controls as you want: all those handles retrieved through CreateWindow must at some point be freed. In Winforms, the easiest and safest way to do this is through a using block or by calling Dispose:
using (MyForm form = new MyForm())
{
    if (form.ShowDialog() == DialogResult.OK) ...
}
Basically, every System.Windows.Forms.Control can be disposed, and should be disposed. Sometimes that's done for you automatically, but you shouldn't rely on it. Always Dispose your UI controls when you no longer need them.
Note on Dispose for modal & modeless forms:
Modal forms (shown with ShowDialog) are not automatically disposed. You have to do that yourself, as demonstrated in the code example above.
Modeless forms (shown with Show) are automatically disposed for you, since you have no control over when they will be closed by the user. No need to explicitly call Dispose!
Q5
Every time you create a UI object, Winforms internally makes calls to CreateWindow. That's how handles are allocated. And they're not freed until a corresponding call to DestroyWindow is made. In Winforms, that call is triggered through the Dispose method of any System.Windows.Forms.Control. (Note: while I'm fairly certain about this, I'm guessing a little, so I may not be 100% correct. Having a look at Winforms internals using Reflector would reveal the truth.)
Q3
Assuming that your StrategyEditor creates a massive bunch of UI controls, I don't think you can do a lot. If you can't simplify that control (with respect to the number of child controls it creates), then it seems you're stuck in the situation where you are. You simply can't create infinitely many UI controls.
You could, however, keep track of how many StrategyEditors are open at any one time (increase a counter whenever one is instantiated and decrease it whenever one is closed; you can track the latter using a form's FormClosing/FormClosed event, or in the Dispose method of a control). Then you could limit the number of simultaneously opened StrategyEditors to a fixed number, say 5. If the limit is exceeded, you could throw an exception in the constructor so that no more instances are created. Of course, I can't say whether StrategyForm will handle an exception from your StrategyEditor constructor well...
public class StrategyEditor : UserControl // base type assumed for this sketch
{
    static int numberOfLiveInstances;
    const int maximumAllowedLiveInstances = 5;

    public StrategyEditor()
    {
        InitializeComponent();
        if (numberOfLiveInstances >= maximumAllowedLiveInstances)
            throw new InvalidOperationException("Too many editors are open.");
        ++numberOfLiveInstances; // decrement again in Dispose/FormClosed
        // not a nice solution IMHO, but if you've no other choice...
    }
}
In either case, limiting the number of instantiated StrategyEditors seems like a temporary fix to me and won't solve the real problem.
