I have a form with many controls, each with events (and their handlers), some tens of events in total.
I found out that events often fire while the controls are being initialized, due to complex combinations: mainly because I load saved settings from my settings file, which may change the controls' default initial values, causing events to fire.
To avoid this, I moved all my event wiring into a special method (in Main) that is called only after all controls have been built and set.
It works fine, but the question is whether this is good or common practice, and what drawbacks it may have.
I have also tried to move the event wiring into a special Main subclass, but could not find a way to access the private controls from the subclass.
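For reference, here's a minimal sketch of the pattern (control and handler names are made up):

    public partial class MainForm : Form
    {
        public MainForm()
        {
            InitializeComponent();
            LoadSavedSettings();      // may change control values; no handlers attached yet,
                                      // so nothing fires during initialization
            WireUpEventHandlers();    // attach handlers only after all controls are built and set
        }

        private void WireUpEventHandlers()
        {
            comboMode.SelectedIndexChanged += comboMode_SelectedIndexChanged;
            checkAutoSave.CheckedChanged   += checkAutoSave_CheckedChanged;
            // ... remaining handlers
        }
    }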
Not sure what you are expecting as an answer here. If the code works well enough, and is reasonably clear, I can't really see the problem.
Furthermore, it sounds as if this is necessary in your case: if you need to set up or modify controls based on saved state from settings, then that clearly needs to happen before you attach event handlers to those controls. As I already mentioned: as long as your code works fine, and is clear and easy to understand, it should be OK (the latter point matters because it means you will be able to fix it without too much hassle if it proves problematic for some unforeseen reason later).
PS: If you want more feedback about this, you should add some actual code to comment on. That, however, would probably mean the question is better suited to the Code Review Stack Exchange site than to this one. Perhaps you should add code and post it there instead.
Back in 2012, I ran into issues with Property Interceptors that were recursive (that is, they trigger the interceptor on the same property but of a different instance). It seems DevForce ignores all but the first interceptor execution, and that behavior is expected and by design. See this forum post for the full details. I was able to redesign things in my application to work around that, and all was well for a while.
Now I'm running into the same problem again, and I can't come up with any way to work around it. The new scenario that breaks is a feature in our app where changes to one field can trigger changes to other fields, and where this behavior is all dynamic and controlled by runtime configuration. There are cases where we want a change of a property on one instance to change that same property on other instances (this is perhaps a simplification of our actual use case, but it's hopefully close enough). After debugging that logic, I've realized it doesn't work because of this same recursion limitation.
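To make the scenario concrete, here is a plain-C# illustration of the propagation I mean (no DevForce API involved; in the real app this logic lives in property interceptors). With plain setters the recursion terminates naturally via the value guard; with DevForce, the re-entrant interceptor executions are silently skipped instead:

    using System.Collections.Generic;

    class Item
    {
        public List<Item> Linked = new List<Item>();
        decimal _price;

        public decimal Price
        {
            get { return _price; }
            set
            {
                if (_price == value) return;   // guard: terminates the propagation
                _price = value;
                foreach (var other in Linked)
                    other.Price = value;       // re-enters this setter on another instance
            }
        }
    }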
I tried digging into the DevForce code to see if there is any way we can work around this, but I have been unsuccessful. Is there anything I can do to get this to work? I can understand the concerns mentioned in the forum post about how this could lead to infinite loops, but in our case I am fine with that kind of thing. If I write code that would cause an infinite loop, I'd rather be greeted by a hung app that is easily debuggable than by an app that silently keeps working even though things are subtly broken.
This reminds me of another issue we ran into. I can understand how DevForce might want to be friendly and mask programming errors by default, but I also like to be in control of that kind of decision. For that other issue, an option was added so I could tell DevForce to use the new behavior (EntityManagerOptions.ThrowAllLoadExceptions). Perhaps a similar option could be added for this? I would love an AllowReentrantPropertyInterceptors option somewhere that I can flip to true (and which would be false by default to avoid breaking backwards compatibility).
If a global option seemed too dangerous, I might be able to work with a public property on PropertyInterceptor, like ReentrancySupported or something. I'd likely end up having to loop through every property in my model and set that to true, though, so a global option would be better.
I just want the Win32 control to ignore all moving and resizing operations. Is this possible?
Basically, I am hosting an old Win32 control inside a WinForms app, but the main application resizes and moves this control when certain commands are used. I want the window to ignore these operations, or become immune to them.
I am not sure that I understand the rationale behind your current design, as discussed in the comments. Since you appear to own all of the code, why not either (A) combine it so that the project creating the control is also the one responsible for managing it, or (B) change it so that you aren't moving/resizing the control when you clearly don't want to. But I trust that you have a reason for doing it the way you are, and I don't really have enough information to insist any more strongly.
Can I still subclass it? Because I am not creating this control from scratch, it's already there.
Yes, technically you can subclass any control, but you shouldn't unless you own it. Of course, that doesn't mean that you own the code for the control, it only means that you have ownership of the created object. I realize that's confusing, but I can't think of any better way to explain it in words.
So how about an example. Windows comes with a whole bunch of common controls that applications are encouraged to use. Your application doesn't own the code for any of those controls—it resides in DLLs that come with Windows. But because you are the one creating and using those controls, you have every right to subclass them to modify their behavior.
I'm not sure how I can subclass this control. I only have its HWND, that's it.
The two standard approaches to subclassing a control, given its handle, in Win32 are documented here. Either one of those approaches would be a good bet.
Basically, you're just replacing the existing window procedure with your custom window procedure. That will get called first, allowing you to do whatever you want. You can either eat the message, or pass it on to the next window procedure in the chain. This may or may not be the default window procedure, depending on whether the window was subclassed previously, but that really doesn't matter to you.
Of course, you'll need to write a fair amount of P/Invoke code to make this work. Check pinvoke.net for help (but don't blindly rely on their declarations, I've seen plenty that are incorrect).
Do be sure that you unsubclass before destroying the window. A common mistake is forgetting to clean up after yourself.
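Here is a minimal sketch of the window-procedure replacement in C#, assuming a 64-bit process (on a 32-bit process the actual export is SetWindowLong, not SetWindowLongPtr). The constants and struct layout come from the Win32 headers; treat this as a starting point, not production code:

    using System;
    using System.Runtime.InteropServices;

    class MoveSizeBlocker
    {
        delegate IntPtr WndProcDelegate(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr SetWindowLongPtr(IntPtr hWnd, int nIndex, IntPtr dwNewLong);

        [DllImport("user32.dll")]
        static extern IntPtr CallWindowProc(IntPtr lpPrevWndFunc, IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

        const int GWLP_WNDPROC = -4;
        const uint WM_WINDOWPOSCHANGING = 0x0046;
        const uint SWP_NOSIZE = 0x0001, SWP_NOMOVE = 0x0002;

        [StructLayout(LayoutKind.Sequential)]
        struct WINDOWPOS
        {
            public IntPtr hwnd, hwndInsertAfter;
            public int x, y, cx, cy;
            public uint flags;
        }

        IntPtr _oldWndProc;
        WndProcDelegate _newWndProc;   // field keeps the delegate alive so the GC can't collect it

        public void Attach(IntPtr hwnd)
        {
            _newWndProc = Hook;
            _oldWndProc = SetWindowLongPtr(hwnd, GWLP_WNDPROC,
                Marshal.GetFunctionPointerForDelegate(_newWndProc));
        }

        public void Detach(IntPtr hwnd)
        {
            // Unsubclass before the window is destroyed.
            SetWindowLongPtr(hwnd, GWLP_WNDPROC, _oldWndProc);
        }

        IntPtr Hook(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam)
        {
            if (msg == WM_WINDOWPOSCHANGING)
            {
                // Flag the pending move/resize so the window manager ignores it.
                var wp = (WINDOWPOS)Marshal.PtrToStructure(lParam, typeof(WINDOWPOS));
                wp.flags |= SWP_NOMOVE | SWP_NOSIZE;
                Marshal.StructureToPtr(wp, lParam, false);
                return IntPtr.Zero;    // eat the message
            }
            return CallWindowProc(_oldWndProc, hWnd, msg, wParam, lParam);
        }
    }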
I have done some research already as to how I can achieve the title of this question. The app I am working on has been under development for a couple of years or so (slow progress though, you all know how it is in the real world). It is now a requirement for me to put in Undo/Redo multiple level functionality. It's a bit late to say "you should have thought about this before you started" ... well, we did think about it - and we did nothing about it and now here it is. From searching around SO (and external links) I can see that the two most common methods appear to be ...
Command Pattern
Memento Pattern
The Command pattern looks like it would be a hell of a lot of work, and I can only imagine it throwing up thousands of bugs in the process, so I don't really fancy that one.
The Memento pattern is actually a lot like what I had in my head for this. I was thinking that if there were some way to quickly take a snapshot of the object model currently in memory, I would be able to store it somewhere (maybe in memory, maybe in a file). It seems like a great idea; the only problem I can see is how it will integrate with what we have already written. You see, the app as we have it draws images in a big panel (potentially hundreds) and then allows the user to manipulate them, either via the UI or via a custom-built properties grid. The entire app is linked up with a big observer pattern: the second anything changes, events are fired and everything that needs to update does. This is nice, but I can't help thinking that if a user is entering text into a text field on the properties grid, there will be a bit of delay before the UI catches up (since every time the user presses a key, a new snapshot would be added to the undo list). So my questions to you are:
Do you know of any good alternatives to the Memento pattern that might work?
Do you think the Memento pattern will fit in here, or will it slow the app down too much?
If the Memento pattern is the way to go, what is the most efficient way to make a snapshot of the object model (I was thinking of serialising it or something)?
Should the snapshots be stored in memory, or is it possible to put them into files?
If you have got this far, thank you kindly for reading. Any input you have will be valuable and very much appreciated.
Well, here are my thoughts on this problem.
1. You need multi-level undo/redo functionality, so you need to store the user actions performed, which can be kept on a stack.
2. Your second problem is how to identify what has been changed by an operation. I think that is quite a challenge with the Memento pattern, because Memento is all about storing the initial object state in memory.
Either way, you need to store what each operation changed, so that you can use this information to undo the operation.
The Command pattern is designed for undo/redo functionality, and I would say that while it's late, it's worthwhile to implement a design that has been used for years and works for most applications.
If performance allows it, you could serialize your domain before each action. A few hundred objects is not much if the objects aren't big themselves.
Since your object graph is probably non-trivial (i.e. uses inheritance, cycles, ...), the built-in XmlSerializer and JSON serializers are out of the question. Json.NET supports these, but does lossy conversions on some types (local DateTimes, numbers, ...), so it's problematic too.
I think the protobuf serializers need either some form of schema (a .proto file) or decoration of all properties with attributes mapping their names to numbers, so they might not be optimal either.
BinaryFormatter can serialize most things; you just need to decorate all classes with the [Serializable] attribute. But I haven't used it myself, so there might be pitfalls I'm not aware of, perhaps related to singletons or events.
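As a rough sketch of that snapshot approach (Document stands in for your real root object, and your model classes are assumed to be marked [Serializable]):

    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    [Serializable]
    class Document { /* your object model */ }

    static class UndoSnapshots
    {
        public static byte[] Take(Document doc)
        {
            using (var ms = new MemoryStream())
            {
                new BinaryFormatter().Serialize(ms, doc);
                return ms.ToArray();   // push onto the undo stack, or write to a temp file
            }
        }

        public static Document Restore(byte[] snapshot)
        {
            using (var ms = new MemoryStream(snapshot))
                return (Document)new BinaryFormatter().Deserialize(ms);
        }
    }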
The critical things for undo/redo are:
knowing what state you need to save and restore
knowing when you need to save the state
Adding undo/redo after the fact is always a painful thing to do - (I know this comment is of no use to you now, but it's always best to design support into the application framework before you start, as it helps people use undo-friendly patterns throughout development).
Possibly the simplest approach will be a memento-based one:
Locate all the data that makes up your "document". Can you unify this data in some way so that it forms a coherent whole? Usually, if you can serialise your document structure to a file, the logic you need is in the serialisation system, so that gives you a way in. The downside to using this directly is that you will usually have to serialise everything, so your undo will be huge and slow. If possible, refactor the code so that (a) there is a common serialisation interface used throughout the application (so any and every part of your data can be saved/restored using a generic call); (b) every sub-system is encapsulated so that modifications to the data have to go through a common interface (rather than lots of callers modifying member variables directly, they should all call an API provided by the object to request that it makes changes to itself); and (c) every sub-portion of the data keeps a "version number" that is incremented on every alteration made through the interface in (b). This means you can now scan your entire document, use the version numbers to find just the parts that have changed since you last looked, and serialise the minimal amount needed to save and restore the changed state.
Provide a mechanism whereby a single undo step can be recorded. This means allowing multiple systems to make changes to the data structure, and then, when everything has been updated, triggering an undo recording. Working out when to do this may be tricky, but it can usually be accomplished by scanning your document for changes (see above) in your message loop, after your UI has finished processing each input event.
Beyond that, I'd advise going for a command based approach, because there are many benefits to it besides undo/redo.
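The core of a command-based approach is small; a minimal sketch (names are illustrative):

    using System.Collections.Generic;

    interface IUndoableCommand
    {
        void Execute();
        void Undo();
    }

    class UndoManager
    {
        readonly Stack<IUndoableCommand> _undo = new Stack<IUndoableCommand>();
        readonly Stack<IUndoableCommand> _redo = new Stack<IUndoableCommand>();

        public void Do(IUndoableCommand cmd)
        {
            cmd.Execute();
            _undo.Push(cmd);
            _redo.Clear();            // a new action invalidates the redo history
        }

        public void Undo()
        {
            if (_undo.Count == 0) return;
            var cmd = _undo.Pop();
            cmd.Undo();
            _redo.Push(cmd);
        }

        public void Redo()
        {
            if (_redo.Count == 0) return;
            var cmd = _redo.Pop();
            cmd.Execute();
            _undo.Push(cmd);
        }
    }

The work, of course, is in writing an IUndoableCommand implementation for every user-visible operation.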
You may find the Monitored Undo Framework to be useful. http://muf.codeplex.com/
It uses something similar to the memento pattern, by monitoring for changes as they happen and allows you to put delegates on the undo stack that will reverse / redo the change.
I considered an approach that would serialize / deserialize the document, but was concerned about the overhead. Instead, I monitor for changes in the model (or view model) on a property-by-property basis. Then, as needed, I use the MUF library to "batch" related changes so that they undo / redo as a unit.
The fact that you have your UI setup to react to changes in the underlying model is good. It sounds like you could inject the undo / redo logic there and the changes would bubble up to the UI.
I don't think that you'd see much lag or performance degradation. I have a similar application, with a diagram that we render based on the data in the model. We've had good results with this so far.
You can find more info and documentation on the codeplex site at http://muf.codeplex.com/. The library is also available via NuGet, with support for .NET 3.5, 4.0, SL4 and WP7.
I have a main form; at present it has a tab control and 3 data grids (DevExpress XtraGrids), along with the normal buttons, combo boxes, and so on. I would say two-thirds of the methods in the main form are related to customizing the grids or are event handlers for data input. This is making the main form's code grow larger and larger.
What is an OK sort of code length for a main form?
How should I move the code around if necessary? I am currently thinking about creating a user control for each grid and putting its methods in there.
I build a fair number of apps at my shop and, as a general rule, try to avoid clogging up main forms with a bunch of control-specific code. Rather, I'll encapsulate behaviors and state setup into commonly reusable user controls and put that code in the user controls' files instead.
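For example, each grid can live in its own UserControl. Sketched here with a stock DataGridView for brevity (the same idea applies to an XtraGrid; all names are illustrative):

    using System.Windows.Forms;

    class CustomersGrid : UserControl
    {
        readonly DataGridView _grid = new DataGridView { Dock = DockStyle.Fill };

        public CustomersGrid()
        {
            Controls.Add(_grid);
            ConfigureColumns();
            _grid.CellValueChanged += OnCellValueChanged;   // handlers live here, not in the main form
        }

        void ConfigureColumns()
        {
            _grid.AutoGenerateColumns = false;
            _grid.Columns.Add("Name", "Name");
            // ... the customization that used to clutter the main form
        }

        void OnCellValueChanged(object sender, DataGridViewCellEventArgs e)
        {
            // grid-specific editing logic
        }
    }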
I don't have a magic number I shoot for in the main form, instead I'll use the 'Why would I put this here?' test. If I can't come up with a good reason as to why I'm thinking of putting the code in the main form, I'll avoid it. Otherwise, as you've mentioned, the main form starts growing and it becomes a real pain to manage everything.
I like to put my glue code (event handler stuff, etc.) separate from the main form itself.
At a minimum, I'll utilize some regions to separate the code out into logically grouped chunks. Granted, many folks hate the #region/#endregion constructs, but I've got the keystrokes pretty much all memorized so it isn't an issue for me. I like to use them simply because it organizes things nicely and collapses down well in VS.
In a nutshell, I don't put anything in the main form unless I convince myself it belongs there. There are a bunch of good patterns out there that, when employed, help to avoid the big heaping pile that otherwise tends to develop. I looked back at one file I had early on in my career and the darn thing was 10K lines long... absolutely ridiculous!
Anyway, that is my two cents.
Have a good one!
As with any class, having more than about 150 lines is a sign that something has gone horribly wrong. The same OO principles apply to classes relating to UI as everywhere else in your application.
The class should have a single responsibility.
A number is hard to come up with. In general, I agree with the previous two posters, it's all about responsibilities.
Ask yourself, what does the code to customize behavior for grid 1 have to do with grid 2?
What is the responsibility of the main form? Recently I have been subscribing to the MVP design pattern (MVC is fine as well). In this pattern, your main form is the view, or UI layer. Its responsibility should be to present data and accept user input.
Without seeing your code, I can only guesstimate as to the best course of action. But I agree with your feeling that each grid and its customization code should live in a separate control. Perhaps the responsibility of the main form should be merely to pass data to the correct control and pass requests from the control back to the controller/presenter. It is the responsibility of each control to understand the data passed to it and display it accordingly.
Attached is an example MVP implementation.
http://www.c-sharpcorner.com/UploadFile/rmcochran/PassiveView01262008091652AM/PassiveView.aspx
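As a bare-bones flavor of the passive-view variant of MVP (all types here are illustrative, not from that article):

    using System;
    using System.Collections.Generic;

    class Order { public string Id; }

    interface IOrderRepository { IList<Order> GetAll(); }

    // The form implements this; it stays dumb and only raises events.
    interface IOrdersView
    {
        event EventHandler RefreshRequested;
        void ShowOrders(IList<Order> orders);
    }

    class OrdersPresenter
    {
        public OrdersPresenter(IOrdersView view, IOrderRepository repository)
        {
            // All decision-making lives here, so it can be unit tested without a UI.
            view.RefreshRequested += (s, e) => view.ShowOrders(repository.GetAll());
        }
    }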
With our next major release we are looking to globalize our ASP.NET application, and I was asked to think of a way to keep track of which code has already been worked on in this effort.
My thought was to use a custom attribute and place it on all classes that have been "fixed".
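Something like this (the attribute and page names are just examples):

    using System;

    // Marker attribute; carries no behavior, it only records progress.
    [AttributeUsage(AttributeTargets.Class)]
    sealed class GlobalizedAttribute : Attribute { }

    [Globalized]
    public partial class CheckoutPage : System.Web.UI.Page { }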
What do you think?
Does anyone have a better idea?
Using an attribute to determine which classes have been globalized would then require a tool to process the code and determine which classes have and haven't been "processed"; it seems like it's getting a bit complicated.
A more traditional project tracking process would probably be better - and wouldn't "pollute" your code with attributes/other markup that have no functional meaning beyond the end of the globalisation project. How about having a defect raised for each class that requires work, and tracking it that way?
What about just counting or listing the classes and then working class by class? While an attribute may be an interesting idea, I'd regard it as over-engineered. Globalizing does nothing more than, well, going through each class and globalizing the code :)
You want to finish that before the next release anyway. So go ahead and just do it one by one, and there you have your progress. I'd regard a defect raised for each class as too much as well.
In my last project, I started full globalization a little late. I just went through the list of code files, from top to bottom. Alphabetically in my case, and folder after folder. So I always only had to remember which file I last worked on. That worked pretty well for me.
Edit: Another thing: In my last project, globalizing mainly involved moving hard-coded strings to resource files, and re-generating all text when the language changes at runtime. But you'll also have to think about things like number formats and the like. Microsoft's FxCop helped me with that, since it marks all number conversions etc. without specifying a culture as violations. FxCop keeps track of this, so when you resolved such a violation and re-ran FxCop, it would report the violation as missing (i.e. solved). That's especially useful for these harder-to-see things.
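For example, the conversions that rule flags look like this (a small sketch; the parameter names are made up):

    using System.Globalization;

    static class CultureSafe
    {
        public static double ParseStored(string input)
        {
            // double.Parse(input) alone is flagged: it implicitly uses the current culture,
            // so "1.5" parses differently on a German machine. Pin the culture for stored data:
            return double.Parse(input, CultureInfo.InvariantCulture);
        }

        public static string FormatForDisplay(double total)
        {
            // For UI text, be explicit that you want the user's culture:
            return total.ToString("N2", CultureInfo.CurrentCulture);
        }
    }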
How about writing a unit test for each page in the app? The unit test would load the page and walk its control tree, starting with WalkControls(Page.Controls):

    void WalkControls(System.Web.UI.ControlCollection controls)
    {
        foreach (System.Web.UI.Control c in controls)
        {
            // Do work here
            WalkControls(c.Controls);   // recurse: Page.Controls only holds the top-level controls
        }
    }
For the work part, load different globalization settings and see if the .Text property (or relevant property for your app) is different.
My assumption would be that no language should come out the same in all but the simplest cases.
Use the set of unit tests that successfully complete to track your progress.