C# OO Design Question

I have a Windows Forms app written in C#. The main_form class instantiates an AccessProcess named AccessProcessWorker, a class that inherits from BackgroundWorker. main_form then initializes it with the following code:
AccessProcessWorker.DoWork += new DoWorkEventHandler(processWorker.worker_DoWork);
AccessProcessWorker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(processWorkerCompleted);
AccessProcessWorker.ProgressChanged += new ProgressChangedEventHandler(processProgressChanged);
This application has just gone from POC to "make it work, fast".
I wrote this app to work against an Access database, but now I want it to work against MS SQL Server while keeping the option to work against Access as well. I could do something ugly like instantiate and initialize either a SqlProcessWorker or an AccessProcessWorker based on a UI selection made by the user. What I'd like instead is for main_form to always create something like an IProcess, so I don't have to add logic to main_form every time there is a new ProcessWorker. The problem with my design is that the initialization breaks when I do it the way I described.
If anyone has any ideas, or needs further clarification, please let me know.

What you're looking for is called "dependency injection".

At some point you will need to instantiate the correct type, and the Factory pattern is usually the go-to here. That said, it may be a bit much if you will only ever have one or two types to 'new' up in order to get your IProcess object.
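For example, a minimal factory sketch might look like the following. Only AccessProcessWorker comes from the question; DatabaseKind, ProcessWorkerBase, SqlProcessWorker and the factory itself are assumed names. Each worker wires its own DoWork, so the form only attaches the completed/progress handlers to whatever the factory returns.

using System;
using System.ComponentModel;

public enum DatabaseKind { Access, SqlServer }

// Hypothetical base type: derives from BackgroundWorker and wires its own DoWork,
// so main_form never needs to know which concrete worker it got.
public abstract class ProcessWorkerBase : BackgroundWorker
{
    protected ProcessWorkerBase()
    {
        WorkerReportsProgress = true;
        DoWork += (sender, e) => DoWorkCore(e);
    }

    protected abstract void DoWorkCore(DoWorkEventArgs e);
}

public sealed class AccessProcessWorker : ProcessWorkerBase
{
    protected override void DoWorkCore(DoWorkEventArgs e) { /* Access-specific work */ }
}

public sealed class SqlProcessWorker : ProcessWorkerBase
{
    protected override void DoWorkCore(DoWorkEventArgs e) { /* SQL Server-specific work */ }
}

public static class ProcessWorkerFactory
{
    public static ProcessWorkerBase Create(DatabaseKind kind)
    {
        switch (kind)
        {
            case DatabaseKind.Access: return new AccessProcessWorker();
            case DatabaseKind.SqlServer: return new SqlProcessWorker();
            default: throw new ArgumentOutOfRangeException("kind");
        }
    }
}

main_form would then call something like ProcessWorkerFactory.Create(selectedKind), attach RunWorkerCompleted and ProgressChanged to the returned worker, and start it, without ever naming a concrete worker type.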

In the interests of keeping it simple, I would actually just go with the "ugly" approach.
You've mentioned Access and SQL Server as the two current databases, but how many do you realistically believe your app will need to support? In my experience, an application's database platform is very rarely changed, and never without serious thought.
If there were a large set of database platforms to support and you couldn't predict which of them in advance, then a decoupled design might be useful. Otherwise, KISS.

If both databases have the same layout and structure, you can just use EntitySpaces and change the application's default connection. That is, you have one code base for data access, and you simply set the current connection depending on whether you want the Access or the SQL Server database.

I would wrap the "ugly" bits of code in a separate method, or preferably a class, which takes care of choosing which DB to talk to and of synchronizing with the actual BackgroundWorker instances. The important point here is to adhere to the DRY principle: Don't Repeat Yourself.

I think that for a work project you should do it as quickly as possible, without designing for future databases, because the switch is probably never going to happen.
Are you sure there isn't already a class that works with both SQL Server and MS Access? For example, OleDbConnection and OleDbCommand?
For simple SQL, all you need to do is change the connection string and you can work with both databases.
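A rough sketch of that idea, assuming a hypothetical Customers table (the connection strings and paths are illustrative only):

using System.Data.OleDb;

static int CountCustomers(bool useAccess)
{
    // The same OleDb code runs against either database; only the connection string changes.
    string connectionString = useAccess
        ? @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\app.mdb"
        : "Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI";

    using (var connection = new OleDbConnection(connectionString))
    using (var command = new OleDbCommand("SELECT COUNT(*) FROM Customers", connection))
    {
        connection.Open();
        return (int)command.ExecuteScalar();
    }
}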
If you haven't coded the rest of the application yet, you should take a day or two to look at some frameworks, compare them, and choose one to your liking. It will make you write less code, and in future apps you'll be spared most of the database plumbing code. I guarantee the time invested will be returned to you manyfold (if you work with databases once in a while, that is).

Related

Most elegant way of delayed or repeatable initializing

I am trying to rewrite an extremely ugly class in one application at work. In one of our classes there are hundreds of lines of code that handle initialization and re-initialization of some classes. Currently this is done in an awful brute-force way: you write your init code and manually copy it into the re-init part (as the two are very similar).
Because of this, I started to rewrite it as a list of delegates which are then called with a parameter in both places (bool isReinit). Then I noticed that most of the delegates are also identical, as the initialization process for 90 percent of the classes is identical. This means I should be able to create some default initialization function and simplify the code drastically. Currently I have created something like this:
https://dotnetfiddle.net/RVS5UT
I also created a class CustomInitializer which implements IInitializer and only takes one Func as a parameter and runs it in Initialize, for the cases where the initialization is very different.
Now, this is a simplified and anonymized piece of the working code, but it works. The problem is that the whole approach is very awkward and the constructor signature is ugly as hell. Is there some way to simplify this? I can't find any pattern or approach that would help. Any step towards better code is welcome; maybe I am just missing something.
There is also another catch. One solution I figured out would be to store the property pairs (var1a + var1b, var2a + var2b, ...) in an object and pass it directly to the Initialize method. But this would mean moving the properties, which is sadly not possible at the moment, because the file has over 18k lines and the code reviewers would kill me for changing a third of them to refactor one method (even if it's a long one). I need to leave the target properties (var1a, var1b, var2a, ...) where they are now. This could also mean that there is no elegant way to solve this.
I am using .NET 4.0, C# 5.0
EDIT: I have no access to the initialized types (another stupid catch)
Thanks for your help.
the file has over 18k lines
Wow, looks like a lot of fun.
It is absolutely good to try to improve it. And believe me, whatever your co-workers may think, there is nothing else to do here but refactor, unless this code never needs to evolve.
But it seems to me you are heading down the path of complexity, trying to be DRY instead of trying to be expressive. The idea of having a StandardInitializer and a CustomInitializer managing lambdas is extremely complex. The initialization of a class should live in the class that is responsible for initializing it. If some behaviors are really shared, they may share a base class or a collaborating class.
I recommend the discussion in Working Effectively with Legacy Code. As you'll see, and probably already know, the first key point is to have tests.
Please don't try to refactor such a class without a test harness. Otherwise you'll introduce regressions, you'll be frustrated, and your co-workers will be comforted in their view that nothing can be done here without breaking everything.
And don't forget: if tests are hard to create, it's because of bad code, not because tests are expensive. Bad code is expensive.
Once some tests protect you, try to think in terms of responsibility and life cycle. For example, in a WPF application it is a common issue to have "initializable" ViewModels because they make async web service calls to initialize themselves.
In that case, the object responsible for the lifecycle of a given ViewModel also has the responsibility to initialize it. If it manages several initializable view models, then this kind of code is fine:
foreach (var initializable in initializables)
{
    initializable.Initialize();
}
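(For clarity, the loop above assumes a small interface along these lines; the name is hypothetical.)

public interface IInitializable
{
    void Initialize();
}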
But please, whatever solution you choose, keep a clear separation between Initialize and Reinitialize (if they have things in common, have them call an internal shared function). It is a very bad idea to write stuff like:
init.Initialize(true);
It clearly states that the behavior of your Initialize function will change depending on a boolean value. If you have two behaviors, you should have two functions with clear names.
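A sketch of that separation, with all names hypothetical: two clearly named public entry points sharing private helpers, rather than one Initialize(bool) whose behavior flips on a flag.

public class ConnectionManager
{
    public void Initialize()
    {
        LoadConfiguration();
        OpenConnections();
    }

    public void Reinitialize()
    {
        CloseConnections();      // only the re-init path tears existing state down
        LoadConfiguration();     // the shared part is reused, not copy-pasted
        OpenConnections();
    }

    private void LoadConfiguration() { /* shared setup */ }
    private void OpenConnections()   { /* shared setup */ }
    private void CloseConnections()  { /* re-init only */ }
}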

C# reference collection for storing reference types

I'd like to implement a collection (something like List<T>) which would hold all the objects I create over the entire life span of my application, as if it were an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically, I want to make sure that my process only deals with valid instances and that I don't re-fetch information I have already fetched from the database. So all my objects would basically be in one place: my collection. A nice thing I can do with this is avoid database calls for data I already have (even if I updated it after retrieval, it's still up to date, unless some other process updated it, but that's a different concern). I don't want to call new Customer("James Thomas") again if I already initialized James Thomas sometime in the past. Currently I end up with multiple copies of the same object across the AppDomain, some in sync and others out of sync, and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my AppDomain (one per process would be even better, if possible).
I can't use regular collections like List or ArrayList, because I cannot pass parameters by their actual local reference to the existing Add() methods using ref, so that's no good, I think. So how can this be implemented, and can it be implemented at all? A 'linked list' type of class with all methods working with ref and out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)?
So the bottom line is: I don't want to re-create an object if I've already created it before during the application's lifetime, unless I decide to re-create it explicitly (maybe it's out of date, so I have to fetch it again from the DB). Are there alternatives, maybe?
The easiest way to do what you're trying to accomplish is to create a wrapper that holds on to the list. This wrapper would have an Add method which takes in a ref; in the Add it looks up the value in the list and creates it when it can't find it. Or use a cache.
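A minimal sketch of that wrapper, using a keyed lookup rather than ref parameters (Customer and the int key are assumptions for illustration):

using System;
using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerRegistry
{
    private readonly Dictionary<int, Customer> _items = new Dictionary<int, Customer>();

    // Returns the instance we already have, or creates (and remembers) it otherwise,
    // so there is only ever one copy per customer in the process.
    public Customer GetOrCreate(int id, Func<int, Customer> loadFromDatabase)
    {
        Customer existing;
        if (!_items.TryGetValue(id, out existing))
        {
            existing = loadFromDatabase(id);
            _items[id] = existing;
        }
        return existing;
    }
}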
But... this statement would make me worry.
I don't want to re-create an object if I've already created it before during the application's lifetime
But as Raymond Chen points out, a cache with a bad policy is another name for a memory leak. What you've described is a cache with no policy.
To fix this, consider using System.Runtime.Caching (for .NET 4.0) or the Enterprise Library Caching Block (for 3.5 and earlier) in a non-web app. If this is a web app, you can use System.Web.Caching. And if you must roll your own, at least put a sensible policy in place.
All of this of course assumes that your database's caching is insufficient.
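For example, a rough sketch with System.Runtime.Caching, reusing the hypothetical Customer type from the sketch above (the key scheme and the ten-minute sliding expiration are arbitrary choices, not recommendations):

using System;
using System.Runtime.Caching;

static Customer GetCustomer(int id, Func<int, Customer> loadFromDatabase)
{
    ObjectCache cache = MemoryCache.Default;
    string key = "customer:" + id;

    var cached = (Customer)cache.Get(key);
    if (cached == null)
    {
        cached = loadFromDatabase(id);
        // The policy is what keeps this from becoming "a cache with no policy".
        var policy = new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) };
        cache.Add(key, cached, policy);
    }
    return cached;
}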
Using IoC will save you many, many bugs, make your application easier to test, and leave your modules less coupled.
IoC performance is pretty good.
I recommend the Castle Project's implementation, Castle Windsor:
http://stw.castleproject.org/Windsor.MainPage.ashx
Maybe you'll need a day to learn it, but it's great.
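A minimal Windsor sketch, assuming an IProcess interface and a SqlProcessWorker implementation like the ones discussed in the first question (both hypothetical here); the registration could just as well be driven by configuration or by the user's UI selection:

using Castle.MicroKernel.Registration;
using Castle.Windsor;

static IProcess BuildWorker()
{
    var container = new WindsorContainer();
    container.Register(Component.For<IProcess>().ImplementedBy<SqlProcessWorker>());

    // The calling code only ever asks for the interface; it never news up a concrete type.
    return container.Resolve<IProcess>();
}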

Which DataContext method will be faster?

I am using a basic DataContext to create objects and then submit them to a database.
I have written a couple of tests myself to see which is faster, but I'm wondering which of the following methods is considered best practice.
The code iterates through a loop and instantiates an object which is to be persisted to the database. Is it better to (both options are sketched below):
1.) Create a list of objects, assign each created object to the list, and then at the end use
MyDataContext.InsertAllOnSubmit(ListOfObjects)
2.) Hand each created object directly to the DataContext using
MyDataContext.InsertOnSubmit(Object)
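A minimal sketch of the two options (db stands for your DataContext instance, and Customers/Customer are a hypothetical mapped table and entity; in LINQ to SQL the table, not the DataContext itself, exposes the insert methods):

// Option 1: build a list, then hand it over in one call.
var customers = new List<Customer>();
for (int i = 0; i < 100; i++)
{
    customers.Add(new Customer { Name = "Customer " + i });
}
db.Customers.InsertAllOnSubmit(customers);
db.SubmitChanges();

// Option 2: hand each object to the DataContext as it is created.
for (int i = 0; i < 100; i++)
{
    db.Customers.InsertOnSubmit(new Customer { Name = "Customer " + i });
}
db.SubmitChanges();

// Either way, nothing is sent to the database until SubmitChanges() runs.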
Hope this makes sense, if anyone needs more information let me know!
Thanks
I assume we're talking about the performance impact at submit time; no database connection is opened when these methods are called.
Since each implementation only updates the database on SubmitChanges, they are both very similar.
Any performance difference will be marginal (and will be dwarfed by whatever processing you do to put the objects into the list or enumerate it), so go with whichever fits better into your design.
You might find this page about premature optimization interesting - http://c2.com/cgi/wiki?PrematureOptimization
Premature optimization is the root of all evil. -- Donald Knuth
I guess for the second option, you'll need to re-open the connection for each operation. Using a list is cleaner and a better option.

Is creating a "dummy record" to force data-base obey the business logic, a good idea or a dumb one?

In some projects I see that a dummy record needs to be created in the DB in order to keep the business logic going without breaking the DB constraints.
So far I have seen it used in 2 ways:
By adding a field like IsDummy
By adding a field called ObjectType which points to a type: Dummy
OK, it helps achieve what needs to be achieved.
But what makes me wary of such solutions is that you have to keep in mind that some dummy records exist in the application and need to be handled in certain processes. If not, you run into problems until you realize they exist, or until someone on the team tells you, "Aha! You have forgotten the dummy records. You should also do..."
So the question is:
Is it a good idea to create dummy records to keep the business logic working without making the DB complain? If yes, what is the best practice to prevent developers from overlooking their existence? If not, what do you do to avoid ending up in a situation where creating a dummy record is your only option?
Thanks!
Using dummy records is inferior to getting the constraints right.
There's often a temptation to use them because using dummy records can seem like the fastest way to deliver a new feature (and maybe sometimes it is), but they are never part of a good design, because they hide differences between your domain logic and data model.
Dummy records are only required when the modeller cannot easily change the Database Definition, which means the definition and/or the data model is pretty bad. One should never end up in a situation where there has to be special code in the app layer to handle special cases in the database. That is a guaranteed maintenance nightmare.
Any good definition or model will allow changes easily, without "affecting existing code".
All business logic [that is defined in the database] should be implemented using ANSI SQL constraints, checks, and rules. (Of course, lower-level structures are already constrained via domains/datatypes, etc., but I would not classify those as "business rules".) I ensure that I never end up having to implement dummies simply by doing that.
If that cannot be done, then the modeller lacks knowledge and experience. Or higher-level requirements, such as normalisation, have been broken, which presents obstacles to implementing the constraints that depend on them; that also means the modeller has failed.
I have never needed to break such Constraints, or add dummy records (and I have worked on an awful lot of databases). I have removed dummy records (and duplicates) when I have reworked databases created by others.
I've never run across having to do this. If you need to do this, there's something wrong with your data structure, and it's going to cause problems further down the line for reporting...
Using Dummies is dumb.
In general you should aim to get your logic right without them. I have seen them used too, but only as an emergency solution. Your description sounds way too much like making it a standard practice. That would cause more problems than it solves.
The only reason I can see for adding "dummy" records is when you have a seriously bad app and database design.
It is most definitely not common practice.
If your business logic depends on a record existing then you need to do one of two things: Either make sure that a CORRECT record is created prior to executing that logic; or, change the logic to take missing information into account.
I think any situation where something isn't easily distinguishable as "business logic" is cause to look for a better way.
The fact that you mention "which points to a type: Dummy" leads me to believe you are using some kind of ORM for your data access. A very good checkpoint (though not the only one) for ORM solutions like NHibernate is that your source code VERY EXPLICITLY describes the data structures driving your application. This not only allows your data access to be managed easily under source control, but it also allows for easier debugging down the line should a problem occur (and let's face it, it's not a matter of IF a problem will occur, but WHEN).
When you introduce a "crutch" like a dummy record, you are ignoring the point of a database. A database is there to enforce rules against your data, precisely to ELIMINATE the need for this kind of thing. I recommend you look at your application logic FIRST, before resorting to this kind of technique. Think about your fellow devs, or a new hire: what if they need to add a feature and forget about your little "dummy record" logic?
You mention in your question that you feel apprehensive. Go with your gut: get rid of the dummy records.
I have to go with the common feeling here and argue against dummy records.
What will happen is that a new developer will not know about them and will not code to handle them, or will delete a table and forget to add the dummy record back in.
I have seen them in legacy databases, and I have seen both of the above happen.
Also, the longer they exist, the harder it is to take them out, and the more code you have to write to account for them, code that could probably have been avoided if the original design had simply left them out.
The correct solution would be to update your business logic.
To quote your expanded explanation:
Assume that you have a Package object and you have implemented a business rule that a Package without any content cannot be created. You created some business-layer rules and designed your DB with the relevant constraints. But after some years a new feature is requested, and to accomplish it you have to be able to create a package without content. To get around this, you decide to create a dummy content item which is not visible in the UI but lets you create an empty package.
So at one time a Package without content was invalid, and the business layer therefore enforced the existence of content in a Package object. That makes sense. Now, if the real-world scenario has changed such that there is a valid reason to create Package objects without content, it is the business logic layer that needs to change.
Almost universally, using a "dummy" anything anywhere is a bad idea and usually indicates an issue in the implementation. In this instance you are using dummy data to allow "compliance" with a business layer which no longer accurately represents the real-world constraints of the business.
If a package without content is not valid, then dummy data to allow "compliance" with the business layer is a foolish hack. In essence, you wrote rules to protect your own system and are now attempting to circumvent your own protection. On the other hand, if a package without content is valid, then the business layer shouldn't be enforcing a bogus constraint. In neither case is dummy data valid.

C# Where to place code that retrieves GUI data?

I'm puzzling over where to place some code. I have a listbox, and the list items are stored in the database. The thing is, where should I place the code that retrieves the data from the database? I already have a database class with a method ExecuteSelectQuery(). Should I create a Utility class with a method public DataTable GetGroupData()? (The "group" is the listbox.) This method would then call the ExecuteSelectQuery() method from the database class.
What should I do?
There are many data access patterns you could look at implementing. A Repository pattern might get you where you need to go.
The idea is to have a GroupRepository class that fetches Group data. It doesn't have to be overly complicated. A simple method like GetAllGroups could return a collection you can use for your ListBox items.
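A sketch of what that might look like, built on top of the existing database class. DatabaseHelper stands in for the question's database class, and its ExecuteSelectQuery signature, the SQL, and the Group type are all assumptions:

using System.Collections.Generic;
using System.Data;

public class Group
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class GroupRepository
{
    private readonly DatabaseHelper _db;

    public GroupRepository(DatabaseHelper db)
    {
        _db = db;
    }

    public List<Group> GetAllGroups()
    {
        // Delegate the raw data access to the existing database class.
        DataTable table = _db.ExecuteSelectQuery("SELECT GroupId, GroupName FROM Groups");

        var groups = new List<Group>();
        foreach (DataRow row in table.Rows)
        {
            groups.Add(new Group
            {
                Id = (int)row["GroupId"],
                Name = (string)row["GroupName"]
            });
        }
        return groups;
    }
}

The form (or page) then only binds the returned list to the listbox and never sees a SQL string.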
You could simply abstract the database code into a utility class, as you suggest, and this wouldn't be a terrible solution. (You probably can't get much better with WebForms.) Yet if the system is going to end up quite complicated, then you might want to pick a more formal architecture...
Probably the best option for ASP.NET is the ASP.NET MVC Framework, which fully replaces WebForms. It's built around the Model-View-Controller architectural pattern, which is designed specifically to separate user interface code from back-end logic, which seems to be exactly what you want. (Indeed, it makes the design of websites a lot more structured in many ways.)
Also, if you want to create a more structured model for your database system, consider using an ORM such as the ADO.NET Entity Framework. However, this may be overkill if your database isn't very complicated. Using the MVC pattern is probably going to make a lot more difference.
Consider adding a layer between your UI and data access layers. There you can implement some kind of caching if the data does not change frequently; this way you avoid retrieving the same data from the DB multiple times.
I would recommend the Application Architecture Guide book.
Pavel
It looks like you already have the data access layer implemented via your database class. I guess the confusion right now is in implementing the presentation and business layers. To me, GetGroupData is more a part of the business layer, and it is the right thing to implement. It will allow you to make changes in the future with minimal or no impact on the presentation layer. Therefore, the flow you suggested looks right: LoadListBox calls GetGroupData, which calls ExecuteSelectQuery.
