I've run into this issue quite a few times and never liked the solution chosen. Let's say you have a list of States (just as a simple example) in the database. In your code-behind, you want to be able to reference a State by ID and have the list of them available via Intellisense.
For example:
States.Arizona.Id //returns a GUID
But the problem is that I don't want to hard-code the GUIDs. Now in the past I've done all of the following:
Create class constants (hard-coding of the worst kind... ugh!)
Create Lookup classes that have an ID property (among others) (still hard-coded, and would require a rebuild of the project if ever updated)
Put all the GUIDs into the .config file, create an enumeration, and within a static constructor load the GUIDs from the .config into a Hashtable with the enumeration item as the key. So then I can do: StateHash[StateEnum.Arizona] (sketched below). Nice, because if a GUID changes, no rebuild is required. However, it doesn't help if a new record is added or an old one removed, because the enumeration will need to be updated.
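Roughly, that third approach looks like this (the appSettings key names and the GUID value are made up):

using System;
using System.Collections.Generic;
using System.Configuration; // requires a reference to System.Configuration

public enum StateEnum { Arizona, California, Texas }

public static class StateHash
{
    private static readonly Dictionary<StateEnum, Guid> Ids = new Dictionary<StateEnum, Guid>();

    static StateHash()
    {
        // Expects one appSettings entry per enum value, e.g.
        // <add key="State.Arizona" value="0f8fad5b-d9cb-469f-a165-70867728950e" />
        foreach (StateEnum state in Enum.GetValues(typeof(StateEnum)))
        {
            Ids[state] = new Guid(ConfigurationManager.AppSettings["State." + state]);
        }
    }

    public static Guid Get(StateEnum state)
    {
        return Ids[state];
    }
}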
So what I'm asking is if someone has a better solution? Ideally, I'd want to be able to look up via Intellisense and not have to rebuild code when there's an update. Not even sure that's possible.
EDIT: Using states was just an example (probably a bad one). It could be a list of widgets, car types, etc. if that helps.
Personally, I would store lookup data in a database, and simply try to avoid the type of hard coding that binds rules to things like individual states. Perhaps some key property of those states (like .ApplyDoubleTax or something). And non-logic code doesn't need to use intellisense - it typically just needs to list them or find by name, which can be done easily enough however you have stored it.
Equally, I'd load the data once and cache it.
Arguably, coding the logic against states is hard coding - especially if you want to go international anytime soon - I hate it when a site asks me what state I live in...
Re the data changing... is the USA looking to annex anytime soon?
I believe that if it shows up in Intellisense, then, by definition, it is hard-coded into your program.
That said, if your goal is to make the hard-coding as painless as possible, one thing you might try is auto-generating your enumeration based on what's in the database. That is, you can write a program that reads the database and creates a FOO.cs file containing your enumeration. Then just run that program every time the data changes.
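A minimal sketch of such a generator, assuming a hypothetical States table with a Name column (adjust the query and connection string to your schema):

using System;
using System.Data.SqlClient;
using System.IO;
using System.Text;

class GenerateStatesEnum
{
    static void Main()
    {
        var sb = new StringBuilder();
        sb.AppendLine("// <auto-generated>Regenerate by running GenerateStatesEnum.</auto-generated>");
        sb.AppendLine("public enum StateEnum");
        sb.AppendLine("{");

        using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=MyDb;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT Name FROM States ORDER BY Name", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Strip characters that aren't legal in identifiers, e.g. "New Mexico" -> "NewMexico"
                    sb.AppendLine("    " + reader.GetString(0).Replace(" ", string.Empty) + ",");
                }
            }
        }

        sb.AppendLine("}");
        File.WriteAllText("States.generated.cs", sb.ToString());
    }
}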
This cries out for a custom MSBuild task. You really want an autogenerated enum or class in this case, since the IDs are sourced from a database, can and will change, and are not easily predicted. You could then put the task in your project and it would run before each build, updating as necessary.
Or start looking at ORMs :)
Using Protobuf-net, I want to know what properties of an object have been updated at the end of a merge operation so that I can notify interested code to update other components that may relate to those updated properties.
I noticed that there are a few different types of properties/methods I can add which will help me serialize selectively (Specified and ShouldSerialize). I noticed in MemberSpecifiedDecorator that the 'read' method will set the specified property to true when it reads. However, even if I add Specified properties for each field, I'd have to check each one (and update code when new properties were added).
My current plan is to create a custom SerializationContext.Context object, and then detect that during the deserialization process, and update a list of members. However, there are quite a few places in the code I need to touch to do that, and I'd rather do it using an existing system if possible.
It would be much more desirable to get a list of updated member information. I realize that walking down an object graph may result in many members, but in my use case I'm not merging complex objects, just simple POCOs with value-type properties.
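(For illustration, the Specified convention mentioned above looks roughly like this; the Customer type and its members are made up:)

using ProtoBuf;

[ProtoContract]
public class Customer
{
    [ProtoMember(1)]
    public string Name { get; set; }

    // The *Specified convention: protobuf-net sets this to true when Name
    // is read during a merge, and consults it when deciding whether to
    // write Name during serialization.
    public bool NameSpecified { get; set; }

    [ProtoMember(2)]
    public int Age { get; set; }

    public bool AgeSpecified { get; set; }
}

// After a merge, the flags show which fields were present in the payload:
// var customer = Serializer.Merge(stream, existingCustomer);
// if (customer.NameSpecified) { /* Name was updated by the merge */ }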
Getting a delta log isn't an inbuilt feature, partly because of the complexity when it comes to complex models, as you note. The Specified trick would work, although this isn't the purpose it was designed for - but to avoid adding complexity to your own code, that would be something best handled via reflection, perhaps using the Expression API for performance. Another approach might be to use a ProtoReader to know in advance which fields will be touched, but that demands an understanding of the field-number/member map (which can be queried via RuntimeTypeModel).
Are you using hand-crafted models? Or are you using protogen? Yet another option would be to have code in the setters that logs changes somewhere. I don't think protogen currently emits partial method hooks, but it possibly could.
But let me turn this around: it isn't a feature that is built in right now, and it is somewhat limited due to complexity anyway, but: what would a "good" API for this look like to you?
As a side note: this isn't really a common feature in serializers - you'd face very similar challenges in any mainstream serializer that I can think of.
I have a newb question. I have a winforms application with a number of classes that reference a number of UNC network paths. I started to notice a lot of string duplication, and began trying to weed it out by consolidating the strings into the classes where they made the most sense. I was then referencing whichever class held the string I needed each time I needed its value, but I'm sure this was a sloppy way to do it.
Now I've settled on making a single class ("StringLibrary") and am referencing that class in each class I need to pull strings from. This seems much more efficient than what I was doing before, however, I'm still not sure if this is a good way to do it in general.
Is there a better way (i.e. a more standardized way) to consolidate a group of strings or values in C#?
It depends on whether the strings are configuration or more permanent. For network paths, you may want to put them in your app.config file (see What is App.config in C#.NET? How to use it?), since they may change from time to time, or differ between deployments (and you do not want to recompile your code for every site). Depending on the nature of the data, you may alternatively want to store it in the registry or in a database.
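For example, reading such a path from app.config is one line (the "ReportsShare" key and the path are made up):

using System;
using System.Configuration; // requires a reference to System.Configuration

class Program
{
    static void Main()
    {
        // app.config:
        // <configuration>
        //   <appSettings>
        //     <add key="ReportsShare" value="\\fileserver\reports" />
        //   </appSettings>
        // </configuration>
        string reportsShare = ConfigurationManager.AppSettings["ReportsShare"];
        Console.WriteLine(reportsShare);
    }
}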
If it is something more tightly tied to your code, like names of controls on a form or names of columns in your database, then you may want to centralise their definitions as you suggest, and reference them all from there. When there are a lot of them, you may want to split your StringLibrary into more classes with more relevant names (e.g. if you are specifying names of columns in your database, then you may want to create such a static class for each table in your database). If you take this approach, and since you are new to C#, it may also help to read Static readonly vs const to decide if you want them to be const or static readonly.
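A minimal sketch of that per-table layout (the table and column names are made up):

// One static class of constants per table, instead of one catch-all StringLibrary.
public static class CustomersTable
{
    public const string TableName = "Customers";
    public const string Id = "CustomerId";
    public const string Name = "CustomerName";
}

public static class OrdersTable
{
    public const string TableName = "Orders";
    public const string Id = "OrderId";
    public const string CustomerId = "CustomerId";
}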
These could be added to an application config file/web config file, resource files, and/or settings files.
This way you can administer these strings, should they change, without having to re-build your application and also apply transformations (if in an app.config/web.config) when performing releases to different environments/deployments.
I'd like to implement a collection (something like List<T>) which would hold all the objects created over the entire life span of my application, as if it were an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and I don't re-fetch information I already fetched from the database. So all my objects will basically be in one place: my collection.
A cool thing I can do with this is avoid database calls to get data I already have (even if I updated it after retrieval, it's still up-to-date, unless of course some other process updated it, but that's a different concern). I don't want to be calling new Customer("James Thomas"); again if I initted James Thomas already sometime in the past. Currently I end up with multiple copies of the same object across the appdomain, some out of sync, others in sync, and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (if possible, per process would be better).
I can't use regular collections like List or ArrayList, because I cannot pass parameters by their real local reference to their existing Add() methods (i.e. using ref), so that's no good I think. So how can this be implemented, and can it be implemented at all? A 'linked list' type of class with all methods working with ref and out params is what I'm thinking now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)?
So the bottom line is: I don't want to re-create an object if I've already created it before during the entire application life, unless I decide to re-create it explicitly (maybe it's out-of-date or something, so I have to fetch it again from the db). Are there alternatives, maybe?
The easiest way to do what you're trying to accomplish is to create a wrapper that holds on to the list. This wrapper will have an Add method which takes in a ref. In the Add it looks up the value in the list and creates it when it can't find the value. Or use a cache.
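A minimal sketch of such a wrapper (RefCache is a made-up name; Customer is from the question):

using System;
using System.Collections.Generic;

// Add() looks the key up first and only stores the new instance when none
// exists; either way the caller's reference ends up pointing at the single
// shared copy.
public class RefCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, TValue> _items = new Dictionary<TKey, TValue>();

    public void Add(TKey key, ref TValue item)
    {
        TValue existing;
        if (_items.TryGetValue(key, out existing))
        {
            item = existing;       // reuse the one copy we already hold
        }
        else
        {
            _items.Add(key, item); // first sighting: keep this instance
        }
    }
}

// Usage:
// var customers = new RefCache<Guid, Customer>();
// var c = new Customer("James Thomas");
// customers.Add(c.Id, ref c); // c now points at the cached instance, if one existed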
But... this statement would make me worry.
"I don't want to re-create an object if I've already created it before during the entire application life"
But as Raymond Chen points out, "A cache with a bad policy is another name for a memory leak." What you've described is a cache with no policy.
To fix this, for a non-web app you should consider either System.Runtime.Caching on .NET 4.0, or the Enterprise Library Caching Block on 3.5 and earlier. If this is a web app, then you can use System.Web.Caching. Or, if you must roll your own, at least get a sensible policy in place.
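A minimal sketch with System.Runtime.Caching (the Customer type and the database loader are stand-ins):

using System;
using System.Runtime.Caching; // .NET 4.0+; requires a reference to System.Runtime.Caching

public class Customer
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerCache
{
    private static readonly ObjectCache Cache = MemoryCache.Default;

    public static Customer Get(string id)
    {
        var customer = Cache.Get(id) as Customer;
        if (customer == null)
        {
            customer = LoadFromDatabase(id);
            // The policy is the important part: without expiration this is
            // exactly the "memory leak" Raymond Chen warns about.
            Cache.Add(id, customer, new CacheItemPolicy
            {
                SlidingExpiration = TimeSpan.FromMinutes(20)
            });
        }
        return customer;
    }

    private static Customer LoadFromDatabase(string id)
    {
        // Stand-in for the real database call.
        return new Customer { Id = id, Name = "James Thomas" };
    }
}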
All of this of course assumes that your database's caching is insufficient.
Using IoC will save you many, many bugs, and will make your application easier to test and your modules less coupled.
IoC performance is pretty good.
I recommend using the Castle project's implementation, Windsor:
http://stw.castleproject.org/Windsor.MainPage.ashx
maybe you'll need a day to learn it, but it's great.
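A minimal registration/resolution sketch with Windsor (the repository interface and class are made up):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IOrderRepository { void Save(); }
public class SqlOrderRepository : IOrderRepository { public void Save() { } }

class Program
{
    static void Main()
    {
        var container = new WindsorContainer();

        // Map the abstraction to a concrete implementation once, centrally.
        container.Register(
            Component.For<IOrderRepository>()
                     .ImplementedBy<SqlOrderRepository>());

        // Consumers ask for the abstraction; Windsor supplies the instance
        // (plus any constructor dependencies it knows how to build).
        var repo = container.Resolve<IOrderRepository>();
        repo.Save();
    }
}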
In some projects I have seen that a dummy record needs to be created in the DB in order to keep the business logic going without breaking DB constraints.
So far I have seen its usage in 2 ways:
By adding a field like IsDummy
By adding a field called something like ObjectType which points to a type: Dummy
Ok, it helps on what needs to be achieved.
But what makes me wary of such solutions is that you have to keep in mind that some dummy records exist in the application and need to be handled in some processes. If not, you face problems until you realize they exist, or until someone on the team tells you: "Aha! You have forgotten the dummy records. You should also do..."
So the question is:
Is it a good idea to create dummy records to keep the business logic as it is without making the DB complain? If yes, what is the best practice to prevent developers from overlooking their existence? If not, what do you do to avoid falling into a situation where your only option is to create a dummy record?
Thanks!
Using dummy records is inferior to getting the constraints right.
There's often a temptation to use them because using dummy records can seem like the fastest way to deliver a new feature (and maybe sometimes it is), but they are never part of a good design, because they hide differences between your domain logic and data model.
Dummy records are only required when the modeller cannot easily change the Database Definition, which means the definition and/or the data model is pretty bad. One should never end up in a situation where there has to be special code in the app layer to handle special cases in the database. That is a guaranteed maintenance nightmare.
Any good definition or model will allow changes easily, without "affecting existing code".
All business logic [that is defined in the Database] should be implemented using ANSI SQL Constraints, Checks, and Rules. (Of course Lower level structures are already constrained via Domains/Datatypes, etc., but I would not classify them as "business rules".) I ensure that I don't end up having to implement dummies, simply by doing that.
If that cannot be done, then the modeller lacks knowledge and experience. Or higher-level requirements, such as Normalisation, have been broken, which presents obstacles to implementing the Constraints that depend on them; that also means the modeller failed.
I have never needed to break such Constraints, or add dummy records (and I have worked on an awful lot of databases). I have removed dummy records (and duplicates) when I have reworked databases created by others.
I've never run across having to do this. If you need to do this, there's something wrong with your data structure, and it's going to cause problems further down the line for reporting...
Using Dummies is dumb.
In general you should aim to get your logic right without them. I have seen them used too, but only as an emergency solution. Your description sounds way too much like making it a standard practice. That would cause more problems than it solves.
The only reason I can see for adding "dummy" records is when you have a seriously bad app and database design.
It is most definitely not common practice.
If your business logic depends on a record existing then you need to do one of two things: Either make sure that a CORRECT record is created prior to executing that logic; or, change the logic to take missing information into account.
I think any situation where something isn't very easily distinguishable as "business-logic" is a cause for trying to think of a better way.
The fact that you mention "which points a type: Dummy" leads me to believe you are using some kind of ORM for handling your data access. A very good checkpoint (though not the only) for ORM solutions like NHibernate is that your source code VERY EXPLICITLY describes your data structures driving your application. This not only allows your data access to easily be managed under source control, but it also allows for easier debugging down the line should a problem occur (and let's face it, it's not a matter of IF a problem will occur, but WHEN).
When you introduce some kind of "crutch" like a dummy record, you are ignoring the point of a database. A database is there to enforce rules against your data, in an effort to ELIMINATE the need for this kind of thing. I recommend you take a look at your application logic FIRST, before resorting to this kind of technique. Think about your fellow dev's, or a new hire. What if they need to add a feature and forget your little "dummy record" logic?
You mention yourself in your question feeling apprehension. Go with your gut. Get rid of the dummy records.
I have to go with the common feeling here and argue against dummy records.
What will happen is that a new developer will not know about them and not code to handle them, or delete a table and forget to add in a new dummy record.
I have experienced them in legacy databases and have seen both of the above mentioned happen.
Also, the longer they exist, the harder it is to take them out, and the more code you have to write to account for these dummy records; code that could probably have been avoided if the original design had been done without them.
The correct solution would be to update your business logic.
To quote your expanded explanation:
Assume that you have a Package object and you have implemented a business rule that a Package without any content cannot be created. You created some business layer rules and designed your DB with the relevant constraints. But after some years a new feature is requested, and to accomplish it you have to be able to create a package without content. To overcome this, you decide to create a dummy content which is not visible in the UI but lets you create an empty package.
So at one time a Package without content was invalid, and thus the business layer enforced the existence of content in a Package object. That makes sense. Now, if the real-world scenario has changed such that there is a VALID reason to create Package objects without content, it is the business logic layer which needs to be changed.
Almost universally using "dummy" anything anywhere is a bad idea and usually indicates an issue in implementation. In this instance you are using dummy data to allow "compliance" with a business layer which is no longer accurately representing the real world constraints of the business.
If a package without content is not valid, then dummy data to allow "compliance" with the business layer is a foolish hack. In essence you wrote rules to protect your own system, and are now attempting to circumvent your own protection. On the other hand, if a package without content is valid, then the business layer shouldn't be enforcing bogus constraints. In neither instance is dummy data valid.
I need to work on an application that consists of two major parts:
The business logic part with specific business classes (e.g. Book, Library, Author, ...)
A generic part that can show Books, Libraries, ... in data grids, map them to a database, and so on.
The generic part uses reflection to get the data out of the business classes without the need to write specific data-grid or database logic in the business classes. This works fine and allows us to add new business classes (e.g. LibraryMember) without the need to adjust the data grid and database logic.
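(For illustration, the generic part boils down to something like this; GridMapper and Book are made-up names:)

using System;
using System.Collections.Generic;
using System.Reflection;

// Pull column names and values out of any business object via reflection,
// so the grid needs no per-class code.
public static class GridMapper
{
    public static IDictionary<string, object> ToRow(object businessObject)
    {
        var row = new Dictionary<string, object>();
        foreach (PropertyInfo prop in businessObject.GetType()
                     .GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (prop.GetIndexParameters().Length > 0) continue; // skip indexers
            row[prop.Name] = prop.GetValue(businessObject, null);
        }
        return row;
    }
}

// GridMapper.ToRow(someBook) yields e.g. { "Title" -> "Dune", "Year" -> 1965 }
// without any Book-specific grid code.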
However, over the years, code was added to the business classes that also makes use of reflection to get things done in the business classes. E.g. if the Author of a Book is changed, observers are called to tell the Author itself that it should add this book to its collection of books written by him (Author.Books). In these observers, not only are the instances passed, but also information that is directly derived from the reflection (the FieldInfo is added to the observer call so that the caller knows that the field "Author" of the book was changed).
I can clearly see advantages in using reflection in these generic modules (like the data grid or database interface), but it seems to me that using reflection in the business classes is a bad idea. After all, shouldn't the application work without relying on reflection as much as possible? Or is the use of reflection the 'normal way of working' in the 21st century?
Is it good practice to use reflection in your business logic?
EDIT: Some clarification on the remark of Kirk:
Imagine that Author implements an observer on Book.
Book calls all its observers whenever some field of Book changes (like Title, Year, #Pages, Author, ...). The 'FieldInfo' of the changed field is passed in the observer.
The Author-observer then uses this FieldInfo to decide whether it is interested in this change. In this case, if FieldInfo is for the field Author of Book, the Author-Observer will update its own vector of Books.
The main danger with Reflection is that the flexibility can escalate into disorganized, unmaintainable code, particularly if more junior devs are the ones making changes; they may not fully understand the Reflection code, or may be so enamored of it that they use it to solve every problem, even when simpler tools would suffice.
My observation has been that over-generalization leads to over-complication. It gets worse when the actual boundary cases turn out to not be accommodated by the generalized design, requiring hacks to fit in the new features on schedule, transmuting flexibility into complexity.
I avoid using reflection. Yes, it makes your program more flexible. But this flexibility comes at a high price: There is no compile-time checking of field names or types or whatever information you're collecting through reflection.
Like many things, it depends on what you're doing. If the nature of your logic is that you NEVER compare the field names (or whatever) found to a constant value, then using reflection is probably a good thing. But if you use reflection to find field names, and then loop through them searching for the fields named "Author" and "Title", you've just created a more-complex simulation of an object with two named fields. And what if you search for "Author" when the field is actually called "AuthorName", or you intend to search for "Author" and accidentally type "Auhtor"? Now you have errors that won't show up until runtime instead of being flagged at compile time.
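A short illustration of the difference (the Book class and its fields are made up):

using System;
using System.Reflection;

public class Book
{
    public string AuthorName; // note: not "Author"
    public string Title;
}

class Demo
{
    static void Main()
    {
        // Compiles fine; the typo only surfaces at runtime as a null result.
        FieldInfo field = typeof(Book).GetField("Auhtor");
        Console.WriteLine(field == null); // True

        // By contrast, book.Auhtor would never have compiled in the first place.
    }
}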
With hard-coded field names, your IDE can tell you every place that a certain field is used. With reflection ... not so easy to tell. Maybe you can do a text search on the name, but if field names are passed around as variables, it can get very difficult.
I'm working on a system now where the original authors loved reflection and similar techniques. There are all sorts of places where they need to create an instance of a class and instead of just saying "new" and the class, they create a token that they look up in a table to get the class name. What does this gain? Yes, we could change the table to map that token to a different name. And this gains us ... what? When was the last time that you said, "Oh, every place that my program creates an instance of Customer, I want to change to create an instance of NewKindOfCustomer." If you have changes to a class, you change the class, not create a new class but keep the old one around for nostalgia.
To take a similar issue, I make a regular practice of building data entry screens on the fly by asking the database for a list of field names, types, and sizes, and then laying it out from there. This gives me the advantage of using the same program for all the simpler data entry screens -- just pass in the table name as a parameter -- and if a field is added or deleted, zero code change is required. But this only works as long as I don't care what the fields are. Once I start having validations or side effects specific to this screen, the system is more trouble than it's worth, and I'm better off to fall back to more explicit coding.
Based on your edit, it sounds like you are using reflection purely as a mechanism for identifying fields. This is as opposed to dynamic behavior such as looking up the fields, which should be avoided when possible (since such lookups usually use strings which ruin static type safety). Using FieldInfo to provide an identifier for a field is fairly harmless, though it does expose some internals (the info class) in a way that is not entirely ideal.
I tend not to use reflection where I can help it. By using interfaces and coding against them, I can do a lot of things that some would use reflection for.
But I'm a big fan of "if it works, it works."
Also, by using reflection you probably have something that can adapt fairly easily.
I.e. the only objection most would have is fairly religious... and if your performance is fine and the code is maintainable and clear... who cares?
Edit: based on your edit, I would indeed use interfaces to achieve what you want (see the sketch below). Unless I misunderstand you.
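A minimal sketch of that interface-based alternative, using the Book/Author example from the question (all names here are illustrative):

using System.Collections.Generic;

// Book raises a typed notification instead of broadcasting FieldInfo, so
// observers get compile-time safety for the "Author changed" case.
public interface IBookObserver
{
    void AuthorChanged(Book book, Author oldAuthor, Author newAuthor);
}

public class Author : IBookObserver
{
    public readonly List<Book> Books = new List<Book>();

    public void AuthorChanged(Book book, Author oldAuthor, Author newAuthor)
    {
        if (oldAuthor == this) Books.Remove(book);
        if (newAuthor == this) Books.Add(book); // keep Author.Books in sync
    }
}

public class Book
{
    private readonly List<IBookObserver> _observers = new List<IBookObserver>();
    private Author _author;

    public void Subscribe(IBookObserver observer) { _observers.Add(observer); }

    public Author Author
    {
        get { return _author; }
        set
        {
            Author old = _author;
            _author = value;
            foreach (IBookObserver o in _observers)
                o.AuthorChanged(this, old, value);
        }
    }
}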
I think it is a good idea to stay away from Reflection when possible, but don't be afraid to resort to it when it provides a better or more flexible solution to your problem. The performance hit for anything but tight-loop operations is likely to be minimal in the overall scheme of an application or Web Form request.
Just a good article to share about reflection -
http://www.simple-talk.com/dotnet/.net-framework/a-defense-of-reflection-in-.net/
I tend to use interfaces in my business layer and leave the reflection to my presentation layer. This is not an absolute but rather a guideline.