I have an application that uses SQLite, which is extremely lightweight and quick. I have some preferences that don't necessarily need to be loaded on startup, but might need to be used at various times depending on where the user goes. That being said, I can't decide where to store this information.
Q1: Should I just go ahead and store it in the database? Should I store it in a config file?
Q2: Should I load and store the preferences and other data at startup even if they're not necessarily being used right away? Or should I just query the database when I need them?
Example: My application can store the company information for the company that is using the software. Company name, company phone, etc. The only time this information is used is when the software auto-prints a letter, or the user goes to edit their company information in the program.
EDIT: I've realized that this comes down to application settings vs user settings. My program does not have multiple users per copy of the software. That being said, I would suppose these would be application settings.
What you may want to do is write a class that encapsulates the settings and then reads them into a Hashtable.
You could have a basic GetSetting method that looks up a setting by name. If the setting is in the Hashtable, return the value; otherwise, go to the DB to find the setting and then store it in the Hashtable. You can then write separate properties for each setting you want, each calling the GetSetting/SaveSetting methods.
This allows you to store the settings in the DB easily, and caches the reads to avoid constantly reading the DB.
public static class Settings
{
    private static readonly object SyncRoot = new object();
    private static readonly System.Collections.Hashtable _cache = new System.Collections.Hashtable();

    public static T GetSetting<T>(string xPath, T defaultValue)
    {
        lock (SyncRoot)
        {
            if (!_cache.ContainsKey(xPath))
            {
                T val = GetSettingFromDB<T>(xPath, defaultValue);
                _cache[xPath] = val;
                return val;
            }
            return (T)_cache[xPath];
        }
    }

    public static T GetSettingFromDB<T>(string xPath, T defaultValue)
    {
        // Read the setting from the DB; fall back to defaultValue if it is missing
        return defaultValue;
    }

    public static void SaveSetting<T>(string xPath, T value)
    {
        lock (SyncRoot)
        {
            // update the cache so subsequent reads see the new value
            _cache[xPath] = value;
        }
        SaveSettingToDB<T>(xPath, value);
    }

    public static void SaveSettingToDB<T>(string xPath, T value)
    {
        // Write the setting to the DB
    }
}
Then just create a class (for example, Setting) with a bunch of properties like this:
public static bool BooleanFeature
{
    get { return Settings.GetSetting<bool>("BooleanFeature", true); }
    set { Settings.SaveSetting<bool>("BooleanFeature", value); }
}
Now you can do this in your code:
if (Setting.BooleanFeature)
{
    // Run certain code
}
else
{
    // Run other code
}
How many settings are you looking to save? Using the built-in settings feature is pretty painless.
http://msdn.microsoft.com/en-us/library/aa730869.aspx
Storing configuration data in a file is good for light-weight settings that rarely change. Usually you'd do this for settings that are different between development and production and are used to get your application up and running.
After that, everything else is best stored in a database. This gives you several options: good user interfaces to modify the settings, loading them only when needed, preserving them during upgrades to your system, and having them available if you're using multiple front-ends (versus saving the configuration in a file and having to ensure all front-ends have the same up-to-date copy).
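For the SQLite case in the question, a minimal sketch of what such a database-backed settings store might look like (this assumes the System.Data.SQLite provider; the table and column names are just illustrative):

using System.Data.SQLite;

public static class SettingsStore
{
    private const string ConnectionString = "Data Source=app.db";

    // Create the key/value table once, e.g. at startup
    public static void EnsureTable()
    {
        using (var conn = new SQLiteConnection(ConnectionString))
        {
            conn.Open();
            using (var cmd = new SQLiteCommand(
                "CREATE TABLE IF NOT EXISTS Settings (Name TEXT PRIMARY KEY, Value TEXT)", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }

    // Read a single setting, falling back to defaultValue if it is not stored yet
    public static string Get(string name, string defaultValue)
    {
        using (var conn = new SQLiteConnection(ConnectionString))
        {
            conn.Open();
            using (var cmd = new SQLiteCommand("SELECT Value FROM Settings WHERE Name = @name", conn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                object result = cmd.ExecuteScalar();
                return result == null ? defaultValue : (string)result;
            }
        }
    }
}

A GetSettingFromDB/SaveSettingToDB pair like the one in the first answer could sit directly on top of this kind of table.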
In addition to JTA's answer, I would like to add that I've used three methods, and they all have their ups and downs.
1. Storing them with the built-in settings feature ties them to the running user, so if multiple users use your app, each user gets independent settings. If this is what you want, pick this one.
2. Storing them in the database is useful if you do not want them bound to a user but rather to the database. However, you cannot change these settings from outside the app.
3. I've used a config class that I serialize to XML when I need to edit it with an XML editor. This is very useful, for instance, if you are running a service. (A rough sketch of this follows below.)
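For the third option, a rough sketch of what such an XML-serialized config class might look like (the class, property names, and file handling here are placeholders of mine, not the original answer's code):

using System.IO;
using System.Xml.Serialization;

public class AppConfig
{
    // Settings exposed as plain public properties so XmlSerializer can read and write them
    public string CompanyName { get; set; }
    public string CompanyPhone { get; set; }

    public static AppConfig Load(string path)
    {
        var serializer = new XmlSerializer(typeof(AppConfig));
        using (var stream = File.OpenRead(path))
        {
            return (AppConfig)serializer.Deserialize(stream);
        }
    }

    public void Save(string path)
    {
        var serializer = new XmlSerializer(typeof(AppConfig));
        using (var stream = File.Create(path))
        {
            serializer.Serialize(stream, this);
        }
    }
}

The resulting XML file can be edited by hand while the application or service is stopped, which is what makes this approach convenient for services.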
I'm currently working on a site (ASP.NET MVC) that supports multiple languages, where all web content is stored in the database; for example, I have an English version, a Spanish version, etc. But I'm worried about performance when it goes to production, because every time a user hits my page there are at least 20 database calls to get the content.
Is there anything I can do to make this more efficient?
What I can't do is:
1. Merge all database calls into one call on every page load (this takes too much effort).
2. Make a separate HTML page for every language (I need the end user to be able to add a new language without me changing the code, hence the database design).
I was thinking about caching everything in the user's browser and comparing it every time they hit my page, only calling the database when it doesn't match, but I'm not sure how to approach this.
Any help will be appreciated, and sorry for my bad English.
I would suggest going with a static dictionary in this case, as @Vsevolod-Goloviznin suggested in the comments.
Let me elaborate on my approach.
You probably have a localized resource in your database identified with some named key and language key as well:
var homePageTitle = Database.GetResource("homeTitle", "es");
You should build up a static dictionary that is used as a cache and holds all the resources from the database in memory, for easy and convenient access:
public static class MultiLanguageManager
{
    private static Dictionary<string, Dictionary<string, string>> ContentCache;
    ...
    public static string GetResource(string key, string language)
    {
        return ContentCache[language][key];
    }
}
Then in your front-end you will have something like this:
...
<title>@MultiLanguageManager.GetResource("aboutTitle", "en")</title>
...
Whenever a user changes the localized content in the database, you should rebuild the ContentCache dictionary manually:
// update localized content
...
MultiLanguageManager.RebuildContentCache();
...
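A rough sketch of what RebuildContentCache could look like, inside the MultiLanguageManager class shown above (Database.GetAllResources() is an assumed data-access call returning one row per language/key/value, not an API from the question):

public static void RebuildContentCache()
{
    var cache = new Dictionary<string, Dictionary<string, string>>();

    // One round trip that loads every localized resource at once
    foreach (var row in Database.GetAllResources())
    {
        if (!cache.ContainsKey(row.Language))
            cache[row.Language] = new Dictionary<string, string>();
        cache[row.Language][row.Key] = row.Value;
    }

    // Swap the reference in one step so readers never see a half-built cache
    ContentCache = cache;
}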
By using this approach you can reduce the number of database calls to a minimum (retrieve only logic units, not static resources) and control the consistency of your localization data.
One possible drawback might be the size of the static dictionary you build, but with today's abundance of memory this should not be a problem in 99% of cases.
I have made a simple localization of messages. All messages are stored in the static class Lng
public static partial class Lng
{
    public static readonly string AppName = "My application";

    public static class Category1
    {
        public static readonly string ConfirmDelete = "Are you sure want to delete?";
    }
}
In code, usage is as simple as referencing the fields:
MessageBox.Show(Lng.Category1.ConfirmDelete, ...
Then there is a manager which does the following:
- language selection
- loading the corresponding translation
- updating fields via reflection
- exporting the currently selected language on application exit for updating (if the default language is selected, this creates the first translation file for any other language)
It doesn't matter what the language files look like, but here is the reflection part:
TranslateLng("Lng.", typeof(Lng));
...
private static void TranslateLng(string parent, Type type)
{
    foreach (Type nested in type.GetNestedTypes())
    {
        string child = string.Format("{0}{1}.", parent, nested.Name);
        TranslateLng(child, nested);

        foreach (var field in nested.GetFields())
        {
            string key = child + field.Name;
            DefaultAdd(key, (string)field.GetValue(null)); // store value in the default language dictionary (if not added yet)
            field.SetValue(null, GetValue(key)); // get value for the currently selected language
        }
    }
}
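For context, DefaultAdd and GetValue are not shown in the snippet above; a rough guess at their shape (mine, not the author's actual code) would be two dictionaries keyed by the generated key:

private static readonly Dictionary<string, string> _defaultLanguage = new Dictionary<string, string>();
private static Dictionary<string, string> _currentLanguage = new Dictionary<string, string>();

private static void DefaultAdd(string key, string defaultText)
{
    // Remember the default-language text the first time this key is seen
    if (!_defaultLanguage.ContainsKey(key))
        _defaultLanguage.Add(key, defaultText);
}

private static string GetValue(string key)
{
    // Use the translated text if the loaded language file has it, otherwise keep the default
    string translated;
    return _currentLanguage.TryGetValue(key, out translated) ? translated : _defaultLanguage[key];
}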
This system has one problem: all messages are defined in one class, which requires manual management (deleting and updating messages when updating the code that uses them).
And I was thinking of changing the manager to register strings dynamically and simplify usage to something like
MessageBox.Show(Lng.Text("Are you sure want to delete?"), ...
So that the text is defined right where it is used, duplicated text can be handled by the manager, and so on.
There are, however, 2 problems:
1. I will need a complete list of all messages at the end of an application run to export the complete list of messages (for the currently selected language). What if some Lng.Text() calls are never reached during that run? Is there a way to register them as they are used in code (at compile time?), so that all calls are registered somehow, even if a piece of code is never executed?
2. How to generate the key. I could use CallerMemberName, but real keys are more useful, as they convey the exact purpose, for example Lng.Configuration.Appearance.CaptionText. I could call Lng.Text(key, message), but then I have to manage the keys and ensure their uniqueness, which doesn't appeal to me.
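To illustrate the second point, a minimal sketch of what a caller-info-based Lng.Text could look like (this only illustrates the key-generation idea; it does not solve the "never reached in this run" problem from the first point):

using System.Collections.Generic;
using System.IO;
using System.Runtime.CompilerServices;

public static partial class Lng
{
    private static readonly Dictionary<string, string> _registered = new Dictionary<string, string>();

    public static string Text(string message,
        [CallerFilePath] string file = "",
        [CallerMemberName] string member = "")
    {
        // Build a key from where the text is used, e.g. "MainForm.DeleteButton_Click";
        // note that several Text() calls in the same member would collide on this key
        string key = Path.GetFileNameWithoutExtension(file) + "." + member;

        if (!_registered.ContainsKey(key))
            _registered[key] = message;

        // A real implementation would look up the translation for the current language here
        return message;
    }
}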
I recently worked on a project with internationalization, and we used Resources in conjunction with the Sisulizer program with great success. Having the resources solves your key problem, as you manually enter the key when you extract the resources. You also get great support from ReSharper, which makes the whole process a breeze.
Sisulizer is then used to extract resources as well as strings hard-coded in our WinForms and WPF classes. It can export a CSV that you can give to your translators, and it also supports pseudo-translation, which makes testing such apps very easy as well.
I have a Windows Forms application (VS 2008, C#) that uses app.config.
At run time, from a menu option in my application, I want to edit values in app.config, save them, and restart the application.
Any sample source code, or any good patterns and practices?
edit:
In the MSDN forums, Jean Paul VA suggests:
Create a test Windows Forms application and add an app.config to it.
Add a reference to System.Configuration.
Add a key named "font" in appSettings with the value "Verdana".
Place a button on the form and, in its click handler, add the modification code:
System.Configuration.Configuration configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
configuration.AppSettings.Settings.Remove("font");
configuration.AppSettings.Settings.Add("font", "Calibri");
configuration.Save(ConfigurationSaveMode.Modified);
ConfigurationManager.RefreshSection("appSettings");
What do you think about it?
I don't think you can actually write to the configuration file at runtime; it may well be read-only. There may be a way around this by rewriting the file (with the required alterations) and essentially replacing the existing one, then restarting the application to load the new values, but I highly doubt this is desirable and personally would not try to instrument this malarkey.
You may also consider storing application settings in the settings file, which is easily manipulated in this way.
To use settings, first let's assume you have a Settings.settings file (if not, create one: Add Item->Settings File) and a setting configured named MyUnicornsName. To make a change and persist it, you can simply do the following:
Settings.Default.MyUnicornsName = "Lucifers Spawn";
Settings.Default.Save();
Similarly, to read a setting:
ATextDisplayControl.Text = Settings.Default.MyUnicornsName
In case you don't know, Visual Studio opens the settings editor when you double-click the settings file in the IDE. Using this interface you can add and edit your initial settings and their values. String is not the only supported type: all primitives can be used and, as far as I know, any serializable type too.
Is there any reason you can't use the usual auto-gen'd Properties.Settings to store the changing data in a settings file instead? One great thing is that you know what you're changing so you don't even have to restart the application!
Using Settings in C#
Runtime access of settings is as easy as:
this.BackColor = Properties.Settings.Default.myColor;
(There is no good pattern for modifying app.config itself, simply because it's designed to be read-only in that context, holding expert-user settings.)
Use the Project->Properties->Settings for these kinds of things.
Well, actually the properties in app.config are read-only, so you can't do it directly.
But there's a trick.
In the Settings.cs file, create a public method so that it is available through Properties.Settings,
and write the following code:
public void ChangeProperty(string propertyname, string value)
{
    this[propertyname] = value;
}
Remember to pass the exact property name string to the method, or better, create a write-only property for the setting.
Update
Here is the code for the setting as a property. I am taking a connection string as an example, but it can be anything. Remember that the stored setting is of type Object, so you can create a property specific to that type:
public string MyCustomConnectionstring
{
    set
    {
        // replace the key with your connection string or whatever setting you want to change
        this["myConnectionString"] = value;
    }
}
Now you can easily use this Property to change the ConnectionString at run time...
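A hedged usage sketch, assuming the property above was added to the generated Settings class (the connection string value is just a placeholder):

// Illustrative only: assign through the custom write-only property, then persist
Properties.Settings.Default.MyCustomConnectionstring = "Data Source=myServer;Initial Catalog=myDb;";
Properties.Settings.Default.Save();

Keep in mind that Save() only persists user-scoped settings, so this works for settings stored with user scope.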
I want to be able to maintain certain objects between application restarts.
To do that, I want to write specific cached items out to disk in Global.asax Application_End() function and re-load them back on Application_Start().
I currently have a cache helper class, which uses the following method to return the cached value:
return HttpContext.Current.Cache[key];
Problem: during Application_End(), HttpContext.Current is null since there is no web request (it's an automated cleanup procedure) - therefore, I cannot access .Cache[] to retrieve any of the items to save to disk.
Question: how can I access the cache items during Application_End()?
If you want to get access to a cached object before it is removed from the cache, you need to use something like this to add the object to the cache:
Import the System.Web.Caching namespace into the code where you add objects to the cache.
//Add callback method to delegate
var onRemove = new CacheItemRemovedCallback(RemovedCallback);
//Insert object to cache
HttpContext.Current.Cache.Insert("YourKey", YourValue, null, DateTime.Now.AddHours(12), Cache.NoSlidingExpiration, CacheItemPriority.NotRemovable, onRemove);
And when this object is about to be removed, the following method will be called:
private void RemovedCallback(string key, object value, CacheItemRemovedReason reason)
{
    // Use your logic here
    // After this method returns, the object is removed from the cache
}
I strongly urge you to rethink your approach. You may want to describe the specifics of what you are trying to do, so we might help you with that.
But if you are totally set on it, then you can simply save values to disk when you actually set them, i.e. your helper class would look something like this:
public static class CacheHelper
{
    public static void SetCache(string key, object value)
    {
        HttpContext.Current.Cache[key] = value;

        if (key == "some special key")
            WriteValueOnDisk(value);
    }
}
You can access the cache through HttpRuntime.Cache when you don't have an HttpContext available. However, at Application_End, I believe the cache has already been flushed.
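That is, the lookup from the question's helper class could be written without a request context, for example:

// Same cache instance, but no dependency on HttpContext.Current
return HttpRuntime.Cache[key];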
The solution Dima Shmidt outlines would be the best approach to store your cached values: add your items to the cache with a CacheItemRemovedCallback, and store the values to disk there.
As an alternative solution, you could store the data in the Application object (Application[key]), or simply create a static class and use it to keep your data within the app; in that case the data would still be available in Application_End.
I have multiple business objects in my application (C#, WinForms, WinXP). When the user executes some action on the UI, each of these objects is modified and updated by different parts of the application. After each modification, I need to first check what has changed and then log these changes made to the object. The purpose of logging this is to create a comprehensive record of the activity going on in the application.
Many of these objects contain lists of other objects, and this nesting can be several levels deep. The 2 main requirements for any solution are:
capture changes as accurately as possible
keep the performance cost to a minimum.
Example of a business object:
public class MainClass1
{
    public MainClass1()
    {
        detailCollection1 = new ClassDetailCollection1();
        detailCollection2 = new ClassDetailCollection2();
    }

    private Int64 id;
    public Int64 ID
    {
        get { return id; }
        set { id = value; }
    }

    private DateTime timeStamp;
    public DateTime TimeStamp
    {
        get { return timeStamp; }
        set { timeStamp = value; }
    }

    private string category = string.Empty;
    public string Category
    {
        get { return category; }
        set { category = value; }
    }

    private string action = string.Empty;
    public string Action
    {
        get { return action; }
        set { action = value; }
    }

    private ClassDetailCollection1 detailCollection1;
    public ClassDetailCollection1 DetailCollection1
    {
        get { return detailCollection1; }
    }

    private ClassDetailCollection2 detailCollection2;
    public ClassDetailCollection2 DetailCollection2
    {
        get { return detailCollection2; }
    }

    //more collections here
}

public class ClassDetailCollection1
{
    private List<DetailType1> detailType1Collection;
    public List<DetailType1> DetailType1Collection
    {
        get { return detailType1Collection; }
    }

    private List<DetailType2> detailType2Collection;
    public List<DetailType2> DetailType2Collection
    {
        get { return detailType2Collection; }
    }
}

public class ClassDetailCollection2
{
    private List<DetailType3> detailType3Collection;
    public List<DetailType3> DetailType3Collection
    {
        get { return detailType3Collection; }
    }

    private List<DetailType4> detailType4Collection;
    public List<DetailType4> DetailType4Collection
    {
        get { return detailType4Collection; }
    }
}
//more other Types like MainClass1 above...
I can assume that I will have access to the old values and new values of the object.
In that case I can think of 2 ways to try to do this without being told what has explicitly changed.
1. Use reflection to iterate through all properties of the object and compare them with the corresponding properties of the older object, logging any properties that have changed. This approach seems more flexible, in that I would not have to worry about new properties being added to any of the objects, but it also seems performance heavy. (A rough sketch of this approach follows below.)
2. Log changes in the setter of every property of every object. Other than the fact that this would require me to change a lot of code, it seems more brute force. It would be maintenance heavy and inflexible if someone updates any of the object types, but it may also be performance light, since I would not need to check what changed and would log exactly which properties were changed.
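For clarity, here is a rough sketch of what the reflection-based comparison in option 1 might look like (flat properties only; it does not handle the nested collections, which is where most of the complexity would be):

using System;
using System.Collections.Generic;
using System.Reflection;

public static class ChangeLogger
{
    // Compare the readable public properties of two objects of the same type and report differences
    public static List<string> GetChanges(object oldObj, object newObj)
    {
        var changes = new List<string>();

        foreach (PropertyInfo prop in oldObj.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (!prop.CanRead || prop.GetIndexParameters().Length > 0)
                continue;

            object oldValue = prop.GetValue(oldObj, null);
            object newValue = prop.GetValue(newObj, null);

            if (!object.Equals(oldValue, newValue))
                changes.Add(string.Format("{0}: '{1}' -> '{2}'", prop.Name, oldValue, newValue));
        }

        return changes;
    }
}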
Suggestions for any better approaches and/or improvements to the above approaches are welcome.
I developed a system like this a few years ago. The idea was to track changes to an object and store those changes in a database, like version control for objects.
The best approach is called Aspect-Oriented Programming, or AOP. You inject "advice" into the setters and getters (actually all method execution, getters and setters are just special methods) allowing you to "intercept" actions taken on the objects. Look into Spring.NET or PostSharp for .NET AOP solutions.
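Purely to illustrate the interception idea (this is not Spring.NET's or PostSharp's API; it uses Castle DynamicProxy, a different library, simply because its interface is compact):

using System;
using Castle.DynamicProxy;

public class AuditInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        invocation.Proceed();

        // Property setters show up as methods named set_Xxx
        if (invocation.Method.Name.StartsWith("set_"))
        {
            string property = invocation.Method.Name.Substring(4);
            Console.WriteLine("Changed {0} to {1}", property, invocation.Arguments[0]);
        }
    }
}

// Usage sketch: the proxied class needs virtual properties for the interceptor to see the setters
// var generator = new ProxyGenerator();
// MainClass1 tracked = generator.CreateClassProxy<MainClass1>(new AuditInterceptor());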
I may not be able to give you a good answer, but I will tell you that in the overwhelming majority of cases, option 1 is NOT a good answer. We're dealing with a very similar reflective "graph-walker" in our project; it seemed like a good idea at the time, but it is a nightmare, for the following reasons:
You know the object changed, but without a high level of knowledge in the reflective "change handling" class about the workings of objects above it, you may not know why. If that information is important to you, you have to give it to the change handler, most likely through a field or property on the domain object, requiring changes to your domain and imparting knowledge to the domain about the business logic.
Changes can affect multiple objects, but logs for changes at every level may not be desired; for instance, the client may not want to see a change to a Borrower's outstanding loan count in the log when a new Loan is approved, but they do want to see changes due to consolidations. Managing rules about logging in these cases requires change handling classes to know about more of the structure than just one object, which can very quickly make a change-handling object VERY big, and VERY brittle.
The requirements of your graph walker are probably more than you know; if your object graph includes backreferences or cross-references, the walker must know where it's been, and the simplest comprehensive way to do that is to keep a list of objects it's processed and check the current object against those it has handled before processing it (making anti-backtracking an N^2 operation). It must also not consider changes to objects in the graph that will not be persisted when you persist the top level (references that are not "cascaded"). NHibernate gives you the ability to plug into its own graph-walker and abide by the cascade rules in your mappings, which helps, but if you're using a roll-your-own DAL, or you DO want to log changes to objects that NHibernate won't cascade to, you're going to have to set this all up yourself.
A piece of logic in a handler may make a change that requires an update to a "parent" object (updating a calculated field, perhaps). Now, you have to go back and re-evaluate the changed object if the change is of interest to another piece of the change handling logic.
If you have logic that requires creation and persistence of a new object, you must do one of two things; attach the new object to the graph somewhere (where it may or may not be picked up by the walker), or persist the new object in its own transaction (if you're using an ORM, the object CANNOT reference an object from the other graph with a "cascade" setting that will cause it to be saved first).
Finally, being highly reflective in both walking the graph and finding the "handlers" for a particular object, passing a complex tree into such a framework is a guaranteed speed bump in your application.
I think you'll save yourself a lot of headaches if you skip the "change handler" reflective pattern, and include the creation of audit logs or any pre-persistence logic in the "unit of work" you're performing up at the business layer, through a set of "audit loggers". This allows the logic making the changes to employ an algorithm selection pattern such as Command or Strategy to tell your audit framework exactly what kind of change is happening, so it can pick the logger that will produce the required logging messages.
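To make that shape concrete, here is a rough sketch of an explicit audit logger driven by the business-layer unit of work (the interface and class names are mine, not from the answer):

using System;

// The operation that makes the change tells the logger exactly what happened,
// instead of a reflective walker trying to infer it afterwards
public interface IAuditLogger
{
    void Log(string entity, string action, string details);
}

public class ConsoleAuditLogger : IAuditLogger
{
    public void Log(string entity, string action, string details)
    {
        // A real implementation would write to the database or a log file
        Console.WriteLine("{0:u} {1} {2}: {3}", DateTime.UtcNow, entity, action, details);
    }
}

// Example command: approving a loan logs a domain-level message, not a property-by-property diff
public class ApproveLoanCommand
{
    private readonly IAuditLogger _auditLogger;

    public ApproveLoanCommand(IAuditLogger auditLogger)
    {
        _auditLogger = auditLogger;
    }

    public void Execute(long loanId)
    {
        // ... perform the actual approval and persistence here ...
        _auditLogger.Log("Loan", "Approved", "Loan " + loanId + " was approved");
    }
}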
See how ADempiere implemented its change log here: http://wiki.adempiere.net/Change_Log