Should I store localization content in the application state - c#

I am developing my first multilingual C# site, and everything is going OK except for one crucial aspect. I'm not 100% sure what the best option is for storing strings (typically single words) that will be translated by code in my code-behind pages.
On the front end of the site I am going to use ASP.NET resource files for the wording on the pages; that part is fine. However, this site makes XML calls, and the XML responses are only ever in English. I have been given an Excel sheet with all the words the XML can return, broken out into the different languages, but I'm not sure how best to store/access this information. There are roughly 80 words x 7 languages.
I am thinking about creating a dictionary object for each language, built by my Global.asax file at application start-up and kept in memory. The plus side of doing this is that each dictionary only has to be created once (until IIS restarts) and can be accessed by any user without being rebuilt; the downside is that I have 7 dictionary objects constantly held in memory. The server is Windows 2008 64-bit with 4 GB of RAM, so should I even be concerned about the memory this method uses?
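Roughly, this is what I have in mind (the XmlTranslations field, the LoadLanguage helper, and the culture list are just placeholder names for illustration):

    using System;
    using System.Collections.Generic;

    public class Global : System.Web.HttpApplication
    {
        // One inner dictionary per language, keyed by culture code; each maps
        // the English word from the XML response to its translation.
        public static Dictionary<string, Dictionary<string, string>> XmlTranslations;

        protected void Application_Start(object sender, EventArgs e)
        {
            XmlTranslations = new Dictionary<string, Dictionary<string, string>>();
            foreach (var culture in new[] { "en", "fr", "de", "es", "it", "nl", "pt" })
            {
                // LoadLanguage would read the 80-word table exported from Excel.
                XmlTranslations[culture] = LoadLanguage(culture);
            }
        }

        private Dictionary<string, string> LoadLanguage(string culture)
        {
            // Stand-in: load from a resource file, XML file, or database here.
            return new Dictionary<string, string>();
        }
    }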
What do you guys think would be the best way to store/retrieve different language words that would be used by all users?
Thanks for your input.
Rich

From what you say, you are looking at 560 words (80 x 7) which need to differ based on locale. This is a drop in the ocean. The resource file method you have contemplated is fit for purpose, and I would recommend using it. Resource files integrate with controls, so you will get the most out of them.
If it did trouble you, you could put them in a sliding cache (with a sliding expiration of 20 minutes, for example), but I do not see anything wrong with your chosen solution.
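Something like this, for example, using the built-in ASP.NET cache (the key prefix and BuildTranslations helper are hypothetical):

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public static class TranslationCache
    {
        public static Dictionary<string, string> Get(string culture)
        {
            string key = "translations:" + culture;
            var table = HttpRuntime.Cache[key] as Dictionary<string, string>;
            if (table == null)
            {
                table = BuildTranslations(culture);
                // Evicted after 20 minutes of inactivity, rebuilt on next use.
                HttpRuntime.Cache.Insert(key, table, null,
                    Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(20));
            }
            return table;
        }

        private static Dictionary<string, string> BuildTranslations(string culture)
        {
            // Stand-in: load the word list for this culture from its source.
            return new Dictionary<string, string>();
        }
    }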
IMO
Cheers,
Andrew
P.s. have a read through this to see how you can find and bind values in different resource files to controls and literals, and use them programmatically.
http://msdn.microsoft.com/en-us/magazine/cc163566.aspx

As long as you are aware of the impact of doing so, then yes, storing this data in memory is fine (provided you have enough memory to do so). Once you know what is appropriate for the current user, tossing it into memory is fine.
You might look at something like MemCached Win32 or Velocity, though, to offload the storage to another app server. Use this even on your local application for the time being; that way, when it is time to push this to another server or grow your app, you have a clear separation of concerns defined at your caching layer.
Keep in mind that the more languages you support, the more you are storing in memory, so keep an eye on the amount of data held on your lone app server; it could become overwhelming in time. Also, make sure that the keys you are using are specific to the language (see the sketch below); otherwise you might find you are serving a German menu to an English user.
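For example, one way to build language-specific keys (names are hypothetical):

    using System.Globalization;
    using System.Web;

    public static class LocalizedCacheKeys
    {
        // Include the culture in the key so a German menu is never
        // served from the cache to an English user.
        public static string For(string baseKey)
        {
            return baseKey + ":" + CultureInfo.CurrentUICulture.Name; // e.g. "menu:de-DE"
        }
    }

    // Usage: var menu = HttpRuntime.Cache[LocalizedCacheKeys.For("menu")];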

Related

Is there absolutely no protection for Windows applications on dotnet?

I am in the middle of developing an application in WinForms, and there just doesn't seem to be any protection from decompilation of the executables and other generated assemblies; there are tools that decompile them literally within seconds. There are obfuscators available; however, I am specifically looking for a free one.
I store some pretty sensitive strings within the application, and though I try my best to keep them encrypted or in the local SQLite database, there is always that one single point of failure that leaves the entire application vulnerable. I checked out a couple of obfuscators, both open-source and commercial offerings. The open-source one seems to be broken, and the commercial ones are priced quite steeply, which is unaffordable for an indie developer like me.
I am aware that an executable has to run in memory, and this in itself makes it vulnerable. And a determined attacker can eventually decompile an application. However, I want to make this process as expensive as possible. At the very least I would want to protect the strings within my application.
My question is, is there just no way to protect an application assembly from getting reverse engineered if I decide not to use any of the expensive options available?
There's lots of protection in Windows. But it's all there to protect your users from you, not the other way around.
The simple fact of decompilation is: "As long as a computer can still execute it, it can also still decompile it."
Execution is a process that translates binary into actions.
Decompilation is a process that translates binary into code.
If one is blocked, the other is blocked as well.
Obfuscation can make it harder to read the decompiled code. But that is about all it can do.
"I store some pretty sensitive strings within the application, and though I try my best to keep them encrypted or in the local SQLite database, there is always that one single point of failure that leaves the entire application vulnerable."
Every string is only as safe as the place you keep it. The same applies to encryption keys. There are two limited workarounds:
If it is about comparing user input to something in the backend, as you do with passwords, then password-style security can work (see the sketch after this list). Modern password security means that not even the administrator can figure out the password, yet you can still compare user input to it.
You could move the strings into a separate application. Instead of giving applications the SQL Server connection string, you give them access to a web service that you control; only the web service actually knows how to contact the database.
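For the first workaround, a minimal sketch of hash-then-compare handling (the parameter choices are illustrative, not a vetted security recommendation):

    using System;
    using System.Linq;
    using System.Security.Cryptography;

    public static class PasswordHasher
    {
        // Store only the salt and the derived hash; the original
        // password cannot be recovered from them, even by an admin.
        public static byte[] Hash(string password, byte[] salt)
        {
            using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000))
                return kdf.GetBytes(32);
        }

        public static bool Verify(string input, byte[] salt, byte[] storedHash)
        {
            // Note: SequenceEqual is not constant-time; a fixed-time
            // comparison would be preferable in production.
            return Hash(input, salt).SequenceEqual(storedHash);
        }
    }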
What you can do depends on your practical scenario. But in the end, if a string or other piece of information is to be usable, it is vulnerable in memory.

Best way to handle large amount of permanent data

I'm developing a PC app in Visual Studio where I'm showing the status of hundreds of sensors that are connected via WiFi. The thing is that I need to hold on to the sensor data even after I close the app, so I'm considering some form of permanent storage. These are the options I've considered:
1) My Sensor object is relatively compact with only a few properties. I could serialize all the objects before closing the app and load them every time the app starts anew.
2) I could throw all the properties (which are mostly strings and doubles) into a simple text file and create a custom protocol for storage and retrieval.
3) I could integrate a database with my app. Someone told me this is the best way to go about it, but I'm a bit hesitant seeing as I'm not familiar with DBs.
Which method would yield the best results in terms of resource usage and speed? Or is there some other, better way to go about this?
The first thing you need to do is understand your problem. For example, when the program is running, do you need to have everything in memory at the same time, or do you work with your sensors one at a time?
What is a "large amount of data"? For example, to me that will never be less than a million records (or a billion in some cases).
Once you know that, you shouldn't be scared of using something just because you are not familiar with it. Otherwise you are not looking for the best solution to your problem; you are just hacking around it in a way you feel comfortable with.
That being said, you have several ways of doing this. As you said, you can serialize the data (using JSON for storage, among other alternatives), but if we are talking about a "large amount of data that we want to persist", I would always call for the use of a database (the name says a lot). If you don't need to have everything in memory at the same time, then I believe this is your best option.
I personally don't like them (again, a personal choice), but one way to avoid learning much SQL while still working with your objects is to use an ORM like NHibernate (you will also need to learn how to use it, so you don't end up making things slower).
If you need to have everything loaded at the same time (most often that is not the case, so be sure of this), you need to know what you want to keep and serialize it. If you want that data to be readable by another tool or organized in a given way, consider a data format like XML or JSON (a sketch follows).
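For example, the serialize-and-reload route with a hypothetical Sensor shape, using System.Text.Json as one serializer option (Json.NET would work the same way):

    using System.Collections.Generic;
    using System.IO;
    using System.Text.Json;

    public class Sensor
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public double LastReading { get; set; }
    }

    public static class SensorStorage
    {
        private const string FilePath = "sensors.json";

        // Called once when the app closes.
        public static void Save(List<Sensor> sensors) =>
            File.WriteAllText(FilePath, JsonSerializer.Serialize(sensors));

        // Called once when the app starts.
        public static List<Sensor> Load() =>
            File.Exists(FilePath)
                ? JsonSerializer.Deserialize<List<Sensor>>(File.ReadAllText(FilePath))
                : new List<Sensor>();
    }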
You can also use a memory-mapped file. The file is persistent and keeps the data between program runs, so you simply keep your data structures in the mapped area and nothing more.
MSDN manual here:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366556%28v=vs.85%29.aspx
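.NET exposes the same mechanism through System.IO.MemoryMappedFiles, so you don't have to call the Win32 API directly. A minimal sketch with a hypothetical record layout:

    using System.IO;
    using System.IO.MemoryMappedFiles;

    struct SensorRecord { public int Id; public double Value; }

    class MmapDemo
    {
        static void Main()
        {
            // The backing file persists between runs of the program.
            using (var mmf = MemoryMappedFile.CreateFromFile(
                "sensors.dat", FileMode.OpenOrCreate, null, 1024 * 1024))
            using (var accessor = mmf.CreateViewAccessor())
            {
                var record = new SensorRecord { Id = 1, Value = 21.5 };
                accessor.Write(0, ref record);            // goes to the file
                accessor.Read(0, out SensorRecord back);  // readable next run
            }
        }
    }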
Since you need to load all the data once at the start of the program, the database option seems doubtful; a database is necessary when you need to load small pieces of data many times.
So the first two options seem preferable. I would advise hiding the specific solution behind an interface; then you can change it later (a sketch follows).
Standard .NET serialization of the sensor array is probably simpler, and it will be easier to extend.
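A minimal sketch of the interface idea (all names hypothetical), so the storage mechanism can be swapped without touching callers:

    using System.Collections.Generic;

    public class Sensor { public string Id; public double LastReading; }

    public interface ISensorStore
    {
        void Save(IList<Sensor> sensors);
        IList<Sensor> Load();
    }

    // Start simple; replace with a serializer- or database-backed
    // implementation later without changing the calling code.
    public class InMemorySensorStore : ISensorStore
    {
        private List<Sensor> _data = new List<Sensor>();
        public void Save(IList<Sensor> sensors) { _data = new List<Sensor>(sensors); }
        public IList<Sensor> Load() { return _data; }
    }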

A model defined in a view?

I'm working on a website which will be used all over the world and has to be highly available at any time, anywhere on the planet. That's why I am trying every possible trick to minimize the need to recompile/restart the website when minor maintenance occurs.
The ability in ASP.NET MVC to edit a view and have it automatically and dynamically recompiled by the framework, without service interruption, is really great and perfectly fits my needs. But its usefulness is severely limited if I cannot edit the underlying model in a similar way, and must instead recompile the whole application.
So my question: is it possible in any way (even an awful, hacky one) to define the view model class right inside the view itself, in a code block?
Otherwise, which avenues could I explore to achieve a 'hot-editable' website (I mean one whose parts can be recompiled while the site is still alive, with changes taken into account straight away)?
Thank you so much in advance! :-)
If you are that concerned about performance and up time, consider using a server farm to host your site. When you need to make updates, you can take each server down separately so that your site is always available.
However, most deployments only take a few seconds. Your application may need more or less time to spin up (EF view generation may take 10-20 seconds, for example), but as long as you update during off-peak hours you should be fine.
Also, I would NEVER EVER recommend changing code on a live server. You will break something eventually.
Eventually I managed to achieve the goal with another strategy.
All views share the same model, called DataSource, which is essentially a recordset opened before rendering and closed afterwards (reads, where needed, are performed by the Razor code inside the view).
The column list of the recordset may change live without making the site crash.
For forms and validation, metadata taken from the database about the underlying stored procedure drives code emission that dynamically creates a C# type, and that's it. Although a new type is generated each time the stored procedure is changed, the app pool recycling rate prevents too many obsolete types from hanging around in memory as zombies.
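A rough, hypothetical reconstruction of what such a view could look like (the DataSource API shown here is invented for illustration):

    @* Every view shares the same generic model; columns are read by
       name, so the column list can change without recompiling code. *@
    @model DataSource
    <ul>
        @while (Model.Read())
        {
            <li>@Model["Title"]: @Model["Price"]</li>
        }
    </ul>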

why store internationalization "words" in separate (xml) files?

I've been reading up a bit on how people do internationalization. It seems the common consensus is to save those strings in a separate file (usually XML) and load it when necessary.
I'm wondering: why not just store those strings in a database instead? Isn't that much better?
BTW, the nature of my app is a web app.
The most important thing is to store your string tables outside of your compilation units so that incorporating updated translations does not require a rebuild. This allows for new or updated translations to be incorporated at a later point without too much hassle.
Of course, those string tables could be stored anywhere. If you want to put them in a database, knock yourself out. As long as your application can reach them and your translation staff know how to deliver them into the right place, it doesn't make a difference.
Serious internationalization is always a big project with a lot of parts and players. As @zerkms alludes to, the translation task is very often an offline activity performed by individuals and teams around the world.
So it makes sense to have a clean work product that a translation team can produce (the translated XML file).
Once the file is translated, it is up to you how you handle the translations within your software. It is common to keep them in memory, since you often need to substitute variables into the placeholders (see the sketch below).
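A small illustration of that substitution step, with a hypothetical in-memory table:

    using System;
    using System.Collections.Generic;

    class PlaceholderDemo
    {
        static void Main()
        {
            // In practice this table is loaded from the translated files.
            var greetings = new Dictionary<string, string>
            {
                { "en", "Hello {0}, you have {1} new messages." },
                { "de", "Hallo {0}, Sie haben {1} neue Nachrichten." }
            };

            // Substitute runtime values into the translated template.
            Console.WriteLine(string.Format(greetings["de"], "Rich", 3));
        }
    }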
If you do store the strings in the database, you introduce the additional overhead of querying the database every time your locale switches.
This is the reason for resource bundles: you package them along with the source code, but you don't have to change code to add support for new languages.
You could also subclass the ResourceBundle class yourself and implement JDBC support so the locale-specific strings are stored in the database.
http://java.sun.com/developer/technicalArticles/Intl/ResourceBundles/

Hold global data for an ASP.net webpage

I am currently working on a large-scale website, that is very dynamic, and so needs to store a large volume of information in memory on a near-permanent basis (things like configuration settings for the checkout, or the tree used to implement the menu structure).
This information is not session-specific, it is consistent for every thread using the website.
What is the best way to hold this data globally within ASP.NET, so it can be accessed when needed instead of being reloaded on each use?
Any AppSettings in web.config are automatically cached (i.e., they aren't read from the XML every time you need to use them).
You could also manipulate the cache manually yourself (a sketch follows the links below).
Edit: Better links...
Add items to the cache
Retrieve items from the cache
Caching Application Data
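As a sketch of manipulating the cache manually (hypothetical names; an absolute expiration is chosen here for illustration):

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class MenuCache
    {
        public static object GetMenuTree()
        {
            object menu = HttpRuntime.Cache["menuTree"];
            if (menu == null)
            {
                menu = LoadMenuTree(); // the expensive build, e.g. from a database
                HttpRuntime.Cache.Insert("menuTree", menu, null,
                    DateTime.UtcNow.AddHours(1),   // refreshed once an hour
                    Cache.NoSlidingExpiration);
            }
            return menu;
        }

        private static object LoadMenuTree()
        {
            return new object(); // stand-in for the real menu structure
        }
    }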
It's not precisely clear whether your information is session-specific or not... if it is, then use the ASP.NET Session object. Given your description of the scale, you probably want to look at storing the state in SQL Server:
http://support.microsoft.com/kb/317604
That's the 101 approach. If you're looking for something a little beefier, then check out memcached (that's pronounced Mem-Cache-Dee):
http://www.danga.com/memcached/
That's the system that apps like Facebook and Twitter use.
Good luck!
Using the ASP.NET caching feature is a good option, I think. In addition to John's answer, you can use the Caching Application Block from Microsoft's Patterns & Practices team.
This is a good video exploring the different ways you can retain application state.
http://www.asp.net/learn/3.5-videos/video-11.aspx
It touches on the Application object, which is global to the whole application, for all users, and shows you how to create a hit counter (obviously, instead of storing an integer you could store objects; a minimal version is sketched below). If you need to make changes, you do need to use a lock for concurrency, and I'm not sure how it handles LARGE amounts of data because I've never had to keep that much there.
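A minimal version of that hit counter, in Global.asax.cs (Application state is shared by all users, hence the lock):

    // In Global.asax.cs
    protected void Application_Start(object sender, EventArgs e)
    {
        Application["HitCount"] = 0;
    }

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        Application.Lock();    // serialize writers across requests
        Application["HitCount"] = (int)Application["HitCount"] + 1;
        Application.UnLock();
    }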
I usually keep things like that in the Application object.
If the pages are dependent upon one another and post to one another, you could use the page's Request object. Probably not the answer you're looking for, but definitely one of the smallest in memory to use.
I have run into the same situation in the past and found an interface to be the most scalable solution. The Application cache may be the answer today, but will it scale to meet your needs?
If you need to scale up, you may find cookies or some kind of temporary database storage to do the trick. Simply add a new method to your interface, and have the interface choose the "mode" from web.config (see the sketch below).
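For example, something along these lines (the class names and the web.config key are hypothetical):

    using System.Configuration;
    using System.Web;

    public interface IGlobalStore
    {
        object Get(string key);
        void Set(string key, object value);
    }

    public class CacheStore : IGlobalStore
    {
        public object Get(string key) { return HttpRuntime.Cache[key]; }
        public void Set(string key, object value) { HttpRuntime.Cache[key] = value; }
    }

    public static class GlobalStoreFactory
    {
        public static IGlobalStore Create()
        {
            // e.g. <add key="storageMode" value="cache" /> in web.config;
            // add cookie- or database-backed implementations as new modes.
            string mode = ConfigurationManager.AppSettings["storageMode"];
            return new CacheStore();
        }
    }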
