A model defined in a view? - c#

I'm working on a website which will be used all over the world and has to be highly available at any time, anywhere on the planet. That's why I'm trying every possible trick to minimize the need to recompile/restart the website when minor maintenance must occur.
The ability in ASP.NET MVC to edit a view and have it automatically and dynamically recompiled by the framework without service interruption is really great and perfectly fits my needs. But its usefulness is strongly limited if I cannot edit the underlying model in a similar way and must instead recompile the whole application.
So my question: is it possible in any way (even an awful, hacky one) to define the view model class right inside the view itself, in a code block?
Otherwise, which avenues could I explore to achieve a 'hot-editable' website (I mean one whose parts can be recompiled while the site is still alive, with changes taken into account straight away)?
Thank you so much in advance ! :-)

If you are that concerned about performance and uptime, consider using a server farm to host your site. When you need to make updates, you can take each server down separately so that your site is always available.
However, most deployments only take a few seconds. Your application may need more or less time to spin up (EF view generation may take 10-20 seconds, for example), but as long as you update during off-peak hours you should be fine.
Also, I would NEVER EVER recommend changing code on a live server. You will break something eventually.

Eventually I managed to achieve the goal with another strategy.
All views share the same model, called DataSource, which is essentially a recordset opened before rendering and closed afterwards (if needed, reads are performed by the Razor code inside the view).
The column list of the recordset may change live without making the site crash.
For forms and validation, metadata retrieved from the database about the underlying stored procedure drives code emission that dynamically creates a C# type, and that's it. Although a new type is generated each time the stored procedure changes, the app pool recycling rate prevents too many obsolete types from lingering in memory.
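For what it's worth, a minimal sketch of the "emit a C# type from stored procedure metadata" step, using CodeDOM compilation. The type name and column list below are placeholders; in the real scenario they would come from the database metadata rather than being hard-coded, and the properties would be typed rather than string-only.

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

public static class DynamicModelFactory
{
    // Compiles and returns a throwaway model type with one string property per column.
    public static Type BuildModelType(string typeName, string[] columnNames)
    {
        var source = "public class " + typeName + " {";
        foreach (var column in columnNames)
            source += " public string " + column + " { get; set; }";
        source += " }";

        using (var provider = new CSharpCodeProvider())
        {
            var parameters = new CompilerParameters { GenerateInMemory = true };
            CompilerResults results = provider.CompileAssemblyFromSource(parameters, source);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException("Dynamic model compilation failed.");
            return results.CompiledAssembly.GetType(typeName);
        }
    }
}

An instance can then be created with Activator.CreateInstance(type) and populated from the form post before validation.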


How do you set OptionSet Values in Microsoft CRM 2011, based on the text or label?

I am implementing a web service that receives information and needs to map it onto MS Dynamics CRM.
So, when it comes to setting OptionSet values, since I am not the one who implemented the CRM, I have no idea what indices are set up. All I know are the labels, and naturally that is all the consumers of my service know too. E.g. I call an Opportunity Warm or Cold, not 10033004 or 10033005. But I still need to set this value on the Opportunity entity.
I have found this link - but I think it's really overkill and if that's the only way I can access the OptionSet, then that's just sad.
A couple of options here.
Use the metadata services, e.g. your link. I agree this feels like a bit of overkill, but you could add caching to reduce the overhead of multiple service calls. If you really don't know what the value is going to be at run time then this is probably the best way.
Just hard code it. If you know at compile time what the values will be then this is probably the quickest option. I've done this before and it's usually fine. However, this will obviously break if someone changes CRM.
Use the strongly typed classes. This is effectively hard coding, except the system does it for you. However, you will have to regenerate them if CRM changes.
So none of these are a perfect option I'm afraid, but they all get the job done.
Edit
Re: option 3; I mean the early-bound entities described here: http://msdn.microsoft.com/en-us/library/gg328210.aspx. I'm not sure how much they will help in this situation. They are strongly typed classes which are used instead of the entity class, e.g. contact.firstname instead of entity["firstname"]. I suppose you might be able to use them as a form of metadata - never tried it myself though. Also, it has the same problem as option 2: when CRM changes they need to be updated and then recompiled.
In this case I'm veering towards option 1 and querying the metadata services; if you do this once and cache the results at the beginning of your process, you will always have the most up-to-date information. This example shows how to get all the metadata in the system: http://msdn.microsoft.com/en-us/library/jj603008.
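As a rough illustration of option 1 (not code from the linked sample), a helper along these lines could resolve a label such as "Warm" to its underlying value; the entity and attribute names would be whatever your Opportunity field actually uses, and the result should be cached as suggested above.

using System;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

public static class OptionSetHelper
{
    // Returns the integer value behind a label such as "Warm", or null if it is not found.
    public static int? GetOptionSetValue(IOrganizationService service,
        string entityLogicalName, string attributeLogicalName, string label)
    {
        var request = new RetrieveAttributeRequest
        {
            EntityLogicalName = entityLogicalName,
            LogicalName = attributeLogicalName,
            RetrieveAsIfPublished = true
        };

        var response = (RetrieveAttributeResponse)service.Execute(request);
        var metadata = (PicklistAttributeMetadata)response.AttributeMetadata;

        return metadata.OptionSet.Options
            .Where(o => o.Label.UserLocalizedLabel != null &&
                        string.Equals(o.Label.UserLocalizedLabel.Label, label,
                                      StringComparison.OrdinalIgnoreCase))
            .Select(o => o.Value)
            .FirstOrDefault();
    }
}

The returned value can then be assigned to the Opportunity as new OptionSetValue(value.Value).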

Custom Validators in ASP.NET Development - Clean vs Efficient

I'm working on a page that has a significant number of textboxes/dropdowns/etc to fill out. The majority of these are going to be performing some sort of custom validation. I should note that it's nothing of substantial size - all just string or integer values.
I always hear (and have typically always agreed) that as much validation should be performed on the client rather than on the server, but in this case I am unsure. The difference here is that this project will be passed on to an IT guy who knows about computers but is still new to programming - he will be the one in charge of making the minor updates and changes to the way these custom validations work in the future.
My idea shifted from being as efficient as possible to being a bit less efficient but much more readable. I created a new class specifically for all of my validations, which will be used throughout the website. By forcing all of my custom validation code into this class, though, I eliminate any client-side validations I might otherwise be able to perform. I should also note that each page that requires custom validation will generally need to perform at least one server-side validation, so I will never be able to rely on client-side validation 100%.
Considering the relatively low level of activity on the website (currently and in the future), would you consider this as an acceptable solution? Or would you ALWAYS prefer to have as much validation on the client as possible in order to increase the responsiveness, even if it makes things a bit more messy for whoever may be working on it in the future?
The benefit of client-side validation is that the user doesn't have to wait for the page to post back.
Validation constraints are best declared server-side. Otherwise, someone could disable JavaScript in their browser and send corrupt data to your database.
If you want the speed of client-side validation but want to keep the client clean for maintenance, you can subscribe to the onblur event of each form input to make an AJAX call that validates the model, then prevent the form from submitting while it is invalid. This could all be factored into an external .js file, so all your IT guy has to do is include it; from there it's just HTML.
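To sketch the server-side half of that idea: an ASP.NET page method that the external .js file could hit from an onblur handler via $.ajax. The page name, field name and rule below are made up; in practice the method would just delegate to the central validation class described in the question.

using System.Web.Services;

public partial class SignUpPage : System.Web.UI.Page
{
    // Static page method reachable at SignUpPage.aspx/ValidateField via a JSON POST.
    [WebMethod]
    public static string ValidateField(string fieldName, string value)
    {
        // Placeholder rule; the real page would dispatch to the shared validation class.
        if (fieldName == "Age")
        {
            int age;
            if (!int.TryParse(value, out age) || age < 1 || age > 120)
                return "Please enter a valid age.";
        }
        return string.Empty; // empty string means the field passed validation
    }
}

The script would show the returned message next to the input and block submission while any field still reports an error.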
You always want to aim for a better user experience, in my opinion. Generally speaking, if your code doesn't add value to the user experience, it doesn't really matter how you implement it in the back end. Having said that, you should always try to write maintainable code. If "messy" code is the best you can do for the time being, add documentation that explains why.

Should I store localization content in the application state

I am developing my first multilingual C# site and everything is going OK except for one crucial aspect: I'm not 100% sure what the best option is for storing strings (typically single words) that will be translated in code from my code-behind pages.
On the front end of the site I am going to use ASP.NET resource files for the wording on the pages. This part is fine. However, this site will make XML calls and the XML responses are only ever in English. I have been given an Excel sheet with all the words that will be returned by the XML, broken into the different languages, but I'm not sure how best to store/access this information. There are roughly 80 words x 7 languages.
I am thinking about creating a dictionary object for each language in my global.asax file at application start and keeping them in memory. The plus side is that each dictionary object only has to be created once (until IIS restarts) and can be accessed by any user without needing to be rebuilt; the downside is that I have 7 dictionary objects constantly stored in memory. The server is a Win 2008 64-bit box with 4 GB of RAM, so should I even be concerned about the memory taken up by this method?
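For what it's worth, a bare-bones sketch of that approach in Global.asax; the language codes and words are placeholders, and the real data would be loaded from the spreadsheet or a database rather than hard-coded.

using System;
using System.Collections.Generic;
using System.Web;

public class Global : HttpApplication
{
    // language code -> (English word returned by the XML -> translated word)
    public static Dictionary<string, Dictionary<string, string>> Translations;

    protected void Application_Start(object sender, EventArgs e)
    {
        Translations = new Dictionary<string, Dictionary<string, string>>(StringComparer.OrdinalIgnoreCase);

        var french = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        french["Warm"] = "Chaud";
        french["Cold"] = "Froid";
        Translations["fr"] = french;

        // ...repeat for the other six languages.
    }
}

A lookup is then just Global.Translations[languageCode][englishWord], and 7 x 80 short strings is a negligible amount of memory.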
What do you guys think would be the best way to store/retrieve different language words that would be used by all users?
Thanks for your input.
Rich
From what you say, you are looking at 560 words which need to differ based on locale. This is a drop in the ocean. The resource file method you have contemplated is fit for purpose, and I would recommend using it. Resource files integrate with controls, so you will get the most out of them.
If it did trouble you, you could put them in a sliding cache (say, 20 minutes), but I do not see anything wrong with your chosen solution.
OMO
Cheers,
Andrew
P.S. have a read through this to see how you can find and bind values in different resource files to controls and literals, and use them programmatically.
http://msdn.microsoft.com/en-us/magazine/cc163566.aspx
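For example, a word kept in a global resource file (say Words.resx plus Words.fr.resx and so on; the names here are made up) can be read from code-behind, honouring the current UI culture:

// Returns the translation of "Warm" for the current UI culture.
string warm = (string)HttpContext.GetGlobalResourceObject("Words", "Warm");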
As long as you are aware of the impact of doing so, then yes, storing this data in memory is fine (as long as you have enough memory to do so). Once you know what is appropriate for the current user, tossing it into memory is fine. You might also look at something like MemCached Win32 or Velocity to offload the storage to another app server. Use this even in your local application for the time being; that way, when it is time to push this to another server or grow your app, you have a clear separation of concerns defined at your caching layer. Keep in mind that the more languages you support, the more you are storing in memory, so keep an eye on the amount of data held on your lone app server, as this could become overwhelming in time. Also, make sure that the keys you use are specific to the language; otherwise you might find that you are serving a German menu to an English user.

Reducing a large single page AJAX application (jQuery, ASP.net)

I'm currently building a single page AJAX application. It's a large "sign-up form" that's been built as a multi-step wizard with multiple branches and different verbiage based on what choices the user makes. At the end of the form is an editable review page. Once the user submits the form, it sends a rather large email to us, and a small email to them. It's sort of like a very boring choose your own adventure book.
Feature creep has pushed the size of this app beyond the abilities of the current architecture, and it's too slow to be usable on slower computers (not good for a web app), especially those running Internet Explorer. It currently has 64 individual steps, 5,400 DOM elements, and the .aspx file alone weighs in at 300 KB (4,206 LOC). Loading the app takes anywhere from 1.5 seconds on a fast machine running Firefox 3 to 20 seconds on a slower machine running IE7. Moving between steps takes about the same amount of time.
So let's recap the features:
Multi-step, multi-path wizard-style form (64 steps)
Current step is shown in a fashion similar to this: http://codylindley.com/CSS/325/css-step-menu
Multiple validated fields
Changing verbiage based on user choices
Final, editable review page
I'm using jQuery 1.3.2 and the following plugins:
jQuery Form Wizard Plugin
jQuery clueTip plugin
jQuery sexycombo
jQuery meioMask plugin
As well as some custom script for loading the verbiage from an XML file, running the review page and some aesthetic accoutrements.
I don't have this posted anywhere public, but I'm mostly looking for some tips on how to approach this sort of project and make it light weight and extensible. If anyone has any ideas as far as tools, tutorials or technologies, that's what I'm looking for. I'm a pretty novice programmer (I'm mostly a CSS/xHTML/Design guy), so speak gently. I just need a good plan of attack to make this app faster. Any ideas?
One way would be to break apart the steps into multiple pages / requests. To do this you would have to store the state of the previous pages somewhere. You could use a database to do this or some other method.
Another way would be to dynamically load the parts you need via AJAX. This won't help with the 5,400 DOM elements, but it would help with the initial page load.
Based on the question comments a quick way to "solve" this problem is to make a C# class that mirrors all the fields in your question. Something like this:
public class MySurvey
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    // and so on...
}
Then you would store this in the session (to keep it easy... I know it's not the "best" way) like this:
public MySurvey Survey
{
    get
    {
        var survey = Session["MySurvey"] as MySurvey;
        if (survey == null)
        {
            survey = new MySurvey();
            Session["MySurvey"] = survey;
        }
        return survey;
    }
}
This way you'll always have a non-null Survey object you can work with.
The next step would be to break that big form into smaller pages, say step1.aspx, step2.aspx, step3.aspx, etc. All these pages would inherit from a common base page that includes the property above. After this, all you'd need to do is post the request from step1.aspx back and save it to Survey, similar to what you're doing now but for each small piece. When done, redirect (Response.Redirect("~/stepX.aspx")) to the next page. The info from the previous page would be saved in the session object, though if they close the browser they won't be able to get back to it.
Rather than saving it to the session you could save it in a database or in a cookie, but you're limited to 4K for cookies so it may not fit.
I agree with PBZ, saving the individual steps would be ideal. You can, however, do this with AJAX. It would require some work that sounds like it might be outside your skill set of mostly front-end development: you'd probably need to create a new database row tied to the user's session ID and update that row every time they click to the next step. You could even tie it to their IP address, so if the whole thing blows up they can come back, hit "remember me?", and have your application retrieve it.
As far as optimizing the existing structure goes, jQuery itself is fairly heavy, and adding a lot of jQuery plugins doesn't help. I'm not saying it's bad, because it saves you a lot of time, but in some cases you are using a module for one of its many features, and you could replace that entire module with a few lines of plain JavaScript.
As far as minimizing the individual DOM elements, the step above could help slim that down, because you're probably loading a lot of extra functionality for those modules that you may or may not need.
On the back end, I'd have to see the source to see how to tell you to optimize it, but it sounds like there's a lot of redundancy in individual steps, some of that can probably be trimmed down into functions that include a little recursion, or at the least delegate some of the tasks to one another.
I wish I could help more but without digging through your source I can only suggest basic strategies. Best of luck, though!
Agree, break up the steps. 5400 elements is too many.
There are a few options if you need to keep it on one page.
AJAX requests to get back either raw HTML, or an array of objects to parse into HTML or DOM
Frames or Iframes
JavaScript to set innerHTML or manipulate the DOM based on the current step. Note with this option IE7 and especially IE6 will have memory leaks. Google IE6 JavaScript memory leaks for more info.
Use document.write to include only the .js file(s) needed for the current step.
HTH.
Sounds like mostly a jQuery optimization problem.
First suggestion would be to switch as many selectors to ID selectors as you can. I've had speedups of over 200-300x by moving to ID-attribute selection alone.
Second suggestion is more of a plan of attack. Since IE is your main problem area, I suggest using the IE8 debugger. You just need to hit F12 in IE8... tabs 3 and 4 are Script and Profiler respectively.
Once you've done as much of #1 as you think you can, go to the Profiler tab to get a starting point: hit start profiling, perform some slow action on the webpage, and then stop profiling. You will see your longest method calls and can work your way through them.
For finer testing/dev, go to the Script tab. Breakpoints, locals, etc. are there for analysis. You can develop and test changes via the immediate window, i.e. put a breakpoint where you want to change a function, trigger the function, and execute your JavaScript in the immediate window instead of the defined JavaScript.
When you think you have something figured out, profile your changes to make sure they are really improvements. Just start the profiler, run the old code, stop it and note your benchmark. Then re-start the profiler and use the immediate window to execute your altered function.
That's about it. If that flow can't take you far enough then, as mentioned above, jQuery itself (and hence its plugins) is not terribly performant, and replacing it with standard JavaScript will speed everything up. If your plugins benchmark slowly, look at replacing them with other plugins.

Hold global data for an ASP.net webpage

I am currently working on a large-scale website, that is very dynamic, and so needs to store a large volume of information in memory on a near-permanent basis (things like configuration settings for the checkout, or the tree used to implement the menu structure).
This information is not session-specific, it is consistent for every thread using the website.
What is the best way to hold this data globally within ASP.NET, so it can be accessed when needed instead of being re-loaded on each use?
Any AppSettings in web.config are automatically cached (i.e., they aren't read from the XML every time you need to use them).
You could also manually manipulate the cache yourself.
Edit: Better links...
Add items to the cache
Retrieve items from the cache
Caching Application Data
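A small sketch of the manual approach; the key name, the 20-minute sliding window and the LoadMenuTree call are all placeholders, not anything from your project.

using System;
using System.Web;
using System.Web.Caching;

public static class MenuCache
{
    public static object GetMenuTree()
    {
        object tree = HttpRuntime.Cache["MenuTree"];
        if (tree == null)
        {
            tree = LoadMenuTree(); // expensive build, e.g. from the database
            HttpRuntime.Cache.Insert(
                "MenuTree",
                tree,
                null,                       // no cache dependency
                Cache.NoAbsoluteExpiration,
                TimeSpan.FromMinutes(20));  // sliding expiration
        }
        return tree;
    }

    private static object LoadMenuTree()
    {
        // placeholder for the real tree/configuration loading code
        return new { Name = "root" };
    }
}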
It's not precisely clear whether your information is session-specific or not... if it is, then use the ASP.NET Session object. Given your description of the scale, you probably want to look at storing the state in SQL Server:
http://support.microsoft.com/kb/317604
That's the 101 approach. If you're looking for something a little beefier, then check out memcached (that's pronounced Mem-Cache-Dee):
http://www.danga.com/memcached/
That's the system that apps like Facebook and Twitter use.
Good luck!
Using the ASP.NET caching feature is a good option, I think. In addition to John's answer, you can use the Caching Application Block from Microsoft's Patterns & Practices team.
This is a good video exploring the different ways to can retain application state.
http://www.asp.net/learn/3.5-videos/video-11.aspx
It touches on the Application object, which is global for the whole application and all users, and shows you how to create a hit counter (obviously, instead of storing an integer you could store objects). If you need to make changes, you do need to use a lock for concurrency, and I'm not sure how it handles LARGE amounts of data because I've never had to keep that much there.
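Roughly, the hit-counter pattern from that video looks like this (a sketch, not the video's exact code; the same pattern applies to storing a configuration object or menu tree):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        Application["HitCount"] = 0;
    }

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Lock/UnLock serialise writers so concurrent requests don't lose updates.
        Application.Lock();
        Application["HitCount"] = (int)Application["HitCount"] + 1;
        Application.UnLock();
    }
}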
I usually keep things like that in the Application object.
If the pages are dependent upon one another and they post to one another, you could use the page's request object. Probably not the answer you're looking for, but definitely one of the smallest in memory to use.
I have run into the same situation in the past and found an interface to be the most scalable solution. Application cache may be the answer today, but will it scale to meet your needs?
If you need to scale up, you may find cookies, or some type of temp database storage, will do the trick. Simply add a new method to your interface, and have the interface choose the "mode" from web.config.
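As a sketch of that interface idea (the appSettings key and the two backing stores are assumptions, not anything from the original project):

using System.Configuration;
using System.Web;

public interface IGlobalDataStore
{
    object Get(string key);
    void Set(string key, object value);
}

// Backed by the ASP.NET cache.
public class CacheDataStore : IGlobalDataStore
{
    public object Get(string key) { return HttpRuntime.Cache[key]; }
    public void Set(string key, object value) { HttpRuntime.Cache[key] = value; }
}

// Backed by the Application object.
public class ApplicationDataStore : IGlobalDataStore
{
    public object Get(string key) { return HttpContext.Current.Application[key]; }
    public void Set(string key, object value) { HttpContext.Current.Application[key] = value; }
}

public static class GlobalDataStoreFactory
{
    // "GlobalDataMode" is a hypothetical appSettings key.
    public static IGlobalDataStore Create()
    {
        return ConfigurationManager.AppSettings["GlobalDataMode"] == "Cache"
            ? (IGlobalDataStore)new CacheDataStore()
            : new ApplicationDataStore();
    }
}

Switching to cookies or a temp database later then only means adding another implementation and changing the web.config value.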
