I'm currently building a single page AJAX application. It's a large "sign-up form" that's been built as a multi-step wizard with multiple branches and different verbiage based on what choices the user makes. At the end of the form is an editable review page. Once the user submits the form, it sends a rather large email to us, and a small email to them. It's sort of like a very boring choose your own adventure book.
Feature creep has pushed the size of this app beyond the abilities of the current architecture, and it's too slow to use on slower computers (not good for a web app), especially those running Internet Explorer. It currently has 64 individual steps, 5400 DOM elements, and the .aspx file alone weighs in at 300 KB (4206 LOC). Loading the app takes anywhere from 1.5 seconds on a fast machine running Firefox 3 to 20 seconds on a slower machine running IE7. Moving between steps takes about the same amount of time.
So let's recap the features:
Multi-step, multi-path wizard-style form (64 steps)
Current step is shown in a fashion similar to this: http://codylindley.com/CSS/325/css-step-menu
Multiple validated fields
Changing verbiage based on user choices
Final, editable review page
I'm using jQuery 1.3.2 and the following plugins:
jQuery Form Wizard Plugin
jQuery clueTip plugin
jQuery sexycombo
jQuery meioMask plugin
As well as some custom script for loading the verbiage from an XML file, running the review page and some aesthetic accoutrements.
I don't have this posted anywhere public, but I'm mostly looking for tips on how to approach this sort of project and make it lightweight and extensible. If anyone has any ideas as far as tools, tutorials or technologies, that's what I'm looking for. I'm a pretty novice programmer (I'm mostly a CSS/xHTML/design guy), so speak gently. I just need a good plan of attack to make this app faster. Any ideas?
One way would be to break the steps apart into multiple pages/requests. To do this you would have to store the state of the previous pages somewhere. You could use a database or some other storage method for that.
Another way would be to dynamically load the parts you need via AJAX. This won't help with the 5400 DOM elements, but it would help with the initial page load.
Based on the question comments a quick way to "solve" this problem is to make a C# class that mirrors all the fields in your question. Something like this:
public class MySurvey
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    // and so on...
}
Then you would store this in the session (to keep it easy... I know it's not the "best" way) like this:
public MySurvey Survey
{
    get
    {
        var survey = Session["MySurvey"] as MySurvey;
        if (survey == null)
        {
            survey = new MySurvey();
            Session["MySurvey"] = survey;
        }
        return survey;
    }
}
This way you'll always have a non-null Survey object you can work with.
The next step would be to break that big form into smaller pages, say step1.aspx, step2.aspx, step3.aspx, etc. All these pages would inherit from a common base page that includes the property above. After that, all you'd need to do is post each step back and save it to Survey, similar to what you're doing now but for each small piece. When done, redirect (Response.Redirect("~/stepX.aspx")) to the next page. The info from the previous page is saved in the session object. If they close the browser, though, they won't be able to get back to where they left off.
Rather than saving it to the session you could save it in a database or in a cookie, but you're limited to 4K for cookies so it may not fit.
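To make that concrete, here's a minimal sketch of how one step page might hang together (SurveyBasePage, FirstNameTextBox and the button handler are illustrative names, not from the asker's code):

```csharp
using System;
using System.Web.UI;

// Common base page: every stepN.aspx inherits this to share the session-backed Survey.
public class SurveyBasePage : Page
{
    protected MySurvey Survey
    {
        get
        {
            var survey = Session["MySurvey"] as MySurvey;
            if (survey == null)
            {
                survey = new MySurvey();
                Session["MySurvey"] = survey;
            }
            return survey;
        }
    }
}

// step1.aspx.cs: save this step's piece of the form, then move on.
public partial class Step1 : SurveyBasePage
{
    protected void NextButton_Click(object sender, EventArgs e)
    {
        Survey.FirstName = FirstNameTextBox.Text; // control name is hypothetical
        Response.Redirect("~/step2.aspx");
    }
}
```

Each step page stays tiny: it only knows about its own fields, and the shared state lives in one place.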
I agree with PBZ; saving the individual steps would be ideal. You can, however, do this with AJAX. It would require some work that sounds like it might be outside your skill set of mostly front-end development: you'd probably need to create a new database row, tie it to the user's session ID, and update that row every time they click to the next step. You could even tie it to their IP address, so if the whole thing blows up they can come back and hit "remember me?" and your application can retrieve it.
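A rough sketch of that idea, assuming a SurveySteps table with SessionId, StepNumber and StepData columns (the table, connection-string name and schema are all hypothetical):

```csharp
using System.Configuration;
using System.Data.SqlClient;

public static class StepStore
{
    // Upsert one wizard step's data, keyed by the ASP.NET session ID.
    public static void SaveStep(string sessionId, int stepNumber, string stepData)
    {
        var cs = ConfigurationManager.ConnectionStrings["Survey"].ConnectionString;
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            "UPDATE SurveySteps SET StepData = @data " +
            "WHERE SessionId = @sid AND StepNumber = @step; " +
            "IF @@ROWCOUNT = 0 " +
            "INSERT INTO SurveySteps (SessionId, StepNumber, StepData) " +
            "VALUES (@sid, @step, @data);", conn))
        {
            cmd.Parameters.AddWithValue("@sid", sessionId);
            cmd.Parameters.AddWithValue("@step", stepNumber);
            cmd.Parameters.AddWithValue("@data", stepData);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

Each AJAX "next step" call would invoke something like StepStore.SaveStep(Session.SessionID, currentStep, serializedFields) on the server.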
As far as optimizing the existing structure, jQuery carries a fair amount of overhead, and adding a lot of jQuery plugins doesn't help. I'm not saying it's bad, because it saves you a lot of time, but in some cases you're using a plugin for just one of its many features, and you could replace that entire plugin with a few lines of plain JavaScript.
As far as minimizing the individual DOM elements, the step I mentioned above could help slim that down, because you're probably loading a lot of extra functionality for those plugins that you may or may not need.
On the back end, I'd have to see the source to tell you how to optimize it, but it sounds like there's a lot of redundancy in the individual steps. Some of that could probably be consolidated into shared functions, or at least the steps could delegate tasks to one another.
I wish I could help more but without digging through your source I can only suggest basic strategies. Best of luck, though!
Agree, break up the steps. 5400 elements is too many.
There are a few options if you need to keep it on one page.
AJAX requests to get back either raw HTML, or an array of objects to parse into HTML or DOM
Frames or Iframes
JavaScript to set innerHTML or manipulate the DOM based on the current step. Note with this option IE7 and especially IE6 will have memory leaks. Google IE6 JavaScript memory leaks for more info.
Use document.write to include only the .js file(s) needed for the current step.
HTH.
Sounds like mostly a jQuery optimization problem.
My first suggestion would be to switch as many selectors as you can to ID selectors. I've seen speedups of 200-300x from moving to ID-attribute selection alone.
My second suggestion is more of a plan of attack. Since IE is your main problem area, I suggest using the IE8 debugger. Just hit F12 in IE8; tabs 3 and 4 are Script and Profiler, respectively.
Once you've done as much of #1 as you think you can, go to the Profiler tab to get a starting point: hit Start Profiling, perform some slow action on the page, then stop profiling. You'll see your longest method calls, and you can just work your way through them.
For finer-grained testing and development, go to the Script tab. Breakpoints, locals, etc. are there for analysis. You can develop and test changes via the immediate window: put a breakpoint where you want to change a function, trigger the function, then execute your JavaScript in the immediate window instead of the defined JavaScript.
When you think you have something figured out, profile your changes to make sure they really are improvements. Start the profiler, run the old code, stop it and note your benchmark. Then restart the profiler and use the immediate window to execute your altered function.
That's about it. If that flow can't take you far enough, then as mentioned above, jQuery itself (and hence its plugins) is not terribly performant, and replacing it with standard JavaScript will speed everything up. If your plugins benchmark slowly, look at replacing them with other plugins.
Basically I have a class called Asset which holds all the information for an Asset in my system. This can get quite big (Assets have Thumbnails, Filenames, Metadata, Ratings, Comments, etc).
On my results page, I list all the Assets that match a particular criteria which then can be filtered using jQuery.
I was finding performance issues in IE8 so the first thing I did was look at the Asset class and see what was not needed for displaying an Asset on the page. (Later I visited my jQuery and found that it was what was causing the performance issues).
So, when I stripped my class right down to basics, I made that the BaseAsset and derived Asset from that.
My question is: did I need to do that? Was there any need?
I shall provide examples if necessary, but I am refraining at the moment, because the post could become quite big :)
I don't think you need a base class per se; I think what you need is to send only what you need. It seems the problem was simply that you didn't need all of the data all of the time.
I know it's tedious, but only send what you need, when you need it, and you won't have any problems. When you need more data, then either load it asynchronously with an AJAX call, or even make another page that the user navigates to.
I would like to check the load speed of each page in a particular asp.net website (based on C#). I thought of creating a class for timing the loading and execution of a certain element. I'd like to create a stopwatch object on each user control and on the page itself to get diagnostics on how long the page takes to load altogether. When debug is enabled, we could then see the load times of each of the pages, as well as each of the elements.
I don't want to save it in session, but I really can't think of a way around it.
I was thinking that a nice implementation would be to include a class that I write which will track the stopwatch values for each of the master page, page, and user controls as they load, but where could I save each execution time when it was done? I suppose I could use Session, but is this the only way to save data across user controls and pages?
What would be the best way to do this?
Besides using a logging framework, you could get the tracing you need from ASP.NET's out-of-the-box tracing and diagnostics features [http://msdn.microsoft.com/en-us/library/bb386420.aspx], or you could use something like ELMAH for other needs. Try not to re-invent the wheel, but make a car out of it that flies. Good luck!!
I would use a logging framework like log4net etc... Especially since you're only doing it for debug mode. You can define warning levels to match your loading speeds if something is slower than expected...
I would agree with other answers in that leveraging a logging implementation would be sufficient while doing related logic, such as writing to the log, on page Unload().
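One way to collect those timings without putting everything in Session is a small helper held once per request (PageTimer here is a hypothetical class, not an ASP.NET API); the page and each user control share the same instance and the results are logged from Unload():

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

// Collects named timings (master page, page, each user control) in one place.
public class PageTimer
{
    private readonly Dictionary<string, Stopwatch> _watches =
        new Dictionary<string, Stopwatch>();

    public void Start(string name)
    {
        var sw = new Stopwatch();
        _watches[name] = sw; // re-starting a name replaces its old timing
        sw.Start();
    }

    public void Stop(string name)
    {
        _watches[name].Stop();
    }

    // Dump everything at once, e.g. from the page's Unload event.
    public IDictionary<string, long> ElapsedMilliseconds()
    {
        return _watches.ToDictionary(kv => kv.Key, kv => kv.Value.ElapsedMilliseconds);
    }
}
```

Stashing one instance in HttpContext.Current.Items (which lives for exactly one request) lets the master page, page and controls all reach it without touching Session.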
If this is an MVC site, you could check out the MVC Mini Profiler. It is available as a Nuget package.
I have an ASP.NET MVC 3 site that is behind a CDN (combo of Azure CDN and CloudFlare). Each page view pulls data from SQL server (unless it's being cached by the CDN of course).
I'd like to be able to display a "N page views" on each page on my site (~10k pages) so when visitors view the pages, they know how popular (or unpopular) it is. Stackoverflow does this on each question page. I don't care WHO the users were, just the grand total per page.
I'm wondering what the best way to do this is. I've thought of using client code, but that is easily messed with by a malicious user to inflate the page view count. So it seems the best way is to implement this on the server. I did my best to search Stack Overflow for code samples and recommended approaches but couldn't find anything that applied to what I'm asking.
If I were implementing this, I would probably just stand on top of my existing analytics package. I like using Google Analytics, and it tracks all kinds of awesome data, including page hit count:
http://www.google.com/analytics/
In a little piece of async javascript I would just use their API to get the total page views:
http://code.google.com/apis/analytics/docs/
I know it's lazy, but there are far fewer things I can screw up by using their code :-)
Here's some example code I've written to do just this:
https://github.com/TomGullen/C-Unique-Pageview-Tracker/blob/master/Tracker.cs
Advantages
No reliance on database
Customisable memory usage
Disadvantages
Will double count unique views depending on cache expiry + max queue size
Example usage
var handler = new UniquePageviewHandler("BlogPost_" + blogPostID);
if (handler.ProcessPageView(visitorIPAddress))
{
    // This is a new page view, so process accordingly
    // E.g.: increment the `unique page views` count in the database by 1
}
You'll need to modify the code I've linked to slightly to get it working in your project, but it should work without too many issues.
I'm new at web development with .NET, and I'm currently studying a page where I have both separated codebehinds (in my case, a .CS file associated to the ASPX file), and codebehind that is inside the ASPX file inside tags like this:
<script runat="server">
//code
</script>
Q1: What is the main difference (besides logical matters like organization, readability, etc.)? What could be done one way that could not be done the other? What is each mode best suited for?
Q2: If I'm going to develop a simple page with a database connection, library imports, access to controls (.ascx) and image access in other folders, which method should I choose?
Anything you can do in a code-behind, you can do in an inline script like what you posted. But you should use a code-behind most of the time anyway. Some things (like using directives) are just a little easier there, and it helps keep your code organized.
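As a trivial sketch of the equivalence (TitleLabel is a hypothetical control), this inline block:

```aspx
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        TitleLabel.Text = "Hello";
    }
</script>
```

behaves the same as the identical Page_Load method placed in the page's .aspx.cs code-behind file; either way the compiler wires the method into the same page class.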
Q1: Nothing. Aside from what you and the others have mentioned (separation, readability), you can do everything "code behind" can do with "inline" (code within page itself) coding.
Inline coding doesn't necessarily mean it's "spaghetti code" where UI and code are mixed in (like old-school ASP). All your code can live outside of the UI/HTML but still be inline. You can copy/paste all the code-behind code into your inline page, make a few adjustments (wiring, namespaces, import declarations, etc.), and that's that.
The other comments hit the nail: portability and quick fixes/modifications.
Depending on your use case, you may not want certain sections of code exposed (proprietary), but available for use. This is common for web dev professionals. Inline code allows your customers to quickly/easily customize functionality any way they want to, and can use some of your (proprietary) libraries (dlls) whenever they want to, without having to be code jocks (if they were, they wouldn't have hired you in the first place).
So in practical terms, it's like sending off an "html" file to clients containing instructions on how to change things around (without breaking things)...instead of sending off source code files along with html (aspx) pages and hoping your clients know what to do with them....
Q2: While either style (inline or code-behind) will work, it's really a matter of looking at your application in "tiers". Usually these will be UI, business logic and data tiers. Thinking about things this way will save you a lot of time.
Practical examples:
If more than one page of your web app must expose/access data, then having a data tier is the best approach. Actually, even if you currently only need one page, it's unlikely to stay that way, so think of it as best practice.
If more than one page of your web app will collect input from users (i.e. contact us, registration/sign up, etc.) then you're likely going to need to validate input. So instead of doing this on a page by page basis, a common input validation library will save you time, and lessen the amount of code you need.
In the above examples, you've "separated" a lot of the processing into their own tiers. Your individual html/aspx pages can then use the "code libraries" (data and input validation) quickly with minimal code at the "page level". Then the decision to use either inline or code-behind styles at the "page level" wouldn't matter much - you've essentially "dumbed it down" to whatever your use case is at the time.
Hope this helps....
Keep it separated. Use the .aspx page for your layout, use the .aspx.cs page for any page-specific code, and for preference pull your data access/business logic out into its own layer; it makes for much simpler maintenance/re-use later on.
Slight caveat there: ASP.NET MVC uses inline scripts in its views, and I've really come round to that idea. It can keep the simple stuff simple, and the architecture used in MVC ensures that your business code remains separate from your presentation code.
I'm not saying you should ever be hacking live code... but one bit of flexibility you get from having the "code behind" as an inline script is that you can hack in changes without having to rebuild/publish the site.
Personally, I don't ever do this but I've heard instances where people have done it to get in an emergency fix.
There is no difference between the script tag and code-behind. The code-behind option actually grew out of using the script tag or <% %> blocks in "Classic ASP". A lot of developers didn't like the fact that the server-side code sat alongside the UI code, because it made the file look messy and made it a lot more difficult for the HTML people (web designers, or whatever you'd like to call them) to work on the same page as the developers at the same time.
Most people like using the code-behind option (it's actually considered the standard way of doing things), because it keeps the UI and the code separate. It's what I prefer, but you really can use either.
You can use all the same stuff
Always try to keep the code separated unless you have a compelling reason not to
Funnily enough, I used <script runat="server"> in the code in front only today! I did this because you do not need to build the whole web application to deploy a fix that otherwise needs code-behind. Yes, it was a bug fix ;)
I am making a web application in asp.net mvc C# with jquery that will have different pricing plans.
Of course the more you pay the more features you get.
So I was planning to use roles. If someone buys plan 1, they get the role of plan 1. Then on the page I use an if statement to check whether they are in certain roles; if they are allowed to use the feature, generate it, and if not, do nothing.
It could very well be that the entire page is shared among all the roles except maybe one feature on that page.
Now someone was telling me that I should not do it the way I'm thinking of, since if I add more features my page will get more cluttered with if statements and will be hard to maintain.
They said I should treat each plan as a separate application. So if I have 2 plans have 2 different links going to different files.
I agree with the person that it will probably be better in the long run, since I won't have to keep tacking on if statements, but what gets me is this scenario.
In the future versions of my site I will have SMS and Email alerts.
Right now I have an HTML table with a set of tasks the user has to do. In future versions of the site I will give the option to get alerted by email or SMS. If they choose, say, to be alerted by email, an envelope will appear in a table column.
Now this might only be for people on Plan 2, and not Plan 1. So this person's solution was to just copy and paste all the code for the table into a file called Plan2.aspx, then add the new row for the icons to the newly pasted code for Plan 2.
Now I would have a Plan1 file that has everything the same except for this extra row that is in the Plan2 file.
So I am not too crazy about that idea because of the duplicate code: if something is wrong with the table, I now have to change it in two locations, not one. If I add a third plan, I need to keep track of three sets of the same code with some varying differences.
My original way would have been to surround that Plan 2-only table row with an if statement checking their role.
In some cases I probably will be able to put all the common code into one partial control and all the different code in another partial control, but it's situations like this that I am not sure about.
There will be many more of these situations this just one example.
So what is the best way to make your code maintainable while also having minimal amounts of duplicate code?
Sorry for the long post; it's kinda hard to describe what I am trying to achieve and the situations that could be areas of trouble.
Edit
So I am still kinda confused by the examples people have given and would love to see little full examples of them and not just stubs.
I was also thinking of something else, but I am not sure if it would be good practice, and it might look pretty strange in some parts: put everything the plans have in common in partial views, even if a partial is just one line, then have 2 separate links and assemble the partial views depending on the user's role.
I am thinking about the 2 separate links because of the jQuery stuff. For instance, if I had this:
<textbox code>
<textbox code> // this textbox code is only for plan 2 ppl. This textbox needs to go here
<textbox code>
Each of these textbox tags would be in its own partial view (so 3 in this case),
so I would have 2 aspx pages:
1st
<render partialView 1>
<render partialView 2>
<render partialView 3>
2nd
<render partialView 1>
<render partialView 3>
Then each of these 2 aspx pages would link up different JavaScript files.
What I was thinking is that if I just have one JavaScript file with my jQuery, someone could go and add the missing HTML elements themselves and gain access to all those features.
So I am not sure how I would write it if I am using the "if statement" way.
But at the same time, having everything in partial views will look very funny, like if I am making a table or something: one partial view would have the opening tag and some rows, then X partial views down the road would have the closing tag. It will look very weird and be hard to see the whole picture, since you have to open up X files to see it.
So there is got to be a better way.
How well abstracted are the components?
My naive approach would be to create a separate layer that dishes the components out to the UI. Something like a repository pattern, with a method like this:
public IEnumerable<PlanRestrictedFeature> GetFeaturesForPlan(Role r)
{
    // return all features that the user has access to based on role
    // this forces all this logic to exist in one place for maintainability
}
Of course, the method could also take in a string, enum, or Plan object, if you have one. Internally, this may use some type of map to make things simpler.
Then, the View can simply call the Render method of each component. Make sure the repository passes them back in the correct order for rendering, and rely on CSS for placement.
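A sketch of what that repository might look like internally, assuming plans map straight onto role names and a simple feature enum (all names here are illustrative, not from the question):

```csharp
using System.Collections.Generic;
using System.Linq;

public enum PlanRestrictedFeature { TaskTable, EmailAlerts, SmsAlerts }

public class FeatureRepository
{
    // The single place where plan-to-feature knowledge lives.
    private static readonly Dictionary<string, PlanRestrictedFeature[]> Map =
        new Dictionary<string, PlanRestrictedFeature[]>
        {
            { "Plan1", new[] { PlanRestrictedFeature.TaskTable } },
            { "Plan2", new[] { PlanRestrictedFeature.TaskTable,
                               PlanRestrictedFeature.EmailAlerts,
                               PlanRestrictedFeature.SmsAlerts } },
        };

    public IEnumerable<PlanRestrictedFeature> GetFeaturesForPlan(string roleName)
    {
        PlanRestrictedFeature[] features;
        return Map.TryGetValue(roleName, out features)
            ? (IEnumerable<PlanRestrictedFeature>)features
            : Enumerable.Empty<PlanRestrictedFeature>();
    }
}
```

Adding a third plan then means adding one dictionary entry, not a third copy of the page.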
Whilst it's not best practice to have a view littered with if statements (see Rob Conery's blog post on the matter), some rudimentary logic is, in my opinion, acceptable. If you do this, though, you should use partials to keep the view as uncluttered as possible. This, as you pointed out, is what you think is the best solution.
Your view logic really should be as simple as possible, though, and your models would benefit from inheriting your price-plan information to avoid duplicating the code itself.
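In an ASPX view, that can boil down to something as small as this (the partial names are made up for illustration):

```aspx
<%-- Shared markup rendered for every plan --%>
<% Html.RenderPartial("TaskTable"); %>

<%-- One small role check instead of a duplicated Plan2.aspx --%>
<% if (User.IsInRole("Plan2")) { %>
    <% Html.RenderPartial("AlertIconsRow"); %>
<% } %>
```

One role check plus two partials keeps the markup in a single file, so a table fix only ever happens in one place.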
Removed the other code, as you pointed out that you would just use the User class.
Regarding the textbox, this could be trickier. One thought is that you could have a scripts folder containing global JS, with subfolders containing JS specifically for other roles (roles 2 and 3, for example). These could be protected by a custom route constraint which prevents users from accessing the file/folder without the relevant level of authentication. You could also use a web.config to provide a similar level of protection, or just use the web.config file alone.
Take a look at the tutorials and sample projects on http://www.asp.net/mvc.
These all follow certain principles which would help you.