I'm very new to C# and .NET and I find myself faced with a problem and I'm not sure in which direction I need to head.
My company works with a third-party subscription fulfillment system for many functions, including billing and renewals. This system can automatically send an email when certain events are triggered. For example, each subscription goes through what we call a renewal series. This series consists of several efforts spread across the life of the subscription.
When a subscription qualifies for a certain effort of this series, we can have an event generated that will cause the system to send an HTTP POST request to a given URI with an XML payload. The endpoint (an .aspx page) receives the request, processes it, and returns a response with, in this case, HTML code. That HTML is then emailed out by the fulfillment system.
I have a basic web application created with a few of these .aspx pages up and running. Each page has a corresponding .cs code behind file.
Here's where my question really starts. In our fulfillment system, we can only define one endpoint per combination of event and product. So, no matter which effort a subscription is qualifying for at the time, the event itself is the same. What is different, though, is the XML of the HTTP POST request. It's in that XML that I can tell which effort the request has been generated for. That is important because the HTML of the corresponding email is different for each effort. To phrase it a slightly different way, the HTML that should be rendered is different, top to bottom, for effort 1 than for effort 2. Effort 2 is different than effort 3, and so on.
So, what I'm trying to figure out is how to "direct the traffic". Since all of these requests will come to a single endpoint, I need to dynamically return the correct HTML for the corresponding effort.
In a different .aspx page in this same app, there is some content that needs to be generated dynamically depending on the contents of the request. In that case, I used two PlaceHolder controls, one for each possible set of text. Then, in the code-behind, I set their Visible property to true or false as needed.
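In the code-behind, that looked roughly like this (the control IDs and the decision helper here are simplified placeholders, not my actual code):

protected void Page_Load(object sender, EventArgs e)
{
    // DecideVariantFromRequest stands in for whatever inspects the request.
    bool useFirstVariant = DecideVariantFromRequest(Request);

    // Show exactly one of the two blocks of text.
    FirstTextPlaceHolder.Visible = useFirstVariant;
    SecondTextPlaceHolder.Visible = !useFirstVariant;
}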
I dismissed the idea of doing that in this case early on since there are five or six HTML templates and stuffing all of them into one page would be messy and hard to maintain.
This is the point where I don't know what to do next. I have a feeling that a User Control or Custom Control is going to be the way to go. But is plain old redirection a better option? Or none of the above?
The solution you dismissed is very close to the correct one. However, there is something you can do to simplify it and make maintenance easier.
What we want to do here is build a custom control or a user control for each effort. This will let you maintain the code for the efforts separately, without mixing everything together in one place. Then your entire endpoint *.aspx page consists of a single placeholder control. When processing a request, your Page_Load method will parse your XML and figure out which effort you need. It will then create a new instance of the proper control, add it to the placeholder, and pass the rest of the data to the control to finish processing.
Because there is some commonality among all the controls here (the ability to receive an XML message), you may first want to create a base control for the individual effort controls to inherit from.
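A rough sketch of that Page_Load, assuming (for illustration only) that the effort number arrives in an <Effort> element, that the controls live under ~/Efforts/, and that EffortControlBase with its LoadXml method is the hypothetical shared base control:

protected void Page_Load(object sender, EventArgs e)
{
    // Read the raw XML payload posted by the fulfillment system.
    var doc = new System.Xml.XmlDocument();
    doc.Load(Request.InputStream);

    // Hypothetical: the effort number lives in an <Effort> element.
    string effort = doc.SelectSingleNode("//Effort").InnerText;

    // Load the matching user control (Effort1.ascx, Effort2.ascx, ...).
    var control = (EffortControlBase)LoadControl("~/Efforts/Effort" + effort + ".ascx");
    control.LoadXml(doc);                      // hypothetical base-class method
    EffortPlaceHolder.Controls.Add(control);   // the page's single PlaceHolder
}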
From what I understand, you'd like to have a single endpoint but still be able to "route" requests internally. This is easily possible.
You could, for example, transfer requests internally using Server.Transfer. This way you can have your 5 or 6 different HTML templates and route incoming requests to the correct template depending on the content of the request.
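A minimal sketch, assuming (for illustration) that the effort number can be read from an <Effort> element in the posted XML and that the template pages are named Effort1.aspx, Effort2.aspx, and so on:

protected void Page_Load(object sender, EventArgs e)
{
    var doc = new System.Xml.XmlDocument();
    doc.Load(Request.InputStream);
    string effort = doc.SelectSingleNode("//Effort").InnerText;

    // Hands the request off server-side, within the same request, so the
    // fulfillment system only ever sees the single endpoint URI.
    Server.Transfer("~/Templates/Effort" + effort + ".aspx");
}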
Here is an example of creating a new .aspx page at run time. We also give the user the option to choose the page name, much like creating a new page on a blogging platform: the page does not exist in the website beforehand, so it needs to be created dynamically at run time.
I want to build an "audit trail" for all requests incoming to the server, however it needs to be specific per user, per web page.
For instance I imagine something like this:
On initial view render I would store (in a cookie, a page variable, or something else) a unique Id saying the user browsed to /myapp.com/dashboard/1234, maybe in the layout.cshtml.
Then the app fires off X number of GET/POST requests to the server, each having that same unique Id initially tied to the rendered view.
This allows me then to tie back all requests for a page and add up the server execution time.
I tried using path-specific cookies, but I realized this won't work since a user can have many tabs open with the same URL. Also, the user works in many areas of the app at once; they can have anywhere from 1 to 10+ tabs open. Each of these should have its own unique Id and "audit trail" of all calls taking place on that page.
This is an existing app, so modifying each of the GET/POST requests to pass in the unique Id is out of scope. I'm just hoping I am missing something that might take care of this.
Thank you!
If I'm understanding you correctly, you have a single page load, and then additional requests made either for images and other resources or AJAX requests that you want tied to and tracked along with that initial page load.
The chief problem you're going to have here is that, based on the way HTTP works, each request is handled as its own thing and not considered part of a greater whole. The web browser makes it all look seamless, but all the web server is doing is responding to a bunch of (as far as it knows) unrelated requests for various different things. To track them all as one unit, you would either need to attach some unique id to the request itself (for a GET, that would be either part of the URI path or the query string) or lean on Session to introduce state between the requests. However, session state really only works in this scenario when all requests can be tied to a single initial request. Once the user starts working with multiple different pages at once, there's no reasonable way to discern which request belongs to what, and you're back in the same boat.
In other words, your only real option is to send something along with the request, which would mean doing something like:
<link rel="stylesheet" type="text/css" href="/path/to/file.css?origin=@Request.RawUrl" />
Then, you could have an action filter that looks for origin in the query string of any request, and ties it to the logging for that particular page.
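A rough sketch of such a filter (AuditLog.Record here is a placeholder for whatever logging mechanism you use):

using System.Web.Mvc;

public class OriginTrackingAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // "origin" is the query-string key attached in the markup above.
        string origin = filterContext.HttpContext.Request.QueryString["origin"];
        if (!string.IsNullOrEmpty(origin))
        {
            // Tie this request to the page that spawned it.
            AuditLog.Record(origin, filterContext.HttpContext.Request.RawUrl);
        }
        base.OnActionExecuting(filterContext);
    }
}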
For what it's worth, it should be noted that by default, IIS will handle all requests for static resources directly, without involving ASP.NET. If you do want to track requests for static resources, you would have to pass them all through ASP.NET, which will be kind of a pain. If you only want to track AJAX requests, that's much simpler and shouldn't require anything special for the most part.
All that said, if the only purpose of this is to track page load time, there are far better and easier ways to do that. You can install Glimpse. You can use your browser's developer console. You can use something like Google Analytics. All of these are far preferable to the path you're going down here for page load statistics.
Write an ActionFilter to do this. There are many examples of this:
http://rion.io/2013/04/15/creating-advanced-audit-trails-using-actionfilters-in-asp-net-mvc/
http://blog.ploeh.dk/2014/06/13/passive-attributes/
I personally like Mark Seemann's example more since it clearly defines a nice separation of concerns for the attribute and the filter.
While going through MVC concepts, I have read that it is not good practice to have code inside a 'GET' action which changes the state of server objects (DB updates, etc.).
'Caching of return data' has been given as a reason for this.
Could someone please explain this?
Thanks in advance!
This is per the HTTP standard. The GET verb is one that should be idempotent and safe.
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.

In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
Browsers can cache GET requests, generally for static data like images or scripts. But you can also allow browsers to cache GET requests to controller actions, using [OutputCache] or similar mechanisms, so if caching is turned on for a GET controller action, it's possible that clicking a link leading to /Home/Index doesn't actually run the Index method on the server, but rather lets the browser serve the page from its own cache.
With this line of thinking, you can safely turn on caching on GET actions in which the data you're serving up doesn't change (or doesn't change often), with the knowledge that your server action won't fire every time.
POSTs won't be cached by the browser, so any POST is guaranteed to make it to the server.
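As a minimal sketch of the caching side, using the [OutputCache] attribute on an MVC action:

[HttpGet]
[OutputCache(Duration = 60, VaryByParam = "none")]
public ActionResult Index()
{
    // With caching on, this body may not run for every click on a link to
    // /Home/Index; the cached response can be served instead.
    return View();
}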
Ignore caching for a moment. Another way of thinking about this is that search engines will store HTTP GET links during their indexing/crawling process, therefore they will show up in search results.
Suppose your /Home/Index is implemented as a GET but, let's say, it deletes a row in your database. Every time this link shows up in a search engine and somebody clicks it, a row gets deleted, and soon you have a lot of deleted rows.
The HTTP spec states that GET and HEAD are expected to be safe, i.e. they should not change server state.

One practical aspect of this is that search robots will issue a GET against any link to your site they know of. If such a GET changes user data it was not meant to change, you are in trouble.

Being safe has the added benefit that clients may cache the result of a GET (use HTTP headers to control this).
What is the best practice to handle dangerous characters in asp.net?
see example: asp.net sign up form
Should you:
use JavaScript to prevent them from entering it into the textbox in the first place?
have a general function that does a find and replace on the server side?
The problem with #1 is that it will increase page load time.
ASP.NET handles potentially dangerous characters for you by default, since ASP.NET 2.0. From Request Validation in ASP.NET:
Request validation is a feature in ASP.NET that examines an HTTP request and determines whether it contains potentially dangerous content. In this context, potentially dangerous content is any HTML markup or JavaScript code in the body, header, query string, or cookies of the request. ASP.NET performs this check because markup or code in the URL query string, cookies, or posted form values might have been added for malicious purposes.

Request validation helps prevent this kind of attack. If ASP.NET detects any markup or code in a request, it throws a "potentially dangerous value was detected" error and stops page processing.
Perhaps the most important bit of this is that it happens on the server; regardless of the client accessing your application, they cannot just turn off JavaScript to work around it.
Solution #1 won't increase load time by much.

You should ALWAYS use solution #2 along with solution #1, because users can turn off JavaScript in their browsers.
You accept them like regular characters on the write side. When rendering, you encode your output. You have to encode it anyway, regardless of security, so that special characters display correctly.
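A minimal sketch of that, using standard framework calls:

using System.Web;

// The value is stored exactly as the user typed it.
string userComment = "I <3 C# & \"quotes\"";

// Encode only at render time, when writing it into HTML output;
// in ASP.NET 4.0 markup, <%: userComment %> does the same thing.
string safeHtml = HttpUtility.HtmlEncode(userComment);
// safeHtml is now: I &lt;3 C# &amp; &quot;quotes&quot;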
What is the best practice to handle dangerous characters in asp.net?
I did not watch the screencast you link to (questions should be self-contained anyway), but there are no dangerous characters. It all depends on the context. Take Stack Overflow, for example: it lets me input the characters Dangerous!'); DROP TABLE Questions--. Nothing dangerous there.
ASP.NET itself will do its best to prevent malicious input at the HTTP level: it won't let any user access files like web.config or files outside your web root.
As soon as you start doing something with user input, it's up to you. There's no silver bullet, no one rule that fits them all. If you're going to display the user input as HTML, you'll have to make sure you only allow harmless markup tags without any scriptable attributes. If you're allowing users to upload images, make sure only images get uploaded. If you're going to send input to an RDBMS, be sure to escape characters that have meaning for the database manipulation language.
And so on.
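For the database case, a quick sketch using a parameterized query instead of hand-escaping (the connection string, the Questions table, and the userInput variable are made up for this example):

using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "INSERT INTO Questions (Title) VALUES (@title)", conn))
{
    // The value travels as a parameter, not as part of the SQL text, so
    // "Dangerous!'); DROP TABLE Questions--" is stored as harmless data.
    cmd.Parameters.AddWithValue("@title", userInput);
    conn.Open();
    cmd.ExecuteNonQuery();
}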
ALWAYS validate input on the server, this should not even be a discussion, just do it!
Client-side validation is just eye candy for the user, but the server is where it counts!
Thinking that
ASP.NET handles potentially dangerous characters for you by default, since ASP.NET 2.0. From Request Validation in ASP.NET:
is like thinking that a solid door will keep a thief out. It won't; it will only slow him down. You have to know what the most common attack vectors are and what the possible solutions are. You must understand that every (EVERY!) variable (field/property) you write into HTML/CSS/JavaScript is a potential attack vector that must be sanitized (through the use of appropriate libraries, like some methods included in newer MVC.NET, or at least the <%: %> of ASP.NET 4.0), no exceptions. Every (EVERY!) query you execute is a potential attack vector that must be sanitized through the exclusive use of an ORM and parameterized queries, no exceptions. No plain-text passwords must be saved in the DB. And tons of other similar things. It isn't very difficult, but laziness, complacency, and ignorance will make it harder (if not nearly impossible). If it isn't you who introduces the hole, then it's the programmer on your left, or the programmer on your right. There is no hope.
I'm currently building a single page AJAX application. It's a large "sign-up form" that's been built as a multi-step wizard with multiple branches and different verbiage based on what choices the user makes. At the end of the form is an editable review page. Once the user submits the form, it sends a rather large email to us, and a small email to them. It's sort of like a very boring choose your own adventure book.
Feature creep has pushed the size of this app beyond the abilities of the current architecture, and it's too slow to work on slower computers (not good for a web app), especially those running Internet Explorer. It currently has 64 individual steps and 5,400 DOM elements, and the .aspx file alone weighs in at 300 KB (4,206 LOC). Loading the app takes anywhere from 1.5 seconds on a fast machine running Firefox 3 to 20 seconds on a slower machine running IE7. Moving between steps takes about the same amount of time.
So let's recap the features:
Multi-step, multi-path wizard-style form (64 steps)
Current step is shown in a fashion similar to this: http://codylindley.com/CSS/325/css-step-menu
Multiple validated fields
Changing verbiage based on user choices
Final, editable review page
I'm using jQuery 1.3.2 and the following plugins:
jQuery Form Wizard Plugin
jQuery clueTip plugin
jQuery sexycombo
jQuery meioMask plugin
As well as some custom script for loading the verbiage from an XML file, running the review page and some aesthetic accoutrements.
I don't have this posted anywhere public, but I'm mostly looking for tips on how to approach this sort of project and make it lightweight and extensible. If anyone has any ideas as far as tools, tutorials or technologies, that's what I'm looking for. I'm a pretty novice programmer (I'm mostly a CSS/xHTML/design guy), so speak gently. I just need a good plan of attack to make this app faster. Any ideas?
One way would be to break apart the steps into multiple pages / requests. To do this you would have to store the state of the previous pages somewhere. You could use a database to do this or some other method.
Another way would be to dynamically load the parts you need via AJAX. This won't help with the 5,400 DOM elements, but it would help with the initial page load.
Based on the question comments a quick way to "solve" this problem is to make a C# class that mirrors all the fields in your question. Something like this:
public class MySurvey
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    // and so on...
}
Then you would store this in the session (to keep it easy... I know it's not the "best" way) like this:
public MySurvey Survey
{
    get
    {
        var survey = Session["MySurvey"] as MySurvey;
        if (survey == null)
        {
            survey = new MySurvey();
            Session["MySurvey"] = survey;
        }
        return survey;
    }
}
This way you'll always have a non-null Survey object you can work with.
The next step would be to break that big form into smaller pages, say step1.aspx, step2.aspx, step3.aspx, etc. All of these pages would inherit from a common base page that includes the property above. After that, all you'd need to do is post each small piece back, save it to Survey (similar to what you're doing now), and when done redirect (Response.Redirect("~/stepX.aspx")) to the next page. The info from the previous page would be saved in the session object. If they close the browser they won't be able to get back to it, though.
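A sketch of what one step page might look like (the base class and control names here are invented for illustration):

// Step1.aspx.cs
public partial class Step1 : SurveyBasePage   // SurveyBasePage exposes the Survey property
{
    protected void NextButton_Click(object sender, EventArgs e)
    {
        // Save only this step's fields onto the shared session object...
        Survey.FirstName = FirstNameTextBox.Text;
        Survey.LastName = LastNameTextBox.Text;

        // ...then move on to the next, much smaller, page.
        Response.Redirect("~/step2.aspx");
    }
}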
Rather than saving it to the session you could save it in a database or in a cookie, but you're limited to 4K for cookies so it may not fit.
I agree with PBZ, saving the individual steps would be ideal. You can, however, do this with AJAX. It would require some work that sounds like it might be outside your skill set of mostly front-end development, though: you'd probably need to create a new database row tied to the user's session ID and update that row every time they click to the next step. You could even tie it to their IP address, so if the whole thing blows up they can come back, hit "remember me?", and have your application retrieve it.
As far as optimizing the existing structure goes, jQuery is fairly heavy performance-wise, and adding a lot of jQuery plugins doesn't help. I'm not saying it's bad, because it saves you a lot of time, but there are some instances where you are using a module for just one of its many features, and you could replace that entire module with a few lines of jQuery-enabled JavaScript.
As far as minimizing the individual DOM elements goes, the step I mentioned above could help slim that down, because you're probably loading a lot of extra functionality for those modules that you may or may not need.
On the back end, I'd have to see the source to see how to tell you to optimize it, but it sounds like there's a lot of redundancy in individual steps, some of that can probably be trimmed down into functions that include a little recursion, or at the least delegate some of the tasks to one another.
I wish I could help more but without digging through your source I can only suggest basic strategies. Best of luck, though!
Agree, break up the steps. 5400 elements is too many.
There are a few options if you need to keep it on one page.
AJAX requests to get back either raw HTML, or an array of objects to parse into HTML or DOM
Frames or Iframes
JavaScript to set innerHTML or manipulate the DOM based on the current step. Note that with this option, IE7 and especially IE6 will have memory leaks. Google "IE6 JavaScript memory leaks" for more info.
Use document.write to include only the .js file(s) needed for the current step.
HTH.
Sounds like mostly a jQuery optimization problem.

First suggestion would be to switch as many selectors to ID selectors as you can. I've had speedups of 200-300x by moving to ID-based selection only.
Second suggestion is more of a plan of attack. Since IE is your main problem area, I suggest using the IE8 debugger. You just need to hit F12 in IE8... tabs 3 and 4 are Script and Profiler, respectively.
Once you've done as much of #1 as you think you can, to get a starting point, just go to profiler, hit start profiling, do some slow action on the webpage, and then stop profiling. You will see your longest method calls, and just work your way through it.
For finer testing/dev, go to the Script tab. Breakpoints, locals, etc. are there for analysis. You can dev/test changes via the immediate window; i.e., put a breakpoint where you want to change a function, trigger the function, and execute your JavaScript in the immediate window instead of the defined JavaScript.
When you think you have something figured out, profile your changes to make sure they are really improvements. Just start the profiler, run the old code, stop it and note your benchmark. Then re-start the profiler and use the immediate window to execute your altered function.
That's about it. If that flow can't take you far enough, as mentioned above, jQuery itself (and hence its plugins) is not terribly performant, and replacing it with standard JavaScript will speed everything up. If your plugins benchmark slowly, look at replacing them with other plugins.
I am making a web application in ASP.NET MVC (C#) with jQuery that will have different pricing plans.
Of course the more you pay the more features you get.
So I was planning to use roles. If someone buys plan 1, they get the role "plan 1". Then, on the page, I use an if statement to check whether they are in certain roles; if they are allowed to use the feature, I render it, and if not, I do nothing.
It could very well be that the entire page is shared among all the roles except for maybe one feature on that page.
Now someone was telling me that I should not do it the way I am thinking, since if I add more features my page will get more cluttered with if statements and it will be hard to maintain.
They said I should treat each plan as a separate application. So if I have 2 plans have 2 different links going to different files.
I agree with the person that it probably will be better in the long run, since I won't have to keep tacking on if statements, but here is the scenario that gets me.
In the future versions of my site I will have SMS and Email alerts.
Right now I have an HTML table with a set of tasks the user has to do. In future versions of the site, I will give the option to be alerted by email OR SMS. If they choose, say, to be alerted by email, an envelope icon will appear in a table column.
Now this might only be for people who are on Plan 2 and not Plan 1. So this person's solution was to just copy and paste all the code for the table, stick it in a file called Plan2.aspx, and then add the new row for the icons to the newly pasted code for Plan 2.
Now I would have a Plan1 file that has everything the same except for this extra row that is in the Plan2 file.
So I am not too crazy about that idea because of the duplicate code: if something is wrong with the table, I now have to change it in 2 locations, not one. If I add a 3rd plan, I then need to keep track of 3 sets of the same code with some varying differences.
My original way would have been to surround the plan-2-only table row with an if statement checking their role.
In some cases I probably will be able to put all the common code into one partial control and all the differing code in another partial control, but it's situations like this that I am not sure about.
There will be many more of these situations; this is just one example.
So what is the best way to make your code maintainable while also having minimal amounts of duplicate code?
Sorry for the long post; it's kind of hard to describe what I am trying to achieve and the situations that could be possible areas of trouble.
Edit
So I am still kinda confused by the examples people have given and would love to see little full examples of them and not just stubs.
I was also thinking of something else, though I am not sure if it would be good practice, and it might look pretty strange in some parts: put everything the plans have in common into partial views, even if a partial is just one line, then have 2 separate links and assemble the partial views depending on the role.
I am thinking about the 2 separate links because of the jQuery stuff. For instance, if I had this:
<textbox code>
<textbox code> // this textbox code is only for plan 2 ppl. This textbox needs to go here
<textbox code>
Each of these textbox tags would be in its own partial view (3 partials in this case), and I would have 2 aspx pages:
1st
<render partialView 1>
<render partialView 2>
<render partialView 3>
2nd
<render partialView 1>
<render partialView 3>
Then each of these 2 aspx pages would have different JavaScript files linked up.
The thing I was thinking of is that if I just have one JavaScript file with all my jQuery, someone could just go and add the missing HTML elements and gain access to all those features.
So I am not sure how I would write it if I am using the "if statement" way.
But at the same time, having everything in partial views will look very funny. Like if I am making a table or something:
One partial view would have the start tag and some rows, then X partial views down the road another would have the closing tag. It will look very weird, and it will be hard to see the whole picture, since you have to open up X files.
So there is got to be a better way.
How well abstracted are the components?
My naive approach would be to create a separate layer that dishes the components out to the UI. Something like a repository pattern, with a method like this:
public IEnumerable<PlanRestrictedFeature> GetFeaturesForPlan(Role r)
{
    // Return all features that the user has access to based on role.
    // This forces all of this logic to exist in one place, for maintainability.
}
Of course, the method could also take in a string, enum, or Plan object, if you have one. Internally, this may use some type of map to make things simpler.
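One possible shape for that internal map (the plan roles and feature instances here are invented for illustration, and Role is assumed to be an enum):

using System.Collections.Generic;
using System.Linq;

// Maps each plan's role to the features it unlocks; defined once, in one place.
private static readonly Dictionary<Role, PlanRestrictedFeature[]> featureMap =
    new Dictionary<Role, PlanRestrictedFeature[]>
    {
        { Role.Plan1, new[] { Features.TaskTable } },
        { Role.Plan2, new[] { Features.TaskTable, Features.EmailAlerts, Features.SmsAlerts } },
    };

public IEnumerable<PlanRestrictedFeature> GetFeaturesForPlan(Role r)
{
    PlanRestrictedFeature[] features;
    return featureMap.TryGetValue(r, out features)
        ? features
        : Enumerable.Empty<PlanRestrictedFeature>();
}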
Then, the View can simply call the Render method of each component. Make sure the repository passes them back in the correct order for rendering, and rely on CSS for placement.
Whilst it's not best practice to have a view littered with if statements (see Rob Conery's blog post on the matter), some rudimentary logic is, in my opinion, acceptable. If you do this, though, you should try to use partials to keep the view as uncluttered as possible. This, as you pointed out, is what you think is the best solution.
Your view logic really should be as simple as possible, though, and your models would benefit from inheriting your price plan information to avoid duplicating the code itself.
Removed the other code, as you pointed out that you would just use the User class.
Regarding the textbox, this could be trickier. One thought is that you could have your scripts folder contain the global JS, with subfolders holding JS specifically for other roles (Role 2 and 3, for example). These could be protected by a custom route constraint which prevents users from accessing the file/folder without the relevant level of authentication, or by authorization rules in web.config.
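A rough sketch of such a route constraint (the role check shown is just one possibility):

using System.Web;
using System.Web.Routing;

// Matches the route only when the current user carries the required role.
public class PlanConstraint : IRouteConstraint
{
    private readonly string requiredRole;

    public PlanConstraint(string requiredRole)
    {
        this.requiredRole = requiredRole;
    }

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        return httpContext.User != null
            && httpContext.User.Identity.IsAuthenticated
            && httpContext.User.IsInRole(requiredRole);
    }
}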
Take a look at the tutorials and sample projects on http://www.asp.net/mvc.
These all follow certain principles which would help you.