Best way to organize/architect a web site [closed] - C#

I have to make a key decision about our web site's organization/architecture.
Here is my context.
Our main web site will be available in different countries. Even though the business is nearly the same everywhere, there are some region-specific features. This concerns translations, of course, but also masters/layouts and business processes. These differences stem from different legislation. At the beginning we will have 4 or 5 derivations, but the target could be 20.
A simple comparison could be Stack Overflow and the Stack Exchange network: the main features are much the same between sites, but there are site-specific business rules.
To my mind, there are basically two possible approaches:
Having a single web site that manages region/country-specific features.
This will keep core features on the same site, but introduces coupling between all regions. There is also a risk of "if region == ..." branches spreading through the code. Development and maintainability are optimal (a single fix applies everywhere) but risky (one fix could break other regions). One way to do this is a combination of portable areas and a custom view engine (a generic view template in a parent folder, with derivations in subfolders), as sketched below.
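For illustration, a minimal sketch of the view-engine half of that combination. The RegionalViewEngine name, the ~/Views/Regions/... folder convention, and the "region" route value are assumptions for this sketch, not a standard MVC feature:

```csharp
using System.Web.Mvc;

public class RegionalViewEngine : RazorViewEngine
{
    public override ViewEngineResult FindView(ControllerContext controllerContext,
        string viewName, string masterName, bool useCache)
    {
        var route = controllerContext.RouteData.Values;
        var region = route["region"] as string;       // however you resolve region
        var controller = route["controller"] as string;

        if (!string.IsNullOrEmpty(region))
        {
            // Probe the region-specific derivation first.
            var overridePath = string.Format("~/Views/Regions/{0}/{1}/{2}.cshtml",
                region, controller, viewName);
            if (FileExists(controllerContext, overridePath))
            {
                return new ViewEngineResult(
                    CreateView(controllerContext, overridePath, null), this);
            }
        }

        // Fall back to the generic template in the normal view locations.
        return base.FindView(controllerContext, viewName, masterName, useCache);
    }
}

// In Application_Start, register it ahead of the defaults so overrides win:
//   ViewEngines.Engines.Insert(0, new RegionalViewEngine());
```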
Having one web site per region/country
A common core would be implemented and shared as components, but each web site would have its own lifecycle. Development and maintainability are easier but more costly (if there are many derivations). See the sketch below.
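Under either option, the legislation-specific rules can live behind interfaces in the common components, so each site (or region branch) only swaps implementations. A minimal sketch; IInvoicePolicy and the tax rates are purely hypothetical examples:

```csharp
// Shipped in the common core assembly.
public interface IInvoicePolicy
{
    decimal ApplyTax(decimal netAmount);
}

public class DefaultInvoicePolicy : IInvoicePolicy
{
    // Default rule used by sites with no special legislation.
    public decimal ApplyTax(decimal netAmount) { return netAmount * 1.20m; }
}

// Shipped only with (or selected only by) the German site; overrides
// just the rule its legislation requires.
public class GermanInvoicePolicy : IInvoicePolicy
{
    public decimal ApplyTax(decimal netAmount) { return netAmount * 1.19m; }
}

// Each site registers its own implementation at the composition root,
// e.g. with whatever IoC container you already use for DI.
```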
Please note: another impact of this choice is deployment and availability.
What is the best way to organize this?
Edit:
We already have some experience with MVC, and as a general guideline we are aware of MVC best practices: thin controllers, DI, ViewModels, action filters, ...

Related

Server-side vs Client-side web application Performance [closed]

I am an entry-level programmer with only a few months of experience.
Yesterday I was discussing with a colleague how we can improve the performance of a project we are working on together.
The project is built with C# + Ext.NET + JS
The plan was to move as many things as possible to client-side JavaScript instead of interacting with the server all the time.
I thought this was a good idea, but couldn't help wondering whether there is a point where moving everything to the client side starts making the web application slower. I understand that round-tripping to the server and reloading unnecessary data all the time is a waste in most cases, but I've also seen websites loaded with so much JS that the browser actually lags and browsing the web application is just a pain.
Is there a golden point? Are there certain 'rules'? How do you achieve maximum performance? Take Google's cloud apps, such as Docs, for example: they're pretty fast for what they do, and they're web applications. That is some very good performance.
JavaScript is incredibly fast on the client side. I assume Ext.NET works like AJAX? If not, you can use AJAX to communicate with the server from JavaScript; configured like that, it will be pretty fast. However, the style of coding will change drastically if you're currently using .NET controls on the DOM with click events.
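As a rough sketch of the server side of that AJAX style, assuming plain ASP.NET MVC (ProductsController and its data are made up for illustration):

```csharp
using System.Web.Mvc;

public class ProductsController : Controller
{
    // A small JSON endpoint the page's JavaScript can call asynchronously
    // instead of doing a full postback; categoryId would drive the real query.
    [HttpGet]
    public JsonResult Prices(int categoryId)
    {
        var prices = new[]
        {
            new { Id = 1, Price = 9.99m },
            new { Id = 2, Price = 19.99m }
        };
        // MVC blocks JSON responses to GET by default (JSON hijacking),
        // so allow it explicitly for this read-only data.
        return Json(prices, JsonRequestBehavior.AllowGet);
    }
}
```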
My 2 cents: use lazy loading of xtypes wherever possible on the client (i.e. define an xtype, but only instantiate it when it is needed), especially if those xtypes make AJAX calls!

Managing Different User Roles ASP.NET MVC [closed]

I am currently developing an ASP.NET MVC 4 website, and I would like to know the best practices for storing the logged-on user's data (including privileges) and authorizing the user securely, while being able to access this data in both my views and controllers.
One important thing to mention: I am NOT using the Membership class (I decided it's overhead for me, and I would like to implement exactly what I need and learn from the process).
The only way I've thought of is storing all the data in the session object and having a wrapper around it (a static class), used like SessionManager.IsLoggedIn() or SessionManager.GetUserPriviliges(), or simply a method UserSessionData SessionManager.GetSessionData() that returns a strongly-typed object containing all the required data.
That is one way to use it in both controllers and views. Should I derive from Controller and create a RolesController that stores UserSessionData, so I won't need to fetch it again and again in my controllers?
I guess I won't be able to use the standard AuthorizeAttribute, so I will have to implement authorization using the session wrapper. (Is it safe to rely only on session data? Since I am not using the 'official' authorization mechanism, I don't really know how it should be implemented.)
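For what it's worth, a minimal sketch of what a session-backed authorization attribute could look like; SessionAuthorizeAttribute, the session key, and the shape of UserSessionData are assumptions based on the question, not an established pattern:

```csharp
using System.Web;
using System.Web.Mvc;

public class SessionAuthorizeAttribute : AuthorizeAttribute
{
    protected override bool AuthorizeCore(HttpContextBase httpContext)
    {
        // Session state is already per-user, so reading it here is fine;
        // the danger discussed in the answer below is static (shared) state.
        var user = httpContext.Session["CurrentUser"] as UserSessionData;
        return user != null && user.IsLoggedIn;
    }
}

// Hypothetical strongly-typed payload the question describes.
public class UserSessionData
{
    public bool IsLoggedIn { get; set; }
    public string[] Privileges { get; set; }
}

// Usage on a controller or action:
//   [SessionAuthorize]
//   public ActionResult Dashboard() { ... }
```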
As you can see, I have an idea, but since this is my first time doing it I would like to learn the best practices and the way it should be done correctly. I would be thankful if you explained your answers, since I want to get the complete picture and I haven't done this in MVC before.
Thanks in advance!
None of what you've described is safe. Static classes are dangerous in ASP.NET because they are not multi-user safe: they are shared between all threads in the app, including threads serving other users' requests.
Just use the default membership until you know what you're doing. You will just be creating a vulnerable architecture otherwise.
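A minimal sketch of the built-in route being recommended, using forms authentication plus the default membership provider (the controller layout is illustrative):

```csharp
using System.Web.Mvc;
using System.Web.Security;

public class AccountController : Controller
{
    [HttpPost]
    public ActionResult Login(string userName, string password)
    {
        // Membership.ValidateUser uses the configured default provider.
        if (Membership.ValidateUser(userName, password))
        {
            // Issues the standard, signed forms-authentication cookie.
            FormsAuthentication.SetAuthCookie(userName, false);
            return RedirectToAction("Index", "Home");
        }

        ModelState.AddModelError("", "Invalid user name or password.");
        return View();
    }
}

// Protected actions then use the stock attribute:
//   [Authorize]
//   public ActionResult Admin() { ... }
```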

Run web service API on same or separate servers? [closed]

I have a web portal, and the web portal has a web services API.
Which solution would be best and why?
Should I....
1) Run the web portal and the web portal API on the same server or
2) Run the web portal and the web portal API on separate servers
It's all a matter of trading off different forces; there just can't be one answer for everybody.
Here are a few things to consider:
Having the UI (portal) and its dependent services on the same box makes for a very clear set of dependencies: when diagnosing problems you've got just one place to look. You can scale by adding more such boxes, each self-contained. Clarity has a lot of operational value.
But the portal and the services will likely have different resource requirements, so you end up scaling (say) the portal when the services are not using much resource, and hence have more copies of the portal or services than you strictly need. This may have considerable costs. Examples:
1. Licence costs. Suppose you have 10 copies of the portal but really only needed 5; that's 5 licences wasted.
2. Memory consumption. Suppose there's a fixed overhead to getting the services (or portal) up irrespective of load (think caching or database connections); you then pay that cost for the unneeded instances.
3. Back-end costs. Your services may connect to enterprise systems, e.g. a database. Each connection costs resources on the back end, so unneeded instances incur needless costs.
4. Platform tuning. You may need to tune the platform differently for the portal and the services. This issue is more noticeable when considering whether to co-locate the database too.

Code Generation - Domain/model first (DDD) [closed]

I'm looking for a 'complete' solution for code generation based on DDD or a model-first approach. Ideally, this would be a separate application or VS plugin that we could use and re-use to generate as much of the standard plumbing code as possible, while preserving my custom business logic.
I would like to generate VS projects, including a WCF service app, data layer, entity model, etc., and client applications such as ASP.NET MVC (and/or WebForms) sites with scaffolding, plus a Windows client.
I know there are many choices, like Entity Framework vs NHibernate, open-source frameworks such as S#arp Architecture, and commercial products as well. I'm open to anything, as I know most of the investment will be in time.
Update:
To add to this: Entity Framework (4.0) is a big step forward, as it will generate C# business classes as well as the database schema, allowing you to focus on the 'model', which is good. Is there anything that goes one level higher and allows generating other objects based on a (meta)model of some kind?
I'd recommend taking a look at CodeSmith. It comes with several different template frameworks like PLINQO (LINQ to SQL), NHibernate, CSLA and .netTiers (which sounds closest to what you are looking for).
Also take a look at the video tutorials on how to use the frameworks located here.
I understand that Sparx EA (Enterprise Architect) supports code generation (and the generation of models from code), but I've never actually done that with it myself.
So it should definitely allow you to model your system/domain and then generate appropriate code.
It also seems to support integration with Visual Studio: http://www.sparxsystems.com.au/products/mdg/int/vs/index.html

What are some best practices for making sure your .NET code will scale well? [closed]

Last week I interviewed for a position at a triple-A MMORPG game company here in NE. I didn't get the job, but one of the areas that came up during the interview was the scalability of the code you write and how it should be considered early on in the design of your architecture and classes.
Sad to say, I've never thought much about the scalability of the .NET code I've written (I work with single-user desktop and mobile applications, where our major concerns are usually device memory and data transmission rates). I'm interested in learning more about writing code that scales well, so it can handle a wide range of remote users in a client-server environment, specifically MMORPGs.
Are there any books, web sites, best practices, etc. that could get me started researching this topic?
Here are some places to start:
http://highscalability.com/blog/2010/2/8/how-farmville-scales-to-harvest-75-million-players-a-month.html
http://www.cs.cornell.edu/people/~wmwhite/papers/2009-ICDE-Virtual-Worlds.pdf
In particular, http://highscalability.com is full of articles about huge websites that scale and how they do it (Digg, Flickr, Facebook, YouTube, ...).
One point I'd like to highlight: cache your reads. Work out a proper caching policy where you determine which objects can be cached and for what periods. Having a distributed caching farm will take load off your DB servers, which will greatly benefit performance.
Even caching some pieces of data for just a few seconds, in a very high-load multi-user scenario, will give you a substantial benefit.
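A minimal sketch of that get-or-load pattern with an in-process cache (System.Runtime.Caching); the key scheme, the 5-second window, and PlayerScores are illustrative choices, and the same pattern applies to a distributed cache:

```csharp
using System;
using System.Runtime.Caching;

public static class ScoreCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static PlayerScores GetPlayerScores(int playerId,
        Func<int, PlayerScores> loadFromDb)
    {
        string key = "scores:" + playerId;
        var cached = Cache.Get(key) as PlayerScores;
        if (cached != null)
            return cached;

        // Cache miss: hit the database once, then serve everyone else
        // from memory for the next few seconds.
        var fresh = loadFromDb(playerId);
        Cache.Set(key, fresh, DateTimeOffset.UtcNow.AddSeconds(5));
        return fresh;
    }
}

public class PlayerScores { /* score fields elided */ }
```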
If you are looking for empirical validation, what I usually find helps is doing some prototyping. This usually gives you a good idea of any unforeseen problems in your design and of how easy it is to extend. I would try to apply design patterns where possible to allow future scalability. Design Patterns: Elements of Reusable Object-Oriented Software is a great reference for that. There are good examples that show before-and-after code using design patterns, which can help you visualize how design patterns could make your code more scalable. There is also an SO post about specific design patterns for software scalability.
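As one concrete illustration, a minimal object-pool sketch, a pattern often used to avoid allocation churn under multi-user load; the class is hypothetical, not taken from the book:

```csharp
using System.Collections.Concurrent;

public class ObjectPool<T> where T : new()
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();

    public T Rent()
    {
        T item;
        // Reuse an existing instance when one is available instead of
        // allocating a new one under load; thread-safe via ConcurrentBag.
        return _items.TryTake(out item) ? item : new T();
    }

    public void Return(T item)
    {
        _items.Add(item);
    }
}

// Usage, e.g. pooling StringBuilders on a busy server:
//   var pool = new ObjectPool<System.Text.StringBuilder>();
//   var sb = pool.Rent();
//   try { /* build the message */ } finally { sb.Length = 0; pool.Return(sb); }
```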
