I have a web app built with MVC 4, with a Web API back end that uses Entity Framework.
I have used StructureMap to inject Entity Framework into the Web API, and to inject the Web API client into the MVC 4 app.
The application is running fine, but soon I will need to scale.
The MVC 4 app sits on one server, the Web API is on another server, and there is a database server.
How can I scale the Web API horizontally? If I add Web API servers and database servers, is there a configuration for Entity Framework which will take multiple connection strings and do round-robin querying? Is there sharding available for EF?
How about HttpClient? How about failover, such that the client takes multiple IPs and, if one fails, requests go to another server?
How can I scale them?
Typically one adds additional web servers and then uses a load balancer to distribute incoming requests among them. There are a few considerations here.
If the web server persists data across requests (via ASP.NET session), you will need to create a separate state server that all the web servers can share, or use a load balancer that is state-aware.
If the performance issue stems from database IO problems (missing table indexes, index fragmentation, requests pulling huge result sets, less than optimal disk/hardware configs, etc.), then adding more web servers will not address the problem. The first step is to monitor and profile your database and make sure it is performing well.
Related
I am writing a stateless REST API in C# that returns data from a MS SQL Server. It replaces a currently existing Java API that uses a lot of hardcoded queries to access the data.
Inside the API I'm building, I'll need the controllers to be able to target multiple databases. There is no session management, nor are there any authentication issues.
What is a good approach for handling data access?
One option is to create all the connection objects for each request. I worry that this would add considerable overhead. Another option is to use a singleton for each database connection.
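To illustrate, the per-request option I have in mind would look roughly like this (just a sketch with plain ADO.NET; the Customers table and the DTO are placeholders). My understanding is that ADO.NET pools connections per connection string, so opening and disposing one per request should be cheap, but I'm not sure whether that holds up under load:

    using System.Data.SqlClient;

    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerReader
    {
        // One connection per request: SqlConnection.Open draws from the ADO.NET
        // connection pool, so this is far cheaper than it looks.
        public CustomerDto GetCustomer(string connectionString, int id)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT Name FROM Customers WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", id);
                conn.Open();
                return new CustomerDto { Id = id, Name = (string)cmd.ExecuteScalar() };
            }
        }
    }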
Any advice on this matter is welcome.
I have an MVC and Web API application that needs to log activities performed by the users back to my database. This is almost always a single insert into a table that has fewer than 5 columns (i.e. very little data is crossing the wire). The data access layer I am currently using is Entity Framework 6.
Every once in a while, I'll get a large number of users needing to log that they performed a single activity. In this case, "Large Number" could be a couple hundred requests every second. This typically will only last for a few minutes at most. The rest of the time, I see very manageable traffic to the site.
When the traffic spikes, some of my clients are getting timeout errors because the page doesn't finish loading until the server has inserted the data into the database. Now, the actual inserting of the data into the database isn't necessary for the user to continue using the application, so I can cache these requests somewhere locally and then batch-insert them later.
Are there any good solutions for ASP.NET MVC to buffer incoming request data and then batch-insert it into the database every few seconds?
As for my environment, I have several servers running Server 2012 R2 in a load balanced Web Farm. I would prefer to stay stateless if at all possible, because users might hit different servers per request.
When the traffic spikes, some of my clients are getting timeout errors because the page doesn't finish loading until the server has inserted the data into the database.
I would suggest using a message queue. Have the website rendering code simply post an object to the queue representing the action, and have a separate process (e.g. Windows Service) read off the queue and write to the database using Entity Framework.
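A rough sketch of what I mean, assuming MSMQ (System.Messaging) as the queue - any broker (RabbitMQ, Azure Service Bus, etc.) works the same way - and a hypothetical ActivityContext/ActivityLog Entity Framework model:

    using System;
    using System.Messaging;

    public class ActivityMessage
    {
        public int UserId { get; set; }
        public string Action { get; set; }
        public DateTime OccurredUtc { get; set; }
    }

    // Web tier: enqueue and return immediately - no database call in the request path.
    public static class ActivityQueue
    {
        private const string Path = @".\Private$\activity-log";

        public static void Post(ActivityMessage message)
        {
            using (var queue = new MessageQueue(Path))
            {
                queue.Send(message);
            }
        }
    }

    // Windows Service: read messages off the queue and write them with Entity Framework.
    public class ActivityWriter
    {
        public void Pump()
        {
            using (var queue = new MessageQueue(@".\Private$\activity-log"))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(ActivityMessage) });
                while (true)
                {
                    var body = (ActivityMessage)queue.Receive().Body;   // blocks until a message arrives
                    using (var db = new ActivityContext())              // hypothetical EF DbContext
                    {
                        db.ActivityLogs.Add(new ActivityLog
                        {
                            UserId = body.UserId,
                            Action = body.Action,
                            OccurredUtc = body.OccurredUtc
                        });
                        db.SaveChanges();   // could batch several messages per SaveChanges if needed
                    }
                }
            }
        }
    }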
UPDATE
Alternatively you could log access to a file (fast), and have a separate process read the file and write the information into your database.
I prefer the message queue option, but it does add another piece of architecture.
I have an ASP.NET web application which is hosted on two different servers, one being primary, the other secondary.
Using DNSMadeEasy I have set up a DNS failover: when the primary server goes down, the secondary server takes over.
This setup is working fine according to the requirements however there is one last catch.
My application uses a Windows service for billing.
The billing service is always running on the primary server and always stopped on the secondary server.
I want the DNS failover to automatically start the billing service on the secondary server, and when DNS switches back to the primary, the billing service on the secondary server should be stopped.
What do I need to do to make this happen?
Changing your architecture a bit would help (as long as you can make these changes):
Put a load balancer (HAProxy, NGINX, etc) in front of both web servers.
Run the ASP.NET app and billing service on both web servers.
Re-architect your billing service so that it can run on both servers at the same time without interference (one way to coordinate this is sketched after the list below).
You're already running a standby server, so you'll get lots of benefits from this:
better scalability: load is shared between web servers/billing services
zero-downtime updates: remove each web server one at a time from the load balancer, do your update, and then add it back into rotation.
robustness: if one web server fails, you have an architecture that doesn't have to worry about DNS TTL problems and fails over automatically.
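For the billing-on-both-servers point, one way to coordinate the two nodes - a sketch, assuming the billing job already talks to the shared SQL Server database - is a SQL Server application lock, so only the node that wins the lock performs a billing run:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class BillingRunCoordinator
    {
        // Returns true if this node acquired the lock and ran the billing cycle.
        public static bool TryRunExclusively(string connectionString, Action runBillingCycle)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                using (var cmd = new SqlCommand("sp_getapplock", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@Resource", "billing-run");
                    cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
                    cmd.Parameters.AddWithValue("@LockOwner", "Session");
                    cmd.Parameters.AddWithValue("@LockTimeout", 0);   // don't wait: the other node has it
                    var result = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
                    result.Direction = ParameterDirection.ReturnValue;
                    cmd.ExecuteNonQuery();

                    if ((int)result.Value < 0)
                        return false;   // lock not granted: the other node is billing
                }

                runBillingCycle();      // safe: this node holds the session-scoped lock

                using (var release = new SqlCommand("sp_releaseapplock", conn))
                {
                    release.CommandType = CommandType.StoredProcedure;
                    release.Parameters.AddWithValue("@Resource", "billing-run");
                    release.Parameters.AddWithValue("@LockOwner", "Session");
                    release.ExecuteNonQuery();
                }

                return true;
            }
        }
    }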
If you can't really change much of your architecture, here's an idea:
Listen for the first web request to the secondary web server and start the service when it occurs. Depending on how it's designed, you may need to stop the billing service on the primary.
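A minimal sketch of that in Global.asax, assuming the billing Windows service is installed on the secondary box under the hypothetical name "BillingService" and the application pool identity has permission to start it:

    using System;
    using System.ServiceProcess;
    using System.Web;

    public class MvcApplication : HttpApplication
    {
        private static bool _billingChecked;
        private static readonly object Sync = new object();

        protected void Application_BeginRequest()
        {
            // Traffic reaching this box means the DNS failover has kicked in,
            // so make sure the billing service is running here.
            if (_billingChecked) return;
            lock (Sync)
            {
                if (_billingChecked) return;
                using (var billing = new ServiceController("BillingService"))   // hypothetical service name
                {
                    if (billing.Status == ServiceControllerStatus.Stopped)
                    {
                        billing.Start();
                        billing.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                    }
                }
                _billingChecked = true;
            }
        }
    }

Stopping it again when the primary comes back would still need something on the primary side (for example a scheduled health check), since the secondary only notices the switch by the traffic going quiet.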
We are developing a multi-tenant application. With respect to architecture, we have designed a shared middle tier for business logic and one database per tenant for data persistence. That said, the business tier will establish a set of connections (a connection pool) with the database server for each tenant; that means the application maintains a separate connection pool per tenant. If we expect around 5,000 tenants, this solution requires very high resource utilization (connections between the app server and the database server for every tenant), which leads to performance issues.
We have resolved that by keeping a common connection pool. In order to maintain a single connection pool across different databases, we have created a new database called ‘App-master’. Now, we always connect to the ‘App-master’ database first and then switch to the tenant-specific database. That solved our connection pool issue.
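For clarity, the approach looks roughly like this (a sketch; the tenant database naming convention is just an example):

    using System.Data.SqlClient;

    public static class TenantConnectionFactory
    {
        // Every tenant uses the same connection string (pointing at 'App-master'),
        // so ADO.NET keeps one shared pool instead of one pool per tenant.
        public static SqlConnection Open(string appMasterConnectionString, string tenantId)
        {
            var conn = new SqlConnection(appMasterConnectionString);
            conn.Open();
            conn.ChangeDatabase("Tenant_" + tenantId);   // this is the call Azure SQL rejects
            return conn;
        }
    }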
This solution works perfectly fine with an on-premise database server, but it does not work with Azure SQL, as it does not support changing the database on an open connection.
I would appreciate suggestions on how to maintain the connection pool, or a better approach / best practice for dealing with such a multi-tenant scenario.
I have seen this problem before with multi-tenancy schemes that use separate databases. There are two overlapping problems: the number of web servers per tenant, and the total number of tenants. The first is the bigger issue - if you are caching database connections via ADO.NET connection pooling, then the likelihood of any specific customer request coming into a web server that has an open connection to their database is inversely proportional to the number of web servers you have. The more you scale out, the more any given customer will notice a per-call (not initial login) delay as the web server makes the initial connection to the database on their behalf. Each call made to a non-sticky, highly scaled web server tier will be decreasingly likely to find an existing open database connection that can be reused.
The second problem is just one of having so many connections in your pool, and the likelihood of this creating memory pressure or poor performance.
You can "solve" the first problem by establishing a limited number of database application servers (simple WCF endpoints) which carry out database communications on behalf of your web server. Each WCF database application server serves a known pool of customer connections (Eastern Region go to Server A, Western Region go to Server B) which means a very high likelihood of a connection pool hit for any given request. This also allows you to scale access to the database separately to access to HTML rendering web servers (the database is your most critical performance bottleneck so this might not be a bad thing).
A second solution is to use content specific routing via a NLB router. These route traffic based on content and allow you to segment your web server tier by customer grouping (Western Region, Eastern Region etc) and each set of web servers therefore has a much smaller number of active connections with a corresponding increase in the likelihood of getting an open and unused connection.
Both these problems are issues with caching generally, the more you scale out as a completely "unsticky" architecture, the less likelihood that any call will hit cached data - whether that is a cached database connection, or read-cached data. Managing user connections to allow for maximum likelihood of a cache hit would be useful to maintain high performance.
Another method of restricting the number of connection pools per app server is to use Application Request Routing (ARR) to divide up your tenants and assign them to subsets of the web tier. This lends itself to a more scalable "pod" architecture where a "pod" is a small collection of web/app servers coupled to a subset of the databases. A good article on this approach is here:
http://azure.microsoft.com/blog/2013/10/31/application-request-routing-in-csf/
If you are building a multi-tenant DB application on Azure you should also check out the new Elastic Scale client libraries that simplify data-dependent routing and facilitate cross-shard queries and management operations. http://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-scale-documentation-map/
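A minimal sketch of the data-dependent routing those libraries provide (the shard map name and the tenant key type here are assumptions):

    using System.Data.SqlClient;
    using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

    public static class ShardRouting
    {
        public static SqlConnection OpenForTenant(
            int tenantId,
            string shardMapManagerConnectionString,
            string shardUserConnectionString)
        {
            // The shard map manager database stores the tenant -> database mapping.
            var manager = ShardMapManagerFactory.GetSqlShardMapManager(
                shardMapManagerConnectionString, ShardMapManagerLoadPolicy.Lazy);

            var shardMap = manager.GetListShardMap<int>("TenantShardMap");   // hypothetical map name

            // Opens (and pools) a connection to whichever database holds this tenant.
            return shardMap.OpenConnectionForKey(
                tenantId, shardUserConnectionString, ConnectionOptions.Validate);
        }
    }

This replaces the ‘App-master’ trick: connections are still pooled per shard, but the routing is explicit rather than relying on ChangeDatabase.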
I am working on a project in which a WCF service will be consumed by iOS apps. The number of hits expected on the web server at any given point in time is around 900-1000. Every request may take 1-2 seconds to complete, and the same number of requests is expected every second, 24/7.
This is my plan:
1. Write a RESTful WCF service (the instance context mode will be PerCall).
2. Request/response will be in JSON.
3. There is some information that needs to be persisted on the server - this information is actually received from another remote system - and it is shared among all the requests. Since using a database may not be a good idea (response time is very important - 2 seconds is the max the customer can wait), would it be good to keep it in server memory (say a static Dictionary; assume this dictionary would be a collection of 150,000 objects, each consisting of 5-7 strings and their keys)? I know, this is volatile!
4. Each request will spawn a new thread (by using Threading.Timers) to do some cleanup - this thread will do some database read/write as well.
Now, if there is a load balancer introduced sometime later, the in-memory stored objects cannot be shared between requests routed through another node - any ideas?
I hope you gurus could help me by throwing in your comments/suggestions on the entire architecture, WCF throttling, object state persistence, etc. Please provide some pointers on the required hardware as well. We plan to use a Windows 2008 Enterprise Edition server, IIS and a SQL Server 2008 Std edition database.
Adding more to #3:
As I said, we get some information to the service from a remote system. On the web server where the WCF service is hosted, a client of the remote system will be installed, and the WCF service references one of this client's DLLs to get the information, in the form of a hashtable (that method returns a hashtable; around 150,000 objects will be in this collection). Would you suggest writing this information to the database, and having the iOS requests (every second) that reach the service retrieve this information from the database directly? Would that perform better than consuming directly from this hashtable if it is made static?
Since you are using Windows Server 2008 I would definitely use the Windows Server App Fabric Cache to store your state:
http://msdn.microsoft.com/en-us/library/ff383813.aspx
It is free to use, well supported and integrated, and is (more or less) API compatible with the Windows Azure App Fabric Cache if you ever shift your service to Azure. In our company (disclaimer: not my team) we used to use MemCache but changed to the App Fabric Cache and don't regret it.
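A minimal sketch of using it from the service, assuming the cache cluster and the client configuration (hosts, named cache) are already set up in app.config:

    using Microsoft.ApplicationServer.Caching;

    public static class SharedCache
    {
        // DataCacheFactory is expensive to create; keep one per process.
        private static readonly DataCacheFactory Factory = new DataCacheFactory();
        private static readonly DataCache Cache = Factory.GetDefaultCache();

        public static void Save(string key, object value)
        {
            Cache.Put(key, value);      // visible to every server in the cache cluster
        }

        public static T Load<T>(string key) where T : class
        {
            return Cache.Get(key) as T;
        }
    }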
Let me throw in some comments/suggestions based on my experience serving a similar volume of requests under the WCF framework (3.5, back in the day).
I don't agree with #3. Using a database here is the right thing to do. To address response time, implement caching, and possibly a cache dependency in order to keep the data synchronized across all instances (assuming that you are load balanced; also see the App Fabric cache suggested above/below). In real-world scenarios, data changes often, and you must minimize the impact.
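For example, a read-through cache along these lines (a sketch using System.Runtime.Caching; the 30-second expiry and the loader delegate are placeholders - on a load-balanced farm a distributed cache such as App Fabric, or SQL change notifications, would keep the nodes in sync):

    using System;
    using System.Runtime.Caching;

    public static class LookupCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static T GetOrLoad<T>(string key, Func<T> loadFromDatabase) where T : class
        {
            var cached = Cache.Get(key) as T;
            if (cached != null) return cached;

            var value = loadFromDatabase();   // hits the database only on a cache miss
            Cache.Set(key, value, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.UtcNow.AddSeconds(30)
            });
            return value;
        }
    }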
We used Barracuda hardware and software to handle scalability, as far as I can tell.
Consider indexing keys/values with Lucene if applicable. Lucene delivers extremely good performance when it comes to read/write. Do not use it to store your entire data set - read from it. A life saver if used correctly. Note that it could be complicated to implement in a load-balanced environment.
Basically, caching might be the only necessary change to your architecture.