Could we replace multiple WCF calls with a single WCF call? - c#

I am currently working on an MVC 4 application that reads data from a set of WCF services. Currently, when a user hits a page, a number of WCF requests are triggered to get data for different parts of the page. I want to improve its performance.
My idea is that when a user lands on a page, a single WCF call is made which retrieves all the necessary data that the multiple calls previously did, and the data from it is put into the user's request HttpContext.
Does this improve performance, i.e. is a single but larger WCF call over named pipes better than multiple smaller calls over named pipes? Are there any performance implications of putting a large set of data into HttpContext?
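For illustration, the single aggregate call could be shaped roughly like the sketch below; PageData, IPageService and the individual members are hypothetical names, not part of the existing services:

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class PageData
{
    [DataMember] public string HeaderInfo { get; set; }
    [DataMember] public string[] GridRows { get; set; }
    [DataMember] public string SidebarContent { get; set; }
}

[ServiceContract]
public interface IPageService
{
    // One round trip returns everything the page needs,
    // instead of one WCF call per page section.
    [OperationContract]
    PageData GetPageData(int pageNumber);
}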

I think you are trying to solve one problem by producing even more problems.
If you query all the data at once and store it in HttpContext, it will speed up opening subsequent pages, but it will take considerably longer to open the page the first time. Also, you may easily run out of memory, especially if you have many users at a time, if you store data in HttpContext per user.
I think first you need to localize the problem and find the root cause of poor performance. It may be a query or it may be some database locks.
In any case caching is a good idea, but don't use HttpContext for it. Use the ASP.NET cache or some distributed cache like AppFabric. These tools will provide you with a lot of built-in features and it will be easier for you to scale your application later.
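A minimal sketch of the ASP.NET cache suggestion, reusing the hypothetical PageData type from the sketch above; the cache key format and LoadPageDataFromWcf are also made up:

using System;
using System.Web;
using System.Web.Caching;

public static class PageDataCache
{
    public static PageData GetPageData(int pageNumber)
    {
        string key = "page-data-" + pageNumber;
        var cached = HttpRuntime.Cache.Get(key) as PageData;
        if (cached != null)
            return cached;

        PageData data = LoadPageDataFromWcf(pageNumber);   // the expensive WCF call(s)
        if (data != null)
        {
            HttpRuntime.Cache.Insert(
                key, data, null,
                DateTime.UtcNow.AddSeconds(30),            // short absolute expiration
                Cache.NoSlidingExpiration);
        }
        return data;
    }

    private static PageData LoadPageDataFromWcf(int pageNumber)
    {
        // call the WCF service(s) here
        throw new NotImplementedException();
    }
}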
Hope it helps.

Related

Pattern or library for caching data from web service calls and update in the background

I'm working on a web application that uses a number of external data sources for data that we need to display on the front end. Some of the external calls are expensive and some also come with a monetary cost, so we need a way to persist the results of these external requests so that they survive e.g. an app restart.
I've started with a proof of concept, and my current solution is a combination of a persistent cache/storage (which stores serialized JSON in files on disk) and a runtime cache. When the app starts it populates the runtime cache from the persistent cache; if the persistent cache is empty it goes ahead and calls the web services. The next time the app restarts we load from the persistent cache, avoiding the calls to the external sources.
After the first population we want the cache to be updated in the background by some kind of update process on a given schedule. We also want this update process to be smart enough to only update the cache if the request to the web service was successful, and otherwise keep the old version. There's also a twist here: some web services might return a complete collection while others require one call per entity, so the update process might differ depending on the concrete web service.
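Concretely, the combination described in the previous paragraph might look roughly like the following sketch (assuming Json.NET for the on-disk format; all type and member names are hypothetical):

using System;
using System.IO;
using System.Runtime.Caching;
using Newtonsoft.Json;

public class TwoLayerCache<T> where T : class
{
    private readonly string _filePath;                     // persistent copy on disk
    private readonly MemoryCache _runtime = MemoryCache.Default;
    private readonly string _key;

    public TwoLayerCache(string key, string filePath)
    {
        _key = key;
        _filePath = filePath;
    }

    // Called at application start: prefer disk, fall back to the web service.
    public T Load(Func<T> fetchFromService)
    {
        var cached = _runtime.Get(_key) as T;
        if (cached != null) return cached;

        if (File.Exists(_filePath))
        {
            cached = JsonConvert.DeserializeObject<T>(File.ReadAllText(_filePath));
        }
        else
        {
            cached = fetchFromService();
            File.WriteAllText(_filePath, JsonConvert.SerializeObject(cached));
        }

        _runtime.Set(_key, cached, ObjectCache.InfiniteAbsoluteExpiration);
        return cached;
    }

    // Called by a scheduled background job: only replace the data on success.
    public void Refresh(Func<T> fetchFromService)
    {
        try
        {
            T fresh = fetchFromService();
            if (fresh == null) return;                     // keep the old version
            File.WriteAllText(_filePath, JsonConvert.SerializeObject(fresh));
            _runtime.Set(_key, fresh, ObjectCache.InfiniteAbsoluteExpiration);
        }
        catch
        {
            // request failed: keep the old version
        }
    }
}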
I'm thinking that this scenario can't be totally unique, so I've looked around and done a fair bit of Googling, but I haven't found any patterns or libraries that deal with something like this.
So what I'm looking for is any patterns that might be useful for us, and whether there are any C# libraries or articles on the subject as well? I don't want to "reinvent the wheel". If anyone has solved similar problems I would love to hear more about how you approached them.
Thank you so much!

WCF calls CrossCallContamination (that's a thing right?)

I have a huuuuge problem. While developing a solution for a customer I never had a problem with this, but after deploying the solution in their environment more and more issues are surfacing.
Setup
We have a (WPF) application that gets served some basic HTML markup, which gets rendered (through Essential Objects). This is called through a (now) PerCall setup of WCF.
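For reference, a minimal sketch of a PerCall service declaration is below; the contract and method names are hypothetical. Note that PerCall instancing alone does not protect static or otherwise shared state, which is a common source of one caller's data ending up in another caller's response:

using System.ServiceModel;

[ServiceContract]
public interface IMarkupService
{
    [OperationContract]
    string RenderMarkup(string documentId);
}

// PerCall creates a new service instance for every call, but static or otherwise
// shared state (caches, singletons, pooled renderer objects) lives for the lifetime
// of the host process and is still shared across callers.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class MarkupService : IMarkupService
{
    public string RenderMarkup(string documentId)
    {
        // per-call work only; avoid writing results to static fields
        return string.Empty;
    }
}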
Problem
Sometimes the call of user1 gets mixed up and gets served to user2. This happens more frequently as more users connect and use the program. Since the data is used in day-to-day operations, this is viewed as bad.
Bad solution
We've implemented a workaround that should work, but the problem needs to be fixed anyway: we call each service three times and compare the results, to ensure that the majority of responses are correct, since the problem happens without any sort of pattern other than when many users are connected.
Better solution
I've been looking at Web API (2) as a potential replacement for handling our calls; however, we send large amounts of data and objects that, put simply, would require some reconstruction on both server and client, and time is of the essence at this juncture.
Googling has yielded few, if any, results on this. What am I missing?

Handling limitations in multithreaded server

In my client-server architecture I have a few API functions whose usage needs to be limited.
The server is written in C# (.NET) and runs on IIS.
Until now I didn't need to perform any synchronization. The code was written in such a way that even if a client sent the same request multiple times (e.g. a create-something request), one call would end with success and all the others with an error (because of the server code + DB structure).
What is the best way to enforce such limitations? For example, I want no more than 1 call of the API method foo() per user per minute.
I thought about some SynchronizationTable which would have just one column, unique_text, and before computing a foo() call I'd write something like foo{userId}{date}{HH:mm} to this table. If the write ends with success I know that there wasn't a foo call from that user in the current minute.
I think there is a much better way, probably in server code, without using the DB for that. Of course, there could be thousands of users calling foo.
To clarify what I need: I think it could be some lightweight DictionaryMutex.
For example:
private static DictionaryMutex FooLock = new DictionaryMutex();

FooLock.lock(User.GUID);
try
{
    ...
}
finally
{
    FooLock.unlock(User.GUID);
}
EDIT:
A solution in which one user cannot call foo twice at the same time is also sufficient for me. By "at the same time" I mean that the server started handling the second call before returning the result of the first call.
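One possible shape for the DictionaryMutex sketched above, built from ConcurrentDictionary and SemaphoreSlim; note that this is purely in-memory, which is exactly the limitation the answer below points out:

using System;
using System.Collections.Concurrent;
using System.Threading;

public class DictionaryMutex
{
    // One semaphore per user; entries are never removed in this sketch.
    private readonly ConcurrentDictionary<Guid, SemaphoreSlim> _locks =
        new ConcurrentDictionary<Guid, SemaphoreSlim>();

    public void Lock(Guid key)
    {
        _locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1)).Wait();
    }

    public void Unlock(Guid key)
    {
        SemaphoreSlim sem;
        if (_locks.TryGetValue(key, out sem))
            sem.Release();
    }
}

// Usage, matching the pattern above:
// FooLock.Lock(User.GUID);
// try { ... } finally { FooLock.Unlock(User.GUID); }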
Note that keeping this state in memory in an IIS worker process means you can lose all of this data at any instant; worker processes can restart for any number of reasons.
Also, you probably want to have two web servers for high availability. Keeping the state inside of worker processes makes the application no longer clustering-ready. This is often a no-go.
Web apps really should be stateless. There are many reasons for that. If you can help it, don't manage your own data structures as suggested in the question and comments.
Depending on how big the call volume is, I'd consider these options:
SQL Server. Your queries are extremely simple and easy to optimize for. Expect 1000s of such queries per second per CPU core. This can bear a lot of load. You can use SQL Express for free.
A specialized store like Redis. Stack Overflow is using Redis as a persistent, clustering-enabled cache. A good idea.
A distributed cache, like Microsoft Velocity. Or others.
This storage problem is rather easy because it fits a key/value store model well. And the data is nearly worthless, so you don't even need to back it up.
I think you're overestimating how costly this rate limitation will be. Your web-service is probably doing a lot more costly things than a single UPDATE by primary key to a simple table.
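To give a feel for how cheap the table-based approach can be, here is a rough sketch of the "one row per user per minute" idea against SQL Server. The table (say, FooCalls with a single primary-key column CallKey) and the helper are hypothetical; a primary-key violation on insert means foo() was already called this minute:

using System;
using System.Data.SqlClient;

public static class FooRateLimiter
{
    // Returns true if this is the first foo() call for this user in the current minute.
    public static bool TryRegisterCall(string connectionString, Guid userId)
    {
        string key = userId + "|" + DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm");

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO FooCalls (CallKey) VALUES (@key)", connection))
        {
            command.Parameters.AddWithValue("@key", key);
            connection.Open();
            try
            {
                command.ExecuteNonQuery();
                return true;                     // first call this minute: allow it
            }
            catch (SqlException ex)
            {
                if (ex.Number == 2627)           // primary key violation
                    return false;                // already called this minute: reject
                throw;
            }
        }
    }
}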

how many webservices

I have a web service that looks like this:
public class TheService : System.Web.Services.WebService
{
    [WebMethod(EnableSession = true)]
    public string GetData(string Param1, string Param2) { ... }
}
In other words, it's contained in one class, in which I have one public method plus another private method that reads from the database.
The issue I'm facing is in terms of scalability. I'm building a web app that should serve 1,000 daily users, and each user will make about 300-500 calls a day to the web service, so that's about 300,000 to 500,000 requests per day. I need to add 9 more calls to the web service. Some of these calls will involve database writes.
My question is this: am I better off creating 9 separate web services, or continuing with the one service I have and adding the other methods? Or maybe something different and better. I'm planning to deploy the application on Azure, so I'm not really concerned about hardware, just the application side of things.
I wouldn't base my decision on the volume, or on performance/scalability reasons. You won't get much if any performance benefit from keeping them lumped together or separating them. Any grouping or filtering that can be done while the services are grouped one way can also be done with the services grouped the other way. The ability to partition between servers will be the same, too.
Design
Instead, I would focus on making your code understandable and maintainable. Group your services the way they make the most sense architecturally within your program: keep them logically grouped from a problem-domain perspective (as opposed to a solution-domain perspective).
Since you're free to group them how you want, I recommend you read up on SOLID, which is a set of guiding principles for creating software architecture.
One of the principles listed that is particularly important is the Interface Segregation Principle, which can be defined by the notion that "many client specific interfaces are better than one general purpose interface."
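As a small illustration of that principle applied to a service like the one in the question (all names below are made up): read-only clients depend only on a read contract, write-capable clients only on a write contract.

// Hypothetical split of one general-purpose service contract into client-specific interfaces.
public interface IDataReadService
{
    string GetData(string param1, string param2);
}

public interface IDataWriteService
{
    void SaveData(string param1, string payload);
}

// One class can still implement both; each client sees only the interface it needs.
public class DataService : IDataReadService, IDataWriteService
{
    public string GetData(string param1, string param2) { /* read from the DB */ return null; }
    public void SaveData(string param1, string payload) { /* write to the DB */ }
}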
Performance and scalability
Since you mentioned performance and scalability being a concern, I recommend you follow this plan:
Determine how long you can wait until you can patch/maintain the software
Determine your expected load, including both average and peak load-per-time (you've determined the average), and how much you expect this traffic to grow over time (specifically over the period you can go without patching/maintaining the software)
Create a model describing exactly which calls will be done and in which ratios (per time and per server)
Create automation that mirrors these models as closely as you can. Try to model both average and peak traffic, as well as loads surpassing your highest expected traffic
Profile your code, DB, network traffic, and disk traffic while running this automation
Determine the bottlenecks, and if they are within acceptable tolerance
Optimize your bottlenecks (as required), and repeat from the profiling step forward
For the next release of your software, repeat from the top to add scenarios/load/automation
Perform regression testing using your existing tests, altered to fit the new scale
Splitting the web methods into several web services won't help you here; load balancing will.
The number of web services will not have any effect on the scalability of the app.
Finding your bottlenecks will help scalability. If your bottleneck is the DB, you may need to find ways to tune your queries, partition your data across more stores, etc... If your bottleneck is CPU on the web services (web roles in Azure), then adding more than one web role to your cluster will help. Azure supports that.
But don't simply start adding roles. Understand where your bottlenecks are. Measure, profile and tune.
Azure has the dev fabric and local IIS to help you profile locally as well.
Splitting the web-services into multiple web roles because of physical constraints and not necessarily due to logical layout may be worth considering because:
Using Azure you can scale out your Roles independently of one another. This means that IF different web methods need to scale in different patterns (i.e. your first web method has its biggest volume in the mornings and after lunch, your other two web methods have their biggest volume in the evening and during the night, and the last two web methods are usually flat throughout the day), it may very well be worth it to split your methods across Roles by scalability constraints and not by logical constraints.
By increasing/decreasing the servers allocated to each method independently, you may be able to fine-tune your optimal power vs. need with much greater precision.
HTH
Actually, creating separate Web Services, as Igorek suggested, will provide much more granular scale-out. In that scenario, you can deploy different Web Services to different Roles, each role getting its own set of instances (along with the option to create different instance sizes per role). Windows Azure will load-balance across all the instances of a Role.
So from a granularity standpoint:
Least granular: Combine all methods into a single Web Service, hosted on a single Role. As you scale out to multiple instances, all service method requests are load-balanced across all instances. Because you're combining everything into one Role, you will find this to be optimized for cost: You can run all Web Services code in a single instance (really 2 instances to give yourself SLA).
More granular: Create separate Web Services, each with their own methods, and host on the same Role (allows you to exercise SOLID principles, as Merlyn described). Same basic performance characteristics as the first option, as all requests are still load-balanced across the same set of instances.
Most granular: Create separate Web Services, each with their own methods, and host each Web Service endpoint on a separate Role, allowing for independent VM sizing and scale-out of each Web Service endpoint. This option has a higher runtime cost to it, as you now have a minimum of one instance per Web Service endpoint (again, 2 instances in a real world, live application).
I am not sure about your exact case, but moving expensive (from a CPU/DB point of view) tasks to separate Worker Roles is usually a good solution on Azure. In that case you will have one Web Role with services that receive requests (it will be lightweight, so you should not need many instances for it) and create tasks for the Worker Roles, plus one or a few Worker Roles that process those tasks. Either (#1) a Worker Role is created per kind of task (to group similar actions, like reading/writing data to the DB), or (#2) one Worker Role handles any type of task. I don't see any benefit in #2, because to get the same behavior you could just create one Web Role with many instances and handle everything there. So you will have the ability to control processing time by adding/removing Worker Roles.
As other people suggested, using the Azure platform by itself will not make the app scalable. Especially if you are using SQL Azure, you may need to implement sharding or add multiple databases to avoid one big DB for all requests.
I don't know if it's related to this question, but just to let you know: Azure drops connections that are not active for 60 seconds (I did not find a way to increase that timeout; you can Google this problem). This may be an issue if you are porting web services to Azure and your responses can take close to 60 seconds. One way to avoid it is to keep the connection active, which is pretty simple if clients know about this "feature".

Resources for learning how to handle a heavy-traffic ASP.NET MVC site?

How much traffic is heavy traffic? What are the best resources for learning about heavy-traffic web site development? Like, what are the approaches?
There are a lot of principles that apply to any web site, regardless of the underlying stack:
use HTTP caching facilities. For one, there is the user-agent cache. Second, the entire web backbone is full of proxies that can cache your requests, so use this to full advantage. A request that doesn't even land on your server adds 0 to your load; you can't optimize better than that :)
corollary to the point above: use CDNs (Content Delivery Networks, like CloudFront) for your static content. CSS, JPG, JS, static HTML and many more pages can be served from a CDN, thus saving the web server from an HTTP request.
second corollary to the first point: add expiration caching hints to your dynamic content. Even a short cache lifetime like 10 seconds will save a lot of hits that will instead be served by the proxies sitting between the client and the server (see the sketch after this list).
Minimize the number of HTTP requests. Seems basic, but it is probably the most overlooked optimization available. In fact, Yahoo's best practices put this as the topmost optimization, see Best Practices for Speeding Up Your Web Site. Here is their best practices list:
Minimize HTTP Requests
Use a Content Delivery Network
Add an Expires or a Cache-Control Header
Gzip Components
... (the list is quite long actually, just read the link above)
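For the expiration-hints point above, a minimal ASP.NET MVC sketch might look like this; the controller and action names are hypothetical. The OutputCache attribute emits the corresponding Cache-Control/Expires headers and also caches the rendered output on the server:

using System.Web.Mvc;
using System.Web.UI;

public class NewsController : Controller
{
    // This action's output may be cached for 10 seconds by the browser,
    // intermediate proxies and the server itself.
    [OutputCache(Duration = 10, VaryByParam = "none", Location = OutputCacheLocation.Any)]
    public ActionResult Headlines()
    {
        return View();
    }
}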
Now, after you've eliminated as many superfluous hits as possible, you're still left with optimizing whatever requests actually hit your server. Once your ASP.NET code starts to run, everything will pale in comparison with the database requests:
reduce the number of DB calls per page. The best optimization possible is, obviously, not to make the request to the DB at all to start with. Some say 4 reads and 1 write per page are the most a high-load server should handle, others say one DB call per page, still others say 10 calls per page is OK. The point is that fewer is always better than more, and writes are significantly more costly than reads. Review your UI design; perhaps that hit count in the corner of the page that nobody sees doesn't need to be that accurate...
Make sure every single DB request you send to the SQL Server is optimized. Look at each and every query plan, make sure you have proper covering indexes in place, make sure you don't do any table scans, review your clustered index design strategy, review all your IO load, storage design, etc. etc. Really, there is no shortcut you can take here; you have to analyze and optimize the heck out of your database, it will be your choking point.
eliminate contention. Don't have readers wait for writers. For your stack, SNAPSHOT ISOLATION is a must.
cache results. And usually this is where the cookie crumbles. Designing a good cache is actually quite hard to pull off. I would recommend you watch the Facebook SOCC keynote: Building Facebook: Performance at Massive Scale. Somewhere around slide 47 they show what a typical internal Facebook API looks like:
cache_get(
    $ids,
    'cache_function',
    $cache_params,
    'db_function',
    $db_params);
Everything is requested from a cache, and if not found, requested from their MySQL back end. You probably won't start with 60000 servers though :)
On the SQL Server stack the best caching strategy is one based on Query Notifications. You can almost mix it with LINQ...
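A rough C# counterpart of the cache_get pattern above, using the in-memory MemoryCache as the cache tier; names are hypothetical, and a real deployment of this pattern would typically sit on a distributed cache rather than per-server memory:

using System;
using System.Runtime.Caching;

public static class CacheAside
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Look in the cache first; only on a miss fall through to the database delegate.
    public static T Get<T>(string key, Func<T> loadFromDb, TimeSpan lifetime) where T : class
    {
        var cached = Cache.Get(key) as T;
        if (cached != null)
            return cached;

        T value = loadFromDb();                                   // cache miss: hit the DB
        if (value != null)
            Cache.Set(key, value, DateTimeOffset.UtcNow.Add(lifetime));
        return value;
    }
}

// Usage (hypothetical repository call):
// var user = CacheAside.Get("user-" + id, () => userRepository.Load(id), TimeSpan.FromSeconds(30));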
I will define heavy traffic as traffic which triggers resource-intensive work. Meaning, if one web request triggers multiple SQL calls, or they all calculate pi to a lot of decimals, then it is heavy.
If you are returning static HTML, then your bandwidth is more of an issue than what a good server today can handle (more or less).
The principles are the same no matter if you use MVC or not when it comes to optimize for speed.
Having a decoupled architecture makes it easier to scale by adding more servers etc.
Use a repository pattern for data retrieval (makes adding a cache easier; see the sketch after this list)
Cache data which is expensive to query
Data to be written could be written through a cache, so that the client doesn't have to wait for the actual database commit
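A sketch of the repository-plus-cache idea from the list above: a caching decorator wraps the real repository, so callers never know whether the data came from the cache or the database. All names are hypothetical.

using System;
using System.Runtime.Caching;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    Product GetById(int id);
}

// Decorator: caches reads in front of the real (database-backed) repository.
public class CachingProductRepository : IProductRepository
{
    private static readonly MemoryCache Cache = MemoryCache.Default;
    private readonly IProductRepository _inner;

    public CachingProductRepository(IProductRepository inner)
    {
        _inner = inner;
    }

    public Product GetById(int id)
    {
        string key = "product-" + id;
        var cached = Cache.Get(key) as Product;
        if (cached != null)
            return cached;

        Product product = _inner.GetById(id);               // the expensive query
        if (product != null)
            Cache.Set(key, product, DateTimeOffset.UtcNow.AddMinutes(5));
        return product;
    }
}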
There are probably more ground rules as well. Maybe you can say something about the architecture of your application, and how much load you need to plan for?
MSDN has some resources on this. This particular article is out of date, but is a start.
I would suggest also not limiting yourself to reading about the MVC stack: many principles are cross-platform.
