Speed up SOAP request in C#

I have a piece of code like this:
foreach (var e in foobar)
{
    var myObj = new MyObj();
    GenericResult res = soapClient.doSomething(e);
    if (res.success == true)
    {
        myObj.a = e.a;
        myObj.b = e.b;
    }
}
Every SOAP request takes about 500 milliseconds and sometimes foobar has 1000+ elements, so I'm wasting a lot of time waiting for the soapClient response. I've tried using Parallel.ForEach, but it doesn't work because the SOAP provider accepts only serialized requests. The provider suggests using async calls like soapClient.doSomethingAsync; the problem is that I don't have anything to do until I get the soapClient response.
The only solution I'm thinking of is using a Parallel.ForEach with a lock around the SOAP call, roughly as sketched below.
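For reference, a minimal sketch of that Parallel.ForEach-plus-lock idea (MyObj, foobar and soapClient are the names from the snippet above; the results collection is added purely for illustration). Note that the lock forces the calls back into serial order, so the only work that actually runs in parallel is the trivial mapping:

// using System.Collections.Concurrent; using System.Threading.Tasks;
var results = new ConcurrentBag<MyObj>();

Parallel.ForEach(foobar, e =>
{
    GenericResult res;
    lock (soapClient)   // serializes access to the provider, as it requires
    {
        res = soapClient.doSomething(e);
    }

    if (res.success)
    {
        results.Add(new MyObj { a = e.a, b = e.b });
    }
});

Because every thread still waits ~500 ms inside the lock, the total time stays roughly the same as the plain foreach, which is why this approach cannot really help here.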

Just a few things you could try out.
What type of authentication is applied to the service call? If the service authenticates against AD, you should ensure that only the first call is authenticated and that the rest of the calls just piggyback on it. An AD authentication can take a substantial amount of time (0.3 - 1.0 s).
Try installing Fiddler and using it as a WCF proxy. Fiddler will give you the means to break down the time being spent in the various parts of the execution of the service call.
Have you tried to ping the server you target - are the ping timings acceptable?
How much time is being spent on the first invocation compared to the following calls? The first call is always going to take a significant amount of time, as the CLR runtime has to generate a boatload of dynamic XML code. Just generating the XmlSerializer JIT code is costly, as it dynamically generates C# code, kicks off the CSC compiler and loads the generated DLL. You might have a look at the SGEN tool, which makes it possible to generate the XmlSerializer DLLs at compile time instead of at runtime (note this will only help the first execution's timings); see the example below.
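As an illustration only (the assembly name is a placeholder), SGEN is typically run against the assembly that contains the proxy/serializable types:

sgen.exe /assembly:MyServiceProxies.dll

This produces MyServiceProxies.XmlSerializers.dll next to the original assembly, which the runtime picks up instead of generating the serializer code on the first call.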
I can't see how much time is actually being spent on the server side, inside the execution of doSomething(), so it's difficult to tell how much time is actually being spent on the network. Faulty network hardware, cables and switches, as well as firewalls, routing tables, SLAs etc. might have a negative impact on the performance you can get out of this.
Of course, as already mentioned, having to use such a chatty interface is far from optimal, and the service owner might run into all sorts of server-side performance problems if this is their normal way of exposing interfaces - but that is another story :)

Related

ASP.NET Web API performance issue

I have a simple Web API that returns the list of contacts:
public class ContactsApi : ApiController
{
    public List<Contact> GetContacts()
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();
        // Doing some business to get contacts;
        watch.Stop();
        // The operation only takes less than 500 milliseconds
        // returning list of contacts
    }
}
As I've used Stopwatch to test data-retrieval performance, it's apparent that it takes less than a second. However, when I issue a request to the GetContacts action via Chrome browser, it takes 4 to 5 seconds to return data.
Apparently that delay has nothing to do with my data-retrieval code. It seems to me that Web API is running slow. But I have no idea how to debug and trace that.
Is there any utility to log timing for the ASP.NET HTTP request processing pipeline? I mean, something like Navigation Timing that shows at what point in time each event occurred?
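One way to get rough per-request timings from inside the Web API pipeline itself is a message handler; a minimal sketch (the class name and the Debug output target are illustrative, not from the original post):

// using System.Diagnostics; using System.Net.Http;
// using System.Threading; using System.Threading.Tasks;
public class TimingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var watch = Stopwatch.StartNew();

        // Everything after this handler (routing, controller, serialization) is timed.
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);

        watch.Stop();
        Debug.WriteLine("{0} {1} took {2} ms",
            request.Method, request.RequestUri, watch.ElapsedMilliseconds);
        return response;
    }
}

// Registered once at startup, e.g. in WebApiConfig.Register:
// config.MessageHandlers.Add(new TimingHandler());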
How big is your response? Maybe it is the cost of serialization and transfer? However, there are a lot of ways to profile it; I would start by profiling with one of the tools on the market, like ANTS Performance Profiler or dotTrace.
Are you running it with the debugger? Do some tests without the debugger.
I had similar problems with a Web API project I am currently developing, and for us
turning off the debugger made the tests take milliseconds instead of seconds.
There also seems to be some startup cost when calling an API for the first time; subsequent requests are always faster.
Try using Fiddler (http://fiddler2.com/), a free web debugging tool. It has most of the features that you are looking for.
4.5 seconds is pretty huge. If you use EF, you could use MiniProfiler.EF.
I experienced some slowdowns (in the past) caused by incorrectly using Entity Framework's IQueryable (converting it to lists, expanding, ...).
If you are using EF, keep it as an IQueryable as long as possible (.ToList() executes a query).
Depending on your needs, use debugging tools like MiniProfiler and MiniProfiler.EF; the tools others suggested are probably good too (although I haven't used them in the past).
The cost of serialization can also be important (if you are using DTOs); AutoMapper (and probably other tools) seems slow on large lists. I'd suggest manually mapping them in an extension method if you really want performance on big lists, roughly as sketched below.
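A minimal sketch of that manual-mapping idea (Contact is from the question; ContactDto and its properties are hypothetical - the point is simply to avoid reflection-based mapping inside a large loop):

// using System.Collections.Generic; using System.Linq;
public static class ContactMappingExtensions
{
    // Maps a single entity to its DTO by hand - no reflection involved.
    public static ContactDto ToDto(this Contact contact)
    {
        return new ContactDto
        {
            Id = contact.Id,
            Name = contact.Name,
            Email = contact.Email
        };
    }

    // Materializes the sequence once and maps the whole list.
    public static List<ContactDto> ToDtoList(this IEnumerable<Contact> contacts)
    {
        return contacts.Select(c => c.ToDto()).ToList();
    }
}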

Minimizing WCF service client memory usage

I'm implementing a WCF service client which is aimed at testing several service methods. That's done by using the standard generated proxy class created by Add Web Reference (inherited from System.Web.Services.Protocols.SoapHttpClientProtocol). What I need to do is execute a certain type of request many times simultaneously to see how it will affect server performance (something like capacity testing for the server infrastructure).
Here's the problem - each of the responses to these requests is pretty large (~10-100 MB) and I see that only a few calls like
// parametersList.Count = 5
foreach (var param in parametersList)
{
    var serviceResponse = serviceWebReferenceProxy.ExecuteMethod(param);
    // serviceResponse is not saved anywhere else,
    // expected to be GC'd after iteration
}
cause the Private Bytes of the process to jump to ~500 MB and the Working Set to 200-300 MB. I suspect running them in parallel and increasing the iteration count to 100-200 as needed will definitely cause a StackOverflow/OutOfMemoryException. How can this be done then? I expect that removing the assignment of the service method response to a variable will help, but that's a problem because I need to see each response's size. I'm looking for some sort of instant and guaranteed memory cleanup after each iteration.
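A minimal sketch of one way to keep only the size of each response without holding on to the response itself (the Payload property is purely hypothetical - the real proxy type would expose something else - and forcing a collection per iteration is normally discouraged, but it can make working-set measurements in a capacity test more deterministic):

var responseSizes = new List<long>();

foreach (var param in parametersList)
{
    var serviceResponse = serviceWebReferenceProxy.ExecuteMethod(param);

    // Record only the size; do not keep a reference to the response itself.
    responseSizes.Add(serviceResponse.Payload.LongLength);   // hypothetical byte[] payload
    serviceResponse = null;

    // Usually a bad idea in production code, but useful here to see
    // how much memory is really retained between iterations.
    GC.Collect();
    GC.WaitForPendingFinalizers();
}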
Refactored the logic to reuse existing objects as much as possible, which made it possible to run more clients. After a certain period of time garbage collection becomes very slow, but performance is acceptable.

Sending multiple messages to WCF operation using plinq (multithreading)

I'm not exactly sure how a WCF proxy class will handle sending requests through PLINQ. Does the following code snippet look OK, or does it look like it could cause problems with shared state across multiple threads? (Also, I already understand that a using block is not ideal; this is just an example.)
using (var proxy = new ServiceProxyOfSomeSort())
{
    _aBunchOfMessagesToSend.AsParallel()
        .WithDegreeOfParallelism(SomeDegree)
        .ForAll(m =>
        {
            proxy.SomeOperation(m);
        });
}
Should I be creating the proxy once per thread? Is it ok to share the proxy across threads? I don't want to create more proxies than I need to because that is a somewhat expensive operation.
Edit:
I don't really have any of the implementation details of the service on the server side. From a requirements standpoint, they should have developed it so that multiple clients can call it at any time (async). Assuming that they can handle async calls (which may be a big assumption), I'm just trying to figure out whether this is an acceptable approach from the client side. It is working, I just don't know if there are any gotchas with this approach.
In response to your question about the proxy: you should create a new one per thread. Reusing the proxy tends to work for a while and then throw a fault after tens of requests. As they love to say, its behavior is undefined. The performance overhead of creating new proxies is not huge (after the first one is created), so it shouldn't be a big deal; a sketch of the per-thread approach is below.
As for the discussion of multiple concurrent requests swamping the server: a few, or even a dozen, should be fine - the server can probably handle as many requests as your processor can create threads. Then again, the sudden influx of requests from a single source might be interpreted as a Denial Of Service attack, so you should be wary, particularly if your control over the service is limited.
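A minimal sketch of the proxy-per-thread idea using ThreadLocal&lt;T&gt; (ServiceProxyOfSomeSort, SomeOperation, _aBunchOfMessagesToSend and SomeDegree are the names from the question; the Close/dispose handling is simplified and assumes the proxy exposes Close() like a typical WCF client):

// using System.Linq; using System.Threading;
using (var proxyPerThread = new ThreadLocal<ServiceProxyOfSomeSort>(
    () => new ServiceProxyOfSomeSort(),   // each PLINQ worker thread gets its own proxy
    trackAllValues: true))
{
    _aBunchOfMessagesToSend.AsParallel()
        .WithDegreeOfParallelism(SomeDegree)
        .ForAll(m => proxyPerThread.Value.SomeOperation(m));

    // Close every proxy that was actually created once the parallel work is done.
    foreach (var proxy in proxyPerThread.Values)
    {
        proxy.Close();
    }
}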

locking call to webservice in ASP.NET to avoid Oracle CRM time per web service call limit

I have a web application using ASP.NET that connects to Oracle CRM as a back end. The ASP.NET code uses some business objects to call into the Oracle CRM web services, and this works fine.
However, Oracle CRM has a limitation where they only allow you to make 20 web service calls per second (or one call per 50 ms), and if you exceed this rate a SOAPException is returned: "The maximum rate of requests was exceeded. Please try again in X ms."
The traffic to the site has increased recently, so we are now getting a lot of these SOAPExceptions, but as the code that calls the web service is wrapped up in a business object, I thought I would modify it to ensure that the 50 ms limit is never breached.
I use the following code:
private static object lock_obj = new object();

lock (lock_obj)
{
    // call the web service here
    System.Threading.Thread.Sleep(50);
}
However, I am still getting some SOAPExceptions. I did try writing the code using mutexes instead of lock(), but the performance impact proved to be a problem.
Can anyone explain to me why my solution isn't working, and perhaps suggest an alternative?
Edit: Moved to answer. Possibly due to > 1 IIS worker process. I don't think object locking spans worker processes, so simultaneous threads could still be started in another process, but I could be wrong.
http://hectorcorrea.com/Blog/Log4net-Thread-Safe-but-not-Process-Safe
My suggestion would be an application-level variable which stores the tick of the last request; from that you can work out when it's safe to fire the next one. A rough sketch of that idea is below.
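A minimal sketch of that last-request-tick approach (the 50 ms spacing comes from the question; the method name is illustrative, and like the lock-based version this only coordinates calls within a single worker process):

private static readonly object _gate = new object();
private static long _lastCallTicks;   // ticks of the previous web service call

public static void CallWithRateLimit(Action callWebService)
{
    lock (_gate)
    {
        long elapsedMs = (DateTime.UtcNow.Ticks - _lastCallTicks) / TimeSpan.TicksPerMillisecond;
        if (elapsedMs < 50)
        {
            // Wait out the remainder of the 50 ms window before the next call.
            System.Threading.Thread.Sleep(50 - (int)elapsedMs);
        }

        callWebService();
        _lastCallTicks = DateTime.UtcNow.Ticks;
    }
}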
As long as your application is running with only one ASP.NET worker process, you should be OK with what you have, but there are a few things to potentially consider.
Are you using a Web Garden? If so, this creates multiple worker processes, and therefore a lock is only obtained per process.
Are you in a load-balanced environment? If so, you will need to use a different method.
OK, it turns out that a compounding issue was that we have a Windows service running on the same server that was also calling into some of the same objects every 4 minutes (running in a different process, of course). When I turned it off (having also bumped the sleep up to 100 as per Mitchel's suggestion), the problem seems to have gone away almost entirely.
I say almost, because every so often I still get the odd mysterious SOAPException, but I think by and large the problem is sorted. I'm still a bit mystified as to how we can get any of these exceptions, but we will live with it for now.
I think Oracle should publicise this feature of Oracle CRM On Demand a little more widely.

Proper way to handle thousands of calls to external service from asp.net (mvc)

I'm tasked with creating a web application. I'm currently using C# & ASP.NET (MVC - but I doubt it's relevant to the question) - I am a rookie developer and somewhat new to .NET.
Part of the logic in the application I'm building is to make requests to an external SMS gateway by hitting a particular URL with a request - either as part of a user-initiated action in the web app (could be a couple of messages sent) or as part of a scheduled task run daily (could and will be several thousand messages sent).
In relation to the daily task, I am afraid that looping - say - 10,000 times in one thread (especially if I'm also to take action depending on the response of the request - like writing to a DB) is not the best strategy, and that I could gain some performance/time savings from some parallelization.
Ultimately I'm more afraid that thousands of users at the same time (very likely) will perform the action that triggers a request. With a naive implementation that spawns some kind of background thread (whatever it's called) for each request, I fear a scenario with hundreds/thousands of requests at once.
So if my assumptions are correct - how do I deal with this? Do I have to manually spawn some appropriate number of new Thread()s and coordinate their work via a producer/consumer-like queue (a rough sketch follows below), or is there some easy way?
Cheers
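For what it's worth, a minimal sketch of such a producer/consumer queue using BlockingCollection&lt;T&gt; (SmsRequest and SendToGateway are placeholders, not from the original post):

// using System.Collections.Concurrent; using System.Threading.Tasks;
var queue = new BlockingCollection<SmsRequest>(boundedCapacity: 1000);

// A fixed number of consumers drains the queue, so the number of simultaneous
// outgoing requests stays bounded no matter how many producers enqueue work.
const int workerCount = 8;
var workers = new Task[workerCount];
for (int i = 0; i < workerCount; i++)
{
    workers[i] = Task.Run(() =>
    {
        foreach (var request in queue.GetConsumingEnumerable())
        {
            SendToGateway(request);   // hypothetical call to the SMS gateway
        }
    });
}

// Producers (user actions or the daily scheduled task) just add work:
// queue.Add(new SmsRequest(...));

// When no more work will arrive:
// queue.CompleteAdding();
// Task.WaitAll(workers);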
If you have to make 10,000 requests to a service then it means that the service's API is anemic - probably CRUD-based, designed as a thin wrapper over a database instead of an actual service.
A single "request" to a well-designed service should convey all of the information required to perform a single "unit of work" - in other words, those 10,000 requests could very likely be consolidated into one request, or at least a small handful of requests. This is especially important if requests are going to a remote server or may take a long time to complete (and 2-3 seconds is an extremely long time in computing).
If you do not have control over the service, if you do not have the ability to change the specification or the API - then I think you're going to find this very difficult. A single machine simply can't handle 10,000 outgoing connections at once; it will struggle with even a few hundred. You can try to parallelize this, but even if you achieve a tenfold increase in throughput, it's still going to take half an hour to complete, which is the kind of task you probably don't want running on a public-facing web site (but then, maybe you do, I don't know the specifics).
Perhaps you could be more specific about the environment, the architecture, and what it is you're trying to do?
In response to your update (possibly having thousands of users all performing an action at the same time that requires you to send one or two SMS messages for each):
This sounds like exactly the kind of scenario where you should be using Message Queuing. It's actually not too difficult to set up a solution using WCF. Some of the main reasons why one uses a message queue are:
There are a large number of messages to send;
The sending application cannot afford to send them synchronously or wait for any kind of response;
The messages must eventually be delivered.
And your requirements fit this like a glove. Since you're already on the Microsoft stack, I'd definitely recommend an asynchronous WCF service backed by MSMQ.
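As a rough illustration only (the contract name, operation and queue address are made up), an MSMQ-backed WCF service is essentially a one-way contract exposed over netMsmqBinding:

[ServiceContract]
public interface ISmsDispatch
{
    // One-way: the caller does not wait for a response; MSMQ stores the
    // message durably until the service picks it up and processes it.
    [OperationContract(IsOneWay = true)]
    void QueueSms(string phoneNumber, string text);
}

// The endpoint would then use something like:
//   binding: netMsmqBinding
//   address: net.msmq://localhost/private/SmsDispatchQueue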
If you are working with SOAP, or some other type of XML request, you may not have an issue dealing with that level of requests in a loop.
I set up something similar using a SOAP server with 4-5K requests with no problem...
A SOAP request to a web service (assuming .NET 2.0 and later) looks something like this:
WebServiceProxyClient myclient = new WebServiceProxyClient();
myclient.SomeOperation(parameter1, parameter2);
myclient.Close();
I'm assuming that this code will be embedded in the business logic that you will trigger as part of the user-initiated action, or as part of the scheduled task.
You don't need to do anything special in your code to cope with a high volume of users. This will actually be a matter of scaling your platform.
When you say 10,000 requests, what do you mean? 10,000 requests per second/minute/hour? Is this your page hits per day, etc.?
I'd also look into using an AsyncController, so that your site doesn't quickly become completely unusable.
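A minimal sketch of that idea, assuming ASP.NET MVC 4 or later where an action can simply return Task&lt;ActionResult&gt; (the controller, gateway client and URL are placeholders):

// using System.Net.Http; using System.Threading.Tasks; using System.Web.Mvc;
public class SmsController : Controller
{
    private static readonly HttpClient GatewayClient = new HttpClient();

    // The request thread is released while the SMS gateway call is in flight,
    // so a burst of users does not exhaust the ASP.NET thread pool.
    public async Task<ActionResult> Send(string phoneNumber, string text)
    {
        string url = "https://sms.example.com/send?to=" + Uri.EscapeDataString(phoneNumber)
                     + "&msg=" + Uri.EscapeDataString(text);

        HttpResponseMessage response = await GatewayClient.GetAsync(url);

        return Json(new { ok = response.IsSuccessStatusCode }, JsonRequestBehavior.AllowGet);
    }
}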