I'm currently developing a secure WCF service that will receive a large number of calls, e.g. over 3,000. My original approach was to call the webservice methods using 'async', but I quickly realised that I needed to use Task.WaitAll to ensure that all the calls had been made before execution dropped out.
However, by firing all the calls at once and waiting on them, I'm now overloading the service, with around 70% of calls failing with a combination of 'CommunicationException' and 'ServerTooBusyException' messages. I have reviewed the WCF throttling options, but they do not appear to have any direct effect (note: the webservice is being run locally in this instance, on local IIS):
<serviceThrottling
    maxConcurrentCalls="4096"
    maxConcurrentSessions="65536"
    maxConcurrentInstances="2147483647"/>
Running the webservice calls synchronously works fine but is too slow, and I'm not terribly bothered about waiting for any callback from the webservice - I literally just need to 'fire and forget' these calls to the service.
Here's a rough example of what I'm doing on the client-side...
var numberOfIterations = 3000;
var allCalls = new List<Task>();

using (var service = new WebserviceServiceClient())
{
    for (var n = 0; n < numberOfIterations; n++)
    {
        var someObject = new SomeObject(DateTime.UtcNow);
        allCalls.Add(service.WebserviceMethodAsynch(someObject));
    }

    // Wait inside the using block so the client isn't disposed
    // while calls are still in flight
    Task.WaitAll(allCalls.ToArray());
}
Can anyone advise on an elegant approach to firing a large number of calls at a WCF webservice from a client without losing a substantial proportion of them to failures?
Note: one approach would be to utilise queues (in this case Azure Queues); ironically, the service being called performs some minor preprocessing before adding the object onto a queue, to be picked up by a separate, more intensive process.
Thanks in advance
Your client and your web service are not on the same machine, right? In any case, I believe you'd be better off using a load test to achieve the results you're looking for:
http://www.sandeepsnotes.com/2013/05/load-and-performance-testing-of-wcf.html
Scenario: An Azure WebJob that gets all the Vendor records from NetSuite via WSDL.
Problem: The dataset is too large. Even with the service timeout set to 12 minutes, it still times out and the code fails.
NetSuite has an async process that basically runs whatever you want on the server and returns a JobId that allows you to check the progress of the job on the server.
What I currently do is make a search call first, asking for all the Vendor records, which is then processed on the server. After I get the JobId, I wrote a void recursion that checks whether the job is finished on the server, with a Thread.Sleep set to 10 minutes.
private static bool ChkProcess(VendorsService vendorService, string jobId)
{
    var isJobDone = false;

    // Recursive local function: polls the server every 10 minutes
    // until the job reports that it is finished.
    void ChkAsyncProgress(bool isFinish)
    {
        if (isFinish) return;

        var chkJobProgress = vendorService.NsCheckProcessStatus(jobId);
        if (chkJobProgress.OperationResult.IsFinish)
        {
            isJobDone = true;
            return; // done - no need to sleep again
        }

        Thread.Sleep(TimeSpan.FromMinutes(10));
        ChkAsyncProgress(isJobDone);
    }

    ChkAsyncProgress(false);
    return isJobDone;
}
It works, but is there a better approach?
Thanks
I think that since you're already working with Azure, you can implement a really low-cost solution for this with Service Bus (if not free, depending on how frequently your job runs).
Basically, Service Bus gives you a queue onto which you enqueue messages (which can be objects with properties too, so they could potentially also carry the result of your elaboration).
An Azure Function of type ServiceBusTrigger automatically listens for new messages arriving on the Service Bus and gets triggered when one does (alternatively, you can enqueue messages that only become available after a certain future time).
So, at the end of the WebJob code, you could enqueue a message which marks the WebJob as having finished its elaboration.
The Azure Function is notified as soon as the message lands in the queue, and you can retrieve the data without constantly polling for job completion; Azure takes care of all of that for you, at a ridiculously low price and without any effort on your part.
Also, these functions aren't priced per unit of time but per execution, so you only pay when a message is effectively put in the queue.
They come with a certain number of free executions, so you might not even need to pay anything.
Here is some Microsoft sample code for doing so.
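As a rough sketch of the moving parts (my illustration, not Microsoft's sample; it assumes the Azure.Messaging.ServiceBus package, the in-process Azure Functions model, and a made-up queue name "job-finished"):

// WebJob side: enqueue a message marking the job as finished.
// Requires the Azure.Messaging.ServiceBus package.
static async Task NotifyJobFinishedAsync(string connectionString, string jobId)
{
    await using var client = new ServiceBusClient(connectionString);
    ServiceBusSender sender = client.CreateSender("job-finished");

    // The message body could also carry the result of the elaboration.
    await sender.SendMessageAsync(new ServiceBusMessage(jobId));
}

// Azure Function side: triggered automatically when a message arrives.
[FunctionName("OnJobFinished")]
public static void Run(
    [ServiceBusTrigger("job-finished", Connection = "ServiceBusConnection")] string jobId,
    ILogger log)
{
    // Retrieve the finished job's results from NetSuite here.
    log.LogInformation($"NetSuite job {jobId} is finished.");
}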
Is it possible to test WCF throttling behaviour through WCF Test Client?
If yes, then how?
I have the code below for my ServiceHost:
// Apply a very restrictive throttle so the behaviour is easy to observe
ServiceThrottlingBehavior stb = _servicehost.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (stb == null)
{
    stb = new ServiceThrottlingBehavior();
    stb.MaxConcurrentCalls = 1;
    stb.MaxConcurrentInstances = 1;
    stb.MaxConcurrentSessions = 1;
    _servicehost.Description.Behaviors.Add(stb);
}
My service has a method such as:
public string ThrottlingCheck()
{
    // Simulate a long-running operation
    Thread.Sleep(TimeSpan.FromSeconds(5)); // 5 seconds
    return "Invoke Complete";
}
In the event that you are using “web” bindings, you could use the open-source soapUI/loadUI test tools.
SoapUI is a free and open source cross-platform Functional Testing solution. With an easy-to-use graphical interface, and enterprise-class features, SoapUI allows you to easily and rapidly create and execute automated functional, regression, compliance, and load tests.
Reference:
http://www.soapui.org/
http://www.soapui.org/Load-Testing/using-loadui-for-loadtesting.html
As your request takes 5 seconds, you can easily test this by invoking two operations at the same time, using two instances of WCF Test Client or two tabs in the same client.
An integration test is certainly a better choice to check this behavior.
In addition, if you want to check that the behavior is really applied to your service, you could use WCF diagnostics such as WCF performance counters, especially "Percent of Max Concurrent XXX".
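As a minimal sketch of such an integration test (my code, not the answerer's; it assumes the service is reachable at a made-up address and that IThrottledService is a client-side copy of the contract exposing ThrottlingCheck()):

// Requires System.ServiceModel, System.Diagnostics and System.Threading.Tasks.
// With MaxConcurrentCalls = 1, the second call queues behind the first, so
// two concurrent 5-second calls should take roughly 10 seconds in total.
var binding = new BasicHttpBinding();
var address = new EndpointAddress("http://localhost:8080/throttled");
var factory = new ChannelFactory<IThrottledService>(binding, address);

var stopwatch = Stopwatch.StartNew();

var first = Task.Run(() => factory.CreateChannel().ThrottlingCheck());
var second = Task.Run(() => factory.CreateChannel().ThrottlingCheck());
Task.WaitAll(first, second);

stopwatch.Stop();
Console.WriteLine("Two concurrent calls took {0:0.#}s", stopwatch.Elapsed.TotalSeconds);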
No, it is not possible using WCF Test Client. If you have Visual Studio Ultimate you can use load tests/performance tests to test the throttling.
http://blogs.msdn.com/b/rickrain/archive/2009/06/26/wcf-instancing-concurrency-and-throttling-part-3.aspx?Redirected=true
If your company has a copy of LoadRunner (an HP product), you'll be able to build up enough fake transactions to actually test throttling.
In our case, we actually built a multi-instance, multi-threaded program to slam our web service with 1,000+ concurrent (fake) users, each uploading 40 files. It was only then that we were able to see the throttling take effect.
BTW, we tried a bunch of different combinations to see if we could tweak the settings and increase performance, but in the end the fastest we got our web service running was under the default throttling settings... in other words, no throttling at all, just letting WCF manage the traffic and the queue. Weird, huh?
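For what it's worth, a stripped-down sketch of that kind of home-grown load driver (not our actual tool; MyServiceClient and UploadFile stand in for a generated WCF proxy and its upload operation):

// Spin up 1000 concurrent simulated users, each uploading 40 files.
var fakeUsers = Enumerable.Range(0, 1000).Select(userId => Task.Run(() =>
{
    using (var client = new MyServiceClient())
    {
        for (var file = 0; file < 40; file++)
        {
            client.UploadFile("user" + userId + "-file" + file + ".dat");
        }
    }
})).ToArray();

Task.WaitAll(fakeUsers);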
I need to push notifications to tens of thousands of iOS devices that have installed my app. I'm trying to do it with PushSharp, but I'm missing some fundamental concepts here. At first I tried to actually run this in a Windows service, but couldn't get it to work - I was getting null reference errors from the _push.QueueNotification() call. Then I did exactly what the documented sample code does, and it worked:
PushService _push = new PushService();

// Hook up delivery events
_push.Events.OnNotificationSendFailure += new ChannelEvents.NotificationSendFailureDelegate(Events_OnNotificationSendFailure);
_push.Events.OnNotificationSent += new ChannelEvents.NotificationSentDelegate(Events_OnNotificationSent);

// Load the APNs certificate and start the Apple push channel
var cert = File.ReadAllBytes(HttpContext.Current.Server.MapPath("..pathtokeyfile.p12"));
_push.StartApplePushService(new ApplePushChannelSettings(false, cert, "certpwd"));

// Build and queue a single notification, then shut the services down
AppleNotification notification = NotificationFactory.Apple()
    .ForDeviceToken(deviceToken)
    .WithAlert(message)
    .WithSound("default")
    .WithBadge(badge);

_push.QueueNotification(notification);
_push.StopAllServices(true);
Issue #1:
This works perfectly and I see the notification pop up on the iPhone. However, since it's called a push service, I assumed it would behave like a service - meaning I'd instantiate it and call _push.StartApplePushService() within a Windows service, perhaps. And I thought that to actually queue up my notifications, I could do this on the front end (an admin app, let's say):
PushService push = new PushService();

AppleNotification notification = NotificationFactory.Apple()
    .ForDeviceToken(deviceToken)
    .WithAlert(message)
    .WithSound("default")
    .WithBadge(badge);

push.QueueNotification(notification);
Obviously (and as I already said), it didn't work - the last line kept throwing a null reference exception.
I'm having trouble finding any other documentation that shows how to set this up in a service/client manner (rather than calling everything at once). Is that possible, or am I missing the point of how PushSharp should be utilised?
Issue #2:
Also, I can't seem to find a way to target many device tokens at once without looping through them and queuing notifications one at a time. Is that the only way, or am I missing something here as well?
Thanks in advance.
@baramuse explained it all. If you wish to see a service "processor", you can browse my solution at https://github.com/vmandic/DevUG-PushSharp, where I've implemented the workflow you seek, i.e. a Windows service, a Windows processor or even a Web API ad hoc processor, all using the same core processor.
From what I've read and how I'm using it, the 'Service' keyword may have misled you...
It is a service in the sense that you configure it once and start it.
From that point on, it waits for you to push new notifications into its queue system, and it raises events as soon as something happens (a delivery report, a delivery error...). It is asynchronous: you can push (= queue) 10,000 notifications and wait for the results to come back later through the event handlers.
But it is still a regular object instance that you have to create and access like any other. It doesn't expose any "outside listener" (an HTTP/TCP/IPC connection, for example); you will have to build that yourself.
In my project I created a small self-hosted webservice (relying on ServiceStack) that takes care of the configuration and instance lifetime while exposing only a SendNotification function.
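A minimal sketch of that idea (my reconstruction, not the actual project code; the DTO, route and port are made up, and it assumes ServiceStack's AppHostHttpListenerBase self-hosting):

// Request DTO with its route.
[Route("/notifications", "POST")]
public class SendNotification
{
    public string DeviceToken { get; set; }
    public string Message { get; set; }
}

public class NotificationService : Service
{
    // A single PushService instance, configured and started once at startup.
    public static PushService Push;

    public object Post(SendNotification request)
    {
        Push.QueueNotification(NotificationFactory.Apple()
            .ForDeviceToken(request.DeviceToken)
            .WithAlert(request.Message));
        return new { Queued = true };
    }
}

public class PushAppHost : AppHostHttpListenerBase
{
    public PushAppHost() : base("Push host", typeof(NotificationService).Assembly) { }
    public override void Configure(Funq.Container container) { }
}

// At startup:
// var host = new PushAppHost();
// host.Init();
// host.Start("http://localhost:8088/");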
And about Issue #2: there indeed isn't any "batch queue", but since the queue function returns straight away (enqueue now, push later), it's just a matter of looping over your device token list - see the sketch after the code below...
public void QueueNotification(Notification notification)
{
    if (this.cancelTokenSource.IsCancellationRequested)
    {
        Events.RaiseChannelException(new ObjectDisposedException("Service", "Service has already been signaled to stop"), this.Platform, notification);
        return;
    }

    notification.EnqueuedTimestamp = DateTime.UtcNow;
    queuedNotifications.Enqueue(notification);
}
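So a "batch" send is just a loop that reuses the notification-building code from the question (a sketch; deviceTokens, message and badge are assumed to come from your own data):

foreach (var token in deviceTokens)
{
    var notification = NotificationFactory.Apple()
        .ForDeviceToken(token)
        .WithAlert(message)
        .WithSound("default")
        .WithBadge(badge);

    // Returns straight away; PushSharp pushes from its internal queue and
    // reports outcomes through the OnNotificationSent/SendFailure events.
    _push.QueueNotification(notification);
}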
I work on a multi-tier application and I need to optimize a long-running process in three ways:
Avoiding EF update concurrency problems.
Improving speed.
Informing the user of the progress.
Currently, the client code calls the WCF service using a single method that does all the work (evaluating the number of entities to update, querying the entities to update, updating them and finally saving them back to the database).
The process is very long and nothing is sent back to the user except the final result once the process is done. The user can sit in front of the wait form for up to 10 minutes without knowing what is happening.
The number and depth of the queried entities can become really big and I sometimes hit OutOfMemoryExceptions. I had to change the service method to process entity updates 100 entities at a time, so my DbContext gets refreshed often and doesn't become too big.
My actual problem is that I cannot inform the user each time an entity is updated, because my service method does the whole process before returning its result to the user.
I read about implementing a duplex service, but since I have to return two different callbacks to the user (one callback for the number of entities to update and another for the result of each entity update), I would have to use multiple interface inheritance on a generic callback interface, and it's becoming a little messy (well, to my taste).
Wouldn't it be better to have one WCF service method that returns the number of entities to evaluate, and another WCF method that returns a single entity update result and is hit for every entity to update? My DbContext would live only for the duration of a single entity update, so it would not grow much, which I think is good. However, I am concerned about hitting the WCF service that often during the process.
What are your thoughts? What can you suggest?
Have you thought about adding a WCF host to your client? That way you get full two-way comms.
Client connects to the server and gives it the connection details of the client's own host.
Client requests the long-running operation to begin.
Server sends multiple updates to the client's WCF host as the work progresses.
Server sends "work complete" to the client.
This leaves your client free to do other things, consuming the messages from the server as you see fit - maybe updating a status area as messages come in.
You could even have the server maintain a list of clients and send updates to them all.
--------EDIT---------
When I say WCF host, I mean a ServiceHost.
It can be created automatically from the XML in App.config, or through code directly:
// Base address for the client-side host
var myUri = new[] { new Uri("net.tcp://localhost:4000") };

var someService = new SomeService(); // implements ISomeService interface
var host = new ServiceHost(someService, myUri);

var binding = new NetTcpBinding(); // need to configure this
host.AddServiceEndpoint(typeof(ISomeService), binding, "");
host.Open();
"Proxy" is a term I use for what a client uses to connect to the server; it was in an early example I came across and it's stuck with me since. Again, it can be created either way:
var binding = new NetTcpBinding(); // need to configure this
var endpointAddress = new EndpointAddress("net.tcp://localhost:4000");
var factory = new ChannelFactory<ISomeService>(binding, endpointAddress);
var proxy = factory.CreateChannel();
proxy.DoSomeWork();
So in a typical client/server app you have:
CLIENT APP 1                SERVER APP                 CLIENT APP 2
proxy ------------------> ServiceHost <-------------- proxy
What I am suggesting is that you can make the client be a "server" too:
CLIENT APP 1                SERVER APP                 CLIENT APP 2
proxy ------------------> ServiceHostA <-------------- proxy
ServiceHostB <----------- proxy1
                          proxy2 -------------------> ServiceHostB
If you do that, you can still split your large task into smaller ones if needed (you mentioned memory issues), but from the sounds of things they might still take some time, and this way progress updates can still be sent back to the client - or even to all clients, if you want everyone to be aware of what's happening. No callbacks needed, though you can still use them if you want.
Avoiding EF update concurrency problems.
See this question/answer Long running Entity Framework transaction
Improving speed.
Some suggestions:
Try using SQL Profiler to see what SQL queries are being executed, and optimize the LINQ query.
Or try improving the query itself, or calling a stored procedure.
Can the updates be done in parallel? In different threads? On different processors? (A sketch follows this list.)
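A rough sketch of the parallel idea (my illustration; MyDbContext, Entities and ApplyUpdate are hypothetical, and the batch size of 100 echoes the batching already used in the question):

// Split the entity ids into batches of 100 and process the batches in
// parallel, each with its own short-lived DbContext.
var batches = entityIds
    .Select((id, index) => new { id, index })
    .GroupBy(x => x.index / 100, x => x.id)
    .ToList();

Parallel.ForEach(batches, batch =>
{
    using (var context = new MyDbContext())
    {
        foreach (var id in batch)
        {
            var entity = context.Entities.Find(id);
            ApplyUpdate(entity); // your update logic
        }
        context.SaveChanges(); // one save per batch keeps the context small
    }
});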
Informing the user of the progress.
I would suggest changing the client to call an async method, or a method which then starts the long-running operation asynchronously. This would return control back to the client immediately. It would then be up to the long-running operation to provide feedback as to its progress.
See this article for updating progress from a background thread
Update
the "architecture" I would suggest would be as follows:
                   Service
 ________       _________     _______     ____
|        |     |   WCF   |   |  EF   |   |    |
| Client |---->| Service |-->| Class |-->| DB |
|________|     |_________|   |_______|   |____|
The WCF service is only responsible for accepting client requests and starting off the long-running operation in the EF class. The client should send an async request to the WCF service so it retains control and responsiveness. The EF class is responsible for updating the database, and you may choose to update all or a subset of records at a time. The EF class can then notify the client, via the WCF service, of any progress it has made - as required.
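For the notification part, one option is a WCF duplex contract; a sketch only, with invented interface and member names:

[ServiceContract(CallbackContract = typeof(IProgressCallback))]
public interface IUpdateService
{
    // One-way, so the client regains control immediately.
    [OperationContract(IsOneWay = true)]
    void StartLongUpdate();
}

public interface IProgressCallback
{
    [OperationContract(IsOneWay = true)]
    void ReportTotal(int totalEntities);

    [OperationContract(IsOneWay = true)]
    void ReportProgress(int processed, bool succeeded);
}

// Inside the long-running operation, the service reports back with:
// var callback = OperationContext.Current.GetCallbackChannel<IProgressCallback>();
// callback.ReportProgress(processedSoFar, true);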
My project was a standalone application; then I decided to split it into a client and a server, because I need powerful CPU usage and portability at the same time. Now multiple clients can connect to one server.
It was easy when one-by-one processing did the job. Now I need to call the same function and scope again and again, at the same time, via client requests.
Please can anyone give me a clue how I should handle these operations? I need to know how to isolate clients' processes from each other on the server side. My communication is asynchronous: the server receives a request and starts a new thread. I think I should pass one parameter carrying the client information and another as a job id, to help route results back - a client may ask for multiple jobs, and some jobs finish quicker than others.
Should I instantiate the Process class on each call? Can I use a static method, etc.? Any explanation will be of great help!
Below is the part of my code that needs modification:
static class Data
{
    // Shared input data, read-only once loaded
    public static readonly int[] ListOfValues;
}

class Process
{
    // State local to one Process instance; if several threads of the
    // same instance update it, access must be synchronized
    int bestValue;

    void FindBestValue(int from, int to)
    {
        // ...
        if (processResult > bestValue) bestValue = processResult;
        // ...
    }

    void Run()
    {
        // ...
        for (var i = 0; i < 10; i++)
        {
            var from = i * 1000; // copy the loop variable before capturing it
            StartThread(() => FindBestValue(from, from + 999));
        }
        // ...
    }
}
EDIT: I think I have to instantiate a new Process class and call the function for each client, and ignore a request from the same client for the same job, since that job is already running.
Without getting into your application design - since you didn't say much about it - I think your problem is ideal for WCF web services. You get client isolation by design, because every request starts in its own thread. You can create the WCF host as a standalone application or a Windows service.
You can wrap your communication in a WCF service and configure it as a PerCall service (meaning each request is processed separately from the others), as sketched below.
That way you keep your business logic clean of synchronization concerns. It's the best way, because creating and managing threads is not difficult to implement, but it is difficult to implement correctly and in a way that is optimized for resource consumption.
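A minimal sketch of the PerCall setup (the contract and names are invented for illustration):

[ServiceContract]
public interface IProcessService
{
    [OperationContract]
    int FindBestValue(string clientId, int jobId);
}

// A new service instance is created for every call, so per-request state
// is isolated between clients without any locking in your business logic.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class ProcessService : IProcessService
{
    public int FindBestValue(string clientId, int jobId)
    {
        var process = new Process(); // fresh instance per request
        return process.Run();        // hypothetical: runs the search and returns the best value
    }
}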