How to analyze web service delay - C#

I need help finding a strategy to analyze a problem.
Suddenly, my application started to behave strangely.
Summarizing, my application
1. (.NET 4.0) uses a web service
2. (svc, .NET 3.5) that executes some procedures. I measured the time the procedures take, and the total is under one second (call this the processing time).
Most of the time the wait is a few milliseconds: fair enough.
Sometimes, though (and unfortunately it seems to be random), the wait can grow to a couple of minutes and the call then times out (correctly); the measured processing time, however, is still under one second.
Where am I losing this time?
How can I figure out what is happening?
Can you suggest any tools, hints or anything else to help me understand what is going on?
Thanks

In a similar situation, I would start with the following:
1. Configure WCF tracing on both the client and the service (http://msdn.microsoft.com/en-us/library/ms733025(v=vs.110).aspx).
2. Configure Fiddler to view the web service communications. The Fiddler and Monitoring Web Service Traffic SO post provides good reference links.
3. Configure Performance Monitor with WCF counters. Windows Communication Foundation (WCF) includes a large set of performance counters, scoped to three different levels (Service, Endpoint and Operation), which will help monitor application performance. The MSDN article provides a detailed explanation: http://msdn.microsoft.com/en-us/library/ms735098(v=vs.110).aspx
4. Conduct a network trace to watch the actual network traffic.
Note: The following article provides a really good overview of WCF performance optimization:
http://weblogs.asp.net/sweinstein/archive/2009/01/03/creating-high-performance-wcf-services.aspx
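Alongside the tracing, it can also help to time the raw HTTP call from the client side yourself, so the client-observed wait can be correlated against the sub-second server-side measurement. A minimal sketch (the endpoint URL and timeout are placeholders for your own):

    using System;
    using System.Diagnostics;
    using System.Net;

    class CallTimer
    {
        static void Main()
        {
            // Hypothetical endpoint; replace with your .svc address.
            var url = "http://myserver/MyService.svc";

            var sw = Stopwatch.StartNew();
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Timeout = 120000; // match your binding's timeout

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                sw.Stop();
                // Slow calls can then be matched against the WCF traces
                // and the server-side timings by timestamp.
                Console.WriteLine("{0:o} HTTP {1} after {2} ms",
                    DateTime.UtcNow, (int)response.StatusCode, sw.ElapsedMilliseconds);
            }
        }
    }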
Good luck.

Related

IIS High and unstable TTFB

I have my MVC application, with an API used in it, running on IIS 6.0 (7.0 on production servers). For the API, I use an IHttpHandler implementation in an API.ashx file.
I have many different API calls being made to my API.ashx file, but I'll describe one that has no DB calls, so it's definitely NOT a database issue.
At the very beginning of the ProcessRequest method I've added a Diagnostics.Stopwatch to track performance, stopping it at the method's last line.
The output of my stopwatch is always stable (+-2 ms) and shows 5 ms (!!!) on average.
But on my site, I see an absolutely unstable and varying Time To First Byte. It may start at 15 ms and may grow up to 1 SECOND, averaging around 300 ms, yet in the logs I'll still have my stable 5 ms from the stopwatch.
This happens on every server I use, even locally (so this is not a network-related problem) and on production. BTW, all static resources are loaded really fast (<10 ms).
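For reference, the instrumentation described looks roughly like this (a sketch; the handler name and the logging call are illustrative):

    using System.Diagnostics;
    using System.Web;

    public class ApiHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            var sw = Stopwatch.StartNew();

            // ... actual API work happens here ...

            sw.Stop();
            // Consistently logs ~5 ms even when the observed TTFB is ~300 ms.
            context.Trace.Write("api", "Handled in " + sw.ElapsedMilliseconds + " ms");
        }
    }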
Can anyone suggest a solution to this?
This sounds like a difficult one to diagnose without a little more detail. Could you edit your question and add a waterfall chart showing the slow API call in question? A really good tool to produce waterfall charts is http://webpagetest.org
I also recommend reading this article about diagnosing slow TTFBs.
http://www.websiteoptimization.com/speed/tweak/time-to-first-byte/
It goes into great detail about some of the reasons behind a slow response.
Here are some server performance issues that may be slowing down your server.
Memory leaks
Too many processes / connections
External resource delays
Inefficient SQL queries
Slow database calls
Insufficient server resources
Overloaded shared servers
Inconsistent website response times
Hope that helps!

Running a timer from Global.asax vs Quartz.NET

I am developing an ASP.NET site that needs to hit a few social media sites daily for blanket friend/follower data. I have chosen Arvixe business class as my hosting. In the future, if we grow, I'd love to get onto a dedicated server and run a Windows service; however, since that is not in the cards at this point, I need another reliable way of running scheduled tasks.
I am familiar with running a thread timer from App_Code (Global.asax). However, app pool recycling will cause some problems with the timer. I have never used task scheduling like Quartz but have read a lot about it on Stack Overflow.
I was looking for some advice on how to approach my goal. One big problem I have with either method is that I will need the crawler threads to sleep for up to an hour regularly due to API call limits. My first thought was to use the DB to record the starting and ending of a job. When the app pool recycles, I would clear out any parts not completed and only start parts that do not have a record of running on that day. What do the experts here think? Any good links to sample architecture for this type of scheduling?
It doesn't really matter which method you use, whether you roll your own or use Quartz. You are at the mercy of ASP.NET/IIS because that's where you want to host it.
Do you have a spare computer lying around that can just run a scheduled task and upload data to a hosted database? To be honest, it's possibly safer (depending on your use case) to just do it that way than to try to run a scheduler in ASP.NET.
Somewhat along the lines of Bryan's post:
Find a spare computer.
Instead of allowing DB access, have it call a web service on your site. This service call should be the initiator of the process you are trying to run. Don't try to put params into it; just something like "StartProcess()" should work fine (see the sketch below).
As far as going to sleep and resuming later, take a look at Workflow Foundation. There are some nice built-in features to persist state.
Don't expose your DB to the outside world; instead, expose that page or web service and wrap some security around it. WCF has some nice built-in security features for that.
The best part is that when you decide to move off, you can keep your web service and have it called from a Windows service in the same manner.
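A minimal sketch of that kind of parameterless kick-off contract (the names are illustrative, and you would still wrap WCF security around it):

    using System.ServiceModel;

    // Hypothetical kick-off contract: the external machine calls
    // StartProcess() and the site does the actual work, so no DB
    // access is ever exposed.
    [ServiceContract]
    public interface IJobKickoff
    {
        [OperationContract]
        void StartProcess();
    }

    public class JobKickoff : IJobKickoff
    {
        public void StartProcess()
        {
            // Kick off the crawl; return quickly and let the real work
            // run on a background thread or a persisted workflow.
        }
    }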
As long as you use a persistent job store (like a database) and you write and schedule your jobs so that they can handle things like being killed halfway through, having IIS recycle your process is not that big a deal.
The bigger issue is that IIS shuts your site down if it doesn't have traffic. If you can keep your site up, then just make sure you set the misfire policy appropriately and that your jobs store any state data needed to pick up where they left off; you should then be able to pull it off (see the sketch below).
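A minimal Quartz.NET-style sketch of that setup (Quartz.NET 2.x API; the job class and identifiers are hypothetical, and the AdoJobStore configuration is omitted):

    using Quartz;
    using Quartz.Impl;

    // Hypothetical job: write it so it can resume safely if IIS recycles
    // the process halfway through (persist progress in the DB).
    public class CrawlJob : IJob
    {
        public void Execute(IJobExecutionContext context)
        {
            // Read the last completed step from the DB and continue from there.
        }
    }

    public static class SchedulerSetup
    {
        public static void Start()
        {
            // For recycle-resilience, configure a persistent ADO.NET job
            // store (quartz.jobStore.type = JobStoreTX) instead of the
            // default in-memory store.
            IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
            scheduler.Start();

            IJobDetail job = JobBuilder.Create<CrawlJob>()
                .WithIdentity("crawl", "social")
                .Build();

            ITrigger trigger = TriggerBuilder.Create()
                .WithIdentity("crawl-daily", "social")
                .WithSimpleSchedule(x => x
                    .WithIntervalInHours(24)
                    .RepeatForever()
                    // If a fire time was missed (e.g. app pool recycle), fire now.
                    .WithMisfireHandlingInstructionFireNow())
                .Build();

            scheduler.ScheduleJob(job, trigger);
        }
    }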
If you are language-agnostic and don't mind writing your "job-activation-script" in your favourite Linux-supported language...
One solution that has worked very well for me is:
Getting relatively cheap, stable Linux hosting (from reputable companies),
Creating a WCF service on your .NET hosted platform that will contain the logic you want to run regularly (RESTful or SOAP or XML-RPC... whichever suits you),
Handling the calls through your Linux-hosted cron jobs, written in your language of choice (I use PHP).
It works very well, like I said. No VPS expense; configurable and externally activated. I have one central place where my jobs are activated, with 99 to 100% uptime (never had any failures).

WCF service health monitoring

I just implemented a WCF service and I am currently looking at service monitoring options. Our server team, which currently hosts only Java services, wants us to have instances running all the time so it can gather data in an instance during its lifetime, and they said they will use one of our operations with webmon to get statistical information. But we are using per-call instancing, and I don't think that will work under this architecture.
I am wondering if there is a way to get statistics on how an operation in the service performed over a certain amount of time, and to provide another operation for webmon to use that returns an integer value describing the service's performance over that period; webmon then decides whether or not to alert the admin.
I was considering parsing log files to get the statistics, but that might be an expensive operation if done every 15 minutes.
If not, what are my options for detailed automatic health monitoring of WCF applications?
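For illustration, one way around per-call instancing is to keep the statistics in static state, which is shared across call instances within the same host process, and expose them through a dedicated operation. A hypothetical sketch (all names are illustrative):

    using System.ServiceModel;
    using System.Threading;

    [ServiceContract]
    public interface IHealth
    {
        // Returns the number of calls completed since the last poll;
        // webmon can poll this and alert when it falls outside a threshold.
        [OperationContract]
        int GetCallsInLastPeriod();
    }

    public class MyService : IHealth
    {
        // Static, so it survives per-call instancing in the same host.
        private static int _callCount;

        // Call this at the end of your real operations.
        internal static void RecordCall()
        {
            Interlocked.Increment(ref _callCount);
        }

        public int GetCallsInLastPeriod()
        {
            // Reset on read: each poll gets the count since the previous one.
            return Interlocked.Exchange(ref _callCount, 0);
        }
    }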
My company very recently agreed to open-source (under the GPL License) the tool that we use internally to monitor our live web services and for producing availability and response time reports. It's called ServiceMon and it may meet your needs.
It runs on Windows as a standalone application and works by following a simple script of operations that dictate the services to be monitored. For example, to check a web page contains a particular value, in a similar manner to webmon, you'd use this line:
http-get "http://www.google.com" must-contain "I'm Feeling Lucky"
The frequency at which it executes the script operations can be easily configured, as can the order in which it processes them.
In addition to monitoring web pages and web services we use ServiceMon to track availability statistics of each service and to produce response time statistics.
ServiceMon is written using a plugin architecture, so you can use .NET to add new types of monitoring operations. So, for example, if your web service uses funky authentication, you can fairly easily plug this into the utility.
Full documentation and download instructions here
I hope you find it useful and I'd love to hear your thoughts.
Disclaimer: I developed ServiceMon so I may be a little bit biased :)

WCF Service calling another WCF Service is slow

I have a design whereby we have a WCF service that accesses a datastore that is represented as another WCF service. The idea behind this is to adhere to SOA and to keep the potential to load-balance the actual service and the data access layer independently, as well as to enable the datastore to change massively with no impact on the initial service.
The problem is that these are running on IIS 6 and encryption must be enabled.
With both services enabled, we are getting averages of approximately
Average Number of requests per second: 4.75469280423686 over 400 calls.
But if I remove the service call to the second service and replace it with an absolute reference, this nearly doubles to
Average Number of requests per second: 8.52248037501811 over 400 calls.
Does anyone have any clues as to how/what I can do to optimise this?
I should add these are not concurrent calls.
Are both web services running on the same machine and the same app pool? I've had that exact issue before; we eventually cut that architecture completely, but I believe it could have been helped by putting them in different app pools.
Also, since you mentioned IIS6, .Net may be holding back on you: Check out http://msdn.microsoft.com/en-us/library/ff647787.aspx (Chapter 6: Improving ASP.NET Performance) - especially the "Threading Explained" section. (IIS6 by default doesn't have the appropriate number of .Net threads for your processor - IIS7+ does.)
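In particular, when one service calls out to another over HTTP, the default outbound connection limit can serialize those calls. The guidance linked above suggests raising it in proportion to the number of cores; a sketch of the programmatic equivalent (the 12-per-core figure is the commonly cited starting point from that guidance, not a guarantee):

    using System;
    using System.Net;

    // Run once at startup (e.g. in Application_Start or when building
    // the service host). Raises the outbound HTTP connection limit,
    // which can otherwise throttle service-to-service calls under load.
    public static class OutboundTuning
    {
        public static void Apply()
        {
            ServicePointManager.DefaultConnectionLimit = 12 * Environment.ProcessorCount;
        }
    }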
Good luck!

Need advice: query data from SQL Server every 5 seconds and send it to another app (.NET C#)

I have a requirement to read data from a table (SQL Server 2005) and send that data to another application every 5 seconds. I am looking for the best approach to do this.
Right now I am planning to write a console application (.NET and C#) which will read the data from SQL Server 2005 (a QUEUE table which is filled by different applications) and send it to the other application through TCP/IP (a central server), and to run that console application as a scheduled task every 5 seconds. I am assuming the Task Scheduler will discard a new run event if the task is already running (to avoid concurrent executions).
Has anybody come across a similar situation? Please share your experience and advise me on the best approach.
Thanks in advance for your valuable time spent on my request.
-Por-hills-
We have done similar work. If you are going to query a SQL database every 5 seconds, be sure to use a stored procedure that is optimized to be very fast. It should not update data unless absolutely necessary. This approach is typically called "polling", and I've found that it is acceptable if your SQL Server is not otherwise bogged down with too many other calls.
In the approaches we've used, a Windows Service that does the polling works well (a sketch of the polling call follows below).
To communicate results to the other app, it all depends on what that app is doing, what type of interface you can make into it, and how quickly you need the results. The WCF class libraries from Microsoft provide many workable approaches for real-time communication. My preference is to write to the application's database and then have the application read the data (if that works for the app). If you need something real-time, WCF is the way to go; I'd suggest a stateless protocol like HTTP (using standard HTTP posts) if < 5 sec response time is required, or TCP/IP if sub-second response time is required.
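A sketch of the polling call itself, assuming a hypothetical dbo.GetPendingQueueItems stored procedure tuned for a fast read:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class QueuePoller
    {
        static void Main()
        {
            // Hypothetical connection string and procedure name.
            var connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("dbo.GetPendingQueueItems", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Forward each row to the central server here.
                        Console.WriteLine(reader[0]);
                    }
                }
            }
        }
    }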
Since I assume your central storage is also SQL Server 2005, have you considered using what SQL Server 2005 offers out of the box to achieve your requirements? Rather than poll every 5 seconds, marshal and unmarshal TCP/IP, implement authentication and authorization for the TCP/IP pipe, scale TCP transmission with boxcarring, manage message acknowledgments and retries, deal with central-site availability, fragment large messages, implement fairness in transmission, and so on and so forth, why not simply use Service Broker? It does all you need and more, out of the box, already tested and already tuned for performance and scalability.
Getting reliable messaging right is not trivial, and you should focus your efforts on meeting your business specifics, not reinventing the wheel.
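For flavor, the receiving side of a Service Broker setup can block inside SQL Server until a message arrives, removing the 5-second polling entirely. A hypothetical sketch (MyDb and dbo.DataQueue stand in for a database and queue you would have to create and route messages to):

    using System;
    using System.Data.SqlClient;
    using System.Text;

    class BrokerReceiver
    {
        static void Main()
        {
            var connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                while (true)
                {
                    // WAITFOR(RECEIVE ...) blocks server-side until a message
                    // arrives or the 5-second timeout elapses.
                    using (var cmd = new SqlCommand(
                        "WAITFOR (RECEIVE TOP (1) message_type_name, message_body " +
                        "FROM dbo.DataQueue), TIMEOUT 5000;", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var body = reader.IsDBNull(1)
                                ? ""
                                : Encoding.Unicode.GetString((byte[])reader[1]);
                            Console.WriteLine("Received: " + body);
                        }
                    }
                }
            }
        }
    }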
I would recommend writing a Windows Service (since you are using C#) that has a timer which runs every 5 seconds (a sketch follows below). That way you won't be starting and stopping an application all the time; it can run even when no one is logged into the machine, and it will start automatically when the machine is restarted.
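A minimal sketch of such a service (names are illustrative; the AutoReset trick guarantees runs never overlap):

    using System;
    using System.ServiceProcess;
    using System.Timers;

    // Hypothetical polling service: reads the QUEUE table every 5 seconds
    // and forwards rows to the central server.
    public class QueuePollingService : ServiceBase
    {
        private Timer _timer;

        protected override void OnStart(string[] args)
        {
            // AutoReset = false means the next interval is armed only after
            // the current poll finishes, so executions cannot overlap.
            _timer = new Timer(5000) { AutoReset = false };
            _timer.Elapsed += (s, e) =>
            {
                try { PollAndForward(); }
                catch (Exception) { /* log and carry on */ }
                finally { _timer.Start(); }
            };
            _timer.Start();
        }

        protected override void OnStop()
        {
            _timer.Stop();
            _timer.Dispose();
        }

        private void PollAndForward()
        {
            // Call the fast stored procedure, send the results over TCP/WCF...
        }

        public static void Main()
        {
            ServiceBase.Run(new QueuePollingService());
        }
    }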
For one of my projects, I needed to do something periodically. I opted for a service and set up a timer that takes care of reading the data. You might consider that solution. It has worked well for me.
I suggest creating a Windows service rather than a console application, and performing the timing yourself: create a timer and execute one step on each timer event. For the communication you have many choices; I would consider standard technologies like a web service or Windows Communication Foundation.
Besides this custom solution, I would evaluate whether the task can be solved using Microsoft Integration Services.
Finally, another question comes to mind: why do you need this application at all? Why don't the application(s) consuming the data query the database themselves? Is the expensive polling required? Is it possible for the data producers to signal the availability of new data directly to the data consumers?
I am not sure about the details of your project, specifically related to security, but maybe it would be better to create an SSIS package and schedule it as a job?
