Good morning,
I have a WCF service using the following technologies:
.Net 4.7.2
Castle Windsor + WcfIntegration Package
IIS 10
Visual Studio 2022
Originally I asked a question about some Azure App Insights anomalies I was seeing; it can be found here: Azure App Insights Question
I am now able to replicate the behaviour I was seeing, which proves the WCF service is queueing requests. I have a test harness that uses multi-threading to send off a specified number of requests at once. If I fire off 100 requests and then an additional 5 from different machines, the 5 won't be returned until the 100 have finished. I believe the bottleneck is in connectivity to the service rather than anything the service is doing. To prove this I created a blank method that sleeps for 100 ms and then returns an empty memory stream; the behaviour above still occurred.
It's my understanding that WCF accepts concurrent calls by default, so I find this behaviour bizarre and am hoping someone might have some information on configuration/settings I can try.
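For reference, these are the concurrency and throttling settings I understand to be involved. This is only a sketch with illustrative names and values, not my real service; the same throttling values can equally go in the serviceThrottling element of web.config for an IIS-hosted service:

```csharp
using System.IO;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.Threading;

// PerCall + Multiple should allow requests to be dispatched in parallel.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class TestService : ITestService
{
    public Stream Blank()
    {
        Thread.Sleep(100);         // the 100 ms sleep from the repro
        return new MemoryStream(); // empty stream, as described above
    }
}

public static class HostSetup
{
    public static ServiceHost Create()
    {
        var host = new ServiceHost(typeof(TestService));

        // Raise the throttling limits; in .NET 4.x the defaults scale with
        // processor count (e.g. MaxConcurrentCalls = 16 * cores). The numbers
        // below are illustrative, not recommendations.
        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentCalls = 200;
        throttle.MaxConcurrentSessions = 200;
        throttle.MaxConcurrentInstances = 400;
        return host;
    }
}
```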
Thanks in advance.
We are deploying a web application to an internally hosted Windows Kubernetes cluster. Through a mechanism I don't fully understand (but don't believe is relevant to the question), after deployment to the K8s cluster our application is addressable by a DNS entry that includes the name and version number of the application.
As such, the exact DNS value is not created until the deployment is complete. And in fact, it takes a little while after the deployment is complete before it is addressable.
Now to the .Net Core HttpClient question...
(And in case it is relevant, the code in question is running on Windows Server 2012 via Team City)
I have written some code using .Net Core 2.2 HttpClient (via HttpClientFactory) to poll the healthcheck endpoint of the application to know when the deployment is complete. At present it waits a period of time before making any requests. It then polls periodically for a while (up to 5 minutes) before giving up.
The issue I'm having is that if the first polling request happens before the DNS entry is available it (rightly) fails with a 'no such host is known' exception. But all further requests will also fail with the same exception.
The sweet spot in my case seems to be to wait three minutes. When doing this, the first request to my application succeeds and all is good. But if I only wait two minutes, it fails, and keeps failing even if I keep trying for another 5 minutes. To me it looks like once the first DNS lookup fails, I'm doomed.
Strangely, I have found that once my polling fails enough and gives up, a subsequent call to nslookup succeeds (from PowerShell, i.e. separate from the .NET Core console application). But then more code is executed, this time running .NET Framework 4.6 code... and it too has problems with the DNS.
So could I be correct that a DNS failure is remembered somewhere, even if it is not across the board (as evidenced by the nslookup success)? And if so, how can the DNS lookup be forced to be retried?
I have tried setting the PooledConnectionIdleTimeout and PooledConnectionLifetime values for the SocketsHttpHandler in question (to 20 seconds, as my pause between retries is 30 seconds), but that has not helped. But even if they were the answer and I've done something wrong with them, that wouldn't explain why a separate .NET process running straight afterwards has the same issue.
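For completeness, this is roughly what I have been trying. The property names are the real .NET APIs; the values are illustrative, and the .NET Framework line applies to the 4.6 process that runs afterwards, not the Core client:

```csharp
using System;
using System.Net;
using System.Net.Http;

// .NET Core 2.2 client: recycle pooled connections so a retry can trigger
// a fresh DNS lookup instead of reusing a stale (failed) resolution.
var handler = new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromSeconds(20),
    PooledConnectionIdleTimeout = TimeSpan.FromSeconds(20)
};
var client = new HttpClient(handler);

// .NET Framework side: ServicePointManager caches DNS results per service
// point; shortening the refresh timeout (milliseconds) forces re-resolution.
ServicePointManager.DnsRefreshTimeout = 0;
```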
I'm writing a web socket application which I intend on hosting on the cloud using an Azure web app.
The web socket is implemented using a fairly standard piece of Owin Middleware and is fully functional for the first 100 seconds. After this time the websocket seems to enter the aborted state like clockwork.
[CLIENT][06/04/2018 11:27:21] WS client connected
[CLIENT][06/04/2018 11:29:01] WS client disconnected
Trying this on an IIS Express instance gives the same issue, although the delay seems to be 90 seconds rather than 100 (this is also consistent).
Running the same websocket as part of an Owin self-host app yields stability for over 25 minutes - so this definitely seems to be a problem caused by the hosting server.
On the Azure web app I've enabled web sockets, and also tried enabling the "Always On" feature in the hope that this would prevent the issue by stopping the server from going into a standby state - but this has not helped.
Are there any Azure settings that I'm not aware of that could be tweaked such that these web sockets can stay open for longer periods of time?
Many thanks
Unfortunately, it seems that Azure and IIS were both red herrings; the issue was instead with the C# implementation of System.Net.WebSockets.ClientWebSocket. The detail is brilliantly summarized at: .NET WebSockets forcibly closed despite keep-alive and activity on the connection
To others experiencing this issue, an effective (but not ideal) work around is to run the following once before creating a client web socket:
ServicePointManager.MaxServicePointIdleTime = int.MaxValue;
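In context, the workaround looks roughly like this (a sketch; the URI is a placeholder, and the keep-alive interval is illustrative):

```csharp
using System;
using System.Net;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

public static class SocketClient
{
    public static async Task<ClientWebSocket> ConnectAsync()
    {
        // Apply the workaround once, process-wide, before any
        // ClientWebSocket is created.
        ServicePointManager.MaxServicePointIdleTime = int.MaxValue;

        var ws = new ClientWebSocket();
        // Keep-alive alone did not prevent the abort, but setting one is
        // still sensible.
        ws.Options.KeepAliveInterval = TimeSpan.FromSeconds(30);
        await ws.ConnectAsync(new Uri("wss://example.com/socket"),
                              CancellationToken.None);
        return ws;
    }
}
```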
We are experiencing this error in IE10/11 and have spent the last 2 days researching it without finding a solution. We are running an ASP.NET MVC 5 web application using SignalR in specific areas, hosted on an Azure website scaled up to 2 instances. We are using the Redis backplane to make sure our SignalR messages reach all instances of the application. The error is intermittent and causes the rest of the application to hang. We do not believe this is a SignalR issue, because we removed the invocation of SignalR from the page and were still able to reproduce it.
We have tried the following:
GET request before POST
Setting the charset=utf-8
Our JS libraries are up to date
We need to find a fix for this issue quickly, so if anyone has any ideas I would be really grateful.
thanks in advance
One problem when scaling a SignalR application across more instances is that clients connected to one server only receive updates from other clients connected to the same server. (Messages are not automatically broadcast between all servers.)
http://www.asp.net/signalr/overview/performance/scaleout-in-signalr
One solution is to have a backplane that automatically relays messages between servers, so that every server has every message at any time and can push updates to its connected, interested clients.
There are two implementations that Microsoft explains in the link above, one using Redis Pub/Sub and the other using Azure Service Bus.
With Redis you simply get an instance running (it can be on Linux, Windows or in Azure) and configure each server to push messages and subscribe to messages on the same channel.
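With the Microsoft.AspNet.SignalR.Redis package, wiring in the backplane is a one-liner at startup. A sketch, where the host, password and event key are placeholders:

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Point every instance at the same Redis server and channel, so
        // messages published by one instance reach clients on all instances.
        GlobalHost.DependencyResolver.UseRedis(
            "my-redis-host",  // server
            6379,             // port
            "secret",         // password (empty string if none)
            "MyAppChannel");  // event key / channel name

        app.MapSignalR();
    }
}
```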
I hope this helps with your problem (since you said the problem is intermittent, I assume it appears when clients are connected to different instances of your application).
Good luck!
EDIT:
Thanks for updating your question.
Have a look at this SO post:
IE10/11 Ajax XHR error - SCRIPT7002: XMLHttpRequest: Network Error 0x2ef3
And at this post:
http://www.kewlcodes.com/posts/5/SCRIPT7002-XMLHttpRequest-Network-Error-0x2ef3-Could-not-complete-the-operation-due-to-error-00002ef3
Good luck!
I also ran into this issue. In my case, I used IOwinContext.Request.ReadFormAsync() in my server code and found that if I send a POST request with a JSON content type, ReadFormAsync() hangs.
I solved it by checking whether the request's content type is JSON and, if so, using a different method to parse the body.
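A sketch of that guard, assuming Json.NET for the JSON branch (the method name and dictionary shape are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Owin;
using Newtonsoft.Json;

public static class RequestBodyReader
{
    public static async Task<IDictionary<string, string>> ReadBodyAsync(IOwinContext context)
    {
        var contentType = context.Request.ContentType ?? "";
        if (contentType.StartsWith("application/json", StringComparison.OrdinalIgnoreCase))
        {
            // ReadFormAsync() hangs on JSON bodies, so read and parse
            // the stream directly instead.
            using (var reader = new StreamReader(context.Request.Body))
            {
                var json = await reader.ReadToEndAsync();
                return JsonConvert.DeserializeObject<Dictionary<string, string>>(json);
            }
        }

        // Form-encoded bodies can go through ReadFormAsync() as before.
        var form = await context.Request.ReadFormAsync();
        return form.ToDictionary(kv => kv.Key, kv => kv.Value.FirstOrDefault());
    }
}
```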
I know this is going to sound stupid - but we've spent close to 4 weeks trying to implement a WCF callback system (Subscription Service), but to no avail.
Can anyone verify that they have successfully got this working in a multiple client production environment?
All the examples I've come across work on localhost but in production fails miserably.
The specific problem is that subscriptions and un-subscriptions work perfectly up until the client is closed and re-opened, at which point multiple callbacks are made or the Publish method times out.
Again, can anyone confirm they have this working, or perhaps direct me to some documentation that provides a real-world PROVEN example.
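The pattern we have been attempting looks roughly like this (names simplified; the contract and callback interfaces are stand-ins for ours). The pruning of faulted channels is our attempt to stop a restarted client leaving a stale duplicate subscription behind:

```csharp
using System.Collections.Generic;
using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class SubscriptionService : ISubscriptionService
{
    private readonly HashSet<IClientCallback> _subscribers = new HashSet<IClientCallback>();

    public void Subscribe()
    {
        var callback = OperationContext.Current.GetCallbackChannel<IClientCallback>();
        lock (_subscribers) _subscribers.Add(callback);

        // Drop the channel as soon as it faults or closes.
        var channel = (ICommunicationObject)callback;
        channel.Faulted += (s, e) => { lock (_subscribers) _subscribers.Remove(callback); };
        channel.Closed  += (s, e) => { lock (_subscribers) _subscribers.Remove(callback); };
    }

    public void Publish(string message)
    {
        List<IClientCallback> snapshot;
        lock (_subscribers) snapshot = new List<IClientCallback>(_subscribers);

        foreach (var cb in snapshot)
        {
            try { cb.OnPublish(message); }
            catch (CommunicationException)
            {
                // Dead client: prune it rather than let Publish time out.
                lock (_subscribers) _subscribers.Remove(cb);
            }
        }
    }
}
```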
Background info on my existing configuration found here:
WCF to WCF Communication
Thanks
I am using SOAP in C# .NET 3.5 to consume a web service from a video game company. I am getting lots of SoapExceptions with the error "Operation Timed Out".
While one process is timing out, others fly by with no problems. I would like to rule out a problem on my end, but I have no idea where to begin. My timeout is 5 minutes. For every 5,000 requests, maybe 500 fail.
Anyone have some advice for diagnosing web service failures? The web service owner will probably give no support in helping me with this, as it's a free service.
Thanks
I've had to do a lot of debugging connecting to a SOAP service using PHP, and timeouts are the worst problem. Normally the problem is that the client doesn't have a high enough timeout and bombs after something like 30 s.
I test the calls using SoapUI, raising the client-side timeout until I find a value that works; once I have it, I apply that timeout to my client and re-test.
Your only solution may be to make sure your 'clients' have a high enough timeout that will work for everything. 5 minutes should be fine for most of your server-side timeouts.
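On the C# side, the equivalent knob on a proxy generated from the service's WSDL (a SoapHttpClientProtocol subclass in .NET 3.5) is the Timeout property, in milliseconds. A sketch, where the proxy class name is a placeholder:

```csharp
// GameService here stands in for whatever proxy class was generated
// from the service's WSDL.
var client = new GameService();
client.Timeout = 5 * 60 * 1000; // 5 minutes, matching the guidance above
```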
OK this is a huge question and there is a lot that it could be.
Have you tackled the HTTP two-connection limit? http://madskristensen.net/post/Optimize-HTTP-requests-and-web-service-calls.aspx
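That limit can be raised either in code or in config; a sketch (the value 100 is illustrative):

```csharp
using System.Net;

// Raise the per-host outbound connection limit (the classic default is 2).
// The config-file equivalent is:
//   <connectionManagement><add address="*" maxconnection="100"/></connectionManagement>
ServicePointManager.DefaultConnectionLimit = 100;
```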
Have you got enough IO threads to cater for the load? Use performance monitoring to check this for your app pool - I think there is an IO threads counter. A quick Google turned this up - http://www.guidanceshare.com/wiki/ASP.NET_2.0_Performance_Guidelines_-_Threading
Are you exhausting your bandwidth? Use performance monitoring again to check the usage of your network card.
This is a really hard subject to broach textually, as it is so dependent on environment, but I hope these might help.
This also looks interesting - http://software.intel.com/en-us/articles/how-to-tune-the-machineconfig-file-on-the-aspnet-platform/