I have a Windows service that downloads a file from an FTP server every hour on a schedule. It uses the following code to do this:
var _request = (FtpWebRequest)WebRequest.Create(configuration.Url);
_request.Method = WebRequestMethods.Ftp.DownloadFile;
_request.Timeout = 20000;
_request.Credentials = new NetworkCredential("auser", "apassword");
using (var _response = (FtpWebResponse)_request.GetResponse())
using (var _responseStream = _response.GetResponseStream())
using (var _streamReader = new StreamReader(_responseStream))
{
this.c_fileData = _streamReader.ReadToEnd();
}
Normally, downloading the FTP data works perfectly fine. However, every few months the FTP server provider notifies us that some maintenance needs to be performed. Once maintenance starts (it usually lasts only 2 or 3 hours), our hourly FTP download attempt fails, i.e. it times out, which is expected.
The problem is that after the maintenance window our Windows service continues to time out every time it attempts to download the file. The service also has retry logic, but each retry times out as well.
Once we restart the Windows service, the application starts downloading FTP files successfully again.
Does anyone know why we have to restart the Windows service in order to recover from this failure? Could it be a network issue, e.g. DNS?
Note 1: There are similar questions to this one already, but they do not involve a maintenance window and they do not have any credible answers either.
Note 2: We profiled the memory of the application and it seems all FTP objects are being disposed of correctly.
Note 3: We executed a console app with the same FTP code after the maintenance window and it worked fine, while the Windows service was still timing out.
Any help much appreciated
We eventually got to the bottom of this issue, although not all questions were answered.
Using a different memory profiler, we found that two FtpWebRequest objects had been sitting in memory, undisposed, for days in the process. These undisposed objects were what was causing the problem.
Based on our research, we did the following to solve the issue:
Set the keep-alive to false
Set the connections lease timeout to a limited timeout value
Set the max idle time to a limited timeout value
Wrapped in a try/catch/finally, where the request is aborted in the finally block
We changed the code to the following:
var _request = (FtpWebRequest)WebRequest.Create(configuration.Url);
_request.Method = WebRequestMethods.Ftp.DownloadFile;
_request.Timeout = 20000;
_request.Credentials = new NetworkCredential("auser", "apassword");
_request.KeepAlive = false;
_request.ServicePoint.ConnectionLeaseTimeout = 20000;
_request.ServicePoint.MaxIdleTime = 20000;
try
{
using (var _response = (FtpWebResponse)_request.GetResponse())
using (var _responseStream = _response.GetResponseStream())
using (var _streamReader = new StreamReader(_responseStream))
{
this.c_fileData = _streamReader.ReadToEnd();
}
}
catch (Exception)
{
throw; // rethrow without resetting the stack trace
}
finally
{
_request.Abort();
}
To be honest, we are not sure whether everything here was necessary, but the problem no longer exists: objects do not hang around, and the application keeps working after a maintenance window, so we are happy!
When I perform a web request using HttpWebRequest in C#, I noticed that the first call to a URL/domain takes slightly longer than subsequent ones. Slightly longer in this case means about 100-150 ms longer, i.e. an overall time of 150-200 ms instead of 50 ms.
I googled this and came across several users reporting such behaviour. However, in all of these cases there was a delay of several seconds and the problem seemed to be related to proxy settings. That is not the case in my situation.
From experimenting with the "Connection: keep-alive" header I deduced that it has something to do with opening a connection. When I use "keep-alive", the delay is gone from the second request onward. When I use "Connection: close", all requests suffer from the described delay.
Here's the minimal code I use for reproducing this problem:
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;
for (int i = 0; i < 3; ++i) {
var url = "https://www.google.de";
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(url);
req.KeepAlive = true;
req.ReadWriteTimeout = 1500;
req.Timeout = 1500;
req.ServerCertificateValidationCallback = delegate { return true; };
req.Proxy = null;
req.ProtocolVersion = HttpVersion.Version11;
var start = DateTime.Now;
var resp = req.GetResponse();
var end = DateTime.Now;
Console.WriteLine((end - start).TotalMilliseconds);
resp.Dispose();
}
This normally produces an output like this:
173.0381
57.3195
66.4853
One might be tempted to say establishing the connection simply takes that long. So I analyzed the traffic with the analysis tool Fiddler. I added Console.WriteLine() calls for the two variables start and end to the code. That gives:
Start 07.08.2020 21:27:49.225
End 07.08.2020 21:27:49.430
Now I look at what Fiddler reports:
ClientConnected: 21:27:49.237
ClientBeginRequest: 21:27:49.335
GotRequestHeaders: 21:27:49.336
ClientDoneRequest: 21:27:49.336
ServerConnected: 21:27:49.274
FiddlerBeginRequest: 21:27:49.337
ServerGotRequest: 21:27:49.337
ServerBeginResponse: 21:27:49.401
GotResponseHeaders: 21:27:49.402
ServerDoneResponse: 21:27:49.430
ClientBeginResponse: 21:27:49.430
ClientDoneResponse: 21:27:49.430
Overall Elapsed: 0:00:00.094
So despite being connected at 21:27:49.274 the request only starts about 50 ms later at 21:27:49.335.
Things I've tried include the common recommendations that were given for similar issues on Stack Overflow and the web:
Set the proxy explicitly to null to prevent automatic search for system proxy
In Internet Explorer network settings disable "Automatic Detection of Settings"
Use another URL. In this example here I use Google so everyone can reproduce this, but I also tested it with a URL of my own web server and a simple PHP script just echoing the time.
Disabling the certificate check both via request specific req.ServerCertificateValidationCallback and the global ServicePointManager.ServerCertificateValidationCallback
Using a non-SSL URL. In this case the difference between the first and subsequent requests is still there; however, it seems to be smaller.
Bypass the DNS lookup by providing the IP address in the HttpWebRequest.Create() call and later changing the Host-Property of the request object
Changing other ServicePointManager-related settings, i.e. disabling the Nagle algorithm and the "Expect 100 Continue".
Use another computer. Use another Internet connection from a different provider. Use a VPN.
Use different versions of .NET. Normally I compile with Framework 4.8, but previous versions show the same behaviour. I tried .NET Core also. That has an even worse overall performance and the first request is still considerably slower than subsequent ones.
Use WebClient instead of HttpWebRequest
None of this resulted in a significant change in the behaviour; the first call is still slightly slower than all subsequent ones.
The one thing that did actually work was building the HTTPS request on my own using TcpClient and SslStream. In this case, all requests have the same latency of about 50 ms for Google. For most cases this is probably not the best solution; I would prefer to use an integrated .NET class.
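For illustration, a minimal sketch of such a hand-rolled HTTPS GET with TcpClient and SslStream (this is the general shape rather than my exact code; certificate validation is skipped here just like in the HttpWebRequest example above):
using System;
using System.IO;
using System.Net.Security;
using System.Net.Sockets;
using System.Text;

class ManualHttpsGet
{
    static void Main()
    {
        const string host = "www.google.de";
        for (int i = 0; i < 3; ++i)
        {
            var start = DateTime.Now;
            using (var client = new TcpClient(host, 443))
            using (var ssl = new SslStream(client.GetStream(), false, (sender, cert, chain, errors) => true))
            {
                ssl.AuthenticateAsClient(host);

                // Bare-bones HTTP/1.1 GET; "Connection: close" means every iteration opens its own socket.
                byte[] requestBytes = Encoding.ASCII.GetBytes(
                    "GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n");
                ssl.Write(requestBytes, 0, requestBytes.Length);
                ssl.Flush();

                // Drain the response; only the elapsed time matters for this test.
                new StreamReader(ssl).ReadToEnd();
            }
            Console.WriteLine((DateTime.Now - start).TotalMilliseconds);
        }
    }
}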
My questions are: Can you reproduce this? Might this be a .NET bug? Any more suggestions what I could try to prevent this?
I am trying to iterate over a list of 20,000+ customer records. I am using a Parallel.ForEach() loop to attempt to speed up the processing. Inside the delegate function, I am making an HTTP POST to an external web service to verify the customer information. In doing so, the loop is limited to 2 threads (or logical cores). If I attempt to increase the degree of parallelism, the process throws the error "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server".
Is this default behavior of the loop when working with external processes or a limitation of the receiving web server?
My code is rather straight forward:
Parallel.ForEach ( customerlist, new ParallelOptions {MaxDegreeOfParallelism = 3 },( currentCustomer ) =>
{
if ( IsNotACustomer ( currentCustomer.TIN ) == true ) // IsNotACustomer is where the HTTP POST takes place
{
    // ...write data to flat file...
}
});
If I change MaxDegreeOfParallelism to 2, the loop runs fine.
This code takes about 80 minutes to churn through 20,000 records. While that is not unacceptable, if I could shorten that time by increasing the number of threads, then all the better.
Full exception message (without stack trace):
System.Net.WebException: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.
at System.Net.HttpWebRequest.GetResponse()
Any assistance would be greatly appreciated.
EDIT
The HTTP POST code is:
HttpWebRequest request = ( HttpWebRequest )WebRequest.Create ( AppConfig.ESLBridgeURL + action );
request.Method = "POST";
byte[] bodyBytes = Encoding.UTF8.GetBytes ( body );
request.GetRequestStream ( ).Write ( bodyBytes, 0, bodyBytes.Length ); // write the encoded byte count, not the string length
Stream stream = request.GetResponse ( ).GetResponseStream ( );
StreamReader reader = new StreamReader ( stream );
output = reader.ReadToEnd ( );
The URL is to an in-house server running proprietary WebSphere MQ services, the gist of which is to check internal data sources to see whether or not we have a relationship with the customer.
We run this same check in our customer relationship management process at hundreds of sites per day. So I do not believe there is any licensing issue, and I am certain these MQ services can accept multiple calls per client.
EDIT 2
A little more research has shown the 2 connection limit is valid. However, using a ServicePointManager may be able to bypass this limitation. What I cannot find is a C# example of using the ServicePointManager with HttpWebRequests.
Can anyone point me to a valid resource or provide a code example?
You might be running up against the default 2 client limit. See System.Net.ServicePointManager.DefaultConnectionLimit on MSDN.
The maximum number of concurrent connections allowed by a ServicePoint object. The default value is 2.
Possibly relevant question: How Can I programmatically remove the 2 connection limit in WebClient?
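If it helps, a minimal sketch of raising the limit programmatically before issuing any requests (the URL and the value of 20 are placeholders, not recommendations):
using System;
using System.Net;

class RaiseConnectionLimit
{
    static void Main()
    {
        // Process-wide default picked up by any ServicePoint created afterwards.
        ServicePointManager.DefaultConnectionLimit = 20;

        // Or raise it only for a single endpoint.
        ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("https://example.com/"));
        sp.ConnectionLimit = 20;

        // Requests to that endpoint can now use up to 20 concurrent connections instead of 2.
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/");
        request.Method = "HEAD";
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusCode);
        }
    }
}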
Thank you Matt Stephenson and Matt Jordan for pointing me in the correct direction.
I found a solution that has cut my processing in half. I will continue to tweak to get the best results, but here is what I arrived at.
I added the following to the application config file:
<system.net>
<connectionManagement>
<add address="*" maxconnection="100"/>
</connectionManagement>
</system.net>
I then figured out how to use the ServicePointManager and set the following:
int dop = Environment.ProcessorCount;
ServicePointManager.MaxServicePoints = 4;
ServicePointManager.MaxServicePointIdleTime = 10000;
ServicePointManager.UseNagleAlgorithm = true;
ServicePointManager.Expect100Continue = false;
ServicePointManager.DefaultConnectionLimit = dop * 10;
ServicePoint sp = ServicePointManager.FindServicePoint ( new Uri ( AppConfig.ESLBridgeURL ) );
For my development machine, the Processor Count is 8.
This code, as is, allows me to process my 20,000+ records in roughly 45 minutes (give or take).
I am working on a WCF service that is hosted in a Windows service, using NetTcpBinding.
When I tried to perform a load test on the service, I built a simple client that calls the service about 1000 times per second. At first the calls take about 2 to 8 seconds to return, but after leaving the simple client running for about half an hour the response time increases, and some clients get timeout exceptions on the send timeout, which was configured to be 2 minutes.
These are the steps I tried to perform:
Revised the service throttling configuration, which now looks like this:
<serviceThrottling maxConcurrentCalls="2147483647" maxConcurrentInstances="2147483647" maxConcurrentSessions="2147483647"/>
I was working on a Windows 7 machine, so I moved to Server 2008, but got the same result.
Updated the configuration of the TCP binding to the following:
NetTcpBinding baseBinding = new NetTcpBinding(SecurityMode.None, true);
baseBinding.MaxBufferSize = int.MaxValue;
baseBinding.MaxConnections = int.MaxValue;
baseBinding.ListenBacklog = int.MaxValue;
baseBinding.MaxBufferPoolSize = long.MaxValue;
baseBinding.TransferMode = TransferMode.Buffered;
baseBinding.MaxReceivedMessageSize = int.MaxValue;
baseBinding.PortSharingEnabled = true;
baseBinding.ReaderQuotas.MaxDepth = int.MaxValue;
baseBinding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
baseBinding.ReaderQuotas.MaxArrayLength = int.MaxValue;
baseBinding.ReaderQuotas.MaxBytesPerRead = int.MaxValue;
baseBinding.ReaderQuotas.MaxNameTableCharCount = int.MaxValue;
baseBinding.ReliableSession.Enabled = true;
baseBinding.ReliableSession.Ordered = true;
baseBinding.ReliableSession.InactivityTimeout = new TimeSpan(23, 23, 59, 59);
BindingElementCollection elements = baseBinding.CreateBindingElements();
ReliableSessionBindingElement reliableSessionElement = elements.Find<ReliableSessionBindingElement>();
if (reliableSessionElement != null)
{
reliableSessionElement.MaxPendingChannels = 128;
TcpTransportBindingElement transport = elements.Find<TcpTransportBindingElement>();
transport.ConnectionPoolSettings.MaxOutboundConnectionsPerEndpoint = 1000;
CustomBinding newBinding = new CustomBinding(elements);
newBinding.CloseTimeout = new TimeSpan(0,20,9);
newBinding.OpenTimeout = new TimeSpan(0,25,0);
newBinding.ReceiveTimeout = new TimeSpan(23,23,59,59);
newBinding.SendTimeout = new TimeSpan(0,20,0);
newBinding.Name = "netTcpServiceBinding";
return newBinding;
}
else
{
throw new Exception("the base binding does not " +
"have ReliableSessionBindingElement");
}
Changed my service's functions to use async and await:
public async Task<ReturnObj> Connect(ClientInfo clientInfo)
{
var task = Task.Factory.StartNew(() =>
{
// do the needed work
// insert into database
// query some table to return information to client
return new ReturnObj(); // the lambda has to return a ReturnObj for the await below to compile
});
var res = await task;
return res;
}
And updated the client to use async and await in its calls to the service.
Applied the worker thread solution proposed in this link:
https://support.microsoft.com/en-us/kb/2538826
although I am using .NET 4.5.1, and set MinThreads to 1000 worker threads and 1000 IOCP threads.
After all this, the service started to handle more requests, but the delay still exists, and the simple client now takes about 4 hours before it starts getting timeouts.
The strange thing is that I found the service handles about 8 to 16 calls within 100 ms, regardless of the number of threads currently alive in the service.
I found a lot of articles that talk about configuration that needs to be placed in machine.config and Aspnet.config. I think this is not related to my case, as I am using NetTcp in a Windows service, not IIS, but I implemented these changes anyway and found no change in the results.
Could someone point me to what I am missing, or am I asking the service for something it can't support?
It could be how your test client is written. With NetTcp, when you create a channel, it tries to get one from the idle connection pool. If it's empty, then it opens a new socket connection. When you close a client channel, it's returned back to the idle connection pool. The default size of the idle connection pool is 10, which means once there are 10 connections in the idle pool, any subsequent closes will actually close the TCP socket. If your test code is creating and disposing of channels quickly, you could be discarding connections in the pool. You could then be hitting a problem of too many sockets in the TIME_WAIT state.
Here is a blog post describing how to modify the pooling behavior.
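As a rough sketch (mine, not taken from the blog post; the numbers are arbitrary), the client-side pool can be adjusted through the TcpTransportBindingElement of a CustomBinding:
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

static class NetTcpPooling
{
    public static CustomBinding CreatePooledBinding()
    {
        var baseBinding = new NetTcpBinding(SecurityMode.None);
        BindingElementCollection elements = baseBinding.CreateBindingElements();

        TcpTransportBindingElement transport = elements.Find<TcpTransportBindingElement>();
        // The default pool size is 10; once the pool is full, closing a channel really closes its socket.
        transport.ConnectionPoolSettings.MaxOutboundConnectionsPerEndpoint = 100;
        transport.ConnectionPoolSettings.IdleTimeout = TimeSpan.FromMinutes(2);

        return new CustomBinding(elements);
    }
}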
This is most likely due to the concurrency mode being set to Single (the default value). Try setting ConcurrencyMode to Multiple by adding a ServiceBehaviorAttribute to your service implementation.
Be sure to check the documentation: https://msdn.microsoft.com/en-us/library/system.servicemodel.concurrencymode(v=vs.110).aspx
Example:
// With ConcurrencyMode.Multiple, threads can call an operation at any time.
// It is your responsibility to guard your state with locks. If
// you always guarantee you leave state consistent when you leave
// the lock, you can assume it is valid when you enter the lock.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MultipleCachingHttpFetcher : IContract
You may also be interested in the Sessions, Instancing, and Concurrency article, which describes concurrency problems.
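A slightly fuller sketch of what that attribute looks like on a complete service (the contract and method names here are made up for illustration):
using System.ServiceModel;

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                 InstanceContextMode = InstanceContextMode.Single)]
public class CalculatorService : ICalculator
{
    private readonly object _stateLock = new object();
    private int _callCount;

    public int Add(int a, int b)
    {
        // With ConcurrencyMode.Multiple, several threads may execute this method at once,
        // so any shared state must be guarded explicitly.
        lock (_stateLock)
        {
            _callCount++;
        }
        return a + b;
    }
}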
I believe after lengthy research and searching, I have discovered that what I want to do is probably better served by setting up an asynchronous connection and terminating it after the desired timeout... But I will go ahead and ask anyway!
Quick snippet of code:
HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
webReq.Timeout = 5000;
HttpWebResponse response = (HttpWebResponse)webReq.GetResponse();
// this takes ~20+ sec on servers that aren't on the proper port, etc.
I have an HttpWebRequest method that is in a multi-threaded application, in which I am connecting to a large number of company web servers. In cases where the server is not responding, the HttpWebRequest.GetResponse() is taking about 20 seconds to time out, even though I have specified a timeout of only 5 seconds. In the interest of getting through the servers on a regular interval, I want to skip those taking longer than 5 seconds to connect to.
So the question is: "Is there a simple way to specify/decrease a connection timeout for a WebRequest or HttpWebRequest?"
I believe that the problem is that the WebRequest measures the time only after the request is actually made. If you submit multiple requests to the same address, then the ServicePointManager will throttle your requests and only actually submit as many concurrent connections as the value of the corresponding ServicePoint.ConnectionLimit, which by default gets its value from ServicePointManager.DefaultConnectionLimit. The application CLR host sets this to 2, the ASP host to 10. So if you have a multithreaded application that submits multiple requests to the same host, only two are actually placed on the wire; the rest are queued up.
I have not researched this to conclusive evidence that this is what really happens, but on a similar project I had, things were horrible until I removed the ServicePoint limitation.
Another factor to consider is the DNS lookup time. Again, this is my belief, not backed by hard evidence, but I think the WebRequest does not count the DNS lookup time against the request timeout. DNS lookup time can show up as a very big time factor on some deployments.
And yes, you must code your app around WebRequest.BeginGetRequestStream (for POSTs with content) and WebRequest.BeginGetResponse (for GETs and POSTs). Synchronous calls will not scale (I won't go into details why, but I do have hard evidence for that). Anyway, the ServicePoint issue is orthogonal to this: the queueing behavior happens with async calls too.
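To make the async route concrete, here is a hedged sketch of the BeginGetResponse pattern with a manually enforced timeout (the Timeout property does not apply to the asynchronous methods, so the request is aborted from a timer callback; the URL and the 5-second value are placeholders):
using System;
using System.Net;
using System.Threading;

class AsyncRequestSketch
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/");

        IAsyncResult result = request.BeginGetResponse(ResponseCallback, request);

        // Abort the request ourselves if no response arrives within 5 seconds.
        ThreadPool.RegisterWaitForSingleObject(result.AsyncWaitHandle,
            (state, timedOut) => { if (timedOut) ((HttpWebRequest)state).Abort(); },
            request, 5000, true);

        Console.ReadLine(); // keep the process alive for this demo
    }

    static void ResponseCallback(IAsyncResult ar)
    {
        var request = (HttpWebRequest)ar.AsyncState;
        try
        {
            using (var response = (HttpWebResponse)request.EndGetResponse(ar))
            {
                Console.WriteLine(response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // RequestCanceled here means our timeout fired and aborted the request.
            Console.WriteLine(ex.Status);
        }
    }
}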
Sorry for tacking on to an old thread, but I think something that was said above may be incorrect/misleading.
From what I can tell .Timeout is NOT the connection time, it is the TOTAL time allowed for the entire life of the HttpWebRequest and response. Proof:
I Set:
.Timeout=5000
.ReadWriteTimeout=32000
The connect and post time for the HttpWebRequest took 26 ms,
but the subsequent call to HttpWebRequest.GetResponse() timed out at 4974 ms, thus proving that the 5000 ms was the time limit for the whole send-request/get-response set of calls.
I didn't verify whether the DNS name resolution was measured as part of the time, as this is irrelevant to me since none of this works the way I really need it to; my intention was to time out more quickly when connecting to systems that weren't accepting connections, as shown by them failing during the connect phase of the request.
For example: I'm willing to wait 30 seconds on a connection request that has a chance of returning a result, but I only want to burn 10 seconds waiting to send a request to a host that is misbehaving.
Something I found later which helped, is the .ReadWriteTimeout property. This, in addition to the .Timeout property seemed to finally cut down on the time threads would spend trying to download from a problematic server. The default time for .ReadWriteTimeout is 5 minutes, which for my application was far too long.
So, it seems to me:
.Timeout = time spent trying to establish a connection (not including lookup time)
.ReadWriteTimeout = time spent trying to read or write data after connection established
More info: HttpWebRequest.ReadWriteTimeout Property
Edit:
Per #KyleM's comment, the Timeout property is for the entire connection attempt, and reading up on it at MSDN shows:
Timeout is the number of milliseconds that a subsequent synchronous request made with the GetResponse method waits for a response, and the GetRequestStream method waits for a stream. The Timeout applies to the entire request and response, not individually to the GetRequestStream and GetResponse method calls. If the resource is not returned within the time-out period, the request throws a WebException with the Status property set to WebExceptionStatus.Timeout.
(Emphasis mine.)
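Putting the two properties together, a small illustration (the values and URL are placeholders, not recommendations):
using System;
using System.IO;
using System.Net;

static class TimeoutSettingsSketch
{
    public static string Fetch(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Timeout = 5000;            // caps the whole GetResponse call (default is 100 seconds)
        request.ReadWriteTimeout = 10000;  // caps reads on the response stream (default is 5 minutes)

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}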
From the documentation of the HttpWebRequest.Timeout property:
A Domain Name System (DNS) query may take up to 15 seconds to return or time out. If your request contains a host name that requires resolution and you set Timeout to a value less than 15 seconds, it may take 15 seconds or more before a WebException is thrown to indicate a timeout on your request.
Is it possible that your DNS query is the cause of the timeout?
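If DNS turns out to be the culprit, one thing worth trying (my own sketch, not from the documentation quoted above) is resolving the host yourself and connecting by IP address while keeping the original host name in the Host header:
using System;
using System.Net;

class DnsBypassSketch
{
    static void Main()
    {
        var originalUri = new Uri("http://example.com/some/path");

        // Resolve up front so the lookup time is not hidden inside GetResponse().
        IPAddress[] addresses = Dns.GetHostAddresses(originalUri.Host);

        // Rebuild the URL around the IP address and restore the original Host header.
        var ipUri = new UriBuilder(originalUri) { Host = addresses[0].ToString() }.Uri;
        var request = (HttpWebRequest)WebRequest.Create(ipUri);
        request.Host = originalUri.Host;
        request.Timeout = 5000;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusCode);
        }
    }
}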
No matter what we tried we couldn't manage to get the timeout below 21 seconds when the server we were checking was down.
To work around this, we combined a TcpClient check to see if the domain was alive, followed by a separate check to see if the URL was active:
public static bool IsUrlAlive(string aUrl, int aTimeoutSeconds)
{
try
{
//check the domain first
if (IsDomainAlive(new Uri(aUrl).Host, aTimeoutSeconds))
{
//only now check the url itself
var request = System.Net.WebRequest.Create(aUrl);
request.Method = "HEAD";
request.Timeout = aTimeoutSeconds * 1000;
using (var response = (HttpWebResponse)request.GetResponse())
{
return response.StatusCode == HttpStatusCode.OK;
}
}
}
catch
{
}
return false;
}
private static bool IsDomainAlive(string aDomain, int aTimeoutSeconds)
{
try
{
using (TcpClient client = new TcpClient())
{
var result = client.BeginConnect(aDomain, 80, null, null);
var success = result.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(aTimeoutSeconds));
if (!success)
{
return false;
}
// we have connected
client.EndConnect(result);
return true;
}
}
catch
{
}
return false;
}
I am batch uploading products to a database.
I am downloading the images, via their URLs, to be used for the products.
The code I have written works fine for the first 25 iterations (always that number, for some reason), but then throws a System.Net.WebException: "The operation has timed out".
if (!File.Exists(localFilename))
{
using (WebClient Client = new WebClient())
{
Client.DownloadFile(remoteFilename, localFilename);
}
}
I checked the remote URL it was requesting and it is a valid image URL that returns an image.
Also, when I step through it with the debugger, I don't get the timeout error.
HELP! ;)
If I were in your shoes, here's a few possibilities I'd investigate:
if you're running this code from multiple threads, you may be bumping up against the System.Net.ServicePointManager.DefaultConnectionLimit property. Try increasing it to 50-100 when you start up your app. Note that I don't think this is your problem, but trying this is easier than the other stuff below. :-)
another possibility is that you're swamping the server. This is usually hard to do with a single-threaded client, but is possible since multiple other clients may be hitting the server also. But because the problem always happens at #25, this seems unlikely since you'd expect to see more variation.
you may be running into a problem with keepalive HTTP connections backing up between your client and the server. This also seems unlikely.
the hard cutoff of 25 makes me think that this may be a proxy or firewall limit, either on your end or the server's, where >25 connections made from one client IP to one server (or proxy) will get throttled.
My money is on the last one, given that it always breaks at a nice round number of requests and that stepping through in the debugger (aka slower!) doesn't trigger the problem.
To test all this, I'd start with the easy thing: stick in a delay (Thread.Sleep) before each HTTP call, and see if the problem goes away. If it does, reduce the delay until the problem comes back. If it doesn't, increase the delay up to a large number (e.g. 10 seconds) until the problem goes away. If it doesn't go away with a 10 second delay, that's truly a mystery and I'd need more info to diagnose.
If it does go away with a delay, then you need to figure out why, and whether the limit is permanent (e.g. the server's firewall, which you can't change) or something you can change. To get more info, you'll want to time the requests (e.g. check DateTime.Now before and after each call) to see if you see a pattern. If the timings are all consistent and suddenly get huge, that suggests network/firewall/proxy throttling. If the timings gradually increase, that suggests a server you're gradually overloading, lengthening its request queue.
In addition to timing the requests, I'd set the timeout of your webclient calls to be longer, so you can figure out if the timeout is infinite or just a bit longer than the default. To do this, you'll need an alternative to the WebClient class, since it doesn't support a timeout. This thread on MSDN Forums has a reasonable alternative code sample.
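The alternative typically looks something like the sketch below: a WebClient subclass that overrides GetWebRequest to set a timeout (my own illustration of the idea, not the exact code from that thread):
using System;
using System.Net;

public class TimeoutWebClient : WebClient
{
    // Timeout in milliseconds applied to every request this client creates.
    public int TimeoutMilliseconds { get; set; }

    public TimeoutWebClient(int timeoutMilliseconds)
    {
        TimeoutMilliseconds = timeoutMilliseconds;
    }

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        if (request != null)
        {
            request.Timeout = TimeoutMilliseconds;
        }
        return request;
    }
}
Usage would then be something like new TimeoutWebClient(120000).DownloadFile(remoteFilename, localFilename) with whatever timeout you want to test.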
An alternative to adding timing in your code is to use Fiddler:
download Fiddler and start it up.
set your WebClient code's Proxy property to point to the Fiddler proxy (localhost:8888); see the snippet after this list.
run your app and look at Fiddler.
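For the second step, pointing the WebClient at Fiddler might look like this (the URL and local path are placeholders):
// Fiddler listens on localhost:8888 by default.
using (var client = new System.Net.WebClient())
{
    client.Proxy = new System.Net.WebProxy("localhost", 8888);
    client.DownloadFile("http://example.com/image.jpg", @"C:\temp\image.jpg");
}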
It seems that WebClient is not closing the Response object it uses when done, which in your case causes many responses to be open at the same time; with a limit of 25 connections on the remote server, you get the 'Timeout' exception. When you debug, the earlier opened responses get closed due to their inner timeout, etc...
(I inspected WebClient with Reflector; I can't find an instruction that closes the response.)
I propose that you use HttpWebRequest & HttpWebResponse so that you can clean up objects after each download:
HttpWebRequest request;
HttpWebResponse response = null;
try
{
FileStream fs;
Stream s;
byte[] read;
int count;
read = new byte[256];
request = (HttpWebRequest)WebRequest.Create(remoteFilename);
request.Timeout = 30000;
request.AllowWriteStreamBuffering = false;
response = (HttpWebResponse)request.GetResponse();
s = response.GetResponseStream();
fs = new FileStream(localFilename, FileMode.Create);
// the loop condition already reads the next chunk; a second Read inside the body would drop data
while((count = s.Read(read, 0, read.Length)) > 0)
{
fs.Write(read, 0, count);
}
fs.Close();
s.Close();
}
catch (System.Net.WebException)
{
//....
}finally
{
//Close Response
if (response != null)
response.Close();
}
Here's a slightly simplified version of manji's answer:
private static void DownloadFile(Uri remoteUri, string localPath)
{
var request = (HttpWebRequest)WebRequest.Create(remoteUri);
request.Timeout = 30000;
request.AllowWriteStreamBuffering = false;
using (var response = (HttpWebResponse)request.GetResponse())
using (var s = response.GetResponseStream())
using (var fs = new FileStream(localPath, FileMode.Create))
{
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = s.Read(buffer, 0, buffer.Length)) > 0)
{
fs.Write(buffer, 0, bytesRead); // the while condition performs the next read
}
}
}
I had the same problem and I solved it by adding these lines to the app.config configuration file:
<system.net>
<connectionManagement>
<add address="*" maxconnection="100" />
</connectionManagement>
</system.net>