I have a simple console application which sends HTTP POST from multiple threads:
List<Task> tasks = new List<Task>();
for (int i = 0; i < 100; i++)
{
tasks.Add(Task.Factory.StartNew(() => SendQuery(url1, query1)));
tasks.Add(Task.Factory.StartNew(() => SendQuery(url2, query2)));
}
Task.WaitAll(tasks.ToArray());
SendQuery(string uri, string requestString) looks like this:
Uri url = new Uri(uri);
try
{
using (HttpClient client = new HttpClient { Timeout = new TimeSpan(0, 0, 10, 0) })
{
StringContent content = new StringContent(requestString);
content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
HttpResponseMessage response = client.PostAsync(url, content).Result;
response.EnsureSuccessStatusCode();
}
}
catch (Exception ex)
{
Console.WriteLine(ex);
}
The program runs without errors and all the queries are eventually processed, but after the task list is filled, each thread hangs on client.PostAsync(url, content).Result; only after several minutes does IIS start to process the queries. Why does this delay occur, and what is happening during this time? I am using IIS 7.5 on Windows Server 2008 R2 to host the web services behind url1 and url2.
Set this value to 500 or 1000 at the start of your program and let us know the effect.
Your requests may be getting throttled at the default value of 2 (depending on the .NET version).
ServicePointManager.DefaultConnectionLimit = 500;
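A hedged sketch of where the setting belongs in the question's console app (the limit value and the placeholder URL/payload are illustrative; SendQuery is the question's helper):

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Must run before the first request: the limit is read when the
        // ServicePoint for a host is created, not on later requests.
        ServicePointManager.DefaultConnectionLimit = 500;

        var tasks = new List<Task>();
        for (int i = 0; i < 100; i++)
        {
            tasks.Add(Task.Run(() => SendQuery("http://url1.example", "{\"q\":1}")));
        }
        Task.WaitAll(tasks.ToArray());
    }

    static void SendQuery(string uri, string requestString)
    {
        // body as in the question
    }
}
```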
Related
I'm trying to measure request duration on my API.
To do that, I use a simple test that calls the API n times concurrently.
private HttpClient _httpClient { get; } = new HttpClient();
[TestCase]
public async Task Perf_ExecuteAuthThreads()
{
await Parallel.ForEachAsync(Enumerable.Range(0, NB_THREADS).ToList(), new ParallelOptions
{
MaxDegreeOfParallelism = NB_THREADS
}, async (item, cancellationToken) => await CreateAuthThread(item));
}
private async Task CreateAuthThread(int i)
{
string tokenAuth = string.Empty;
DateTime startDate = DateTime.Now;
var stopWatchGlobal = new Stopwatch();
var stopWatchDetails = new Stopwatch();
stopWatchGlobal.Restart();
try
{
stopWatchDetails.Restart();
using var request = new HttpRequestMessage(HttpMethod.Post, RequestFactory.GetRoute(RequestFactory.RequestType.AUTH));
request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
request.Content = JsonContent.Create(RequestFactory.CreateRequest(RequestFactory.RequestType.AUTH));
// Authenticate Method
using HttpResponseMessage response = await _httpClient.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();
stopWatchDetails.Stop();
if (response.IsSuccessStatusCode)
{
tokenAuth = (await response.Content.ReadFromJsonAsync<Token>())!.JwtToken;
}
else
{
string responseAsString = await response.Content.ReadAsStringAsync();
//Assert.Fail("Internal server Error : {0}", responseAsString);
}
}
catch { /* DO NOTHING */ }
stopWatchGlobal.Stop();
// Get business transaction Guid
var transactionGuid = new JwtSecurityToken(jwtEncodedString: tokenAuth).Claims.FirstOrDefault(x => x.Type == Constants.JWT_CLAIMS_TRANSACTION_GUID)?.Value;
// Thread Id;TransactionGuid;Begin Date;End Date;Auth Duration;Result Business;Total Duration;
Console.WriteLine($"\"{i}\";\"{transactionGuid}\";\"{startDate.ToString("MM/dd/yyyy hh:mm:ss.fff tt")}\";\"{DateTime.Now.ToString("MM/dd/yyyy hh:mm:ss.fff tt")}\";\"{stopWatchDetails.ElapsedMilliseconds}\";\"{tokenAuth != string.Empty}\";\"{stopWatchGlobal.ElapsedMilliseconds}\"");
}
Assuming my local computer and the server have synchronized clocks, I notice a difference of about 3 seconds between sending the request from my machine and the API receiving it. No such difference exists between the API sending the response and my machine receiving it.
I know transport layers cost time, but 3 seconds seems expensive.
Do you know if it is possible to reduce this time? I don't think it's "burst time" (meaning: initialization time of the communication)...
My API (and my test code) are on .NET 6 and use a TLS connection (HTTPS). The request and response are classic JSON, without a lot of data.
I'm trying to send multiple API requests from a C# web service every 5 minutes using HttpClient. After some time all sockets are exhausted because they are not closed after the request is sent. How can I close all the w3wp.exe sockets from a web service method?
private static HttpClient client = new HttpClient();
public async Task sendReq()
{
var url = "http://url.com";
var sp = ServicePointManager.FindServicePoint(new Uri(url));
sp.ConnectionLeaseTimeout = 20 * 1000;
client.DefaultRequestHeaders.ConnectionClose = true;
client.DefaultRequestHeaders.Add("Connection", "Keep-Alive");
client.DefaultRequestHeaders.Add("Keep-Alive", "300");
client.DefaultRequestHeaders.ConnectionClose = true;
var res = await client.GetStringAsync(url);
}
I'm working on my graduation project and ran into this issue.
I'm making a control panel for a parking lot. I use an Arduino to read the sensors and a web server that sends the sensor values as JSON to a connected client.
I've made a C# form that indicates whether a car is parked in spot 1, 2, ... etc.
The form contains a second thread running an infinite loop that creates a WebClient to fetch the JSON from the Arduino web server; once received, the JSON is parsed and the spots are updated.
After a few seconds the thread freezes; more specifically, the WebClient call that opens the read just freezes.
I've spent a few days trying to fix this, with no success yet.
So I tried creating another thread that checks whether the connection has started; if getting the data takes more than a second, it aborts the WebClient's thread and creates a new one, to resume fetching the JSON, I thought.
But when the timer hits one second, the thread that tries to abort the other thread also freezes.
I gave up and decided to ask on Stack Overflow because I can't find a solution.
this is my thread for the WebClient:
private void startConnectionThread()
{
new Thread(() =>
{
while (true)
{
try
{
WebClient client = new WebClient();
Stream data = client.OpenRead("http://192.168.0.177/");
StreamReader reader = new StreamReader(data);
strResult = reader.ReadToEnd();
data.Close();
reader.Close();
handleResult(strResult);
Thread.Sleep(1000);
}
catch (Exception ex)
{
}
}
}).Start();
}
I can use all your help. Thanks in advance.
EDIT:
This is my method with HttpClient. It has the same issue, but after a few seconds the frozen connection recovers and it works again, so I see this as a solution to my problem. And when I add client.Timeout = TimeSpan.FromMilliseconds(300); it shortens the timeout.
private void startConnectionThread()
{
new Thread(async() =>
{
HttpClient client = new HttpClient();
client.Timeout = TimeSpan.FromMilliseconds(200);
while (true)
{
try
{
HttpResponseMessage response = await client.GetAsync("http://192.168.0.177/");
strResult = await response.Content.ReadAsStringAsync();
handleResult(strResult);
}
catch (Exception ex)
{
}
}
}).Start();
}
You need to reuse the same WebClient instance rather than recreating it in the loop: the previously created clients are still alive, holding connections open and listening.
So move it out of your loop...
var client = new WebClient();
while (true)
{
try
{
using (var data = client.OpenRead("http://192.168.0.177/"))
{
using (var reader = new StreamReader(data))
{
strResult = reader.ReadToEnd();
handleResult(strResult);
Thread.Sleep(1000);
}
}
}
catch (Exception ex)
{
}
}
This seems to be the solution for me: using HttpClient together with the change to the Timeout property did the trick.
private void startConnectionThread()
{
new Thread(async() =>
{
HttpClient client = new HttpClient();
client.Timeout = TimeSpan.FromMilliseconds(200);
while (true)
{
try
{
HttpResponseMessage response = await client.GetAsync("http://192.168.0.177/");
strResult = await response.Content.ReadAsStringAsync();
handleResult(strResult);
}
catch
{
// A timeout surfaces as TaskCanceledException; ignore and retry.
}
await Task.Delay(1000); // poll once per second instead of spinning
}
}).Start();
}
I have written an API endpoint and created a simple .NET Core console app that sends multiple simultaneous requests to it, to test how the endpoint behaves under load.
The code looks like this:
static async Task Main(string[] args)
{
// ARRANGE
int request_number = 200;
Task<HttpResponseMessage>[] tasks = new Task<HttpResponseMessage>[request_number];
Action[] actions = new Action[request_number];
for (int i = 0; i < request_number; i++)
{
int temp = i;
actions[temp] = () =>
{
tasks[temp] = CallMyAPI();
};
}
// ACT
Parallel.Invoke(actions);
await Task.WhenAll(tasks);
// ASSERT
string sample1 = await tasks[0].Result.Content.ReadAsStringAsync();
for (int i = 1; i < request_number; i++)
{
string toBeTested = await tasks[i].Result.Content.ReadAsStringAsync();
if (toBeTested != sample1)
{
Console.WriteLine("Wrong! i = " + i);
}
}
Console.WriteLine("finished");
Console.WriteLine("Press any key to complete...");
Console.Read();
}
static async Task<HttpResponseMessage> CallMyAPI()
{
var request = new HttpRequestMessage();
request.Method = HttpMethod.Post;
string contentString = "some json string as http body";
request.Content = new StringContent(contentString, System.Text.Encoding.UTF8, "application/json");
request.RequestUri = new Uri("http://myAPIendpoint.com");
HttpResponseMessage response;
using (HttpClient httpClient = new HttpClient())
{
response = await httpClient.SendAsync(request);
}
return response;
}
So basically what I'm trying to do is send off multiple requests at once, wait, and record all the responses. Then I compare them to verify that they all return the same response.
Initially, when I set request_number to a small number like 50 or 100, the test app runs well. However, as request_number goes up and reaches around 200, it starts to throw an exception that looks like:
Inner Exception 1:
IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request.
Inner Exception 2:
SocketException: The I/O operation has been aborted because of either a thread exit or an application request
What is this kind of exception supposed to mean?
Your problem is this:
using (HttpClient httpClient = new HttpClient())
{..
}
Each of these instances uses a new socket, and disposed sockets linger in the TIME_WAIT state for a while.
Use a single HttpClient, e.g. a static one:
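A minimal sketch of the suggested fix, reusing the shape of the question's CallMyAPI (the endpoint URL and body are the question's placeholders):

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static class Api
{
    // One shared client for the lifetime of the process; HttpClient is
    // safe for concurrent SendAsync calls from multiple threads.
    private static readonly HttpClient httpClient = new HttpClient();

    public static async Task<HttpResponseMessage> CallMyAPI()
    {
        var request = new HttpRequestMessage(HttpMethod.Post, "http://myAPIendpoint.com")
        {
            Content = new StringContent("some json string as http body",
                Encoding.UTF8, "application/json")
        };
        return await httpClient.SendAsync(request);
    }
}
```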
I need to download about 2 million files from the SEC website. Each file has a unique URL and is about 10 kB on average. This is my current implementation:
List<string> urls = new List<string>();
// ... initialize urls ...
WebBrowser browser = new WebBrowser();
foreach (string url in urls)
{
browser.Navigate(url);
while (browser.ReadyState != WebBrowserReadyState.Complete) Application.DoEvents();
StreamReader sr = new StreamReader(browser.DocumentStream);
StreamWriter sw = new StreamWriter(url.Substring(url.LastIndexOf('/')));
sw.Write(sr.ReadToEnd());
sr.Close();
sw.Close();
}
The projected time is about 12 days... is there a faster way?
Edit: by the way, the local file handling takes only 7% of the time.
Edit: this is my final implementation:
void Main(void)
{
ServicePointManager.DefaultConnectionLimit = 10000;
List<string> urls = new List<string>();
// ... initialize urls ...
int retries = urls.AsParallel().WithDegreeOfParallelism(8).Sum(arg => downloadFile(arg));
}
public int downloadFile(string url)
{
int retries = 0;
retry:
try
{
HttpWebRequest webrequest = (HttpWebRequest)WebRequest.Create(url);
webrequest.Timeout = 10000;
webrequest.ReadWriteTimeout = 10000;
webrequest.Proxy = null;
webrequest.KeepAlive = false;
using (HttpWebResponse webresponse = (HttpWebResponse)webrequest.GetResponse())
using (Stream sr = webresponse.GetResponseStream())
using (FileStream sw = File.Create(url.Substring(url.LastIndexOf('/'))))
{
sr.CopyTo(sw);
}
}
catch (Exception ee)
{
if (ee.Message != "The remote server returned an error: (404) Not Found." && ee.Message != "The remote server returned an error: (403) Forbidden.")
{
if (ee.Message.StartsWith("The operation has timed out") || ee.Message == "Unable to connect to the remote server" || ee.Message.StartsWith("The request was aborted: ") || ee.Message.StartsWith("Unable to read data from the transport connection: ") || ee.Message == "The remote server returned an error: (408) Request Timeout.") retries++;
else MessageBox.Show(ee.Message, "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
goto retry;
}
}
return retries;
}
Execute the downloads concurrently instead of sequentially, and set a sensible MaxDegreeOfParallelism; otherwise you will make too many simultaneous requests, which will look like a DoS attack:
public static void Main(string[] args)
{
var urls = new List<string>();
Parallel.ForEach(
urls,
new ParallelOptions{MaxDegreeOfParallelism = 10},
DownloadFile);
}
public static void DownloadFile(string url)
{
using(var sr = new StreamReader(HttpWebRequest.Create(url)
.GetResponse().GetResponseStream()))
using(var sw = new StreamWriter(url.Substring(url.LastIndexOf('/'))))
{
sw.Write(sr.ReadToEnd());
}
}
Download the files on several threads; the number of threads depends on your throughput. Also look at the WebClient and HttpWebRequest classes. A simple sample:
var list = new[]
{
"http://google.com",
"http://yahoo.com",
"http://stackoverflow.com"
};
Parallel.ForEach(list,
s =>
{
using (var client = new WebClient())
{
Console.WriteLine($"starting to download {s}");
string result = client.DownloadString(s);
Console.WriteLine($"finished downloading {s}");
}
});
I'd use several threads in parallel with a WebClient. I recommend setting the max degree of parallelism to the number of threads you want, since an unspecified degree of parallelism doesn't work well for long-running tasks. I've used 50 parallel downloads in one of my projects without a problem, but depending on the speed of an individual download, a much lower number might be sufficient.
If you download multiple files in parallel from the same server, you're limited by default to a small number (2 or 4) of parallel downloads. While the HTTP standard specifies such a low limit, many servers don't enforce it. Use ServicePointManager.DefaultConnectionLimit = 10000; to increase the limit.
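The two recommendations combined, as a hedged sketch (the limit values are illustrative, and file naming reuses the question's Substring approach):

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;

class Downloader
{
    static void Main()
    {
        // Raise the per-host connection limit first, then throttle the
        // actual concurrency with MaxDegreeOfParallelism.
        ServicePointManager.DefaultConnectionLimit = 50;

        var urls = new List<string>(); // ... initialize urls ...
        Parallel.ForEach(urls, new ParallelOptions { MaxDegreeOfParallelism = 50 }, url =>
        {
            using (var client = new WebClient())
            {
                // Save each file under the last segment of its URL.
                client.DownloadFile(url, url.Substring(url.LastIndexOf('/') + 1));
            }
        });
    }
}
```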
I think the code from o17t H1H' S'k looks right, but for I/O-bound work an async method should be used.
Like this:
public static async Task DownloadFileAsync(HttpClient httpClient, string url, string fileToWriteTo)
{
using HttpResponseMessage response = await httpClient.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);
using Stream streamToReadFrom = await response.Content.ReadAsStreamAsync();
using Stream streamToWriteTo = File.Open(fileToWriteTo, FileMode.Create);
await streamToReadFrom.CopyToAsync(streamToWriteTo);
}
Parallel.ForEach is also available as Parallel.ForEachAsync. Parallel.ForEach has a lot of features that the async version doesn't have, but most of them aren't needed here. You can implement a producer/consumer system with Channel or BlockingCollection to handle the 2 million files, but only if you don't know all the URLs at the start.
private static async Task StartDownload()
{
(string, string)[] urls = new ValueTuple<string, string>[]{
new ("https://dotnet.microsoft.com", "C:/YoureFile.html"),
new ( "https://www.microsoft.com", "C:/YoureFile1.html"),
new ( "https://stackoverflow.com", "C:/YoureFile2.html")};
var client = new HttpClient();
ParallelOptions options = new() { MaxDegreeOfParallelism = 2 };
await Parallel.ForEachAsync(urls, options, async (url, token) =>
{
await DownloadFileAsync(client, url.Item1, url.Item2);
});
}
Also look into this NuGet package. The GitHub wiki gives examples of how to use it. For downloading 2 million files it is a good fit, and it also has a retry function. To download a file you only have to create an instance of LoadRequest; it downloads the file, named after the remote file, into the Downloads directory.
private static void StartDownload()
{
string[] urls = new string[]{
"https://dotnet.microsoft.com",
"https://www.microsoft.com",
" https://stackoverflow.com"};
foreach (string url in urls)
new LoadRequest(url).Start();
}
I hope this helps to improve the code.