This question already has answers here:
Parallel tasks performance in c#
(2 answers)
Closed 1 year ago.
I sent more than 100 requests to a web service using Parallel.ForEach and it handled them well, but when I watch the traffic in Wireshark I only see 4 to 10 requests per second.
Then I tried the same case on the same machine with the SoapUI tool in bulk multi-threading mode, and there all 100 requests were sent within the same second.
Any advice, noting that I am using:
C# 2017
.NET Framework 4.5
OS: Windows 10
CPU: i7, 4 cores
RAM: 16 GB
This is not a Parallel.ForEach issue - the issue is ServicePointManager, which throttles the number of parallel requests to any one domain name. The default limit is low (2 concurrent connections per endpoint in the .NET Framework), which explains the small number of simultaneous requests you observed.
https://learn.microsoft.com/en-us/dotnet/api/system.net.servicepointmanager?view=net-5.0
Parallel.ForEach will queue all the tasks, but then the requests run into this limitation. Oh, and that manager can be configured ;)
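A rough sketch of configuring that limit (the value 100 and the URL are only examples; set it before the first request to the host goes out):

```csharp
// Sketch only: raise the per-host connection limit before issuing any requests.
// The limit value (100) and the endpoint URL are placeholders.
using System;
using System.Net;

class ConnectionLimitSetup
{
    static void Main()
    {
        // Applies to every endpoint that has not created its ServicePoint yet.
        ServicePointManager.DefaultConnectionLimit = 100;

        // Or target a single (hypothetical) service endpoint:
        ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("https://example.com/service"));
        sp.ConnectionLimit = 100;
    }
}
```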
This question already has answers here:
Does TcpClient write method guarantees the data are delivered to server?
(5 answers)
Can I free my data immediately after send() in blocking mode?
(2 answers)
what happens when I write data to a blocking socket, faster than the other side reads?
(2 answers)
Closed 2 years ago.
I created this proof of concept code to exercise my understanding of how Socket.Send behaves.
The host I'm pointing to is actually in the Australia Central Azure datacenter (and I'm in Brazil, so it's a half-world distance), and yet the average is between 40 and 70 TICKS. And I'm not talking milliseconds, ticks!
Can anyone explain to me what is going on?
I was expecting the average to be close to 200 milliseconds or so, but right now it's not even close to 1 ms!
From the docs:
[...] A successful completion of the Send method means that the underlying system has had room to buffer your data for a network send.
So there's no guarantee that the data has actually reached the destination once the Send method returns.
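To illustrate the point, here is a minimal sketch (hypothetical host and port, not the questioner's code): timing Send() alone only measures how long it takes to hand the bytes to the OS, while timing until a reply arrives shows the actual round trip.

```csharp
// Sketch only: Send() returns as soon as the data is copied into the OS send
// buffer, so the stopwatch around Send() alone measures a local memory copy.
using System;
using System.Diagnostics;
using System.Net.Sockets;
using System.Text;

class SendTimingDemo
{
    static void Main()
    {
        using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
        {
            socket.Connect("example.com", 7); // assumes an echo-style service, for illustration

            byte[] payload = Encoding.ASCII.GetBytes("ping");
            byte[] reply = new byte[payload.Length];

            var sw = Stopwatch.StartNew();
            socket.Send(payload);                 // completes once the data is buffered locally
            int received = socket.Receive(reply); // blocks until the remote side answers
            sw.Stop();

            Console.WriteLine($"Round trip: {sw.ElapsedMilliseconds} ms ({received} bytes echoed)");
        }
    }
}
```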
This question already has answers here:
How do I obtain the latency between server and client in C#?
(5 answers)
Closed 4 years ago.
I am trying to obtain the latency between two servers. How can I obtain it in milliseconds?
Is the round-trip time reported by ping the same as the latency?
Yes, just ping the other server; the reported round-trip time is the latency.
See this question for examples: Using ping in c#
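A minimal sketch using the built-in Ping class (the host name and timeout are placeholders):

```csharp
// Sketch only: measure latency with System.Net.NetworkInformation.Ping.
// "server2.example.com" and the 2000 ms timeout are example values.
using System;
using System.Net.NetworkInformation;

class LatencyCheck
{
    static void Main()
    {
        using (var ping = new Ping())
        {
            PingReply reply = ping.Send("server2.example.com", 2000);

            if (reply.Status == IPStatus.Success)
                Console.WriteLine($"Latency: {reply.RoundtripTime} ms");
            else
                Console.WriteLine($"Ping failed: {reply.Status}");
        }
    }
}
```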
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I have 20 text files stored on the hard disk, each containing millions of records about an educational organization. Suppose I have a method that iterates over the text files in a loop and processes them. Which is the better way to do the work: starting a thread for each text file (Task.Factory.StartNew()) or a process for each text file (Process.Start())?
EDIT
I have an 8 GB RAM, 8-core server, so I thought of processing them with threads or processes. Currently I am using processes and I don't see any bottleneck as of now, but I am in a dilemma about whether to use threads or processes.
The reading speed of the hard disk will most likely be the bottleneck here.
So, depending on the processing you need to do on the data, it might or might not be interesting to use multiple threads (and I would certainly not use processes).
The most important thing, however, will be to make sure that multiple threads are not accessing the same physical disk at the same time, because the constant switching and seeking of the HDD heads would slow everything down.
I have done some testing with that recently, and in some cases (depending on the HDD and/or PC) the OS takes care of it and it doesn't make a big difference; with another combination, however, throughput dropped to 1/10 of the normal speed.
So, if you use multiple threads (only needed if processing your data takes longer than reading it from the HDD!), make sure you have a lock somewhere to prevent multiple threads from reading from the disk at the same time - for example, something like the sketch below.
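A rough sketch of that idea, with a shared lock so only one worker thread touches the disk at any moment (the folder path and chunk size are placeholders):

```csharp
// Sketch only: workers run in parallel, but the lock serializes the actual disk reads.
using System.IO;
using System.Threading.Tasks;

class SerializedDiskReads
{
    static readonly object DiskLock = new object();

    static void Main()
    {
        string[] files = Directory.GetFiles(@"C:\data", "*.txt"); // hypothetical folder

        Parallel.ForEach(files, file =>
        {
            using (FileStream stream = File.OpenRead(file))
            {
                byte[] buffer = new byte[1 << 20]; // 1 MB chunks
                while (true)
                {
                    int read;
                    lock (DiskLock) // only one thread touches the disk at a time
                    {
                        read = stream.Read(buffer, 0, buffer.Length);
                    }
                    if (read == 0)
                        break;

                    ProcessChunk(buffer, read); // CPU-bound work happens outside the lock
                }
            }
        });
    }

    static void ProcessChunk(byte[] data, int count)
    {
        // placeholder for the real per-file parsing/processing
    }
}
```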
You might also want to look into memory mapped files for this.
edit:
In case you are working with buffers, you could start one thread to continuously fill the buffers, while another thread processes the data.
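A minimal sketch of that reader/processor split, using a BlockingCollection for the hand-off (the file path and buffer size are placeholders):

```csharp
// Sketch only: one task keeps the disk busy filling buffers, another consumes them.
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class ReaderProcessorPipeline
{
    static void Main()
    {
        var buffers = new BlockingCollection<byte[]>(boundedCapacity: 4);

        Task reader = Task.Run(() =>
        {
            using (FileStream stream = File.OpenRead(@"C:\data\file1.txt")) // hypothetical file
            {
                while (true)
                {
                    byte[] buffer = new byte[1 << 20];
                    int read = stream.Read(buffer, 0, buffer.Length);
                    if (read == 0) break;
                    Array.Resize(ref buffer, read);
                    buffers.Add(buffer); // blocks if the processor falls behind
                }
            }
            buffers.CompleteAdding();
        });

        Task processor = Task.Run(() =>
        {
            foreach (byte[] chunk in buffers.GetConsumingEnumerable())
            {
                // placeholder: process the chunk while the reader keeps the disk busy
            }
        });

        Task.WaitAll(reader, processor);
    }
}
```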
edit2 (in answer to Micky):
"Process or thread which is best ,faster and take less memory?"
As I said, I would not use processes (due to the extra overhead). That leaves threads, or no threads at all - depending on the amount of processing that needs to be done on the data. If data is read directly from memory buffers (instead of using something like readline for example, where all bets would be off), one or max. two threads would probably be the best option (if the processing of the data is fast enough - testing and timing would be needed to be sure).
As for speed and memory usage: best option (for me) would be memory mapped files (with the files opened in forward only mode). This would not only take advantage of the efficiency of the OS disk cache, but would also access the kernel-memory directly - while, when working with (user)buffers, memory has to be copied from kernel- to userspace, which takes time and uses extra memory.
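As a rough illustration of reading a file through a memory-mapped view (the file path is a placeholder; the pages behind the view are served from the OS cache on demand):

```csharp
// Sketch only: sequentially scan a file through a memory-mapped view.
using System.IO;
using System.IO.MemoryMappedFiles;

class MappedSequentialRead
{
    static void Main()
    {
        string path = @"C:\data\file1.txt"; // hypothetical file

        using (MemoryMappedFile mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
        using (MemoryMappedViewStream view = mmf.CreateViewStream())
        {
            byte[] buffer = new byte[1 << 20];
            int read;
            while ((read = view.Read(buffer, 0, buffer.Length)) > 0)
            {
                // placeholder: scan the bytes; pages are faulted in from the OS cache as needed
            }
        }
    }
}
```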
IOCP: OK, but it depends on what the threads would be asking for. For example, if 10 threads each requested 100 KB in turn (from the different files), 10 x 10 ms of seek time would be needed, while reading 100 KB itself takes less than 1 ms. Seek times for future requests would depend on how IOCP handles the caching, which would probably be the same as using memory mapping, but I don't think IOCP would be any faster in this case.
And using IOCP would probably also mean copying/filling buffers in user space (and it is probably harder to handle in general). But I have to say, while writing my answer I was thinking of C/C++ (using direct access to memory buffers), only to see later that the question was about C#. Although the principles stay the same, maybe there's an easy way in C# to use async I/O with IOCP.
As for the speed-testing and avoiding the reading at the same time: I have done testing with more than 50 threads on large files (via memory mapping) - and if done correctly, no reading-speed is lost. On the other hand, when just firing some threads and letting them access the hdd at random (even in large blocks), total reading-speed could come down to 10% in some cases - and sometimes not at all. Same PC, other hdd, other results.
This question already has answers here:
How to get memory available or used in C#
(6 answers)
Closed 9 years ago.
I'm creating a server (console app), but after doing some long-term testing I found out that it keeps eating RAM as it runs. For the local test suite I am not working with much RAM (8 GB DDR3 @ 2400 MHz). Is there a way (in Program.cs, I assume) to restart the program if it is using over 'x' amount of RAM? Also, would a timed loop/check-up be one way to do it?
You can use GC.GetTotalMemory. It returns an approximate value (long) of how much memory your program has allocated.
You can create a Timer object and make this comparison under the Tick event handler.
For more information, you can look here: http://msdn.microsoft.com/en-us/library/system.gc.gettotalmemory.aspx
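A minimal sketch of that check. Since the server is a console app, this version uses System.Timers.Timer and its Elapsed event rather than a Tick handler; the threshold and interval are example values:

```csharp
// Sketch only: periodically compare GC.GetTotalMemory against a threshold.
using System;
using System.Timers;

class MemoryWatch
{
    const long ThresholdBytes = 500L * 1024 * 1024; // example: 500 MB

    static void Main()
    {
        var timer = new Timer(10000); // check every 10 seconds
        timer.Elapsed += (sender, e) =>
        {
            long used = GC.GetTotalMemory(false); // approximate managed allocation
            if (used > ThresholdBytes)
            {
                Console.WriteLine($"Memory above threshold: {used / (1024 * 1024)} MB");
                // placeholder: trigger a graceful restart (or exit so a watchdog restarts us)
            }
        };
        timer.Start();

        Console.ReadLine(); // keep the console server alive for the demo
    }
}
```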
I agree with what others have said about fixing your memory leak.
If you want to restart your program, create a second application that monitors the first process. Then, when memory gets too high in your original app, safely shut it down and allow the second application to launch it again.
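A rough sketch of such a watchdog (the executable path, threshold, and polling interval are placeholders, and the shutdown step is deliberately simplified):

```csharp
// Sketch only: a second console app that polls the server process's working set
// and restarts the process when it grows too large.
using System.Diagnostics;
using System.Threading;

class Watchdog
{
    const long ThresholdBytes = 1024L * 1024 * 1024;      // example: 1 GB
    const string ServerPath = @"C:\server\MyServer.exe";  // hypothetical path

    static void Main()
    {
        Process server = Process.Start(ServerPath);

        while (true)
        {
            Thread.Sleep(10000); // poll every 10 seconds
            server.Refresh();    // re-read the process counters

            if (server.HasExited || server.WorkingSet64 > ThresholdBytes)
            {
                if (!server.HasExited)
                {
                    // In a real setup, signal the server to shut down cleanly first.
                    server.Kill();
                    server.WaitForExit();
                }
                server = Process.Start(ServerPath);
            }
        }
    }
}
```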
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How Can I Set Processor Affinity in .NET?
I have an i7 930x; my computer has a 4-core CPU with 8 logical processors (8 hardware threads), so: processor 1, 2, 3, 4 ... 8.
But I want to use only processor number 1; I don't want it to use the other processors.
Is that possible in C#?
Example:
my calc program uses CPU No. 1
my sound program uses CPU No. 2
my Network A program uses CPU No. 1
my Network B program uses CPU No. 3
It is possible to set a processor affinity mask for threads/processes to accomplish that. For some high performance programs using thread pools and work queues it can improve performance. In all other cases it is better to let the OS handle the scheduling.
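A minimal sketch of setting the affinity mask from C# (the process name is a placeholder; bit 0 of the mask corresponds to the first logical processor):

```csharp
// Sketch only: pin the current process, or another running process, to specific
// logical processors via Process.ProcessorAffinity.
using System;
using System.Diagnostics;

class AffinityDemo
{
    static void Main()
    {
        // Restrict the current process to logical processor 1 only (mask bit 0).
        Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)0x1;

        // Or pin an already-running process (hypothetical name) to processor 3 (mask bit 2).
        foreach (Process p in Process.GetProcessesByName("MyNetworkApp"))
            p.ProcessorAffinity = (IntPtr)0x4;
    }
}
```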