Ten times Thread.Sleep(100) or a single Thread.Sleep(1000)?

Is there a difference (from performance perspective) between:
Thread.Sleep(10000);
and
for(int i=0; i<100; i++)
{
Thread.Sleep(100);
}
Does the single call to Thread.Sleep(10000) also result in context switches during those 10 seconds (so the OS can check whether it is done sleeping), or is this thread really not scheduled for 10 seconds?

The second version (the loop) requires more context switches and should be a little slower than Thread.Sleep(10000).
In any case, you can use the System.Diagnostics.Stopwatch class to measure the exact time of the two approaches. I believe the difference will be very small.
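For reference, a minimal sketch of such a measurement (assuming a plain console app; the exact numbers will vary with the OS timer resolution):

using System;
using System.Diagnostics;
using System.Threading;

class SleepComparison
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Thread.Sleep(10000);                  // one 10-second sleep
        sw.Stop();
        Console.WriteLine("Single sleep:  " + sw.Elapsed);

        sw.Restart();
        for (int i = 0; i < 100; i++)         // 100 sleeps of 100 ms
        {
            Thread.Sleep(100);
        }
        sw.Stop();
        Console.WriteLine("Looped sleeps: " + sw.Elapsed);
    }
}

You would typically see the looped version come out slightly longer, because each Sleep(100) is rounded up to the system timer granularity and the small overruns accumulate over 100 iterations.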

In any case, the looped version adds some overhead:
Each Thread.Sleep call is a separate call into the OS scheduler, so the thread is suspended and rescheduled on every iteration.
There is also the (small) cost of running the loop itself.
And if the code runs on a single thread anyway, there is little reason to loop at all unless you need a point where you can break out early.

Why does the following C# program use a limited (10) number of threads? [duplicate]

I have just made a multithreading sample using This Link, like below:
Console.WriteLine("Number of Threads: {0}", System.Diagnostics.Process.GetCurrentProcess().Threads.Count);
int count = 0;
Parallel.For(0, 50000, (i, state) =>
{
count++;
});
Console.WriteLine("Number of Threads: {0}", System.Diagnostics.Process.GetCurrentProcess().Threads.Count);
Console.ReadKey();
It gives me 15 threads before Parallel.For and only 17 threads after it, so only 2 threads are occupied by Parallel.For.
Then I created another sample using This Link, like below:
var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount * 10 };
Console.WriteLine("MaxDegreeOfParallelism : {0}", Environment.ProcessorCount * 10);
Console.WriteLine("Number of Threads: {0}", System.Diagnostics.Process.GetCurrentProcess().Threads.Count);
int count = 0;
Parallel.For(0, 50000, options,(i, state) =>
{
count++;
});
Console.WriteLine("Number of Threads: {0}", System.Diagnostics.Process.GetCurrentProcess().Threads.Count);
Console.ReadKey();
In the above code I have set MaxDegreeOfParallelism to 40, but Parallel.For still uses the same number of threads.
So how can I increase the number of threads used by Parallel.For?
I am facing a problem where some numbers are skipped inside the Parallel.For when I perform heavy and complex work inside it, so I want to increase the maximum number of threads and work around the skipping issue.
What you're saying is something like: "My car is shaking when driving too fast. I'm trying to avoid this by driving even faster." That doesn't make any sense. What you need is to fix the car, not change the speed.
How exactly to do that depends on what you are actually doing in the loop. The code you showed is obviously a placeholder, but even that is wrong. So I think what you should do first is to learn about thread safety.
Using a lock is one option, and it's the easiest one to get correct. But it's also hard to make it efficient. What you need is to lock only for a short amount of time each iteration.
There are other ways to achieve thread safety, including Interlocked, the overloads of Parallel.For that use thread-local data, and approaches other than Parallel.For(), like PLINQ or TPL Dataflow.
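For the counter in the question, a minimal sketch of the Interlocked approach (assuming the loop body really only needs to increment a shared counter):

using System;
using System.Threading;
using System.Threading.Tasks;

class SafeCounter
{
    static void Main()
    {
        int count = 0;
        Parallel.For(0, 50000, i =>
        {
            // Atomic increment: no lock needed, and the final total is exact.
            Interlocked.Increment(ref count);
        });
        Console.WriteLine(count); // always 50000
    }
}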
Only after you have made sure your code is thread safe is it time to worry about things like the number of threads. Regarding that, I think there are two things to note:
For CPU-bound computations, it doesn't make sense to use more threads than the number of cores your CPU has. Using more threads than that will actually usually lead to slower code, since switching between threads has some overhead.
I don't think you can measure the number of threads used by Parallel.For() like that. Parallel.For() uses the thread pool and it's quite possible that there already are some threads in the pool before the loop begins.
Parallel loops use hardware CPU cores. If your CPU has 2 cores, that is the maximum degree of parallelism you can get on your machine.
Taken from MSDN:
What to Expect
By default, the degree of parallelism (that is, how many iterations run at the same time in hardware) depends on the number of available cores. In typical scenarios, the more cores you have, the faster your loop executes, until you reach the point of diminishing returns that Amdahl's Law predicts. How much faster depends on the kind of work your loop does.
Further reading:
Threading vs Parallelism, how do they differ?
Threading vs. Parallel Processing
Parallel loops will give you a wrong result for a summation without locks, because each iteration depends on the single shared variable count, and the order in which it is updated inside a parallel loop is not predictable. However, using locks inside a parallel loop prevents actual parallelism, so you should test the parallel loop with something other than a plain summation (or do the summation with thread-local state, as sketched below).
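If you do want to keep a summation as the test, one sketch of doing it without a per-iteration lock is the thread-local overload of Parallel.For, where each worker accumulates privately and the lock is taken only once per worker at the end:

using System;
using System.Threading.Tasks;

class ThreadLocalSum
{
    static void Main()
    {
        long total = 0;
        object sync = new object();

        Parallel.For(0, 50000,
            () => 0L,                        // per-worker initial subtotal
            (i, state, local) => local + 1,  // accumulate with no locking
            local =>                         // merge each worker's subtotal once
            {
                lock (sync) { total += local; }
            });

        Console.WriteLine(total); // always 50000
    }
}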

C# How to loop the maximum amount of times

When going through a really long array, or doing a complicated calculation per index, is there a way to yield after iterating for a maximum amount of time? The maximum amount of time is the maximum time per frame.
For example:
for (int i = 0; i < 100000; i++) {
// do something complicated;
if (maximum amount of time has passed /* right before the user feels lag */)
yield; // come back and resume at i where it last yielded
}
// order does not matter
So basically, what I want to achieve is high CPU usage; however, I do not want to push it to the point where the user experiences lag.
edit:
Sorry for the confusion. A clearer example might be 3D rendering in a program such as Blender. When the user hits render, it calculates each pixel to determine what color it needs to be. If you look at the CPU usage it is close to 100%, yet the program does not freeze while it calculates as many pixels as possible.
If you are running your code on multiple CPUs (as implied by the multithreading tag), there should (in the usual case) be no need to stop executing the loop in order for your user interface to remain responsive. Perform the calculation on one or more background threads, and have those background threads update the UI thread as appropriate.
is there a way to yield after iterating through the array for the maximum amount of time
If by yield you mean just stop (and restart from the beginning next frame), then sure. You can pass a CancellationToken to your thread, and have it periodically check for a cancellation request. You can use a timer at the start of each frame to fire off that request, or more likely, use an existing mechanism that already does end-of-frame processing to trigger the thread to stop work.
If by yield you mean stop where I am and resume at that place at the start of the next frame, I would ask why stop given that you have multiple CPUs. If you must stop, you can use the CancellationToken as before, but just keep track of where you are in the loop, resuming from there instead of at the start.
So basically, what I want to achieve is high percent usage for the cpu, however, I do not want it to go beyond 100%, which the user will experience lag
You can never go over 100% CPU usage by definition. To avoid the feeling of lag when the CPU utilization is high, use thread priorities to ensure that the foreground thread has a higher priority than your background threads.
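A minimal sketch of the cancellation approach described above (DoComplicatedStep and the 16 ms budget are placeholders, not anything from the original question):

using System;
using System.Threading;
using System.Threading.Tasks;

class BackgroundWork
{
    static void ProcessItems(int[] items, CancellationToken token)
    {
        for (int i = 0; i < items.Length; i++)
        {
            if (token.IsCancellationRequested)
                return;                       // stop here (or remember i to resume later)

            DoComplicatedStep(items[i]);      // placeholder for the heavy per-item work
        }
    }

    static void DoComplicatedStep(int item) { /* ... */ }

    static void Main()
    {
        var cts = new CancellationTokenSource();
        var items = new int[100000];

        // Run the loop off the UI thread; the UI or end-of-frame code can cancel it.
        Task work = Task.Run(() => ProcessItems(items, cts.Token));

        cts.CancelAfter(TimeSpan.FromMilliseconds(16));   // e.g. one frame's budget
        work.Wait();
    }
}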
Unless I'm missing something....
double MAX_PROCESSTIME = 50.0;
DateTime loopStart = DateTime.Now;
for (int i = 0; i < 100000; i++) {
// do something complicated;
double timePassed = (DateTime.Now - loopStart).TotalMilliseconds;
if (timePassed > MAX_PROCESSTIME)
{
break;
}
}
How about considering a push model instead: iterate in parallel and raise an event, so the consumer treats each item as it arrives?
Usually the solution to this problem is to move the work to a separate thread that can't interrupt the UI, and let the UI or a controller thread cancel the work when called for.
Another option: I've read somewhere that typical humans have a perception threshold of about 25 milliseconds; two events are perceived to occur at the same time as long as they are less than 25 milliseconds apart. Sadly, I can no longer find the original reference, but I did at least find a corroborating article. You can use this fact to set a timer for about that long and let the process run as much as it wants until the timer goes off. You may also want to account for the atypical human, especially if your app caters to people who may have above-average reflexes.
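As a sketch of the "stop where I am and resume next frame" variant, you can simply keep the loop index between frames and give each frame a time budget (the budget value and DoComplicatedStep are assumptions to be replaced with your own):

using System.Diagnostics;

class TimeSlicedWork
{
    private int _next;                        // where to resume on the next frame
    private readonly int _total = 100000;

    // Call once per frame; returns true when all items have been processed.
    public bool RunSlice(double budgetMilliseconds)
    {
        var sw = Stopwatch.StartNew();
        while (_next < _total)
        {
            DoComplicatedStep(_next);         // placeholder for the per-item work
            _next++;

            if (sw.Elapsed.TotalMilliseconds >= budgetMilliseconds)
                return false;                 // budget spent; resume here next frame
        }
        return true;
    }

    private void DoComplicatedStep(int i) { /* ... */ }
}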

How to capture accurate execution time in C#

I am trying to capture the exact execution time of a function:
Stopwatch regularSW = new Stopwatch();
for (int i = 0; i < 10; i++) {
regularSW.Start();
//function();
regularSW.Stop();
Console.WriteLine("Measured time: " + regularSW.Elapsed);
}
I also tried DateTime and Process.GetCurrentProcess().TotalProcessorTime,
but each time I get a different value.
How can I get the same value?
With Stopwatch you are already using the most accurate approach, but you are not restarting it in the loop, so it keeps accumulating from the value where it stopped. You either have to create a new Stopwatch or call Stopwatch.Restart instead of Start:
Stopwatch regularSW = new Stopwatch();
for (int i = 0; i < 10; i++) {
regularSW.Restart();
//function();
regularSW.Stop();
Console.WriteLine("Measured time: " + regularSW.Elapsed);
}
That's the reason for the different values. If you still get different values now, then the reason is that the method function really does have varying execution times, which is not that unlikely (e.g. if it's a database query).
Since this question seems to be largely theoretical (judging by your comments), consider the following things if you want to measure time in .NET (a measurement sketch follows the list):
Compile and run in Release mode, Any CPU (on an x64 machine), with optimizations on.
A tick is 0.0001 milliseconds, so don't overestimate your results
They are different because you cannot control what other operations your system might need to perform in the background while your C# program is running.
If you, for example, allocate memory in the method because you fill a local list, then the garbage collector might kick in to reclaim memory.
C# code is compiled Just In Time. The first time you go through a loop can therefore be hundreds or thousands of times more expensive than every subsequent time due to the cost of the jitter analyzing the code that the loop calls. If you are intending on measuring the "warm" cost of a loop then you need to run the loop once before you start timing it. If you are intending on measuring the average cost including the jit time then you need to decide how many times makes up a reasonable number of trials, so that the average works out correctly
you are running your code in a multithreaded, multiprocessor environment where threads can be switched at will, and where the thread quantum (the amount of time the operating system will give another thread until yours might get a chance to run again) is about 16 milliseconds. 16 milliseconds is about fifty million processor cycles. Coming up with accurate timings of sub-millisecond operations can be quite difficult if the thread switch happens within one of the several million processor cycles that you are trying to measure. Take that into consideration.
The last two points were copied from this answer of Eric Lippert (worth reading).
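Putting the warm-up and averaging advice above into code, a minimal sketch (function() stands in for whatever you are actually measuring):

using System;
using System.Diagnostics;

class Measure
{
    static void function() { /* the code you want to time */ }

    static void Main()
    {
        function();                           // warm-up run so the JIT cost is excluded

        const int trials = 10;
        var sw = new Stopwatch();
        TimeSpan total = TimeSpan.Zero;

        for (int i = 0; i < trials; i++)
        {
            sw.Restart();
            function();
            sw.Stop();
            total += sw.Elapsed;
            Console.WriteLine("Trial {0}: {1}", i, sw.Elapsed);
        }

        Console.WriteLine("Average: {0} ms", total.TotalMilliseconds / trials);
    }
}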

Acceptable use of Thread.Sleep()

I'm working on a console application which will be scheduled and run at set intervals, say every 30 minutes. Its only purpose is to query a Web Service to update a batch of database rows.
The Web Service API recommends calling it once every 30 seconds and timing out after a set interval. The following pseudocode is given as an example:
listId := updateList(<list of terms>)
LOOP
WHILE NOT isUpdatingComplete(listId)
END LOOP
statuses := getStatuses(“LIST_ID = {listId}”)
I have coded this roughly in C# as:
int listId = client.updateList(options, terms, out messages);
int callCount = 0;
while (callCount < 5 && !client.isUpdateComplete(listId, out messages))
{
callCount++;
Thread.Sleep(30000);
}
// Get resulting status...
Is it OK in this situation to use Thread.Sleep()? I'm aware it is not generally good practice but from reading reasons not to use it this seems like acceptable usage.
Thanks.
Thread.Sleep ensures the current thread doesn't return until at least the specified milliseconds have passed. There are plenty of places it's appropriate to do that, and your example seems fine, assuming it's running on a background thread.
Some example places you don't want to use it - on the UI thread or where you need to do exact timing.
Generally speaking, Thread.Sleep is like any other tool: perfectly OK to use, except when it's terribly misused. I disagree with the "not generally good practice" part, which is the result of people abusing Thread.Sleep when they should be doing something else (i.e. blocking on a synchronization object).
In your case the program is single-threaded, it has no UI (i.e. the thread has no message loop) and you do not want to synchronize with external events. Therefore Thread.Sleep is just fine.
The general objection against Sleep() is that it wastes a Thread.
In your case there is only 1 Thread (maybe 2) so that is not really a problem.
So I think it looks fine (but I would sleep 29 seconds to cut some slack).
It's fine, except that you cannot interrupt it once it goes into sleep, without aborting the thread (which is not recommended).
That's why a ManualResetEvent might be a better idea, since it can be signalled (awakened) from a different thread.
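A minimal sketch of that idea (the names and the 30-second timeout mirror the question; how StopRequested gets signalled depends on your host):

using System;
using System.Threading;

class InterruptibleWait
{
    // Set by another thread (e.g. a stop/shutdown handler) when the program should stop early.
    static readonly ManualResetEvent StopRequested = new ManualResetEvent(false);

    static void PollLoop()
    {
        int callCount = 0;
        while (callCount < 5)
        {
            // ... call the web service here ...
            callCount++;

            // Waits up to 30 s, but returns immediately (true) if StopRequested is signalled.
            if (StopRequested.WaitOne(TimeSpan.FromSeconds(30)))
                break;
        }
    }
}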
You could stick with the Thread.Sleep method, but it would be more elegant to schedule the program to run every 30 minutes, so you don't have to take care of the waiting inside your application.
Thread.Sleep isn't the best for executing periodic logic. Thread.Sleep(n) means your thread will relinquish control for n milliseconds. There is no guarantee that it will regain control after n milliseconds, it depends on the CPU load.
If you are going to block the thread for 30 minutes, you should instead schedule a Windows task every 30 minutes, so the program runs and then exits. That way you are not tying up a thread for so long.
For shorter times, like 30 seconds or 1 minute, Thread.Sleep() is perfectly fine. For more than 5 minutes I would use a scheduled Windows task (the kind you schedule from the Control Panel / Task Scheduler).

multithread performance problem for web service call

Here is my sample program for the web service server side and client side. I ran into a strange performance problem: even if I increase the number of threads calling the web service, the performance does not improve. At the same time, the CPU/memory/network consumption shown in Task Manager's performance panel stays low. I am wondering where the bottleneck is and how to improve it.
(In my tests, doubling the number of threads almost doubles the total response time.)
Client side:
class Program
{
static Service1[] clients = null;
static Thread[] threads = null;
static void ThreadJob (object index)
{
// query 100 times
for (int i = 0; i < 100; i++)
{
clients[(int)index].HelloWorld();
}
}
static void Main(string[] args)
{
Console.WriteLine("Specify number of threads: ");
int number = Int32.Parse(Console.ReadLine());
clients = new Service1[number];
threads = new Thread[number];
for (int i = 0; i < number; i++)
{
clients[i] = new Service1();
ParameterizedThreadStart starter = new ParameterizedThreadStart(ThreadJob);
threads[i] = new Thread(starter);
}
DateTime begin = DateTime.Now;
for (int i = 0; i < number; i++)
{
threads[i].Start(i);
}
for (int i = 0; i < number; i++)
{
threads[i].Join();
}
Console.WriteLine("Total elapsed time (s): " + (DateTime.Now - begin).TotalSeconds);
return;
}
}
Server side:
[WebMethod]
public double HelloWorld()
{
return new Random().NextDouble();
}
thanks in advance,
George
Although you are creating a multithreaded client, bear in mind that .NET has a configurable bottleneck of 2 simultaneous calls to a single host. This is by design.
Note that this is on the client, not the server.
Try adjusting your app.config file in the client:
<system.net>
<connectionManagement>
<add address="*" maxconnection="20" />
</connectionManagement>
</system.net>
There is some more info on this in this short article:
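If you prefer to do this in code rather than in app.config, the same limit can be raised via ServicePointManager; a sketch (20 is just an example value):

using System.Net;

static class ClientSetup
{
    // Call once at client start-up, before any web service calls are made;
    // raises the default limit of 2 concurrent connections per host.
    public static void RaiseConnectionLimit()
    {
        ServicePointManager.DefaultConnectionLimit = 20;
    }
}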
My experience is generally that locking is the problem: I had a massively parallel server once that spent more time context switching than it did performing work.
So check your memory and process counters in perfmon; if you look at context switches and the rate is high (more than 4000 per second), then you're in trouble.
You can also check the memory stats on the server: if it's spending all its time swapping, or just creating and freeing strings, it will appear to stall as well.
Lastly, check disk I/O, same reason as above.
The resolution is to remove your locks, or hold them for a minimum of time. Our problem was solved by removing the dependence on COM BSTRs and their global lock, you'll find that C# has plenty of similar synchronisation bottlenecks (intended to keep your code working safely). I've seen performance drop when I moved a simple C# app from a single-core to a multi-core box.
If you cannot remove the locks, the best option is not to create as many threads :) Use a thread pool instead to let the CPU finish one job before starting another.
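As a sketch of the thread-pool suggestion (assuming .NET 4+ and the same Service1 proxy as in the question), the client loop could queue its work as tasks instead of creating one dedicated Thread per client:

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class PooledClient
{
    static void Main()
    {
        int number = 10;                      // number of concurrent workers
        var sw = Stopwatch.StartNew();

        var tasks = new Task[number];
        for (int i = 0; i < number; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                var client = new Service1();  // web service proxy from the question
                for (int call = 0; call < 100; call++)
                {
                    client.HelloWorld();
                }
            });
        }

        Task.WaitAll(tasks);
        Console.WriteLine("Total elapsed time (s): " + sw.Elapsed.TotalSeconds);
    }
}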
I don't believe that you are running into a bottleneck at all actually.
Did you try what I suggested?
Your idea is to add more threads to improve performance, because you are expecting that all of your threads will run perfectly in parallel. This is why you are assuming that doubling the number of threads should not double the total test time.
Your service takes a fraction of a second to return and your threads will not all start working at exactly the same instant in time on the client.
So your threads are not actually working completely in parallel as you have assumed, and the results you are seeing are to be expected.
You are not seeing any performance gain because there is none to be had. The one line of code in your service (below) probably executes without a context switch most of the time anyway.
return new Random().NextDouble();
The overhead involved in the web service call is higher than the work you are doing inside of it. If you have some substantial work to do inside the service (database calls, look-ups, file access, etc.), you may begin to see some performance increase.
Just parallelizing a task will not automatically make it faster.
-Jason
Of course adding Sleep will not improve performance.
But the point of the test is to test with a variable number of threads.
So, keep the Sleep in your WebMethod.
And try now with 5, 10, 20 threads.
If there are no other problems with your code, then the increase in time should not be linear as before.
You realize that in your test, when you double the amount of threads, you are doubling the amount of work that is being done. So if your threads are not truly executing in parallel, then you will, of course, see a linear increase in total time...
I ran a simple test using your client code (with a sleep on the service).
For 5 threads, I saw a total time of about 53 seconds.
And for 10 threads, 62 seconds.
So, for 2x the number of calls to the web service, it only took 17% more time. That is what you were expecting, no?
Well, in this case, you're not really balancing your work across the chosen number of threads: each thread you create performs the same job. So if you create n threads and you have limited parallel processing capacity, performance naturally decreases. Another thing I notice is that the job in question is a relatively fast operation, even for 100 iterations, and if you plan on dividing it across multiple threads you need to consider that the time spent on context switching and thread creation/teardown will be an important factor in the overall time.
As bruno mentioned, your webmethod is a very quick operation. As an experiment, try ensuring that your HelloWorld method takes a bit longer. Throw in a Thread.Sleep(1000) before you return the random double. This will make it more likely that your service is actually forced to process requests in parallel.
Then try your client with different amounts of threads, and see how the performance differs.
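For example, the service method with an artificial delay might look like this (the one-second figure is only for the experiment):

[WebMethod]
public double HelloWorld()
{
    // Simulate real work so that parallel requests actually overlap on the server.
    System.Threading.Thread.Sleep(1000);
    return new Random().NextDouble();
}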
Try using some processor-consuming work instead of Thread.Sleep; actually, a combination of the two is best.
Sleep just hands the thread's time slice to another thread.
IIS AppPool "Maximum Worker Processes" is set to 1 by default. For some reason, each worker process is limited to processing 10 service calls at a time. My WCF async server-side function does nothing but Sleep(10*1000);.
This is what happens when Maximum Worker Processes = 1
http://s4.postimg.org/4qc26cc65/image.png
alternatively
http://i.imgur.com/C5FPbpQ.png?1
(First post on SO, I need to combine all pictures into one picture.)
The client is making 48 async WCF calls in this test (using 16 processes). Ideally this should take about 10 seconds to complete (Sleep(10000)), but it takes 52 seconds. You can see 5 horizontal lines in the perfmon picture linked above (perfmon is monitoring Web Service Current Connections on the server). Each horizontal line lasts 10 seconds (the duration of Sleep(10000)). There are 5 horizontal lines because the server processes 10 calls at a time and then closes those 10 connections; this happens 5 times to get through the 48 calls. Completing all calls took 52 seconds.
After setting Maximum Worker Processes = 2
(in the same picture given above)
This time there are 3 horizontal lines because the server processes 20 calls at a time and then closes those 20 connections (this happens 3 times to get through the 48 calls). Took 33 seconds.
After setting Maximum Worker Processes = 3
(in the same picture given above)
This time there are 2 horizontal lines because the server processes 30 calls at a time (this happens twice to get through the 48 calls). Took 24 seconds.
After setting Maximum Worker Processes = 8
(in the same picture given above)
This time there is 1 horizontal line because the server can now process up to 80 calls at once (all 48 calls are handled in a single batch). Took 14 seconds.
If you don't take care of this situation, your parallel (async or threaded) client calls will be queued by the server in batches of 10, and your concurrent calls (more than 10) won't be processed by the server in parallel.
PS: I was using Windows 8 x64 with IIS 8.5. The 10-concurrent-request limit applies to workstation editions of Windows; server editions don't have that limit according to another post on SO (I can't link it because rep < 10).
