Best way to Unit Test if Some Code is Thread Safe? [duplicate] - c#

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Unit test for thread safe-ness?
I'm looking for best way to unit test if some code is thread safe.
I'm using NUnit and Moq as unit test framework.

Well, does your code use concurrency? Because if it doesn't, it's already thread-safe. I believe your question is fundamentally wrong and should have been something along the lines of "How do I design thread-safe code?"
The problem with such a question is that it's very broad and there are a plethora of things to consider when designing code to be thread-safe.
However, something you can do to test your code is to use brute force: hammer it with multiple threads over an extended period of time. If the results are inconsistent, then there could be a synchronization problem. The issue here, of course, is that inconsistent results don't have to be a concurrency-related issue; they could still happen with a single thread.
What you need to do is look at the code you expect to be thread-safe and ask yourself, "What happens if I sleep for an indefinite amount of time here?" If everything still works while running the concurrent code with a lot of random sleep durations interleaved (this makes concurrency issues more apparent), then you're on the right track.
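As a rough illustration of that brute-force approach, here is a minimal NUnit sketch. The Counter class is a hypothetical stand-in for the code under test, and the thread count, iteration count, and sleep pattern are arbitrary choices, not anything prescribed by NUnit or Moq:

using System;
using System.Threading;
using System.Threading.Tasks;
using NUnit.Framework;

// Hypothetical class under test; substitute the code you actually want to exercise.
public class Counter
{
    private int _value;
    public void Increment() => Interlocked.Increment(ref _value);
    public int Value => Volatile.Read(ref _value);
}

[TestFixture]
public class CounterConcurrencyTests
{
    [Test]
    public void Increment_FromManyThreads_ProducesConsistentTotal()
    {
        const int threads = 8;
        const int incrementsPerThread = 10000;
        var counter = new Counter();

        // Hammer one instance from several threads, with occasional random
        // sleeps to shake out timing-dependent interleavings.
        var tasks = new Task[threads];
        for (int t = 0; t < threads; t++)
        {
            tasks[t] = Task.Run(() =>
            {
                var rng = new Random(Guid.NewGuid().GetHashCode());
                for (int i = 0; i < incrementsPerThread; i++)
                {
                    counter.Increment();
                    if (i % 1000 == 0)
                        Thread.Sleep(rng.Next(0, 3));
                }
            });
        }
        Task.WaitAll(tasks);

        // An inconsistent total suggests (but does not prove) a race;
        // a consistent one never proves thread safety.
        Assert.That(counter.Value, Is.EqualTo(threads * incrementsPerThread));
    }
}

Keep in mind that a passing run like this only raises confidence; it can never prove the absence of races.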

Related

Do something at a given (odd) BPM [duplicate]

This question already has answers here:
High resolution timer in C#
(5 answers)
Closed 5 years ago.
No matter where I look, I can't find a good answer to this question. I'd like to have something happen at a given BPM, but the basic C# Timer class isn't working for me. Since it only measures in milliseconds, any actions performed within the timer get noticeably out of sync with the music. I've attempted to use the MicroTimer library, but with no luck! Though it can be quite fine-grained, it's resource-heavy and still doesn't have the resolution necessary. I understand I can have a function with a counter, but is there a good way to do this with the built-in libraries (like the basic Timer)? I hear those aren't as processor-hungry.
I doubt you'll get the kind of time resolution you're looking for in a managed language like C#.
Hell, even if you were writing in C the OS could decide another process is more important and just like that you're out of sync.
Maybe consider using the timer, but resyncing every second or half second? I'd defer to another user if they have experience in this area, but I'd at least give that a shot. Or go by the system clock ticks?
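If you go the resync route, a sketch along these lines might work: it schedules every beat from the original start time using a Stopwatch, so per-iteration error does not accumulate. The BeatClock name, the bpm parameter, and the onBeat callback are placeholders, not an existing API:

using System;
using System.Diagnostics;
using System.Threading;

class BeatClock
{
    public static void Run(double bpm, Action onBeat, CancellationToken token)
    {
        double beatMs = 60000.0 / bpm;
        var watch = Stopwatch.StartNew();
        long beatIndex = 0;

        while (!token.IsCancellationRequested)
        {
            // Schedule each beat from the original start time so small
            // per-iteration errors don't accumulate into drift.
            double nextBeatMs = beatIndex * beatMs;
            double waitMs = nextBeatMs - watch.Elapsed.TotalMilliseconds;
            if (waitMs > 16)
                Thread.Sleep((int)(waitMs - 16)); // coarse wait; leave margin for timer granularity
            while (watch.Elapsed.TotalMilliseconds < nextBeatMs) { } // short spin for the last bit

            onBeat();
            beatIndex++;
        }
    }
}

The final spin-wait burns a little CPU per beat, but only for the last ~16 ms; whether that trade-off is acceptable depends on how tight the sync needs to be.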

Why ConfigureAwait(false) is not the default option? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
As you know, it is a good idea to call Task.ConfigureAwait(false) when you are awaiting a task in code that does not need to capture the synchronization context, because it can cause deadlocks otherwise.
Well, how often do you need to capture a synchronization context? In my practice, very rarely. In most situations I am working with "library" code that pretty much forces me to use Task.ConfigureAwait(false) all the time.
So my question is pretty simple: why is Task.ConfigureAwait(false) not the default option for a task? Wouldn't it be much better to force "high-level" code to use Task.ConfigureAwait(true)? Is there a historical reason for it, or am I missing something?
Most code that works with .ConfigureAwait(false) also works, although suboptimally, with .ConfigureAwait(true). Yes, not all code, but still most. The current default lets the highest percentage of code work without tinkering with settings that an average programmer might not understand.
A different default would just lead to thousands of questions about why the code does not work, and worse yet, thousands of answers in the form of "Microsoft sucks, they make you write Control.CheckForIllegalCrossThreadCalls = false; in every program. Why isn't that the default?" rather than actually adding the appropriate .ConfigureAwait(true) calls.
Look at the second example solution from that link:
public async void Button1_Click(...)
{
    var json = await GetJsonAsync(...);
    textBox1.Text = json;
}

public class MyController : ApiController
{
    public async Task<string> Get()
    {
        var json = await GetJsonAsync(...);
        return json.ToString();
    }
}
If the default behaviour was ConfigureAwait(false), the textBox1.Text = json; statement would execute on a random thread pool thread instead of the UI thread.
Both snippets look like code someone could reasonably write, and by default one of them has to be broken. Since deadlocks are a lot less dangerous and easier to detect than thread-unsafe accesses, picking ConfigureAwait(true) as the default is the more conservative choice.
Just because your typical use case requires ConfigureAwait(false), it doesn't mean that it is the "correct" or most used option.
One of the things async/await is designed for is writing responsive GUI programs. In such cases, returning to the UI thread after offloading some work to a Task is critical, since UI updates can only happen from the main thread on most Windows GUI platforms. Async/await helps GUI developers do the right thing.
This is not the only example where the default option makes more sense. I can only speculate, but I would suspect that the decision for the ConfigureAwait default is based on making sure async works with as little friction as possible for the use cases that Microsoft anticipated it would be used for the most. Not everyone writes frameworks.
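For contrast with the GUI snippet above, here is a hedged sketch of the "library" side the question mentions: code with no UI dependencies can opt out of context capture explicitly. The GetJsonAsync name and the HttpClient usage are illustrative placeholders, not code taken from the example above:

using System.Net.Http;
using System.Threading.Tasks;

public static class JsonClient
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> GetJsonAsync(string url)
    {
        // Library code: it doesn't care which context it resumes on,
        // so it opts out of capturing the caller's synchronization context.
        var response = await Http.GetAsync(url).ConfigureAwait(false);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    }
}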

What are the measurements for determining if code is thread-safe or not in .NET? [duplicate]

This question already has answers here:
Multi Threading [closed]
(5 answers)
Closed 9 years ago.
How can I measure whether code is thread-safe or not?
Maybe general guidelines or best practices?
I know that for code to be thread-safe it must work across threads without unpredictable behavior, but that sometimes becomes very tricky and hard to do!
I came up with one simple rule, which is probably hard to implement and therefore theoretical in nature. Code is not thread-safe if you can inject some Sleep operations at some places in the code and thereby change the outcome of the code in a significant way. The code is thread-safe otherwise (there is no combination of delays that can change the result of code execution).
Not only your own code should be taken into account when considering thread safety, but also other parts of the code, the framework, the operating system, and external factors like disk drives and memory... everything. That is why this "rule of thumb" is mainly theoretical.
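As a contrived illustration of that rule (the class name and the 10 ms delay are invented for the example), injecting a Sleep between the read and the write makes the lost-update race almost certain, so the outcome changes and the code is shown not to be thread-safe:

using System.Threading;

class UnsafeCounter
{
    private int _value;

    public void Increment()
    {
        int read = _value;     // read the current value
        Thread.Sleep(10);      // injected delay widens the race window
        _value = read + 1;     // write based on a possibly stale read
    }

    public int Value => _value;
}

Two threads calling Increment concurrently will now routinely produce 1 instead of 2, which is exactly the significant change in outcome the rule describes.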
I think the best answer would be here:
Multi Threading. I hadn't noticed that answer before writing this question.
I think it is better to close this one.
Thanks.
Edit by 280Z28 (since I can't add a new answer to a closed question)
Thread safety of an algorithm or application is typically measured in terms of the consistency model which it is guaranteed to follow in the presence of multiple threads of execution (or multiple processes for distributed systems). The two most important things to examine are the following.
Are the pre- and post-conditions of individual methods preserved when multiple threads are used? For example, if your method "adds an element to a dynamically-sized list", then one post condition would be that the size of the list increases by 1 as a result of the add method. If your algorithm is thread-safe, then calling the add method 2 times would result in the size increasing by exactly 2, regardless of which threads were used for the add operations. On the other hand, if the algorithm is not thread-safe, then using multiple threads for the 2 calls could result in anything, ranging from correctly adding the 2 items all the way to the possibility of crashing the program entirely.
When changes are made to data used by algorithms in the program, when do those changes become visible to the other threads in the system? This is the consistency model of your code. Consistency models can be very difficult to understand fully, so I'll leave the link above as the starting place for your continued learning, along with a note that systems guaranteeing linearizability or sequential consistency are often the easiest to work with, although not necessarily the easiest to create.
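As a small sketch of the first point above (the class is invented for illustration), a lock around the list operations preserves the post-condition that each Add grows the count by exactly one, regardless of which threads call it:

using System.Collections.Generic;

public class SynchronizedList<T>
{
    private readonly List<T> _items = new List<T>();
    private readonly object _gate = new object();

    public void Add(T item)
    {
        lock (_gate)
        {
            _items.Add(item); // post-condition: Count increases by exactly 1
        }
    }

    public int Count
    {
        get { lock (_gate) { return _items.Count; } }
    }
}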

A good reason to use lock (this)? [duplicate]

This question already has answers here:
Why is lock(this) {...} bad?
(18 answers)
Closed 9 years ago.
There are many posts, votes, and answers indicating that using lock (this) is not a recommended pattern (not to mention a bad one).
Have a look at this one, for example.
I'm trying to investigate this pattern a little bit, and wanted to ask whether anyone can think of a scenario in which using lock (this) is actually recommended, or even a must?
Locking on this is evil. It means that someone else may decide to lock on your instance, and then your code will have to wait until they release it.
Rule of thumb: never lock on this; create a separate (private) object to lock on instead.
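The usual shape of that rule of thumb looks something like this (the Account class is just an illustration):

public class Account
{
    // Private lock object: nothing outside this class can ever lock on it.
    private readonly object _sync = new object();
    private decimal _balance;

    public void Deposit(decimal amount)
    {
        lock (_sync) // not lock (this)
        {
            _balance += amount;
        }
    }
}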
But... the problem is deeper: locking has a purpose. By locking you protect the outer object(s), but it doesn't prevent the underlying objects (for instance, the items inside a collection) from being updated.
In most cases a lock isn't needed. I suggest reading up on the subject.
Multiple questions on SO cover your question. It shouldn't be hard to build an opinion about the motivation for not locking on this.
An example and pointers for further reading can be found on Phil Haack's blog.

Where should Rx be used? [duplicate]

This question already has answers here:
Good introduction to the .NET Reactive Framework [closed]
(16 answers)
Closed 9 years ago.
I'm thinking about bringing Rx into my workplace, but the more I learn about it, the more I think it doesn't really give you an advantage.
We have a lot of server apps that take input data at one end and output it at the other end, which is perfect for the actor model and "infinite" threading scalability. Until now I've used ConcurrentQueues to implement message passing, and I thought that Rx might be a good, more functional alternative that makes concurrency more implicit and helps me move some of the data flow decisions from imperative code into the declarations of observables.
But after reading about it and trying it, I don't see much advantage over using regular old threads with ConcurrentQueues for message passing. What advantages does Rx give me? It is always said that even though .NET 4.5 made a lot of Rx obsolete (through async and Dataflow), it's still good for handling event streams. What cases present event streams, and how do I identify them?
If you need to parallelize some tasks, use TPL.
If you need to perform asynchronous operations, use Task & async/await.
If you need to receive, filter and combine streams of events, use Rx. Note that Rx is not necessarily asynchronous - it is simply a model for dealing with event streams in the same way that LINQ is a model for dealing with collections.
Your use case sounds like the first option.
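To make the third case concrete, here is a toy sketch (assuming the System.Reactive package; the streams and messages are invented for the example) of receiving, filtering, and combining event streams declaratively, much like LINQ over collections:

using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;

class RxSketch
{
    static void Main()
    {
        // Two hypothetical incoming message streams.
        var orders = new Subject<string>();
        var cancellations = new Subject<string>();

        // Declarative pipeline: filter one stream, merge it with the other,
        // and handle the combined stream in one place.
        orders
            .Where(o => o.StartsWith("PRIORITY"))
            .Merge(cancellations)
            .Subscribe(msg => Console.WriteLine("handling: " + msg));

        orders.OnNext("PRIORITY order-17");      // handled
        orders.OnNext("order-18");               // filtered out
        cancellations.OnNext("cancel order-12"); // handled
    }
}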
There are lots of similar questions on SO...
Rx is all about mathematically-based composition of asynchronous operations. TPL and "regular old threads" are non-compositional. You must see non-trivial examples before you can see where the composition really benefits you.
Take a look at this page of Intro to Rx (and the rest of it), and I am sure you'll begin to grok the reasons for Rx:
http://introtorx.com/Content/v1.0.10621.0/01_WhyRx.html#WhyRx
