A good reason to use lock (this)? [duplicate] - c#

This question already has answers here:
Why is lock(this) {...} bad?
(18 answers)
Closed 9 years ago.
There are many posts, votes and answers indicating that using lock (this) is not a recommended pattern (not to mention a bad one).
Have a look at this one, for example.
As I'm trying to investigate this pattern a little, I wanted to ask whether anyone can think of a scenario in which using lock (this) is actually recommended, or even a must.

Locking on this is evil. It means that any other code may decide to lock on your instance, and then your own methods will have to wait until that other code releases the lock.

Rule of thumb: never lock on this; create a separate (private) object to lock on instead.
But... the problem is deeper: locking has a purpose. By locking you protect the outer object(s), but that doesn't prevent the underlying objects, for instance the items inside a collection, from being updated.
In most cases a lock isn't even needed. Reading up on the subject is what I suggest.
Multiple questions on SO cover your question. It shouldn't be hard to form an opinion about the motivation for not locking on this.
An example and pointers for further reading can be found on the blog of Phil Haack
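A minimal sketch of the rule of thumb above, assuming a made-up Counter class: the lock target is a private object that no code outside the class can ever reach, so nobody else can take (or hold) your lock.

public class Counter
{
    // Private lock object: external code cannot lock on it, unlike "this".
    private readonly object _sync = new object();
    private int _count;

    public void Increment()
    {
        lock (_sync) // instead of lock (this)
        {
            _count++;
        }
    }

    public int Count
    {
        get { lock (_sync) { return _count; } }
    }
}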

Related

IDisposable, using and GarbageCollection [duplicate]

This question already has answers here:
what is relation between GC, Finalize() and Dispose?
(3 answers)
Does C# app exit automatically dispose managed resources?
(4 answers)
What are the uses of "using" in C#?
(29 answers)
Closed 2 years ago.
I got stuck in the weeds of how IDisposable and the GarbageCollector work.
Suppose you have an IDisposable object which doesn't actually have any resources that it's holding onto (but whose Dispose() method is going to do something when called). And suppose you declare it in the head of a using block, but don't actually interact with the object over the course of the block.
What guarantees do I have about how GarbageCollection will operate?
i.e.
using (new MyConceptuallyDisposableObject())
{
    DoSomeWork();
    await DoSomeAsyncWork();
} // PointX
Note that:
MyConceptuallyDisposableObject doesn't declare a finaliser / destructor.
(Assume that developers will never forget to wrap my object in a using / call .Dispose() on it.)
MyConceptuallyDisposableObject doesn't call GC.SuppressFinalize(this) anywhere.
Am I guaranteed that the object that I constructed:
Will not have .Dispose() called on it before PointX?
Will have .Dispose() called on it at exactly PointX?
Will not get GarbageCollected/Finalised at any point before PointX?
Will not get GarbageCollected/Finalised before it has had .Dispose() called on it?
Suppose I then change my code to make MyConceptuallyDisposableObject call GC.SuppressFinalize(this) in its constructor. (Bearing in mind that there isn't any destructor or finaliser.)
Does that change any of the answers to the specific questions above?
Does anything change in general, then?
Does it mean that the GC never cleans up my object at all and I'll end up with a memory leak?
Context:
Posted for those who are inevitably curious, but PLEASE don't answer by suggesting other ways to achieve this, or by telling me I shouldn't do this. Right now I'm much more invested in understanding the guts of the above concepts in the abstract than in discussing whether my initial attempt was sensible.
I want to write a DisposableAction() class, which accepts 2 Actions. One to perform when you construct it, and one to perform when you Dispose() it.
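A rough sketch of what such a DisposableAction could look like (my reading of the description above, not the question's actual code, and it doesn't guard against double disposal):

using System;

public sealed class DisposableAction : IDisposable
{
    private readonly Action _onDispose;

    // Runs the first action immediately and remembers the second for Dispose().
    public DisposableAction(Action onConstruct, Action onDispose)
    {
        _onDispose = onDispose ?? throw new ArgumentNullException(nameof(onDispose));
        onConstruct?.Invoke();
    }

    public void Dispose() => _onDispose();
}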
I thought I knew all the answers to the above (and that they were "Yes", "Yes", "Yes", "Yes", "No", "Almost nothing unless you're incredibly perf-sensitive", and "No".), but I've been trying to diagnose a bug which appears to contradict these beliefs.
IDisposable and garbage collection are unrelated, except for the one situation where an object implements a finalizer that happens to call .Dispose().
Unless you know for sure, 100%, and explicitly, that an object has a finalizer that calls .Dispose() then you must call .Dispose() explicitly (or with a using) to ensure the object is disposed.
The answers to your questions:
Yes.
Yes.
Yes.
Yes.
No.
No.
No.
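For background on why the "Yes" answers hold: the compiler lowers a using statement into roughly the following try/finally (a sketch mirroring the snippet in the question; the real generated code also deals with casts and null checks):

// using (new MyConceptuallyDisposableObject()) { ... } // PointX
// becomes approximately:
var obj = new MyConceptuallyDisposableObject();
try
{
    DoSomeWork();
    await DoSomeAsyncWork();
}
finally
{
    if (obj != null) ((IDisposable)obj).Dispose(); // runs at PointX, even if an exception is thrown
}

Because that hidden local keeps the instance reachable until the finally block runs, the object also can't be collected or finalised before Dispose() has been called.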

What are the measurements for determining if code is thread-safe or not in .NET? [duplicate]

This question already has answers here:
Multi Threading [closed]
(5 answers)
Closed 9 years ago.
How can I measure whether code is thread-safe or not?
Maybe there are general guidelines or best practices?
I know that for code to be thread-safe it has to work across threads without unpredictable behavior, but that sometimes becomes very tricky and hard to do!
I came up with one simple rule, which is probably hard to implement and therefore theoretical in nature. Code is not thread safe if you can inject some Sleep operations to some places in the code and so change the outcome of the code in a significant way. The code is thread safe otherwise (there's no such combination of delays that can change the result of code execution).
It's not only your own code that should be taken into account when considering thread safety, but also other parts of the code, the framework, the operating system, and external factors like disk drives and memory... everything. That is why this "rule of thumb" is mainly theoretical.
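A hypothetical illustration of that rule (all names invented): splitting a read-modify-write and injecting a sleep in the middle changes the final result, which marks the code as not thread-safe.

using System;
using System.Threading;

class SleepInjectionDemo
{
    static int _counter;

    static void Increment()
    {
        int temp = _counter;
        Thread.Sleep(50);      // injected delay: both threads now read the old value
        _counter = temp + 1;
    }

    static void Main()
    {
        var t1 = new Thread(Increment);
        var t2 = new Thread(Increment);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();

        // Without the sleep this usually prints 2; with it, almost always 1.
        // Because injected delays change the outcome, the code is not thread-safe.
        Console.WriteLine(_counter);
    }
}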
I think the best answer is here:
Multi Threading. I couldn't have noticed that answer before writing this question, so I think it is better to close this one.
Thanks
Edit by 280Z28 (since I can't add a new answer to a closed question)
Thread safety of an algorithm or application is typically measured in terms of the consistency model which it is guaranteed to follow in the presence of multiple threads of execution (or multiple processes for distributed systems). The two most important things to examine are the following.
Are the pre- and post-conditions of individual methods preserved when multiple threads are used? For example, if your method "adds an element to a dynamically-sized list", then one post condition would be that the size of the list increases by 1 as a result of the add method. If your algorithm is thread-safe, then calling the add method 2 times would result in the size increasing by exactly 2, regardless of which threads were used for the add operations. On the other hand, if the algorithm is not thread-safe, then using multiple threads for the 2 calls could result in anything, ranging from correctly adding the 2 items all the way to the possibility of crashing the program entirely.
When changes are made to data used by the algorithms in the program, when do those changes become visible to the other threads in the system? This is the consistency model of your code. Consistency models can be very difficult to understand fully, so I'll leave the link above as the starting place for your continued learning, along with a note that systems guaranteeing linearizability or sequential consistency are often the easiest to work with, although not necessarily the easiest to create.
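The first point can be made concrete with the add-to-a-list example (a made-up demo, not code from the question): List<T>.Add does not preserve its post-condition under concurrent use, while serializing the adds with a lock does.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class PostConditionDemo
{
    // Runs the given add operation 100,000 times on each of two threads.
    static Task AddFromTwoThreads(Action add) => Task.WhenAll(
        Task.Run(() => { for (int i = 0; i < 100_000; i++) add(); }),
        Task.Run(() => { for (int i = 0; i < 100_000; i++) add(); }));

    static void Main()
    {
        var gate = new object();
        var unguarded = new List<int>();
        var guarded = new List<int>();

        // Post-condition: after 200,000 adds, Count should be 200,000.
        try
        {
            AddFromTwoThreads(() => unguarded.Add(1)).Wait();
        }
        catch (AggregateException)
        {
            Console.WriteLine("unguarded adds crashed (corrupted internal state)");
        }

        AddFromTwoThreads(() => { lock (gate) guarded.Add(1); }).Wait();

        // The unguarded count typically falls short of 200,000; the guarded one never does.
        Console.WriteLine($"unguarded: {unguarded.Count}, guarded: {guarded.Count}");
    }
}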

C# lock vs Java synchronized - Is there any difference at runtime? [duplicate]

This question already has answers here:
Are there any differences between Java's "synchronized" and C#'s "lock"?
(3 answers)
Closed 9 years ago.
I'm wondering if there is any difference at runtime between lock and synchronized.
I have learned that synchronized is a slow operation and outdated in Java.
Today I saw lock in C# and I'm wondering whether they are the same, and whether lock is something I "want" to avoid just as in Java, or whether it is much faster and I should want to use it...
Thanks!
1. synchronized is not outdated; the java.util.concurrent.locks package simply provides extended functionality which is not always needed.
2. Locking is done at the CPU level and there is no difference between Java and C# in this regard;
see http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html
... special instructions, called memory barriers, are required to flush or invalidate the local processor cache in order to see writes made by other processors or make writes by this processor visible to others. These memory barriers are usually performed when lock and unlock actions are taken; they are invisible to programmers in a high level language.
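On the C# side, lock is just compiler shorthand for Monitor.Enter/Monitor.Exit wrapped in a try/finally, much like a Java synchronized block maps to monitorenter/monitorexit in bytecode, so neither keyword carries some extra cost the other avoids. A sketch of the expansion (field names invented):

using System.Threading;

class LockExpansion
{
    private static readonly object gate = new object();
    private static int sharedCounter;

    static void Increment()
    {
        // lock (gate) { sharedCounter++; } compiles to approximately:
        bool lockTaken = false;
        try
        {
            Monitor.Enter(gate, ref lockTaken);
            sharedCounter++;
        }
        finally
        {
            if (lockTaken) Monitor.Exit(gate);
        }
    }
}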

Best way to Unit Test if Some Code is Thread Safe? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Unit test for thread safe-ness?
I'm looking for best way to unit test if some code is thread safe.
I'm using NUnit and Moq as unit test framework.
Well, does your code use concurrency? Because if it doesn't, it's already thread-safe. I believe your question is fundamentally wrong and should have been something along the lines of "How do I design thread-safe code?"
The problem with such a question is that it's very broad and there are a plethora of things to consider when designing code to be thread-safe.
However, something you can do to test your code is to use brute force: run it on multiple threads over an extended period of time. If the results are inconsistent, then there could be a synchronization problem. The issue here is of course that inconsistent results don't have to indicate a concurrency-related problem; the same thing could have happened using a single thread.
What you need to do is look at the code that you expect to be thread-safe and basically ask yourself, "What happens if I sleep for an indefinite amount of time here?" If you conclude that everything still works while running the concurrent code with lots of random sleep durations interleaved (this makes concurrency issues more apparent), then you're on the right track.
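A rough NUnit sketch of the brute-force approach described above (CounterUnderTest is a made-up stand-in for whatever code you want to hammer; remember that a green run only raises confidence, it proves nothing):

using System.Threading.Tasks;
using NUnit.Framework;

public class CounterUnderTest
{
    private readonly object _sync = new object();
    public int Value { get; private set; }
    public void Increment() { lock (_sync) Value++; }
}

[TestFixture]
public class ThreadSafetyTests
{
    [Test]
    public void Increment_FromManyThreads_NeverLosesAnUpdate()
    {
        const int threads = 8;
        const int perThread = 100_000;
        var counter = new CounterUnderTest();

        var work = new Task[threads];
        for (int t = 0; t < threads; t++)
            work[t] = Task.Run(() =>
            {
                for (int i = 0; i < perThread; i++) counter.Increment();
            });
        Task.WaitAll(work);

        // An inconsistent total here would point to a synchronization problem.
        Assert.That(counter.Value, Is.EqualTo(threads * perThread));
    }
}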

Are immutable objects good practice? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Should I make my classes immutable where possible?
I once read the book "Effective Java" by Joshua Bloch, and he recommended making all business objects immutable for various reasons (for example, thread safety).
Does this apply for C# too?
Do you try to make your objects immutable, so you have less problems when working with them?
Or is it not worth the inconvenience you have to create them?
The immutable Eric Lippert has written a whole series of blog posts on the topic. Part one is here.
Quoting from the earlier post that he links to:
ASIDE: Immutable data structures are the way of the future in C#. It is much easier to reason about a data structure if you know that it will never change. Since they cannot be modified, they are automatically threadsafe. Since they cannot be modified, you can maintain a stack of past “snapshots” of the structure, and suddenly undo-redo implementations become trivial. On the down side, they do tend to chew up memory, but hey, that’s what garbage collection was invented for, so don’t sweat it.
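As a small illustration of the quote, a hand-rolled immutable type in C# might look like the sketch below (a made-up example; every "mutation" returns a new snapshot, which is what makes the thread-safety and undo/redo points work):

public sealed class Money
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    // Returns a new snapshot instead of changing this instance, so references
    // already held elsewhere (including on other threads) never observe a change.
    public Money Add(decimal delta) => new Money(Amount + delta, Currency);
}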
This is going to be more of an opinion type answer but...
I find that the ease of understanding a program, i.e. maintaining and debugging said application, is inversely proportional to the number of stateful transitions that occur during the processing of each component. The less state I need to cart around in my head, the more attention I can pay to the logic within the algorithms as written.
Immutable objects are a central feature of functional programming, which has its own advantages and disadvantages. (E.g. linked lists are practically impossible to make immutable, but immutable objects make parallelism a piece of cake.) So as a comment on your post noted, the answer is "it depends".
Off the top of my head, I can't think of a reason for immutable objects making thread safe code somehow "better".
If I want an object to be thread safe, I will either put a lock around it or I will make a copy of it and update the reference once I'm done working on it. I typically wouldn't want a new object for every little change.
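A sketch of that copy-then-swap-the-reference approach (all names invented; Interlocked.CompareExchange is one way to publish the new copy atomically):

using System.Collections.Generic;
using System.Threading;

class SettingsHolder
{
    // Readers always see a complete dictionary; writers never mutate it in place.
    private Dictionary<string, string> _settings = new Dictionary<string, string>();

    public string Get(string key)
    {
        var snapshot = Volatile.Read(ref _settings);
        return snapshot.TryGetValue(key, out var value) ? value : null;
    }

    public void Set(string key, string value)
    {
        while (true)
        {
            var current = Volatile.Read(ref _settings);
            var copy = new Dictionary<string, string>(current) { [key] = value };
            if (Interlocked.CompareExchange(ref _settings, copy, current) == current)
                return; // published; retry if another writer swapped first
        }
    }
}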
For me, immutable strings create more headaches for threading than they solve.
I actually went out of my way to make an "in-place" ToUpper using unsafe code instead of the built-in String.ToUpper(). It runs about 4 times faster and consumes half the peak memory.
Another nice benefit of immutable structures is that you can locally cache instances of them and reuse them across multiple threads without fear of unexpected behaviors as would be the case if they were mutable.
For instance, suppose you are using an external caching service such as memcached or Velocity or some other equally simplistic distributed hashtable service. You could just use the C# client library and call it good enough. However, that is being wasteful with resources given a short-lived context like a web request scenario. What you really want is to pull each object from the cache once and only once in your context.
The safest way to get this job done is to place a local hashtable in your process in front of the cache provider. On the first request for the cache key you'd pull down the serialized byte stream that represents the object you wish to use and store that byte stream in your local hashtable. On subsequent requests for the same cache key, just look up the byte stream in the local hashtable and deserialize the object to a new instance for each request. This is to prevent multiple redundant trips to the cache server node for the same information that assumedly has not changed over the lifetime of your context.
With immutable structures, you could deserialize the byte stream only once on the first request and get away with storing the deserialized instance in the hashtable instead of the byte stream and just share that one single immutable instance of your object. This obviously cuts down on deserialization penalties which can add up rather quickly if your consuming code is written in such a fashion that it does not care how many calls it makes to the caching provider, assuming the cache is faster than querying your underlying data store.
Perhaps this is more of a subjective answer, but it's a specific problem that can be solved uniquely by using immutable structures so I thought it was relevant to share.
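A minimal sketch of the local-cache idea described above (the remote-cache interface and all names are invented for illustration; the point is that one deserialized immutable instance can be shared safely):

using System;
using System.Collections.Concurrent;

// Stand-in for memcached/Velocity/etc.: returns a serialized byte stream per key.
public interface IRemoteCache
{
    byte[] GetBytes(string key);
}

public class RequestScopedCache<T> where T : class
{
    private readonly IRemoteCache _remote;
    private readonly Func<byte[], T> _deserialize;
    private readonly ConcurrentDictionary<string, T> _local = new ConcurrentDictionary<string, T>();

    public RequestScopedCache(IRemoteCache remote, Func<byte[], T> deserialize)
    {
        _remote = remote;
        _deserialize = deserialize;
    }

    // Deserialize once per key per context; because T is immutable, the single
    // instance can be handed to every caller (and thread) without defensive copies.
    public T Get(string key) =>
        _local.GetOrAdd(key, k => _deserialize(_remote.GetBytes(k)));
}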
