In a standard dispose/finalise pattern, such as the one in Finalizers with Dispose() in C#, Dispose(bool) does not touch managed objects when it is called from the finalizer; doing so is considered unsafe because they may already have been collected by the garbage collector.
What is special about IntPtr etc that makes them safe?
As some background, to keep the cleanup code near the allocate code I'm adding the cleanup action to an event as soon as I allocate, then calling the event from the dispose method:
class TestClass : IDisposable
{
    private IntPtr buffer;

    public void test()
    {
        buffer = Marshal.AllocHGlobal(1024);
        OnFreeUnmanagedResource += (() => Marshal.FreeHGlobal(buffer));
    }

    private List<IDisposable> managedObjectsToBeDisposed = new List<IDisposable>();
    private event Action OnFreeUnmanagedResource = delegate { };
    private bool _isDisposed = false;

    private void Dispose(bool itIsSafeToAlsoFreeManagedObjects)
    {
        if (_isDisposed) return;
        OnFreeUnmanagedResource();
        if (itIsSafeToAlsoFreeManagedObjects)
            for (var i = managedObjectsToBeDisposed.Count - 1; i >= 0; i--)
            {
                var managedObjectToBeDisposed = managedObjectsToBeDisposed[i];
                managedObjectToBeDisposed.Dispose();
                managedObjectsToBeDisposed.Remove(managedObjectToBeDisposed);
            }
        _isDisposed = true;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~TestClass()
    {
        Dispose(false);
    }
}
I'm uncertain of this code because OnFreeUnmanagedResource may be collected before the class, but why would this not be the case for buffer?
With that anti-pattern (really, you're better off having either only managed fields or only an unmanaged field, rather than mixing them and then having to be clever about how that mixing is dealt with; but alas the pattern is still with us and has to be dealt with sometimes), the danger is not that the disposable managed objects may have been collected (they won't be: they're kept alive by the field in the class in question, which is now at least rooted from the finalisation queue, if not from elsewhere), but that they may already have been finalised by their own finalisers. Or conversely, that they might get finalised here and then finalised again because they are already in the finalisation queue.
If you'd arrived at this code via Dispose() then they would not (assuming no bugs, obviously) have been cleaned up, because the only path prior to a collection attempt that can clean them is that very method.
If you'd arrived at this code via the finaliser then this object has had collection attempted on it and been put in the finalisation queue, which means the objects accessible only through it likely also had collection attempted on them and, if finalisable, had been put in the queue, and there's no guarantee of which was put there first.
If the object was disposable but not finalisable, it quite likely had fields in turn that were finalisable and likewise are likely to be on that queue.
And if the object was disposable but not finalisable, and had no finalisable fields then it doesn't matter that you don't do anything with it.
This would be OK:
private void Dispose(bool itIsSafeToAlsoFreeManagedObjects)
{
    if (_isDisposed) return;
    Marshal.FreeHGlobal(buffer);
    if (itIsSafeToAlsoFreeManagedObjects)
        for (var i = managedObjectsToBeDisposed.Count - 1; i >= 0; i--)
        {
            var managedObjectToBeDisposed = managedObjectsToBeDisposed[i];
            managedObjectToBeDisposed.Dispose();
            managedObjectsToBeDisposed.Remove(managedObjectToBeDisposed);
        }
    _isDisposed = true;
}
An IntPtr is a struct that essentially contains a native handle. The resource referred to by the handle is not touched by the garbage collector, and the struct itself is also still valid.
I'm not so sure about your code, where you use a managed object of reference type (the delegate attached to OnFreeUnmanagedResource) in the finalizer.
Edit: After reading the other answer, I think your code is also OK, as the delegate doesn't have a finalizer.
I'm trying to write a drop-in replacement for System.Media.SoundPlayer using the waveOut... API. This API makes a callback to a method in my version of SoundPlayer when the file/stream passed it has completed playback.
If I create a form-scoped instance of my SoundPlayer and play something, everything works fine because the form keeps the object alive, so the delegate is alive to receive the callback.
If I use it like this in, say, a button click event:
SoundPlayer player = new SoundPlayer(@"C:\whatever.wav");
player.Play();
... it works fine 99% of the time, but occasionally (and frequently if the file is long) the SoundPlayer object is garbage-collected before the file completes, so the delegate is no longer there to receive the callback, and I get an ugly error.
I know how to "pin" objects using GCHandle.Alloc, but only when something else can hang onto the handle. Is there any way for an object to pin itself internally, and then un-pin itself after a period of time (or completion of playback)? If I try GCHandle.Alloc (this, GCHandleType.Pinned);, I get a run-time exception "Object contains non-primitive or non-blittable data."
You could just have a static collection of all the "currently playing" sounds, and simply remove the SoundPlayer instance when it gets the "finished playing" notification. Like this:
class SoundPlayer
{
    private static List<SoundPlayer> playing = new List<SoundPlayer>();

    public void Play(...)
    {
        ...
        playing.Add(this);
    }

    // assuming this is your callback when playing has finished
    public void OnPlayingFinished(...)
    {
        ...
        playing.Remove(this);
    }
}
(Obviously locking/multithreading, error checking and so on required)
Your SoundPlayer object should just be stored in a private field of your form class so that it stays referenced long enough. You probably need to dispose it when your form closes.
Fwiw, pinning doesn't work because your class is missing a [StructLayout] attribute. Not that it will work effectively with one, you would have to store the returned GCHandle somewhere so that you can unpin it later. Your form class is the only logical place to store it. Make it simple.
GCHandle is the way to go; just don't specify the Pinned enum value.
The problem with a class static member (such as shown here) is that the objects may be collected too early if the managed portion of the program no longer references them. The referenced example shows using a callback "OnPlayingFinished", but now you have to worry about keeping the delegate (the one that references OnPlayingFinished) from itself being garbage collected.
You will still need to register for OnPlayingFinished, and keep the delegate alive. However, the GCHandle is keeping your object alive, so you can keep the delegate around with:
class SoundPlayer
{
    public void Play(...)
    {
        var h = GCHandle.Alloc(this);
        SomeNativeAPI.Play(this, GCHandle.ToIntPtr(h));
    }

    // assuming this is your callback when playing has finished
    delegate void FinishedCallback(IntPtr userData);
    static FinishedCallback finishedCallback = OnPlayingFinished;

    public static void OnPlayingFinished(IntPtr userData)
    {
        var h = GCHandle.FromIntPtr(userData);
        SoundPlayer This = (SoundPlayer)h.Target;
        h.Free();
        ... // use 'This' as your object
    }
}
We've ensured our SoundPlayer remains reachable via the GCHandle. And as an instance of SoundPlayer remains reachable, its static members must also remain reachable.
At least, that's my best educated guess as to how you might go about it.
The best way to do this is to keep a [ThreadStatic] list of active SoundPlayers in a private static field, and remove each instance from the list when the sound finishes.
For example:
[ThreadStatic]
static List<SoundPlayer> activePlayers;

public void Play() {
    if (activePlayers == null) activePlayers = new List<SoundPlayer>();
    activePlayers.Add(this);
    // Start playing the sound
}

void OnSoundFinished() {
    activePlayers.Remove(this);
}
This might sound too simple, but just make a strong reference to the SoundPlayer object in your own class. That ought to keep GC away as long as the object is alive.
I.e. instead of:
public class YourProgram
{
    void Play()
    {
        SoundPlayer player = new SoundPlayer(@"c:\whatever.wav");
        player.Play();
    }
}
This:
public class YourProgram
{
    private SoundPlayer player;

    void Play()
    {
        player = new SoundPlayer(@"c:\whatever.wav");
        player.Play();
    }
}
Are you simply trying to prevent your object from being garbage collected? Couldn't you call GC.KeepAlive( this ) to protect it from the GC?
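For what it's worth, a rough sketch of how GC.KeepAlive could be used here (the PlaybackFinished wait handle is a hypothetical member, not part of the real class): KeepAlive only guarantees the reference stays reachable up to the point of the call, so it has to come after the last moment the native code might call back.
void PlayBlocking(string path)
{
    var player = new SoundPlayer(path);   // the custom waveOut-based player
    player.Play();
    player.PlaybackFinished.WaitOne();    // hypothetical event signalled by the completion callback
    GC.KeepAlive(player);                 // player cannot be collected before this line executes
}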
This is a detail question for C#.
Suppose I've got a class with an object, and that object is protected by a lock:
Object mLock = new Object();
MyObject property;

public MyObject MyProperty {
    get {
        return property;
    }
    set {
        property = value;
    }
}
I want a polling thread to be able to query that property. I also want the thread to update properties of that object occasionally, and sometimes the user can update that property, and the user wants to be able to see that property.
Will the following code properly lock the data?
Object mLock = new Object();
MyObject property;

public MyObject MyProperty {
    get {
        lock (mLock) {
            return property;
        }
    }
    set {
        lock (mLock) {
            property = value;
        }
    }
}
By 'properly', what I mean is, if I want to call
MyProperty.Field1 = 2;
or whatever, will the field be locked while I do the update? Is the setting that's done by the equals operator inside the scope of the 'get' function, or will the 'get' function (and hence the lock) finish first, and then the setting, and then 'set' gets called, thus bypassing the lock?
Edit: Since this apparently won't do the trick, what will? Do I need to do something like:
Object mLock = new Object();
MyObject property;

public MyObject MyProperty {
    get {
        MyObject tmp = null;
        lock (mLock) {
            tmp = property.Clone();
        }
        return tmp;
    }
    set {
        lock (mLock) {
            property = value;
        }
    }
}
which more or less just makes sure that I only have access to a copy, meaning that if I were to have two threads call a 'get' at the same time, they would each start with the same value of Field1 (right?). Is there a way to do read and write locking on a property that makes sense? Or should I just constrain myself to locking on sections of functions rather than the data itself?
Just so that this example makes sense: MyObject is a device driver that returns status asynchronously. I send it commands via a serial port, and then the device responds to those commands in its own sweet time. Right now, I have a thread that polls it for its status ("Are you still there? Can you accept commands?"), a thread that waits for responses on the serial port ("Just got status string 2, everything's all good"), and then the UI thread which takes in other commands ("User wants you to do this thing.") and posts the responses from the driver ("I've just done the thing, now update the UI with that"). That's why I want to lock on the object itself, rather than the fields of the object: (a) that would be a huge number of locks, and (b) not every device of this class has the same behavior, just general behavior, so I'd have to code lots of individual dialogs if I individualized the locks.
No, your code won't lock access to the members of the object returned from MyProperty. It only locks MyProperty itself.
Your example usage is really two operations rolled into one, roughly equivalent to this:
// object is locked and then immediately released in the MyProperty getter
MyObject o = MyProperty;
// this assignment isn't covered by a lock
o.Field1 = 2;
// the MyProperty setter is never even called in this example
In a nutshell - if two threads access MyProperty simultaneously, the getter will briefly block the second thread until it returns the object to the first thread, but it'll then return the object to the second thread as well. Both threads will then have full, unlocked access to the object.
EDIT in response to further details in the question
I'm still not 100% certain what you're trying to achieve, but if you just want atomic access to the object then couldn't you have the calling code lock against the object itself?
// quick and dirty example
// there's almost certainly a better/cleaner way to do this
lock (MyProperty)
{
// other threads can't lock the object while you're in here
MyProperty.Field1 = 2;
// do more stuff if you like, the object is all yours
}
// now the object is up-for-grabs again
Not ideal, but so long as all access to the object is contained in lock (MyProperty) sections then this approach will be thread-safe.
Concurrent programming would be pretty easy if your approach could work. But it doesn't; the iceberg that sinks this Titanic is, for example, the client of your class doing this:
objectRef.MyProperty += 1;
The read-modify-write race is pretty obvious, there are worse ones. There is absolutely nothing you can do to make your property thread-safe, other than making it immutable. It is your client that needs to deal with the headache. Being forced to delegate that kind of responsibility to a programmer that is least likely to get it right is the Achilles-heel of concurrent programming.
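For example, the only way the client can make that increment safe is to take a lock around the whole read-modify-write itself. A minimal sketch, assuming the class exposes a SyncRoot object for callers to lock on (the code above does not):
lock (objectRef.SyncRoot)
{
    // the read, the add and the write all happen while the lock is held
    objectRef.MyProperty += 1;
}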
As others have pointed out, once you return the object from the getter, you lose control over who accesses the object and when. To do what you're wanting to do, you'll need to put a lock inside the object itself.
Perhaps I don't understand the full picture, but based on your description, it doesn't sound like you'd necessarily need to have a lock for each individual field. If you have a set of fields that are simply read and written via the getters and setters, you could probably get away with a single lock for those fields. There is obviously the potential that you'll unnecessarily serialize the operation of your threads this way. But again, based on your description, it doesn't sound like you're aggressively accessing the object either.
I would also suggest using an event instead of using a thread to poll the device status. With the polling mechanism, you're going to be hitting the lock each time the thread queries the device. With the event mechanism, once the status changes, the object would notify any listeners. At that point, your 'polling' thread (which would no longer be polling) would wake up and get the new status. This will be much more efficient.
As an example...
public class Status
{
    private int _code;
    private DateTime _lastUpdate;
    private object _sync = new object(); // single lock for both fields

    public int Code
    {
        get { lock (_sync) { return _code; } }
        set
        {
            lock (_sync) {
                _code = value;
            }

            // Notify listeners
            EventHandler handler = Changed;
            if (handler != null) {
                handler(this, null);
            }
        }
    }

    public DateTime LastUpdate
    {
        get { lock (_sync) { return _lastUpdate; } }
        set { lock (_sync) { _lastUpdate = value; } }
    }

    public event EventHandler Changed;
}
Your 'polling' thread would look something like this.
Status status = new Status();
ManualResetEvent changedEvent = new ManualResetEvent(false);

Thread thread = new Thread(
    delegate() {
        status.Changed += delegate { changedEvent.Set(); };
        while (true) {
            changedEvent.WaitOne(Timeout.Infinite);
            int code = status.Code;
            DateTime lastUpdate = status.LastUpdate;
            changedEvent.Reset();
        }
    }
);
thread.Start();
The lock scope in your example is in the incorrect place: it needs to be at the scope of the 'MyObject' class's property rather than its container.
If the MyObject class is simply used to contain data that one thread wants to write to and another (the UI thread) to read from, then you might not need a setter at all and could construct it once.
Also consider whether placing locks at the property level is the right level of lock granularity: if more than one property might be written to in order to represent the state of a transaction (e.g. total orders and total weight), then it might be better to have the lock at the MyObject level (i.e. lock( myObject.SyncRoot ) ... )
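A rough sketch of that object-level granularity, assuming MyObject exposes a SyncRoot property (the names here are illustrative, not from the original code):
class MyObject
{
    public object SyncRoot { get; } = new object();
    public int TotalOrders { get; set; }
    public double TotalWeight { get; set; }
}

class Caller
{
    // both properties are updated as a single transaction under one lock
    void AddParcel(MyObject myObject, double parcelWeight)
    {
        lock (myObject.SyncRoot)
        {
            myObject.TotalOrders += 1;
            myObject.TotalWeight += parcelWeight;
        }
    }
}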
In the code example you posted, a set is never performed.
In a more complicated example:
MyProperty.Field1 = MyProperty.doSomething() + 2;
And of course assuming you did a:
lock (mLock)
{
// stuff...
}
In doSomething() then all of the lock calls would not be sufficient to guarantee synchronization over the entire object. As soon as the doSomething() function returns, the lock is lost, then the addition is done, and then the assignment happens, which locks again.
Or, to write it another way, you can pretend that the locks are not taken automatically and rewrite this more like "machine code", with one operation per line, and it becomes obvious:
lock (mLock)
{
val = doSomething()
}
val = val + 2
lock (mLock)
{
MyProperty.Field1 = val
}
The beauty of multithreading is that you don't know which order things will happen in. If you set something on one thread, it might happen first, it might happen after the get.
The code you've posted will lock the member while it's being read and written. If you want to handle the case where the value is updated, perhaps you should look into other forms of synchronisation, such as events (check out the auto/manual reset versions). Then you can tell your "polling" thread that the value has changed and it's ready to be reread.
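A minimal sketch of that event-based idea, with made-up class and member names: the writer signals an AutoResetEvent after updating, so the reading thread blocks instead of polling.
using System.Threading;

class Watched
{
    private readonly object sync = new object();
    private readonly AutoResetEvent changed = new AutoResetEvent(false);
    private int value;

    public void Write(int v)
    {
        lock (sync) { value = v; }
        changed.Set();             // wake one waiting reader
    }

    public int WaitAndRead()
    {
        changed.WaitOne();         // blocks until Write() signals
        lock (sync) { return value; }
    }
}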
In your edited version, you are still not providing a threadsafe way to update MyObject. Any changes to the object's properties will need to be done inside a synchronized/locked block.
You can write individual setters to handle this, but you've indicated that this will be difficult because of the large number of fields. If that is indeed the case (and you haven't provided enough information yet to assess this), one alternative is to write a setter that uses reflection; this would allow you to pass in a string representing the field name, and you could dynamically look up the field and update its value. This would allow you to have a single setter that works on any number of fields. It isn't as easy or as efficient, but it would allow you to deal with a large number of classes and fields.
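A rough sketch of such a reflection-based setter; the _sync lock object and the habit of passing field names as strings are assumptions, and it will be slower than direct field access:
// requires System.Reflection
public void SetField(string fieldName, object newValue)
{
    FieldInfo field = GetType().GetField(fieldName,
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    if (field == null)
        throw new ArgumentException("Unknown field: " + fieldName);

    lock (_sync)
    {
        field.SetValue(this, newValue);   // the write happens under the lock
    }
}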
You have implemented a lock for getting/setting the object but you have not made the object thread safe, which is another story.
I have written an article on immutable model classes in C# that might be interesting in this context: http://rickyhelgesson.wordpress.com/2012/07/17/mutable-or-immutable-in-a-parallel-world/
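As a quick sketch of what the immutable approach looks like in this context (illustrative names only): writers publish a new instance instead of mutating shared state, so readers can never observe a half-updated object.
public sealed class DeviceStatus
{
    public int Code { get; }
    public DateTime LastUpdate { get; }

    public DeviceStatus(int code, DateTime lastUpdate)
    {
        Code = code;
        LastUpdate = lastUpdate;
    }

    // "changing" the status builds a new object rather than modifying this one
    public DeviceStatus WithCode(int code)
    {
        return new DeviceStatus(code, DateTime.UtcNow);
    }
}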
Do C#'s locks not suffer from the same locking issues as other languages, then?
E.G.
var someObj = -1;

// Thread 1
if (someObj == -1)
    lock (someObj)
        someObj = 42;

// Thread 2
if (someObj == -1)
    lock (someObj)
        someObj = 24;
This could have the problem of both threads eventually getting their locks and changing the value. This could lead to some strange bugs. However you don't want to unnecessarily lock the object unless you need to. In this case you should consider the double checked locking.
// Threads 1 & 2
if (someObj == -1)
    lock (someObj)
        if (someObj == -1)
            someObj = {newValue};
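Note that as written the snippet above still won't compile as C#, because an int is a value type and cannot be locked; with a separate lock object the double-checked pattern looks like this:
private static int someValue = -1;
private static readonly object someLock = new object();

static void SetOnce(int newValue)
{
    if (someValue == -1)             // cheap unsynchronised check
    {
        lock (someLock)
        {
            if (someValue == -1)     // re-check now that the lock is held
                someValue = newValue;
        }
    }
}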
Just something to keep in mind.
I have a C# singleton class that multiple classes use. Is access through Instance to the Toggle() method thread-safe? If yes, by what assumptions, rules, etc. If no, why and how can I fix it?
public class MyClass
{
    private static readonly MyClass instance = new MyClass();

    public static MyClass Instance
    {
        get { return instance; }
    }

    private int value = 0;

    public int Toggle()
    {
        if (value == 0)
        {
            value = 1;
        }
        else if (value == 1)
        {
            value = 0;
        }
        return value;
    }
}
Is access through 'Instance' to the 'Toggle()' class threadsafe? If yes, by what assumptions, rules, etc. If no, why and how can I fix it?
No, it's not threadsafe.
Basically, both threads can run the Toggle function at the same time, so this could happen
// thread 1 is running this code
if (value == 0)
{
    value = 1;
    // RIGHT NOW, thread 2 steps in.
    // It sees value as 1, so runs the other branch, and changes it to 0
    // This causes your method to return 0 even though you actually want 1
}
else if (value == 1)
{
    value = 0;
}
return value;
You need to operate with the following assumption.
If 2 threads are running, they can and will interleave and interact with eachother randomly at any point. You can be half way through writing or reading a 64 bit integer or float (on a 32 bit CPU) and another thread can jump in and change it out from underneath you.
If the 2 threads never access anything in common, it doesn't matter, but as soon as they do, you need to prevent them from stepping on each other's toes. The way to do this in .NET is with locks.
You can decide what and where to lock by thinking about things like this:
For a given block of code, if the value of something got changed out from underneath me, would it matter? If it would, you need to lock that something for the duration of the code where it would matter.
Looking at your example again
// we read value here
if (value == 0)
{
    value = 1;
}
else if (value == 1)
{
    value = 0;
}
// and we return it here
return value;
In order for this to return what we expect it to, we assume that value won't get changed between the read and the return. In order for this assumption to actually be correct, you need to lock value for the duration of that code block.
So you'd do this:
lock( value )
{
if(value == 0)
... // all your code here
return value;
}
HOWEVER
In .NET you can only lock Reference Types. Int32 is a Value Type, so we can't lock it.
We solve this by introducing a 'dummy' object, and locking that wherever we'd want to lock 'value'.
This is what Ben Scheirman is referring to.
The original implementation is not thread safe, as Ben points out.
A simple way to make it thread safe is to introduce a lock statement. Eg. like this:
public class MyClass
{
    private Object thisLock = new Object();
    private static readonly MyClass instance = new MyClass();

    public static MyClass Instance
    {
        get { return instance; }
    }

    private Int32 value = 0;

    public Int32 Toggle()
    {
        lock (thisLock)
        {
            if (value == 0)
            {
                value = 1;
            }
            else if (value == 1)
            {
                value = 0;
            }
            return value;
        }
    }
}
I'd also add a protected constructor to MyClass to prevent the compiler from generating a public default constructor.
That is what I thought. But I'm looking for the details... 'Toggle()' is not a static method, but it is a member of a static property (when using 'Instance'). Is that what makes it shared among threads?
If your application is multi-threaded and you can foresee that multiple threads will access that method, that makes it shared among threads. Because your class is a singleton, you know that the different threads will access the SAME object, so be careful about the thread-safety of your methods.
And how does this apply to singletons in general? Would I have to address this in every method on my class?
As I said above, because it's a singleton you know different threads will access the same object, possibly at the same time. This does not mean you have to make every method obtain a lock. If you notice that a simultaneous invocation can lead to corrupted state of the class, then you should apply the method mentioned by #Thomas.
Can I assume that the singleton pattern exposes my otherwise lovely thread-safe class to all the thread problems of regular static members?
No. Your class is simply not threadsafe. The singleton has nothing to do with it.
(I'm getting my head around the fact that instance members called on a static object cause threading problems)
It's nothing to do with that either.
You have to think like this: Is it possible in my program for 2 (or more) threads to access this piece of data at the same time?
The fact that you obtain the data via a singleton, or static variable, or passing in an object as a method parameter doesn't matter. At the end of the day it's all just some bits and bytes in your PC's RAM, and all that matters is whether multiple threads can see the same bits.
Your thread could stop in the middle of that method and transfer control to a different thread. You need a critical section around that code...
private static object _lockDummy = new object();
...
lock(_lockDummy)
{
//do stuff
}
I was thinking that if I dump the singleton pattern and force everyone to get a new instance of the class it would ease some problems... but that doesn't stop anyone else from initializing a static object of that type and passing that around... or from spinning off multiple threads, all accessing 'Toggle()' from the same instance.
Bingo :-)
I get it now. It's a tough world. I wish I weren't refactoring legacy code :(
Unfortunately, multithreading is hard and you have to be very paranoid about things :-)
The simplest solution in this case is to stick with the singleton, and add a lock around the value, like in the examples.
Quote:
if(value == 0) { value = 1; }
if(value == 1) { value = 0; }
return value;
value will always be 0...
Well, I actually don't know C# that well... but I am ok at Java, so I will give the answer for that, and hopefully the two are similar enough that it will be useful. If not, I apologize.
The answer is, no, it's not safe. One thread could call Toggle() at the same time as the other, and it is possible, although unlikely with this code, that Thread1 could set value in between the times that Thread2 checks it and when it sets it.
To fix it, simply make Toggle() synchronized. It doesn't block on anything or call anything that might spawn another thread which could call Toggle(), so that's all you have to do to make it safe.
I am re-factoring some code and am wondering about the use of a lock in the instance constructor.
public class MyClass {
    private static Int32 counter = 0;
    private Int32 myCount;

    public MyClass() {
        lock(this) {
            counter++;
            myCount = counter;
        }
    }
}
Please confirm
Instance constructors are thread-safe.
The lock statement prevents access to that code block, not to the static 'counter' member.
If the intent of the original programmer were to have each instance know its 'count', how would I synchronize access to the 'counter' member to ensure that another thread isn't new'ing a MyClass and changing the count before this one sets its count?
FYI - This class is not a singleton. Instances must simply be aware of their number.
If you are only incrementing a number, there is a special class (Interlocked) for just that...
http://msdn.microsoft.com/en-us/library/system.threading.interlocked.increment.aspx
Interlocked.Increment Method
Increments a specified variable and stores the result, as an atomic operation.
System.Threading.Interlocked.Increment(ref myField);
More information about threading best practices...
http://msdn.microsoft.com/en-us/library/1c9txz50.aspx
I'm guessing this is for a singleton pattern or something like it. What you want to do is not lock your object, but lock the counter while you are modifying it.
private static int counter = 0;
private static object counterLock = new Object();
lock(counterLock) {
counter++;
myCounter = counter;
}
Your current code is somewhat redundant: only one thread can ever be inside a given instance's constructor, because no other thread has a reference to the object yet, so locking on this there doesn't protect anything, unlike in methods, where the instance can be shared across threads and accessed from any of them.
From the little I can tell from your code, you are trying to give the object the current count at the time it is created. With the above code the counter will be locked while it is updated and copied locally, so all other constructors will have to wait for the lock to be released.
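Putting that snippet in context, a sketch of the whole class might look like this:
public class MyClass {
    private static Int32 counter = 0;
    private static readonly object counterLock = new Object();
    private Int32 myCount;

    public MyClass() {
        lock (counterLock) {      // serialise access to the shared counter
            counter++;
            myCount = counter;    // this instance's number, captured under the lock
        }
    }
}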
#ajmastrean
I am not saying you should use the singleton pattern itself, but adopt its method of encapsulating the instantiation process.
i.e.
Make the constructor private.
Create a static instance method that returns the type.
In the static instance method, use the lock keyword before instantiating.
Instantiate a new instance of the type.
Increment the count.
Unlock and return the new instance.
EDIT
One problem that has occurred to me, if how would you know when the count has gone down? ;)
EDIT AGAIN
Thinking about it, you could add code to the destructor that calls another static method to decrement the counter :D
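For instance, something along these lines (a sketch only; finalisers run on the garbage collector's schedule, so the count will lag behind the moment instances actually become unreachable):
~MyClass()
{
    // a thread-safe decrement of the shared counter
    Interlocked.Decrement(ref counter);
}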
You can use another static object to lock on.
private static Object lockObj = new Object();
and lock this object in the constructor.
lock(lockObj){}
However, I'm not sure if there are situations that need special handling because of compiler optimization in .NET, as there are in the case of Java.
The most efficient way to do this would be to use the Interlocked increment operation. It will increment the counter and return the newly set value of the static counter all at once (atomically)
class MyClass {
    static int _LastInstanceId = 0;
    private readonly int instanceId;

    public MyClass() {
        this.instanceId = Interlocked.Increment(ref _LastInstanceId);
    }
}
In your original example, the lock(this) statement will not have the desired effect because each individual instance will have a different "this" reference, and multiple instances could thus be updating the static member at the same time.
In a sense, constructors can be considered to be thread safe because the reference to the object being constructed is not visible until the constructor has completed, but this doesn't do any good for protecting a static variable.
(Mike Schall had the interlocked bit first)
I think if you modify the Singleton Pattern to include a count (obviously using the thread-safe method), you will be fine :)
Edit
Crap I accidentally deleted!
I am not sure if instance constructors ARE thread safe; I remember reading about this in a design patterns book, where you need to ensure that locks are in place during the instantiation process, purely because of this...
#Rob
FYI, This class may not be a singleton, I need access to different instances. They must simply maintain a count. What part of the singleton pattern would you change to perform 'counter' incrementing?
Or are you suggesting that I expose a static method for construction blocking access to the code that increments and reads the counter with a lock.
public class MyClass {
    private static Int32 counter = 0;

    public static MyClass GetAnInstance() {
        lock(typeof(MyClass)) {
            counter++;
            return new MyClass();
        }
    }

    private Int32 myCount;

    private MyClass() {
        myCount = counter;
    }
}