I'm new to some VS2013 code which is a mixture of standard C++ and Microsoft-specific extensions (C++/CLI). The code has a class like this:
ref class Foo {
    Bar^ bar_; // somewhere else, bar_ = gcnew Bar...
};
Now I need to add an unmanaged member; from searching online it seems I can do this:
ref class Foo {
    Bar^ bar_;
    Unmanaged* ptr_; // somewhere else, ptr_ = new Unmanaged();

    ~Foo() {
        this->!Foo();
    }

    !Foo() {
        delete ptr_;
        // do I need anything to deal with bar_?
    }
};
The questions are:
1) is this finalizer/destructor the way to go?
2) do I need to write anything extra for bar_ now that I'm explicitly writing the finalizer/destructor?
3) are there cleaner ways to do it?
1) is this finalizer/destructor the way to go?
Yes.
2) do I need to write anything extra for bar_
Nothing that is obvious from the snippets. But if the Bar class is disposable as well then you probably ought to add delete bar_; to the destructor. Not to the finalizer. And not if you passed the reference to other code so you can't be sure that this reference is the last one still using the Bar object.
3) are there cleaner way to do it?
No, though there are other ways to do it. You could, for example, consider not adding the destructor at all. Having one puts the burden of calling it on the code that uses the class. Typically that would be C# or VB.NET code, which would have to use the using statement or call Dispose() explicitly. Keep in mind that callers often forget, or don't have a good way to call it.
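For illustration, the consuming C# side looks roughly like this; a C++/CLI ref class with a destructor is exposed to C# as implementing IDisposable, and DoWork here is just a made-up method:

using (var foo = new Foo())
{
    foo.DoWork();   // hypothetical work
}   // Dispose() runs here, which executes Foo's C++/CLI destructor (~Foo)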
If such code is not expected to create a lot of Foo instances and the Unmanaged class merely uses a bit of memory, then the finalizer alone might well be good enough. Or if the Foo object is expected to live for the lifetime of the app (pretty common), then disposing is pointless. Even if it does use a lot of memory, GC::AddMemoryPressure() is a pretty nice alternative; it makes your class easier to use.
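To show what that looks like, here is a minimal C# sketch of the memory-pressure idea (the NativeBuffer name and the allocation are made up; the same calls are available as GC::AddMemoryPressure / GC::RemoveMemoryPressure from C++/CLI):

using System;
using System.Runtime.InteropServices;

class NativeBuffer
{
    private readonly IntPtr _ptr;
    private readonly long _size;

    public NativeBuffer(long size)
    {
        _size = size;
        _ptr = Marshal.AllocHGlobal(new IntPtr(size));
        // Tell the GC how much unmanaged memory this object keeps alive, so it
        // collects (and finalizes) such objects sooner under memory pressure.
        GC.AddMemoryPressure(size);
    }

    ~NativeBuffer()
    {
        Marshal.FreeHGlobal(_ptr);
        GC.RemoveMemoryPressure(_size);
    }
}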
And you could consider wrapping the Unmanaged pointer in its own class so Foo doesn't need a finalizer anymore, along the pattern of the SafeHandle classes in .NET; SafeBuffer is the closest match. That tends to be overkill in a C++/CLI wrapper, however; a failing delete in particular is nothing you want to hide.
But what you have gets the job done.
I have a question about object declarations in C#. Let me explain with this example.
I can do this:
MyObject obj = new MyObject();
int a = obj.getInt();
Or I can do this:
int a = new MyObject().getInt();
The result is the same, but are there any differences between these two forms (apart from the syntax)?
Thanks.
This isn't a declaration: it's a class instantiation.
There's no practical difference: it's all about readability and your own coding style.
I would add that there are a few cases where you will need to keep a reference to the object, for example when the object is IDisposable.
For example:
// WRONG! The underlying stream may still be locked after reading to the end...
new StreamReader(...).ReadToEnd();

// OK! Store the whole instance in a variable so you can dispose of it when you
// don't need it anymore.
using (StreamReader r = new StreamReader(...))
{
    string text = r.ReadToEnd();
} // This will call r.Dispose() automatically
As some comments have noted, there are a lot of edge cases where instantiating a class and storing the object in a reference (a variable) will be the better or only option, but for your simple sample I believe the difference is negligible and it remains a coding style/readability issue.
It's mostly syntax.
The main difference is that you can't reuse the MyObject instance in the second example. Also, it may become eligible for garbage collection immediately.
No, technically they are the same.
The only thing I would suggest considering in this case: if the method does not actually need an instance to be created, you could declare it static, so you can simply call it like this:
int a = MyObject.getInt();
but this naturally depends on the concrete implementation.
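For example, a minimal sketch of the static variant (the body of getInt is made up here):

public class MyObject
{
    // Static: callable without creating an instance.
    public static int getInt()
    {
        return 42; // placeholder value
    }
}

// No instance is created at the call site:
int a = MyObject.getInt();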
I have various classes that wrap an IntPtr. They don't store their own data (other than the pointer), but instead use properties and methods to expose the data at the pointer using an unmanaged library. It works well, but I've gotten to the point where I need to be able to refer to these wrapper objects from other wrapper objects. For example:
public class Node {
    private IntPtr _ptr;

    public Node Parent {
        get { return new Node(UnmanagedApi.GetParent(_ptr)); }
    }

    internal Node(IntPtr ptr) {
        _ptr = ptr;
    }
}
Now, I can simply return a new Node(parentPtr) (as above), but there is the potential for having tens of thousands of nodes. Wouldn't this be a bad idea, since multiple wrapper objects could end up referring to the same IntPtr?
What can I do to fix this? I thought about using a static KeyedCollection class that uses each IntPtr as the key. So, instead of returning a new Node each time, I can just look it up. But that would bring up threading issues, right?
Is there a better way?
The biggest problem I can see is who is responsible for deleting the objects referred to by the pointer?
Reusing the same object is not necessarily a threading issue, although if you are responsible for calling delete on the unmanaged objects you'll need to implement some sort of reference counting in your objects.
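For example, a bare-bones reference-counting sketch (NativeApi.Destroy is a hypothetical cleanup call, and this version is not thread-safe; Interlocked.Increment/Decrement would be needed for concurrent use):

using System;

class NodeHandle
{
    private readonly IntPtr _ptr;
    private int _refCount = 1;          // the creator holds the first reference

    public NodeHandle(IntPtr ptr) { _ptr = ptr; }

    public void AddRef() { _refCount++; }

    public void Release()
    {
        if (--_refCount == 0)
            NativeApi.Destroy(_ptr);    // hypothetical unmanaged delete
    }
}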
Using multiple objects with the same pointer might be easier if your objects are read-only. If they have state that can be changed then you'll need to understand the impact of making a change if multiple objects hold a pointer to that state.
You might also want to look at C++/CLI (managed C++) to provide a layer between the C# and unmanaged library and do the hard work of translation/manipulation in there and provide a simpler API for the C# to work with.
This whole code doesn't look right.
Your use of the GetParent function seems to imply that you have a tree-like structure. Let me make a few guesses about your code, they could be wrong.
You want to make extensive use of the UnmanagedAPI and don't want to duplicate this code in your .NET code.
You simply want to make sure you don't end up with memory problems when accessing your unmanaged code.
I would suggest that instead of creating .NET objects on a node-by-node basis, you create a .NET wrapper for the entire tree/graph structure and provide a .NET API that may pass unmanaged API pointers as arguments, but strictly handles the allocation/deallocation so that you avoid memory problems. This avoids unnecessarily allocating a new managed object just to represent something that already exists, as in the GetParent example.
I had a related issue where deleting unmanaged objects was done explicitly. What I did was make a base class for all wrappers that contained a static dictionary of the available wrapper instances. Objects were added to the dictionary in the constructor and removed in the WrapperBase.Delete() method. Note that it is important to have an explicit Delete() method with this approach; otherwise the GC will never free the wrapper instances because of the references held by the static dictionary.
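A rough C# sketch of that approach (the names are made up, and access to the dictionary would need locking if wrappers are created from multiple threads):

using System;
using System.Collections.Generic;

abstract class WrapperBase
{
    // Keeps every live wrapper reachable, keyed by its unmanaged pointer.
    private static readonly Dictionary<IntPtr, WrapperBase> Instances =
        new Dictionary<IntPtr, WrapperBase>();

    protected IntPtr Ptr { get; private set; }

    protected WrapperBase(IntPtr ptr)
    {
        Ptr = ptr;
        Instances[ptr] = this;            // register on construction
    }

    public static bool TryGet(IntPtr ptr, out WrapperBase wrapper)
    {
        return Instances.TryGetValue(ptr, out wrapper);
    }

    public virtual void Delete()
    {
        // Without this removal, the static dictionary would keep the wrapper
        // (and therefore the unmanaged object) alive forever.
        Instances.Remove(Ptr);
        // ... call the unmanaged delete/free for Ptr here ...
    }
}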
I had trouble coming up with a good way to word this question, so let me try to explain by example:
Suppose I have some interface. For simplicity's sake, I'll say the interface is IRunnable, and it provides a single method, Run. (This is not real; it's only an example.)
Now, suppose I have some pre-existing class, let's call it Cheetah, that I can't change. It existed before IRunnable; I can't make it implement my interface. But I want to use it as if it implements IRunnable--presumably because it has a Run method, or something like it. In other words, I want to be able to have code that expects an IRunnable and will work with a Cheetah.
OK, so I could always write a CheetahWrapper sort of deal. But humor me and let me write something a little more flexible--how about a RunnableAdapter?
I envision the class definition as something like this:
public class RunnableAdapter : IRunnable {
    public delegate void RunMethod();

    private RunMethod Runner { get; set; }

    public RunnableAdapter(RunMethod runner) {
        this.Runner = runner;
    }

    public void Run() {
        Runner.Invoke();
    }
}
Straightforward enough, right? So with this, I should be able to make a call like this:
Cheetah c = new Cheetah();
RunnableAdapter ra = new RunnableAdapter(c.Run);
And now, voila: I have an object that implements IRunnable and is, in its heart of hearts, a Cheetah.
My question is: if this Cheetah of mine falls out of scope at some point, and gets to the point where it would normally be garbage collected... will it? Or does this RunnableAdapter object's Runner property constitute a reference to the original Cheetah, so that it won't be collected? I certainly want that reference to stay valid, so basically I'm wondering if the above class definition is enough or if it would be necessary to maintain a reference to the underlying object (like via some private UnderlyingObject property), just to prevent garbage collection.
Yes, that reference remains valid, and the target object can in fact be retrieved through the standard Delegate.Target property -- in your code, via Runner.Target (from inside RunnableAdapter, since Runner is private).
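For example, a small sketch using the question's hypothetical Cheetah and RunnableAdapter types:

Cheetah c = new Cheetah();
RunnableAdapter.RunMethod runner = c.Run;

// The delegate itself keeps the bound instance reachable:
Console.WriteLine(object.ReferenceEquals(runner.Target, c));   // prints True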
As others said it counts as a reference. You might find this story interesting.
http://asserttrue.blogspot.com/2008/11/garbage-collection-causes-car-crash.html
If not, that sounds like a broken garbage collector.
Yes, the delegate counts as a reference. Your object will not be garbage collected until the delegate is also unreachable.
Short Version
For those who don't have the time to read my reasoning for this question below:
Is there any way to enforce a policy of "new objects only" or "existing objects only" for a method's parameters?
Long Version
There are plenty of methods which take objects as parameters, and it doesn't matter whether the method has the object "all to itself" or not. For instance:
var people = new List<Person>();
Person bob = new Person("Bob");
people.Add(bob);
people.Add(new Person("Larry"));
Here the List<Person>.Add method has taken an "existing" Person (Bob) as well as a "new" Person (Larry), and the list contains both items. Bob can be accessed as either bob or people[0]. Larry can be accessed as people[1] and, if desired, cached and accessed as larry (or whatever) thereafter.
OK, fine. But sometimes a method really shouldn't be passed a new object. Take, for example, Array.Sort<T>. The following doesn't make a whole lot of sense:
Array.Sort<int>(new int[] {5, 6, 3, 7, 2, 1});
All the above code does is take a new array, sort it, and then forget it (as it becomes unreachable once Array.Sort<int> returns, the sorted array will therefore be garbage collected, if I'm not mistaken). So Array.Sort<T> expects an "existing" array as its argument.
There are conceivably other methods which may expect "new" objects (though I would generally think that to have such an expectation would be a design mistake). An imperfect example would be this:
DataTable firstTable = myDataSet.Tables["FirstTable"];
DataTable secondTable = myDataSet.Tables["SecondTable"];
firstTable.Rows.Add(secondTable.Rows[0]);
As I said, this isn't a great example, since DataRowCollection.Add doesn't actually expect a new DataRow, exactly; but it does expect a DataRow that doesn't already belong to a DataTable. So the last line in the code above won't work; it needs to be:
firstTable.ImportRow(secondTable.Rows[0]);
Anyway, this is a lot of setup for my question, which is: is there any way to enforce a policy of "new objects only" or "existing objects only" for a method's parameters, either in its definition (perhaps by some custom attributes I'm not aware of) or within the method itself (perhaps by reflection, though I'd probably shy away from this even if it were available)?
If not, any interesting ideas as to how to possibly accomplish this would be more than welcome. For instance I suppose if there were some way to get the GC's reference count for a given object, you could tell right away at the start of a method whether you've received a new object or not (assuming you're dealing with reference types, of course--which is the only scenario to which this question is relevant anyway).
EDIT:
The longer version gets longer.
All right, suppose I have some method that I want to optionally accept a TextWriter to output its progress or what-have-you:
static void TryDoSomething(TextWriter output) {
    // do something...
    if (output != null)
        output.WriteLine("Did something...");

    // do something else...
    if (output != null)
        output.WriteLine("Did something else...");

    // etc. etc.

    if (output != null)
        // do I call output.Close() or not?
}

static void TryDoSomething() {
    TryDoSomething(null);
}
Now, let's consider two different ways I could call this method:
string path = GetFilePath();

using (StreamWriter writer = new StreamWriter(path)) {
    TryDoSomething(writer);

    // do more things with writer
}
OR:
TryDoSomething(new StreamWriter(path));
Hmm... it would seem that this poses a problem, doesn't it? I've constructed a StreamWriter, which implements IDisposable, but TryDoSomething isn't going to presume to know whether it has exclusive access to its output argument or not. So the object either gets disposed prematurely (in the first case), or doesn't get disposed at all (in the second case).
I'm not saying this would be a great design, necessarily. Perhaps Josh Stodola is right and this is just a bad idea from the start. Anyway, I asked the question mainly because I was just curious if such a thing were possible. Looks like the answer is: not really.
No, basically.
There's really no difference between:
var x = new ...;
Foo(x);
and
Foo(new ...);
and indeed sometimes you might convert between the two for debugging purposes.
Note that in the DataRow/DataTable example, there's an alternative approach though - that DataRow can know its parent as part of its state. That's not the same thing as being "new" or not - you could have a "detach" operation for example. Defining conditions in terms of the genuine hard-and-fast state of the object makes a lot more sense than woolly terms such as "new".
Yes, there is a way to do this.
Sort of.
If you make your parameter a ref parameter, you'll have to have an existing variable as your argument. You can't do something like this:
DoSomething(ref new Customer());
If you do, you'll get the error "A ref or out argument must be an assignable variable."
Of course, using ref has other implications. However, if you're the one writing the method, you don't need to worry about them. As long as you don't reassign the ref parameter inside the method, it won't make any difference whether you use ref or not.
I'm not saying it's good style, necessarily. You shouldn't use ref or out unless you really, really need to and have no other way to do what you're doing. But using ref will make what you want to do work.
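A small sketch of the idea, with hypothetical Customer and DoSomething names (Customer is just a placeholder type):

using System;

class Customer { }   // hypothetical placeholder type

static class Demo
{
    static void DoSomething(ref Customer customer)
    {
        // Use customer normally; just don't reassign the parameter.
        Console.WriteLine(customer);
    }

    static void Caller()
    {
        Customer c = new Customer();
        DoSomething(ref c);                 // fine: c is an assignable variable
        // DoSomething(ref new Customer()); // error: "A ref or out argument must be an assignable variable"
    }
}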
No. And if there is some reason that you need to do this, your code has improper architecture.
Short answer: no, there isn't.
In the vast majority of cases I find that the issues you've listed above don't really matter all that much. When they do, you could overload the method so that it accepts something else as a parameter instead of the object you are worried about sharing.
// For example, create a method that allows you to do this:
people.Add("Larry");

// Instead of this:
people.Add(new Person("Larry"));

// The new method might look a little like this:
public void Add(string name)
{
    Person person = new Person(name);
    this.Add(person); // The Add(Person) overload could be private if necessary
}
I can think of a way to do this, but I would definitely not recommend this. Just for argument's sake.
What does it mean for an object to be a "new" object? It means there is only one reference keeping it alive. An "existing" object would have more than one reference to it.
With this in mind, look at the following code:
using System;

class Program
{
    static void Main(string[] args)
    {
        object o = new object();
        Console.WriteLine(IsExistingObject(o));
        Console.WriteLine(IsExistingObject(new object()));

        // Just something to simulate further usage of o. If we didn't do this,
        // in a release build o would already be collected by the GC.Collect
        // call inside IsExistingObject (not in a debug build).
        o.ToString();
    }

    public static bool IsExistingObject(object o)
    {
        var oRef = new WeakReference(o);
#if DEBUG
        o = null; // In Debug we need to set o to null; this is not necessary in a release build.
#endif
        GC.Collect();
        GC.WaitForPendingFinalizers();
        return oRef.IsAlive;
    }
}
This prints True on the first line, False on the second.
But again, please do not use this in your code.
Let me rewrite your question to something shorter.
Is there any way, in my method, which takes an object as an argument, to know if this object will ever be used outside of my method?
And the short answer to that is: No.
Let me venture an opinion at this point: There should not be any such mechanism either.
This would complicate method calls all over the place.
If there were a mechanism by which, in a method call, I could tell whether the object I'm given would really be used afterwards or not, then that's a signal to me, as the developer of that method, to take it into account.
Basically, you'd see this type of code all over the place (hypothetical, since it isn't available/supported):
if (ReferenceCount(obj) == 1) return; // only reference is the one we have
My opinion is this: If the code that calls your method isn't going to use the object for anything, and there are no side-effects outside of modifying the object, then that code should not exist to begin with.
It's like code that looks like this:
1 + 2;
What does this code do? Well, depending on the C/C++ compiler, it might compile into something that evaluates 1+2. But then what, where is the result stored? Do you use it for anything? No? Then why is that code part of your source code to begin with?
Of course, you could argue that the code is actually a+b;, and the purpose is to ensure that the evaluation of a+b isn't going to throw an exception denoting overflow, but such a case is so diminishingly rare that a special case for it would just mask real problems, and it would be really simple to fix by just assigning it to a temporary variable.
In any case, for any feature in any programming language and/or runtime and/or environment, where a feature isn't available, the reasons for why it isn't available are:
It wasn't designed properly
It wasn't specified properly
It wasn't implemented properly
It wasn't tested properly
It wasn't documented properly
It wasn't prioritized above competing features
All of these are required to get a feature to appear in version X of application Y, be it C# 4.0 or MS Works 7.0.
Nope, there's no way of knowing.
All that gets passed in is the object reference. Whether it is 'newed' in-situ, or is sourced from an array, the method in question has no way of knowing how the parameters being passed in have been instantiated and/or where.
One way to know whether an object passed to a function (or method) was created right before the call would be for the object to have a property initialized with a timestamp obtained from a system function; by looking at that property, the method could decide whether the object is freshly created.
Frankly, I would not use such a method, because:
I don't see any reason why the code should need to know whether the passed parameter was created just now or at some earlier moment.
The approach depends on a system function that may not be present on some systems, or that may be unreliable.
With modern CPUs, which are far faster than the CPUs of 10 years ago, there is also the problem of choosing the right threshold value for deciding whether an object counts as freshly created or not.
Another solution would be to use an object property that is set to one value by the object's creator and set to a different value by every method of the object.
In that case the problem would be forgetting to add the code that changes that property in each method.
Once again, I would ask myself: "Is there really a need to do this?"
As a possible partial solution, if you only want a single instance of an object to be consumed by a method, you could look at the Singleton pattern. That way the method in question could not create another instance if one already existed.