Is it possible to reassign a ref local? - c#

C#'s ref locals are implemented using a CLR feature called managed pointers, which come with their own set of restrictions, but luckily immutability is not one of them. That is, in ILAsm, if you have a local variable of managed-pointer type, it's entirely possible to change this pointer, making it "reference" another location. (C++/CLI also exposes this feature as interior pointers.)
Reading the C# documentation on ref locals it appears to me that C#'s ref locals are, even though based on the managed pointers of CLR, not relocatable; if they are initialized to point to some variable, they cannot be made to point to something else. I've tried using
ref object reference = ref some_var;
ref reference = ref other_var;
and similar constructs, to no avail.
I've even tried to write a small struct wrapping a managed pointer in IL; it works as far as C# is concerned, but the CLR doesn't seem to like having a managed pointer in a struct, even if in my usage it doesn't ever go to the heap.
Does one really have to resort to using IL or tricks with recursion to overcome this? (I'm implementing a data structure that needs to keep track of which of its pointers were followed, a perfect use of managed pointers.)

[edit:] "ref-reassign" is on the schedule for C# 7.3. The 'conditional-ref' workaround, which I discuss below, was deployed in C# 7.2.
I've also long been frustrated by this and just recently stumbled on a workable answer.
Essentially, in C# 7.2 you can now initialize ref locals with a ternary operator, and this can be contrived, somewhat torturously, into a simulation of ref-local reassignment. You "hand off" the ref local assignments downwards through multiple variables as you move down in the lexical scope of your C# code.
This approach requires a great deal of unconventional thinking and a lot of planning ahead. For certain situations or coding scenarios, it may not be possible to anticipate the gamut of runtime configurations such that any conditional assignment scheme might apply. In this case you're out of luck. Or, switch to C++/CLI, which exposes managed tracking references. The tension here is that, for C#, the vast and indisputable gains in concision, elegance, and efficiency which are immediately realized by introducing the conventional use of managed pointers (these points are discussed further below) are frittered away by the degree of contortion required to overcome the reassignment problem.
The syntax that had eluded me for so long is shown next. Or, check the link I cited at the top.
C# 7.2 ref-local conditional assignment via the ternary operator ? :
ref int i_node = ref (f ? ref m_head : ref node.next);
This line is from a canonical problem case for the ref local dilemma that the questioner posed here. It's from code which maintains back-pointers while walking a singly-linked list. The task is trivial in C/C++, as it should be (and is quite beloved by CSE101 instructors, perhaps for that particular reason)—but is entirely agonizing using managed pointers in C#.
Such a complaint is entirely legitimate too, thanks to Microsoft's own C++/CLI language showing us how awesome managed pointers can be in the .NET universe. Instead, most C# developers seem to just end up using integer indices into arrays, or of course full blown native pointers with unsafe C#.
Some brief comments on the linked-list walking example, and why one would be interested in going to so much trouble over these managed pointers. We assume all of the nodes are actually structs in an array (ValueType, in-situ) such as m_nodes = new Node[100]; and each next pointer is thus an integer (its index in the array).
struct Node
{
    public int ix, next;
    public char data;

    public override String ToString() =>
        String.Format("{0} next: {1,2} data: {2}", ix, next, data);
};
As shown here, the head of the list will be a standalone integer, stored apart from the records. In the next snippet, I use the new C#7 syntax for ValueTuple to do so. Obviously it's no problem to traverse forward using these integer links—but C# has traditionally lacked an elegant way to maintain a link to the node you came from. It's a problem since one of the integers (the first one) is a special case owing to not being embedded in a Node structure. (A sketch of handling that special case with the conditional ref follows the data below.)
static (int head, Node[] nodes) L =
    (3,
     new[]
     {
         new Node { ix = 0, next = -1, data = 'E' },
         new Node { ix = 1, next = 4, data = 'B' },
         new Node { ix = 2, next = 0, data = 'D' },
         new Node { ix = 3, next = 1, data = 'A' },
         new Node { ix = 4, next = 2, data = 'C' },
     });
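To see the conditional-ref trick at work on this exact setup, here is a minimal sketch (the Unlink name and loop shape are mine, not from the original question; it assumes it sits in the same class as Node and L above). It walks the list and splices out the first node whose data matches, keeping a ref to whichever integer currently points at the node being examined:
static void Unlink(char target)
{
    for (int cur = L.head, prev = -1; cur != -1; prev = cur, cur = L.nodes[cur].next)
    {
        // 'link' refers to the standalone head on the first pass, and to the
        // previous node's 'next' field on every later pass -- one code path.
        ref int link = ref (prev == -1 ? ref L.head : ref L.nodes[prev].next);
        if (L.nodes[cur].data == target)
        {
            link = L.nodes[cur].next;   // splice the current node out, in place
            return;
        }
    }
}
Note that only the selected branch of the conditional is evaluated, so L.nodes[prev] is never touched while prev is -1.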
Additionally, there's presumably a decent amount of processing work to do on each node, but you really don't want to pay the (double) performance costs of imaging each (possibly large) ValueType out of its cozy array home—and then having to image each one back when you're done! After all, surely the reason we're using value types here is to maximize performance. As I discuss at length elsewhere on this site, structs can be extremely efficient in .NET, but only if you never accidentally "lift" them out of their storage. It's easy to do and it can immediately destroy your memory bus bandwidth.
The trivial approach to not-lifting the structs just repeats array indexing like so:
int ix = 1234;
arr[ix].a++;
arr[ix].b ^= arr[ix].c;
arr[ix].d /= (arr[lx].e + arr[ix].f);
Here, each ValueType field access is independently dereferenced on every access. Although this "optimization" does avoid the bandwidth penalties mentioned above, repeating the same array indexing operation over and over again can instead implicate an entirely different set of runtime penalties. The (opportunity) costs now are due to unnecessarily wasted cycles where .NET recomputes provably invariant physical offsets or performs redundant bounds checks on the array.
JIT optimizations in release-mode may mitigate these issues somewhat—or even dramatically—by recognizing and consolidating redundancy in the code you supplied, but maybe not as much as you'd think or hope (or eventually realize you don't want): JIT optimizations are strongly constrained by strict adherence to the .NET Memory Model,[1] which requires that whenever a storage location is publicly visible, the CPU must execute the relevant fetch sequence exactly as authored in the code. For the previous example, this means that if ix is shared with other threads in any way prior to the operations on arr, then the JIT must ensure that the CPU actually touches the ix storage location exactly 6 times, no more, no less.
Of course the JIT can do nothing to address the other obvious and widely-acknowledged problem with repetitive source code such as the previous example. In short, it's ugly, bug-prone, and harder to read and maintain. To illustrate this point, ☞ ...did you even notice the bug I intentionally put in the preceding code?
The cleaner version of the code shown next doesn't make bugs like this "easier to spot"; instead, as a class, it precludes them entirely, since there's now no need for an array-indexing variable at all. Variable ix doesn't need to exist in the following, since 1234 is used only once. It follows that the bug I so deviously introduced earlier cannot be propagated to this example because it has no means of expression; the benefit is that what can't exist can't introduce a bug (as opposed to 'what does not exist...', which most certainly could be a bug).
ref Node rec = ref arr[1234];
rec.a++;
rec.b ^= rec.c;
rec.d /= (rec.e + rec.f);
Nobody would disagree that this is an improvement. So ideally we want to use managed pointers to directly read and write fields in the structure in situ. One way to do this is to write all of your intensive processing code as instance member functions and properties in the ValueType itself, though for some reason it seems that many people don't like this approach. In any case, the point is now moot with C#7 ref locals...
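For completeness, the instance-member approach mentioned above looks roughly like this (a sketch; the Rec and Step names are mine, and the fields mirror the earlier snippet). Because an array element is a variable, calling arr[1234].Step() lets the method mutate the element in place, with no copy out of the array:
struct Rec
{
    public int a, b, c, d, e, f;

    // Runs against the struct's own storage; when invoked as arr[1234].Step(),
    // 'this' is a managed pointer to the element inside the array.
    public void Step()
    {
        a++;
        b ^= c;
        d /= (e + f);
    }
}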
✹ ✹ ✹
I'm now realizing that fully explaining the type of programming required here is probably too involved to show with a toy example and thus beyond the scope of a StackOverflow article. So I'm going to jump ahead, and in order to wrap up I'll drop in a section of some working code I have showing simulated managed pointer reassignment. This is taken from a heavily modified snapshot of HashSet<T> in the .NET 4.7.1 reference source[direct link], and I'll just show my version without much explanation:
int v1 = m_freeList;
for (int w = 0; v1 != -1; w++)
{
    ref int v2 = ref (w == 0 ? ref m_freeList : ref m_slots[v1].next);
    ref Slot fs = ref m_slots[v2];
    if (v2 >= i)
    {
        v2 = fs.next;
        fs = default(Slot);
        v1 = v2;
    }
    else
        v1 = fs.next;
}
This is just an arbitrary sample fragment from the working code so I don't expect anyone to follow it, but the gist of it is that the 'ref' variables, designated v1 and v2, are intertwined across scope blocks and the ternary operator is used to coordinate how they flow down. For example, the only purpose of the loop variable w is to handle which variable gets activated for the special case at the start of the linked-list traversal (discussed earlier).
Again, it turns out to be a very bizarre and tortured constraint on the normal ease and fluidity of modern C#. Patience, determination, and—as I mentioned earlier—a lot of planning ahead is required.
[1] If you're not familiar with what's called the .NET Memory Model, I strongly suggest taking a look. I believe .NET's strength in this area is one of its most compelling features, a hidden gem and the one (not-so-)secret superpower that most fatefully embarrasses those ever-strident friends of ours who yet adhere to the 1980's-era ethos of bare-metal coding. Note an epic irony: imposing strict limits on wild or unbounded aggression of compiler optimization may end up enabling apps with much better performance, because stronger constraints expose reliable guarantees to developers. These, in turn, imply stronger programming abstractions or suggest advanced design paradigms, in this case relevant to concurrent systems. For example, if one agrees that, in the native community, lock-free programming has languished in the margins for decades, perhaps the unruly mob of optimizing compilers is to blame? Progress in this specialty area is easily wrecked without the reliable determinism and consistency provided by a rigorous and well-defined memory model, which, as noted, is somewhat at odds with unfettered compiler optimization. So here, constraints mean that the field can at last innovate and grow. This has been my experience in .NET, where lock-free programming has become a viable, realistic—and eventually, mundane—basic daily programming vehicle.

Related

How to make a tree structure on stack only ( no reference type, nothing on heap) in C#?

For performance purposes I am doing multithreading, and the data is intended to be saved on the stack of each thread.
First of all, I want to save a collection of data as a value type (so it can be saved on the stack). However, all the commonly used collection types, even arrays, are reference types.
As far as I know, the only possibility is Span<T>, a struct collection. I have a ref struct, which is made a ref struct so that it can contain a span.
public ref struct Level{
...
public Span<something> span;
}
This ref struct is also going to be saved into a collection; in this case, a span declared directly in the thread method:
private void threadStart() {
    Span<Level> levelSpan = stackalloc Level[100];
    ...
}
Unfortunately, this is not allowed,
CS0306: The type 'Level' may not be used as a type argument
Is there any other possible approach to save tree-structured data on the stack?
Possible? Probably - it would involve some horrible stackallocs etc, and lots of fighting the compiler to convince it that things are safe when they aren't verifiable, coercing pointers to spans (because the compiler won't trust your spans, but as soon as you touch pointers, the compiler gives up and just lets you do whatever you're doing, because it knows it can't help) - but honestly: it isn't a good idea. The heap is very unlikely to be your limiting factor here, and even if it was - there are things you can do with array-pool leases mapped to spans. Alternatively, it is possible (but not trivial) to create something like an arena allocator in .NET, especially when dealing with value-tuples; there is one in Pipelines.Sockets.Unofficial, for example.
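To make the array-pool idea concrete, a minimal sketch (NodeData and ProcessTree are placeholder names of mine, not from any library): rent a pooled array, work with it through a Span<T>, and return it when done, so the nodes live in reusable heap memory without a fresh allocation per call. This swaps the span-in-struct design for plain structs addressed by index:
using System;
using System.Buffers;

struct NodeData          // an ordinary struct (not a ref struct), so it can live in an array
{
    public int Left, Right, Value;
}

static void ProcessTree(int levelCount)
{
    NodeData[] leased = ArrayPool<NodeData>.Shared.Rent(levelCount);
    try
    {
        Span<NodeData> levels = leased.AsSpan(0, levelCount);
        // ... build and walk the tree via integer indices into 'levels' ...
    }
    finally
    {
        ArrayPool<NodeData>.Shared.Return(leased);
    }
}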
Additionally, when talking about scalability, my main thought is "async", which is anathema to retained stack allocations (since the stack-frames unwind for await) - resumable state machines need to use the heap (or at least: not the stack).

Is pass by reference just a legacy?

Perhaps this is a bit naive of me, but I can't really seem to find/think of a decent use-case for "pass by reference". Changing an immutable string (as some other Q/As have mentioned) is generally avoidable and returning multiple variables is generally better handled by returning a Tuple, List, array, etc.
The example on MSDN is terrible in my opinion; I would simply be returning a value in the Square method, instead of having it declared as void.
It seems to me like it's a bit of a legacy part of C#, rather than an integral part of it. Can someone smarter than me try to explain why it's still around and/or some real-world use-cases that are actually practical (i.e. Changing an immutable string is avoidable in almost every case).
P.S.: I followed up on some of the comments by @KallDrexx and @newacct. I see now that they were right and I was wrong: my answer was somewhat misleading. The excellent article "Java is pass-by-value, dammit!" by Scott Stanchfield (Java-specific, but still mostly relevant to C#) finally convinced me.
I'll leave the misleading bits of my answer struck through for now, but might later remove them.
Pass by reference is not just used with ref or out parameters. More importantly, all reference types are passed by reference (thus their name), although this happens transparently.
Here are three frequent use cases for pass-by-reference:
Prevent copying of large structs when passing them around. Imagine you have a value of some struct type that contains lots of fields. A value of that type might potentially occupy quite a lot of memory. Now you want to pass this value to some method. Do you really want to pass it by value, i.e. create a temporary copy?
You can avoid unnecessary copying of large structs by passing them by reference (a short sketch follows this list of use cases).
(Luckily for us, arrays such as byte[] are reference types, so the array's contents are already passed by reference.)
It is often suggested (e.g. in Microsoft's Framework Design Guidelines) that types having value-type semantics should be implemented as reference types if they exceed a certain size (32 bytes), so this use case should not be very frequent.
Mutability. If you want a method to be able to mutate a struct value that is passed to it, and you want the caller to observe the mutation of his version of that object, then you need pass by reference (ref). If the value is passed to the method by value, it receives a copy; mutating the copy will leave the original object unmodified.
This point is also mentioned in the Framework Design Guideline article linked to above.
Note the widespread recommendation against mutable value types (See e.g. "Why are mutable structs evil?"). You should rarely have to use ref or out parameters together with value types.
COM interop as mentioned in this answer often requires you to declare ref and out parameters.
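As promised above, a short sketch of the first use case (the struct layout and the names BigValue, SumByValue, and SumByRef are purely illustrative). Passing by ref hands the callee a managed pointer to the caller's storage instead of copying every field; in C# 7.2 and later, the 'in' modifier gives the same effect for read-only access:
struct BigValue
{
    public long A, B, C, D, E, F, G, H;   // 64 bytes of payload
}

static long SumByValue(BigValue v) => v.A + v.H;     // copies the whole struct on every call
static long SumByRef(ref BigValue v) => v.A + v.H;   // copies only a reference to it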
Suppose you want to have a function that mutates a value (Notice, value, not object), and you also want that function to return some success indicator. A good practice would be to return a boolean indicating success / failure, but what about the value? So you use a ref:
bool Mutate(ref int val)
{
    if (val > 0)
    {
        val = val * 2;
        return true;
    }
    return false;
}
It's true that in C# there are usually alternatives to ref and out - for example, if you want to return more than one value to the caller, you could return a tuple, or a custom type, or receive a reference type as a parameter and change multiple values inside it.
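For instance, a tuple-returning alternative to the earlier Mutate example might look like this sketch (C# 7 value-tuple syntax; the TryDouble name is mine):
static (bool ok, int result) TryDouble(int val)
{
    if (val > 0)
        return (true, val * 2);
    return (false, val);
}

// var (ok, result) = TryDouble(21);   // ok == true, result == 42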
However, these keywords can still be a convenient solution in situations like interop:
// C
int DoSomething(int input, int *output);
// C#
[DllImport(...)]
static extern int DoSomething(int input, ref int output);
There are indeed only a few cases where explicit ref parameters are useful from a functional point of view.
One example I have seen:
public static void Swap<T>(ref T a, ref T b)
{
    var temp = a;
    a = b;
    b = temp;
}
Such a method is useful in some sorting algorithms, for example.
One reason why ref parameters are sometimes used is as an optimization technique. A large struct (value type) is sometimes passed by ref, even if the called method has no intent to modify the struct's value, to avoid copying the struct's contents. You could argue that a large struct would better be a class (i.e. a reference type), but that has disadvantages when you keep large arrays of such objects around (for example in graphics processing). Being an optimization, such a technique does not improve code readability, but improves performance in some situations.
A similar question could be asked about pointers. C# has support for pointers, through the unsafe and fixed keywords. They are hardly ever needed from a functional point of view: I can't really think of a function that could not be coded without the use of pointers. But pointers are not used to make the language more expressive or the code more readable, they are used as a low level optimization technique.
In fact, passing a struct by reference really is a safe way to pass a pointer to the struct's data, a typical low level optimization technique for large structs.
Is a feature that enables low level optimizations a legacy feature? That may depend on your point of view, but you aren't the only C# user. Many of them will tell you that the availability of pointers and other low level constructs is one of the big advantages C# has over some other languages. I for one don't think that ref parameters are a legacy feature; they are an integral part of the language, useful in some scenarios.
That does not mean that you have to use ref parameters, just like you don't have to use LINQ, async/await or any other language feature. Just be happy it's there for when you do need it.

How long does variable assignment take?

I've frequently come across code that made liberal use of variables like var self = this; so their code would look nicer. While I don't think assignments like these are going to be significant at all in any piece of code, I've always wondered how long an assignment like above would take.
With that said: How long does it take, assuming it isn't optimized away? How do the times compare between different languages- e.g. C#, Java, and C++? Common value types (including pointers)? 32 / 64-bit architectures?
EDIT: Erased the part about "noticeable difference". I meant that part as a side-question, but many people have seen that and started downvoting me for premature optimization (in spite of me highlighting the bottleneck part in bold).
The code:
var self = this;
Isn't creating a new instance of the object 'this'; it is just copying a reference to the object 'this'. At the machine level there is only one pointer, since the C# compiler optimizes these types of references out. So the "how long it takes" is actually zero.
So, why do this? Because it makes the code easier to read.

Why is it okay that this struct is mutable? When are mutable structs acceptable?

Eric Lippert told me I should "try to always make value types immutable", so I figured I should try to always make value types immutable.
But, I just found this internal mutable struct, System.Web.Util.SimpleBitVector32, in the System.Web assembly, which makes me think that there must be a good reason for having a mutable struct. I'm guessing the reason that they did it this way is because it performed better under testing, and they kept it internal to discourage its misuse. However, that's speculation.
I've C&P'd the source of this struct. What is it that justifies the design decision to use a mutable struct? In general, what sort of benefits can be gained by the approach and when are these benefits significant enough to justify the potential detriments?
[Serializable, StructLayout(LayoutKind.Sequential)]
internal struct SimpleBitVector32
{
    private int data;

    internal SimpleBitVector32(int data)
    {
        this.data = data;
    }

    internal int IntegerValue
    {
        get { return this.data; }
        set { this.data = value; }
    }

    internal bool this[int bit]
    {
        get {
            return ((this.data & bit) == bit);
        }
        set {
            int data = this.data;
            if (value) this.data = data | bit;
            else this.data = data & ~bit;
        }
    }

    internal int this[int mask, int offset]
    {
        get { return ((this.data & mask) >> offset); }
        set { this.data = (this.data & ~mask) | (value << offset); }
    }

    internal void Set(int bit)
    {
        this.data |= bit;
    }

    internal void Clear(int bit)
    {
        this.data &= ~bit;
    }
}
Given that the payload is a 32-bit integer, I'd say this could easily have been written as an immutable struct, probably with no impact on performance. Whether you're calling a mutator method that changes the value of a 32-bit field, or replacing a 32-bit struct with a new 32-bit struct, you're still doing the exact same memory operations.
Probably somebody wanted something that acted kind of like an array (while really just being bits in a 32-bit integer), so they decided they wanted to use indexer syntax with it, instead of a less-obvious .WithTheseBitsChanged() method that returns a new struct. Since it wasn't going to be used directly by anyone outside MS's web team, and probably not by very many people even within the web team, I imagine they had quite a bit more leeway in design decisions than the people building the public APIs.
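For comparison, a rough sketch of what an immutable variant might look like (the WithBit/ClearBit names are mine, not from the framework, and the C# 7.2 readonly struct modifier is used only for emphasis). Assigning the returned struct over the old one moves exactly the same 32 bits as the mutating setter does:
internal readonly struct ImmutableBitVector32
{
    private readonly int data;

    internal ImmutableBitVector32(int data) { this.data = data; }

    internal bool this[int bit] => (data & bit) == bit;

    // Each "setter" returns a new value instead of mutating in place.
    internal ImmutableBitVector32 WithBit(int bit)  => new ImmutableBitVector32(data | bit);
    internal ImmutableBitVector32 ClearBit(int bit) => new ImmutableBitVector32(data & ~bit);
}

// Usage: flags = flags.WithBit(0x10);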
So, no, probably not that way for performance -- it was probably just some programmer's personal preference in coding style, and there was never any compelling reason to change it.
If you're looking for design guidelines, I wouldn't spend too much time looking at code that hasn't been polished for public consumption.
Actually, if you search for all classes containing BitVector in the .NET framework, you'll find a bunch of these beasts :-)
System.Collections.Specialized.BitVector32 (the sole public one...)
System.Web.Util.SafeBitVector32 (thread safe)
System.Web.Util.SimpleBitVector32
System.Runtime.Caching.SafeBitVector32 (thread safe)
System.Configuration.SafeBitVector32 (thread safe)
System.Configuration.SimpleBitVector32
And if you look here, where the SSCLI (Microsoft Shared Source CLI, aka ROTOR) source of System.Configuration.SimpleBitVector32 resides, you'll find this comment:
//
// This is a cut down copy of System.Collections.Specialized.BitVector32. The
// reason this is here is because it is used rather intensively by Control and
// WebControl. As a result, being able to inline this operations results in a
// measurable performance gain, at the expense of some maintainability.
//
[Serializable()]
internal struct SimpleBitVector32
I believe this says it all. I think the System.Web.Util one is more elaborate but built on the same grounds.
SimpleBitVector32 is mutable, I suspect, for the same reasons that BitVector32 is mutable. In my opinion, the immutable guideline is just that, a guideline; however, one should have a really good reason for breaking it.
Consider, also, the Dictionary<TKey, TValue> - I go into some extended details here. The dictionary's Entry struct is mutable - you can change TValue at any time. But, Entry logically represents a value.
Mutability must make sense. I agree with @JoeWhite: somebody wanted something that acted kind of like an array (while really just being bits in a 32-bit integer), and also that both BitVector structs could easily have been ... immutable.
But, as a blanket statement, I disagree with "it was probably just some programmer's personal preference in coding style" and lean more toward "there was never [nor is there] any compelling reason to change it." Simply know and understand the responsibility of using a mutable struct.
Edit
For the record, I do heartily agree that you should always try to make a struct immutable. If you find that requirements dictate member mutability, revisit the design decision and get peers involved.
Update
I was not initially confident in my assessment of performance when considering a mutable value type vs. an immutable one. However, as @David points out, Eric Lippert writes this:
There are times when you need to wring every last bit of performance out of a system. And in those scenarios, you sometimes have to make a tradeoff between code that is clean, pure, robust, understandable, predictable, modifiable and code that is none of the above but blazingly fast.
I bolded pure because a mutable struct does not fit the pure ideal that a struct should be immutable. There are side-effects of writing a mutable struct: understandability and predictability are compromised, as Eric goes on to explain:
Mutable value types ... behave in a manner that many people find deeply counterintuitive, and thereby make it easy to write buggy code (or correct code that is easily turned into buggy code by accident.) But yes, they are real fast.
The point Eric is making is that you, as the designer and/or developer need to make a conscious and informed decision. How do you become informed? Eric explains that also:
I would consider coding up two benchmark solutions -- one using mutable structs, one using immutable structs -- and run some realistic user-scenario-focused benchmarks. But here's the thing: do not pick the faster one. Instead, decide BEFORE you run the benchmark how slow is unacceptably slow.
We know that altering a value type is faster than creating a new value type; but considering correctness:
If both solutions are acceptable, choose the one that is clean, correct and fast enough.
The key is being fast enough to offset the side effects of choosing mutable over immutable. Only you can determine that.
Using a struct for a 32- or 64-bit vector as shown here is reasonable, with a few caveats:
I would recommend using an Interlocked.CompareExchange loop when performing any updates to the structure, rather than just using the ordinary Boolean operators directly. If one thread tries to write bit 3 while another tries to write bit 8, neither operation should interfere with the other beyond delaying it a little bit. Use of an Interlocked.CompareExchange loop will avoid the possibility of errant behavior (thread 1 reads value, thread 2 reads old value, thread 1 writes new value, thread 2 writes value computed based on old value and undoes thread 1's change) without needing any other type of locking.
Structure members, other than property setters, which modify "this" should be avoided. It's better to use a static method which accepts the structure as a reference parameter. Invoking a structure member which modifies "this" is generally identical to calling a static method which accepts the member as a reference parameter, both from a semantic and performance standpoint, but there's one key difference: If one tries to pass a read-only structure by reference to a static method, one will get a compiler error. By contrast, if one invokes on a read-only structure a method which modifies "this", there won't be any compiler error but the intended modification won't happen. Since even mutable structures can get treated as read-only in certain contexts, it's far better to get a compiler error when this happens than to have code which will compile but won't work.
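Combining both caveats, a minimal sketch of what that might look like (the BitVectorOps and Set names are mine): a static method that takes the storage by ref, so read-only fields are rejected at compile time, and that commits the update with an Interlocked.CompareExchange loop:
using System.Threading;

internal static class BitVectorOps
{
    internal static void Set(ref int data, int bit)
    {
        int oldValue, newValue;
        do
        {
            // Re-read, compute the new value, and only commit it if no other
            // thread has changed 'data' in the meantime.
            oldValue = Volatile.Read(ref data);
            newValue = oldValue | bit;
        }
        while (Interlocked.CompareExchange(ref data, newValue, oldValue) != oldValue);
    }
}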
Eric Lippert likes to rail on about how mutable structures are evil, but it's important to recognize his relation to them: he's one of the people on the C# team who is charged with making the language support features like closures and iterators. Because of some design decisions early in the creation of .NET, properly supporting value-type semantics is difficult in some contexts; his job would be a lot easier if there weren't any mutable value types. I don't begrudge Eric for his point of view, but it's important to note that some principles which may be important in framework and language design are not so applicable to application design.
If I understand correctly, you cannot make a serializable immutable struct simply by using SerializableAttribute. This is because during deserialization, the serializer instantiates a default instance of the struct, then sets all the fields following instantiation. If they are readonly, deserialization will fail.
Thus, the struct had to be mutable, else a complex serialization system would have been necessary.

C# to C++ 'Gotchas'

I have been developing a project that I absolutely must develop part-way in C++. I need to develop a wrapper and expose some C++ functionality to my C# app. I have been a C# engineer since the near-beginning of .NET, and have had very little experience in C++. It still looks very foreign to me when attempting to understand the syntax.
Is there anything that is going to knock me off my feet that would prevent me from just picking up C++ and going for it?
C++ has so many gotchas that I can't enumerate them all. Do a search for "C# vs C++". A few basic things to know:
In C++:
A struct and a class are basically the same thing (default visibility for a struct is public; it's private for a class).
Both struct and class can be created either on the heap or the stack.
You have to manage the heap yourself. If you create something with "new", you have to delete it manually at some point.
If performance isn't an issue and you have very little data to move around, you can avoid the memory management issue by having everything on the stack and using references (& operator).
Learn to deal with .h and .cpp files. Unresolved externals can be your worst nightmare.
You shouldn't call a virtual method from a constructor. The compiler will never tell you, so I'm telling you now.
A switch case doesn't enforce "break" and falls through by default.
There is no such thing as an interface. Instead, you have classes with pure virtual methods.
C++ aficionados are dangerous people living in cave and surviving on the fresh blood of C#/java programmers. Talk with them about their favorite language carefully.
Garbage collection!
Remember that every time you new an object, you must be responsible for calling delete.
There are a lot of differences, but the biggest one I can think of that programmers coming from Java/C# always get wrong, and which they never realize they've got wrong, is C++'s value semantics.
In C#, you're used to using new any time you wish to create an object. And whenever we talk about a class instance, we really mean "a reference to the class instance". Foo x = y doesn't copy the object y, it simply creates another reference to whatever object y references.
In C++, there's a clear distinction between local objects, allocated without new (Foo f or Foo f(x, y)), and dynamically allocated ones (Foo* f = new Foo() or Foo* f = new Foo(x, y)). And in C# terms, everything is a value type. Foo x = y actually creates a copy of the Foo object itself.
If you want reference semantics, you can use pointers or references: Foo& x = y creates a reference to the object y. Foo* x = &y creates a pointer to the address at which y is located. And copying a pointer does just that: it creates another pointer, which points to whatever the original pointer pointed to. So this is similar to C#'s reference semantics.
Local objects have automatic storage duration -- that is, a local object is automatically destroyed when it goes out of scope. If it is a class member, then it is destroyed when the owning object is destroyed. If it is a local variable inside a function, it is destroyed when execution leaves the scope in which it was declared.
Dynamically allocated objects are not destroyed until you call delete.
So far, you're probably with me. Newcomers to C++ are taught this pretty soon.
The tricky part is in what this means, how it affects your programming style:
In C++, the default should be to create local objects. Don't allocate with new unless you absolutely have to.
If you do need dynamically allocated data, make it the responsibility of a class. A (very) simplified example:
class IntArrayWrapper {
public:
    // allocate memory in the constructor, and set arr to point to it
    explicit IntArrayWrapper(int size) : arr(new int[size]) {}
    // deallocate memory in the destructor
    ~IntArrayWrapper() { delete[] arr; }

    int* arr; // hold the pointer to the dynamically allocated array
};
This class can now be created as a local variable, and it will internally do the necessary dynamic allocations. And when it goes out of scope, it'll automatically delete the allocated array again.
So say we needed an array of x integers, instead of doing this:
void foo(int x){
    int* arr = new int[x];
    ... use the array ...
    delete[] arr; // if the middle of the function throws an exception, delete will never be called, so technically, we should add a try/catch as well, and also call delete there. Messy and error-prone.
}
you can do this:
void foo(int x){
    IntArrayWrapper arr(x);
    ... use the array ...
    // no delete necessary
}
Of course, this use of local variables instead of pointers or references means that objects are copied around quite a bit:
Bar Foo(){
    Bar bar;
    ... do something with bar ...
    return bar;
}
in the above, what we return is a copy of the bar object. We could return a pointer or a reference, but as the instance created inside the function goes out of scope and is destroyed the moment the function returns, we couldn't point to that. We could use new to allocate an instance that outlives the function, and return a pointer to that -- and then we get all the memory management headaches of figuring out whose responsibility it is to delete the object, and when that should happen. That's not a good idea.
Instead, the Bar class should simply be designed so that copying it does what we need. Perhaps it should internally call new to allocate an object that can live as long as we need it to. We could then make copying or assignment "steal" that pointer. Or we could implement some kind of reference-counting scheme where copying the object simply increments a reference counter and copies the pointer -- which should then be deleted not when the individual object is destroyed, but when the last object is destroyed and the reference counter reaches 0.
But often, we can just perform a deep copy, and clone the object in its entirety. If the object includes dynamically allocated memory, we allocate more memory for the copy.
It may sound expensive, but the C++ compiler is good at eliminating unnecessary copies (and is in fact in most cases allowed to eliminate copy operations even if they have side effects).
If you want to avoid copying even more, and you're prepared to put up with a little more clunky usage, you can enable "move semantics" in your classes as well as (or instead of) "copy semantics". It's worth getting into this habit because (a) some objects can't easily be copied, but they can be moved (e.g. a Socket class), (b) it's a pattern established in the standard library and (c) it's getting language support in the next version.
With move semantics, you can use objects as a kind of "transferable" container. It's the contents that move. In the current approach, it's done by calling swap, which swaps the contents of two objects of the same type. When an object goes out of scope, it is destructed, but if you swap its contents into a reference parameter first, the contents escape being destroyed when the scope ends. Therefore, you don't necessarily need to go all the way and use reference counted smart pointers just to allow complex objects to be returned from functions. The clunkiness comes from the fact that you can't really return them - you have to swap them into a reference parameter (somewhat similar to a ref parameter in C#). But the language support in the next version of C++ will address that.
So the biggest C# to C++ gotcha I can think of: don't make pointers the default. Use value semantics, and instead tailor your classes to behave the way you want when they're copied, created and destroyed.
A few months ago, I attempted to write a series of blog posts for people in your situation:
Part 1
Part 2
Part 3
I'm not 100% happy with how they turned out, but you may still find them useful.
And when you feel that you're never going to get a grip on pointers, this post may help.
No run-time checks
One C++ pitfall is the behaviour when you try to do something that might be invalid, but which can only be checked at runtime - for example, dereferencing a pointer that could be null, or accessing an array with an index that might be out of range.
The C# philosophy emphasises correctness; all behaviour should be well-defined and, in cases like this, it performs a run-time check of the preconditions and throws well-defined exceptions if they fail.
The C++ philosophy emphasises efficiency, and the idea that you shouldn't pay for anything you might not need. In cases like this, nothing will be checked for you, so you must either check the preconditions yourself or design your logic so that they must be true. Otherwise, the code will have undefined behaviour, which means it might (more or less) do what you want, it might crash, or it might corrupt completely unrelated data and cause errors that are horrendously difficult to track down.
Just to throw in some others that haven't been mentioned yet by other answers:
const: C# has a limited idea of const. In C++ 'const-correctness' is important. Methods that don't modify their reference parameters should take const-references, eg.
void func(const MyClass& x)
{
    // x cannot be modified, and you can't call non-const methods on x
}
Member functions that don't modify the object should be marked const, i.e.
int MyClass::GetSomething() const // <-- here
{
    // Doesn't modify the instance of the class
    return some_member;
}
This might seem unnecessary, but is actually very useful (see the next point on temporaries), and sometimes required, since libraries like the STL are fully const-correct, and you can't cast const things to non-const things (don't use const_cast! Ever!). It's also useful for callers to know something won't be changed. It is best to think about it in this way: if you omit const, you are saying the object will be modified.
Temporary objects: As another answer mentioned, C++ is much more about value-semantics. Temporary objects can be created and destroyed in expressions, for example:
std::string str = std::string("hello") + " world" + "!";
Here, the first + creates a temporary string with "hello world". The second + combines the temporary with "!", giving a temporary containing "hello world!", which is then copied to str. After the statement is complete, the temporaries are immediately destroyed. To further complicate things, C++0x adds rvalue references to solve this, but that's way out of the scope of this answer!
You can also bind temporary objects to const references (another useful part of const). Consider the previous function again:
void func(const MyClass& x)
This can be called explicitly with a temporary MyClass:
func(MyClass()); // create temporary MyClass - NOT the same as 'new MyClass()'!
A MyClass instance is created on the stack, func accesses it, and then the temporary MyClass is destroyed automatically after func returns. This is convenient and also usually very fast, since the heap is not involved. Note that 'new' returns a pointer - not a reference - and requires a corresponding 'delete'. You can also directly assign temporaries to const references:
const int& blah = 5; // 5 is a temporary
const MyClass& myClass = MyClass(); // creating temporary MyClass instance
// The temporary MyClass is destroyed when the const reference goes out of scope
Const references and temporaries are frequent in good C++ style, and the way these work is very different to C#.
RAII, exception safety, and deterministic destructors. This is actually a useful feature of C++, possibly even an advantage over C#, and it's worth reading up on since it's also good C++ style. I won't cover it here.
Finally, I'll just throw in that this is a pointer, not a reference :)
The traditional stumbling blocks for people coming to C++ from C# or Java are memory management and polymorphic behavior:
While objects always live on the heap and are garbage collected in C#/Java, you can have objects in static storage, stack or the heap ('free store' in standard speak) in C++. You have to cleanup the stuff you allocate from the heap (new/delete). An invaluable technique for dealing with that is RAII.
Inheritance/polymorphism work only through pointer or reference in C++.
There are many others, but these will probably get you first.
Virtual destructors.
Header files! You'll find yourself asking, "so why do I need to write method declarations twice every time?"
Pointers and Memory Allocation
...I'm a C# guy too and I'm still trying to wrap my head around proper memory practices in C/C++.
There is a brief overview of Managed C++ here, an article about writing an unmanaged wrapper using Managed C++ here, and another article about mixing unmanaged with managed C++ code here.
Using Managed C++ would IMHO make it easier to use as a bridge to the C# world and vice versa.
Hope this helps,
Best regards,
Tom.
The biggest difference is C#'s reference semantics (for most types) vs. C++'s value semantics. This means that objects are copied far more often than they are in C#, so it's important to ensure that objects are copied correctly. This means implementing a copy constructor and operator= for any class that has a destructor.
Raw memory twiddling. Unions, memsets, and other direct memory writes. Anytime someone writes to memory as a sequence of bytes (as opposed to as objects), you lose much of the ability to reason about the code.
Linking
Linking with external libraries is not as forgiving as it is in .Net, $DEITY help you if you mix something compiled with different flavors of the same msvcrt (debug, multithread, unicode...)
Strings
And you'll have to deal with Unicode vs Ansi strings, these are not exactly the same.
Have fun :)
The following isn't meant to dissuade in any way :D
C++ is a minefield of gotchas; it's relatively tame if you don't use templates and the STL -- and just use object orientation -- but even then it is a monster. In that case, object-based programming (rather than object-oriented programming) makes it even tamer; often this form of C++ is enforced in certain projects (i.e., don't use any features that have even a chance of being naively used).
However, you should learn all those things, as it's a very powerful language if you do manage to traverse the minefield. If you want to learn about the gotchas, you had better get the books by Herb Sutter, Scott Meyers, and Bjarne Stroustrup. Also, systematically going over the C++ FAQ Lite will help you realize that it indeed does require 10 or so books to turn into a good C++ programmer.
