Marshal.PtrToStringUni() vs new String()? - c#

Suppose I have a pointer of type char* to a Unicode string, and I know the length:
char* _unmanagedStr;
int _unmanagedStrLength;
and I have two ways to convert it to a .NET string:
Marshal.PtrToStringUni((IntPtr)_unmanagedStr, _unmanagedStrLength);
and
new string(_unmanagedStr, 0, _unmanagedStrLength);
In my tests, both calls give me exactly the same result, but new string() is about 1.8x faster than Marshal.PtrToStringUni().
Why is that performance difference?
Is there any other functional difference between the two?

Judging from the available source code (Rotor), the System.String(Char*) constructor uses a heavily optimized code path through CtorCharPtr(); it allocates the string with FastAllocateString(). Marshal.PtrToStringUni() follows an entirely different code path: it is written in C++ and appears to copy the string twice, without the benefit of a "fast allocator".
Clearly, not the same programmer worked on this. Almost certainly not even the same team since the code fits a different programming model. The closest manager in common was probably four levels up.
Not sure how that would be helpful, use the fast one. Mishaps would generate a similar kind of exception on Windows.

The second is not CLS compliant, requires unsafe code, and might have undefined behavior, which is probably why it's faster. There's also a need to keep the memory behind the pointer pinned (if it comes from a managed object) or the garbage collector might relocate it, which leads to more cluttered code. Unless you've determined that this is a bottleneck for your application, you'll probably want to use the PtrToStringUni function.
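For reference, here is a minimal sketch of the two conversions side by side; the class and method names are just illustrative, and both assume the native buffer outlives the call:

using System;
using System.Runtime.InteropServices;

static class NativeStringConversion
{
    // Both helpers assume the caller got the pointer and length from native code
    // and that the native buffer stays alive for the duration of the call.
    public static unsafe string ViaMarshal(char* p, int length)
        => Marshal.PtrToStringUni((IntPtr)p, length);

    public static unsafe string ViaConstructor(char* p, int length)
        => new string(p, 0, length);   // requires compiling with unsafe enabled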

Related

Why cannot marshal struct with auto layout

I encountered an odd behaviour when marshalling a struct with auto layout kind.
For example: let's take a simple code:
[StructLayout(LayoutKind.Auto)]
public struct StructAutoLayout
{
    byte B1;
    long Long1;
    byte B2;
    long Long2;
    byte B3;
}

public static void Main()
{
    Console.WriteLine("Sizeof struct is {0}", Marshal.SizeOf<StructAutoLayout>());
}
it throws an exception:
Unhandled Exception: System.ArgumentException: Type
'StructAutoLayout' cannot be marshaled as an unmanaged
structure; no meaningful size or offset can be computed.
So does that mean the compiler doesn't know the struct size at compile time? I was sure that this attribute reorders the struct fields and then compiles it, but apparently it doesn't.
It doesn't make any sense. Marshalling is used for interop - and when doing interop, the two sides have to agree exactly on the structure of the struct.
When you use auto layout, you defer the decision about the structure layout to the compiler. Even different versions of the same compiler can result in different layouts - that's a problem. For example, one compiler might use this:
public struct StructAutoLayout
{
    byte B1;
    long Long1;
    byte B2;
    long Long2;
    byte B3;
}
while another might do something like this:
public struct StructAutoLayout
{
    byte B1;
    byte B2;
    byte B3;
    byte _padding;
    long Long1;
    long Long2;
}
When dealing with native/unmanaged code, there's pretty much no meta-data involved - just pointers and values. The other side has no way of knowing how the structure is actually laid out, it expects a fixed layout you both agreed upon in advance.
.NET has a tendency to make you spoiled - almost everything just works. This is not the case when interoping with something like C++ - if you just guess your way around, you'll most likely end up with a solution that usually works, but once in a while crashes your whole application. When doing anything with unmanaged / native code, make sure you understand perfectly what you're doing - unmanaged interop is just fragile that way.
Now, the Marshal class is designed specifically for unmanaged interop. If you read the documentation for Marshal.SizeOf, it specifically says
Returns the size of an unmanaged type in bytes.
And of course,
You can use this method when you do not have a structure. The layout must be sequential or explicit.
The size returned is the size of the unmanaged type. The unmanaged and managed sizes of an object can differ. For character types, the size is affected by the CharSet value applied to that class.
If the type can't possibly be marshalled, what should Marshal.SizeOf return? That doesn't even make sense :)
Asking for the size of a type or an instance doesn't make any sense in a managed environment. "Real size in memory" is an implementation detail as far as you are concerned - it's not a part of the contract, and it's not something to rely on. If the runtime / compiler wanted, it could make every byte 77 bytes long, and it wouldn't break any contract whatsoever as long as it only stores values from 0 to 255 exactly.
If you used a struct with an explicit (or sequential) layout instead, you would have a definite contract for how the unmanaged type is laid out, and Marshal.SizeOf would work. However, even then, it will only return the size of the unmanaged type, not of the managed one - that can still differ. And again, both can be different on different systems (for example, IntPtr will be four bytes on a 32-bit system and eight bytes on a 64-bit system when running as a 64-bit application).
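As a sketch of that fix, here are the same fields from the question with a sequential layout, which gives Marshal.SizeOf something meaningful to compute (the exact number depends on packing and platform; it is typically 40 here with default packing):

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public struct StructSequentialLayout
{
    public byte B1;
    public long Long1;
    public byte B2;
    public long Long2;
    public byte B3;
}

public static class SizeDemo
{
    public static void Main()
    {
        // Prints the size of the *unmanaged* representation of the struct.
        Console.WriteLine("Sizeof struct is {0}", Marshal.SizeOf<StructSequentialLayout>());
    }
}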
Another important point is that there are multiple levels of "compilation" in a .NET application. The first level, using a C# compiler, is only the tip of the iceberg - and it's not the part that handles reordering fields in auto-layout structs. It simply marks the struct as auto-layout, and that's it. The actual layouting is handled by the CLI when you run the application (the specification is not clear on whether the JIT compiler handles that, but I would assume so). But that has nothing to do with Marshal.SizeOf or even sizeof - both of those are still handled at runtime. Forget everything you know from C++ - C# (and even C++/CLI) is an entirely different beast.
If you need to profile managed memory, use a memory profiler (like CLRProfiler). But do understand that you're still profiling memory in a very specific environment - different systems or .NET versions can give you different results. And in fact, there's nothing saying two instances of the same structure must be the same size.

Is it safe to keep C++ pointers in C#?

I'm currently working on some C#/C++ code which makes use of P/Invoke. On the C++ side there is a std::vector full of pointers, each identified by index from the C# code; for example, a function declaration would look like this:
void SetName(char* name, int idx)
But now I'm thinking: since I'm working with pointers, couldn't I send the pointer address itself to C#? Then in code I could do something like this:
void SetName(char* name, int ptr)
{
    ((TypeName*)ptr)->name = name;
}
Obviously that's a quick version of what I'm getting at (and probably won't compile).
Would the pointer address be guaranteed to stay constant in C++ such that I can safely store its address in C# or would this be too unstable or dangerous for some reason?
In C#, you don't need to use a pointer here, you can just use a plain C# string.
[DllImport(...)]
extern void SetName(string name, int id);
This works because the default behavior of strings in p/invoke is to use MarshalAs(UnmanagedType.LPStr), which converts to a C-style char*. You can mark each argument in the C# declaration explicitly if it requires some other way of being marshalled, e.g. [MarshalAs(UnmanagedType.LPWStr)] for an argument that uses a 2-byte-per-character string.
The only reason to use pointers is if you need to retain access to the data pointed to after you've called the function. Even then, you can use out parameters most of the time.
You can p/invoke basically anything without requiring pointers at all (and thus without requiring unsafe code, which requires privileged execution in some environments).
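As a hedged sketch of that idea - the DLL name and the GetCount function here are hypothetical, only the attributes are the point:

using System.Runtime.InteropServices;

static class NativeMethods
{
    // A UTF-16 (2-byte per character) string parameter, marshalled explicitly:
    [DllImport("native.dll", CharSet = CharSet.Unicode)]
    public static extern void SetName([MarshalAs(UnmanagedType.LPWStr)] string name, int idx);

    // An out parameter lets the native side hand data back without any pointers in C#:
    [DllImport("native.dll")]
    public static extern void GetCount(out int count);
}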
Yes, no problem. Native memory allocations never move so storing the pointer in an IntPtr on the C# side is fine. You need some kind of pinvoked function that returns this pointer, then
[DllImport("something.dll", CharSet = CharSet.Ansi)]
static extern void SetName(IntPtr vector, string name, int index);
Which intentionally lies about this C++ function:
void SetName(std::vector<std::string>* vect, const char* name, int index) {
    std::string copy = name;
    (*vect)[index] = copy;
}
Note that the C++ code makes a copy of the string - you have to copy it. The passed name argument points to a buffer allocated by the pinvoke marshaller and is only valid for the duration of the function body, so your original code cannot work. If you intend to return pointers to vector<> elements, be very careful: a vector re-allocates its internal array when you add elements, so such a returned pointer becomes invalid and you'll corrupt the heap when you use it later. The exact same thing happens with a C# List<>, but without the risk of dangling pointers.
I think it's stable only as long as you control the C++ code and are perfectly aware of what it does, and the other developers working on the same code know about that danger too.
So in my opinion it's not a very secure architecture, and I would avoid it as much as I can.
The C# GC moves things, but the C++ heap does not move anything- a pointer to an allocated object is guaranteed to remain valid until you delete it. The best architecture for this situation is just to send the pointer to C# as an IntPtr and then take it back in C++.
It's certainly a vastly better idea than the BAD, HORRIFIC integer cast you've got going there - an int isn't even big enough to hold a pointer on a 64-bit system.
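A sketch of what the IntPtr-based approach might look like on the C# side; the DLL name and exported functions here are hypothetical:

using System;
using System.Runtime.InteropServices;

static class NativeApi
{
    // Hypothetical exports: the native side creates the object and hands back
    // an opaque pointer; C# only stores and passes it around as an IntPtr.
    [DllImport("engine.dll")]
    public static extern IntPtr CreateNameTable();

    [DllImport("engine.dll", CharSet = CharSet.Ansi)]
    public static extern void SetName(IntPtr table, string name, int index);

    [DllImport("engine.dll")]
    public static extern void DestroyNameTable(IntPtr table);
}

class Example
{
    static void Main()
    {
        IntPtr table = NativeApi.CreateNameTable();
        try
        {
            NativeApi.SetName(table, "Alice", 0);
        }
        finally
        {
            NativeApi.DestroyNameTable(table);  // the C# side never dereferences the pointer
        }
    }
}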

True Unsafe Code Performance

I understand unsafe code is more appropriate for accessing things like the Windows API and doing unsafe type casts than for writing more performant code, but I would like to ask whether you have ever noticed any significant performance improvement in real-world applications by using it compared to safe C# code.
Some Performance Measurements
The performance benefits are not as great as you might think.
I did some performance measurements of normal managed array access versus unsafe pointers in C#.
Results from a build run outside of Visual Studio 2010, .NET 4, using an Any CPU | Release build on the following PC specification: x64-based PC, one quad-core processor, Intel64 Family 6 Model 23 Stepping 10 GenuineIntel, ~2833 MHz.
Linear array access
00:00:07.1053664 for Normal
00:00:07.1197401 for Unsafe *(p + i)
Linear array access - with pointer increment
00:00:07.1174493 for Normal
00:00:10.0015947 for Unsafe (*p++)
Random array access
00:00:42.5559436 for Normal
00:00:40.5632554 for Unsafe
Random array access using Parallel.For(), with 4 processors
00:00:10.6896303 for Normal
00:00:10.1858376 for Unsafe
Note that the unsafe *(p++) idiom actually ran slower. My guess is that this broke a compiler optimization that was combining the loop variable and the (compiler-generated) pointer access in the safe version.
Source code available on github.
As was stated in other posts, you can use unsafe code in very specialised contexts to get a significant performance improvement. One of those scenarios is iterating over arrays of value types. Using unsafe pointer arithmetic is much faster than the usual pattern of a for-loop with an indexer.
struct Foo
{
    public int a;   // field initializers aren't valid here; new Foo[n] zero-fills anyway
    public int b;
    public int c;
}

Foo[] fooArray = new Foo[100000];

fixed (Foo* pArray = fooArray) // pArray points to the first element in the array...
{
    Foo* foo = pArray;         // the fixed pointer itself is read-only, so work on a copy
    var remaining = fooArray.Length;
    while (remaining-- > 0)
    {
        foo->c = foo->a + foo->b;
        foo++; // foo now points to the next element in the array...
    }
}
The main benefit here is that we've cut out array index checking entirely.
While very performant, this kind of code is hard to maintain, can be quite dangerous (unsafe), and breaks some fundamental guidelines (mutable struct). But there are certainly scenarios where it is appropriate.
A good example is image manipulation. Modifying the pixels by using a pointer to their bytes (which requires unsafe code) is quite a bit faster.
Example: http://www.gutgames.com/post/Using-Unsafe-Code-for-Faster-Image-Manipulation.aspx
That being said, for most scenarios, the difference wouldn't be as noticeable. So before you use unsafe code, profile your application to see where the performance bottlenecks are and test whether unsafe code is really the solution to make it faster.
Well, I would suggest reading this blog post: MSDN blogs: Array Bounds Check Elimination in the CLR.
It clarifies how bounds checks are done in C#. Moreover, Thomas Bratt's tests seem useless to me (looking at the code), since the JIT removes the bounds checks in his 'safe' loops anyway.
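As a rough illustration of the kind of pattern discussed there - the JIT can usually elide the bounds check when the loop is bounded by the array's own Length, but typically not when the bound comes from elsewhere:

// The JIT can usually prove i is within range here and skip the per-access bounds check:
static long SumElided(int[] data)
{
    long sum = 0;
    for (int i = 0; i < data.Length; i++)
        sum += data[i];
    return sum;
}

// Looping against a separately supplied length generally defeats that optimization:
static long SumChecked(int[] data, int length)
{
    long sum = 0;
    for (int i = 0; i < length; i++)
        sum += data[i];   // a bounds check likely remains on each access
    return sum;
}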
I am using unsafe code for video manipulation.
In such code you want it to run as fast as possible, without internal checks on values and so on. Without unsafe code my code would not be able to keep up with the video stream at 30 or 60 fps (depending on the camera used).
Because of that speed, it's widely used by people who write graphics code.
To all who are looking at these answers, I would like to point out that even though the answers are excellent, a lot has changed since they were posted.
Please note that .NET has changed quite a bit, and you now also have access to new types such as Vector<T>, Span<T>, and ReadOnlySpan<T>, as well as hardware-specific libraries and classes like those found in System.Runtime.Intrinsics in .NET Core 3.0.
Have a look at this blog post to see how hardware-optimized loops can be used, and this blog post for how to fall back to safe methods if optimal hardware is not available.
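For example, here is a minimal sketch of such a hardware-friendly loop using Span<T> and Vector<T> (assumes .NET Core 3.0 or later; this is not the code from those blog posts):

using System;
using System.Numerics;

static class SimdSketch
{
    // Sums an array using Vector<int> where possible, then handles the tail scalar-style.
    public static int Sum(ReadOnlySpan<int> data)
    {
        var acc = Vector<int>.Zero;
        int width = Vector<int>.Count;
        int i = 0;

        for (; i <= data.Length - width; i += width)
            acc += new Vector<int>(data.Slice(i, width));   // one vector-wide add per iteration

        int sum = 0;
        for (int j = 0; j < width; j++)
            sum += acc[j];                                   // horizontal sum of the accumulator

        for (; i < data.Length; i++)                         // scalar tail
            sum += data[i];

        return sum;
    }
}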

How do you explain C++ pointers to a C#/Java developer? [closed]

I am a C#/Java developer trying to learn C++. As I try to learn the concept of pointers, I am struck with the thought that I must have dealt with this concept before. How can pointers be explained using only concepts that are familiar to a .NET or Java developer? Have I really never dealt with this, is it just hidden to me, or do I use it all the time without calling it that?
Java objects in C++
A Java object is the equivalent of a C++ shared pointer.
A C++ pointer is like a Java object without the garbage collection built in.
C++ objects.
C++ has three ways of allocating objects:
Static Storage Duration objects.
These are created at startup (before main) and die after main exits.
There are some technical caveats to that but that is the basics.
Automatic Storage Duration objects.
These are created when declared and destroyed when they go out of scope.
I believe these are like C# structs
Dynamic Storage Duration objects
These are created via new and the closest to a C#/Java object (AKA pointers)
Technically, pointers need to be destroyed manually via delete. But this is considered bad practice, and under normal circumstances they are put inside Automatic Storage Duration objects (usually called smart pointers) that control their lifespan. When the smart pointer goes out of scope it is destroyed and its destructor can call delete on the pointer. Smart pointers can be thought of as fine-grained garbage collectors.
The closest to Java is the shared_ptr, this is a smart pointer that keeps a count of the number of users of the pointer and deletes it when nobody is using it.
You are "using pointers" all the time in C#, it's just hidden from you.
The best way I reckon to approach the problem is to think about the way a computer works. Forget all of the fancy stuff of .NET: you have the memory, which just holds byte values, and the processor, which just does things to these byte values.
The value of a given variable is stored in memory, so is associated with a memory address. Rather than having to use the memory address all the time, the compiler lets you read from it and write to it using a name.
Furthermore, you can choose to interpret a value as a memory address at which you wish to find another value. This is a pointer.
For example, let's say our memory contains the following values:
Address  [0] [1] [2] [3] [4] [5] [6] [7]
Data      5   3   1   8   2   7   9   4
Let's define a variable, x, which the compiler has chosen to put at address 2. It can be seen that the value of x is 1.
Let's now define a pointer, p which the compiler has chosen to put at address 7. The value of p is 4. The value pointed to by p is the value at address 4, which is the value 2. Getting at the value is called dereferencing.
An important concept to note is that there is no such thing as a type as far as memory is concerned: there are just byte values. You can choose to interpret these byte values however you like. For example, dereferencing a char pointer will just get 1 byte representing an ASCII code, but dereferencing an int pointer may get 4 bytes making up a 32 bit value.
Looking at another example, you can create a string in C with the following code:
char *str = "hello, world!";
What that does is say the following:
Put aside some bytes in our stack frame for a variable, which we'll call str.
This variable will hold a memory address, which we wish to interpret as a character.
Copy the address of the first character of the string into the variable.
(The string "hello, world!" will be stored in the executable file and hence will be loaded into memory when the program loads)
If you were to look at the value of str you'd get an integer value which represents an address of the first character of the string. However, if we dereference the pointer (that is, look at what it's pointing to) we'll get the letter 'h'.
If you increment the pointer, str++;, it will now point to the next character. Note that pointer arithmetic is scaled. That means that when you do arithmetic on a pointer, the effect is multiplied by the size of the type it thinks it's pointing at. So assuming int is 4 bytes wide on your system, the following code will actually add 4 to the pointer:
int *ptr = get_me_an_int_ptr();
ptr++;
If you end up going past the end of the string, there's no telling what you'll be pointing at; but your program will still dutifully attempt to interpret it as a character, even if the value was actually supposed to represent an integer for example. You may well be trying to access memory which is not allocated to your program however, and your program will be killed by the operating system.
A final useful tip: arrays and pointer arithmetic are the same thing, it's just syntactic sugar. If you have a variable, char *array, then
array[5]
is completely equivalent to
*(array + 5)
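The same equivalence can be seen from C# using unsafe code, which may help connect it to familiar syntax (a sketch, not something you would normally write):

using System;

class IndexVsPointer
{
    static unsafe void Main()
    {
        char[] array = { 'p', 'o', 'i', 'n', 't', 'e', 'r' };
        fixed (char* p = array)
        {
            // Indexing and pointer arithmetic reach the same element:
            Console.WriteLine(p[5]);      // 'e'
            Console.WriteLine(*(p + 5));  // 'e'
        }
    }
}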
A pointer is the address of an object.
Well, technically a pointer value is the address of an object. A pointer object is an object (variable, call it what you prefer) capable of storing a pointer value, just as an int object is an object capable of storing an integer value.
["Object" in C++ includes instances of class types, and also of built-in types (and arrays, etc). An int variable is an object in C++, if you don't like that then tough luck, because you have to live with it ;-)]
Pointers also have static type, telling the programmer and the compiler what type of object it's the address of.
What's an address? It's one of those 0x-things with numbers and letters in it that you might sometimes have seen in a debugger. For most architectures we can consider memory (RAM, to over-simplify) as a big sequence of bytes. An object is stored in a region of memory. The address of an object is the index of the first byte occupied by that object. So if you have the address, the hardware can get at whatever's stored in the object.
The consequences of using pointers are in some ways the same as the consequences of using references in Java and C# - you're referring to an object indirectly. So you can copy a pointer value around between function calls without having to copy the whole object. You can change an object via one pointer, and other bits of code with pointers to the same object will see the changes. Sharing immutable objects can save memory compared with lots of different objects all having their own copy of the same data that they all need.
C++ also has something it calls "references", which share these properties to do with indirection but are not the same as references in Java. Nor are they the same as pointers in C++ (that's another question).
"I am struck with the thought that I must have dealt with this concept before"
Not necessarily. Languages may be functionally equivalent, in the sense that they all compute the same functions as a Turing machine can compute, but that doesn't mean that every worthwhile concept in programming is explicitly present in every language.
If you wanted to simulate the C memory model in Java or C#, though, I suppose you'd create a very large array of bytes. Pointers would be indexes in the array. Loading an int from a pointer would involve taking 4 bytes starting at that index, and multiplying them by successive powers of 256 to get the total (as happens when you deserialize an int from a bytestream in Java). If that sounds like a ridiculous thing to do, then it's because you haven't dealt with the concept before, but nevertheless it's what your hardware has been doing all along in response to your Java and C# code[*]. If you didn't notice it, then it's because those languages did a good job of creating other abstractions for you to use instead.
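A sketch of that simulation in C#, purely for illustration - the byte array is the "memory" and an index plays the role of a pointer:

using System;

static class FakeMemory
{
    // "Memory" is just a big byte array; a "pointer" is an index into it.
    static readonly byte[] Memory = new byte[1024];

    // Load a 32-bit int from the 4 bytes starting at the given "address"
    // (little-endian: successive bytes weighted by successive powers of 256).
    static int LoadInt(int address) =>
        Memory[address]
        | (Memory[address + 1] << 8)
        | (Memory[address + 2] << 16)
        | (Memory[address + 3] << 24);

    static void StoreInt(int address, int value)
    {
        Memory[address]     = (byte)value;
        Memory[address + 1] = (byte)(value >> 8);
        Memory[address + 2] = (byte)(value >> 16);
        Memory[address + 3] = (byte)(value >> 24);
    }

    static void Main()
    {
        StoreInt(100, 42);
        int pointer = 100;                    // "pointer" to the int we just stored
        Console.WriteLine(LoadInt(pointer));  // 42
    }
}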
Literally the closest the Java language comes to the "address of an object" is that the default hashCode in java.lang.Object is, according to the docs, "typically implemented by converting the internal address of the object into an integer". But in Java, you can't use an object's hashcode to access the object. You certainly can't add or subtract a small number to a hashcode in order to access memory within or in the vicinity of the original object. You can't make mistakes in which you think that your pointer refers to the object you intend it to, but actually it refers to some completely unrelated memory location whose value you're about to scribble all over. In C++ you can do all those things.
[*] well, not multiplying and adding 4 bytes to get an int, not even shifting and ORing, but "loading" an int from 4 bytes of memory.
References in C# act the same way as pointers in C++, without all the messy syntax.
Consider the following C# code:
public class A
{
    public int x;
}

public void AnotherFunc(A a)
{
    a.x = 2;
}

public void SomeFunc()
{
    A a = new A();
    a.x = 1;
    AnotherFunc(a);
    // a.x is now 2
}
Since classes are references types, we know that we are passing an existing instance of A to AnotherFunc (unlike value types, which are copied).
In C++, we use pointers to make this explicit:
class A
{
public:
    int x;
};

void AnotherFunc(A* a) // notice we are pointing to an existing instance of A
{
    a->x = 2;
}

void SomeFunc()
{
    A a;
    a.x = 1;
    AnotherFunc(&a);
    // a.x is now 2
}
"How can pointers be explained using only concepts that are familiar to a .NET or Java developer? "
I'd suggest that there are really two distinct things that need to be learnt.
The first is how to use pointers, and heap allocated memory, to solve specific problems. With an appropriate style, using shared_ptr<> for example, this can be done in a manner analogous to that of Java. A shared_ptr<> has a lot in common with a Java object handle.
Secondly, however, I would suggest that pointers in general are a fundamentally lower level concept that Java, and to a lesser extent C#, deliberately hides. To program in C++ without moving to that level will guarantee a host of problems. You need to think in terms of the underlying memory layout and think of pointers as literally pointers to specific pieces of storage.
To attempt to understand this lower level in terms of higher concepts would be an odd path to take.
Get two sheets of large format graph paper, some scissors and a friend to help you.
Each square on the sheets of paper represents one byte.
One sheet is the stack.
The other sheet is the heap. Give the heap to your friend - he is the memory manager.
You are going to pretend to be a C program and you'll need some memory. When running your program, cut out chunks from the stack and the heap to represent memory allocation.
Ready?
void main() {
int a; /* Take four bytes from the stack. */
int *b = malloc(sizeof(int)); /* Take four bytes from the heap. */
a = 1; /* Write on your first little bit of graph paper, WRITE IT! */
*b = 2; /* Get writing (on the other bit of paper) */
b = malloc(sizeof(int)); /* Take another four bytes from the heap.
Throw the first 'b' away. Do NOT give it
back to your friend */
free(b); /* Give the four bytes back to your friend */
*b = 3; /* Your friend must now kill you and bury the body */
} /* Give back the four bytes that were 'a' */
Try with some more complex programs.
Explain the difference between the stack and the heap and where objects go.
Value types such as structs (in both C++ and C#) typically go on the stack when they are local variables. Reference types (class instances) get put on the heap. A pointer (or reference) points to the memory location on the heap for that specific instance.
Reference type is the key word. Using a pointer in C++ is like using ref keyword in C#.
Managed apps make working with this stuff easy so .NET devs are spared the hassle and confusion. Glad I don't do C anymore.
The key for me was to understand the way memory works. Variables are stored in memory. The places in which you can put variables in memory are numbered. A pointer is a variable that holds this number.
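C#'s own unsafe pointers make this concrete; a minimal sketch (requires compiling with unsafe enabled):

using System;

class PointerDemo
{
    static unsafe void Main()
    {
        int x = 42;
        int* p = &x;             // p stores the numbered memory location where x lives
        Console.WriteLine(*p);   // dereference: read the value at that location -> 42
        *p = 7;                  // write through the pointer
        Console.WriteLine(x);    // 7
    }
}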
Any C# programmer that understands the semantic differences between classes and structs should be able to understand pointers. I.e., explaining in terms of value vs. reference semantics (in .NET terms) should get the point across; I wouldn't complicate things by trying to explain in terms of ref (or out).
In C#, all references to classes are roughly the equivalent to pointers in the C++ world. For value types (structs, ints, etc..) this is not the case.
C#:
void func1(string parameter)
void func2(int parameter)
C++:
void func1(string* parameter)
void func2(int parameter)
Passing a parameter using the ref keyword in C# is equivalent to passing a parameter by reference in C++.
C#:
void func1(ref string parameter)
void func2(ref int parameter)
C++:
void func1(string*& parameter)
void func2(int& parameter)
If the parameter is a class, it would be like passing a pointer by reference.

C# Unsafe/Fixed Code

Can someone give an example of a good time to actually use "unsafe" and "fixed" in C# code? I've played with it before, but never actually found a good use for it.
Consider this code...
fixed (byte* pSrc = src, pDst = dst) {
//Code that copies the bytes in a loop
}
compared to simply using...
Array.Copy(source, target, source.Length);
The second is the code found in the .NET Framework; the first is part of the code copied from the Microsoft website, http://msdn.microsoft.com/en-us/library/28k1s2k6(VS.80).aspx.
The built-in Array.Copy() is dramatically faster than using unsafe code. This might just be because the second is better written and the first is just an example, but what kinds of situations would you really even need unsafe/fixed code for? Or is this poor web developer messing with something above his head?
It's useful for interop with unmanaged code. Any pointers passed to unmanaged functions need to be fixed (aka. pinned) to prevent the garbage collector from relocating the underlying memory.
If you are using P/Invoke, then the default marshaller will pin objects for you. Sometimes it's necessary to perform custom marshalling, and sometimes it's necessary to pin an object for longer than the duration of a single P/Invoke call.
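For the "pinned longer than a single call" case, here is a sketch using GCHandle; the native RegisterBuffer function is hypothetical:

using System;
using System.Runtime.InteropServices;

class PinningExample
{
    // Hypothetical native function that keeps using the buffer after this call returns.
    [DllImport("native.dll")]
    static extern void RegisterBuffer(IntPtr buffer, int length);

    static GCHandle _handle;

    static void Register(byte[] buffer)
    {
        // Pin the array so the GC cannot move it while native code holds the address.
        _handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        RegisterBuffer(_handle.AddrOfPinnedObject(), buffer.Length);
    }

    static void Unregister()
    {
        if (_handle.IsAllocated)
            _handle.Free();   // release the pin once native code is done with the buffer
    }
}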
I've used unsafe blocks to manipulate Bitmap data. Raw pointer access is significantly faster than SetPixel/GetPixel.
unsafe
{
    BitmapData bmData = bm.LockBits(...);
    byte* bits = (byte*)bmData.Scan0.ToPointer();
    // Do stuff with bits
    bm.UnlockBits(bmData);
}
"fixed" and "unsafe" is typically used when doing interop, or when extra performance is required. Ie. String.CopyTo() uses unsafe and fixed in its implementation.
reinterpret_cast style behaviour
If you are bit-manipulating, this can be incredibly useful.
Many high-performance hashcode implementations use UInt32 for the hash value (this makes the shifts simpler). Since .NET requires Int32 for the method, you want to quickly convert the uint to an int. Since it matters not what the actual value is, only that all the bits in the value are preserved, a reinterpret cast is desired.
public static unsafe int UInt32ToInt32Bits(uint x)
{
    return *((int*)(void*)&x);
}
Note that the naming is modelled on BitConverter.DoubleToInt64Bits.
Continuing in the hashing vein, converting a stack-based struct into a byte* allows easy use of per-byte hashing functions:
// from the Jenkins one-at-a-time hash function
private static unsafe void Hash(byte* data, int len, ref uint hash)
{
    for (int i = 0; i < len; i++)
    {
        hash += data[i];
        hash += (hash << 10);
        hash ^= (hash >> 6);
    }
}

public static unsafe void HashCombine(ref uint sofar, long data)
{
    byte* dataBytes = (byte*)(void*)&data;
    Hash(dataBytes, sizeof(long), ref sofar);
}
unsafe also (from 2.0 onwards) lets you use stackalloc. This can be very useful in high performance situations where some small variable length array like temporary space is needed.
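For example, a minimal stackalloc sketch (in this pointer form it needs an unsafe context; with Span<T> in newer C# versions it does not):

static unsafe int SumOfSquares(int count)
{
    // Temporary scratch space on the stack: no heap allocation, no GC pressure.
    // Only sensible for small, bounded sizes.
    int* scratch = stackalloc int[16];

    int total = 0;
    for (int i = 0; i < count && i < 16; i++)
    {
        scratch[i] = i * i;
        total += scratch[i];
    }
    return total;
}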
All of these uses would be firmly in the 'only if your application really needs the performance' and thus are inappropriate in general use, but sometimes you really do need it.
fixed is necessary when you wish to interop with some useful unmanaged function (there are many) that takes C-style arrays or strings. As such it is needed not only for performance reasons but for correctness in interop scenarios.
Unsafe is useful for (for example) getting pixel data out of an image quickly using LockBits. The performance improvement over doing this using the managed API is several orders of magnitude.
We had to use a fixed when an address gets passed to a legacy C DLL. Since the DLL maintained an internal pointer across function calls, all hell would break loose if the GC compacted the heap and moved stuff around.
I believe unsafe code is used when you want to access something outside of the .NET runtime, i.e., code that is not managed (no garbage collection and so on). This includes raw calls to the Windows API and all that jazz.
This tells me the designers of the .NET framework did a good job of covering the problem space--of making sure the "managed code" environment can do everything a traditional (e.g. C++) approach can do with its unsafe code/pointers. In case it cannot, the unsafe/fixed features are there if you need them. I'm sure someone has an example where unsafe code is needed, but it seems rare in practice--which is rather the point, isn't it? :)
