C# Preprocessor directives

In C++ we can do this:
struct {
#if defined (BIGENDIAN)
uint32_t h;
uint32_t l;
#else
uint32_t l;
uint32_t h;
#endif
} dw;
Now, in C# it's not so simple. I have a method to test for big-endianness at runtime, but how can I get the same effect in C# and define the struct at compile time? I was thinking I could have classes like "BoardBig" and "BoardLittle" and use a factory to return the class I need based on the IsBigEndian check. Likewise, for _WIN64 checks, I could have classes like "Position_64" and "Position_32". Is this a good approach? Since C# cannot define statements like #define IsBigEndian 1, I'm not sure what to do.
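For concreteness, here is roughly what I mean by the factory idea (a minimal sketch; all names are illustrative):
using System;

public interface IBoard { /* board operations */ }
public sealed class BoardBig : IBoard { /* big-endian field order */ }
public sealed class BoardLittle : IBoard { /* little-endian field order */ }

public static class BoardFactory
{
    // Pick the implementation at runtime based on the host's byte order.
    public static IBoard Create() =>
        BitConverter.IsLittleEndian ? new BoardLittle() : (IBoard)new BoardBig();
}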

Update: as other posters have pointed out (upvoted), this is not a solution for endianness in C#.
C# has conditional compilation directives:
#if BIGENDIAN
uint h;
uint l;
#else
uint l;
uint h;
#endif
BTW, you should avoid these if you can; they make code harder to test.
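For completeness, a minimal compilable sketch of that directive in a real C# struct. Note the BIGENDIAN symbol has to be supplied at build time (e.g. csc /define:BIGENDIAN, or <DefineConstants>BIGENDIAN</DefineConstants> in the project file), which is exactly why this cannot track the endianness of the machine the program eventually runs on:
public struct DoubleWord
{
#if BIGENDIAN
    public uint H;   // high word first on big-endian builds
    public uint L;
#else
    public uint L;   // low word first otherwise
    public uint H;
#endif
}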

Since you cannot "memory-map" C# structures onto raw data, there is no real advantage in using the preprocessor for this purpose. So while C# does have preprocessor features that can be used for other things, I don't think they will be valuable to you here.
Instead, just work with one preferred structure and bury the low-level bit-twiddling for the special cases. Here is an example of big-endian and little-endian handling for a structure:
Marshalling a big-endian byte collection into a struct in order to pull out values

There is conditional compilation in C#, but you can't use it to get different code depending on the endianness: for managed languages the endianness of the system is not known at compile time.
The compiler produces IL code, which can be executed both on big-endian and little-endian systems. It's the JIT compiler that takes care of turning the IL code into native machine code and putting numeric values into the correct format.
You can use BitConverter.IsLittleEndian to find out the endianness at runtime.
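A minimal sketch of that runtime check in action, assuming you receive big-endian data (say, from a network protocol) and want host-order values:
using System;

static class EndianHelper
{
    // Reads a big-endian 32-bit value from a byte buffer, independent of host byte order.
    public static uint ReadBigEndianUInt32(byte[] buffer, int offset)
    {
        // BitConverter uses the host's byte order.
        uint value = BitConverter.ToUInt32(buffer, offset);
        if (!BitConverter.IsLittleEndian)
            return value; // host is already big-endian
        // Swap bytes on little-endian hosts.
        return ((value & 0x000000FFu) << 24) |
               ((value & 0x0000FF00u) << 8)  |
               ((value & 0x00FF0000u) >> 8)  |
               ((value & 0xFF000000u) >> 24);
    }
}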

Related

Why a struct with auto layout cannot be marshaled

I encountered odd behaviour when marshalling a struct with the auto layout kind.
For example, let's take some simple code:
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Auto)]
public struct StructAutoLayout
{
    byte B1;
    long Long1;
    byte B2;
    long Long2;
    byte B3;
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine("Sizeof struct is {0}", Marshal.SizeOf<StructAutoLayout>());
    }
}
it throws an exception:
Unhandled Exception: System.ArgumentException: Type
'StructAutoLayout' cannot be marshaled as an unmanaged
structure; no meaningful size or offset can be computed.
So does this mean the compiler doesn't know the struct size at compile time? I was sure this attribute reorders the struct fields and then compiles the result, but it doesn't.
It doesn't make any sense. Marshalling is used for interop - and when doing interop, the two sides have to agree exactly on the structure of the struct.
When you use auto layout, you defer the decision about the structure layout to the compiler. Even different versions of the same compiler can result in different layouts - that's a problem. For example, one compiler might use this:
public struct StructAutoLayout
{
byte B1;
long Long1;
byte B2;
long Long2;
byte B3;
}
while another might do something like this:
public struct StructAutoLayout
{
byte B1;
byte B2;
byte B3;
byte _padding;
long Long1;
long Long2;
}
When dealing with native/unmanaged code, there's pretty much no meta-data involved - just pointers and values. The other side has no way of knowing how the structure is actually laid out, it expects a fixed layout you both agreed upon in advance.
.NET has a tendency to make you spoiled - almost everything just works. This is not the case when interoperating with something like C++: if you just guess your way around, you'll most likely end up with a solution that usually works but once in a while crashes your whole application. When doing anything with unmanaged/native code, make sure you understand perfectly what you're doing - unmanaged interop is just fragile that way.
Now, the Marshal class is designed specifically for unmanaged interop. If you read the documentation for Marshal.SizeOf, it specifically says
Returns the size of an unmanaged type in bytes.
And of course,
You can use this method when you do not have a structure. The layout must be sequential or explicit.
The size returned is the size of the unmanaged type. The unmanaged and managed sizes of an object can differ. For character types, the size is affected by the CharSet value applied to that class.
If the type can't possibly be marshalled, what should Marshal.SizeOf return? That doesn't even make sense :)
Asking for the size of a type or an instance doesn't make any sense in a managed environment. "Real size in memory" is an implementation detail as far as you are concerned - it's not a part of the contract, and it's not something to rely on. If the runtime / compiler wanted, it could make every byte 77 bytes long, and it wouldn't break any contract whatsoever as long as it only stores values from 0 to 255 exactly.
If you used a struct with an explicit (or sequential) layout instead, you would have a definite contract for how the unmanaged type is laid out, and Marshal.SizeOf would work. However, even then, it will only return the size of the unmanaged type, not of the managed one - that can still differ. And again, both can be different on different systems (for example, IntPtr will be four bytes on a 32-bit system and eight bytes on a 64-bit system when running as a 64-bit application).
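For contrast, a minimal sketch of the same fields with a sequential layout, where Marshal.SizeOf has a well-defined answer (the printed value assumes the typical 8-byte alignment of long):
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public struct StructSequentialLayout
{
    public byte B1;
    public long Long1;
    public byte B2;
    public long Long2;
    public byte B3;
}

public static class SizeDemo
{
    public static void Main()
    {
        // Fields keep their declared order; each byte is padded out to the
        // 8-byte alignment of the longs, so this prints 40 rather than throwing.
        Console.WriteLine(Marshal.SizeOf<StructSequentialLayout>());
    }
}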
Another important point is that there are multiple levels of "compilation" in a .NET application. The first level, the C# compiler, is only the tip of the iceberg - and it's not the part that handles reordering fields in auto-layout structs. It simply marks the struct as auto-layout, and it's done. The actual layout is produced at runtime by the CLR (the specification is not clear on whether the JIT compiler handles it, but I would assume so). But that has nothing to do with Marshal.SizeOf or even sizeof - both of those are still handled at runtime. Forget everything you know from C++ - C# (and even C++/CLI) is an entirely different beast.
If you need to profile managed memory, use a memory profiler (like CLRProfiler). But do understand that you're still profiling memory in a very specific environment - different systems or .NET versions can give you different results. And in fact, there's nothing saying two instances of the same structure must be the same size.

__darwin_size_t equivalent in C#

I am writing C# interfaces for an Objective-C class so that the Objective-C methods can be called from my C# code. In the Objective-C class they have used "__darwin_size_t":
#ifndef _SIZE_T
#define _SIZE_T
typedef __darwin_size_t size_t;
#endif /* _SIZE_T */
From the above code I understand that __darwin_size_t is some data type, and that the name "size_t" can be used for it in this project. Since I am writing the interface in C#, I need to use a similar data type that is available in C#. Somewhere after this I found a bit more code, like below:
#if defined(__SIZE_TYPE__)
typedef __SIZE_TYPE__ __darwin_size_t; /* sizeof() */
#else
typedef unsigned long __darwin_size_t; /* sizeof() */
#endif
So I think I can use the UInt64 (unsigned long) data type for size_t and Int64 for long long in my C# code.
Please suggest.
Thanks,
Vishnu Sharma
My suggestion in the bottom part of my question turned out to be fine. I think very few people are working with the Xamarin cross-platform development tool right now.
Vishnu
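For what it's worth, a hedged alternative: in C# interop, size_t is often mapped to UIntPtr (the nuint alias in modern C#) rather than UInt64, because UIntPtr tracks the native pointer width on both 32-bit and 64-bit targets, which is what __darwin_size_t does. A minimal sketch, using a hypothetical native function and library name:
using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Hypothetical binding for: size_t buffer_length(const void *buf);
    // UIntPtr matches __darwin_size_t on both 32- and 64-bit targets.
    [DllImport("libnative")]
    internal static extern UIntPtr buffer_length(IntPtr buf);
}
UInt64 only happens to match where __darwin_size_t is a 64-bit unsigned long; on a 32-bit target it would be the wrong size.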

The best way to use C++ code in .NET and VBA

I wrote a native C++ library that does some math and simulations using Visual Studio 2013. I'd like to have this library easily usable from VBA and .NET (C#). Being able to call it easily from pure C would also be desirable, but is not necessary.
The internals of the library are quite complex, and I want to expose a facade library for the clients. The API will have just a couple of functions (about 5). Each function will take only two pointers as parameters: the input and output structures.
My first idea was to create a COM DLL in C++. It seems to be easy to consume a COM DLL from VBA and C#, and calling COM from pure C is also possible. But I read that creating COM DLLs in C++ is painful, so I dropped that idea.
My next idea is to create a pure C DLL. I'll define the functions and all input and output nested structs in a C header file, so it's reusable both from C and C++. Here's an excerpt:
#pragma pack(push)
#define MF_MAX_PERIODS 240
#define MF_MAX_REGIONS 60
typedef enum mf_agency_model_type { FNMA, FHLMC } mf_agency_model_type;
/* BEGIN OF INPUT DATA STRUCTURES */
/* VBA structs are packed to 4 byte boundaries. */
#pragma pack(4)
typedef struct mf_boarding_loans {
int regions_count;
double timeline[MF_MAX_REGIONS];
} mf_boarding_loans;
#pragma pack(4)
typedef struct mf_agency_model_input {
mf_agency_model_type model_type;
int periods_count;
mf_boarding_loans boarding_loans;
} mf_agency_model_input;
/* BEGIN OF OUTPUT DATA STRUCTURES */
#pragma pack(4)
typedef struct mf_curves {
double dd_90_rate[MF_MAX_PERIODS];
double dd_60_rate[MF_MAX_PERIODS];
} mf_curves;
#pragma pack(4)
typedef struct mf_agency_model_output{
double wal_years;
mf_curves curves;
} mf_agency_model_output;
#pragma pack(pop)
/* BEGIN OF EXTERN FUNCTIONS */
#ifdef __cplusplus
extern "C" {
#endif
/* VBA requires the __stdcall calling convention. */
MFLIB_API void __stdcall mf_run_agency_model(mf_agency_model_input *model_input, mf_agency_model_output *model_output);
#ifdef __cplusplus
}
#endif
It should be possible to call such a DLL from C/C++, VBA (via Declare) and C# (via P/Invoke). The problem is the amount of input and output data. Ultimately, my input data structures will contain about 70 parameters (numbers and arrays of numbers), and the outputs will contain about 100 variables (also numbers and arrays of numbers). I presume I'd have to declare the same structures again in both VBA and C#, and I'd really like to avoid that. (I think I could avoid the declaration in VBA if I exposed the C# code to VBA via COM; it seems to be very easy to implement COM in C#.) My goal is to define the structures only once. Here are my questions:
1) I quickly read about C++/CLI. Could C++/CLI compile my C header to .NET MSIL so it's usable from other .NET languages without any wrappers? I think there will still be a problem with exposing the object to VBA via COM, since it requires some attributes to be added to classes/interfaces, but macros should help here.
2) Is there a tool that will generate for me the structures declarations in VBA and C# from the C code? I'm also OK with having to declare the structures in some meta language and generate C, VBA and C# structures from that.
3) Is it better to define my arrays as fixed size in C or just raw pointers? I read somewhere that it's easier for interop if the arrays have fixed size.
4) Is it better to include the child structures in the parent directly or via pointers from the interop perspective?
Is there any other way that will free me from redefining these huge data structures in multiple languages? To show what I mean, a sketch of the duplicated C# declarations follows below.
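For illustration, here is roughly what the duplicated C# side of the header above would look like (a sketch; the DLL name "mflib" is an assumption):
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 4)]
public struct mf_boarding_loans
{
    public int regions_count;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 60)]  // MF_MAX_REGIONS
    public double[] timeline;
}

[StructLayout(LayoutKind.Sequential, Pack = 4)]
public struct mf_agency_model_input
{
    public int model_type;        // mf_agency_model_type
    public int periods_count;
    public mf_boarding_loans boarding_loans;
}

[StructLayout(LayoutKind.Sequential, Pack = 4)]
public struct mf_curves
{
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 240)] // MF_MAX_PERIODS
    public double[] dd_90_rate;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 240)]
    public double[] dd_60_rate;
}

[StructLayout(LayoutKind.Sequential, Pack = 4)]
public struct mf_agency_model_output
{
    public double wal_years;
    public mf_curves curves;
}

public static class MfLib
{
    [DllImport("mflib", CallingConvention = CallingConvention.StdCall)]
    public static extern void mf_run_agency_model(
        ref mf_agency_model_input model_input,
        out mf_agency_model_output model_output);
}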
Thanks for your help,
Michal

What is the cost of a #define?

To define constants, what is the more common and correct way? What is the cost, in terms of compilation, linking, etc., of defining constants with #define? Is there another, less expensive way?
The best way to define any const is to write
const int m = 7;
const float pi = 3.1415926f;
const char x = 'F';
Using #define is bad C++ style, because a #define cannot be scoped to a namespace.
Compare
#define pi 3.1415926
with
namespace myscope {
const float pi = 3.1415926f;
}
The second way is obviously better.
The compiler itself never sees a #define: the preprocessor expands all macros before the code is passed to the compiler. One of the side effects, though, is that the values are repeated... and two identical string literals are not necessarily the same string. If you write
#define SOME_STRING "Just an example"
it's perfectly legal for the compiler to add a copy of the string to the output file each time it sees the string. A good compiler will probably eliminate duplicate literals, but that's extra work it has to do. If you use a const instead, the compiler doesn't have to worry about that as much.
The cost is only to the preprocessor, when #defines are resolved (ignoring the additional debugging cost of dealing with a project full of #defines for constants, of course).
#define macros are processed by the pre-processor, they are not visible to the compiler. And since they are not visible to the compiler as a symbol, it is hard to debug something which involves a macro.
The preferred way of defining constants is using the const keyword along with proper type information.
const unsigned int ArraySize = 100;
Even better is
static const unsigned int ArraySize = 100;
when the constant is used only in a single file.
#define adds a little compilation (preprocessing) time, but the substituted value costs nothing extra at execution time.
Generally, #define is used for conditional compilation, while const is used for values that take part in ordinary computation.
The choice depends on your requirements.
#define is textual replacement, so if you make a mistake in a macro it shows up as an error only later, at the point of use; incorrect types and incorrect expressions are the most common problems.
For conditional compilation, preprocessor macros work best. For constants that are used in computation, const works well.
CPU time isn't really the cost of using #define or macros. The "cost" to you as a developer is as follows:
If there is an error in your macro, the compiler will flag it where you referenced the macro, not where you defined it.
You will lose type safety and scoping for your macro.
Debugging tools will not know the value of the macro.
These things may not burn CPU cycles, but they can burn developer cycles.
For constants, declaring const variables is preferable, and for little type-independent functions, inline functions and templates are preferable.

Calling unmanaged C++ code from C#, mixed with STL

Hey, I want to call unmanaged C++ code from C#.
The function interface looks like the following (I simplified it to make it easy to understand):
Face genMesh(int param1, int param2);
Face is a struct defined as:
struct Face {
    std::vector<float> nodes;
    std::vector<int> indexs;
};
I googled, read the MSDN docs, and found ways to call simple unmanaged C/C++ code from C#; I also know how to handle a struct as a return value. My question is how to handle "vector": I did not find any rules about mapping between vector and C# types.
Thanks!
You want, if possible, to avoid using the STL in anything but pure unmanaged code. When you mix it with C++/CLI (or Managed C++), you will likely end up with the STL code compiled as managed and the client code running as unmanaged. What happens then is that when you, say, iterate over a vector, every call to a vector method transitions into managed code and back again.
See here for a similar question.
You're probably going to need to pass raw arrays unless you really want to jump through some hoops with the interop, as the rules specify the types have to either be marshallable by the framework or you've given the framework a specific structure it can marshal. That is probably not possible for vector. So, you can define your C++ struct as
#pragma pack(push, 8)
struct ReflSettings
{
    double* Q;
    double* DisplayQ;
};
#pragma pack(pop)
then your C# class would be
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode, Pack = 8)]
public class ModelSettings
{
    [XmlIgnore] internal IntPtr Q;
    [XmlIgnore] internal IntPtr DisplayQ;
}
Hope this helps.
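To make the raw-array route concrete, here is a hedged C# sketch of consuming such a flattened API; the native function name, DLL name, and signature are hypothetical:
using System;
using System.Runtime.InteropServices;

static class NativeMesh
{
    // Hypothetical flattened wrapper around genMesh: the vector<float> is
    // exposed as a raw pointer plus an element count written to 'count'.
    [DllImport("meshlib")]
    internal static extern IntPtr gen_mesh_nodes(int param1, int param2, out int count);
}

static class Example
{
    static float[] GetNodes(int p1, int p2)
    {
        IntPtr ptr = NativeMesh.gen_mesh_nodes(p1, p2, out int count);
        float[] nodes = new float[count];
        Marshal.Copy(ptr, nodes, 0, count); // copy the unmanaged floats into a managed array
        // (the native side would also need to expose a function to free the buffer)
        return nodes;
    }
}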
Probably the simplest thing to do is to create a managed class in C++/CLI to represent the Face struct and copy its contents into the new managed class. Your C# code should then be able to understand the data.
You could use an ArrayList in place of the vectors.
