Do enums have a limit on the number of members in C#?

I was wondering whether the enum type has a limit on its members. I have a very large list of "variables" that I need to store inside an enum or as constants in a class. I ultimately decided to store them in a class, but I'm still a little curious about the limit (if any) on the number of members an enum can have.
So, do enums have a limit in .NET?

Yes. The number of members with distinct values is limited by the underlying type of the enum. By default this is Int32, so you can have that many distinct members (2^32 - I find it hard to believe you will ever reach that limit), but you can explicitly specify the underlying type like this:
enum Foo : byte { /* can have at most 256 members with distinct values */ }
Of course, you can have as many members as you want if they all have the same value:
enum Foo { A, B = A, C = A /* ... */ }
In either case, there is probably some implementation-defined limit in the C# compiler, but I would expect it to be MIN(range-of-Int32, free-memory) rather than a hard limit.

Due to a limit in the PE file format, you probably can't exceed some 100,000,000 values. Maybe more, maybe less, but definitely not a problem.

From the C# Language Specification 3.0, 1.10:
An enum type's storage format and range of possible values are determined by its underlying type.
While I'm not 100% sure, I would have expected the Microsoft C# compiler to allow only non-negative enum values; in fact, negative values are accepted for signed underlying types, so with the default Int32 the full range is available. In any case this is an implementation detail, as it is not specified. If you need more than that, you're probably doing something wrong.
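For what it's worth, a quick sketch showing that negative members compile fine (the type name is illustrative):
enum Adjustment
{
    Decrease = -1, // negative values are valid when the underlying type is signed
    None = 0,
    Increase = 1
}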

You could theoretically use Int64 (long) as your base type in the enum and get 2^64 possible entries. Others have given you excellent answers on this.
I think there is a second implied question of should you use an enum for something with a huge number of items. This actually directly applies to your project in many ways.
One of the biggest considerations would be long-term maintainability. Do you think the company will ever change the list of values you are using? If so, will there need to be backward compatibility with previous lists? How significant a problem could this be? In general, the more members an enum has, the higher the probability the list will need to be modified at some future date.
Enums are great for many things. They are clean, quick and simple to implement. They work great with IntelliSense and make the next programmer's job easier, especially if the names are clear, concise and if needed, well documented.
The problem is an enumeration also comes with drawbacks. They can be problematic if they ever need to be changed, especially if the classes using them are being persisted to storage.
In most cases enums are persisted to storage as their underlying values, not as their friendly names.
enum InsuranceClass
{
    Home,    // value = 0 (int32)
    Vehicle, // value = 1 (int32)
    Life,    // value = 2 (int32)
    Health   // value = 3 (int32)
}
In this example the value InsuranceClass.Life would get persisted as a number 2.
If another programmer makes a small change to the system and adds Pet to the enum like this:
enum InsuranceClass
{
    Home,    // value = 0 (int32)
    Vehicle, // value = 1 (int32)
    Pet,     // value = 2 (int32)
    Life,    // value = 3 (int32)
    Health   // value = 4 (int32)
}
All of the data coming out of the storage will now show the Life policies as Pet policies. This is an extremely easy mistake to make and can introduce bugs that are difficult to track down.
The second major issue with enums is that every change of the data will require you to rebuild and redeploy your program. This can cause varying degrees of pain. On a web server that may not be a big issue, but if this is an app used on 5000 desktop systems you have an entirely different cost to redeploy your minor list change.
If your list is likely to change periodically, you should really consider a system that stores that list in some other form, most likely outside your code. Databases were specifically designed for this scenario, or even a simple config file could be used (not the preferred solution; see the sketch below). Smart planning for changes can reduce or avoid the problems associated with rebuilding and redeploying your software.
This is not a suggestion to prematurely optimize your system for the possibility of change, but more a suggestion to structure the code so that a likely change in the future doesn't create a major problem. Different situations will require different decisions.
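As a minimal sketch of externalizing such a list (the file name and the "id=name" format are assumptions for illustration):

using System.Collections.Generic;
using System.IO;

// Load "id=name" pairs (e.g. "2=Life") once at startup, so the list can
// change without rebuilding and redeploying the program.
var insuranceClasses = new Dictionary<int, string>();
foreach (var line in File.ReadAllLines("insurance-classes.txt"))
{
    var parts = line.Split('=');
    insuranceClasses[int.Parse(parts[0])] = parts[1];
}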
Here are my rough rules of thumb for the use of enums:
1. Use them to classify and define other data, but not as data themselves. To be clearer, I would use InsuranceClass.Life to determine how the other data in a class should be used, but I would not make the underlying value {pseudocode} InsuranceClass.Life = $653.00 and use the value itself in calculations. Enums are not constants; doing this creates confusion.
2. Use enums when the enum list is unlikely to change. Enums are great for fundamental concepts but poor for constantly changing ideas. When you create an enumeration, it is a contract with future programmers that you want to avoid breaking.
3. If you must change an enum, have a rule everyone follows: add to the end, not the middle. The alternative is to assign explicit values to each member and never change those (see the sketch below). The point is that you are unlikely to know how others are using your enumeration's underlying values, and changing them can cause misery for anyone else using your code. This is an order of magnitude more important for any system that persists data.
4. The corollary to #2 and #3 is to never delete a member of an enum. There are specific circles of hell for programmers who do this in a codebase used by others.
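For example, following rule #3, the Pet member from the earlier example could have been added safely like this:

enum InsuranceClass
{
    Home = 0,
    Vehicle = 1,
    Life = 2,
    Health = 3,
    Pet = 4 // appended later with an explicit value; persisted data keeps its meaning
}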
Hopefully that expanded on the answers in a helpful way.

Related

Most efficient way to keep status flags (under 32 items) in C#

Consider that we are defining a class where:
Many instances of the class will be created.
We must store up to 32 flags in each instance to keep states, options, etc.
The set of flags is fixed at compile time, so we don't need to keep them in an enumerable collection at runtime (that is, we could define separate bool variables rather than one bool array).
Some properties of each instance depend on the flags (or options), and the flag states will be read and written in a hot call path in our application.
Note: performance is important in this application.
As assumptions #1 and #4 dictate, we must balance both speed and memory load.
Obviously we can implement our class in several ways: defining a flags enum field, using a BitVector32, defining separate (bool or enum or int ...) variables, or defining a uint variable and using bit-masks to keep the state in each instance. But:
Which is the most efficient way to keep status flags in this situation?
Does it (= the most efficient way) depend deeply on the tools in use, such as the compiler or even the runtime (CLR)?
As nobody answered my question, I performed some tests and research, and I will answer it myself; I hope it is useful to others:
Which is the most efficient way to keep status flags in this situation?
Because the computer aligns data in memory according to the processor architecture, even in C# (a high-level language) it is generally good advice to avoid separate boolean fields in classes.
Bit-mask based solutions (a flags enum, BitVector32, or manual bit-mask operations) are preferable. For two or more boolean values they are better for memory load and are fast; with only a single boolean state variable, however, they are useless.
Generally, if we choose a flags enum or BitVector32 as the solution, it should in most cases be almost as fast as manual bit-mask operations in C#.
When we need to store small numeric ranges in addition to boolean values as state, BitVector32 is a helpful existing utility that lets us keep our state in one variable and save memory.
We may prefer a flags enum to make our code clearer and more maintainable.
As for the second part of the question:
Does it (= the most efficient way) depend deeply on the tools in use, such as the compiler or even the runtime (CLR)?
Partially, yes. When we choose any of the mentioned solutions (rather than manual bitwise operations), the performance depends on the optimizations the compiler performs (for example, on the method calls made when using BitVector32 or enum operations). Those optimizations will speed up our code, and they seem dependable with the official .NET toolchain, but any solution other than manual bitwise operations is best benchmarked case by case when using other tools.
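To make the comparison concrete, here is a minimal sketch of the three bit-mask approaches discussed above (all type and member names are illustrative):

using System;
using System.Collections.Specialized;

[Flags]
enum Options : uint
{
    None     = 0,
    Cached   = 1u << 0,
    Dirty    = 1u << 1,
    ReadOnly = 1u << 2
}

class Item
{
    // 1) Flags enum field: readable and type-safe.
    public Options Flags;

    // 2) BitVector32: packs up to 32 bools (or small numeric ranges) into one Int32.
    public BitVector32 Bits = new BitVector32(0);
    static readonly int DirtyBit = BitVector32.CreateMask();

    // 3) Manual bit-mask on a raw uint.
    public uint RawFlags;
    const uint DirtyMask = 1u << 1;

    public void MarkDirty()
    {
        Flags |= Options.Dirty; // flags enum: bitwise OR
        Bits[DirtyBit] = true;  // BitVector32 bool indexer
        RawFlags |= DirtyMask;  // manual masking
    }

    public bool IsDirty => (Flags & Options.Dirty) != 0;
}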

List of const int instead of enum

I started working on a large C# code base and found the use of a static class with several const int fields. This class is acting exactly like an enum would.
I would like to convert the class to an actual enum, but the powers that be said no. The main reason I would like to convert it is so that I could have the enum as the data type instead of int. This would help a lot with readability.
Is there any reason to not use enums and to use const ints instead?
This is currently how the code is:
public int FieldA { get; set; }
public int FieldB { get; set; }

public static class Ids
{
    public const int ItemA = 1;
    public const int ItemB = 2;
    public const int ItemC = 3;
    public const int ItemD = 4;
    public const int ItemE = 5;
    public const int ItemF = 6;
}
However, I think it should be the following instead:
public Ids FieldA { get; set; }
public Ids FieldB { get; set; }
I think many of the answers here ignore the implications of the semantics of enums.
You should consider using an enum when the entire set of all valid values (Ids) is known in advance, and is small enough to be declared in program code.
You should consider using an int when the set of known values is a subset of all the possible values - and the code only needs to be aware of this subset.
With regards to refactoring - when time and business constraints allow, it's a good idea to clean code up when the new design/implementation has a clear benefit over the previous implementation and where the risk is well understood. In situations where the benefit is low or the risk is high (or both), it may be better to take the position of "do no harm" rather than "continuously improve". Only you are in a position to judge which case applies to your situation.
By the way, a case where neither enums or constant ints are necessarily a good idea is when the IDs represent the identifiers of records in an external store (like a database). It's often risky to hardcode such IDs in the program logic, as these values may actually be different in different environments (eg. Test, Dev, Production, etc). In such cases, loading the values at runtime may be a more appropriate solution.
Your suggested solution looks elegant, but won't work as it stands, as you can't use instances of a static type. It's a bit trickier than that to emulate an enum.
There are a few possible reasons for choosing enum or const-int for the implementation, though I can't think of many strong ones for the actual example you've posted - on the face of it, it seems an ideal candidate for an enum.
A few ideas that spring to mind are:
Enums
They provide type-safety. You can't pass any old number where an enum value is required.
Values can be autogenerated
You can use reflection to easily convert between the 'values' and 'names' (see the sketch after this list)
You can easily enumerate the values in an enum in a loop, and then if you add new enum members the loop will automatically take them into account.
You can insert new enum values without worrying about clashes occurring if you accidentally repeat a value.
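A quick sketch of the name/value conversion and enumeration points above (Status is illustrative):

using System;

enum Status { Active, Suspended, Closed }

class Demo
{
    static void Main()
    {
        // Convert between names and values.
        Status s = (Status)Enum.Parse(typeof(Status), "Suspended");
        Console.WriteLine(s.ToString()); // "Suspended"

        // A loop over the values automatically picks up newly added members.
        foreach (Status v in Enum.GetValues(typeof(Status)))
            Console.WriteLine($"{v} = {(int)v}");
    }
}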
const-ints
If you don't understand how to use enums (e.g. not knowing how to change the underlying data type of an enum, how to set explicit values for enum members, or how to assign the same value to multiple members), you might mistakenly believe you're achieving something you can't use an enum for by using a const.
If you're used to other languages you may just naturally approach the problem with consts, not realising that a better solution exists.
You can derive from classes to extend them, but annoyingly you can't derive a new enum from an existing one (which would be a really useful feature). Potentially you could therefore use a class (but not the one in your example!) to achieve an "extendable enum".
You can pass ints around easily. Using an enum may require you to be constantly casting (e.g.) data you receive from a database to and from the enumerated type. What you lose in type-safety you gain in convenience. At least until you pass the wrong number somewhere... :-)
If you use readonly rather than const, the values are stored in actual memory locations that are read when needed. This allows you to publish constants to another assembly that are read and used at runtime, rather than baked into the other assembly, which means that you don't have to recompile the dependent assembly when you change any of the constants in your own assembly. This is an important consideration if you want to be able to patch a large application by just releasing updates for one or two assemblies (see the sketch below).
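A minimal sketch of that difference (Limits is an illustrative name):

// In assembly A:
public static class Limits
{
    public const int MaxItems = 10;            // compiled into consumers as the literal 10
    public static readonly int MaxRetries = 3; // read from assembly A at runtime
}

// In assembly B:
// MaxItems was baked in when B was compiled, so shipping a new assembly A
// alone won't change it; MaxRetries is looked up at runtime, so it will.
int total = Limits.MaxItems + Limits.MaxRetries;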
I guess it is a way of making it clearer that the enum values must stay unchanged. With an enum another programmer will just drop in a new value without thinking, but a list of consts makes you stop and think "why is it like this? How do I add a new value safely?". But I'd achieve this by putting explicit values on the enums and adding a clear comment, rather than resorting to consts.
Why should you leave the implementation alone?
The code may well have been written by an idiot who has no good reason for what he did. But changing his code and showing him he's an idiot isn't a smart or helpful move.
There may be a good reason it's like that, and you will break something if you change it (e.g. it may need to be a class due to being accessed through reflection, being exposed through external interfaces, or to stop people easily serializing the values because they'll be broken by the obfuscation system you're using). No end of unnecessary bugs are introduced into systems by people who don't fully understand how something works, especially if they don't know how to test their changes to ensure they haven't broken anything.
The class may be autogenerated by an external tool, so it is the tool you need to fix, not the source code.
There may be a plan to do something more with that class in future (?!)
Even if it's safe to change, you will have to re-test everything that is affected by the change. If the code works as it stands, is the gain worth the pain? When working on legacy systems we will often see existing code of poor quality or just done a way we don't personally like, and we have to accept that it is not cost effective to "fix" it, no matter how much it niggles. Of course, you may also find yourself biting back an "I told you so!" when the const-based implementation fails due to lacking type-safety. But aside from type-safety, the implementation is ultimately no less efficient or effective than an enum.
If it ain't broke, don't fix it.
I don't know the design of the system you're working on, but I suspect that the fields are integers that just happen to have a number of predefined values. That's to say they could, in some future state, contain more than those predefined values. While an enum allows for that scenario (via casting), it implies that only the values the enumeration contains are valid.
Overall, the change is a semantic one but it is unnecessary. Unnecessary changes like this are often a source of bugs, additional test overhead and other headaches with only mild benefits. I say add a comment expressing that this could be an enum and leave it as it is.
Yes, it does help with readability, and no I cannot think of any reason against it.
Using const int is a very common "old school" programming practice carried over from C++.
The reason I see is that if you want to be loosely coupled with another system that uses the same constants, you avoid the tight coupling of sharing the same enum type.
Like in RPC calls or something...

Is there any harm in having many enum values? (many >= 1000)

I have a large list of error messages that my biz code can return based on what's entered. The list may end up with more than a thousand.
I'd like to just enum these all out, using the [Description("")] attribute to record the friendly message.
Something like:
public enum ErrorMessage
{
    [Description("A first name is required for users.")]
    User_FirstName_Required = 1,

    [Description("The first name is too long. It cannot exceed 32 characters.")]
    User_FirstName_Length = 2,

    ...
}
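(For reference, a minimal sketch of the usual reflection approach for reading such descriptions back; the helper is illustrative, not part of the question:)

using System;
using System.ComponentModel;
using System.Reflection;

static class EnumDescriptions
{
    // Fetch the [Description] text for an enum member, falling back to its name.
    public static string GetDescription(this Enum value)
    {
        FieldInfo field = value.GetType().GetField(value.ToString());
        var attr = field?.GetCustomAttribute<DescriptionAttribute>();
        return attr?.Description ?? value.ToString();
    }
}

// usage: ErrorMessage.User_FirstName_Required.GetDescription()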
I know enums are primitive types, integers specifically. There shouldn't be any problem with that many integers, right?
Is there something I'm not thinking of? It seems like this should be okay, but I figured I should ask the community before spending the time to do it this way.
Does .Net care about enum types differently when they have lots of values?
Update
The reason I didn't want to use Resources is because
a) I need to be able to reference each unique error message with an integer value. The biz layer services an API, in addition to other things, and a list of integer values has to be returned denoting the errors. I don't believe Resources allows you to address a resource value with an integer. Am I wrong?
b) There are no localization requirements.
I think a design that has 1,000+ values in an enum needs some more thought. Sounds like a "God Enum" anti-pattern will have to be invented for this case.
The main downside I'd point out with having the friendly description in an Attribute is that this will cause challenges if you ever need to localize your app for another language. If this is a consideration, it would be a good idea to put the strings in a resource file.
The enum itself should not be a problem, though having all of your error codes in one master list can be confusing. You may consider creating separate enums for separate categories of return codes, as this will make it easier for developers to understand the possible return values for a particular function. You can still give them distinct numeric values (by specifying the numeric values explicitly) if it's important that the codes be unique.
On a side note, the .NET BCL does not make much use of return codes and return codes are somewhat discouraged in modern .NET development. They create maintainability issues (you can almost never remove old return codes or risk breaking backwards compatibility) and they require special validation logic to handle the returns for every call. Stateful validation can be accomplished with IDataErrorInfo, where you use an intermediate class that can represent invalid states, but that only allows a Commit of changes that are validated. This allows you to manipulate the object freely, but also provide feedback to the user as to the validity of its state. The equivalent logic with error codes often requires a switch statement for each use.
1000 is not many; you should just make sure that the underlying integer type is big enough (don't use a byte as the underlying type of your enum).
On second thought, 1000 is tons if you're entering them manually; if they are generated from some data set, it could kinda make sense...
I fully agree with duffymo. An enum with 1000+ values smells bad from a design point of view. Not to mention that it would be quite nasty for the developer to use IntelliSense on such a GOD ENUM :-)
I would better go for using resources.
I think it's very bad. For error handling you can simply use resources, and since I see you want to use reflection to fetch the descriptions, that's bad too.
If you don't want to use resources, you can define a different enum for each of your business rules. Your different business areas don't need each other's error messages (and shouldn't be coupled like this).

Alternatives to nullable types in C#

I am writing algorithms that work on series of numeric data, where sometimes, a value in the series needs to be null. However, because this application is performance critical, I have avoided the use of nullable types. I have perf tested the algorithms to specifically compare the performance of using nullable types vs non-nullable types, and in the best case scenario nullable types are 2x slower, but often far worse.
The data type most often used is double, and currently the chosen alternative to null is double.NaN. However I understand this is not the exact intended usage for the NaN value, so am unsure whether there are any issues with this I cannot foresee and what the best practise would be.
I am interested in finding out what the best null alternatives are for the following data types in particular: double/float, decimal, DateTime, int/long (although others are more than welcome)
Edit: I think I need to clarify my requirements about performance. Gigabytes of numerical data are processed through these algorithms at a time, which takes several hours. Therefore, although the difference between e.g. 10ms and 20ms is usually insignificant, in this scenario it really does make a significant impact on the time taken.
Well, if you've ruled out Nullable<T>, you are left with domain values - i.e. a magic number that you treat as null. While this isn't ideal, it isn't uncommon either - for example, a lot of the main framework code treats DateTime.MinValue the same as null. This at least moves the damage far away from common values...
Edit - to highlight that this applies only where there is no NaN:
So where there is no NaN, maybe use .MinValue - but just remember what evils happen if you accidentally use that same value while meaning the actual number...
Obviously for unsigned data you'll need .MaxValue (avoid zero!!!).
Personally, I'd try to use Nullable<T> as expressing my intent more safely... there may be ways to optimise your Nullable<T> code, perhaps. And also - by the time you've checked for the magic number in all the places you need to, perhaps it won't be much faster than Nullable<T>?
I somewhat disagree with Gravell on this specific edge case: a null-ed variable is considered 'not defined'; it doesn't have a value. So whatever is used to signal that is OK - even magic numbers - but with magic numbers you have to take into account that a magic number will always haunt you in the future when it suddenly becomes a 'valid' value. With Double.NaN you don't have to be afraid of that: it's never going to become a valid double. Though you have to consider that NaN, in the context of a sequence of doubles, can only be used as a marker for 'not defined'; you obviously can't use it as an error code in the sequences as well.
So whatever is used to mark 'undefined': it has to be clear in the context of the set of values that that specific value is considered the value for 'undefined' AND that won't change in the future.
If Nullable give you too much trouble, use NaN, or whatever else, as long as you consider the consequences: the value chosen represents 'undefined' and that will stay.
I am working on a large project that uses NaN as a null value. I am not entirely comfortable with it - for similar reasons as yours: not knowing what can go wrong. We haven't encountered any real problems so far, but be aware of the following:
NaN arithmetic - While, most of the time, "NaN promotion" is a good thing, it might not always be what you expect.
Comparison - Comparison of values gets rather expensive if you want NaNs to compare equal. Now, testing floats for equality isn't simple anyway, but ordering (a < b) can get really ugly, because NaNs sometimes need to be smaller, sometimes larger than normal values.
Code Infection - I see lots of arithmetic code that requires specific handling of NaN's to be correct. So you end up with "functions that accept NaN's" and "functions that don't" for performance reasons.
Other non-finites - NaN is not the only non-finite value. They should be kept in mind...
Floating Point Exceptions are not a problem when disabled. Until someone enables them. True story: static initialization of a NaN in an ActiveX control. Doesn't sound scary, until you change installation to use InnoSetup, which uses a Pascal/Delphi(?) core, which has FPU exceptions enabled by default. Took me a while to figure out.
So, all in all, nothing serious, though I'd prefer not to have to consider NaNs that often.
I'd use Nullable types as often as possible, unless they are (proven to be) performance / ressource constraints. One case could be large vectors / matrices with occasional NaNs, or large sets of named individual values where the default NaN behavior is correct.
Alternatively, you can use an index vector for vectors and matrices, standard "sparse matrix" implementations, or a separate bool/bit vector.
Partial answer:
Float and Double provide NaN (Not a Number). NaN is a little tricky since, per spec, NaN != NaN. If you want to know if a number is NaN, you'll need to use Double.IsNaN().
See also Binary floating point and .NET.
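A quick sketch of the comparison pitfall just described:

double a = double.NaN;

Console.WriteLine(a == a);               // False: NaN never compares equal, even to itself
Console.WriteLine(double.IsNaN(a));      // True: the correct test
Console.WriteLine(a.Equals(double.NaN)); // True: note that Equals() deliberately differs from ==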
Maybe the significant performance decrease happens when calling one of Nullable's members or properties (boxing).
Try using a struct with the double plus a boolean telling whether the value is specified or not.
One can avoid some of the performance degradation associated with Nullable<T> by defining your own structure
struct MaybeValid<T>
{
    public bool IsValue;
    public T Value;
}
If desired, one may define a constructor, or a conversion operator from T to MaybeValid<T>, etc., but overuse of such things may yield sub-optimal performance. Exposed-field structs can be efficient if one avoids unnecessary data copying. Some people may frown upon the notion of exposed fields, but they can be massively more efficient than properties.
If a function that would return a T needs a variable of type T to hold its return value, using a MaybeValid<Foo> simply increases the size of the thing to be returned by 4 bytes. By contrast, using a Nullable<Foo> would require that the function first compute the Foo and then pass a copy of it to the constructor for the Nullable<Foo>. Further, returning a Nullable<Foo> will require that any code that wants to use the returned value make at least one extra copy to a storage location (variable or temporary) of type Foo before it can do anything useful with it. By contrast, code can use the Value field of a variable of type MaybeValid<Foo> about as efficiently as any other variable.
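A usage sketch under those assumptions (TryParseDouble is illustrative):

static MaybeValid<double> TryParseDouble(string s)
{
    var result = default(MaybeValid<double>);
    result.IsValue = double.TryParse(s, out result.Value);
    return result;
}

var m = TryParseDouble("3.14");
if (m.IsValue)
    Console.WriteLine(m.Value * 2); // fields are read in place, with no unwrapping copy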

When structures are better than classes? [duplicate]

This question already has answers here:
When should I use a struct rather than a class in C#?
Duplicate of: When to use struct in C#?
Are there practical reasons to use structures instead of some classes in Microsoft .NET 2.0/3.5 ?
"What is the difference between structures and classes?" - this is probably the most popular question on intrviews for ".NET developer" vacancies. The only answer that interviewer considers to be right is "structures are allocated on stack and classes are allocated on heap" and no further questions are asked about that.
Some Google searching showed that:
a) structures have numerous limitations and no additional abilities in comparison to classes, and
b) the stack (and as such structures) can be faster under very specialized conditions, including:
size of the data chunk less than 16 bytes
no extensive boxing/unboxing
structure's members are nearly immutable
the whole set of data is not big (otherwise we get a stack overflow)
(please correct/add to this list if it is wrong or not full)
As far as I know, most typical commercial projects (ERM, accounting, solutions for banks, etc.) do not define even a single structure; all custom data types are defined as classes instead. Is there something wrong, or at least imperfect, in this approach?
NOTE: question is about run-of-the-mill business apps, please don't list "unusual" cases like game development, real-time animation, backward compatibility (COM/Interop), unmanaged code and so on - these answers are already under this similar question:
When to use struct?
As far as I know, most typical commercial projects (ERM, accounting, solutions for banks, etc.) do not define even a single structure; all custom data types are defined as classes instead. Is there something wrong, or at least imperfect, in this approach?
No! Everything is perfectly right with that. Your general rule should be to always use objects by default. After all, we are talking about object-oriented programming for a reason, not structure-oriented programming (structs themselves are missing some OO principles like inheritance and abstraction).
However structures are sometimes better if:
You need precise control over the amount of memory used (structures use, depending on their size, somewhat to FAR less memory than objects).
You need precise control of memory layout. This is especially important for interop with Win32 or other native APIs
You need the fastest possible speed. (In lots of scenarios with larger sets of data you can get a decent speedup when correctly using structs).
You need to waste less memory and have large amounts of structured data in arrays. Especially in conjunction with arrays, you can get huge memory savings with structures.
You are working extensively with pointers. Then structures offer lots of interesting characteristics.
IMO the most important use case are large arrays of small composite entities. Imagine an array containing 10^6 complex numbers. Or a 2d array containing 1000x1000 24-bit RGB values. Using struct instead of classes can make a huge difference in cases like these.
EDIT:
To clarify: Assume you have a struct
struct RGB
{
public byte R,G,B;
}
If you declare an array of 1000x1000 RGB values, this array will take exactly 3 MB of memory, because the values are stored inline.
If you used a class instead of a struct, the array would contain 1,000,000 references. That alone would take 4 or 8 MB (on a 64-bit machine) of memory. If you initialized all items with separate objects, so you could modify the values separately, you'd have 1,000,000 objects swirling around on the managed heap to keep the GC busy. Each object has an overhead of (IIRC) 2 references, i.e. the objects would use 11/19 MB of memory. In total that's 5 times as much memory as the simple struct version.
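A sketch of the two layouts (RgbClass is a hypothetical class with the same fields as the RGB struct above):

// Struct version: one contiguous block of 1000*1000*3 bytes, no per-element objects.
RGB[,] image = new RGB[1000, 1000];

// Class version: the array holds 10^6 references, each pointing to a
// separate heap object with per-object overhead.
// RgbClass[,] image2 = new RgbClass[1000, 1000]; // every element is still null here!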
One advantage of stack allocated value types is that they are local to the thread. That means that they are inherently thread safe. That cannot be said for objects on the heap.
This of course assumes we're talking about safe, managed code.
Another difference from classes is that when you assign a structure instance to a variable, you are not just copying a reference but copying the whole structure. So if you modify one of the instances (you shouldn't anyway, since structure instances are intended to be immutable), the other one is not modified.
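A quick sketch of that copy behavior (Point is illustrative):

using System;

struct Point { public int X, Y; }

class CopyDemo
{
    static void Main()
    {
        var a = new Point { X = 1, Y = 2 };
        var b = a;              // assignment copies the whole struct, not a reference
        b.X = 99;
        Console.WriteLine(a.X); // prints 1: 'a' is unaffected by the change to 'b'
    }
}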
All good answers thus far... I only have to add that, by definition, value types are not nullable and hence are a good candidate for scenarios where you do not want to be bothered with creating a new instance of a class and assigning it to fields, for example...
struct Aggregate1
{
    int A;
}

struct Aggregate2
{
    Aggregate1 A;
    Aggregate1 B;
}
Note if Aggregate1 were a class then you would have had to initialize the fields in Aggregate2 manually...
Aggregate2 ag2 = new Aggregate2();
ag2.A = new Aggregate1();
ag2.B = new Aggregate1();
This is obviously not required as long as Aggregate1 is a struct. This may prove useful when you are creating a class/struct hierarchy for the express purpose of serialization/deserialization with the XmlSerializer; many seemingly mysterious exceptions will disappear just by using structs in this case.
If the purpose of a type is to bind a small fixed collection of independent values together with duct tape (e.g. the coordinates of a point, a key and associated value of an enumerated dictionary entry, a six-item 2d transformation matrix, etc.), the best representation, from the standpoint of both efficiency and semantics, is likely to be a mutable exposed-field structure. Note that this is a very different usage scenario from the case where a struct represents a single unified concept (e.g. a Decimal or a DateTime), and Microsoft's advice on when to use structures is only applicable to the latter. The style of "immutable" structure Microsoft describes is only really suitable for representing a single unified concept; if one needs to represent a small fixed collection of independent values, the proper alternative is not an immutable class (which offers inferior performance), nor a mutable class (which will in many cases offer incorrect semantics), but rather an exposed-field struct (which, when used properly, offers superior semantics and performance). For example, if one has a struct Transform2d which holds a 2d transformation matrix, a method like:
static void Offset(ref Transform2d it, double x, double y)
{
    it.dx += x;
    it.dy += y;
}
is both faster and clearer than
static void Offset(ref Transform2d it, double x, double y)
{
    it = new Transform2d(it.xx, it.xy, it.yx, it.yy, it.dx + x, it.dy + y);
}
or
Transform2d Offset(double x, double y)
{
    return new Transform2d(xx, xy, yx, yy, dx + x, dy + y);
}
Knowing that dx and dy are fields of Transform2d is sufficient to know that the first method modifies those fields and has no other side-effect. By contrast, to know what the other methods do, one would have to examine the code for the constructor.
There have been some excellent answers that touch on the practicality of using structs vs. classes and vice versa, but I think your original comment about structs being immutable is a pretty good argument for why classes are used more often in the high-level design of LOB applications.
In Domain Driven Design (http://www.infoq.com/minibooks/domain-driven-design-quickly) there is somewhat of a parallel between Entities/Classes and Value Objects/Structs. Entities in DDD are items within the business domain whose identity we need to track with an identifier, e.g. CustomerId, ProductId, etc. Value Objects are items whose values we might be interested in, but whose identity we don't track with an identifier, e.g. Price or OrderDate. Entities are mutable in DDD except for their identity field, while Value Objects do not have an identity.
So when modeling a typical business entity, a class is usually designed along with an identity attribute, which tracks the identity of the business object round-trip from the persistence store and back again. Although at runtime we might change all the property values on a business object instance, the entity's identity is retained as long as the identifier is immutable. With business concepts that correspond to Money or Time, a struct is sort of a natural fit, because even though a new instance is created whenever we perform a computation, that's OK because we aren't tracking an identity, only storing a value.
Sometimes you just want to transfer data between components; then a struct is better than a class, e.g. a Data Transfer Object (DTO), which only carries data.
