I would like to know which is more optimal: a number N of private integer variables, or a single array containing N integer values.
When you use an array, there is an extra level of indirection. The array is allocated in a separate part of memory, so when you access it, your code first has to obtain its address and only then can it read the contents. Indexing is also required, but the CPU does that extremely fast. However, .NET is a safe environment, and it checks whether the array index you use is valid, which adds additional time.
When you use separate variables, they are stored within your object instance, so no indirection is needed. No index bounds check is needed either.
Moreover, you cannot give a meaningful name to the Nth element of an array, but you can give good names to individual variables, so your code will be more readable.
As others mentioned, you shouldn't do this kind of micro-optimization; the compiler and JIT take care of it. The compiler knows several common use cases and has an optimization strategy for each. If you start doing tricky things, the compiler will not recognize your intention and cannot optimize for you.
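As a minimal sketch (with hypothetical class and field names), the two alternatives the question describes look like this:

class WithFields
{
    // Each value lives inline in the object instance:
    // no extra indirection, no bounds check, and each field has a good name.
    private int width;
    private int height;
    private int depth;

    public int Volume() => width * height * depth;
}

class WithArray
{
    // The array is a separate heap object: every access dereferences the
    // reference and performs an index bounds check (unless the JIT elides it).
    private readonly int[] dims = new int[3];

    public int Volume() => dims[0] * dims[1] * dims[2];
}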
int[] age = new int[5];
After reading many posts and blogs, I am still not clear on why arrays are reference types. I know this question is very old and has been asked plenty of times, but I still couldn't find a direct answer.
Known: since arrays are allocated on the heap, they are reference types.
Need to know: what is the reason behind the compiler making arrays reference types?
Suppose age[0] = 5 and age[1] = 25: what would be the difficulty in storing these on the stack, using a heap reference only when the element type is a reference type? Why choose the heap, when accessing it is comparatively slow?
Why the heap, and not the stack, as for structures?
Several reasons:
Value types are passed by value (that is, copied). So if you called a method with a value-type array, you'd be copying all of its contents.
Stack memory is limited and is primarily designed for quick access to values that are in current use (as implied by the term stack). Putting a large object onto the stack would make lookups of other local state take much longer, because it would no longer all fit on the same cache line in the CPU cache.
Array contents are modifiable. So you'd have all the issues you have with mutable structs: you try to set a value, only to find that a copy, not the "original", was modified; see the sketch below.
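To make the first and third points concrete, here is a minimal sketch (hypothetical names) of how value-type and reference-type semantics differ when a method mutates its argument:

using System;

struct MutablePoint { public int X; }

class CopySemanticsDemo
{
    static void Mutate(MutablePoint p) { p.X = 42; }  // mutates a private copy
    static void Mutate(int[] a)        { a[0] = 42; } // mutates the shared heap array

    static void Main()
    {
        var point = new MutablePoint();
        Mutate(point);
        Console.WriteLine(point.X);  // 0: only the copy was changed

        var arr = new int[1];
        Mutate(arr);
        Console.WriteLine(arr[0]);   // 42: both variables alias one heap object
    }
}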
EDIT:
Yes, you can use stackalloc in unsafe code. That doesn't make it a good idea.
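For completeness, here is a minimal sketch of what stack allocation looks like; since C# 7.2 it can even be written without an unsafe block by using Span<T>:

using System;

class StackAllocDemo
{
    static int Sum()
    {
        // Stack-allocated buffer (C# 7.2+): no heap allocation, no GC.
        Span<int> ages = stackalloc int[5];
        ages[0] = 5;
        ages[1] = 25;

        int total = 0;
        foreach (int age in ages) total += age;

        // 'ages' cannot be returned or stored in a field: it dies with this
        // stack frame, which is exactly why general-purpose arrays live on the heap.
        return total;
    }
}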
This Stack Overflow answer explains that a HashSet is unordered and that its enumeration order is undefined and should not be relied upon.
However,
This brings up another question: should I or should I not rely on the enumeration order being the same across two or more subsequent enumerations, given that there are no insertions or removals in between?
For example, let's say I have added some items to a HashSet:
HashSet<int> set = new HashSet<int>();
set.Add(1);
set.Add(2);
set.Add(3);
set.Add(4);
set.Add(5);
Now, when I enumerate this set via foreach, let us say I receive this sequence:
// Result: 1, 3, 4, 5, 2.
The question is: will the order be preserved if I enumerate the set again and again, given that I make no modifications? Will it always be the same?
Practically speaking, it might always be the same between enumerations, but that guarantee is not given in the documentation of IEnumerable, and the implementer could decide to return the items in whichever order it wants.
Who knows what it is doing under the hood, and whether it will keep doing it the same way in the future. For example, a future implementation of HashSet might be optimized to detect low memory conditions and rearrange its contents in memory, thereby affecting the order in which they are returned. So 99.9% of the time they would come back the same order, but if you started exhausting memory resources, it would suddenly return things in a different order.
The bottom line is that I would not rely on the order of enumeration being consistent over time. If the order is important to you, then do your foreach over set.OrderBy(x => x) so that you can be sure it is in the order you want.
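A minimal sketch of that approach:

using System;
using System.Collections.Generic;
using System.Linq;

class OrderedEnumerationDemo
{
    static void Main()
    {
        var set = new HashSet<int> { 1, 2, 3, 4, 5 };

        // Sorting explicitly guarantees the order, regardless of how
        // HashSet happens to store its items internally.
        foreach (int item in set.OrderBy(x => x))
            Console.WriteLine(item); // always 1, 2, 3, 4, 5
    }
}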
I'm refactoring some legacy code that uses a 2D string array:
/// <summary>Array of valid server messages</summary>
private static string[,] serverRsp =
{
    { "JOIN",    "RSP" },
    { "SETTING", "RSP" },
    . . .
I want to modernize this, but I don't know whether I should use a Dictionary, a List of lists of strings, or something else. Is there a standard correlation between the "olden" way and the golden way (legacy vs. refactored)?
IWBN (it would be nice) if there were a chart somewhere that showed the olden vs. the golden for data types, structures, etc.
[,] is not an "old" data structure, and hopefully it never will be.
Keep using it whenever appropriate.
For example, in this very case a List<List<T>> would be much more confusing than a simple two-dimensional array. It is also lighter than List<T> in terms of memory consumption (at least according to my measurements).
In short: if there is no real reason or new requirement to change it, such as needing a faster O(1) key-value data structure for key-based (rather than index-based) access, do not change it. It is clear and readable.
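If such a key-value requirement did appear, a sketch of the refactoring (hypothetical, based on the sample data above) might look like this:

using System.Collections.Generic;

// Maps each message name to its response type: O(1) lookup by key
// instead of scanning the rows of the 2D array.
private static readonly Dictionary<string, string> serverRsp =
    new Dictionary<string, string>
    {
        { "JOIN",    "RSP" },
        { "SETTING", "RSP" },
    };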
I'm used to using encapsulation no matter what; all of my variables are private.
But when I'm handling thousands of instances with thousands of properties, I start thinking about optimization, wondering whether the benefits of encapsulation justify the performance penalty (if any).
I'm aware of why one should use encapsulation; what I'm asking is: is encapsulation worth the processing it requires when it isn't strictly needed? How much overhead does it add?
I think you're missing the point of encapsulation. The point is that the object controls ALL interaction with its fields, thus enforcing business logic uniformly and protecting the state of the system. Given that you would have to run the business logic anyway, you're not saving anything by using bare data objects.
Your first choice should be to encapsulate. Most of the time, the setter and getter functions will be inlined.
All you "lose" is the time it takes for any extra logic involved in the actual verification that you are not setting an invalid value, etc. But you don't want to miss that out just for the sake of speed, would you?
So, if the alternative is to write
if (x >= 0) obj.x = x;
or
obj.setx(x); // where setx checks that x >= 0.
which is better?
If there are performance criteria for the system, then benchmark. If you are meeting the criteria, fine. If not, figure out where the bottlenecks are. As long as your setter and getter functions are "normal" ones (that is, they just store the value after some checking), they shouldn't be the bottleneck. Typical bottlenecks come from poor choices of algorithm.
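For instance, a property like this minimal sketch (the Account name is made up) costs essentially nothing beyond the validation itself, because the JIT normally inlines trivial accessors:

using System;

public class Account
{
    private int _balance;

    public int Balance
    {
        get => _balance;  // trivial getter: normally inlined by the JIT
        set
        {
            // The only "extra" cost is validation you would need anyway.
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value));
            _balance = value;
        }
    }
}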
I'm writing a library, and FxCop says that instead of returning a byte array from an EventArgs derivation, I should return something like an IList or a ReadOnlyCollection instead.
Normally I'd be all for this, but most of the existing .NET Framework uses byte arrays rather than generic list interfaces.
So if I were to use IList, then when accessing the event args, a client who wanted to call File.WriteAllBytes would have to add using System.Linq; and call the ToArray extension method to get the IList in the form of a byte array. Of course there are other ways to do this, but that is the most elegant and typical one.
Clients of this library are always going to want things to be in terms of an array of bytes so that they interface nicely with the rest of the framework.
Also, optimization may come in to play here. There is potential for large amounts of bytes to be manipulated so having to recopy the entire list just to get it in the form of a byte array each time would likely slow things down.
Lastly, it's just plain unpleasant. If clients are always going to want a byte array, then why not just give it to them? Do the Framework Design Guidelines not apply in this situation? What would you do?
There is potential for large amounts of bytes to be manipulated so having to recopy the entire list just to get it in the form of a byte array each time would likely slow things down.
But that is precisely why it should not be a byte array. Suppose you do this:
byte[] x1 = GetByteArray();
x1[0] = 0;
byte[] x2 = GetByteArray();
Every time you call GetByteArray you have to create a new byte array. Why? Because someone might have changed the contents of the one you handed out last time! By handing out a byte array you guarantee that you will have to reconstruct it from scratch every single time.
By contrast, if you hand out a read only collection of bytes then you can hand out the same collection over and over again. You know it is not going to change.
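A minimal sketch of that pattern (hypothetical type names), caching one read-only wrapper and handing it out repeatedly:

using System;
using System.Collections.ObjectModel;

public class MessageEventArgs : EventArgs
{
    // Cached read-only wrapper around the private buffer; assumes the buffer
    // is not mutated after construction.
    private readonly ReadOnlyCollection<byte> _view;

    public MessageEventArgs(byte[] payload)
    {
        _view = new ReadOnlyCollection<byte>(payload);
    }

    // Safe to return the same instance every time: callers cannot mutate it.
    public ReadOnlyCollection<byte> Payload => _view;
}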
Clients of this library are always going to want things to be in terms of an array of bytes so that they interface nicely with the rest of the framework.
There you have your answer: FxCop output is in most cases just a set of helpful suggestions, not commands. If this particular rule doesn't apply to you, you can even turn it off.
The guidelines and recommendations offered by FxCop are not always applicable in every situation. You don't need to follow them, and in some situations you shouldn't.