After writing code that can be boiled down to the following:
var size = -1;
var arr = new byte[size];
I was surprised that it threw an OverflowException. The docs for OverflowException state:
The exception that is thrown when an arithmetic, casting, or conversion operation in a checked context results in an overflow.
I couldn't see how providing a negative value for an array length fits the description given for this exception, so I delved a little deeper and found that this is indeed the specified behaviour:
The computed values for the dimension lengths are validated as follows. If one or more of the values are less than zero, a System.OverflowException is thrown and no further steps are executed.
I wonder why OverflowException was chosen. It's pretty misleading if you ask me. It cost me at least 5 minutes of investigation (not counting my musings here). Can anyone shed any light on this (to my thinking) peculiar design decision?
This is almost certainly an optimization. The .NET Framework code is pretty religious about checking arguments to let the programmer fall into the pit of success. But that doesn't come for free. The cost is fairly minuscule; many class methods spend far more machine cycles on their real work than on the checking.
But arrays are special. They are the very core data structure in the framework. Almost every collection class is built on top of them. Any overhead put in the Array class directly impacts the efficiency of a lot of code that sits on top of it. Avoiding the explicit check is okay; the value gets checked implicitly anyway when the internal code casts it to unsigned. And it is very rare that it trips. So checking it twice is not quite worth the better exception message.
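As a rough illustration of that implicit check (a sketch, not the actual framework code), a checked cast of a negative length to an unsigned type raises the same exception the array constructor does:
using System;
class NegativeLengthDemo
{
    static void Main()
    {
        int size = -1;
        // Creating the array throws OverflowException rather than ArgumentOutOfRangeException:
        try { var arr = new byte[size]; }
        catch (OverflowException e) { Console.WriteLine("new byte[-1]: " + e.GetType().Name); }
        // Pushing the same negative value through a checked unsigned conversion trips the same exception:
        try { uint length = checked((uint)size); }
        catch (OverflowException e) { Console.WriteLine("checked cast: " + e.GetType().Name); }
    }
}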
OverflowException, in the documentation, basically defines an overflow as something that:
produces a result that is outside the range of the data type
In this case, negative values are outside of the valid range for an array size (or really, any size).
I could see the argument that ArgumentOutOfRangeException might be, in some ways, better - however, there is no argument involved in an array definition (as it's not a method), so it, too, would not be a perfect choice.
It might be because the size is treated as an unsigned int. -1 is stored in two's complement, which, when viewed as an unsigned int, is the maximum value that can be stored. Since this number is bigger than the maximum possible size of an array, it overflows.
Warning: this is pure speculation.
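For what it's worth, a quick sketch backs up the bit-pattern part of that speculation: -1 reinterpreted as a 32-bit unsigned value is the largest possible uint.
using System;
class BitPatternDemo
{
    static void Main()
    {
        int size = -1;
        // Reinterpreting the bit pattern (unchecked suppresses the overflow check):
        Console.WriteLine(unchecked((uint)size));   // 4294967295
        Console.WriteLine(uint.MaxValue);           // 4294967295
    }
}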
Related
I have two questions:
1) I need some expert views on writing code that is sound in terms of performance and memory consumption.
2) Performance- and memory-wise, how good or bad is the following piece of code, and why?
I need to increment a counter that could go to a maximum of 100, and I'm writing code like this:
Some sample code is as follows:
for(int i=0;i=100;i++)
{
Some Code
}
for(long i=0;i=1000;i++)
{
Some Code
}
How good would it be to use Int16 or something else instead of int or long if the requirement is the same?
I need to increment a counter that could go to a maximum of 100, and I'm writing code like this:
Options given:
for(int i=0;i=100;i++)
for(long i=0;i=1000;i++)
EDIT: As noted, neither of these would even actually compile, due to the middle expression being an assignment rather than an expression of type bool.
This demonstrates a hugely important point: get your code working before you make it fast. Your two loops don't do the same thing - one has an upper bound of 1000, the other has an upper bound of 100. If you have to choose between "fast" and "correct", you almost always want to pick "correct". (There are exceptions to this, of course - but that's usually in terms of absolute correctness of results across large amounts of data, not code correctness.)
Changing between the variable types here is unlikely to make any measurable difference. That's often the case with micro-optimizations. When it comes to performance, architecture is usually much more important than in-method optimizations - and it's also a lot harder to change later on. In general, you should:
Write the cleanest code you can, using types that represent your data most correctly and simply
Determine reasonable performance requirements
Measure your clean implementation
If it doesn't perform well enough, use profiling etc to work out how to improve it
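As a rough sketch of the measurement step (the loop bodies below are placeholder work, and the exact numbers will vary by machine and JIT):
using System;
using System.Diagnostics;
class CounterTiming
{
    static void Main()
    {
        const int iterations = 100000000;
        var sw = Stopwatch.StartNew();
        long sumInt = 0;
        for (int i = 0; i < iterations; i++)
        {
            sumInt += i;    // placeholder work
        }
        sw.Stop();
        Console.WriteLine("int counter:  " + sw.ElapsedMilliseconds + " ms");
        sw.Restart();
        long sumLong = 0;
        for (long i = 0; i < iterations; i++)
        {
            sumLong += i;   // placeholder work
        }
        sw.Stop();
        Console.WriteLine("long counter: " + sw.ElapsedMilliseconds + " ms");
        // Use the sums so the JIT cannot discard the loops entirely.
        Console.WriteLine(sumInt == sumLong);
    }
}
On a typical 64-bit machine the two timings will usually be within noise of each other, which is the point: measure before assuming the counter type matters.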
DateTime dtStart = DateTime.Now;
for (int i = 0; i < 10000; i++)
{
    // Some Code
}
Response.Write((DateTime.Now - dtStart).TotalMilliseconds.ToString());
Do it the same way for long as well, and you can see which one is better... ;)
When you are doing things that require a number representing iterations, or the quantity of something, you should always use int unless you have a good semantic reason to use a different type (i.e. the data can never be negative, or it could be bigger than 2^31). Additionally, worrying about this sort of nano-optimization will basically never matter when writing C# code.
That being said, if you are wondering about the differences between things like this (incrementing a 4-byte register versus incrementing 8 bytes), you can always consult Agner Fog's wonderful instruction tables.
On an AMD64 machine, incrementing a long takes the same amount of time as incrementing an int.**
On a 32-bit x86 machine, incrementing an int will take less time.
** The same is true for almost all logic and math operations, as long as the value is not both memory bound and unaligned. In .NET a long will always be aligned, so the two will always be the same.
I have an object model that I fill with results from a query and then pass along to a GridView.
Something like this:
public class MyObjectModel
{
    public int Variable1 { get; set; }
    public int VariableN { get; set; }
}
Let's say Variable1 holds the value of a count and I know that the count will never become very large (i.e. the number of upcoming appointments for a certain day). For now, I've made these data types int. Let's say it's safe to assume that someone will book fewer than 255 appointments per day. Will changing the data type from int to byte affect performance much? Is it worth the trouble?
Thanks
No, performance will not be affected much at all.
For each int you will be saving 3 bytes, or 6 in total for the specific example. Unless you have many millions of these, the savings in memory are very small.
Not worth the trouble.
Edit:
Just to clarify - my answer is specifically about the example code. In many cases the choices will make a difference, but it is a matter of scale and will require performance testing to ensure correct results.
To answer #Filip's comment - there is a difference between compiling an application to 64-bit and selecting an isolated data type.
Using an integer variable smaller than an int (System.Int32) will not provide any performance benefits. This is because most integer operations in the CLR will promote the variable to an int prior to performing the operation. int is considered the "natural" integer size on the systems for which the CLR was developed.
Consider the following code:
for (byte appointmentIndex = 0; appointmentIndex < Variable1; appointmentIndex++)
    ProcessAppointment(appointmentIndex);
In the compiled code, the comparison (appointmentIndex < Variable1) and the increment (appointmentIndex++) will (most likely) be performed using 32-bit integers. Even if the optimizer uses a smaller data type, the CPU itself will require additional work to use the smaller data type.
If you are storing an array of values, then using a smaller data type could help save space, which might give a performance advantage in some scenarios.
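As a rough sketch of the array case (the sizes shown are element storage only and ignore object headers; the names are made up for illustration):
using System;
class StorageSketch
{
    static void Main()
    {
        const int count = 1000000;
        int[] countsAsInt = new int[count];
        byte[] countsAsByte = new byte[count];
        // Element storage only: 4 bytes versus 1 byte per value.
        Console.WriteLine("int[]  element data: ~" + count * sizeof(int) + " bytes");
        Console.WriteLine("byte[] element data: ~" + count * sizeof(byte) + " bytes");
        Console.WriteLine(countsAsInt.Length == countsAsByte.Length);
    }
}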
It will affect the amount of memory allocated for that variable. In my personal opinion, I don't think it's worth the trouble in the example case.
If there were a huge number of variables, or a database table where you could really save, then yes, but not in this case.
Besides, after years of maintenance programming, I can safely say that it's rarely safe to assume an upper limit on anything. If there's even a remote chance that some poor maintenance programmer is going to have to rewrite the app because of an attempt to save a trivial amount of resources, it's not worth the pay-off.
The .NET runtime optimizes the use of Int32 especially for counters etc.
.NET Integer vs Int16?
Contrary to popular belief, making your data type smaller does not make access faster. In fact, it's slower. Look at bool, it's implemented as an int.
This is because internally, your CPU works with native-word-sized registers (32/64 bit these days), and you're forcing it to convert your data back and forth for no reason (well only when writing the result in memory, but it's still a penalty you could easily avoid).
Fiddling with integer widths only affects memory access, and caching specifically. This is the kind of stuff you can only figure out by profiling your application and looking at page fault counters in particular.
I agree with the other answers that the performance gain won't be worth it. But if you're going to do it at all, go with a short instead of a byte. My rule of thumb is to pick the highest number you can imagine, multiply it by 10, then use that as the basis to pick your type. So if you can't possibly imagine a value higher than 200, then use 2000 as your basis, which would mean you'd need a short.
When using Array.GetLength(dimension) in C#, does the size of the array actually get calculated each time it is called, or is the size cached/stored and that value just gets accessed?
What I really want to know is if setting a local variable to the length of the dimension of an array would add any efficiency if used inside a big loop or if I can just call array.GetLength() over and over w/o any speed penalty.
It is most certainly a bad idea to start caching/optimizing by yourself here.
When dealing with arrays, you have to follow a standard path that the (JIT) optimizer can recognize. If you do, not only will the Length property be cached but more important the index bounds-check can be done just once before the loop.
When the optimizer loses your trail you will pay the penalty of a per-access bounds-check.
This is why jagged arrays (int[][]) are faster than multi-dimensional arrays (int[,]). The optimization for int[,] is simply missing. That was true up to .NET 2.0 anyway; I haven't checked the status of this in .NET 4 yet.
If you want to research this further, the caching you propose is usually called 'hoisting' the Length property.
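For reference, hoisting here would look something like this sketch (the array and bounds are illustrative):
using System;
class HoistingSketch
{
    static void Main()
    {
        int[,] grid = new int[1000, 1000];
        long touched = 0;
        // GetLength in the loop condition is evaluated on every iteration
        // (unless the JIT happens to hoist it for you):
        for (int i = 0; i < grid.GetLength(0); i++)
        {
            touched++;
        }
        // 'Hoisting' the length into a local evaluates it once:
        int rows = grid.GetLength(0);
        for (int i = 0; i < rows; i++)
        {
            touched++;
        }
        Console.WriteLine(touched);
    }
}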
It is probably inserted at compile time if it is known then. Otherwise, stored in a variable. If it weren't, how would the size be calculated?
However, you shouldn't make assumptions about the internal operations of the framework. If you want to know if something is more or less efficient, test it!
If you really need the loop to be as fast as possible, you can store the length in a variable. This will give you a slight performance increase; some quick testing that I did shows that it's about 30% faster.
As the difference isn't bigger than that, it shows that the GetLength method is really fast. Unless you really need to squeeze the last bit of performance out of the code, you should just call the method in the loop.
This goes for multidimensional arrays only. For a single-dimensional array it's actually faster to use the Length property in the loop condition, as the optimiser can then remove bounds checks when you use the array inside the loop.
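A sketch of the single-dimensional pattern the JIT looks for (the array contents are arbitrary here):
using System;
class BoundsCheckSketch
{
    static void Main()
    {
        int[] data = new int[100000];
        long sum = 0;
        // Keeping data.Length directly in the condition lets the JIT prove the
        // index is always in range and elide the per-access bounds check:
        for (int i = 0; i < data.Length; i++)
        {
            sum += data[i];
        }
        Console.WriteLine(sum);
    }
}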
The naming convention is a clue. The "Length" members (e.g. Array.Length) in .NET typically return a known value, while the "Count" members (e.g. List.Count) will/may enumerate the contents of the collection to work out the number of items. (In later versions of .NET there are extension methods like Any that allow you to check whether a collection is non-empty without having to use the potentially expensive Count operation.) GetLength should only differ from Length in that you can request the dimension you want the length of.
A local variable is unlikely to make any difference over a call to GetLength - the compiler will optimise most situations pretty well anyway - or you could use foreach which does not need to determine the length before it starts.
(But it would be easy to write a couple of loops and time them (with a high performance counter) to see what effect different calls/types might have on the execution speed. Doing this sort of quick test can be a great way of gaining insights into a language that you might not really take in if you just read the answers)
I have observed for a while that C# programmers tend to use int everywhere, and rarely resort to uint. But I have never discovered a satisfactory answer as to why.
If interoperability is your goal, uint shouldn't appear in public APIs because not all CLI languages support unsigned integers, and I suspect this is the reason uint is used sparingly in the BCL. But that doesn't explain why int is so prevalent even in internal classes.
In C++, if you have an integer for which negative values make no sense, you choose an unsigned integer.
This clearly signifies that negative numbers are not allowed or expected, and the compiler will do some checking for you. I also suspect in the case of array indices, that the JIT can easily drop the lower bounds check.
However, when mixing int and uint types, extra care and casts will be needed.
Should uint be used more? Why?
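A small sketch of the "extra care and casts" point (the values are arbitrary):
using System;
class MixingSketch
{
    static void Main()
    {
        uint x = 2, y = 3;
        uint diff = x - y;        // no exception by default: wraps to 4294967295
        int signed = -1;
        // uint u = signed;       // does not compile: no implicit int -> uint conversion
        uint u = (uint)signed;    // explicit cast required; -1 becomes 4294967295
        Console.WriteLine(diff + " " + u);
    }
}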
int is shorter to type than uint.
Your observation of why uint isn't used in the BCL is the main reason, I suspect.
UInt32 is not CLS-compliant, which means that it is wholly inappropriate for use in public APIs. If you're going to be using uint in your private API, this will mean doing conversions to other types - and it's typically easier and safer to just keep the type the same.
I also suspect that this is not as common in C# development, even when C# is the only language being used, primarily because it is not common in the BCL. Developers, in general, try to (thankfully) mimic the style of the framework on which they are building - in C#'s case, this means trying to make your APIs, public and internal, look as much like the .NET Framework BCL as possible. This would mean using uint sparingly.
Normally int will suffice. If you can satisfy all of the following conditions, you can use uint:
It is not for a public API (since uint is not CLS compliant).
You don't need negative numbers.
You (might) need the additional range.
You are not using it in a comparison with < 0, as that is never true.
You are not using it in a comparison with >= 0, as that is never false.
The last requirement is often forgotten and will introduce bugs:
static void Main(string[] args)
{
    if (args.Length == 0) return;
    uint last = (uint)(args.Length - 1);
    // This will eventually throw an IndexOutOfRangeException:
    for (uint i = last; i >= 0; i--)
    {
        Console.WriteLine(args[i]);
    }
}
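If you do want to keep uint and count down, one way around the wrap-around (a sketch, not the only option) is to test and decrement in the same expression, so the condition never compares an unsigned value with >= 0:
static void Main(string[] args)
{
    // i-- yields the old value for the test, then decrements; the body sees
    // args.Length - 1 down to 0, and the loop exits before the wrap matters.
    for (uint i = (uint)args.Length; i-- > 0; )
    {
        Console.WriteLine(args[i]);
    }
}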
1) Bad habit. Seriously. Even in C/C++.
Think of the common for pattern:
for( int i=0; i<3; i++ )
    foo(i);
There's absolutely no reason to use an integer there. You will never have negative values. But almost everyone will do a simple loop that way, even if it contains (at least) two other "style" errors.
2) int is perceived as the native type of the machine.
I prefer uint to int unless a negative number is actually in the range of acceptable values. In particular, accepting an int param but throwing an ArgumentException if the number is less than zero is just silly--use a uint!
I agree that uint is underused, and I encourage everyone else to use it more.
I program at a lower-level application layer where ints rarely get above 100, so negative values are not an issue (e.g. for i < myname.length() type stuff); it's just an old C habit - and shorter to type, as mentioned above. However, in some cases, when interfacing with hardware where I'm dealing with event flags from devices, uint is important where a flag may use the leftmost (highest) bit.
Honestly, for 99.9% of my work I could easily use ushort, but int, you know, sounds a lot better than ushort.
I have made a Direct3D 10 wrapper in C# and need to use uint if I want to create very large vertex buffers. Large buffers in the video card cannot be represented with a signed int.
uint is very useful, and it's silly to say otherwise. If anyone thinks that just because they have never needed to use uint no one else will, they are wrong.
I think it is just laziness. C# is inherently a choice for development on desktops and other machines with relatively abundant resources.
C and C++, however, have deep roots in old systems and embedded systems where memory is scarce, so programmers are used to thinking carefully about what data type to use.
C# programmers are lazy, and since there are enough resources in general, nobody really optimizes memory usage (in general, not always of course). Even if a byte would be sufficient, a lot of C# programmers, including me, just use int for simplicity. Moreover, a lot of API functions accept ints, so using int avoids casting.
I agree that choosing the correct datatype is good practice, but I think the main motivation is laziness.
Finally, choosing an integer is more mathematically correct. Unsigned ints don't exist in math (only natural numbers). And since most programmers have a mathematical background, using an integer is more natural.
I think a big part of the reason is that when C first came out most of the examples used int for brevity's sake. We rejoiced at not having to write integer like we did with Fortran and Pascal, and in those days we routinely used them for mundane things like array indices and loop counters. Unsigned integers were special cases for large numbers that needed that last extra bit. I think it's a natural progression that C habits continued into C# and other new languages like Python.
Some languages (e.g. many versions of Pascal) regard unsigned types as representing numeric quantities; an operation between an unsigned type and a signed type of the same size will generally be performed as though the operands were promoted to the next larger type (in some such languages, the largest type has no unsigned equivalent, so such promotion will always be possible).
Other languages (e.g. C) regard N-bit unsigned types as a group which wraps around modulo 2^N. Note that subtracting N from a member of such a group doesn't represent numerical subtraction, but rather yields the group member which, when N is added to it, would yield the original. Arguably, certain operations involving mixtures of signed and unsigned values don't really make sense and should perhaps have been forbidden, but even code which is sloppy with its specification of things like numeric literals will usually work, and so much code has been written which mixes signed and unsigned types and, despite being sloppy, does work, that the spec isn't apt to change any time soon.
It's a lot easier to work exclusively with signed types than to work out all the intricacies of interactions between signed and unsigned types. Unsigned types are useful when decomposing large numbers into smaller pieces (e.g. for serialization) or for reconstituting such numbers, but in general it's better to simply use signed numbers for things that actually represent quantities.
I know this is probably an old thread but I wanted to give some clarification.
Let's take an int8: you can store -128 to 127, and it uses 1 byte; that is a total of 127 positive numbers.
When you use an int8, one of the bits is used for the sign, giving you the negative numbers down to -128.
When you use a uint8, you give the negative range over to the positive side, so this allows you to use 255 positive numbers with the same amount of storage, 1 byte.
The only drawback is that you have now lost the ability to use negative values.
Another problem is that not all programming languages and databases support unsigned types.
The only reason you would use this, in my opinion, is when you need to be efficient, as in game programming, and you have to store large non-negative numbers.
This is why not many programs use it.
The main reason is that storage is not a problem, and you can't use it flexibly with other software, plugins, databases, or APIs. Also, for example, a bank would need negative numbers to store money, etc.
I hope this will help someone.
I have a questionable coding practice.
When I need to iterate through a small list of items whose count limit is under 32000, I use Int16 for my i variable type instead of Integer. I do this because I assume using the Int16 is more efficient than a full blown Integer.
Am I wrong? Is there no effective performance difference between using an Int16 vs an Integer? Should I stop using Int16 and just stick with Integer for all my counting/iteration needs?
You should almost always use Int32 or Int64 (and, no, you do not get credit by using UInt32 or UInt64) when looping over an array or collection by index.
The most obvious reason that it's less efficient is that all array and collection indexes found in the BCL take Int32s, so an implicit cast is always going to happen in code that tries to use Int16s as an index.
The less-obvious reason (and the reason that arrays take Int32 as an index) is that the CIL specification says that all operation-stack values are either Int32 or Int64. Every time you either load or store a value to any other integer type (Byte, SByte, UInt16, Int16, UInt32, or UInt64), there is an implicit conversion operation involved. Unsigned types have no penalty for loading, but for storing the value, this amounts to a truncation and a possible overflow check. For the signed types every load sign-extends, and every store sign-collapses (and has a possible overflow check).
The place that this is going to hurt you most is the loop itself, not the array accesses. For example take this innocent-looking loop:
for (short i = 0; i < 32000; i++) {
...
}
Looks good, right? Nope! You can basically ignore the initialization (short i = 0) since it only happens once, but the comparison (i < 32000) and increment (i++) parts happen 32000 times. Here's some pseudo-code for what this thing looks like at the machine level:
Int16 i = 0;
LOOP:
Int32 temp0 = Convert_I16_To_I32(i); // !!!
if (temp0 >= 32000) goto END;
...
Int32 temp1 = Convert_I16_To_I32(i); // !!!
Int32 temp2 = temp1 + 1;
i = Convert_I32_To_I16(temp2); // !!!
goto LOOP;
END:
There are 3 conversions in there that are run 32000 times. And they could have been completely avoided by just using an Int32 or Int64.
Update: As I said in the comment, I have now, in fact, written a blog post on this topic: .NET Integral Data Types And You
According to the below reference, the runtime optimizes performance of Int32 and recommends them for counters and other frequently accessed operations.
From the book: MCTS Self-Paced Training Kit (Exam 70-536): Microsoft® .NET Framework 2.0—Application Development Foundation
Chapter 1: "Framework Fundamentals"
Lesson 1: "Using Value Types"
Best Practices: Optimizing performance with built-in types
The runtime optimizes the performance of 32-bit integer types (Int32 and UInt32), so use those types for counters and other frequently accessed integral variables.
For floating-point operations, Double is the most efficient type because those operations are optimized by hardware.
Also, Table 1-1 in the same section lists recommended uses for each type.
Relevant to this discussion:
Int16 - Interoperation and other specialized uses
Int32 - Whole numbers and counters
Int64 - Large whole numbers
Int16 may actually be less efficient because the x86 instructions for word access take up more space than the instructions for dword access. It will depend on what the JIT does. But no matter what, it's almost certainly not more efficient when used as the variable in an iteration.
The opposite is true.
32 (or 64) bit integers are faster than int16. In general the native datatype is the fastest one.
Int16 are nice if you want to make your data-structures as lean as possible. This saves space and may improve performance.
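A sketch of that lean-layout case (the field names are invented for illustration):
using System;
using System.Runtime.InteropServices;
class LayoutSketch
{
    // Packing related small values into Int16 fields keeps a large in-memory
    // table of these records small; individual loop counters gain nothing.
    struct Reading
    {
        public short SensorId;
        public short Value;
    }
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(Reading)) + " bytes of fields per record");
    }
}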
Never assume efficiency.
What is or isn't more efficient will vary from compiler to compiler and platform to platform. Unless you actually tested this, there is no way to tell whether int16 or int is more efficient.
I would just stick with ints unless you come across a proven performance problem that using int16 fixes.
Any performance difference is going to be so tiny on modern hardware that for all intents and purposes it'll make no difference. Try writing a couple of test harnesses and run them both a few hundred times, take the average loop completion times, and you'll see what I mean.
It might make sense from a storage perspective if you have very limited resources - embedded systems with a tiny stack, wire protocols designed for slow networks (e.g. GPRS etc), and so on.
Use Int32 on 32-bit machines (or Int64 on 64-bit machines) for fastest performance. Use a smaller integer type if you're really concerned about the space it takes up (may be slower, though).
The others here are correct: only use less than Int32 (for 32-bit code)/Int64 (for 64-bit code) if you need it for extreme storage requirements, or for another level of enforcement on a business object field (you should still have property-level validation in this case, of course).
And in general, don't worry about efficiency until there is a performance problem. In that case, profile it. And if guess-and-check profiling of both approaches doesn't help you enough, check the IL code.
Good question though. You're learning more about how the compiler does its thing. If you want to learn to program more efficiently, learning the basics of IL and how the C#/VB compilers do their job would be a great idea.
I can't imagine there being any significant performance gain on Int16 vs. int.
You save some bits in the variable declaration.
And it's definitely not worth the hassle when the specs change, whatever you are counting can now go above 32767, and you only discover that when your application starts throwing exceptions...
There is no significant performance gain in using a data type smaller than Int32; in fact, I read somewhere that using Int32 will be faster than Int16 because of memory allocation.