Alternative to BigInteger.ModPow() in C#

I'm looking for an alternative to the BigInteger type in C#, which was introduced with .NET 4.x.
The mathematical operations with this type are terribly slow. I guess this is caused by the arithmetic being done at a higher level than the primitive types, or by poor optimization, whatever the case.
Int64/long/ulong and other 64-bit numbers are way too small and won't calculate correctly; I'm talking about raising a 64-bit integer to the power of a 64-bit integer.
Hopefully someone can suggest something. Thanks in advance.

Honestly, if you have extremely large numbers and need to do heavy computations with them and the BigInteger library still isn't cutting it for you, why not offload it onto an external process using whatever language or toolkit you know of that does it best? Are you truly constrained to write whatever it is you're trying to accomplish entirely in C#?
For example, you can offload work to MATLAB from C#.

BigInteger is indeed very slow. One of the reasons is its immutability.
If you do a = a - b you will get a new copy of a. Normally this is fast. With BigInteger and, say, a 2048-bit integer, it will need to allocate an extra 256 bytes for every such operation.
It should also have different multiplication algorithms depending on integer size (I assume it is not that sophisticated). What I mean is that for very, very large integers a different algorithm using Fourier transforms works best, and for smaller integers you break the work down into smaller multiplies (a divide-and-conquer approach). See more at http://en.wikipedia.org/wiki/Multiplication_algorithm
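To illustrate the divide-and-conquer idea, here is a toy sketch of Karatsuba multiplication written over BigInteger, purely for illustration: a real implementation works on raw digit arrays, the Threshold value is a made-up tuning knob, GetBitLength() requires .NET 5+, and non-negative operands are assumed.

static readonly BigInteger Threshold = BigInteger.One << 64;

static BigInteger Karatsuba(BigInteger x, BigInteger y)
{
    if (x < Threshold || y < Threshold)
        return x * y; // small enough: let the builtin multiply handle it

    // Split both operands at half the bit length of the larger one.
    int half = (int)(BigInteger.Max(x, y).GetBitLength() / 2);
    BigInteger mask = (BigInteger.One << half) - 1;
    BigInteger xLow = x & mask, xHigh = x >> half;
    BigInteger yLow = y & mask, yHigh = y >> half;

    // Three recursive multiplies instead of four:
    BigInteger low = Karatsuba(xLow, yLow);
    BigInteger high = Karatsuba(xHigh, yHigh);
    BigInteger mid = Karatsuba(xLow + xHigh, yLow + yHigh) - low - high;

    return (high << (2 * half)) + (mid << half) + low;
}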
Either way there are alternatives, none of which I have used or tested. They might be slower than the .NET internals for all I know (writing a test case and doing some valid benchmarking is your friend).
Google 'C# large integer multiplication' for a lot of homemade BigInteger implementations (usually from pre-C# 4.0, before BigInteger was introduced):
https://github.com/devoyster/IntXLib
http://gmplib.org/ (there are C# wrappers)
http://www.extremeoptimization.com/ (commercial)
http://mathnetnumerics.codeplex.com/ (nice opensource, but not much onboard for very large integers)

public static int PowerBySquaring(int baseNumber, int exponent)
{
    // Exponentiation by squaring: O(log n) multiplications.
    // Only valid for exponent >= 0; int overflows quickly, so substitute
    // long or BigInteger for real use.
    int result = 1;
    while (exponent != 0)
    {
        if ((exponent & 1) == 1)
        {
            result *= baseNumber;
        }
        exponent >>= 1;
        baseNumber *= baseNumber;
    }
    return result;
}
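Since the question is really after ModPow, here is a minimal sketch of the same square-and-multiply idea applied to modular exponentiation. It assumes the modulus fits in 32 bits so the products can never overflow a ulong; a full 64-bit modulus needs a 128-bit intermediate (UInt128 in .NET 7+, or BigInteger).

public static ulong ModPow(ulong baseNumber, ulong exponent, ulong modulus)
{
    if (modulus == 0) throw new ArgumentOutOfRangeException(nameof(modulus));
    ulong result = 1 % modulus;     // handles modulus == 1
    baseNumber %= modulus;
    while (exponent != 0)
    {
        if ((exponent & 1) == 1)
        {
            result = (result * baseNumber) % modulus;       // multiply step
        }
        exponent >>= 1;
        baseNumber = (baseNumber * baseNumber) % modulus;   // square step
    }
    return result;
}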

Related

I just noticed I get different hashcodes from objects depending on whether I build for x86 or x64. Can I do that as well?

I noticed that hashcodes I got from other objects were different depending on whether I built for x86 or x64.
Up until now I have implemented most of my own hashing functions like this:
int someIntValueA;
int someIntValueB;
const int SHORT_MASK = 0xFFFF;

public override int GetHashCode()
{
    return (someIntValueA & SHORT_MASK) + ((someIntValueB & SHORT_MASK) << 16);
}
Will storing the values in a long and getting the hashcode from that give me a wider range as well on 64-bit systems, or is this a bad idea?
public override int GetHashCode()
{
    // Note: the cast to long is required; without it, << 32 on an int
    // is masked to a shift by 0 and does nothing.
    long maybeBiggerSpectrumPossible = someIntValueA + ((long)someIntValueB << 32);
    return maybeBiggerSpectrumPossible.GetHashCode();
}
No, that will be far worse.
Suppose your int values are typically in the range of a short: between -30000 and +30000. And suppose further that most of them are near the middle, say, between 0 and 1000. That's pretty typical. With your first hash code you get all the bits of both ints into the hash code and they don't interfere with each other; the number of collisions is zero under typical conditions.
But when you do your trick with a long, then you rely on what the long implementation of GetHashCode does, which is xor the upper 32 bits with the lower 32 bits. So your new implementation is just a slow way of writing int1 ^ int2. Which, in the typical scenario has almost all zero bits, and hence collisions all over the place.
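For reference, this is effectively what Int64.GetHashCode does (a sketch of the behaviour, not the exact BCL source):

static int LongHashSketch(long value)
{
    // Fold the two 32-bit halves together with XOR.
    return unchecked((int)value ^ (int)(value >> 32));
}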
The approach you suggest won't make anything any better (quite the opposite).
However…
SpookyHash, for example, is designed to work particularly quickly on 64-bit systems, because the author was thinking about what would be fast on a 64-bit system when working out the math; xxHash has 32-bit and 64-bit variants that are designed to give comparable hash quality at better speed for 32-bit and 64-bit computation, respectively.
The general idea of exploiting the different performance of different arithmetic operations on different machines is a valid one.
And your general idea of making use of a larger intermediary storage in hash calculation is also a valid one as long as those extra bits make their way into subsequent operations.
So at a very general level, the answer is yes, even if your particular implementation fails to come through with that.
Now, in practice, when you're sitting down to write a hashcode implementation should you worry about this?
Well, it depends. For a while I was very bullish on algorithms like SpookyHash, and they do very well (even on 32-bit systems) when the hash is based on a large amount of source data. On the other hand it can be better, especially with smaller hash-based sets and dictionaries, to be crappy really fast than fantastic slowly. So there isn't a one-solution-fits-all answer. With just two input integers, your initial solution is likely to beat an algorithm with heavy avalanche behaviour like xxHash or SpookyHash for many uses. You could perhaps do better still by rotating rather than shifting when mixing in the second value (fun fact: some jitters are optimised for the rotate pattern), but that doesn't touch on 64- vs 32-bit versions at all.
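For illustration, here is a hedged sketch of that rotate-instead-of-shift idea, reusing the field names from the question; whether it actually wins depends on your data and your jitter:

public override int GetHashCode()
{
    // Rotate B left by 16 instead of shifting, so no bits are discarded.
    // Some jitters recognise this pattern and emit a single ROL instruction.
    int rotated = (someIntValueB << 16) | (int)((uint)someIntValueB >> 16);
    return someIntValueA ^ rotated;
}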
The cases where you do find a big possible improvement with taking a different approach in 64- and 32-bit are where there's a large amount of data to mix in, especially if it's in a blittable form (like string or byte[]) that you can access via a long* or int* depending on framework.
So, generally you can ignore the question of bitness, but if you find yourself thinking "this hashcode has to go through so much stuff to get an answer; can I make it better?" then maybe it's time to consider such matters.

How to hash a URL quickly

I have a unique situation where I need to produce hashes on the fly. Here is my situation. This question is related to here. I need to store many URLs in a database which need to be indexed. A URL can be over 2000 characters long. The database complains that a string over 900 bytes cannot be indexed. My solution is to hash the URL using MD5 or SHA256. I am not sure which hashing algorithm to use. Here are my requirements:
Shortest character length with minimal collision
Needs to be very fast. I will be hashing the referring URL on every page request
Collisions need to be minimized since I may have millions of urls in the database
I am not worried about security. I am worried about character length, speed, and collisions. Anyone know of a good algorithm for this?
In your case, I wouldn't use any of the cryptographic hash functions (i.e. MD5, SHA), since they were designed with security in mind: they mainly want to make it as hard as possible to find two different strings with the same hash. I think that wouldn't be a problem in your case. (The possibility of random collisions is inherent to hashing, of course.)
I'd strongly suggest not using String.GetHashCode(), since the implementation is not documented and MSDN says it might vary between different versions of the framework. Even the results between the x86 and x64 versions may be different. So you'll run into trouble when trying to access the same database using a newer (or different) version of the .NET Framework.
I found the algorithm for the Java implementation of hashCode on Wikipedia (here); it seems quite easy to implement. Even a straightforward implementation would be faster than an implementation of MD5 or SHA, IMO. You could also use long values, which reduces the probability of collisions.
There is also a short analysis of the .NET GetHashCode implementation here (not the algorithm itself, but some implementation details); you could use that one as well, I guess (or try to implement the Java version in a similar way ...)
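A minimal sketch of that Java-style polynomial hash (h = 31*h + c) ported to C# and widened to 64 bits as suggested; the method name is invented:

public static long JavaStyleHash64(string s)
{
    long h = 0;
    foreach (char c in s)
        h = unchecked(31 * h + c); // overflow wraps, matching Java semantics
    return h;
}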
A quick one:
URLString.GetHashCode().ToString("x")
While both MD5 and SHA1 have been shown to be weak where collision resistance is essential, I suspect that for your application either would be sufficient. I don't know for sure, but I suspect that MD5 would be the simpler and quicker of the two algorithms.
Use the System.Security.Cryptography.SHA1Cng class, I would suggest. It's 160 bits or 20 bytes long, so that should definitely be small enough. If you need it to be a string, it will only require 40 characters, so that should suit your needs well. It should also be fast enough, and as far as I know, no collisions have yet been found.
I'd personally use String.GetHashCode(). This is the basic hash function. I honestly have no idea how it performs compared to other implementations but it should be fine.
Either of the two hashing functions that you name should be quick enough that you won't notice much difference between them. Unless this site requires ultra-high performance I would not worry too much about them. I'd personally probably go for MD5. This can be formatted as a hexadecimal string in 32 characters or as a base-64 string in 24 characters.
The reason I'd go for MD5 is that you are very unlikely to run into collisions, and even if you do, you can structure your queries with "where urlhash = @hash and url = @url". The database engine should work out that one is indexed and the other isn't and use that information to do a sensible search.
If there are collisions, the indexed scan on urlhash will return a handful of results which will be easy to do text comparisons on to get the right one. This is unlikely to be relevant very often, though. You have a pretty low chance of collisions this way.
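As a sketch of that hash-then-verify lookup (using System.Data.SqlClient; the table and column names are hypothetical):

using (var cmd = new SqlCommand(
    "SELECT Url FROM Urls WHERE UrlHash = @hash AND Url = @url", connection))
{
    cmd.Parameters.AddWithValue("@hash", urlHash); // indexed: narrows to a handful of rows
    cmd.Parameters.AddWithValue("@url", url);      // exact compare resolves any collision
    bool exists = cmd.ExecuteScalar() != null;
}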
Reflected source code of the GetHashCode function in .NET 4.0:
public override unsafe int GetHashCode()
{
    fixed (char* str = ((char*) this))
    {
        char* chPtr = str;
        int num = 0x15051505;
        int num2 = num;
        int* numPtr = (int*) chPtr;
        for (int i = this.Length; i > 0; i -= 4)
        {
            num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0];
            if (i <= 2)
            {
                break;
            }
            num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1];
            numPtr += 2;
        }
        return (num + (num2 * 0x5d588b65));
    }
}
There are O(n) simple operations (+, <<, ^) and one multiplication, so this is very fast.
I've tested this function on a database of 3 million strings with lengths up to 256 characters, and about 97% of the strings have no collision (at most 5 strings share the same hash).
You may want to look at the following project:
CMPH - C Minimal Perfect Hashing Library
And check out the following hot topics listing for perfect hashes:
Hottest 'perfect-hash' Answers - Stack Overflow
You could also consider using a full text index in SQL rather than hashing:
CREATE FULLTEXT INDEX (Transact-SQL)

Fast method of calculating square root and power?

C#'s Math class does roots and powers in double only. Various things may go a bit faster if I add float-based square-root and power functions to my Math2 class (Today is a relaxation day and I find optimization relaxing).
So - Fast square-root and power functions that I don't have to worry about licensing for, plskthx. Or a link that'll get me there.
I'm going to take it as axiomatic that no software method will compete with the hardware instruction for square roots. The only difficulty is that .NET doesn't give us direct control of the hardware as in the days of inline assembler for C code.
Let's first discuss a generic x86 hardware prospect.
The floating point x86 instruction FSQRT does come in three precisions: single, double, and extended (the native precision of the 80-bit FP registers), and there is a 25-40% shorter timing for single vs. double precision. See here for 32-bit x86 instructions.
That may sound like a big opportunity, but it's only a dozen clocks or so. That sort of economization will easily get lost in the overhead unless you are able to carefully manage the code from function call to return value. Managed C++ sounds (as Marcelo Cantos suggests) like a more practical base for this than C#.
Note: Timings for FSQRT are identical to those FDIV, with which it shares an execution unit in the Intel architecture, and thus a common latency.
A better opportunity for specialized C# code probably exists in the direction of SSE SIMD instructions, where hardware allows for up to 4 single precision square roots to be done in parallel. JIT compiler support for this has been missing for years, but here are some leads on current development.
Intel has jumped in (Dec. 15, 2010), seeing that .NET Framework 4 wasn't doing anything with SIMD:
[Intel Performance Libraries allow... SIMD instructions in C#]
Even before that the Mono project added JIT support for SIMD in Mono 2.2:
[Mono: Release Note Mono 2.2]
The possibility of calling Mono's SIMD support from MS C# was recently raised here:
[Calling mono c# code from Microsoft .net ? -- Stackoverflow]
An earlier question also addresses (though without much love shown!) how to install Mono's SIMD support:
[how to enable Mono.Simd -- Stackoverflow]
You should check out this link:
http://www.codecodex.com/wiki/Calculate_an_integer_square_root
It has lots of speedy algorithms in a bunch of different languages.
Ex:
// Finds the integer square root of a positive number (Newton's method)
public static int Isqrt(int num)
{
    if (0 == num) { return 0; }   // Avoid zero divide
    int n = (num / 2) + 1;        // Initial estimate, never low
    int n1 = (n + (num / n)) / 2;
    while (n1 < n)
    {
        n = n1;
        n1 = (n + (num / n)) / 2;
    }
    return n;
}
But there are a lot more, and some C/C++ ones are supposed to be the fastest, or so they claim.
For the pow algorithm, I found this one here, along with an explanation of how to get to that algorithm, starting from simpler ones.
private double Power(double a, int b)
{
    if (b < 0)
    {
        throw new ApplicationException("B must be a positive integer or zero");
    }
    if (b == 0) return 1;
    if (a == 0) return 0;
    if (b % 2 == 0)
    {
        return Power(a * a, b / 2);     // even exponent: (a*a)^(b/2)
    }
    return a * Power(a * a, b / 2);     // odd exponent: a * (a*a)^(b/2)
}
Wikipedia has an extensive article on calculation of square roots:
http://en.wikipedia.org/wiki/Methods_of_computing_square_roots
Calculating x to the power of y is simpler:
http://www.osix.net/modules/article/?id=696
I liked this pocket calculator way of doing it:
... but I honestly have no idea whether it is fast.
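(Presumably the elided snippet is the pocket-calculator identity x^y = e^(y * ln x); a hedged sketch, quite possibly no faster than Math.Pow:)

static double PowViaExpLog(double x, double y)
{
    // Math.Log(x) is NaN or -Infinity for x <= 0, so this only works for x > 0.
    return Math.Exp(y * Math.Log(x));
}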
Probably the easiest way is to implement the float versions in Managed C++. Whether that will go faster that the baked-in double versions or not, I can't say.

using uint vs int [closed]

I have observed for a while that C# programmers tend to use int everywhere, and rarely resort to uint. But I have never discovered a satisfactory answer as to why.
If interoperability is your goal, uint shouldn't appear in public APIs because not all CLI languages support unsigned integers. But that doesn't explain why int is so prevalent, even in internal classes. I suspect this is the reason uint is used sparingly in the BCL.
In C++, if you have an integer for which negative values make no sense, you choose an unsigned integer.
This clearly signifies that negative numbers are not allowed or expected, and the compiler will do some checking for you. I also suspect in the case of array indices, that the JIT can easily drop the lower bounds check.
However, when mixing int and uint types, extra care and casts will be needed.
Should uint be used more? Why?
int is shorter to type than uint.
Your observation of why uint isn't used in the BCL is the main reason, I suspect.
UInt32 is not CLS Compliant, which means that it is wholly inappropriate for use in public APIs. If you're going to be using uint in your private API, this will mean doing conversions to other types - and it's typically easier and safer to just keep the type the same.
I also suspect that this is not as common in C# development, even when C# is the only language being used, primarily because it is not common in the BCL. Developers, in general, try to (thankfully) mimic the style of the framework on which they are building - in C#'s case, this means trying to make your APIs, public and internal, look as much like the .NET Framework BCL as possible. This would mean using uint sparingly.
Normally int will suffice. If you can satisfy all of the following conditions, you can use uint:
It is not for a public API (since uint is not CLS compliant).
You don't need negative numbers.
You (might) need the additional range.
You are not using it in a comparison with < 0, as that is never true.
You are not using it in a comparison with >= 0, as that is never false.
The last requirement is often forgotten and will introduce bugs:
static void Main(string[] args)
{
    if (args.Length == 0) return;
    uint last = (uint)(args.Length - 1);
    // This will eventually throw an IndexOutOfRangeException:
    for (uint i = last; i >= 0; i--)
    {
        Console.WriteLine(args[i]);
    }
}
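A hedged way to fix that loop is to test before decrementing, so i never wraps past zero:

for (uint i = last + 1; i-- > 0; )
{
    Console.WriteLine(args[i]);
}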
1) Bad habit. Seriously. Even in C/C++.
Think of the common for pattern:
for (int i = 0; i < 3; i++)
    foo(i);
There's absolutely no reason to use a signed integer there. You will never have negative values. But almost everyone will do a simple loop that way, even if it contains (at least) two other "style" errors.
2) int is perceived as the native type of the machine.
I prefer uint to int unless a negative number is actually in the range of acceptable values. In particular, accepting an int param but throwing an ArgumentException if the number is less than zero is just silly; use a uint!
I agree that uint is underused, and I encourage everyone else to use it more.
I program at a lower-level application layer where ints rarely get above 100, so negative values are not an issue (e.g. for i < myname.Length() type stuff); it's just an old C habit, and shorter to type, as mentioned above. However, in some cases, when interfacing to hardware where I'm dealing with event flags from devices, uint is important where a flag may use the leftmost (highest) bit.
Honestly, for 99.9% of my work I could easily use ushort, but int, you know, sounds a lot better than ushort.
I have made a Direct3D 10 wrapper in C# and need to use uint if I want to create very large vertex buffers. Large buffers in the video card cannot be represented with a signed int.
uint is very useful, and it's silly to say otherwise. If anyone thinks that just because they have never needed to use uint, no one else will, they are wrong.
I think it is just laziness. C# is inherently a choice for development on desktops and other machines with relatively plentiful resources.
C and C++, however, have deep roots in old systems and embedded systems where memory is sparse, so programmers are used to thinking carefully about what datatype to use.
C# programmers are lazy, and since there are enough resources in general, nobody really optimizes memory usage (in general, not always, of course). Even if a byte would be sufficient, a lot of C# programmers, including me, just use int for simplicity. Moreover, a lot of API functions accept ints, so it prevents casting.
I agree that choosing the correct datatype is good practice, but I think the main motivation is laziness.
Finally, choosing a signed integer is more mathematically correct: unsigned ints don't exist in math (only natural numbers). And since most programmers have a mathematical background, using an integer is more natural.
I think a big part of the reason is that when C first came out most of the examples used int for brevity's sake. We rejoiced at not having to write integer like we did with Fortran and Pascal, and in those days we routinely used them for mundane things like array indices and loop counters. Unsigned integers were special cases for large numbers that needed that last extra bit. I think it's a natural progression that C habits continued into C# and other new languages like Python.
Some languages (e.g. many versions of Pascal) regard unsigned types as representing numeric quantities; an operation between an unsigned type and a signed type of the same size will generally be performed as though the operands were promoted to the next larger type (in some such languages, the largest type has no unsigned equivalent, so such promotion will always be possible).
Other languages (e.g. C) regard N-bit unsigned types as a group which wraps around modulo 2^N. Note that subtracting a value k from a member of such a group doesn't represent numerical subtraction, but rather yields the group member which, when k is added to it, gives the original. Arguably, certain operations involving mixtures of signed and unsigned values don't really make sense and should perhaps have been forbidden, but so much code has been written that mixes signed and unsigned types and, despite being sloppy with things like numeric literals, does work, that the spec isn't apt to change any time soon.
It's a lot easier to work exclusively with signed types than to work out all the intricacies of interactions between signed and unsigned types. Unsigned types are useful when decomposing large numbers into smaller pieces (e.g. for serialization) or for reconstituting such numbers, but in general it's better to simply use signed numbers for things that actually represent quantities.
I know this is probably an old thread, but I wanted to give some clarification.
Let's take an int8: you can store -128 to 127, and it uses 1 byte, which gives a total of 127 positive numbers.
When you use an int8, one of the bits is used as the sign bit, which is how you get down to -128.
When you use a uint8, you give the negative range to the positive side, so you can store 255 positive numbers in the same 1 byte of storage.
The only drawback is that you have now lost the capability to use negative values.
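A quick sketch of that reinterpretation:

sbyte signedValue = -128;                          // int8 range: -128 .. 127
byte unsignedValue = unchecked((byte)signedValue); // uint8 range: 0 .. 255
Console.WriteLine(unsignedValue);                  // prints 128: the sign bit became a value bit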
Another problem is that not all programming languages and databases support unsigned types.
The only reason you would use this, in my opinion, is when you need to be efficient, as in game programming, and you have to store large non-negative numbers.
This is why not many programs use it.
The main reason is that storage is not a problem, and unsigned types don't interoperate flexibly with other software, plugins, databases, or APIs. Also, a bank, for example, would need negative numbers to store money, etc.
I hope this will help someone.

In C# is there any significant performance difference for using UInt32 vs Int32

I am porting an existing application to C# and want to improve performance wherever possible. Many existing loop counters and array references are defined as System.UInt32, instead of the Int32 I would have used.
Is there any significant performance difference for using UInt32 vs Int32?
The short answer is "No. Any performance impact will be negligible".
The correct answer is "It depends."
A better question is, "Should I use uint when I'm certain I don't need a sign?"
The reason you cannot give a definitive "yes" or "no" with regards to performance is because the target platform will ultimately determine performance. That is, the performance is dictated by whatever processor is going to be executing the code, and the instructions available. Your .NET code compiles down to Intermediate Language (IL or Bytecode). These instructions are then compiled to the target platform by the Just-In-Time (JIT) compiler as part of the Common Language Runtime (CLR). You can't control or predict what code will be generated for every user.
So knowing that the hardware is the final arbiter of performance, the question becomes, "How different is the code .NET generates for a signed versus unsigned integer?" and "Does the difference impact my application and my target platforms?"
The best way to answer these questions is to run a test.
using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        const int iterations = 100;
        Console.WriteLine($"Signed: {Iterate(TestSigned, iterations)}");
        Console.WriteLine($"Unsigned: {Iterate(TestUnsigned, iterations)}");
        Console.Read();
    }

    private static void TestUnsigned()
    {
        uint accumulator = 0;
        var max = (uint)Int32.MaxValue;
        for (uint i = 0; i < max; i++) ++accumulator;
    }

    static void TestSigned()
    {
        int accumulator = 0;
        var max = Int32.MaxValue;
        for (int i = 0; i < max; i++) ++accumulator;
    }

    static TimeSpan Iterate(Action action, int count)
    {
        var elapsed = TimeSpan.Zero;
        for (int i = 0; i < count; i++)
            elapsed += Time(action);
        return new TimeSpan(elapsed.Ticks / count);
    }

    static TimeSpan Time(Action action)
    {
        var sw = new Stopwatch();
        sw.Start();
        action();
        sw.Stop();
        return sw.Elapsed;
    }
}
The two test methods, TestSigned and TestUnsigned, each perform roughly 2 billion iterations (Int32.MaxValue) of a simple increment on a signed and an unsigned integer, respectively. The test code runs 100 iterations of each test and averages the results. This should weed out any potential inconsistencies. The results on my i7-5960X compiled for x64 were:
Signed: 00:00:00.5066966
Unsigned: 00:00:00.5052279
These results are nearly identical, but to get a definitive answer, we really need to look at the bytecode generated for the program. We can use ILDASM as part of the .NET SDK to inspect the code in the assembly generated by the compiler.
Here, we can see that the C# compiler favors signed integers and actually performs most operations natively as signed integers and only ever treats the value in-memory as unsigned when comparing for the branch (a.k.a jump or if). Despite the fact that we're using an unsigned integer for both the iterator AND the accumulator in TestUnsigned, the code is nearly identical to the TestSigned method except for a single instruction: IL_0016. A quick glance at the ECMA spec describes the difference:
blt.un.s :
Branch to target if less than (unsigned or unordered), short form.
blt.s :
Branch to target if less than, short form.
Being such a common instruction, it's safe to assume that most modern high-power processors will have hardware instructions for both operations and they'll very likely execute in the same number of cycles, but this is not guaranteed. A low-power processor may have fewer instructions and not have a branch for unsigned int. In this case, the JIT compiler may have to emit multiple hardware instructions (A conversion first, then a branch, for instance) to execute the blt.un.s IL instruction. Even if this is the case, these additional instructions would be basic and probably wouldn't impact the performance significantly.
So in terms of performance, the long answer is "It is unlikely that there will be a performance difference at all between using a signed or an unsigned integer. If there is a difference, it is likely to be negligible."
So then if the performance is identical, the next logical question is, "Should I use an unsigned value when I'm certain I don't need a sign?"
There are two things to consider here: first, unsigned integers are NOT CLS-compliant, meaning that you may run into issues if you're exposing an unsigned integer as part of an API that another program will consume (such as if you're distributing a reusable library). Second, most operations in .NET, including the method signatures exposed by the BCL (for the reason above), use a signed integer. So if you plan on actually using your unsigned integer, you'll likely find yourself casting it quite a bit. This is going to have a very small performance hit and will make your code a little messier. In the end, it's probably not worth it.
TL;DR: back in my C++ days, I'd say "Use whatever is most appropriate and let the compiler sort the rest out." C# is not quite as cut-and-dried, so I would say this for .NET: there's really no performance difference between a signed and an unsigned integer on x86/x64, but most operations require a signed integer, so unless you really NEED to restrict the values to positive ONLY or you really NEED the extra range that the sign bit eats, stick with a signed integer. Your code will be cleaner in the end.
I don't think there are any performance considerations, other than possible difference between signed and unsigned arithmetic at the processor level but at that point I think the differences are moot.
The bigger difference is CLS compliance: the unsigned types are not CLS compliant, because not all languages support them.
I haven't done any research on the matter in .NET, but in the olden days of Win32/C++, if you wanted to cast a "signed int" to a "signed long", the CPU had to run an op to extend the sign. To cast an "unsigned int" to an "unsigned long", it just had to stuff zeros in the upper bytes. The savings were on the order of a couple of clock cycles (i.e., you'd have to do it billions of times to have an even perceivable difference).
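In C# terms, the two widening casts look like this (a sketch):

int si = -1;
long sl = si;            // sign-extended: still -1 (0xFFFFFFFFFFFFFFFF)
uint ui = 0xFFFFFFFF;
ulong ul = ui;           // zero-extended: 4294967295 (0x00000000FFFFFFFF)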
There is no difference, performance-wise. Simple integer calculations are well understood, and modern CPUs are highly optimized to perform them quickly.
These types of optimizations are rarely worth the effort. Use the data type that is most appropriate for the task and leave it at that. If this thing so much as touches a database, you could probably find a dozen tweaks in the DB design, query syntax, or indexing strategy that would dwarf any code optimization in C#.
It's going to allocate the same amount of memory either way (although uint can store a larger value, as it's not reserving space for the sign). So I doubt you'll see a "performance" difference, unless you use large values / negative values that cause one option or the other to overflow.
This isn't really about performance, but rather about the requirements of the loop counter.
Perhaps there were lots of iterations to complete:
Console.WriteLine(Int32.MaxValue);  // Max iterations: 2147483647
Console.WriteLine(UInt32.MaxValue); // Max iterations: 4294967295
The unsigned int may be there for a reason.
I've never understood the attachment to int in loops like for (int i = 0; i < bla; i++), and oftentimes I would also like to use unsigned just to avoid checking the range. Unfortunately (both in C++ and, for similar reasons, in C#), the recommendation is not to use unsigned to gain one more bit or to ensure non-negativity:
"Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules"
Page 73 of "The C++ Programming Language" by the language's creator, Bjarne Stroustrup.
My understanding (I apologize for not having the source at hand) is that hardware makers also have a bias to optimize for integer types.
Nonetheless, it would be interesting to repeat the exercise that @Robear did above, but using int with some positivity assert versus unsigned.
