How much space does an int array take up? Or, how much space (in bytes) does an int array like this consume:
int[] SampleArray=new int[]{1,2,3,4};
Is memory allocation language-specific?
Thank you all
Since you added a lot of language tags, I'll answer for C#. In C#, this depends on the platform.
On 32-bit, each int is 4 bytes and the reference to the object is another 4 bytes, which makes 4 * 4 + 4 = 20 bytes.
On 64-bit, each int is 4 bytes and the reference to the object is 8 bytes, which makes 4 * 4 + 8 = 24 bytes.
From C# 5.0 in a Nutshell, page 22:
Each reference to an object requires an extra four or eight bytes,
depending on whether the .NET runtime is running on a 32- or 64-bit
platform.
In C++, how much memory new int[4]{1, 2, 3, 4} actually allocates is implementation-defined, but the size of the array itself will be sizeof(int) * 4.
The question is: is memory allocation language-specific?
Yes, memory allocation is language-specific; it varies from language to language.
For example:
sizeof(int)*4
In Java, an int is 4 bytes, so the memory consumption is 4 * 4 = 16 bytes.
It depends on the language, but even more on the operating system.
You need 4 integers. Normally an integer is 2 or 4 bytes (4 on most systems), but to be sure, check sizeof(int).
(Also keep in mind that the values can be represented differently depending on the platform: MSB first or LSB first, or rarely a mix of the two.)
In Java, an int[] array is an object, represented in memory by a header (8 bytes on x86) and an int length field (4 bytes), followed by the ints themselves (arrayLength * 4 bytes):
approxSize = 8 + 4 + 4 * arrayLength
See more here: http://www.javamex.com/tutorials/memory/object_memory_usage.shtml
Related
Why is BigInteger declared as a value type (struct) in C#? It seems very similar to the string type, which is declared as a reference type.
Both are immutable. Both can be arbitrarily large.
The recommendation I have heard is that a struct should never be more than 16 bytes. BigInteger can get much larger than that, and I would think this would make frequent operations extremely slow, since it is always copied by value.
Copying a BigInteger does not cause the underlying data to be copied. Instead, just a reference to the data is copied.
Since BigInteger values are immutable it is safe for two or more values to share a common data buffer.
BigInteger has two instance fields:
int _sign - probably tells whether it's a positive or negative value.
uint[] _bits - this is a reference to the data buffer.
An int is 4 bytes and a reference is 8 bytes (on a 64-bit system). Therefore the size of a BigInteger is ≤ 16 bytes.
If you look at the source for BigInteger and strip it down to instance-level fields only (the things that count toward its size), all the struct contains is
public struct BigInteger : IFormattable, IComparable, IComparable<BigInteger>, IEquatable<BigInteger>
{
internal int _sign;
internal uint[] _bits;
}
So you have 4 bytes for _sign and 4 or 8 bytes for the uint[] reference, depending on whether you are on a 32- or 64-bit system (arrays are reference types). That gives you a total of 8 or 12 bytes, well below the 16-byte recommendation. (Note: the CLR will pad the 12-byte version to 16 to make it a multiple of 8, for optimization reasons.)
When a new BigInteger is created from an existing one, the _bits array is shared between the two instances. Because the type is immutable (you can't change the value of any cell of _bits), it is safe for the two copies to share the array.
Here are the fields of a BigInteger:
// For values int.MinValue < n <= int.MaxValue, the value is stored in sign
// and _bits is null. For all other values, sign is +1 or -1 and the bits are in _bits
internal int _sign;
internal uint[] _bits;
So: one int and one uint[], which is a reference type. The struct itself can't grow arbitrarily large. It'll be 8 bytes on x86 and 16 bytes on x64 (12 bytes for the fields + 4 bytes of padding).
string and arrays are the only types in the framework that have a varying size, and they are special-cased in the runtime.
To answer the question: there is less overhead in using a struct. Having a class wrapper over two fields would cause more indirection and more GC pressure for no good reason. Besides, a BigInteger is semantically a value.
The size of a struct matters only because the entire struct has to be copied each time you pass it from one function to another. If it were not for the copying, nobody would care.
However, BigInteger consists of two parts:
The actual struct, which is the part that gets copied when you pass a BigInteger around, and is fairly small, and
The array of bits, which is of arbitrary length, but which is not copied each time the struct is copied.
So, when you pass a BigInteger, this is what happens:
Before copying:
[BigInteger instance 1] ---------> [array of bits]
After copying:
[BigInteger instance 1] ---------> [array of bits]
                                         ^
[BigInteger instance 2] -----------------+
Notice how there is always just one array of bits.
Is there a difference between the space (memory) usage of the integer 1 and the integer 234234? How much space would int.MaxValue use compared to just 1?
I'm incrementing a value every time an object gets accessed, so I can keep the most-used objects in memory and flush the others to disk. I was therefore wondering whether the counter (an integer) might grow large and end up using a lot of memory.
No. int.MaxValue is the largest value that can be represented by a 32-bit integer. If you want a larger value, you use long, which consumes 64 bits. Basically, the amount of memory an integer consumes has nothing to do with its value.
int small = 1;          // translates to 0x00000001
while
int big = int.MaxValue; // translates to 0x7FFFFFFF
They still consume the same 4 bytes of memory; they just have different bit patterns. The values in my code snippet are written in hexadecimal; if you don't know how that translates into actual bits, check Wikipedia.
In Java, an empty string is 40 bytes. In Python it's 20 bytes. How big is an empty string object in C#? I cannot use sizeof, and I don't know how else to find out. Thanks.
It's 18 bytes:
16 bytes of base object memory + 2 bytes per allocated character + 2 bytes for the final null character.
Note that this was written about .NET 1.1.
The m_arrayLength field was removed in .NET 4.0 (you can see this in the reference source).
The CLR version matters. Prior to .NET 4, a string object had an extra 4-byte field that stored the "capacity": the m_arrayLength field. That field is no longer around in .NET 4. Otherwise a string has the standard object header (4 bytes for the sync block, 4 bytes for the method table pointer), then 4 bytes to store the string length (m_stringLength), followed by 2 bytes for each character in the string, and a 0 char to make it compatible with native code. Objects are always a multiple of 4 bytes long, with a minimum of 16 bytes.
An empty string is thus 4 + 4 + 4 + 2 = 14 bytes, rounded up to 16 bytes, on .NET 4.0, and 20 bytes on earlier versions. The given values are for x86. This is all quite visible in the debugger; check this answer for hints.
Jon Skeet recently wrote a whole article on the subject.
On x86, an empty string is 16 bytes, and on x64 it's 32 bytes
How many bits is a .NET string that's 10 characters in length? (.NET strings are UTF-16, right?)
On 32-bit systems:
4 bytes = Type pointer (Every object has one of these)
4 bytes = Lock (One of these too!)
4 bytes = Length (Need the length)
2 * Length bytes = Data (And the chars themselves)
=======================
12 + 2*Length bytes
=======================
96 + 16*Length bits
So 10 chars would = 256 bits = 32 bytes
I am not sure whether the lock field grows to 64 bits on 64-bit systems. I kind of hope not, but you never know. The 64-bit structure overhead is therefore anywhere from 16 to 20 bytes (as opposed to 12 bytes on 32-bit).
Every char in the string is two bytes, so if you are just converting the chars directly and not using any particular encoding, the answer is string.Length * 2 * 8 bits.
otherwise the result depends on the encoding, you can write:
int numbits = System.Text.Encoding.UTF8.GetByteCount(str)*8; // returns 80 for a 10-character ASCII string
or
int numbits = System.Text.Encoding.Unicode.GetByteCount(str)*8; // returns 160
If you are talking pure UTF-16, then:
10 characters = 20 bytes = 160 bits
This really needs context in order to be answered properly, because it all comes down to how you define "character" and how you store the data.
For example, if you define a character as a single letter from the user's point of view, it can be more than 2 bytes. For instance, the character Å can be two Unicode code points (U+0041 U+030A, Latin Capital A + Combining Ring Above), so it will require two .NET chars, or 4 bytes in UTF-16.
Even if you are talking about 10 .NET Char elements in memory, you have some object overhead (already mentioned) and a bit of alignment overhead (on a 32-bit system everything has to be aligned to a 4-byte boundary; on 64-bit the rules are more complicated), so you may have some empty bytes at the end.
If you are talking about a database or files, each database and file system has its own overhead.
An int (Int32) has a memory footprint of 4 bytes. But what is the memory footprint of:
int? i = null;
and :
int? i = 3;
Is this general, or is it type-dependent?
I'm not 100% sure, but I believe it should be 8 bytes: 4 bytes for the Int32, and (since everything has to be 4-byte aligned on a 32-bit machine) another 4 bytes for a boolean indicating whether the value has been specified or not.
Note: thanks to @sensorSmith, I am now aware that newer releases of .NET allow nullable values to be stored in smaller footprints (when the hardware memory design allows smaller chunks of memory to be independently allocated). On a 64-bit machine it would still be 8 bytes (64 bits), since that is the smallest chunk of memory that can be addressed.
A Nullable<bool>, for example, only requires a single bit for the boolean and another single bit for the IsNull flag, so the total storage requirement is less than a byte; it could theoretically be stored in a single byte. As usual, though, if the smallest chunk of memory that can be allocated is 8 bytes (as on a 64-bit machine), it will still take 8 bytes of memory.
The size of Nullable<T> is definitely type-dependent. The structure has two members:
boolean: the hasValue flag
value: the underlying value
The size of the structure will typically map out to 4 plus the size of the type parameter T.
int? a = 3;
00000038 lea ecx,[ebp-48h]
0000003b mov edx,3
00000040 call 78BFD740
00000045 nop
a = null;
00000046 lea edi,[ebp-48h]
00000049 pxor xmm0,xmm0
0000004d movq mmword ptr [edi],xmm0
It seems that first dword is for the value, and the second one is for null flag. So, 8 bytes total.
Curiously, BinaryWriter doesn't like to write nullable types. I was wondering if it could pack them tighter than 8 bytes...
The default behavior of .NET (and most other languages/frameworks) is to align struct fields to a multiple of their size, and structs themselves to a multiple of the size of their largest field. Reference: StructLayout
Nullable<T> has a bool flag and the T value. Since a bool takes just 1 byte, the size of the largest field is the size of T, and Nullable doubles the space needed compared to T alone. Reference: Nullable source
Clarification: if T is itself a non-primitive struct rather than a primitive type, Nullable increases the space needed by the size of the largest primitive field within T (or, recursively, within any of T's non-primitive fields). So the size of a Nullable<Nullable<bool>> is 3, not 4.
You can check this using code similar to the example at https://www.dotnetperls.com/nullable-memory.
I got the following results:
Int32   4 bytes
Int32?  8 bytes
Int16   2 bytes
Int16?  4 bytes
Int64   8 bytes
Int64?  16 bytes
Byte    1 byte
Byte?   2 bytes
bool    1 byte
bool?   2 bytes
An int? is a struct containing a boolean hasValue and an int, so its fields occupy 5 bytes before padding. The same applies to any Nullable<T>: size = sizeof(T) + sizeof(bool), plus alignment padding.
The nullable type is a structure that contains the regular value plus a flag for the null state.
For a nullable int, that means it contains five bytes of data, but it is of course padded up to whole words, so it uses eight bytes.
You can generally expect any nullable type to be four bytes larger than the regular type, except for small types like byte and boolean.
On both 32-bit and 64-bit machines:
int == 4 bytes
int? == 8 bytes == 4 for the int + 4 for the nullable wrapper
The nullable type wrapper requires 4 bytes of storage. And the integer
itself requires 4 bytes for each element. This is an efficient
implementation. In an array many nullable types are stored in
contiguous memory.
Based on a personal test (.NET Framework 4.6.1, x64, Release) and on https://www.dotnetperls.com/nullable-memory
Also, if you're interested: why is an int only 4 bytes on x64?
Note: this is valid for Nullable<int> only; the size of Nullable<T> depends entirely on the type T.