Reference for converting SQL Server rowversion to long or ulong?
I can convert a SQL Server rowversion to ulong with this code:
// Interprets an 8-byte big-endian buffer (such as a SQL Server rowversion) as a ulong.
static ulong BigEndianToUInt64(byte[] bigEndianBinary)
{
return ((ulong)bigEndianBinary[0] << 56) |
((ulong)bigEndianBinary[1] << 48) |
((ulong)bigEndianBinary[2] << 40) |
((ulong)bigEndianBinary[3] << 32) |
((ulong)bigEndianBinary[4] << 24) |
((ulong)bigEndianBinary[5] << 16) |
((ulong)bigEndianBinary[6] << 8) |
bigEndianBinary[7];
}
Now I'm faced with the reverse problem: how do I convert the ulong back to a byte[8]?
I save the rowversion value to a file, then read it back and use it in a query. The query parameter has to be a byte[], not a ulong; otherwise an error is raised.
I got my answer from BigEndian.GetBytes Method (Int64).
Thanks, all.
// Writes the ulong back out as 8 big-endian bytes.
public static byte[] GetBytes(this ulong value)
{
return new[]
{
(byte)(value >> 56),
(byte)(value >> 48),
(byte)(value >> 40),
(byte)(value >> 32),
(byte)(value >> 24),
(byte)(value >> 16),
(byte)(value >> 8),
(byte)(value)
};
}
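Putting the two helpers together, a minimal sketch of the round trip (the table/column names Docs and RowVer, and the reader and connection objects, are placeholders, not from the original post):
using System.Data;
using System.Data.SqlClient;

// Save: collapse the 8-byte rowversion to a number and persist it.
byte[] rowVersion = (byte[])reader["RowVer"]; // from an earlier query
ulong cursor = BigEndianToUInt64(rowVersion); // write this value to the file

// Load: rebuild the byte[8] and pass it as the parameter value.
var cmd = new SqlCommand("SELECT Id FROM Docs WHERE RowVer > @since", connection);
cmd.Parameters.Add("@since", SqlDbType.Timestamp).Value = cursor.GetBytes();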
Related
I need to add GetInt64 and SetInt64 extension methods to the ISession interface in ASP.NET Core so we're able to store some long values.
The existing code for GetInt32 and SetInt32 is available on GitHub in SessionExtensions.cs.
I am trying to understand the pattern that is in use:
public static void SetInt32(this ISession session, string key, int value)
{
var bytes = new byte[]
{
(byte)(value >> 24),
(byte)(0xFF & (value >> 16)),
(byte)(0xFF & (value >> 8)),
(byte)(0xFF & value)
};
session.Set(key, bytes);
}
public static int? GetInt32(this ISession session, string key)
{
var data = session.Get(key);
if (data == null || data.Length < 4)
{
return null;
}
return data[0] << 24 | data[1] << 16 | data[2] << 8 | data[3];
}
I had expected to see BitConverter.GetBytes, but for whatever reason the setter right-shifts the value out one octet at a time, and the getter left-shifts the octets back in. I'm guessing this relates to keeping the encoding endianness-neutral, since the BitConverter methods return different byte orders depending on the CPU architecture in use.
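A quick sketch of what I mean (my own example; the printed order assumes a little-endian CPU):
int value = 0x01020304;

// Manual big-endian encoding, as SetInt32 above does:
byte[] manual = { (byte)(value >> 24), (byte)(value >> 16), (byte)(value >> 8), (byte)value };

// BitConverter emits whatever the machine's native byte order is:
byte[] native = BitConverter.GetBytes(value);

Console.WriteLine(string.Join(",", manual)); // 1,2,3,4 on any CPU
Console.WriteLine(string.Join(",", native)); // 4,3,2,1 on a little-endian CPU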
Is there an obvious reason the code is written like this?
Would the following be a correct implementation for SetInt64/GetInt64?
public static void SetInt64(this ISession session, string key, long value)
{
var bytes = new byte[]
{
(byte)(value >> 56),
(byte)(0xFF & (value >> 48)),
(byte)(0xFF & (value >> 40)),
(byte)(0xFF & (value >> 32)),
(byte)(0xFF & (value >> 24)),
(byte)(0xFF & (value >> 16)),
(byte)(0xFF & (value >> 8)),
(byte)(0xFF & value)
};
session.Set(key, bytes);
}
public static long? GetInt64(this ISession session, string key)
{
var data = session.Get(key);
if (data == null || data.Length < 8)
{
return null;
}
return data[0] << 56 | data[1] << 48 | data[2] << 40 | data[3] << 32 | data[4] << 24 | data[5] << 16 | data[6] << 8 | data[7];
}
You are almost right. The only problem is that in GetInt64 you need to cast each byte to long before shifting it.
public static void SetInt64(this ISession session, string key, long value)
{
var bytes = new byte[]
{
(byte)(value >> 56),
(byte)(0xFF & (value >> 48)),
(byte)(0xFF & (value >> 40)),
(byte)(0xFF & (value >> 32)),
(byte)(0xFF & (value >> 24)),
(byte)(0xFF & (value >> 16)),
(byte)(0xFF & (value >> 8)),
(byte)(0xFF & value)
};
session.Set(key, bytes);
}
public static long? GetInt64(this ISession session, string key)
{
var data = session.Get(key);
if (data == null || data.Length < 8)
{
return null;
}
return (long)data[0] << 56 | (long)data[1] << 48 | (long)data[2] << 40 | (long)data[3] << 32 | (long)data[4] << 24 | (long)data[5] << 16 | (long)data[6] << 8 | (long)data[7];
}
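With these in place, usage from a controller action might look like this (the key name is made up):
// Hypothetical usage inside an ASP.NET Core controller action:
HttpContext.Session.SetInt64("lastVisitTicks", DateTime.UtcNow.Ticks);
long? ticks = HttpContext.Session.GetInt64("lastVisitTicks");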
In the end, this code and BitConverter produce the same result. BitConverter uses Unsafe to do the operation; maybe that's the reason it isn't used here.
https://source.dot.net/#System.Private.CoreLib/BitConverter.cs,107
Why the shifting?
Let's say you want to store 472 (a 3-digit number) in one-digit placeholders. This is what you do:
Take the rightmost digit and put it in the first placeholder.
Shift the number one digit to the right and repeat the first step.
So it goes
472 ---> 2
047 ---> 7
004 ---> 4
This is exactly what's happening here, except that each placeholder is a byte, which holds 8 bits, and that is why the shifting happens 8 bits at a time.
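The same idea in code, first in base 10 (my sketch, not from the original answer):
int n = 472;
while (n > 0)
{
    Console.WriteLine(n % 10); // prints 2, then 7, then 4
    n /= 10;                   // "shift" one decimal digit to the right
}
// With bytes the base is 256: (byte)value keeps the low 8 bits (like % 256)
// and value >> 8 discards them (like / 256).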
OK, so I have two methods:
public static long ReadLong(this byte[] data)
{
if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
long length = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24
| data[4] << 32 | data[5] << 40 | data[6] << 48 | data[7] << 56;
return length;
}
public static void WriteLong(this byte[] data, long i)
{
if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
data[0] = (byte)((i >> (8*0)) & 0xFF);
data[1] = (byte)((i >> (8*1)) & 0xFF);
data[2] = (byte)((i >> (8*2)) & 0xFF);
data[3] = (byte)((i >> (8*3)) & 0xFF);
data[4] = (byte)((i >> (8*4)) & 0xFF);
data[5] = (byte)((i >> (8*5)) & 0xFF);
data[6] = (byte)((i >> (8*6)) & 0xFF);
data[7] = (byte)((i >> (8*7)) & 0xFF);
}
So WriteLong works correctly (verified against BitConverter.GetBytes()). The problem is ReadLong. I have a fairly good understanding of this stuff, but I'm guessing the OR operations are happening as 32-bit ints, so at Int32.MaxValue the value rolls over. I'm not sure how to avoid that. My first instinct was to build one int from the lower half and one from the upper half and combine them, but I'm not quite knowledgeable enough to know where to start, so this is what I tried:
public static long ReadLong(byte[] data)
{
if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
long l1 = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24;
long l2 = data[4] | data[5] << 8 | data[6] << 16 | data[7] << 24;
return l1 | l2 << 32;
}
This didn't work though, at least not for larger numbers; it seems to work for everything below zero.
Here's how I run it:
void Main()
{
var larr = new long[5]{
long.MinValue,
0,
long.MaxValue,
1,
-2000000000
};
foreach(var l in larr)
{
var arr = new byte[8];
arr.WriteLong(l);
Console.WriteLine(ByteString(arr));
var end = ReadLong(arr);
var end2 = BitConverter.ToInt64(arr,0);
Console.WriteLine(l + " == " + end + " == " + end2);
}
}
and here's what I get (using the modified ReadLong method):
0:0:0:0:0:0:0:128
-9223372036854775808 == -9223372036854775808 == -9223372036854775808
0:0:0:0:0:0:0:0
0 == 0 == 0
255:255:255:255:255:255:255:127
9223372036854775807 == -1 == 9223372036854775807
1:0:0:0:0:0:0:0
1 == 1 == 1
0:108:202:136:255:255:255:255
-2000000000 == -2000000000 == -2000000000
The problem is not the OR, it is the bit shift, which has to be done on longs. Currently, the data[i] values are implicitly converted to int. Just cast them to long and that's it, i.e.:
public static long ReadLong(byte[] data)
{
if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
long length = (long)data[0] | (long)data[1] << 8 | (long)data[2] << 16 | (long)data[3] << 24
| (long)data[4] << 32 | (long)data[5] << 40 | (long)data[6] << 48 | (long)data[7] << 56;
return length;
}
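A quick check of the fixed version against BitConverter (assuming a little-endian CPU, which is where both layouts match):
var buf = new byte[8];
buf.WriteLong(long.MaxValue);
Console.WriteLine(ReadLong(buf));                // 9223372036854775807
Console.WriteLine(BitConverter.ToInt64(buf, 0)); // 9223372036854775807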
You are doing int arithmetic and then assigning to long; try:
long length = data[0] | data[1] << 8L | data[2] << 16L | data[3] << 24L
| data[4] << 32L | data[5] << 40L | data[6] << 48L | data[7] << 56L;
This should define your constants as longs, forcing long arithmetic.
EDIT: Turns out this doesn't work, as pointed out in the comments: while the left operand of a bitshift can be one of several types, the right operand must be an int, so the L suffix doesn't compile. Georg's should be the accepted answer.
How do I take the hex bytes 0A 25 10 A2 and get the end result 851.00625? The combined value must be multiplied by 0.000005. I have tried the following code without success:
byte oct6 = 0x0A;
byte oct7 = 0x25;
byte oct8 = 0x10;
byte oct9 = 0xA2;
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 32))) * 0.000005M;
I am not getting 851.00625 as the BaseFrequency.
oct6 is being shifted 8 bits too far (32 instead of 24):
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24))) * 0.000005M;
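As a sanity check of the arithmetic:
Console.WriteLine(0x0A2510A2);             // 170201250
Console.WriteLine(0x0A2510A2 * 0.000005M); // 851.006250 (== 851.00625)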
I'm profiling some C# code. The method below is one of the most expensive ones. For the purpose of this question, assume that micro-optimization is the right thing to do. Is there an approach to improve performance of this method?
Changing the type of the input parameter p to ulong[] would create a macro-inefficiency.
// Assembles a ulong from 8 bytes starting at ofs, little-endian (p[ofs] is the low byte).
static ulong Fetch64(byte[] p, int ofs = 0)
{
unchecked
{
ulong result = p[0 + ofs] +
((ulong) p[1 + ofs] << 8) +
((ulong) p[2 + ofs] << 16) +
((ulong) p[3 + ofs] << 24) +
((ulong) p[4 + ofs] << 32) +
((ulong) p[5 + ofs] << 40) +
((ulong) p[6 + ofs] << 48) +
((ulong) p[7 + ofs] << 56);
return result;
}
}
Why not use BitConverter? I've got to believe Microsoft has spent some time tuning that code. Plus, it deals with endianness issues.
Here's how BitConverter turns a byte[] into a long/ulong (the ulong overload converts as signed and then casts to unsigned):
[SecuritySafeCritical]
public static unsafe long ToInt64(byte[] value, int startIndex)
{
if (value == null)
{
ThrowHelper.ThrowArgumentNullException(ExceptionArgument.value);
}
if (((uint) startIndex) >= value.Length)
{
ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.startIndex, ExceptionResource.ArgumentOutOfRange_Index);
}
if (startIndex > (value.Length - 8))
{
ThrowHelper.ThrowArgumentException(ExceptionResource.Arg_ArrayPlusOffTooSmall);
}
fixed (byte* numRef = &(value[startIndex]))
{
if ((startIndex % 8) == 0)
{
return *(((long*) numRef));
}
if (IsLittleEndian)
{
int num = ((numRef[0] | (numRef[1] << 8)) | (numRef[2] << 0x10)) | (numRef[3] << 0x18);
int num2 = ((numRef[4] | (numRef[5] << 8)) | (numRef[6] << 0x10)) | (numRef[7] << 0x18);
return ((long)(uint)num) | ((long)num2 << 0x20);
}
int num3 = (((numRef[0] << 0x18) | (numRef[1] << 0x10)) | (numRef[2] << 8)) | numRef[3];
int num4 = (((numRef[4] << 0x18) | (numRef[5] << 0x10)) | (numRef[6] << 8)) | numRef[7];
return ((long)(uint)num4) | ((long)num3 << 0x20);
}
}
I suspect that doing the conversion one 32-bit word at a time is for 32-bit efficiency: with no 64-bit registers on a 32-bit CPU, dealing with 64-bit ints is a lot more expensive.
If you know for sure you're targeting 64-bit hardware, it might be faster to do the conversion in one fell swoop.
Try using a for loop instead of unrolling it. You may be able to save time on bounds checks.
Try BitConverter.ToUInt64 (http://msdn.microsoft.com/en-us/library/system.bitconverter.touint64.aspx) if that is what you are looking for.
For reference, Microsoft's .NET 4.0 BitConverter.ToInt64 (Shared Source Initiative at http://referencesource.microsoft.com/netframework.aspx):
// Converts an array of bytes into a long.
[System.Security.SecuritySafeCritical] // auto-generated
public static unsafe long ToInt64 (byte[] value, int startIndex) {
if( value == null) {
ThrowHelper.ThrowArgumentNullException(ExceptionArgument.value);
}
if ((uint) startIndex >= value.Length) {
ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.startIndex, ExceptionResource.ArgumentOutOfRange_Index);
}
if (startIndex > value.Length -8) {
ThrowHelper.ThrowArgumentException(ExceptionResource.Arg_ArrayPlusOffTooSmall);
}
fixed( byte * pbyte = &value[startIndex]) {
if( startIndex % 8 == 0) { // data is aligned
return *((long *) pbyte);
}
else {
if( IsLittleEndian) {
int i1 = (*pbyte) | (*(pbyte + 1) << 8) | (*(pbyte + 2) << 16) | (*(pbyte + 3) << 24);
int i2 = (*(pbyte+4)) | (*(pbyte + 5) << 8) | (*(pbyte + 6) << 16) | (*(pbyte + 7) << 24);
return (uint)i1 | ((long)i2 << 32);
}
else {
int i1 = (*pbyte << 24) | (*(pbyte + 1) << 16) | (*(pbyte + 2) << 8) | (*(pbyte + 3));
int i2 = (*(pbyte+4) << 24) | (*(pbyte + 5) << 16) | (*(pbyte + 6) << 8) | (*(pbyte + 7));
return (uint)i2 | ((long)i1 << 32);
}
}
}
}
Why not go unsafe?
unsafe static ulong Fetch64(byte[] p, int ofs = 0)
{
// Pin the array and read all 8 bytes in one native-endian load.
// Note: unlike BitConverter.ToUInt64, there is no bounds check on ofs here.
fixed (byte* bp = p)
{
return *((ulong*)(bp + ofs));
}
}
I'm trying to convert 4 bytes into a 32 bit unsigned integer.
I thought maybe something like:
UInt32 combined = (UInt32)((map[i] << 32) | (map[i+1] << 24) | (map[i+2] << 16) | (map[i+3] << 8));
But this doesn't seem to be working. What am I missing?
Your shifts are all off by 8. Shift by 24, 16, 8, and 0.
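That is, something like this (a sketch; I've added casts so the arithmetic stays in uint):
UInt32 combined = ((UInt32)map[i] << 24) | ((UInt32)map[i + 1] << 16)
                | ((UInt32)map[i + 2] << 8) | map[i + 3];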
Use the BitConverter class; specifically, the BitConverter.ToInt32() overload.
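For example (a sketch assuming the bytes are stored most-significant-first, as the intended shifts imply; BitConverter reads in the machine's byte order, hence the reverse):
byte[] slice = new byte[4];
Array.Copy(map, i, slice, 0, 4);
if (BitConverter.IsLittleEndian)
    Array.Reverse(slice); // the data is big-endian; BitConverter reads native order
uint combined = BitConverter.ToUInt32(slice, 0);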
You can always do something like this:
public static unsafe int ToInt32(byte[] value, int startIndex)
{
fixed (byte* numRef = &(value[startIndex]))
{
if ((startIndex % 4) == 0)
{
return *(((int*)numRef));
}
if (IsLittleEndian)
{
return (((numRef[0] | (numRef[1] << 8)) | (numRef[2] << 0x10)) | (numRef[3] << 0x18));
}
return ((((numRef[0] << 0x18) | (numRef[1] << 0x10)) | (numRef[2] << 8)) | numRef[3]);
}
}
But this would be reinventing the wheel, as this is actually how BitConverter.ToInt32() is implemented.