How can I get a unique (most of the time) ushort number from a GUID? I have tried the code below, but since I am converting to a ushort, it just ignores the least significant hexadecimal values of the GUID:
static ushort GetId() {
    Guid guid = Guid.NewGuid();
    byte[] buffer = guid.ToByteArray();
    // BitConverter.ToUInt16 reads only buffer[0] and buffer[1];
    // the remaining 14 bytes of the GUID never affect the result.
    return BitConverter.ToUInt16(buffer, 0);
}
FYI: somewhere in my code I have a GUID, and I want to keep the corresponding ushort number.
I have tried the code below, but since I am converting to a ushort, it just ignores the least significant hexadecimal values of the GUID
Yes, this is correct, and for good reason: you cannot store 128 bits of data in 16 bits of data. For reference, a GUID's 128 bits are laid out like this:
Name                    Length (bytes)   Contents
----------------------  ---------------  ------------------------------------------------
time_low                4                integer giving the low 32 bits of the time
time_mid                2                integer giving the middle 16 bits of the time
time_hi_and_version     2                4-bit "version" in the most significant bits,
                                         followed by the high 12 bits of the time
clock_seq_hi_and_res,   2                1-3 bit "variant" in the most significant bits,
clock_seq_low                            followed by the 13-15 bit clock sequence
node                    6                the 48-bit node id
If you want the last 16 bits (2 bytes, 4 hex digits), just reverse the array first:
Array.Reverse(buffer, 0, buffer.Length);
return BitConverter.ToUInt16(buffer, 0);
Note: what you are doing is very suspect, and I truly think you need to rethink your design.
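That said, if a 16-bit value is required anyway, a hedged alternative to taking just two bytes is to XOR-fold all sixteen bytes, so every part of the GUID influences the result. This is only a sketch (GetFoldedId is my name, not from the original answer), and collisions remain unavoidable:

static ushort GetFoldedId(Guid guid)
{
    byte[] buffer = guid.ToByteArray();
    ushort id = 0;
    // XOR each 2-byte chunk into the accumulator; 16 bytes = 8 chunks
    for (int i = 0; i < buffer.Length; i += 2)
        id ^= BitConverter.ToUInt16(buffer, i);
    return id;
}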
Related
Like https://www.youtube.com/watch?v={id}
{id}: 11 characters
I tried to use Convert.ToBase64String, like this:
string encoded = Convert.ToBase64String(guid.ToByteArray())
    .Replace("/", "_")
    .Replace("+", "-")
    .Replace("=", "");
That only reduces the GUID to 22 characters.
How can I encode GUIDs to 11-character ids (or at least to fewer than 22 characters)?
11 characters, even assuming you could use all 8 bits per character in a URL (hint: you can't), would only allow 88 bits.
A UUID/GUID is 128 bits. Therefore, the conversion you propose is not possible without losing data.
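To put numbers on it: 11 characters × 8 bits = 88 bits, which is less than 128 bits. Base64 carries only 6 bits per character, so losslessly encoding 128 bits takes at least ceil(128 / 6) = 22 characters; the 22-character result above is already the minimum.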
This is an off-topic answer, but it might give you an ID with only 11 characters.
In C#, a long value has 64 bits; encoded with Base64, that is 12 characters, including 1 padding =. If we trim the padding =, 11 characters remain.
One crazy idea here is to combine a Unix epoch timestamp with a per-millisecond counter to form a long value. The Unix epoch from C#'s DateTimeOffset.ToUnixTimeMilliseconds() is a long, but the first 2 of its 8 bytes (in big-endian order) are always 0, because otherwise the timestamp would be greater than the maximum DateTime value. That gives us 2 bytes to place a ushort counter in.
So, in total, as long as no more than 65,536 IDs are generated per millisecond, we get a unique ID:
// This is the counter for the current millisecond; it should reset each new millisecond
ushort currentCounter = 123;
var epoch = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
// epoch is a 64-bit long, so we get 8 bytes
var epochBytes = BitConverter.GetBytes(epoch);
if (BitConverter.IsLittleEndian)
{
    // Use big endian (Reverse() needs using System.Linq)
    epochBytes = epochBytes.Reverse().ToArray();
}
// The first two bytes are now always 0; if they were not, the timestamp would be
// beyond DateTime.MaxValue, which is not possible
var counterBytes = BitConverter.GetBytes(currentCounter);
if (BitConverter.IsLittleEndian)
{
    // Use big endian
    counterBytes = counterBytes.Reverse().ToArray();
}
// Copy the counter bytes into the first 2 bytes of the epoch bytes
Array.Copy(counterBytes, 0, epochBytes, 0, 2);
// Encode the byte array and trim the padding '='
var shortUid = Convert.ToBase64String(epochBytes).TrimEnd('=');
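As a hedged sketch, the snippet above can be wrapped into a reusable generator; the class name and the Interlocked-based wrapping counter are my own additions, not part of the original answer:

using System;
using System.Linq;
using System.Threading;

static class ShortIdGenerator
{
    private static int _counter;

    public static string Next()
    {
        // Free-running 16-bit counter: values within one millisecond stay
        // distinct as long as fewer than 65,536 IDs are generated in it
        ushort counter = (ushort)Interlocked.Increment(ref _counter);
        long epoch = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();

        byte[] bytes = BitConverter.GetBytes(epoch);
        if (BitConverter.IsLittleEndian)
            bytes = bytes.Reverse().ToArray();

        byte[] counterBytes = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian)
            counterBytes = counterBytes.Reverse().ToArray();

        // Overwrite the two always-zero leading bytes with the counter
        Array.Copy(counterBytes, 0, bytes, 0, 2);
        return Convert.ToBase64String(bytes).TrimEnd('=');
    }
}

If the ID must also be URL-safe, the same Replace("/", "_").Replace("+", "-") trick from the question applies to this output.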
I have a simple piece of code to convert an int to two shorts:
public static short[] IntToTwoShorts(int a)
{
    byte[] bytes = BitConverter.GetBytes(a);
    return new short[] { BitConverter.ToInt16(bytes, 0), BitConverter.ToInt16(bytes, 2) };
}
If I pass in 1851628330 (0x6E5D 9B2A) the result is:
{short[2]}
[0]: -25814
[1]: 28253
The problem is that -25814 is 0xFFFF 9B2A
I've tried various flavours including bit shifting. What's going on? That result isn't a short, and doesn't have 16 bits!
The trick is to use ushort when combining the two shorts back into an int:
public static short[] IntToTwoShorts(int a) {
    unchecked {
        return new short[] {
            (short) a,
            (short) (a >> 16)
        };
    }
}

public static int FromTwoShorts(short[] value) {
    unchecked {
        if (null == value)
            throw new ArgumentNullException("value");
        else if (value.Length == 1)
            return (ushort)value[0]; // cast to ushort avoids sign extension here
        else if (value.Length != 2)
            throw new ArgumentOutOfRangeException("value");

        return (int)((value[1] << 16) | (ushort)value[0]); // ... and here
    }
}
The cause of the unexpected behaviour is that negative numbers (like -25814) are stored in two's complement, so you have the same value (-25814) represented differently in different integer types:
-25814 == 0x9b2a // short, Int16
-25814 == 0xffff9b2a // int, Int32
-25814 == 0xffffffffffff9b2a // long, Int64
Some tests
int a = 1851628330;
short[] parts = IntToTwoShorts(a);
Console.WriteLine($"[{string.Join(", ", parts)}]");
Console.WriteLine($"{FromTwoShorts(parts)}");
Console.WriteLine($"{FromTwoShorts(new short[] { -25814 })}");
Console.WriteLine($"0x{FromTwoShorts(new short[] { -25814 }):X}");
Outcome:
[-25814, 28253]
1851628330
39722
0x9B2A
I would approach the problem with something like this:
public static short[] IntToTwoShorts(int a)
{
    short[] retVar = new short[2];
    // Upper 16 bits, masked with 0x0000FFFF
    retVar[0] = (short)((a >> 16) & 0xFFFF);
    // Lower 16 bits, masked with 0x0000FFFF
    retVar[1] = (short)(a & 0xFFFF);
    return retVar;
}
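Whichever variant is used, the pair recombines losslessly only if the low word is widened without sign extension; a quick hedged check against this version (upper word first):

int a = 1851628330;
short[] parts = IntToTwoShorts(a);
// Widen the low word through ushort so its sign bit does not smear
// across the upper 16 bits of the result
int roundTrip = (parts[0] << 16) | (ushort)parts[1];
Console.WriteLine(roundTrip == a); // True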
The problem isn't with the code (although there may be more efficient ways to split integers). By attempting to represent an int as two signed 16-bit shorts, you now need to consider that the sign bit could be present in either short. Hence the comment that ushort[] would be a more appropriate representation of the two 16-bit values.
The problem seems to be with the understanding of why a 4 byte signed integer (DWORD) can't be effectively represented in two 2 byte signed shorts (WORD)s.
The problem is that -25814 is 0xFFFF 9B2A
This isn't true - you've represented -25814 as a short, so it can't possibly be 0xFFFF 9B2A - that's a 32-bit representation. Its 16-bit representation is just 9B2A.
If you open up the Calculator on Windows, set the mode to Programmer, and flip between the HEX and DEC bases and the DWORD and WORD representations, you should see that the 16-bit values you've extracted from the 32-bit int are correctly represented (provided that you understand the representation).
Your original 32-bit integer (DWORD) is 1851628330, or 6E5D9B2A in hex.
The high word 28253 doesn't have the sign bit set, so you seem satisfied with its conversion to 6E5D.
However, if the low word is interpreted as a signed short, its sign bit is set, hence it is reported as a negative value. The representation (bits and hex) nonetheless correctly matches the last 16 bits of your original 32-bit int.
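A two-line check of that reinterpretation (a sketch; unchecked is needed because the constant 0x9B2A does not fit in a short as a positive value):

short low = unchecked((short)0x9B2A); // -25814: sign bit (bit 15) is set
ushort sameBits = (ushort)low;        // 39722: the same 16 bits, viewed unsigned
Console.WriteLine($"{low} -> 0x{sameBits:X4}"); // -25814 -> 0x9B2A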
There is an array of bits:
BitArray bits = new BitArray(17);
I want to take the first 13 bits and convert them into a 13-bit signed integer, and convert the remaining 4 bits of the array into a 4-bit integer.
How can I do this in C#?
Assuming your bits are stored LSB first (i.e., leftmost in the BitArray), you can do something like this (borrowing from this post: How can I convert BitArray to single int?):
int[] arr = new int[1];
bits.CopyTo(arr, 0); // assume that bits are stored LSB first
int first13Bits = arr[0] >> 4; // shift off last 4 bits to leave top 13
int last4Bits = 0x0000000F & arr[0]; // mask off top 28 bits to leave bottom 4
Note that first13Bits should be treated as signed, but last4Bits will not be signed here (since the top bits are masked off). If your bits are stored MSB first, you will need to reverse the bits in the BitArray before converting them (as CopyTo seems to assume they are stored LSB first).
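Since only 17 of the 32 bits are populated, the shift above yields the 13 bits as a non-negative int. To honour the 13-bit sign bit, a hedged addition is to sign-extend manually:

// Sign-extend the 13-bit value: move it to the top of the int, then
// arithmetic-shift back down so bit 12 fills the upper 19 bits
int signed13 = (first13Bits << 19) >> 19;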
I have a byte array of length 64, and a sequential list of what the data in this byte array corresponds to. This list tells me the size in bits of each value. Most values are 8 or 16 bits, which is easy enough to parse out. However, about midway through the list I start getting lengths of 12 bits or 5 bits. What is the best way to loop through these and pull out the bits I need?
The following code should extract n bits from a data buffer stored as a uint[]. I haven't tested it, so caveat lector.
static uint GetBits(uint[] data, ref int pos, int n) {
    int a = pos / 32;   // word holding the first bit
    int off = pos % 32; // bit offset within that word
    uint mask = n == 32 ? uint.MaxValue : (1u << n) - 1;
    pos += n;
    uint result = data[a] >> off;             // low bits of the value
    if (off + n > 32)                         // value straddles a word boundary
        result |= data[a + 1] << (32 - off);  // high bits come from the next word
    return result & mask;
}
Note that it assumes little-endian storage, so that unaligned values flow from the high bits of one word into the low bits of the next.
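As a hedged usage sketch against the question's 64-byte buffer (the field widths below are invented for illustration; Buffer.BlockCopy packs bytes into uints in the machine's byte order, which is LSB-first on little-endian hardware):

byte[] raw = new byte[64]; // the 64-byte message from the question
// One spare word at the end keeps data[a + 1] in range for the final field
uint[] words = new uint[(raw.Length + 3) / 4 + 1];
Buffer.BlockCopy(raw, 0, words, 0, raw.Length);

int pos = 0;
uint fieldA = GetBits(words, ref pos, 8);  // an 8-bit field
uint fieldB = GetBits(words, ref pos, 12); // a 12-bit field
uint fieldC = GetBits(words, ref pos, 5);  // a 5-bit field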
I am not familiar with hashing algorithms and the risks associated with using them, and therefore have a question about the answer below, which I received on a previous question...
Based on the comment that the hash value must, when encoded to ASCII, fit within 16 ASCII characters, the solution is: first, choose a cryptographic hash function (the SHA-2 family includes SHA-256, SHA-384, and SHA-512);
then, truncate the output of the chosen hash function to 96 bits (12 bytes) - that is, keep the first 12 bytes of the hash function output and discard the remaining bytes;
then, base-64-encode the truncated output to 16 ASCII characters (128 bits),
yielding effectively a 96-bit-strong cryptographic hash.
If I substring the base-64-encoded string to 16 characters, is that fundamentally different from keeping the first 12 bytes of the hash function output and then base-64-encoding them? If so, could someone please explain, and provide example code for truncating the byte array?
I tested the substring of the full hash value against 36,000+ distinct values and had no collisions. The code below is my current implementation.
Thanks for any help (and clarity) you can provide.
public static byte[] CreateSha256Hash(string data)
{
    byte[] dataToHash = (new UnicodeEncoding()).GetBytes(data);
    SHA256 shaM = new SHA256Managed();
    byte[] hashedData = shaM.ComputeHash(dataToHash);
    return hashedData;
}

public override void InputBuffer_ProcessInputRow(InputBufferBuffer Row)
{
    byte[] hashedData = CreateSha256Hash(Row.HashString);
    string s = Convert.ToBase64String(hashedData, Base64FormattingOptions.None);
    Row.HashValue = s.Substring(0, 16);
}
Original post: http://stackoverflow.com/questions/4340471/is-there-a-hash-algorithm-that-produces-a-hash-size-of-64-bits-in-c
No, there is no difference. However, it's easier to just get the base64 string of the first 12 bytes of the array, instead of truncating the array:
public override void InputBuffer_ProcessInputRow(InputBufferBuffer Row) {
    byte[] hashedData = CreateSha256Hash(Row.HashString);
    Row.HashValue = Convert.ToBase64String(hashedData, 0, 12);
}
The base 64 encoding simply puts 6 bits in each character, so 3 bytes (24 bits) go into 4 characters. As long as you split the data at an even 3-byte boundary, it's the same as splitting the string at the even 4-character boundary.
If you try to split the data between these boundaries, the base64 string will be padded with filler data up to the next boundary, so the result would not be the same.
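A quick hedged check of that equivalence (16 characters are exactly four 4-character groups, i.e. 12 bytes, so both paths land on the same boundary):

byte[] hash = CreateSha256Hash("some input");
string viaSubstring = Convert.ToBase64String(hash).Substring(0, 16);
string viaTruncation = Convert.ToBase64String(hash, 0, 12);
Console.WriteLine(viaSubstring == viaTruncation); // True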
Truncating the array yourself is as easy as adding Take(12) (which requires using System.Linq):
Change
byte[] hashedData = CreateSha256Hash(Row.HashString);
To:
byte[] hashedData = CreateSha256Hash(Row.HashString).Take(12).ToArray();