I'm attempting to convert a pseudo-random function from C++ to C#, but it doesn't seem to return the correct values. It's important that I get a consistent sequence for encryption, so I can't just use a random number.
This is the function in C++:
int get_pseudo_rand()
{
return( ((_last_rand = _last_rand * 214013L
+ 2531011L) >> 16) & 0x7fff );
}
and this is my C# alternative:
int get_pseudo_rand()
{
return (((_last_rand = (_last_rand * 214013 + 2531011) >> 16) & 0x7fff));
}
I removed the L suffixes since C#'s int data type is 4 bytes, like C++'s long, whereas C#'s long is 8 bytes.
The first time the function is run from the seed, the answer is consistent with the C++ version, but then it begins to diverge.
Any ideas?
You have parenthesized the two statements differently, which changes their meaning. The C++ code updates _last_rand and then right-shifts the result; the C# code performs the right-shift before updating _last_rand. I've lined the statements up below to make the difference more obvious.
C++:
return (((_last_rand = _last_rand * 214013L + 2531011L) >> 16) & 0x7fff);
C#:
return (((_last_rand = (_last_rand * 214013 + 2531011 ) >> 16) & 0x7fff));
The problem is that you have parenthesized differently, which leads to storing different values in _last_rand... with your code _last_rand stores 28818 after the first run... with the C++ code it stores 1888663550, which is the value BEFORE >> and before &. Thus it starts diverging from the second run on...
To achieve the same behaviour as in C++, use this in C#:
return (((_last_rand = _last_rand * 214013 + 2531011) >> 16) & 0x7fff);
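For reference, here is a minimal self-contained sketch of the corrected version (the field declaration and seed value below are my own assumptions, not taken from the original code). The key point is that the new value is stored in _last_rand first, and the shift and mask are applied only to what gets returned:
class PseudoRand
{
    // Assumed field and seed; use whatever seed the original C++ code uses.
    private int _last_rand = 1;

    public int get_pseudo_rand()
    {
        // Store the updated state first. unchecked makes the intent explicit:
        // the 32-bit arithmetic should wrap rather than throw if overflow
        // checking happens to be enabled for the project.
        unchecked
        {
            _last_rand = _last_rand * 214013 + 2531011;
        }
        // Only the returned value is shifted and masked; the state keeps
        // the full 32-bit value, matching the C++ behaviour.
        return (_last_rand >> 16) & 0x7fff;
    }
}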
Related
I'm trying to transfer a C# function to C++ but ran into a lovely problem I've never seen nor needed before. Everything has transferred quite nicely except for a bit-shift line.
C# - Works without problems.
long val = 2791804260201463808;
int Cap = (int)val; //-608501760
val = (long)((ulong)val >> 32);
return val; // this returns 650017582
Now transfer to C++
C++ - Compiles, but with the warning "C4293: '>>': shift count negative or too big, undefined behavior"
long val = 2791804260201463808;
int Cap = (int)val; //-608501760
val = (long)((ulong)val >> 32);
return val; // this returns -608501760 - No change, as if bit shift was skipped
How can I transfer this? I'm having a problem seeing out of my box.
I've tried different variable types with no luck.
I am working on a project which outputs to an odd circuit and need to invert half the byte I am sending. So for example, if I am sending the number 100 as a byte, it comes out in the chip as 01100100, nice and easy. The problem is that I need it to be 10010100, i.e. the first nibble is inverted. This is because of how the outputs of the chip work.
I have been playing with the ~ operator, doing something like:
int b = a & 0x0000000F;
This inverts the second nibble. I can also invert the whole thing with:
int b = a & 0x000000FF;
But I want to get the first nibble of the byte and
int b = a & 0x000000F0;
doesn't give me the answer I am after.
Any suggestions?
To invert a bit, you XOR (exclusive or) it with 1.
So to flip the top nibble you have to do a ^ 0xF0;
with shifting:
b = (byte) ((b & 0x0F) + (~(b >> 4)<<4));
without shifting:
b = (byte)((b & 0x0F) + ((~(b & 0xF0)) & 0xF0));
(not that it matters much whether you shift or not...)
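As a quick sanity check (a small sketch; the variable names are only for illustration), all three approaches turn 100 (01100100) into 10010100:
byte a = 100;                                            // 0110 0100
byte viaXor   = (byte)(a ^ 0xF0);                        // flip only the high nibble
byte viaShift = (byte)((a & 0x0F) + (~(a >> 4) << 4));
byte viaMask  = (byte)((a & 0x0F) + ((~(a & 0xF0)) & 0xF0));
Console.WriteLine(Convert.ToString(viaXor, 2).PadLeft(8, '0')); // 10010100
Console.WriteLine(viaShift == viaXor && viaMask == viaXor);     // True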
I have read through this SO question about 32-bits, but what about 64-bit numbers? Should I just mask the upper and lower 4 bytes, perform the count on the 32-bits and then add them together?
You can find a 64-bit version here: http://en.wikipedia.org/wiki/Hamming_weight
It is something like this:
static long NumberOfSetBits(long i)
{
i = i - ((i >> 1) & 0x5555555555555555);
i = (i & 0x3333333333333333) + ((i >> 2) & 0x3333333333333333);
return (((i + (i >> 4)) & 0xF0F0F0F0F0F0F0F) * 0x101010101010101) >> 56;
}
This is a 64-bit version of the code from here: How to count the number of set bits in a 32-bit integer?
Using Joshua's suggestion I would transform it into this:
static int NumberOfSetBits(ulong i)
{
i = i - ((i >> 1) & 0x5555555555555555UL);
i = (i & 0x3333333333333333UL) + ((i >> 2) & 0x3333333333333333UL);
return (int)(unchecked(((i + (i >> 4)) & 0xF0F0F0F0F0F0F0FUL) * 0x101010101010101UL) >> 56);
}
EDIT: I found a bug while testing the 32-bit version; I added the missing parentheses. The sum should be done before the bitwise & in the last line.
EDIT2: Added a safer version for ulong.
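A quick way to sanity-check the ulong version (a hedged example; the test values are arbitrary, but their expected bit counts are easy to verify by hand):
Console.WriteLine(NumberOfSetBits(0UL));                  // 0
Console.WriteLine(NumberOfSetBits(0xF0F0F0F0F0F0F0F0UL)); // 32
Console.WriteLine(NumberOfSetBits(ulong.MaxValue));       // 64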
A fast (and more portable than using non-standard compiler extensions) way:
int bitcount(long long n)
{
int ret=0;
while (n!=0)
{
n&=(n-1);
ret++;
}
return ret;
}
Every time you do n &= (n - 1) you clear the lowest set bit in n, so this takes O(number of set bits) time.
This is faster than the O(log n) you would need if you tested every bit: not every bit is set (unless the number is 0xFFFFFFFFFFFFFFFF), so you usually need far fewer iterations.
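Since the question is about C#, here is a minimal sketch of the same trick translated to C# (the name BitCount and the ulong signature are my own choices):
static int BitCount(ulong n)
{
    int count = 0;
    while (n != 0)
    {
        n &= n - 1; // clears the lowest set bit
        count++;
    }
    return count;
}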
Standard answer in C#:
ulong val = //whatever
byte count = 0;
while (val != 0) {
if ((val & 0x1) == 0x1) count++;
val >>= 1;
}
This shifts val right one bit, and increments count if the rightmost bit is set. This is a general algorithm that can be used for any length integer.
I have a SQL Server table that has a column in it that is defined as Binary(7).
It is updated with data from a Cobol program that has Comp-3 data (packed decimal).
I wrote a C# program to take a number and create the Comp-3 value. I have it available to SQL Server via CLR Integration. I'm able to access it like a stored procedure.
My problem is, I need to take the value from this program and save it in the binary column. When I select a row of data that is already in there, I am seeing a value like the following:
0x00012F0000000F
The value shown is COBOL comp-3 (packed decimal) data, stored in the SQL table. Remember, this field is defined as Binary(7). There are two values concatenated and stored here. Unsigned value 12, and unsigned value 0.
I need to concatenate 0x00012F (3 bytes) and 0x0000000F (4 bytes) together and write the result to the column.
My question is two part.
1) I am able to return a string representation of the Comp-3 value from my program. But, I'm not sure if this is the format I need to return to make this work. What format should I return to SQL, so it can be used correctly?
2) What do I need to do to convert this to make it work?
I hope I was clear enough. It's a lot to digest...Thanks!
I figured it out!
I needed to change the output to byte[], and reference it coming out of the program in SQL as varbinary.
This is the code, if anyone else in the future needs it. I hope this helps others that need to create Comp-3 (packed decimal) in SQL. I'll outline the steps to use it below.
Below is the source for the C# program. Compile it as a dll.
using System;
using System.Collections.Generic;
using System.Data;
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;
namespace Numeric2Comp3
{
//PackedDecimal conversions
public class PackedDecimal
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void ToComp3(string numberin, out byte[] hexarray, out string hexvalue)
{
long value;
bool result = Int64.TryParse(numberin, out value);
if (!result)
{
hexarray = null;
hexvalue = null;
return;
}
Stack<byte> comp3 = new Stack<byte>(10);
byte currentByte;
if (value < 0)
{
currentByte = 0x0d; //signed -
value = -value;
}
else if (numberin.Trim().StartsWith("+"))
{
currentByte = 0x0c; //signed +
}
else
{
currentByte = 0x0f; //unsigned
}
bool byteComplete = false;
while (value != 0)
{
if (byteComplete)
currentByte = (byte)(value % 10);
else
currentByte |= (byte)((value % 10) << 4);
value /= 10;
byteComplete = !byteComplete;
if (byteComplete)
comp3.Push(currentByte);
}
if (!byteComplete)
comp3.Push(currentByte);
hexarray = comp3.ToArray();
hexvalue = bytesToHex(comp3.ToArray());
}
private static string bytesToHex(byte[] buf)
{
string HexChars = "0123456789ABCDEF";
System.Text.StringBuilder sb = new System.Text.StringBuilder(buf.Length * 2);
for (int i = 0; i < buf.Length; i++)
{
int b = buf[i];
sb.Append(HexChars[(b >> 4) & 0x0F]); // high nibble
sb.Append(HexChars[b & 0x0F]); // low nibble
}
return sb.ToString();
}
}
}
Save the dll somewhere in a folder on the SQL Server machine. I used 'C:\NTA\Libraries\Numeric2Comp3.dll'.
Next, you'll need to enable CLR Integration on SQL Server. Read about it on Microsoft's website here: Introduction to SQL Server CLR Integration. Open SQL Server Management Studio and execute the following to enable CLR Integration:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO
Once that is done, execute the following in Management Studio:
CREATE ASSEMBLY Numeric2Comp3 from 'C:\NTA\Libraries\Numeric2Comp3.dll' WITH PERMISSION_SET = SAFE
You can execute the following to remove the assembly, if you need to for any reason:
drop assembly Numeric2Comp3
Next, in Management studio, execute the following to create the stored procedure to reference the dll:
CREATE PROCEDURE Numeric2Comp3
@numberin nchar(27), @hexarray varbinary(27) OUTPUT, @hexstring nchar(27) OUTPUT
AS
EXTERNAL NAME Numeric2Comp3.[Numeric2Comp3.PackedDecimal].ToComp3
If everything above runs successfully, you're done!
Here is some SQL to test it out:
DECLARE @in nchar(27), @hexstring nchar(27), @hexarray varbinary(27)
set @in = '20120123'
EXEC Numeric2Comp3 @in, @hexarray out, @hexstring out
select len(@hexarray), @hexarray
select len(@hexstring), @hexstring
This will return the following values:
(No column name) (No column name)
5 0x020120123F
(No column name) (No column name)
10 020120123F
In my case, what I need is the value coming out of @hexarray. This will be written to the Binary column in my table.
I hope this helps others that may need it!
If you have Comp-3 stored in a binary field as a hex string, well, I wonder if the process that created it is working as it should.
Be that as it may, the best solution would be to cast them in the select; the cast syntax is simple, but I don't know if a Comp-3 cast is available.
Here are examples on MSDN.
So let's work with the string. To transform the string you use this:
string in2 = "020120123C";
long iOut = Convert.ToInt64(in2.Substring(0, in2.Length - 1))
* (in2.Substring(in2.Length - 1, 1)=="D"? -1 : 1 ) ;
It treats the last character as the sign, with 'D' being the negative sign. Both 'F' and 'C' would be positive.
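For example, a small hedged helper wrapping the same conversion (the method name is my own):
static long FromComp3String(string s)
{
    long magnitude = Convert.ToInt64(s.Substring(0, s.Length - 1));
    return s.EndsWith("D") ? -magnitude : magnitude; // 'D' = negative, 'C'/'F' = positive
}
// FromComp3String("020120123C") returns 20120123
// FromComp3String("000000123D") returns -123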
Will you also need to write the data back?
I am curious: what string representation comes out for fractional numbers like 123.45?
( I'll leave the original answer for reference..:)
Here are a few lines of code to show how you can work with bit and bytes.
The operations to use are:
shift the data n bits right or left: << n or >> n
masking/clearing unwanted high bits, e.g. setting all bits to 0 except the last 4: & 0xF
combining bits (bitwise OR): |
If you have a string representation like the one you have shown, the out3 and out4 bytes would be the result. The other conversions are just examples of how to process bits; you can't possibly have decimals as binaries or binaries that look like decimals. Maybe you get integers - then out7 and out8 would be the results.
To combine two bytes into one integer, look at the last calculation!
// 3 possible inputs:
long input = 0x00012F0000071F;
long input2 = 3143;
string inputS = "0x00012F0000071F";
// take binary input as such
byte out1 = (byte)((input >> 4) & 0xFFFFFF );
byte out2 = (byte)(input >> 36);
// take string as decimals
byte out3 = Convert.ToByte(inputS.Substring(5, 2));
byte out4 = Convert.ToByte(inputS.Substring(13, 2));
// take binary as decimal
byte out5 = (byte)(10 * ((input >> 40) & 0xF) + (byte)((input >> 36) & 0xF));
byte out6 = (byte)(10 * ((input >> 8) & 0xF) + (byte)((input >> 4) & 0xF));
// take integer and pick out 3rd and last byte
byte out7 = (byte)(input2 >> 8);
byte out8 = (byte)(input2 & 0xFF);
// combine two bytes to one integer
int byte1and2 = (byte)(12) << 8 | (byte)(71) ;
Console.WriteLine(out1.ToString());
Console.WriteLine(out2.ToString());
Console.WriteLine(out3.ToString());
Console.WriteLine(out4.ToString());
Console.WriteLine(out5.ToString());
Console.WriteLine(out6.ToString());
Console.WriteLine(out7.ToString());
Console.WriteLine(out8.ToString());
Console.WriteLine(byte1and2.ToString());
What is the equivalent (in C#) of Java's >>> operator?
(Just to clarify, I'm not referring to the >> and << operators.)
Edit: The Unsigned right-shift operator >>> is now also available in C# 11 and later.
For earlier C# versions, you can use unsigned integer types, and then the << and >> do what you expect. The MSDN documentation on shift operators gives you the details.
Since Java doesn't support unsigned integers (apart from char), this additional operator became necessary.
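A quick illustration (the value is arbitrary) of how the C# 11 operator and the unsigned-cast idiom line up:
int i = -8;                         // 0xFFFFFFF8
int viaOperator = i >>> 1;          // C# 11+: 0x7FFFFFFC = 2147483644
int viaCast = (int)((uint)i >> 1);  // earlier versions: same result
Console.WriteLine(viaOperator == viaCast); // True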
Java doesn't have an unsigned left shift (<<<), but either way, you can just cast to uint and shift from there.
E.g.
(int)((uint)foo >> 2); // temporarily cast to uint, shift, then cast back to int
Upon reading this, I hope my conclusion about how to use it, below, is correct.
If not, insights appreciated.
Java:
i >>>= 1;
C#:
i = (int)((uint)i >> 1);
n >>> s in Java is equivalent to TripleShift(n,s) where:
private static long TripleShift(long n, int s)
{
if (n >= 0)
return n >> s;
return (n >> s) + (2L << ~s);
}
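A quick check against the unsigned-cast approach (a hedged example; the values are arbitrary):
long n = -100;
Console.WriteLine(TripleShift(n, 2));     // 4611686018427387879
Console.WriteLine((long)((ulong)n >> 2)); // 4611686018427387879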
There is no >>> operator in C#, but you can convert your value from int, long, Int16, Int32, Int64 to the unsigned uint, ulong, UInt16, UInt32, UInt64, etc.
Here is the example.
private long getUnsignedRightShift(long value,int s)
{
return (long)((ulong)value >> s);
}
C# 11 and later support the >>> unsigned right shift operator:
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/bitwise-and-shift-operators#unsigned-right-shift-operator-
For my VB.Net folks:
The answers suggested above will give you overflow exceptions with Option Strict On.
Try, for example, -100 >>> 2 with the solutions above.
The following code always works for >>>:
Function RShift3(ByVal a As Long, ByVal n As Integer) As Long
If a >= 0 Then
Return a >> n
Else
Return (a >> n) + (2L << (Not n))
End If
End Function