Pascal to C# conversion

I am trying to convert this Pascal code into C# in order to communicate with a peripheral device attached to a COM port. This piece of code should calculate the control byte, but I'm not getting the right hex value, so I'm wondering whether I'm converting the code correctly.
Pascal:
begin
check := 255;
for i:= 3 to length(sequence)-4 do
check := check xor byte(sequence[i]);
end;
C#:
int check = 255;
for (int x = 3; x < (sequence.Length - 4); x++)
{
check = check ^ (byte)(sequence[x]);
}
Pascal function:
{ *** conversion of number into string 'hex' *** }
function word_to_hex (w: word) : string;
var
i : integer;
s : string;
b : byte;
c : char;
begin
s := '';
for i:= 0 to 3 do
begin
b := (hi(w) shr 4) and 15;
case b of
0..9 : c := char(b+$30);
10..15 : c := char(b+$41-10);
end;
s := s + c;
w := w shl 4;
end;
word_to_hex := s;
end;
C# Equivalent:
public string ControlByte(string check)
{
string s = "";
byte b;
char c = '\0';
//shift = check >> 4 & 15;
for (int x = 0; x <= 3; x++)
{
b = (byte)((Convert.ToInt32(check) >> 4) & 15);
if (b >= 0 && b <= 9)
{
c = (char)(b + 0x30);
}
else if (b >= 10 && b <= 15)
{
c = (char)(b + 0x41 - 10);
}
s = s + c;
check = (Convert.ToInt32(check) << 4).ToString();
}
return s;
}
And the last Pascal function:
function byte_to_hex (b:byte) : string;
begin
byte_to_hex := copy(word_to_hex(word(b)),3,2);
end;
which I am not sure how it substrings the result of word_to_hex. So please let me know if there is something wrong with the code conversion and whether I need to convert the function result into bytes. I appreciate your help, UF.
Further info EDIT: Initially I send a string sequence containing the command and the information the printer is supposed to print. Since every sequence has a unique control byte (in hex), I have to calculate it from the sequence (sequence = "P1;1$l201PrinterPrinterPrinter1B/100.00/100.00/0/\"), which is what the code above does. According to POSNET: "cc – control byte, encoded as 2 HEX digits (EXOR of all characters after ESC P to this byte with #255 initial quantity), according to the following algorithm in PASCAL language" (see the first Pascal block). The check number calculated in that loop, which constitutes the control byte, should then be recoded into two HEX characters (ASCII characters from the scope '0'..'9', 'A'..'F', 'a'..'f'), utilizing the byte_to_hex function {* conversion of byte into 2 characters *} shown above, which in turn calls word_to_hex.

The most obvious problem that I can see is that the Pascal code operates on 1-based, 8-bit encoded strings, while the C# code operates on 0-based, 16-bit encoded strings. To convert the Pascal/Delphi code that you use to C# you need to address that mismatch. Perhaps like this:
byte[] bytes = Encoding.Default.GetBytes(sequence);
int check = 255;
for (int i = 2; i < bytes.Length-4; i++)
{
check ^= bytes[i];
}
Now, in order to write this I've had to make quite a few assumptions, because you did not include anywhere near enough code in the question. Here's what I assumed:
The Pascal sequence variable is a 1-based 8 bit ANSI encoded Delphi AnsiString.
The Pascal check variable is a Delphi 32 bit signed Integer.
The C# sequence variable is a C# string.
If any of those assumptions prove to be false, then the code above will be no good. For instance, perhaps the Pascal check is really a Byte, in which case I guess the C# code should be:
byte[] bytes = Encoding.Default.GetBytes(sequence);
byte check = 255;
for (int i = 2; i < bytes.Length - 4; i++)
{
check ^= bytes[i];
}
I hope that this persuades you of the importance of supplying complete information.
That's really all the meat of this question. The rest of the code concerns converting values to hex strings in C#. That has been covered again and again here on Stack Overflow. For instance:
C# convert integer to hex and back again
How do you convert Byte Array to Hexadecimal String, and vice versa?
There are many, many more such questions.
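For completeness, though, here is a minimal sketch of that final step, reusing the loop above; ToString("X2") produces the two uppercase hex digits, which replaces the whole word_to_hex/byte_to_hex dance:
byte[] bytes = Encoding.Default.GetBytes(sequence);
byte check = 255;
for (int i = 2; i < bytes.Length - 4; i++)
{
    check ^= bytes[i];
}
string controlByte = check.ToString("X2"); // e.g. 0x2F -> "2F"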

Pascal code to C# code conversion (ord and chr)

I'm actually trying to convert Pascal code into C# code (we are rewriting an old application).
Pascal code:
function DecryptStr(Source: PChar): string;
var
st: string;
i, k, mask: byte;
begin
Result := '';
try
SetString(st, Source, 32);
if st[1] <> #0 then
begin
mask := ord(st[1]);
k := ord(st[2]) xor mask;
SetLength(Result, k);
for i := 1 to k do
begin
inc(mask);
k := ord(st[i + 2]) xor mask;
Result[i] := chr(k);
// Result := Result + chr(k);
end;
end;
except
end;
end;
And my C# code:
public static string decrypt(string hash)
{ string buffer;
byte i, k, mask;
string result = "";
buffer = hash.Substring(0, 32);
mask = (byte)(buffer[0]);
k = (byte)((byte)(buffer[1])^mask);
for (i = 0; i<k-1; i++)
{
mask += 1;
k = (byte)((byte)(buffer[i+2])^mask);
result+=(char)(k);
}
// string decoded = System.
return (result);
}
Please tell me, is it similar, or does Pascal have some hidden stuff?
Example:
input in C#:
акРЖђЏГ€€њљђНѓGН q6™&і—'n1•\›ЛH[
output in C#:
\u0011$\f\v&\t\b
But it doesn't look like the real password.
Please advise me on what is going wrong.
The problem was in two things:
1. The PChar format of the input string - it stores a #0 terminator at the end.
2. The idiotic encryption mechanism - actually, the author of the code in the question just chose a byte mask as rand(200) + 32, stored it in the first byte of the encrypted password, and then, looping over the length of the password, incremented the mask by one at every step.
So the logic of decrypting is the following:
1. Grab the mask - buffer[0] in this case.
2. XOR every other character with this mask, increasing it by one on every step.
3. ???
4. Profit!
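A minimal sketch of that logic in C#, assuming the input arrives as the raw byte buffer rather than a UTF-16 string (Decrypt is a hypothetical name):
public static string Decrypt(byte[] buffer)
{
    if (buffer[0] == 0) return "";
    byte mask = buffer[0];              // random mask stored in the first byte
    int length = buffer[1] ^ mask;      // second byte is the masked length
    var sb = new System.Text.StringBuilder(length);
    for (int i = 0; i < length; i++)
    {
        mask++;                         // the mask advances by one per character
        sb.Append((char)(buffer[i + 2] ^ mask));
    }
    return sb.ToString();
}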
Thanks to everyone who participated in this thread!

In SQL Server, I need to pack 2 characters into 1 character, similar to HEX. How?

I have a SQL Server table that has a column in it that is defined as Binary(7).
It is updated with data from a Cobol program that has Comp-3 data (packed decimal).
I wrote a C# program to take a number and create the Comp-3 value. I have it available to SQL Server via CLR Integration. I'm able to access it like a stored procedure.
My problem is, I need to take the value from this program and save it in the binary column. When I select a row of data that is already in there, I am seeing a value like the following:
0x00012F0000000F
The value shown is COBOL Comp-3 (packed decimal) data, stored in the SQL table. Remember, this field is defined as Binary(7). There are two values concatenated and stored here: unsigned value 12, and unsigned value 0.
I need to concatenate 0x00012F (3 bytes) and 0x0000000F (4 bytes) together and write the result to the column.
My question is two part.
1) I am able to return a string representation of the Comp-3 value from my program. But, I'm not sure if this is the format I need to return to make this work. What format should I return to SQL, so it can be used correctly?
2) What do I need to do to convert this to make it work?
I hope I was clear enough. It's a lot to digest... Thanks!
I figured it out!
I needed to change the output to byte[], and reference it coming out of the program in SQL as varbinary.
This is the code, if anyone else in the future needs it. I hope this helps others that need to create Comp-3 (packed decimal) in SQL. I'll outline the steps to use it below.
Below is the source for the C# program. Compile it as a dll.
using System;
using System.Collections.Generic;
using System.Data;
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;
namespace Numeric2Comp3
{
//PackedDecimal conversions
public class PackedDecimal
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void ToComp3(string numberin, out byte[] hexarray, out string hexvalue)
{
long value;
bool result = Int64.TryParse(numberin, out value);
if (!result)
{
hexarray = null;
hexvalue = null;
return;
}
Stack<byte> comp3 = new Stack<byte>(10);
byte currentByte;
if (value < 0)
{
currentByte = 0x0d; //signed -
value = -value;
}
else if (numberin.Trim().StartsWith("+"))
{
currentByte = 0x0c; //signed +
}
else
{
currentByte = 0x0f; //unsigned
}
bool byteComplete = false;
while (value != 0)
{
if (byteComplete)
currentByte = (byte)(value % 10);
else
currentByte |= (byte)((value % 10) << 4);
value /= 10;
byteComplete = !byteComplete;
if (byteComplete)
comp3.Push(currentByte);
}
if (!byteComplete)
comp3.Push(currentByte);
hexarray = comp3.ToArray();
hexvalue = bytesToHex(comp3.ToArray());
}
private static string bytesToHex(byte[] buf)
{
    string HexChars = "0123456789ABCDEF";
    System.Text.StringBuilder sb = new System.Text.StringBuilder(buf.Length * 2);
    for (int i = 0; i < buf.Length; i++)
    {
        sb.Append(HexChars[(buf[i] >> 4) & 0x0F]); // high nibble
        sb.Append(HexChars[buf[i] & 0x0F]);        // low nibble
    }
    return sb.ToString();
}
}
}
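Called directly from C# (outside SQL Server), usage looks like this quick sketch; the expected output follows from the algorithm above:
byte[] packed;
string hex;
Numeric2Comp3.PackedDecimal.ToComp3("12", out packed, out hex);
// packed = { 0x01, 0x2F }, hex = "012F": unsigned 12 in Comp-3,
// matching the 0x00012F from the question once left-padded to 3 bytes.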
Save the dll somewhere in a folder on the SQL Server machine. I used 'C:\NTA\Libraries\Numeric2Comp3.dll'.
Next, you'll need to enable CLR Integration on SQL Server. Read about it on Microsoft's website here: Introduction to SQL Server CLR Integration. Open SQL Server Management Studio and execute the following to enable CLR Integration:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO
Once that is done, execute the following in Management Studio:
CREATE ASSEMBLY Numeric2Comp3 from 'C:\NTA\Libraries\Numeric2Comp3.dll' WITH PERMISSION_SET = SAFE
You can execute the following to remove the assembly, if you need to for any reason:
drop assembly Numeric2Comp3
Next, in Management Studio, execute the following to create the stored procedure that references the dll:
CREATE PROCEDURE Numeric2Comp3
@numberin nchar(27), @hexarray varbinary(27) OUTPUT, @hexstring nchar(27) OUTPUT
AS
EXTERNAL NAME Numeric2Comp3.[Numeric2Comp3.PackedDecimal].ToComp3
If everything above runs successfully, you're done!
Here is some SQL to test it out:
DECLARE @in nchar(27), @hexstring nchar(27), @hexarray varbinary(27)
set @in = '20120123'
EXEC Numeric2Comp3 @in, @hexarray out, @hexstring out
select len(@hexarray), @hexarray
select len(@hexstring), @hexstring
This will return the following values:
(No column name) (No column name)
5 0x020120123F
(No column name) (No column name)
10 020120123F
In my case, what I need is the value coming out of @hexarray. This will be written to the Binary column in my table.
I hope this helps others that may need it!
If you have Comp-3 stored in a binary field as a hex string, well, I wonder if the process that created it is working as it should.
Be that as it may, the best solution would be to cast the values in the select; the cast syntax is simple, but I don't know if a Comp-3 cast is available.
Here are examples on MSDN.
So let's work with the string. To transform it you can use this:
string in2 = "020120123C";
long iOut = Convert.ToInt64(in2.Substring(0, in2.Length - 1))
* (in2.Substring(in2.Length - 1, 1)=="D"? -1 : 1 ) ;
It treats the last character as the sign, with 'D' being the negative sign; both 'F' and 'C' are treated as positive.
Will you also need to write the data back?
I am curious: what string representation comes out for fractional numbers like 123.45?
(I'll leave the original answer below for reference.)
Here are a few lines of code to show how you can work with bit and bytes.
The operations to use are:
shift the data n bits right or left: << n or >> n
masking/clearing unwanted high bits: e.g. set all to 0 except the last 4 bits: & 0xF
adding bitwise: |
If you have a string representation like the one you have shown, the out3 and out4 bytes would be the result. The other conversions are just examples of how to process bits; you can't possibly have decimals as binaries, or binaries that look like decimals. Maybe you get integers - then out7 and out8 would be the results.
To combine two bytes into one integer, look at the last calculation!
// 3 possible inputs:
long input = 0x00012F0000071F;
long input2 = 3143;
string inputS = "0x00012F0000071F";
// take binary input as such
byte out1 = (byte)((input >> 4) & 0xFFFFFF );
byte out2 = (byte)(input >> 36);
// take string as decimals
byte out3 = Convert.ToByte(inputS.Substring(5, 2));
byte out4 = Convert.ToByte(inputS.Substring(13, 2));
// take binary as decimal
byte out5 = (byte)(10 * ((input >> 40) & 0xF) + (byte)((input >> 36) & 0xF));
byte out6 = (byte)(10 * ((input >> 8) & 0xF) + (byte)((input >> 4) & 0xF));
// take integer and pick out 3rd and last byte
byte out7 = (byte)(input2 >> 8);
byte out8 = (byte)(input2 & 0xFF);
// combine two bytes to one integer
int byte1and2 = (byte)(12) << 8 | (byte)(71) ;
Console.WriteLine(out1.ToString());
Console.WriteLine(out2.ToString());
Console.WriteLine(out3.ToString());
Console.WriteLine(out4.ToString());
Console.WriteLine(out5.ToString());
Console.WriteLine(out6.ToString());
Console.WriteLine(out7.ToString());
Console.WriteLine(out8.ToString());
Console.WriteLine(byte1and2.ToString());

Array of chars in hex format to integer?

I have an API which returns a byte[] over the network which represents information about a device.
It is in format 15ab1234cd\r\n where the first 2 characters are a HEX representation of the amount of data in the message.
I am aware I can convert this to a string via ASCIIEncoding.ASCII.GetString, and then use Convert.ToInt32(string.Substring(0, 2), 16) to achieve this. However the whole thing stays a byte array throughout the life of the whole program I am writing, and I don't want to convert to a string just for the purpose of getting the packet length.
Any suggestions of converting array of chars in hex format to an int in C#?
There is no .NET-provided function that does this. Converting the first 2 bytes to a string with Encoding.GetString is very readable (though possibly not the most performant):
var hexValue = ASCIIEncoding.ASCII.GetString(byteData, 0, 2);
var intValue = Convert.ToInt32(hexValue, 16);
You can also easily write the conversion yourself: map the '0'-'9' and 'a'-'f' / 'A'-'F' ranges to their corresponding integer values and add them together.
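For instance, a plain sketch of that mapping (HexDigit and HexPairToInt are hypothetical helper names; assumes the input is valid):
static int HexDigit(byte c) =>
    c >= '0' && c <= '9' ? c - '0' :
    c >= 'a' && c <= 'f' ? c - 'a' + 10 :
    c >= 'A' && c <= 'F' ? c - 'A' + 10 :
    throw new ArgumentException("not a hex digit");

static int HexPairToInt(byte hi, byte lo) => HexDigit(hi) * 16 + HexDigit(lo);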
Here is a one-statement conversion, strictly for entertainment purposes. The resulting lambda (everything before ((byte)'0',(byte)'A') in the sample) takes 2 byte arguments, assumes they are ASCII characters, and converts them into an integer.
((Func<Func<char,int>, Func<byte, byte, int>>)
(charToInt=> (c, c1)=>
charToInt(char.ToUpper((char)c)) * 16 + charToInt(char.ToUpper((char)c1))))
((Func<char, int>)(
c => c >= '0' && c <='9' ? c-'0' : c >='A' && c <= 'F' ? c - 'A' + 10 : 0))
((byte)'0',(byte)'A')
If you know the first two values are valid hexadecimal characters (0-9, A-F, a-f), it is possible to convert to a hex value using logical operators.
int GetIntFromHexBytes(byte[] s, int start, int length)
{
int ret = 0;
for (int i = start; i < start+length; i++)
{
ret <<= 4;
ret |= (byte)((s[i] & 0x0f) + ((s[i] & 0x40) >> 6) * 9);
}
return ret;
}
(This works because s[i] & 0x0f returns the 4 least significant bits, which range from 0-9 for the characters '0'-'9' and from 1-6 for both capital and lowercase letters 'a'-'f' and 'A'-'F'. s[i] & 0x40 is 0 for numeric characters and 0x40 for alphabetic characters; shifting right six bits gives 0 for digits and 1 for letters, and multiplying that by 9 adds the bias needed to map A-F and a-f from 1-6 to 10-15.)
Given the byte array:
byte[] b = { (byte)'7', (byte)'f', (byte)'1', (byte)'c' };
Calling GetIntFromHexBytes(b, 0, 2) will return 127 (0x7f), the first two bytes of the array, as required.
As a caution: this approach does no bounds checking. A check can be added in the loop if needed to ensure that the input bytes are valid hex characters.

How do I properly loop through and print bits of an Int, Long, Float, or BigInteger?

I'm trying to debug some bit shifting operations and I need to visualize the bits as they exist before and after a Bit-Shifting operation.
I read from this answer that I may need to handle backfill from the shifting, but I'm not sure what that means.
I think that by asking this question (how do I print the bits in an int) I can figure out what the backfill is, and perhaps answer some other questions I have.
Here is my sample code so far.
static string GetBits(int num)
{
StringBuilder sb = new StringBuilder();
uint bits = (uint)num;
while (bits!=0)
{
bits >>= 1;
isBitSet = // somehow do an | operation on the first bit.
// I'm unsure if it's possible to handle different data types here
// or if unsafe code and a PTR is needed
if (isBitSet)
sb.Append("1");
else
sb.Append("0");
}
}
Convert.ToString(56,2).PadLeft(8,'0') returns "00111000"
This is for a byte; it works for an int also, just increase the numbers.
To test if the last bit is set you could use:
isBitSet = ((bits & 1) == 1);
But you should do so before shifting right (not after), otherwise you'd miss the first bit:
isBitSet = ((bits & 1) == 1);
bits = bits >> 1;
But a better option would be to use the static methods of the BitConverter class to get the actual bytes used to represent the number in memory into a byte array. The advantage (or disadvantage depending on your needs) of this method is that this reflects the endianness of the machine running the code.
byte[] bytes = BitConverter.GetBytes(num);
int bitPos = 0;
while(bitPos < 8 * bytes.Length)
{
int byteIndex = bitPos / 8;
int offset = bitPos % 8;
bool isSet = (bytes[byteIndex] & (1 << offset)) != 0;
// isSet = [True] if the bit at bitPos is set, false otherwise
bitPos++;
}
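Putting the pieces together, a minimal sketch of the helper (assuming a fixed 32-bit width and most-significant-bit-first output):
static string GetBits(int num)
{
    var sb = new System.Text.StringBuilder(32);
    for (int i = 31; i >= 0; i--)
    {
        bool isBitSet = ((num >> i) & 1) == 1; // test bit i
        sb.Append(isBitSet ? '1' : '0');
    }
    return sb.ToString();
}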

Bits needed to change one number to another

Say I have two positive numbers a and b. How many bits must be inverted in order to convert a into b?
I just want the count, not the exact positions of the differing bits.
Let's assume a = 10 (1010) and b = 8 (1000). In this case the number of bits that should be inverted equals 1.
Is there a generalised algorithm?
The solution is simple:
Step 1) Compute a XOR b.
Step 2) Count the number of set bits in the result.
Done!
int a = 10;
int b = 8;
int c = a ^ b; //xor
int count = 0;
while (c != 0)
{
if ((c & 1) != 0)
count++;
c = c >> 1;
}
return count;
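As an aside, modern .NET (Core 3.0 and later) has a built-in popcount, so the counting loop can shrink to a sketch like this:
using System.Numerics;

int count = BitOperations.PopCount((uint)(a ^ b));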
changeMask = a XOR b
bitsToChange = 0
while changeMask>0
bitsToChange = bitsToChange + (changeMask AND 1)
changeMask = changeMask >> 1
loop
return bitsToChange
Good old-fashioned bit operations!
size_t countbits( unsigned int n )
{
size_t bits = 0;
while( n )
{
bits += n&1;
n >>= 1;
}
return bits;
}
countbits( a ^ b );
This would work in C as well as C++. You could (in C++ only) make the countbits function a template.
Actually, humbly building on the previous answer - this might work better for converting a to b.
The only difference from the previous answer is that the bits already set in b don't need to be set again, so don't count them:
Calculate (a XOR b) AND ~b
Count the set bits
(Post corrected as per comment. Thanks!)
abs(popcount(a) - popcount(b)), where popcount counts the bits set in a number (many different variants exist).
