Problem converting sprintf to C#

I have this line that I need to write in C#:
sprintf(
currentTAG,
"%2.2X%2.2X,%2.2X%2.2X",
hBuffer[ presentPtr+1 ],
hBuffer[ presentPtr ],
hBuffer[ presentPtr+3 ],
hBuffer[ presentPtr+2 ] );
hBuffer is an unsigned char array.
In C# I have the same data in a byte array and I need to implement this line...
Please help...

Check if this works:
byte[] hBuffer = { ... };
int presentPtr = 0;
string currentTAG = string.Format("{0:X2}{1:X2},{2:X2}{3:X2}",
hBuffer[p+1],
hBuffer[p],
hBuffer[p + 3],
hBuffer[p + 2]);
This is another option but less efficient:
byte[] hBuffer = { ... };
int presentPtr = 0;
string currentTAG = string.Format("{0}{1},{2}{3}",
hBuffer[p+1].ToString("X2"),
hBuffer[p].ToString("X2"),
hBuffer[p + 3].ToString("X2"),
hBuffer[p + 2].ToString("X2"));
Converting each byte of hBuffer to a string, as in the second example, is less efficient. The first example will give you better performance, especially if you do this many times, by virtue of not spamming the garbage collector.
[Off the top of my head] In C/C++, %2.2X outputs the value in hexadecimal using upper-case letters and at least two digits (left-padded with zeros).
In C++ the next example outputs 01 61 in the console:
unsigned char test[] = { 0x01, 'a' };
printf("%2.2X %2.2X", test[0], test[1]);
Using the information above, the following C# snippet also outputs 01 61 in the console:
byte[] test = { 0x01, (byte) 'a' };
Console.WriteLine(String.Format("{0:X2} {1:X2}", test[0], test[1]));

Composite Formatting: This page discusses how to use the string.Format() function.

You are looking for the String.Format method.

How to convert from string to 16-bit unsigned integer in python?

I'm currently working on some encoding and decoding of strings in Python. I was supposed to convert some code from C# to Python; however, I encountered a problem, as below:
So now I have a string that looks like this: 21-20-89-00-67-00-45-78
The code is supposed to eliminate the - between the numbers, pack every 2 digits into 1 group, and then convert them into bytes. In C#, it was done like this:
var value = "21-20-89-00-67-00-45-78";
var valueNoDash = value.Replace("-", null);
for (var i = 0; i < DataSizeInByte; i++)
{
    //convert every 2 digits into 1 byte
    Data[i] = Convert.ToByte(valueNoDash.Substring(i * 2, 2), 16);
}
The above code represents Step 1: remove - from the string; Step 2: use the Substring method to divide it into groups of 2 digits; Step 3: use Convert.ToByte with base 16 to convert each group into a 16-bit unsigned integer. The result in Data is:
33
32
137
0
103
0
69
120
So far I have no problem with this C# code; however, when I try to do the same in Python, I cannot get the same result. My Python code is as below:
from textwrap import wrap
import struct
values = "21-20-89-00-67-00-45-78"
values_no_dash = values.replace('-', '')
values_grouped = wrap(values_no_dash, 2)
values_list = []
for value in values_grouped:
    values_list.append(struct.pack('i', int(value)))
In Python, this gives me a list of bytes objects, as below:
b'\x15\x00\x00\x00'
b'\x14\x00\x00\x00'
b'Y\x00\x00\x00'
b'\x00\x00\x00\x00'
b'C\x00\x00\x00'
b'\x00\x00\x00\x00'
b'-\x00\x00\x00'
b'N\x00\x00\x00'
These are bytes objects; however, when I convert them to decimal, they give me exactly the same values as the original string: 21, 20, 89, 0, 67, 0, 45, 78.
This means I did not successfully convert them into 16-bit unsigned integers, right? How can I do this in Python? I've tried using str.encode() but the result is still different. How can I achieve in Python what the C# code does?
Thanks, and I'd appreciate it if anyone can help!
I think this is the solution you're looking for:
values = "21-20-89-00-67-00-45-78"
values_no_dash_grouped = values.split('-') #deletes dashes and groups numbers simultaneously
for value in values_no_dash_grouped:
print(int(value, 16)) #converts number in base 16 to base 10 and prints it
Hope it helps!

Get a guid to encode using big-endian formatting C#

I have an unusual situation whereby I have an existing MySQL database that uses binary(16) primary keys; these are the basis for UUIDs that are used in an existing API.
My problem is that I now want to add a replacement API written with .NET Core, and I'm running into an encoding problem that has been explained here.
Specifically, the Guid struct in .NET uses a mixed-endian format that produces a different string from the existing API's. This isn't acceptable for obvious reasons.
So my question is this: is there an elegant way to force the Guid struct to encode entirely with the big-endian format?
If there isn't I can just write a terrible hack, but I thought I'd check with the collective intelligence of the SO community first!
Nope; as far as I'm aware there's no built-in way to get this. And yes, Guid has what I can only call a "crazy-endian" implementation currently. You'd need to get the Guid-ordered bits (either via unsafe or Guid.ToByteArray) and then order them manually, figuring out which chunks to reverse - it isn't a simple Array.Reverse(). So: very manual, I'm afraid. I suggest using a guid like
00010203-0405-0607-0809-0a0b0c0d0e0f
to debug it; this gives you (as I suspect you are aware):
03-02-01-00-05-04-07-06-08-09-0A-0B-0C-0D-0E-0F
so:
reverse 4
reverse 2
reverse 2
straight 8
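A minimal sketch of that manual reordering (the helper name here is mine, not a framework API):
static string ToBigEndianHex(Guid guid)
{
    byte[] bytes = guid.ToByteArray();   // Guid's mixed-endian layout
    Array.Reverse(bytes, 0, 4);          // reverse 4
    Array.Reverse(bytes, 4, 2);          // reverse 2
    Array.Reverse(bytes, 6, 2);          // reverse 2
                                         // straight 8: the last 8 bytes are already in order
    return BitConverter.ToString(bytes); // "00-01-02-03-04-05-06-07-08-09-0A-0B-0C-0D-0E-0F" for the debug guid above
}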
As of 2021 there still isn't a built-in way to convert a System.Guid to a MySQL-compatible big-endian string in C#.
Here's the extension we came up with when we encountered this exact C# mixed-endian Guid problem at work:
public static string ToStringBigEndian(this Guid guid)
{
    // allocate enough bytes to store the Guid ASCII string
    Span<byte> result = stackalloc byte[36];

    // set all bytes to 0xFF (to be able to distinguish them from real data)
    result.Fill(0xFF);

    // get bytes from guid
    Span<byte> buffer = stackalloc byte[16];
    _ = guid.TryWriteBytes(buffer);

    int skip = 0;

    // iterate over guid bytes
    for (int i = 0; i < buffer.Length; i++)
    {
        // indices 4, 6, 8 and 10 will contain a '-' delimiter character in the Guid string.
        // --> leave space for those delimiters
        if (i is 4 or 6 or 8 or 10)
        {
            skip++;
        }

        // stretch the high and low nibbles of every byte into two bytes (skipping '-' delimiter positions)
        result[(2 * i) + skip] = (byte)(buffer[i] >> 0x4);
        result[(2 * i) + 1 + skip] = (byte)(buffer[i] & 0x0Fu);
    }

    // iterate over the precomputed byte array.
    // values 0x0 to 0xF are final hex values, but must be mapped to ASCII characters.
    // value 0xFF is to be mapped to the '-' delimiter character.
    for (int i = 0; i < result.Length; i++)
    {
        // map bytes to ASCII values (a-f will be lowercase)
        ref byte b = ref result[i];
        b = b switch
        {
            0xFF => 0x2D,               // map 0xFF to the '-' character
            < 0xA => (byte)(b + 0x30u), // map 0x0 - 0x9 to '0' - '9'
            _ => (byte)(b + 0x57u)      // map 0xA - 0xF to 'a' - 'f'
        };
    }

    // get a string from the ASCII-encoded guid byte array
    return Encoding.ASCII.GetString(result);
}
It's a bit lengthy, but apart from the big-endian string it returns, it does no heap allocations, so it's guaranteed to be fast :)

sending array of sbytes through socket in client-server architecture C#

I would like to send an array of sbytes. a[2] and a[3] are numbers in the range -100..100.
static void speed_control(Socket sock)
{
    sbyte[] a = new sbyte[5];
    a[0] = Convert.ToSByte('[');
    a[1] = Convert.ToSByte(14);
    a[2] = Convert.ToSByte(Convert.ToInt16(Console.ReadLine()));
    a[3] = Convert.ToSByte(Convert.ToInt16(Console.ReadLine()));
    a[4] = Convert.ToSByte(']');
    sock.Send(a);
}
sock.Send(a) gives me this error: cannot convert from sbyte[] to byte[].
Is there any other simple way to send this kind of data?
Sockets send and receive data in binary representation.
Your -100...100 numbers are not the binary representation, they are the data themselves. So typically, you need to convert your numbers to binary and then send them.
If you don't want to use the standard way and really insist on doing it your way, then you can do this:
Numbers between 0 and 100 can be sent as is. Number between -1 and -100, can be converted to numbers between 101 and 200 and then sent. The other side must reverse the calculation. So you'll be using byte, but not as binary data.
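A minimal sketch of that offset scheme (the helper names are mine, purely illustrative):
// maps -100..100 into a single unsigned byte: 0..100 stay as-is, -1..-100 become 101..200
static byte EncodeOffset(int value)
{
    return (byte)(value >= 0 ? value : 100 - value);
}

// reverses the mapping on the receiving side
static int DecodeOffset(byte encoded)
{
    return encoded <= 100 ? encoded : 100 - encoded;
}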
However, in that case your example doesn't make any sense. You seem to be sending characters, so you will never get negative values, and you must use the standard way and just change:
sbyte[] a = new sbyte[5];
to:
byte[] a = new byte[5];
If that example doesn't really represent what you're actually doing, then please update your question and post a better example that clearly shows how you are getting numbers -100...100.
If Socket.Send wants a byte[], you have to provide a byte[]:
static void speed_control(Socket sock) {
    unchecked { // we don't want an OverflowException to be thrown on (byte) -100 and the like
        sock.Send(new byte[] {
            (byte) '[',
            14,
            (byte) Convert.ToSByte(Console.ReadLine()),
            (byte) Convert.ToSByte(Console.ReadLine()),
            (byte) ']'
        });
    }
}
Even if the actual range is -100..100, you can use byte rather than sbyte if you just cast:
sbyte s = ...; // value in -100..100
byte b = unchecked((byte)s);
...
sbyte s = unchecked((sbyte)b);
and let the system use two's complement:
-100 (sbyte) ~ 156 (byte)
-99 ~ 157
...
-1 ~ 255
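A quick round-trip sketch of that cast-based mapping:
sbyte original = -100;
byte wire = unchecked((byte)original);     // 156 goes over the socket
sbyte restored = unchecked((sbyte)wire);   // -100 again on the receiving side
Console.WriteLine($"{original} -> {wire} -> {restored}"); // prints: -100 -> 156 -> -100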

C# SHA-256 vs. Java SHA-256. Different results?

I want to convert some code from Java to C#.
Java Code:
private static final byte[] SALT = "NJui8*&N823bVvy03^4N".getBytes();

public static final String getSHA256Hash(String secret)
{
    try {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(secret.getBytes());
        byte[] hash = digest.digest(SALT);
        StringBuffer hexString = new StringBuffer();
        for (int i = 0; i < hash.length; i++) {
            hexString.append(Integer.toHexString(0xFF & hash[i]));
        }
        return hexString.toString();
    } catch (NoSuchAlgorithmException e) {
        e.printStackTrace();
    }
    throw new RuntimeException("SHA-256 realization algorithm not found in JDK!");
}
When I tried to use the SimpleHash class, I got different hashes.
UPDATE:
For example:
Java: byte[] hash = digest.digest(SALT);
generates (first 6 bytes):
[0] = 9
[1] = -95
[2] = -68
[3] = 64
[4] = -11
[5] = 53
....
C# code (class SimpleHash):
string hashValue = Convert.ToBase64String(hashWithSaltBytes);
hashWithSaltBytes has (first 6 bytes):
[0] 175 byte
[1] 209 byte
[2] 120 byte
[3] 74 byte
[4] 74 byte
[5] 227 byte
The String.getBytes method encodes the string to bytes using the platform's default charset, whereas the example code you linked uses UTF-8.
Try this:
digest.update(secret.getBytes("UTF-8"));
Secondly, the Integer.toHexString method returns the hexadecimal result with no leading 0s.
The C# code you link to also uses salt - but the Java code does not. If you use salt with one, but not the other, then the results will be (and should be!) different.
hexString.append(Integer.toHexString(0xFF & hash[i]));
You are building the hash string incorrectly. Integer.toHexString does not include leading zeros, so while Integer.toHexString(0xFF) == "ff", the problem is that Integer.toHexString(0x05) == "5".
Suggested correction: String.format("%02x", hash[i] & 0xFF)
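For reference, a minimal C# sketch that should reproduce the Java digest above (assumptions: UTF-8 is used on both sides as recommended, the input is the secret bytes followed by the salt bytes, and the output is lowercase hex with leading zeros):
using System;
using System.Security.Cryptography;
using System.Text;

public static string GetSha256Hash(string secret)
{
    byte[] salt = Encoding.UTF8.GetBytes("NJui8*&N823bVvy03^4N");
    byte[] data = Encoding.UTF8.GetBytes(secret);

    // concatenate secret bytes and salt bytes, matching digest.update(secret) followed by digest.digest(SALT)
    byte[] input = new byte[data.Length + salt.Length];
    Buffer.BlockCopy(data, 0, input, 0, data.Length);
    Buffer.BlockCopy(salt, 0, input, data.Length, salt.Length);

    using var sha = SHA256.Create();
    byte[] hash = sha.ComputeHash(input);

    var hex = new StringBuilder(hash.Length * 2);
    foreach (byte b in hash)
        hex.Append(b.ToString("x2")); // "x2" keeps the leading zero that Integer.toHexString drops
    return hex.ToString();
}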
You can use the following Java to match that of C#:
public static String getEncryptedPassword(String clearTextPassword) throws NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    md.update(clearTextPassword.getBytes(StandardCharsets.UTF_8));
    byte[] digest = md.digest();
    String hex = String.format("%064x", new BigInteger(1, digest));
    String st = new String(hex.toUpperCase());
    for (int i = 2; i < (hex.length() + hex.length() / 2) - 1; ) {
        st = new StringBuffer(st).insert(i, "-").toString();
        i = i + 3;
    }
    return st;
}
You didn't really write how you called the SimpleHash class - with which parameters and such.
But note that its ComputeHash method has in its documentation:
Hash value formatted as a base64-encoded string.
Your class instead formats the output in hexadecimal, which will obviously be different.
Also, the salt is in SimpleHash interpreted as base64, while your method interprets it as ASCII (or whatever your system encoding is - most probably something ASCII-compatible, and the string only contains ASCII characters).
Also, the output in SimpleHash includes the salt (to allow reproducing it for the "verify" part when using random salt), which it doesn't in your method.
(More points are already mentioned by the other answers.)

C# Converting a XOR crypt function

I've been working on converting a C++ crypting method to C#. The problem is, I can't get it to encrypt/decrypt the way I want it to.
The idea is simple: I capture a packet and decrypt it. The output will be:
Packet Size - Command/Action - Null (End)
(The decryptor cuts off the first and last 2 bytes)
The C++ code is this:
// Crypt the packet with Xor operator
void cryptPacket(char *packet)
{
    unsigned short paksize = (*((unsigned short*)&packet[0])) - 2;
    for (int i = 2; i < paksize; i++)
    {
        packet[i] = 0x61 ^ packet[i];
    }
}
So I thought this would work in C# if I didn't want to use pointers:
public static char[] CryptPacket(char[] packet)
{
    ushort paksize = (ushort)(packet.Length - 2);
    for (int i = 2; i < paksize; i++)
    {
        packet[i] = (char)(0x61 ^ packet[i]);
    }
    return packet;
}
But it isn't: the value returned is just another line of rubbish instead of the decrypted value. The output given is: ..O♦&/OOOe.
Well... at least the '/' is in the right place, for some reason.
Some more information:
The test packet I'm using is this:
Hex value: 0C 00 E2 66 65 47 4E 09 04 13 65 00
Plain text: ...feGN...e.
Decrypted: XX/hereXX
X = unknown value; I can't really remember, but it doesn't matter.
Using Hex Workshop you can decrypt the packet this way:
Special Paste the hex value as CF_TEXT, making sure the 'treat as hexadecimal value' box is checked.
Afterwards, select everything from the hexadecimal value you just pasted, except the first and last 2 bytes.
Go to Tools>Operations>Xor.
Select 'Treat data as 8 bit data' and set the value to '61'.
Press 'OK', and you're done.
That's all the information I can give at the moment, because I'm writing this off the top of my head.
Thank you for your time.
In case you don't see a question in this:
It would be great if someone could take a look at the code to see what's wrong with it, or if there's another way to do it. I'm converting this code because I'm horrible with C++, and want to create a C# application with that code.
Ps: The code tags and such were a pain, so I'm sorry if the spacing etc. is a little messed up.
Your problem might be that, as .NET's char is Unicode, some characters are going to use more than one byte, and your bitmask is only one byte long. So the most significant byte will be left unaltered.
I just tried your function and it seems ok:
using System;
using System.Linq;
using System.Text;

class Program
{
    // OP's method: http://stackoverflow.com/questions/4815959
    public static byte[] CryptPacket(byte[] packet)
    {
        int paksize = packet.Length - 2;
        for (int i = 2; i < paksize; i++)
        {
            packet[i] = (byte)(0x61 ^ packet[i]);
        }
        return packet;
    }

    // http://stackoverflow.com/questions/321370 :)
    public static byte[] StringToByteArray(string hex)
    {
        return Enumerable.Range(0, hex.Length).
            Where(x => 0 == x % 2).
            Select(x => Convert.ToByte(hex.Substring(x, 2), 16)).
            ToArray();
    }

    static void Main(string[] args)
    {
        string hex = "0C 00 E2 66 65 47 4E 09 04 13 65 00".Replace(" ", "");
        byte[] input = StringToByteArray(hex);
        Console.WriteLine("Input: " + ASCIIEncoding.ASCII.GetString(input));

        byte[] output = CryptPacket(input);
        Console.WriteLine("Output: " + ASCIIEncoding.ASCII.GetString(output));
        Console.ReadLine();
    }
}
Console output:
Input: ...feGN.....
Output: ...../here..
(where '.' represents funny ascii characters)
It seems a bit smelly that your CryptPacket method is overwriting the initial array with the output values. And that irrelevant characters are not trimmed. But if you are trying to port something, I guess you should know what you are doing.
You could also consider trimming the input array to remove the unwanted characters first, and then use a generic ROT13-style method (like this one). That way, instead of a "specialized" version with the 2-byte offset baked into the crypt function itself, you would have something like:
public static byte[] CryptPacket(byte[] packet)
{
    // create a new instance
    byte[] output = new byte[packet.Length];

    // process ALL array items
    for (int i = 0; i < packet.Length; i++)
    {
        output[i] = (byte)(0x61 ^ packet[i]);
    }
    return output;
}
Here's an almost literal translation from C++ to C#, and it seems to work:
var packet = new byte[] {
    0x0C, 0x00, 0xE2, 0x66, 0x65, 0x47,
    0x4E, 0x09, 0x04, 0x13, 0x65, 0x00
};
CryptPacket(packet);

// displays "....../here." where "." represents an unprintable character
Console.WriteLine(Encoding.ASCII.GetString(packet));

// ...

void CryptPacket(byte[] packet)
{
    int paksize = (packet[0] | (packet[1] << 8)) - 2;
    for (int i = 2; i < paksize; i++)
    {
        packet[i] ^= 0x61;
    }
}
