I've been working on converting a C++ crypting method to C#. The problem is, I can't get it to encrypt/decrypt the way I want it to.
The idea is simple, I capture a packet, and decrypt it. The output will be:
Packet Size - Command/Action - Null (End)
(The decryptor cuts off the first and last 2 bytes)
The C++ code is this:
// Crypt the packet with the XOR operator
void cryptPacket(char *packet)
{
    unsigned short paksize = (*((unsigned short*)&packet[0])) - 2;
    for(int i = 2; i < paksize; i++)
    {
        packet[i] = 0x61 ^ packet[i];
    }
}
So I thought this would work in C# if I didn't want to use pointers:
public static char[] CryptPacket(char[] packet)
{
    ushort paksize = (ushort)(packet.Length - 2);
    for(int i = 2; i < paksize; i++)
    {
        packet[i] = (char)(0x61 ^ packet[i]);
    }
    return packet;
}
-but it isn't: the value returned is just another line of rubbish instead of the decrypted value. The output given is: ..O♦&/OOOe.
Well... at least the '/' is in the right place, for some reason.
Some more information:
The test packet I'm using is this:
Hex value: 0C 00 E2 66 65 47 4E 09 04 13 65 00
Plain text: ...feGN...e.
Decrypted: XX/hereXX
X = unknown value; I can't really remember, but it doesn't matter.
Using Hex Workshop you can decrypt the packet this way:
Special Paste the hex value as CF_TEXT, making sure the 'treat as hexadecimal value' box is checked.
Afterwards, select everything from the hexadecimal value you just pasted, except the first and last 2 bytes.
Go to Tools > Operations > Xor.
Select 'Treat data as 8 bit data' and set the value to '61'.
Press 'OK', and you're done.
That's all the information I can give at the moment, because I'm writing this off the top of my head.
Thank you for your time.
In case you don't see a question in this:
It would be great if someone could take a look at the code to see what's wrong with it, or if there's another way to do it. I'm converting this code because I'm horrible with C++, and want to create a C# application with that code.
PS: The code tags and such were a pain, so I'm sorry if the spacing etc. is a little messed up.
Your problem might be that, since .NET's char is Unicode, some characters will use more than one byte, while your bitmask is only one byte long, so the most significant byte is left unaltered.
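A quick sketch of that pitfall (illustrative values only, not from the question's packet):
// A .NET char is a 16-bit UTF-16 code unit, so a one-byte XOR mask
// only ever touches the low byte:
char c = '\u2665';                          // 0x2665
char x = (char)(c ^ 0x61);                  // 0x2604 -- the high byte 0x26 survives
Console.WriteLine(((int)x).ToString("X4")); // prints 2604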
I just tried your function (using byte[] instead of char[]) and it seems OK:
using System;
using System.Linq;
using System.Text;

class Program
{
    // OP's method: http://stackoverflow.com/questions/4815959
    public static byte[] CryptPacket(byte[] packet)
    {
        int paksize = packet.Length - 2;
        for (int i = 2; i < paksize; i++)
        {
            packet[i] = (byte)(0x61 ^ packet[i]);
        }
        return packet;
    }
    // http://stackoverflow.com/questions/321370 :)
    public static byte[] StringToByteArray(string hex)
    {
        return Enumerable.Range(0, hex.Length)
            .Where(x => x % 2 == 0)
            .Select(x => Convert.ToByte(hex.Substring(x, 2), 16))
            .ToArray();
    }
    static void Main(string[] args)
    {
        string hex = "0C 00 E2 66 65 47 4E 09 04 13 65 00".Replace(" ", "");
        byte[] input = StringToByteArray(hex);
        Console.WriteLine("Input:  " + Encoding.ASCII.GetString(input));
        byte[] output = CryptPacket(input);
        Console.WriteLine("Output: " + Encoding.ASCII.GetString(output));
        Console.ReadLine();
    }
}
Console output:
Input: ...feGN.....
Output: ...../here..
(where '.' represents funny ascii characters)
It seems a bit smelly that your CryptPacket method overwrites the input array with the output values, and that irrelevant characters are not trimmed. But if you are trying to port something, I guess you know what you are doing.
You could also consider trimming the input array to remove the unwanted bytes first, and then using a generic ROT13-style method (like this one). This way you keep your own "specialized" version with the 2-byte offset inside the crypt function itself, instead of something like:
public static byte[] CryptPacket(byte[] packet)
{
    // create a new instance
    byte[] output = new byte[packet.Length];
    // process ALL array items
    for (int i = 0; i < packet.Length; i++)
    {
        output[i] = (byte)(0x61 ^ packet[i]);
    }
    return output;
}
Here's an almost literal translation from C++ to C#, and it seems to work:
var packet = new byte[] {
    0x0C, 0x00, 0xE2, 0x66, 0x65, 0x47,
    0x4E, 0x09, 0x04, 0x13, 0x65, 0x00
};
CryptPacket(packet);

// displays "....../here." where "." represents an unprintable character
Console.WriteLine(Encoding.ASCII.GetString(packet));

// ...

void CryptPacket(byte[] packet)
{
    // read the packet size from the first two bytes (little-endian),
    // exactly as the C++ pointer cast does
    int paksize = (packet[0] | (packet[1] << 8)) - 2;
    for (int i = 2; i < paksize; i++)
    {
        packet[i] ^= 0x61;
    }
}
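Note that XOR with a fixed key is its own inverse, so the same routine both decrypts and re-encrypts:
CryptPacket(packet); // decrypt: "....../here."
CryptPacket(packet); // crypt again: back to the original bytes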
I'm still quite a newbie at coding, but my problem is how to split a string like "7900BD7400BD7500BD76A5FF" into "79 00 BD 74 00 BD 75 00 BD 76 A5 FF". My main problem was converting hex into ASCII, but every solution I found only converts "short" expressions. Maybe someone can give me some advice? I'd be really grateful.
A more general solution to the problem:
static string SeparateBy(
    this string str,
    string separator,
    int groupLength)
{
    var buffer = new StringBuilder();
    for (var i = 0; i < str.Length; i++)
    {
        // don't emit a separator before the very first group
        if (i > 0 && i % groupLength == 0)
        {
            buffer.Append(separator);
        }
        buffer.Append(str[i]);
    }
    return buffer.ToString();
}
And you'd call it like: "7900BD7400BD7500BD76A5FF".SeparateBy(" ", 2)
When possible, and if it's relatively easy, try to generalize methods so they can be reused more often. Of course, if things start to get complicated, generalizing can be self-defeating; knowing when (and when not) to generalize is a skill you acquire little by little.
Since you don't seem to have much experience with string processing, I'll give an example that doesn't require you to learn too many things at once:
string input = "7900BD7400BD7500BD76A5FF";
string output = string.Empty;
for (int i = 0; i < input.Length; i += 2) // go in steps of 2
{
    output += input[i];     // the first of 2 characters
    output += input[i + 1]; // the second of 2 characters
    output += ' ';          // the space
}
Console.WriteLine(output);
Please note that this solution only inserts a space after every second character. It does not check whether the characters are valid hex codes or whether the length is a multiple of 2; it assumes that whatever code runs before it produces a valid result. You should ensure that with unit tests.
This approach will not be very efficient for long strings (for long input, look into StringBuilder).
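For completeness, a minimal sketch of the same loop using a StringBuilder (same assumptions as above: valid hex digits, even length; needs using System.Text):
string input = "7900BD7400BD7500BD76A5FF";
var buffer = new StringBuilder(input.Length + input.Length / 2);
for (int i = 0; i < input.Length; i += 2)
{
    buffer.Append(input, i, 2); // the two hex digits
    buffer.Append(' ');         // the space
}
Console.WriteLine(buffer.ToString());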
If you followed this advice for creating the hex data, then it's of course much easier to insert the space right away:
public static string ByteArrayToString(byte[] ba)
{
    StringBuilder hex = new StringBuilder(ba.Length * 2);
    foreach (byte b in ba)
        hex.AppendFormat("{0:X2} ", b); // <-- a space inserted into the format string
    return hex.ToString();
}
I have this code in C that I need to port to C#:
void CryptoBuffer(unsigned char *Buffer, unsigned short length)
{
    unsigned short i;
    for(i = 0; i < length; i++)
    {
        *Buffer ^= 0xAA;
        *Buffer++ += 0xC9;
    }
}
I tried this:
public void CryptoBuffer(byte[] buffer, int length)
{
    for (int i = 0; i < length; i++)
    {
        buffer[i] ^= 0xAA;
        buffer[i] += 0xC9;
    }
}
But the outcome doesn't match the one expected.
According to the example, this:
A5 03 18 01...
should become this:
A5 6F 93 8B...
It also says the first byte is not encrypted, so that's why A5 stays the same.
EDIT for clarification: The specification just says you should skip the first byte, it doesn't go into details, so I'm guessing you just pass the sequence from position 1 until the last position to skip the first byte.
But my outcome with that C# port is:
A5 72 7B 74...
Is this port correct or am I missing something?
EDIT 2: For further clarification, this is a closed protocol, so I can't go into details; that's why I provided just enough information to help me port the code. That C code is what was given to me, and it's what the specification said it would do.
The real problem was that the 0xAA in the specification was wrong; that's why the output wasn't the expected one. The C# code provided here and in the accepted answer is correct after all.
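For reference, a minimal sketch of one way to honor the "skip the first byte" rule when calling the port; the copy-based approach here is an assumption, since the actual call site isn't public:
byte[] packet = { 0xA5, 0x03, 0x18, 0x01 };
byte[] body = new byte[packet.Length - 1];
Array.Copy(packet, 1, body, 0, body.Length);  // everything after the header byte
CryptoBuffer(body, body.Length);              // crypt only the body
Array.Copy(body, 0, packet, 1, body.Length);  // put it back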
Let's break it down, shall we, one step at a time.
void CryptoBuffer(unsigned char *Buffer, unsigned short length)
{
    unsigned short i;
    for(i = 0; i < length; i++)
    {
        *Buffer ^= 0xAA;
        *Buffer++ += 0xC9;
    }
}
Regardless of some other remarks, this is how you normally do these things in C/C++. There's nothing fancy about this code, and it isn't overly complicated, but I think it's good to break it down to show what happens.
Things to note:
unsigned char is basically the same as byte in C#.
unsigned short length has a value between 0 and 65535; an int will do the trick in C#.
Buffer is post-incremented inside the loop.
The byte addition (+= 0xC9) can overflow; if it does, the result is truncated to 8 bits in this case (see the sketch after this list).
The buffer is passed by pointer, so the pointer in the calling method will stay the same.
This is just basic C code, no C++, so it's quite safe to assume nobody is using operator overloading here.
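A quick sketch of the truncation point, since it's the usual trap in these ports (C# promotes byte arithmetic to int, so the cast back to byte is what does the wrap-around):
byte b = 0xA9;
// 0xA9 + 0xC9 = 0x172; the (byte) cast keeps only the low 8 bits
b = (byte)(b + 0xC9);
Console.WriteLine(b.ToString("X2")); // prints 72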
The only "difficult" thing here is the Buffer++. Details can be read in the book "Exceptional C++" from Sutter, but a small example explains this as well. And fortunately we have a perfect example at our disposal. A literal translation of the above code is:
void CryptoBuffer(unsigned char *Buffer, unsigned short length)
{
    unsigned short i;
    for(i = 0; i < length; i++)
    {
        *Buffer ^= 0xAA;
        unsigned char *tmp = Buffer;
        *tmp += 0xC9;
        Buffer = tmp + 1;
    }
}
In this case the temporary variable can be eliminated trivially, which leads us to:
void CryptoBuffer(unsigned char *Buffer, unsigned short length)
{
    unsigned short i;
    for(i = 0; i < length; i++)
    {
        *Buffer ^= 0xAA;
        *Buffer += 0xC9;
        ++Buffer;
    }
}
Changing this code to C# now is pretty easy:
private void CryptoBuffer(byte[] Buffer, int length)
{
    for (int i = 0; i < length; ++i)
    {
        Buffer[i] = (byte)((Buffer[i] ^ 0xAA) + 0xC9);
    }
}
This is basically the same as your ported code, which means something else went wrong somewhere down the road... So let's hack the CryptoBuffer, shall we? :-)
If we assume that the first byte isn't used (as you stated) and that the 0xAA and/or the 0xC9 are wrong, we can simply try all combinations:
static void Main(string[] args)
{
    byte[] orig = new byte[] { 0x03, 0x18, 0x01 };
    byte[] target = new byte[] { 0x6F, 0x93, 0x8B };
    for (int i = 0; i < 256; ++i)
    {
        for (int j = 0; j < 256; ++j)
        {
            bool okay = true;
            for (int k = 0; k < 3; ++k)
            {
                byte tmp = (byte)((orig[k] ^ i) + j);
                if (tmp != target[k]) { okay = false; break; }
            }
            if (okay)
            {
                Console.WriteLine("Solution for i={0} and j={1}", i, j);
            }
        }
    }
    Console.ReadLine();
}
There we go: oops, there are no solutions. That means the CryptoBuffer is not doing what you think it's doing, or part of the C code is missing here. For example, do they really pass Buffer to the CryptoBuffer method, or did they change the pointer beforehand?
Concluding, I think the only good answer here is that critical information for solving this question is missing.
The example you were provided with is inconsistent with the code in the C sample, and the C and C# code produce identical results.
The porting looks right; can you explain why 03 should become 6F? The fact that the result seems to be off the "expected" value by 03 is a bit suspicious to me.
The port looks right.
What I would do in this situation is to take out a piece of paper and a pen, write out the bytes in binary, do the XOR, and then the addition. Now compare this to the C and C# codes.
In C#, you are overflowing the byte, so it gets truncated to 0x72. Here's the math for the 0x03 byte, in both binary and hex:
  00000011   0x03
^ 10101010   0xAA
= 10101001   0xA9
+ 11001001   0xC9
= 101110010  0x172
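The same check as a one-liner, if you want to confirm it in code:
Console.WriteLine(((byte)((0x03 ^ 0xAA) + 0xC9)).ToString("X2")); // prints 72, not 6F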
Starting from the original C method, suppose first that the sequence is encrypted and decrypted symmetrically, i.e. by calling CryptoBuffer both ways:
initially invoked on a5 03 18 01 ...
a5 03 18 01 ... => d8 72 7b 74 ...
then on d8 72 7b 74 ...
d8 72 7b 74 ... => 3b a1 9a a7 ...
initially invoked on a5 6f 93 8b ...
a5 6f 93 8b ... => d8 8e 02 ea ...
then on d8 8e 02 ea ...
d8 8e 02 ea ... => 3b ed 71 09 ...
so we know that's not feasible.
Of course, you might have an asymmetric decrypt method; but first, either a5 03 18 01 ... => a5 6f 93 8b ... or the reverse direction would have to be achievable with some magic number. The code for a brute-force analysis is at the end of this post.
I made the magic number a variable for testing. The reproducibility analysis found that, with the magic number held fixed, the original sequence reappears on every 256th invocation; so with what we've seen so far, it's still possible.
However, the feasibility analysis, which tests all 256*256 = 65536 cases in both directions (original => expected and expected => original), finds that none of them works.
So now we know there is no way to decrypt the encrypted sequence to the expected result. We can only conclude that both languages (and your code) behave identically, and that the expected result is simply not achievable: the assumption behind it was broken.
Code for the analysis
public void CryptoBuffer(byte[] buffer, ushort magicShort) {
    var magicBytes = BitConverter.GetBytes(magicShort);
    var count = buffer.Length;
    for(var i = 0; i < count; i++) {
        buffer[i] ^= magicBytes[1];
        buffer[i] += magicBytes[0];
    }
}
int Analyze(
    Action<byte[], ushort> subject,
    byte[] expected, byte[] original,
    ushort? magicShort = default(ushort?)
) {
    Func<byte[], String> LaHeX = // render a byte array as a hex string
        arg => arg.Select(x => String.Format("{0:x2}\x20", x)).Aggregate(String.Concat);
    var temporal = (byte[])original.Clone();
    var found = 0;
    // iterate with an int: a ushort counter can never fail i>=0 and would loop forever
    for(var i = 0; i <= ushort.MaxValue; ++i) {
        if(found > 255) {
            Console.WriteLine(": found more than 256 matches; ");
            Console.WriteLine(": analysis stopped ");
            Console.WriteLine();
            break;
        }
        subject(temporal, magicShort ?? (ushort)i);
        if(expected.SequenceEqual(temporal)) {
            ++found;
            Console.WriteLine("i={0:x2}; temporal={1}", i, LaHeX(temporal));
        }
        // reference comparison: in the reproducibility runs expected==original,
        // so the buffer deliberately accumulates repeated applications
        if(expected != original)
            temporal = (byte[])original.Clone();
    }
    return found;
}
void PerformTest() {
    var original = new byte[] { 0xa5, 0x03, 0x18, 0x01 };
    var expected = new byte[] { 0xa5, 0x6f, 0x93, 0x8b };
    Console.WriteLine("--- reproducibility analysis --- ");
    Console.WriteLine("found: {0}", Analyze(CryptoBuffer, original, original, 0xaac9));
    Console.WriteLine();
    Console.WriteLine("--- feasibility analysis --- ");
    Console.WriteLine("found: {0}", Analyze(CryptoBuffer, expected, original));
    Console.WriteLine();

    // swap original and expected
    var temporal = original;
    original = expected;
    expected = temporal;

    Console.WriteLine("--- reproducibility analysis --- ");
    Console.WriteLine("found: {0}", Analyze(CryptoBuffer, original, original, 0xaac9));
    Console.WriteLine();
    Console.WriteLine("--- feasibility analysis --- ");
    Console.WriteLine("found: {0}", Analyze(CryptoBuffer, expected, original));
    Console.WriteLine();
}
Here's a demonstration:
http://codepad.org/UrX0okgu
It shows that the original code, given an input of A5 03 18 01, produces D8 72 7B 01. So:
the rule that the first byte is not decoded can be correct only if the buffer is passed starting from the 2nd byte (show us the call);
the output does not match (are you missing other calls?).
So your translation is correct, but your expectations of what the original code does are not.
I am getting this string:
8802000030000000C602000033000000000000800000008000000000000000001800000000000
and this is what I am expecting to get from it:
88020000 long in little endian => 648
30000000 long in little endian => 48
C6020000 long in little endian => 710
33000000 long in little endian => 51
The left side is the value I'm getting from the string, and the right side is the value I'm expecting. The right-side values might be wrong, but is there any way I can get the right side from the left?
I went through several threads here, like:
How to convert an int to a little endian byte array?
C# Big-endian ulong from 4 bytes
I tried quite a few different functions, but nothing gave me values anywhere near the ones I'm expecting.
Update:
I am reading a text file as below. Most of the data is in text format, but all of a sudden I get a bunch of GRAPHICS info that I'm not sure how to handle:
RECORD=28
cVisible=1
dwUser=0
nUID=23
c_status=1
c_data_validated=255
c_harmonic=0
c_dlg_verified=0
c_lock_sizing=0
l_last_dlg_updated=0
s_comment=
s_hlinks=
dwColor=33554432
memUsr0=
memUsr1=
memUsr2=
memUsr3=
swg_bUser=0
swg_dConnKVA=L0
swg_dDemdKVA=L0
swg_dCodeKVA=L0
swg_dDsgnKVA=L0
swg_dConnFLA=L0
swg_dDemdFLA=L0
swg_dCodeFLA=L0
swg_dDsgnFLA=L0
swg_dDiversity=L4607182418800017408
cStandard=0
guidDB={901CB951-AC37-49AD-8ED6-3753E3B86757}
l_user_selc_rating=0
r_user_selc_SCkA=
a_conn1=21
a_conn2=11
a_conn3=7
l_ct_ratio_1=x44960000
l_ct_ratio_2=x40a00000
l_set_ct_ratio_1=
l_set_ct_ratio_2=
c_ct_conn=0
ENDREC
GRAPHICS0=8802000030000000C602000033000000000000800000008000000000000000001800000000000
EOF
Depending on how you want to parse the input string, you could do something like this:
string input = "8802000030000000C6020000330000000000008000000080000000000000000018000000";
for (int i = 0; i < input.Length; i += 8)
{
    string subInput = input.Substring(i, 8);
    byte[] bytes = new byte[4];
    for (int j = 0; j < 4; ++j)
    {
        // each pair of hex digits is one byte, lowest byte first
        string toParse = subInput.Substring(j * 2, 2);
        bytes[j] = byte.Parse(toParse, NumberStyles.HexNumber);
    }
    // BitConverter uses the machine's byte order (little-endian on x86),
    // which matches the layout described in the question
    uint num = BitConverter.ToUInt32(bytes, 0);
    Console.WriteLine(subInput + " --> " + num);
}
88020000 --> 648
30000000 --> 48
C6020000 --> 710
33000000 --> 51
00000080 --> 2147483648
00000080 --> 2147483648
00000000 --> 0
00000000 --> 0
18000000 --> 24
Do you really literally mean that it's a string? What it looks like is this: you have a bunch of 32-bit words, each represented by 8 hex digits. Each one is presented in little-endian order, low byte first. You need to interpret each of those as an integer. So, e.g., 88020000 is the bytes 88 02 00 00, which is to say 0x00000288.
If you can clarify exactly what it is you've got -- a string, an array of some kind of numeric type, or whatever -- then it'll be easier to advise you further.
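Assuming it really is a string, a minimal sketch of decoding a single 8-digit group along these lines (ParseLittleEndianHex is a hypothetical helper name):
static uint ParseLittleEndianHex(string group) // e.g. "88020000"
{
    uint value = 0;
    for (int i = 0; i < 4; ++i)
    {
        // the first hex pair is the least significant byte
        value |= (uint)Convert.ToByte(group.Substring(i * 2, 2), 16) << (8 * i);
    }
    return value;
}

// ParseLittleEndianHex("88020000") == 0x288 == 648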
I want to convert some code from Java to C#.
Java Code:
private static final byte[] SALT = "NJui8*&N823bVvy03^4N".getBytes();

public static final String getSHA256Hash(String secret)
{
    try {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(secret.getBytes());
        byte[] hash = digest.digest(SALT);
        StringBuffer hexString = new StringBuffer();
        for (int i = 0; i < hash.length; i++) {
            hexString.append(Integer.toHexString(0xFF & hash[i]));
        }
        return hexString.toString();
    } catch (NoSuchAlgorithmException e) {
        e.printStackTrace();
    }
    throw new RuntimeException("SHA-256 realization algorithm not found in JDK!");
}
When I tried to use the SimpleHash class, I got different hashes.
UPDATE:
For example:
Java: byte[] hash = digest.digest(SALT);
generates (first 6 bytes):
[0] = 9
[1] = -95
[2] = -68
[3] = 64
[4] = -11
[5] = 53
....
C# code (class SimpleHash):
string hashValue = Convert.ToBase64String(hashWithSaltBytes);
hashWithSaltBytes has (first 6 bytes):
[0] 175 byte
[1] 209 byte
[2] 120 byte
[3] 74 byte
[4] 74 byte
[5] 227 byte
The String.getBytes method encodes the string to bytes using the platform's default charset, whereas the example code you linked uses UTF-8.
Try this:
digest.update(secret.getBytes("UTF-8"));
Secondly, the Integer.toHexString method returns the hexadecimal result with no leading 0s.
The C# code you link to also uses a salt, but the Java code does not. If you use a salt with one but not the other, then the results will be (and should be!) different.
hexString.append(Integer.toHexString(0xFF & hash[i]));
You are building the hash string incorrectly. Integer.toHexString does not include leading zeros, so while Integer.toHexString(0xFF) == "ff", the problem is that Integer.toHexString(0x05) == "5".
Suggested correction: String.format("%02x", hash[i] & 0xFF)
You can use the following Java to match that of C#:
public static String getEncryptedPassword(String clearTextPassword) throws NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    md.update(clearTextPassword.getBytes(StandardCharsets.UTF_8));
    byte[] digest = md.digest();
    String hex = String.format("%064x", new BigInteger(1, digest));
    String st = hex.toUpperCase();
    // insert a dash after every second hex digit
    for (int i = 2; i < (hex.length() + hex.length() / 2) - 1; ) {
        st = new StringBuffer(st).insert(i, "-").toString();
        i = i + 3;
    }
    return st;
}
You didn't really write how you called the SimpleHash class -- with which parameters and so on.
But note that its ComputeHash method has this in its documentation:
Hash value formatted as a base64-encoded string.
Your class instead formats the output as hexadecimal, which will obviously look different.
Also, SimpleHash interprets the salt as base64, while your method interprets it as ASCII (or whatever your system encoding is -- most probably something ASCII-compatible, and the string only contains ASCII characters).
Also, the output of SimpleHash includes the salt (to allow reproducing it for the "verify" part when using a random salt), which your method's output doesn't.
(More points are already mentioned by the other answers.)
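For comparison, a minimal C# sketch that should match the corrected Java (assuming the Java side is changed to UTF-8 and zero-padded lower-case hex as suggested above; note that digest.digest(SALT) appends the salt after the secret):
using System;
using System.Security.Cryptography;
using System.Text;

static string GetSha256Hash(string secret)
{
    byte[] salt = Encoding.UTF8.GetBytes("NJui8*&N823bVvy03^4N");
    byte[] secretBytes = Encoding.UTF8.GetBytes(secret);
    byte[] input = new byte[secretBytes.Length + salt.Length];
    Buffer.BlockCopy(secretBytes, 0, input, 0, secretBytes.Length);
    Buffer.BlockCopy(salt, 0, input, secretBytes.Length, salt.Length);
    using (SHA256 sha = SHA256.Create())
    {
        byte[] hash = sha.ComputeHash(input);
        var hex = new StringBuilder(hash.Length * 2);
        foreach (byte b in hash)
            hex.AppendFormat("{0:x2}", b); // zero-padded, lower-case
        return hex.ToString();
    }
}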
I have this line that I need to write in C#:
sprintf(
    currentTAG,
    "%2.2X%2.2X,%2.2X%2.2X",
    hBuffer[ presentPtr+1 ],
    hBuffer[ presentPtr ],
    hBuffer[ presentPtr+3 ],
    hBuffer[ presentPtr+2 ] );
hBuffer is an unsigned char array.
In C# I have the same data in a byte array, and I need to implement this line...
Please help...
Check if this works:
byte[] hBuffer = { ... };
int presentPtr = 0;
string currentTAG = string.Format("{0:X2}{1:X2},{2:X2}{3:X2}",
    hBuffer[presentPtr + 1],
    hBuffer[presentPtr],
    hBuffer[presentPtr + 3],
    hBuffer[presentPtr + 2]);
This is another option, but less efficient:
byte[] hBuffer = { ... };
int presentPtr = 0;
string currentTAG = string.Format("{0}{1},{2}{3}",
    hBuffer[presentPtr + 1].ToString("X2"),
    hBuffer[presentPtr].ToString("X2"),
    hBuffer[presentPtr + 3].ToString("X2"),
    hBuffer[presentPtr + 2].ToString("X2"));
Converting each byte of hBuffer to a string, as in the second example, is less efficient. The first example will give you better performance, especially if you do this many times, by virtue of not spamming the garbage collector.
[Off the top of my head] In C/C++, %2.2X outputs the value in hexadecimal using upper-case letters and at least two digits (left-padded with zero).
In C++, the following example outputs 01 61 to the console:
unsigned char test[] = { 0x01, 'a' };
printf("%2.2X %2.2X", test[0], test[1]);
Using the information above, the following C# snippet also outputs 01 61 to the console:
byte[] test = { 0x01, (byte) 'a' };
Console.WriteLine(String.Format("{0:X2} {1:X2}", test[0], test[1]));
Composite Formatting: this page discusses how to use the string.Format() function.
You are looking for the String.Format method.