I am getting this string
8802000030000000C602000033000000000000800000008000000000000000001800000000000
and this is what I am expecting to convert from that string:
88020000 long in little endian => 648
30000000 long in little endian => 48
C6020000 long in little endian => 710
33000000 long in little endian => 51
The left side is the value I am getting from the string and the right side is the value I am expecting. The right-side values might be wrong, but is there any way I can get the right-side value from the left?
I went through several threads here like
How to convert an int to a little endian byte array?
C# Big-endian ulong from 4 bytes
I tried quite a few different functions, but nothing gives me values that are anywhere near what I am expecting.
Update:
I am reading a text file as below. Most of the data is in plain text format, but all of a sudden I get a bunch of GRAPHICS info, and I am not sure how to handle it.
RECORD=28
cVisible=1
dwUser=0
nUID=23
c_status=1
c_data_validated=255
c_harmonic=0
c_dlg_verified=0
c_lock_sizing=0
l_last_dlg_updated=0
s_comment=
s_hlinks=
dwColor=33554432
memUsr0=
memUsr1=
memUsr2=
memUsr3=
swg_bUser=0
swg_dConnKVA=L0
swg_dDemdKVA=L0
swg_dCodeKVA=L0
swg_dDsgnKVA=L0
swg_dConnFLA=L0
swg_dDemdFLA=L0
swg_dCodeFLA=L0
swg_dDsgnFLA=L0
swg_dDiversity=L4607182418800017408
cStandard=0
guidDB={901CB951-AC37-49AD-8ED6-3753E3B86757}
l_user_selc_rating=0
r_user_selc_SCkA=
a_conn1=21
a_conn2=11
a_conn3=7
l_ct_ratio_1=x44960000
l_ct_ratio_2=x40a00000
l_set_ct_ratio_1=
l_set_ct_ratio_2=
c_ct_conn=0
ENDREC
GRAPHICS0=8802000030000000C602000033000000000000800000008000000000000000001800000000000
EOF
Depending on how you want to parse up the input string, you could do something like this:
string input = "8802000030000000C6020000330000000000008000000080000000000000000018000000";
for (int i = 0; i < input.Length; i += 8)
{
    string subInput = input.Substring(i, 8);
    byte[] bytes = new byte[4];
    for (int j = 0; j < 4; ++j)
    {
        string toParse = subInput.Substring(j * 2, 2);
        bytes[j] = byte.Parse(toParse, NumberStyles.HexNumber);
    }
    uint num = BitConverter.ToUInt32(bytes, 0);
    Console.WriteLine(subInput + " --> " + num);
}
88020000 --> 648
30000000 --> 48
C6020000 --> 710
33000000 --> 51
00000080 --> 2147483648
00000080 --> 2147483648
00000000 --> 0
00000000 --> 0
18000000 --> 24
Do you really literally mean that that's a string? What it looks like is this: You have a bunch of 32-bit words, each represented by 8 hex digits. Each one is presented in little-endian order, low byte first. You need to interpret each of those as an integer. So, e.g., 88020000 is 88 02 00 00, which is to say 0x00000288.
If you can clarify exactly what it is you've got -- a string, an array of some kind of numeric type, or what -- then it'll be easier to advise you further.
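For what it's worth, here is a minimal sketch of that interpretation (assuming the input really is a hex string and every 8 hex digits form one little-endian 32-bit word): parse the group as written, then byte-swap it. NumberStyles lives in System.Globalization.
string word = "88020000";
uint raw = uint.Parse(word, NumberStyles.HexNumber);   // digits as written: 0x88020000
uint value = ((raw & 0x000000FFu) << 24) |
             ((raw & 0x0000FF00u) << 8) |
             ((raw & 0x00FF0000u) >> 8) |
             ((raw & 0xFF000000u) >> 24);               // byte-swap: 88 02 00 00 -> 0x00000288
Console.WriteLine(value);                               // 648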
Related
Is there a simple way to convert decimal/ASCII 6-bit decimal numbers from 1 to 100 to a binary representation?
To be more specific, I'm interested in 6-bit binary ASCII. So I made this to get an Int32 value.
For example, "u" is changed to 61 instead of 117 as in standard decimal ASCII.
Then this 61 needs to be "111101" instead of the traditional "01110101", but after the 48 + 8 math that doesn't matter, since it's now normal binary, just with only 6 bits used.
foreach (char c in partToDecode)
{
    var sum = c - 48;
    if (sum > 40)
    {
        sum = sum - 8;
    }
}
Found this, but I don't have a clue how to transpose it to C#:
void binary(unsigned n) {
    unsigned i;
    // Reverse loop
    for (i = 1u << 31; i > 0; i >>= 1)
        printf("%u", !!(n & i));
}
. . .
binary(65);
You can try Convert.ToString, e.g.
int source = 61;
// "111101"
string result = Convert.ToString(source, 2).PadLeft(6, '0');
Fiddle
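Putting the two together, a rough sketch (reusing partToDecode and the offset math from the question) might look like this:
var bits = new StringBuilder();   // System.Text
foreach (char c in partToDecode)
{
    var sum = c - 48;             // 'u' (117) -> 69
    if (sum > 40)
    {
        sum = sum - 8;            // 69 -> 61
    }
    bits.Append(Convert.ToString(sum, 2).PadLeft(6, '0'));   // 61 -> "111101"
}
string result = bits.ToString();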
I am trying to reverse engineer a serial port device that uses HDLC for its packet format. Based on the documentation, the packet should contain a bitwise inversion of the command (first 4 bytes), which in this case is "HELO". Monitoring the serial port when using the original program shows what the bitwise inversion should be:
HELO -> b7 ba b3 b0
READ -> ad ba be bb
The problem is, I am not getting values even remotely close.
public object checksum
{
    get
    {
        var cmdDec = (int)Char.GetNumericValue((char)this.cmd);
        return (cmdDec ^ 0xffffffff);
    }
}
You have to work with bytes, not with chars:
string source = "HELO";
// Encoding.ASCII: I assume that the command line has ASCII encoded commands only
byte[] result = Encoding.ASCII
    .GetBytes(source)
    .Select(b => unchecked((byte)~b)) // unchecked: ~b returns int; can exceed byte.MaxValue
    .ToArray();
Test (let's represent the result as hexadecimals)
// b7 ba b3 b0
Console.Write(string.Join(" ", result.Select(b => b.ToString("x2"))));
Char is not a byte. You should use bytes instead of chars.
So this.cmd is an array of bytes? You could use the BitConverter.ToUInt32()
PSEUDO: (you might fix some casting)
public uint checksum
{
    get
    {
        var cmdDec = BitConverter.ToUInt32(this.cmd, 0);
        return (cmdDec ^ 0xffffffff);
    }
}
If this.cmd is a string, you could get a byte array from it with Encoding.UTF8.GetBytes(string).
Your bitwise inversion isn't doing what you think it's doing. Take the following, for example:
int i = 5;
var j = i ^ 0xFFFFFFFF;
var k = ~i;
The first example is performing the inversion the way you are doing it, by XOR-ing the number with a max value. The second value uses the C# Bitwise-NOT ~ operator.
After running this code, j will be a long value equal to 4294967290, while k will be an int value equal to -6. Their binary representation will be the same, but j will include another 32 bits of 0's to go along with it. There's also the obvious problem of them being completely different numbers, so any math performed on the values will be completely different depending on what you are using.
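If the goal is the per-byte inversion from the question (HELO -> b7 ba b3 b0), here is a tiny sketch of keeping the result within a byte:
byte b = (byte)'H';                            // 0x48
byte inverted = (byte)~b;                      // cast back down to a byte: 0xB7
Console.WriteLine(inverted.ToString("x2"));    // b7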
I am using BitConverter.ToString(bytes) to convert my bytes to a hexadecimal string, which I then convert into an integer or float.
But the input stream contains 00 wherever a byte's value is 0. So suppose I have an integer represented by 2 bytes of input starting at position x, where the first byte is EE and the second byte is 00. When I use BitConverter.ToString(bytes, x, 2).Replace("-", "") I get the output EE00, whose integer value is 60928, but in this case the output should be 238, that is, only the first byte EE converted to an integer.
But in some other case the 2 bytes might be EE01, whose integer value would be 60929, which is correct in that case.
Any suggestions on how I can solve my problem?
Since some people are saying the question is confusing, I will restate my problem. I have a long hexadecimal string as input. In the hexadecimal string:
1) The first 12 bytes represent a string.
2) The next 11 bytes also represent some other string.
3) The next 1 byte represents an integer.
4) The next 3 bytes represent an integer.
5) The next 4 bytes represent an integer.
6) The next 4 bytes represent a float.
7) The next 7 bytes represent a string.
8) The next 5 bytes represent an integer.
So for the 4th case, if the bytes are ee 00 00 then I should neglect the 0s and convert ee to an integer. But if they are ee 00 ee then I should convert ee00ee to an integer. I will always follow the same pattern as listed above.
This method converts a hex string to a byte array.
public static byte[] ConvertHexString(string hex)
{
    Contract.Requires(!string.IsNullOrEmpty(hex));
    // get length
    var len = hex.Length;
    if (len % 2 == 1)
    {
        throw new ArgumentException("hexValue: " + hex);
    }
    var lenHalf = len / 2;
    // create a byte array
    var bs = new byte[lenHalf];
    try
    {
        // convert the hex string to bytes
        for (var i = 0; i != lenHalf; i++)
        {
            bs[i] = (byte)int.Parse(hex.Substring(i * 2, 2), NumberStyles.HexNumber, CultureInfo.InvariantCulture);
        }
    }
    catch (Exception ex)
    {
        throw new ParseException(ex.Message, ex);
    }
    // return the byte array
    return bs;
}
From the other side:
public static string ConvertByteToHexString(byte num)
{
    var text = BitConverter.ToString(new[] { num });
    if (text.Length == 1)
    {
        text = "0" + text;
    }
    return text;
}
My problem has been solved. I was making an endianness mistake: I was receiving the data as EE 00 and I should have taken it as 00 EE before converting to an integer. Thanks to all who offered solutions, and sorry for leaving this important fact out of the question.
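For illustration, assuming the fields really arrive low byte first as described, a small helper (ReadLittleEndian is just a name for this sketch) covers integer fields of any width:
static uint ReadLittleEndian(byte[] data, int offset, int count)
{
    uint value = 0;
    for (int i = count - 1; i >= 0; i--)
    {
        value = (value << 8) | data[offset + i];   // first byte is the lowest
    }
    return value;
}
// EE 00    -> 238 (not 60928)
// EE 00 EE -> 0xEE00EE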
I have a very basic understanding of bitwise operators, but I am at a loss to understand how the value is assigned. If someone can point me in the right direction I would be very grateful.
My Hex Address: 0xE0074000
The Decimal value: 3758571520
The Binary Value: 11100000000001110100000000000000
I am trying to program a simple microcontroller and use the Register access class in the Microsoft .NET Micro Framework to make the controller do what I want it to do.
Register T2IR = new Register(0xE0074000);
T2IR.Write(1 << 22);
In my example above, how are the bits in the binary representation moved? I don't understand how the manipulation of the bits is applied at that address in binary form.
Forget about decimals for a start. You'll get back to that later.
First you need to see the logic between HEX and BINARY.
Okay, for a byte you have 8 bits (#7-0)
#7 = 0x80 = %1000 0000
#6 = 0x40 = %0100 0000
#5 = 0x20 = %0010 0000
#4 = 0x10 = %0001 0000
#3 = 0x08 = %0000 1000
#2 = 0x04 = %0000 0100
#1 = 0x02 = %0000 0010
#0 = 0x01 = %0000 0001
When you read that in binary, in a byte, like this one %00001000
Then the bit that is set is the 4th from the right, aka bit #3, which has a value of 0x08 (in fact also 8 in decimal, but still forget about decimal while you figure out hex/binary).
Now if we have the binary number %10000000
This is the #7 bit which is on. That has a hex value of 0x80
So all you have to do is sum them up in "nibbles" (each half of a hex byte is called a nibble by some geeks).
The maximum you can get in a nibble is 15 decimal, or F; for the upper nibble that is 0x10 + 0x20 + 0x40 + 0x80 = 0xF0 = binary %11110000.
so all lights on (4 bits) in a nibble = F in hex (15 decimal)
same goes for the lower nibble.
Do you see the pattern?
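As a quick illustration of the nibble idea, applied to the address from the question, each hex digit maps to exactly four bits:
uint address = 0xE0074000;
foreach (char nibble in address.ToString("X8"))
{
    int n = Convert.ToInt32(nibble.ToString(), 16);
    Console.WriteLine(nibble + " = " + Convert.ToString(n, 2).PadLeft(4, '0'));
}
// E = 1110, 0 = 0000, 0 = 0000, 7 = 0111, 4 = 0100, 0 = 0000, 0 = 0000, 0 = 0000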
Refer to #BerggreenDK's answer for what a shift is. Here's some info about what it's like in hex (same thing, just different representation):
Shifting is a very simple concept to understand. The register is of a fixed size, and whatever bits that won't fit falls off the end. So, take this example:
uint num = 0xffffu << 16;
Your variable in hex would now be 0xffff0000 (unsigned, so the later right shift brings in zeros). Note how the right end is filled with zeros. Now, let's shift it again.
num = num << 8;
num = num >> 8;
num is now 0x00ff0000. You don't get your old bits back. The same applies to right shifts as well.
Trick: Left shifting by 1 is like multiplying the number by 2, and right shifting by 1 is like integer dividing everything by 2.
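Applied to the register write from the question, 1 << 22 is just an int with bit #22 set; a quick check of the value:
int mask = 1 << 22;                                         // 4194304
Console.WriteLine(mask.ToString("X8"));                     // 00400000
Console.WriteLine(Convert.ToString(mask, 2).PadLeft(32, '0'));
// 00000000010000000000000000000000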
I've been working on converting a C++ crypting method to C#. The problem is, I can't get it to encrypt/decrypt the way I want it to.
The idea is simple, I capture a packet, and decrypt it. The output will be:
Packet Size - Command/Action - Null (End)
(The decryptor cuts off the first and last 2 bytes)
The C++ code is this:
// Crypt the packet with Xor operator
void cryptPacket(char *packet)
{
    unsigned short paksize = (*((unsigned short*)&packet[0])) - 2;
    for (int i = 2; i < paksize; i++)
    {
        packet[i] = 0x61 ^ packet[i];
    }
}
So I thought this would work in C# if I didn't want to use pointers:
public static char[] CryptPacket(char[] packet)
{
    ushort paksize = (ushort)(packet.Length - 2);
    for (int i = 2; i < paksize; i++)
    {
        packet[i] = (char)(0x61 ^ packet[i]);
    }
    return packet;
}
- but it isn't: the value returned is just another line of rubbish instead of the decrypted value. The output given is: ..O♦&/OOOe.
Well... at least the '/' is in the right place, for some reason.
Some more information:
The test packet I'm using is this:
Hex value: 0C 00 E2 66 65 47 4E 09 04 13 65 00
Plain text: ...feGN...e.
Decrypted: XX/hereXX
X = unknown value; I can't really remember, but it doesn't matter.
Using Hex Workshop you can decrypt the packet this way:
Special Paste the hex value as CF_TEXT, and make sure the 'treat as hexadecimal value' box is checked.
Afterwards, select everything from the hexadecimal value you just pasted, except the first and last 2 bytes.
Go to Tools>Operations>Xor.
Select 'Treat data as 8 bit data' and set value to '61'.
Press 'OK', and you're done.
That's all the information I can give at the moment, because I'm writing this off the top of my head.
Thank you for your time.
In case you don't see a question in this:
It would be great if someone could take a look at the code to see what's wrong with it, or if there's another way to do it. I'm converting this code because I'm horrible with C++, and want to create a C# application with that code.
Ps: The code tags and such were a pain, so I'm sorry if the spacing etc. is a little messed up.
Your problem might be that, as .NET's char is Unicode, some characters are going to use more than one byte, and your bitmask is only one byte long. So the most significant byte will be left unaltered.
I just tried your function and it seems ok:
using System;
using System.Linq;
using System.Text;

class Program
{
    // OP's method: http://stackoverflow.com/questions/4815959
    public static byte[] CryptPacket(byte[] packet)
    {
        int paksize = packet.Length - 2;
        for (int i = 2; i < paksize; i++)
        {
            packet[i] = (byte)(0x61 ^ packet[i]);
        }
        return packet;
    }

    // http://stackoverflow.com/questions/321370 :)
    public static byte[] StringToByteArray(string hex)
    {
        return Enumerable.Range(0, hex.Length).
            Where(x => 0 == x % 2).
            Select(x => Convert.ToByte(hex.Substring(x, 2), 16)).
            ToArray();
    }

    static void Main(string[] args)
    {
        string hex = "0C 00 E2 66 65 47 4E 09 04 13 65 00".Replace(" ", "");
        byte[] input = StringToByteArray(hex);
        Console.WriteLine("Input: " + ASCIIEncoding.ASCII.GetString(input));
        byte[] output = CryptPacket(input);
        Console.WriteLine("Output: " + ASCIIEncoding.ASCII.GetString(output));
        Console.ReadLine();
    }
}
Console output:
Input: ...feGN.....
Output: ...../here..
(where '.' represents funny ascii characters)
It seems a bit smelly that your CryptPacket method is overwriting the initial array with the output values. And that irrelevant characters are not trimmed. But if you are trying to port something, I guess you should know what you are doing.
You could also consider trimming the input array to remove the unwanted characters first, and then using a generic ROT13-style method (like this one). That way you wouldn't need a "specialized" version with the 2-byte offsets inside the crypt function itself, and could use something like:
public static byte[] CryptPacket(byte[] packet)
{
    // create a new instance
    byte[] output = new byte[packet.Length];
    // process ALL array items
    for (int i = 0; i < packet.Length; i++)
    {
        output[i] = (byte)(0x61 ^ packet[i]);
    }
    return output;
}
Here's an almost literal translation from C++ to C#, and it seems to work:
var packet = new byte[] {
    0x0C, 0x00, 0xE2, 0x66, 0x65, 0x47,
    0x4E, 0x09, 0x04, 0x13, 0x65, 0x00
};
CryptPacket(packet);

// displays "....../here." where "." represents an unprintable character
Console.WriteLine(Encoding.ASCII.GetString(packet));

// ...

void CryptPacket(byte[] packet)
{
    int paksize = (packet[0] | (packet[1] << 8)) - 2;
    for (int i = 2; i < paksize; i++)
    {
        packet[i] ^= 0x61;
    }
}