C# & ASM - Jump to Address

I am writing a code injection in C# to analyze potentially malicious software.
For that I jump to freshly allocated memory, write to that memory, and jump back / return.
The code injection / allocation part works without issues, but the jump part is giving me trouble.
I took sample code for a 32-bit process, but mine is 64-bit, so I converted it, and I may have broken something while doing that, because the jump address is not the one it should jump to.
As an example, I want to place a code injection at address 7FF95BBD0000.
My code looks like this:
public void injection()
{
    _oMemory.Alloc(out _newmem, 0x300); // Allocate 0x300 bytes of memory for the code injection (this part works)
    var CodeBaseAddress = ModuleBaseAddress + 0x36652EE; // I want to jump from this address (7FFBA13F52EE = 140718718800622) to (7FF95BBD0000)
    var CodeInjectionAddress = (ulong)_newmem; // This is the address of the code injection that I want to jump to (7FF95BBD0000 = 140708962697216)
    var Jumpbytes = Jmp(CodeInjectionAddress, CodeBaseAddress, false); // This should give me the byte[] for the jump from CodeBaseAddress to CodeInjectionAddress, but it gives a slightly wrong output (output = {233, 13, 173, 125, 186, 253, 255, 255, 255}, in hex = {0xE9, 0x0D, 0xAD, 0x7D, 0xBA, 0xFD, 0xFF, 0xFF, 0xFF}, disassembled = "jmp 7FFB5BBD0000")
    _oMemory.Write((IntPtr)CodeBaseAddress, Jumpbytes); // This writes the jump bytes to the CodeBaseAddress, which works
    _oMemory.CloseHandle(); // Closes the handle, which also works
}
public static byte[] Jmp(ulong jmpto, ulong jmpfrom, bool nop)
{
    var test = jmpto - jmpfrom;
    var test2 = test - 5; // Subtract the length of the 5-byte E9 jump
    var dump = test2.ToString("x"); // Offset as a hex string
    if (dump.Length == 7) // Pad to full bytes
        dump = "0" + dump;
    dump += "E9"; // Add the JMP opcode
    if (nop)
        dump = "90" + dump; // Add a NOP if needed
    var hex = new byte[dump.Length / 2];
    for (var i = 0; i < hex.Length; i++)
        hex[i] = Convert.ToByte(dump.Substring(i * 2, 2), 16); // Two hex chars per byte
    Array.Reverse(hex); // Reverse the byte array for use with Write()
    return hex;
}
Notice how the Jmp method produces a jump to 7FFB5BBD0000 instead of 7FF95BBD0000: the 7FF9 becomes 7FFB, which leads to a wrong jump address.
Another odd thing is that the generated jump disassembles differently from the one Cheat Engine creates when I set the jump to my desired address manually.
I guess my jump is too "big" for the E9 jump, so my method produces a wrong jump address? How could I fix that?
Thanks to anyone helping me out or pointing me in the right direction.
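The guess at the end is on the right track: E9 is a near jump whose operand is a signed 32-bit displacement, so it can only reach targets within roughly ±2 GB of the next instruction, and the two addresses above are almost 10 GB apart. A common workaround on x64 (a minimal sketch, not the poster's code; the helper name is made up) is an absolute indirect jump, which takes 14 bytes but reaches any address:

public static byte[] AbsJmp(ulong jmpTo)
{
    // FF 25 00 00 00 00 = jmp qword ptr [rip+0]: jump through the 8-byte
    // absolute address stored immediately after the instruction.
    var bytes = new byte[14];
    bytes[0] = 0xFF;
    bytes[1] = 0x25;
    // bytes[2..5] stay zero (the rip-relative displacement of 0).
    BitConverter.GetBytes(jmpTo).CopyTo(bytes, 6); // little-endian 64-bit target
    return bytes;
}

The 5-byte E9 form is still fine whenever the target happens to lie within rel32 range; otherwise this stub works in either direction.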


How do I add bits to a MemoryStream

So I've been trying to add bits of a value to a MemoryStream, but the issue is that I have no idea how. I've seen this used for performance when it comes to networking.
I know I want a function that takes the bit value and how many bits it takes to store that value. So for instance, to store the value 3 I would need to allocate 2 bits: 0000 0000 0000 0011. I would essentially pack the bits into a byte array and then add that byte array to the MemoryStream:
var ms = new MemoryStream();
ms.WriteByte(1);
ms.WriteByte(1);
ms.WriteByte(1);
ms.WriteByte(1);
ms.WriteByte(1);
WriteBits(2, 3);
WriteBits(1, 1);

void WriteBits(int numBits, int value)
{
    /* Convert the "value" to a byte or bytes and add it to the MemoryStream */
}
How do I properly implement this?
Java Example
public void writeBits(int numBits, int value) {
    int bytePos = bitPosition >> 3;
    int bitOffset = 8 - (bitPosition & 7);
    bitPosition += numBits;
    for (; numBits > bitOffset; bitOffset = 8) {
        buffer[bytePos] &= ~bitMaskOut[bitOffset]; // mask out the desired area
        buffer[bytePos++] |= (value >> (numBits - bitOffset)) & bitMaskOut[bitOffset];
        numBits -= bitOffset;
    }
    if (numBits == bitOffset) {
        buffer[bytePos] &= ~bitMaskOut[bitOffset];
        buffer[bytePos] |= value & bitMaskOut[bitOffset];
    } else {
        buffer[bytePos] &= ~(bitMaskOut[numBits] << (bitOffset - numBits));
        buffer[bytePos] |= (value & bitMaskOut[numBits]) << (bitOffset - numBits);
    }
}
So I've been trying to add bits of a value to a MemoryStream
You don't; MemoryStream only handles whole bytes.
So for instance, to store the value 3 I would need to allocate 2 bits
This would only be true if the range of values you want to store is [0, 3]. If you want the possibility of storing any larger value you need more bits.
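As a rule of thumb (a sketch, not from the original answer; BitOperations needs .NET Core 3.0 or later), the bit count for a range [0, max] is the position of the highest set bit:

using System.Numerics;

// Bits needed to store any value in [0, max]: max = 3 -> 2, max = 4 -> 3.
static int BitsNeeded(uint max) =>
    max == 0 ? 1 : 32 - BitOperations.LeadingZeroCount(max);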
How do I properly implement this?
You would need to implement your own bit stream. The Java example looks like it has a byte[] buffer and a bitPosition; you would need to implement these. The bit-fiddling code should work just about the same in C# (see the sketch below). Once you have a byte[], it is trivial to write it out to whatever stream you want, and usually possible to send it directly over the network.
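A minimal C# port of the Java method above, under two assumptions not in the original: the buffer size is arbitrary, and bitMaskOut[n] is a table whose entry has the lowest n bits set (which is what the Java code implies):

using System.Linq;

class BitWriter
{
    private readonly byte[] buffer = new byte[4096]; // assumed capacity
    private int bitPosition;

    // bitMaskOut[n] has the lowest n bits set: 0, 1, 3, 7, 15, ...
    private static readonly int[] bitMaskOut =
        Enumerable.Range(0, 33).Select(n => (int)((1L << n) - 1)).ToArray();

    public void WriteBits(int numBits, int value)
    {
        int bytePos = bitPosition >> 3;
        int bitOffset = 8 - (bitPosition & 7);
        bitPosition += numBits;
        for (; numBits > bitOffset; bitOffset = 8)
        {
            buffer[bytePos] &= (byte)~bitMaskOut[bitOffset]; // mask out the target area
            buffer[bytePos++] |= (byte)((value >> (numBits - bitOffset)) & bitMaskOut[bitOffset]);
            numBits -= bitOffset;
        }
        if (numBits == bitOffset)
        {
            buffer[bytePos] &= (byte)~bitMaskOut[bitOffset];
            buffer[bytePos] |= (byte)(value & bitMaskOut[bitOffset]);
        }
        else
        {
            buffer[bytePos] &= (byte)~(bitMaskOut[numBits] << (bitOffset - numBits));
            buffer[bytePos] |= (byte)((value & bitMaskOut[numBits]) << (bitOffset - numBits));
        }
    }
}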
I've seen that it's used for performance when it comes to networking
I think there is a significant misunderstanding here. While you could manually manipulate individual bits, in most cases it would just be a waste of (development) time.
In general, a better way to get good performance is to use existing, well-optimized and well-designed libraries, and there are a variety of serialization libraries that convert objects to byte streams for you. An example would be protobuf (protobuf-net for .NET), which actually encodes numbers with a variable number of bytes.
If you still need smaller data, it is usually more efficient to use some form of compression. The old classic deflate usually gives a good compromise between size and performance, while algorithms like LZ4 prioritize speed over compression ratio.
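For illustration (a minimal sketch using the built-in System.IO.Compression API):

using System.IO;
using System.IO.Compression;

// Compress a payload with deflate before sending it over the wire.
static byte[] Deflate(byte[] data)
{
    using var output = new MemoryStream();
    using (var deflate = new DeflateStream(output, CompressionLevel.Fastest))
        deflate.Write(data, 0, data.Length); // stream must be closed to flush all output
    return output.ToArray();
}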
I had exactly the same problem and wrote an entire BitStream library which can handle reads and writes of an arbitrary number of bits to a MemoryStream (and any other stream, too). The library is open-source, MIT-licensed and fast (https://github.com/martinweihrauch/BitStream).
Writing bits to a MemoryStream.
These are the steps to write a value with a certain number of bits to a specific position in the MemoryStream:
Have a Stream available, e. g. a MemoryStream(), to which you want to write.
Connect this Stream to a new BitStream:
using SharpBitStream;
uint[] testDataUnsigned = { 5, 62, 17, 50, 33 };
var ms = new MemoryStream();
var bs = new BitStream(ms);
Now, you can start writing to the BitStream like this:
foreach (var bits in testDataUnsigned)
{
    bs.WriteUnsigned(6, (ulong)bits);
}
Writing can be done as above by providing only the bit length and the value, but you of course also have full control of exactly where to write the bits, like so:
bs.WriteUnsigned(3, 2, 4, 5);
// Overloaded signature of WriteUnsigned:
// public void WriteUnsigned(long offsetByteStream, int offsetBit, int bitLength, ulong value)
// For signed numbers (e. g. -17), use
// bs.WriteSigned(3, 2, 4, -5);
This means you write to the 4th byte (offset 3, because counting starts at 0) of the underlying byte stream, starting from the 3rd bit position (=2) of that byte, with a length of 4 bits and the value 5 (=0b0101).
Reading works similarly:
Just read the next 6 bits, wherever your byte and bit position is (e. g. for loops, etc):
ulong number = bs.ReadUnsigned(6);
// For Signed, use
// long number = bs.ReadSigned(6);
Read from a specific position, in this example 4 bits from the 3rd byte in the stream (offset 2 = 3rd byte), starting with bit #0:
ulong number = bs.ReadUnsigned(2, 0, 4);
// For signed, use
// long number = bs.ReadSigned(2, 0, 4);
Note: The bit offset always counts from 0, starting at the left-most position.

What is the best way to prep data for serial transmission?

I am working on a C# program which will communicate with a VFD using the Mitsubishi communication protocol.
I am preparing several methods to create an array of bytes to be sent out.
Right now, I have typed up more of a brute-force method of preparing and sending the bytes.
public void A(Int16 Instruction, byte WAIT, Int32 Data)
{
    byte[] A_Bytes = new byte[13];
    A_Bytes[0] = C_ENQ;
    A_Bytes[1] = 0x00;
    A_Bytes[2] = 0x00;
    A_Bytes[3] = BitConverter.GetBytes(Instruction)[0];
    A_Bytes[4] = BitConverter.GetBytes(Instruction)[1];
    A_Bytes[5] = WAIT;
    A_Bytes[6] = BitConverter.GetBytes(Data)[0];
    A_Bytes[7] = BitConverter.GetBytes(Data)[1];
    A_Bytes[8] = BitConverter.GetBytes(Data)[2];
    A_Bytes[9] = BitConverter.GetBytes(Data)[3];

    Int16 SUM = 0;
    for (int i = 0; i < 10; i++)
    {
        SUM += A_Bytes[i];
    }
    A_Bytes[10] = BitConverter.GetBytes(SUM)[0];
    A_Bytes[11] = BitConverter.GetBytes(SUM)[1];
    A_Bytes[12] = C_CR;

    itsPort.Write(A_Bytes, 0, 13);
}
However, something seems very inefficient about this. Especially the fact that I call GetBytes() so often.
Is this a good method, or is there a vastly shorter/faster one?
MAJOR UPDATE:
It turns out the Mitsubishi structure is a little wonky in how it does all this.
Instead of working with raw bytes, it works with ASCII chars. So while ENQ is still 0x05, an instruction code of E1, for instance, actually goes out as 0x45 and 0x31.
This might actually make things easier.
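If so, the framing may reduce to something like this (a sketch, assuming the protocol wants the instruction code as ASCII text; not from the original post):

using System.Linq;
using System.Text;

// ENQ stays a raw control byte, but the instruction code "E1" is sent
// as its ASCII characters: 0x45 ('E') and 0x31 ('1').
byte[] instruction = Encoding.ASCII.GetBytes("E1"); // { 0x45, 0x31 }
byte[] frameStart = new byte[] { 0x05 }             // ENQ
    .Concat(instruction)
    .ToArray();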
Even without changing your algorithm, this can be made a bit more efficient and a bit more C#-like. If concatenating two arrays bothers you, that part is of course optional.
var instructionBytes = BitConverter.GetBytes(instruction);
var dataBytes = BitConverter.GetBytes(data);
var contentBytes = new byte[] {
    C_ENQ, 0x00, 0x00, instructionBytes[0], instructionBytes[1], wait,
    dataBytes[0], dataBytes[1], dataBytes[2], dataBytes[3]
};

short sum = 0;
foreach (var byteValue in contentBytes)
{
    sum += byteValue;
}

var sumBytes = BitConverter.GetBytes(sum);
// Concat yields an IEnumerable<byte>; ToArray() is needed before Write().
var messageBytes = contentBytes.Concat(new byte[] { sumBytes[0], sumBytes[1], C_CR }).ToArray();
itsPort.Write(messageBytes, 0, messageBytes.Length);
What I would suggest, though, if you find yourself writing a lot of code like this, is to consider wrapping this up into a Message class; this code would form the basis of your constructor (see the sketch below). You could then vary behavior (make things longer, shorter, etc.) with inheritance (or composition) and deal with the message as an object rather than a byte array.
Incidentally, you may see marginal gains from using BinaryWriter rather than BitConverter (maybe?), but it's more hassle to use. (byte)(sum >> 8) is another option as well, which I think is actually the fastest and probably makes the most sense in your use case.
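A rough sketch of that Message idea (the class shape and names are illustrative, not from the original code; assumes LINQ's Append, i.e. .NET Framework 4.7.1+ or .NET Core):

using System;
using System.IO.Ports;
using System.Linq;

public class Message
{
    private const byte C_ENQ = 0x05;
    private const byte C_CR = 0x0D;
    private readonly byte[] bytes;

    public Message(short instruction, byte wait, int data)
    {
        var payload = new byte[10];
        payload[0] = C_ENQ; // payload[1] and payload[2] stay 0x00
        BitConverter.GetBytes(instruction).CopyTo(payload, 3);
        payload[5] = wait;
        BitConverter.GetBytes(data).CopyTo(payload, 6);

        short sum = 0;
        foreach (var b in payload) sum += b;

        bytes = payload
            .Concat(BitConverter.GetBytes(sum)) // 2-byte checksum
            .Append(C_CR)
            .ToArray();
    }

    public void WriteTo(SerialPort port) => port.Write(bytes, 0, bytes.Length);
}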

Defining a bit[] array in C#

Currently I'm working on a solution for a prime-number calculator/checker. The algorithm is already working and very efficient (0.359 seconds for the first 9012330 primes). Here is a part of the upper region where everything is declared:
const uint anz = 50000000;
uint a = 3, b = 4, c = 3, d = 13, e = 12, f = 13, g = 28, h = 32;
bool[,] prim = new bool[8, anz / 10];
uint max = 3 * (uint)(anz / (Math.Log(anz) - 1.08366));
uint[] p = new uint[max];
Now I wanted to go to the next level and use ulongs instead of uints to cover a larger range (you can already see that), which is where I ran into my problem: the bool array.
As everybody should know, a bool takes up a whole byte, which costs a lot of memory when creating a large array... So I'm searching for a more resource-friendly way to do that.
My first idea was a bit array -> not byte! <- to save the bools, but I haven't figured out how to do that yet. So if someone has ever done something like this, I would appreciate any kind of tips and solutions. Thanks in advance :)
You can use BitArray collection:
http://msdn.microsoft.com/en-us/library/system.collections.bitarray(v=vs.110).aspx
MSDN Description:
Manages a compact array of bit values, which are represented as Booleans, where true indicates that the bit is on (1) and false indicates the bit is off (0).
You can (and should) use well-tested and well-known libraries.
But if you're looking to learn something (as seems to be the case) you can do it yourself.
Another reason you may want a custom bit array is to use the hard drive to store the array, which comes in handy when calculating primes. To do this you'd need to further split addr, for example: lowest 3 bits for the mask, next 28 bits for 256 MB of in-memory storage, and from there on, a file name for a buffer file.
Yet another reason for a custom bit array is to compress memory use when specifically searching for primes. After all, more than half of your bits will be 'false' because the numbers corresponding to them are even, so you can both speed up your calculation AND reduce memory requirements by not storing the even bits at all. You can do that by changing the way addr is interpreted (see the sketch below). Furthermore, you can also exclude numbers divisible by 3 (only 2 out of every 6 numbers have a chance of being prime), thus reducing memory requirements by roughly two thirds compared to a plain bit array.
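For example (a sketch of the even-skipping idea, not from the original answer): map the odd number n to a bit index so that even numbers get no storage at all:

// Index of the bit that represents the odd number n (n >= 3):
// 3 -> 0, 5 -> 1, 7 -> 2, ... Even numbers are simply never stored.
static long OddBitIndex(long n) => (n - 3) >> 1;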
Notice the use of shift and logical operators to make the code a bit more efficient:
byte mask = (byte)(1 << (int)(addr & 7)); could, for example, also be written as
byte mask = (byte)(1 << (int)(addr % 8));
and addr >> 3 could be written as addr / 8.
Testing shift/logical operators vs. division shows 2.6 s vs. 4.8 s in favor of shift/logical for 200000000 operations.
Here's the code:
void Main()
{
    var barr = new BitArray(10);
    barr[4] = true;
    Console.WriteLine("Is it " + barr[4]);
    Console.WriteLine("Is it Not " + barr[5]);
}

public class BitArray
{
    private readonly byte[] _buffer;

    public bool this[long addr]
    {
        get
        {
            byte mask = (byte)(1 << (int)(addr & 7));
            byte val = _buffer[(int)(addr >> 3)];
            bool bit = (val & mask) == mask;
            return bit;
        }
        set
        {
            byte mask = (byte)(1 << (int)(addr & 7));
            int offs = (int)(addr >> 3);
            if (value)
                _buffer[offs] |= mask;
            else
                _buffer[offs] &= (byte)~mask; // clear the bit; OR-ing a zero mask would never reset it
        }
    }

    public BitArray(long size)
    {
        _buffer = new byte[size / 8 + 1]; // a byte buffer sized to hold 8 bools per byte; the spare +1 avoids dealing with rounding
    }
}

C# Converting a XOR crypt function

I've been working on converting a C++ crypting method to C#. The problem is, I can't get it to encrypt/decrypt the way I want it to.
The idea is simple, I capture a packet, and decrypt it. The output will be:
Packet Size - Command/Action - Null (End)
(The decryptor cuts off the first and last 2 bytes)
The C++ code is this:
// Crypt the packet with the XOR operator
void cryptPacket(char *packet)
{
    unsigned short paksize = (*((unsigned short*)&packet[0])) - 2;
    for (int i = 2; i < paksize; i++)
    {
        packet[i] = 0x61 ^ packet[i];
    }
}
So I thought this would work in C# if I didn't want to use pointers:
public static char[] CryptPacket(char[] packet)
{
    ushort paksize = (ushort)(packet.Length - 2);
    for (int i = 2; i < paksize; i++)
    {
        packet[i] = (char)(0x61 ^ packet[i]);
    }
    return packet;
}
- but it isn't: the value returned is just another line of rubbish instead of the decrypted value. The output given is: ..O♦&/OOOe.
Well... at least the '/' is in the right place, for some reason.
Some more information:
The test packet I'm using is this:
Hex value: 0C 00 E2 66 65 47 4E 09 04 13 65 00
Plain text: ...feGN...e.
Decrypted: XX/hereXX
X = unknown value, I can't really remember, but it doesn't matter.
Using Hex Workshop you can decrypt the packet this way:
Special Paste the hex value as CF_TEXT, making sure the 'treat as hexadecimal value' box is checked.
Afterwards, select everything from the hexadecimal value you just pasted, except the first and last 2 bytes.
Go to Tools > Operations > Xor.
Select 'Treat data as 8 bit data' and set the value to '61'.
Press 'OK', and you're done.
That's all the information I can give at the moment, because I'm writing this off the top of my head.
Thank you for your time.
In case you don't see a question in this:
It would be great if someone could take a look at the code to see what's wrong with it, or if there's another way to do it. I'm converting this code because I'm horrible with C++, and want to create a C# application with that code.
PS: The code tags and such were a pain, so I'm sorry if the spacing etc. is a little messed up.
Your problem might be that, as .NET's char is Unicode (two bytes wide), some characters are going to use more than one byte, and your bitmask is only one byte long, so the most significant byte will be left unaltered.
I just tried your function and it seems ok:
class Program
{
    // OP's method: http://stackoverflow.com/questions/4815959
    public static byte[] CryptPacket(byte[] packet)
    {
        int paksize = packet.Length - 2;
        for (int i = 2; i < paksize; i++)
        {
            packet[i] = (byte)(0x61 ^ packet[i]);
        }
        return packet;
    }

    // http://stackoverflow.com/questions/321370 :)
    public static byte[] StringToByteArray(string hex)
    {
        return Enumerable.Range(0, hex.Length).
            Where(x => 0 == x % 2).
            Select(x => Convert.ToByte(hex.Substring(x, 2), 16)).
            ToArray();
    }

    static void Main(string[] args)
    {
        string hex = "0C 00 E2 66 65 47 4E 09 04 13 65 00".Replace(" ", "");
        byte[] input = StringToByteArray(hex);
        Console.WriteLine("Input: " + ASCIIEncoding.ASCII.GetString(input));
        byte[] output = CryptPacket(input);
        Console.WriteLine("Output: " + ASCIIEncoding.ASCII.GetString(output));
        Console.ReadLine();
    }
}
Console output:
Input: ...feGN.....
Output: ...../here..
(where '.' represents funny ascii characters)
It seems a bit smelly that your CryptPacket method overwrites the initial array with the output values, and that irrelevant characters are not trimmed. But if you are trying to port something, I guess you should know what you are doing.
You could also consider trimming the input array to remove the unwanted characters first, and then using a generic ROT13 method (like this one). That way you have your own "specialized" version with the 2-byte offsets inside the crypt function itself, instead of something like:
public static byte[] CryptPacket(byte[] packet)
{
    // create a new instance
    byte[] output = new byte[packet.Length];

    // process ALL array items
    for (int i = 0; i < packet.Length; i++)
    {
        output[i] = (byte)(0x61 ^ packet[i]);
    }
    return output;
}
Here's an almost literal translation from C++ to C#, and it seems to work:
var packet = new byte[] {
    0x0C, 0x00, 0xE2, 0x66, 0x65, 0x47,
    0x4E, 0x09, 0x04, 0x13, 0x65, 0x00
};
CryptPacket(packet);

// displays "....../here." where "." represents an unprintable character
Console.WriteLine(Encoding.ASCII.GetString(packet));

// ...

void CryptPacket(byte[] packet)
{
    // read the little-endian 16-bit size prefix, as the C++ pointer cast did
    int paksize = (packet[0] | (packet[1] << 8)) - 2;
    for (int i = 2; i < paksize; i++)
    {
        packet[i] ^= 0x61;
    }
}

Problem in converting sprintf to C#

I have this line I need to rewrite in C#:
sprintf(
currentTAG,
"%2.2X%2.2X,%2.2X%2.2X",
hBuffer[ presentPtr+1 ],
hBuffer[ presentPtr ],
hBuffer[ presentPtr+3 ],
hBuffer[ presentPtr+2 ] );
hBuffer is a uchar (unsigned char) array.
In C# I have the same data in a byte array and I need to implement this line...
Please help...
Check if this works:
byte[] hBuffer = { ... };
int presentPtr = 0;
string currentTAG = string.Format("{0:X2}{1:X2},{2:X2}{3:X2}",
    hBuffer[presentPtr + 1],
    hBuffer[presentPtr],
    hBuffer[presentPtr + 3],
    hBuffer[presentPtr + 2]);
This is another option, but less efficient:
byte[] hBuffer = { ... };
int presentPtr = 0;
string currentTAG = string.Format("{0}{1},{2}{3}",
    hBuffer[presentPtr + 1].ToString("X2"),
    hBuffer[presentPtr].ToString("X2"),
    hBuffer[presentPtr + 3].ToString("X2"),
    hBuffer[presentPtr + 2].ToString("X2"));
Converting each byte of hBuffer to a string, as in the second example, is less efficient. The first example will give you better performance, especially if you do this many times, by virtue of not spamming the garbage collector.
[Off the top of my head] In C/C++, %2.2X outputs the value in hexadecimal using upper-case letters and at least two digits (left-padded with zero).
In C++ the following example outputs 01 61 in the console:
unsigned char test[] = { 0x01, 'a' };
printf("%2.2X %2.2X", test[0], test[1]);
Using the information above, the following C# snippet outputs also 01 61 in the console:
byte[] test = { 0x01, (byte) 'a' };
Console.WriteLine(String.Format("{0:X2} {1:X2}", test[0], test[1]));
Composite Formatting: This page discusses how to use the string.Format() function.
You are looking for the String.Format method.
