I am working on a C# program which will communicate with a VFD using the Mitsubishi communication protocol.
I am preparing several methods to create an array of bytes to be sent out.
Right now, I have typed up more of a brute-force method of preparing and sending the bytes:
public void A(Int16 Instruction, byte WAIT, Int32 Data)
{
    byte[] A_Bytes = new byte[13];
    A_Bytes[0] = C_ENQ;
    A_Bytes[1] = 0x00;
    A_Bytes[2] = 0x00;
    A_Bytes[3] = BitConverter.GetBytes(Instruction)[0];
    A_Bytes[4] = BitConverter.GetBytes(Instruction)[1];
    A_Bytes[5] = WAIT;
    A_Bytes[6] = BitConverter.GetBytes(Data)[0];
    A_Bytes[7] = BitConverter.GetBytes(Data)[1];
    A_Bytes[8] = BitConverter.GetBytes(Data)[2];
    A_Bytes[9] = BitConverter.GetBytes(Data)[3];

    Int16 SUM = 0;
    for (int i = 0; i < 10; i++)
    {
        SUM += A_Bytes[i];
    }

    A_Bytes[10] = BitConverter.GetBytes(SUM)[0];
    A_Bytes[11] = BitConverter.GetBytes(SUM)[1];
    A_Bytes[12] = C_CR;

    itsPort.Write(A_Bytes, 0, 13);
}
However, something seems very inefficient about this. Especially the fact that I call GetBytes() so often.
Is this a good method, or is there a vastly shorter/faster one?
MAJOR UPDATE:
It turns out the Mitsubishi protocol is a little wonky in how it does all this. Instead of working with raw bytes, it works with ASCII characters: while ENQ is still 0x05, an instruction code of E1, for instance, is actually sent as 0x45 and 0x31. This might actually make things easier.
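For what it's worth, here is a hedged sketch of building such an ASCII frame. The field layout, field widths, and checksum rule (low byte of the sum of the ASCII body, sent as two hex characters) are assumptions here, so verify them against the inverter manual:

```csharp
using System;
using System.Text;

public static class FrameBuilder
{
    // Sketch only: frame layout and checksum rule are assumptions based on
    // the ASCII framing described above -- check the Mitsubishi manual.
    public static byte[] BuildFrame(string station, string instruction, byte wait, string data)
    {
        const byte ENQ = 0x05;
        const byte CR  = 0x0D;

        // Everything between ENQ and the checksum is plain ASCII text.
        string body = station + instruction + (char)('0' + wait) + data;
        byte[] bodyBytes = Encoding.ASCII.GetBytes(body);

        // Checksum: low byte of the sum of the ASCII body, as two hex chars.
        int sum = 0;
        foreach (byte b in bodyBytes) sum += b;
        byte[] sumBytes = Encoding.ASCII.GetBytes((sum & 0xFF).ToString("X2"));

        byte[] frame = new byte[1 + bodyBytes.Length + sumBytes.Length + 1];
        frame[0] = ENQ;
        bodyBytes.CopyTo(frame, 1);
        sumBytes.CopyTo(frame, 1 + bodyBytes.Length);
        frame[frame.Length - 1] = CR;
        return frame;
    }
}
```

Working with strings until the final `GetBytes` call also avoids all the per-field `BitConverter` calls.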
Even without changing your algorithm, this can be made a bit more efficient and a bit more C#-like. If concatenating two arrays bothers you, that part is of course optional.
var instructionBytes = BitConverter.GetBytes(instruction);
var dataBytes = BitConverter.GetBytes(data);
var contentBytes = new byte[] {
    C_ENQ, 0x00, 0x00, instructionBytes[0], instructionBytes[1], wait,
    dataBytes[0], dataBytes[1], dataBytes[2], dataBytes[3]
};

short sum = 0;
foreach (var byteValue in contentBytes)
{
    sum += byteValue;
}

var sumBytes = BitConverter.GetBytes(sum);
// Concat returns an IEnumerable<byte>, so materialize it before writing.
var messageBytes = contentBytes.Concat(new byte[] { sumBytes[0], sumBytes[1], C_CR }).ToArray();
itsPort.Write(messageBytes, 0, messageBytes.Length);
What I would suggest though, if you find yourself writing a lot of code like this, is to consider wrapping this up into a Message class. This code would form the basis of your constructor. You could then vary behavior (make things longer, shorter etc) with inheritance (or composition) and deal with the message as an object rather than a byte array.
Incidentally, you may see marginal gains from using BinaryWriter rather than BitConverter (maybe?), but it's more hassle to use. Writing (byte)sum and (byte)(sum >> 8) yourself is another option, which I think is actually the fastest and probably makes the most sense in your use case.
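To make the Message-class suggestion concrete, here is one possible sketch. The class and member names are invented for illustration, and the control characters are assumed values, not taken from your code:

```csharp
using System;
using System.Collections.Generic;

public class VfdMessage
{
    private const byte C_ENQ = 0x05; // assumed control characters
    private const byte C_CR  = 0x0D;

    private readonly List<byte> _content = new List<byte>();

    public VfdMessage(short instruction, byte wait, int data)
    {
        // Same layout as the method above: ENQ, station, instruction, wait, data.
        _content.Add(C_ENQ);
        _content.Add(0x00);
        _content.Add(0x00);
        _content.AddRange(BitConverter.GetBytes(instruction));
        _content.Add(wait);
        _content.AddRange(BitConverter.GetBytes(data));
    }

    // Appends the checksum and terminator, returning the finished frame.
    public byte[] ToBytes()
    {
        short sum = 0;
        foreach (byte b in _content) sum += b;

        var frame = new List<byte>(_content);
        frame.AddRange(BitConverter.GetBytes(sum));
        frame.Add(C_CR);
        return frame.ToArray();
    }
}
```

A call site then shrinks to `itsPort.Write(msg.ToBytes(), 0, 13)`, and protocol variants can subclass the message or compose it rather than duplicating the byte bookkeeping.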
Related
I am writing a code injection in C# to analyze potential malware.
For that I jump to newly allocated memory, write to that memory, and jump back / return.
The code injection / allocation part works great without issues, but the jump part is difficult.
I took sample code for a 32-bit process, but mine is 64-bit, so I converted it, and I might have broken something while doing that, as the jump address is not the right one.
As an example, I want to do a code injection at address 7FF95BBD0000.
My code looks like this:
public void injection()
{
    _oMemory.Alloc(out _newmem, 0x300); // Allocate 0x300 bytes of memory for the code injection (this part works)
    var CodeBaseAddress = ModuleBaseAddress + 0x36652EE; // I want to jump from this address (7FFBA13F52EE, output = 140718718800622) to 7FF95BBD0000
    var CodeInjectionAddress = (ulong)_newmem; // The address of the code injection that I want to jump to (7FF95BBD0000, output = 140708962697216)
    // This should give me the byte[] for the jump from CodeBaseAddress to CodeInjectionAddress,
    // but it gives me a slightly wrong output:
    // outputs = {233, 13, 173, 192, 94, 4, 128, 255, 255}, in hex = {0xE9, 0x0D, 0xAD, 0x7D, 0xBA, 0xFD, 0xFF, 0xFF, 0xFF}, as opcode = "jmp 7FFB5BBD0000"
    var Jumpbytes = Jmp(CodeInjectionAddress, CodeBaseAddress, false);
    _oMemory.Write((IntPtr)CodeBaseAddress, Jumpbytes); // Writes the jump bytes to CodeBaseAddress, which works
    _oMemory.CloseHandle(); // Closes the handle, which also works
}
public static byte[] Jmp(ulong jmpto, ulong jmpfrom, bool nop)
{
    var test = jmpto - jmpfrom;
    var test2 = test - 5;
    var dump = test2.ToString("x"); // Get original bytes
    if (dump.Length == 7) // Make sure we have 4 bytes
        dump = "0" + dump;
    dump += "E9"; // Add JMP
    if (nop)
        dump = "90" + dump; // Add NOP if needed
    var hex = new byte[dump.Length / 2];
    for (var i = 0; i < hex.Length; i++)
        hex[i] = Convert.ToByte(dump.Substring(i * 2, 2), 16); // Set each byte to 2 chars
    Array.Reverse(hex); // Reverse byte array for use with Write()
    return hex;
}
Notice how the Jmp method produces "7FFB5BBD0000" instead of "7FF95BBD0000". It changes the 95 to a B5, which leads to a wrong jump address.
Another weird thing is how the jump to the wrong address looks in the disassembler, compared to how it looks when I patch in the desired address with Cheat Engine (screenshots omitted here).
I guess my jump is too "big" for an E9 jump, so my method gives me a wrong jump address? How could I fix that?
Thanks to anyone helping me out or pointing me in the right direction.
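Not a full answer, but for reference: an E9 near jmp encodes a signed 32-bit displacement relative to the address *after* the 5-byte instruction, so the target must lie within roughly ±2 GB of the source. The two addresses in the example above are several GB apart, which cannot be encoded in rel32 at all; that is consistent with the mangled address shown. A sketch of how the jump bytes are usually assembled, with an explicit range check instead of the hex-string manipulation:

```csharp
using System;

public static class JumpHelper
{
    // Builds the 5 bytes of "jmp rel32" from jmpFrom to jmpTo.
    public static byte[] JmpRel32(ulong jmpTo, ulong jmpFrom)
    {
        // The displacement is relative to the address *after* the 5-byte instruction.
        long delta = (long)jmpTo - (long)(jmpFrom + 5);
        if (delta > int.MaxValue || delta < int.MinValue)
            throw new InvalidOperationException(
                "Target out of rel32 range; use an absolute jump (e.g. FF 25 + 8-byte address) instead.");

        var bytes = new byte[5];
        bytes[0] = 0xE9;                                    // jmp rel32 opcode
        BitConverter.GetBytes((int)delta).CopyTo(bytes, 1); // little-endian displacement
        return bytes;
    }
}
```

For addresses that far apart this throws, which points at the real fix: either allocate the injection memory within 2 GB of the module (some allocators let you hint an address near the target), or emit an absolute indirect jump instead of E9.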
For serialization of primitive arrays, I'm wondering how to convert a primitive[] to its corresponding byte[] (i.e. an int[128] to a byte[512], or a ushort[] to a byte[], ...).
The destination can be a MemoryStream, a network message, a file, anything.
The goal is performance (serialization & deserialization time): being able to write a byte[] to a stream in one shot instead of looping through all the values, or allocating via some converter.
Some solutions already explored:
Regular loop to write/read
//array = any int[];
myStreamWriter.WriteInt32(array.Length);
for (int i = 0; i < array.Length; ++i)
    myStreamWriter.WriteInt32(array[i]);
This solution works for serialization and deserialization, and is roughly 100 times faster than using the standard System.Runtime.Serialization machinery with a BinaryFormatter to serialize/deserialize a single int, or a couple of them.
But this solution becomes slower once array.Length exceeds roughly 200-300 values (for Int32).
Cast?
It seems C# can't directly cast an int[] to a byte[], or a bool[] to a byte[].
BitConverter.GetBytes()
This solution works, but it allocates a new byte[] on each iteration of the loop through my int[]. Performance is of course horrible.
Marshal.Copy
Yup, this solution works too, but with the same problem as BitConverter.
C++ hack
Because a direct cast is not allowed in C#, I tried a C++ hack after seeing in memory that the array length is stored 4 bytes before the array data starts:
ARRAYCAST_API void Cast(int* input, unsigned char** output)
{
// get the address of the input (this is a pointer to the data)
int* count = input;
// the size of the buffer is located just before the data (4 bytes before as this is an int)
count--;
// multiply the number of elements by 4 as an int is 4 bytes
*count = *count * 4;
// set the address of the byte array
*output = (unsigned char*)(input);
}
and the C# code that calls it:
byte[] arrayB = null;
int[] arrayI = new int[128];
for (int i = 0; i < 128; ++i)
    arrayI[i] = i;
// delegate call
fptr(arrayI, out arrayB);
I successfully receive my int[128] in C++, patch the array length, and assign the right address to my 'output' variable, but C# only sees a byte[1] on return. It seems I can't hack a managed variable that easily.
So I'm really starting to think that all these casts I want to achieve (int[] -> byte[], bool[] -> byte[], double[] -> byte[], ...) are just impossible in C# without allocating/copying.
What am I missing?
How about using Buffer.BlockCopy?
// serialize
var intArray = new[] { 1, 2, 3, 4, 5, 6, 7, 8 };
var byteArray = new byte[intArray.Length * 4];
Buffer.BlockCopy(intArray, 0, byteArray, 0, byteArray.Length);
// deserialize and test
var intArray2 = new int[byteArray.Length / 4];
Buffer.BlockCopy(byteArray, 0, intArray2, 0, byteArray.Length);
Console.WriteLine(intArray.SequenceEqual(intArray2)); // true
Note that BlockCopy still copies behind the scenes (and you still have to allocate the destination array). I'm fairly sure that this is unavoidable in managed code, and BlockCopy is probably about as good as it gets for this.
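On newer runtimes (or with the System.Memory package, if that's an option for you), `MemoryMarshal.AsBytes` gives a zero-copy reinterpretation of the array, which does avoid the copy entirely as long as the consumer accepts a Span:

```csharp
using System;
using System.Runtime.InteropServices;

int[] intArray = { 1, 2, 3, 4 };

// Zero-copy view: the same memory, seen as bytes (no allocation, no copy).
Span<byte> byteView = MemoryMarshal.AsBytes(intArray.AsSpan());

Console.WriteLine(byteView.Length); // 16 (4 ints x 4 bytes)

// Streams with Span overloads can consume the view directly, e.g.:
// stream.Write(byteView);
```

This only helps if your sink has Span-based overloads; otherwise you are back to copying into a byte[] with BlockCopy.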
I have the following code:
int BufSize = 60000000;
int BufSizeM1M = BufSize - 1000000;
byte[] ByteBuf = new byte[BufSizeM1M];
byte[] ByteBufVer = new byte[BufSizeM1M];
using (WinFileIO WFIO = new WinFileIO(ByteBuf))
{
    WFIO.OpenForWriting(path);
    Byte[] BytesInFiles = GetBytes(content);
    WFIO.WriteBlocks(BytesInFiles.Length);
}
EDIT:
This is the original code I was working with; trying to modify it myself seems to fail, so I was hoping you guys might have a look:
int BufSize = 60000000;
int BufSizeM1M = BufSize - 1000000;
byte[] ByteBuf = new byte[BufSizeM1M];
byte[] ByteBufVer = new byte[BufSizeM1M];
int[] BytesInFiles = new int[3];
using (WinFileIO WFIO = new WinFileIO(ByteBuf))
{
    WFIO.OpenForWriting(path);
    WFIO.WriteBlocks(BytesInFiles[FileLoop]);
}
FileLoop is an int between 0 and 3 (the code was run in a loop); this was used for testing write speed.
How would one change it to write the actual content of a string?
The WinFileIO DLL was provided to me without instructions and I cannot seem to get it to work.
The code above is the best I could manage, but it writes a file filled with spaces instead of the actual string in the content variable. Help please.
You seem to be passing only a length (number of bytes) to this component, so it probably doesn't know what to write. Your ByteBuf array is initialized to an empty byte array, so you are probably writing out BytesInFiles.Length zero bytes. You put the converted content into BytesInFiles, but you never use that buffer for writing - you only use its length.
I think you might be missing a step here. Once you've done:
Byte[] BytesInFiles = GetBytes(content);
Won't you need to do something with BytesInFiles? Currently it seems as though you are writing chunks of ByteBuf, which was initialized to contain all zeros when you created it.
Edit: Would something like this help?
Byte[] BytesInFiles = GetBytes(content);
using (WinFileIO WFIO = new WinFileIO(BytesInFiles))
{
WFIO.OpenForWriting(path);
WFIO.WriteBlocks(BytesInFiles.Length);
}
Preface:
I am doing a data-import that has a verify-commit phase. The idea is that the first phase allows taking data from various sources and then running various insert/update/validate operations on a database; the transaction is rolled back, but a "verification hash/checksum" is generated. The commit phase is the same, except that if the "verification hash/checksum" matches, the operations are committed. (The database will be running under the appropriate isolation levels.)
Restrictions:
Input reading and operations are forward-read-once only
Do not want to pre-create a stream (e.g. writing to MemoryStream not desirable) as there may be a lot of data. (It would work on our servers/load, but pretend memory is limited.)
Do not want to "create my own". (I am aware of available code like CRC-32 by Damien which I could use/modify but would prefer something "standard".)
And what I (think I am) looking for:
A way to generate a hash (e.g. SHA1 or MD5?) or a checksum (e.g. CRC32, but hopefully something stronger) based on input + operations. (The input/operations could themselves be hashed down to values more fitting for checksum generation, but it would be nice to just be able to "write to stream".)
So, the question is:
How to generate a Running Hash (or Checksum) in C#?
Also, while there are CRC32 implementations that can be modified for a Running operation, what about running SHAx or MD5 hashes?
Am I missing some sort of handy Stream approach that could be used as an adapter?
(Critiques are welcome, but please also answer the above as applicable. Also, I would prefer not to deal with threads. ;-)
You can call HashAlgorithm.TransformBlock multiple times, and then calling TransformFinalBlock will give you the combined result of all blocks.
Chunk up your input (by reading x bytes from a stream) and call TransformBlock with each chunk.
EDIT (from the msdn example):
public static void PrintHashMultiBlock(byte[] input, int size)
{
    SHA256Managed sha = new SHA256Managed();
    int offset = 0;
    while (input.Length - offset >= size)
        offset += sha.TransformBlock(input, offset, size, input, offset);
    sha.TransformFinalBlock(input, offset, input.Length - offset);
    Console.WriteLine("MultiBlock {0:00}: {1}", size, BytesToStr(sha.Hash));
}
Sorry I don't have any example readily available, though for you, you're basically replacing input with your own chunk, then the size would be the number of bytes in that chunk. You will have to keep track of the offset yourself.
Hashes have a build and a finalization phase. You can shove arbitrary amounts of data in during the build phase. The data can be split up as you like. Finally, you finish the hash operation and get your hash.
You can use a writable CryptoStream to write your data. This is the easiest way.
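A minimal sketch of that CryptoStream approach: wrap Stream.Null so the bytes are discarded after feeding the hash, and write chunks as they arrive. (`GetChunks()` here is a placeholder for wherever your forward-read-once data comes from.)

```csharp
using System.IO;
using System.Security.Cryptography;

using (var md5 = MD5.Create())
using (var cs = new CryptoStream(Stream.Null, md5, CryptoStreamMode.Write))
{
    // Each Write feeds the hash; Stream.Null discards the data,
    // so nothing accumulates in memory.
    foreach (byte[] chunk in GetChunks()) // placeholder for your data source
        cs.Write(chunk, 0, chunk.Length);

    cs.FlushFinalBlock();
    byte[] hash = md5.Hash; // the finished running hash
}
```

Because a HashAlgorithm is an ICryptoTransform, any HashAlgorithm (SHA1, SHA256, ...) can be dropped in the same way.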
You can generate an MD5 hash using the MD5CryptoServiceProvider's ComputeHash method. It takes a stream as input.
Create a memory or file stream, write your hash inputs to that, and then call the ComputeHash method when you are done.
var myStream = new MemoryStream();
// Blah blah, write to the stream...
myStream.Position = 0;
using (var csp = new MD5CryptoServiceProvider()) {
var myHash = csp.ComputeHash(myStream);
}
EDIT: One possibility to avoid building up massive streams is calling this over and over in a loop and XORing the results:
// Assuming we had this somewhere:
Byte[] myRunningHash = new Byte[16];
// Later on, from above:
for (var i = 0; i < 16; i++) // MD5 hashes are 16-byte arrays
    myRunningHash[i] ^= myHash[i];
EDIT #2: Finally, building on usr's answer below: note that HashCore and HashFinal are protected members, so from outside the class you use their public counterparts, TransformBlock and TransformFinalBlock:
using (var csp = new MD5CryptoServiceProvider()) {
    // My example here uses a foreach loop, but an
    // event-driven stream-like approach is
    // probably more what you are doing here.
    foreach (byte[] someData in myDataThings)
        csp.TransformBlock(someData, 0, someData.Length, null, 0);
    csp.TransformFinalBlock(new byte[0], 0, 0);
    var myHash = csp.Hash;
}
This is the canonical way:
using System;
using System.Security.Cryptography;
using System.Text;
public string CreateHash(string sSourceData)
{
    byte[] sourceBytes;
    byte[] hashBytes;
    // create byte array from source data
    sourceBytes = ASCIIEncoding.ASCII.GetBytes(sSourceData);
    // calculate 16-byte hash code
    hashBytes = new MD5CryptoServiceProvider().ComputeHash(sourceBytes);
    return ByteArrayToHexString(hashBytes);
}

static string ByteArrayToHexString(byte[] arrInput)
{
    // Two hex characters per byte.
    StringBuilder sOutput = new StringBuilder(arrInput.Length * 2);
    for (int i = 0; i < arrInput.Length; i++) // < Length, not < Length - 1, or the last byte is dropped
    {
        sOutput.Append(arrInput[i].ToString("X2"));
    }
    return sOutput.ToString();
}
I'm trying to use the XNA microphone to capture audio and pass it to an API I have that analyses the data for display purposes. However, the API requires the audio data in an array of 16 bit integers. So my question is fairly straight forward; what's the most efficient way to convert the byte array into a short array?
private void _microphone_BufferReady(object sender, System.EventArgs e)
{
    _microphone.GetData(_buffer);
    short[] shorts;
    // Convert and pass the 16-bit samples
    ProcessData(shorts);
}
Cheers,
Dave
EDIT: This is what I have come up with and seems to work, but could it be done faster?
private short[] ConvertBytesToShorts(byte[] bytesBuffer)
{
    // The shorts array should be half the size of the bytes buffer, as each short represents 2 bytes (16 bits)
    short[] shorts = new short[bytesBuffer.Length / 2];
    int currentStartIndex = 0;
    for (int i = 0; i < shorts.Length; i++) // < Length, not < Length - 1, or the last sample is skipped
    {
        // Convert the 2 bytes at currentStartIndex to a short
        shorts[i] = BitConverter.ToInt16(bytesBuffer, currentStartIndex);
        // increment by 2, ready to combine the next 2 bytes in the buffer
        currentStartIndex += 2;
    }
    return shorts;
}
After reading your update, I can see you need to actually copy a byte array directly into a buffer of shorts, merging bytes. Here's the relevant section from the documentation:
The byte[] buffer format used as a parameter for the SoundEffect constructor, Microphone.GetData method, and DynamicSoundEffectInstance.SubmitBuffer method is PCM wave data. Additionally, the PCM format is interleaved and in little-endian.
Now, if for some weird reason your system has BitConverter.IsLittleEndian == false, then you will need to loop through your buffer, swapping bytes as you go, to convert from little-endian to big-endian. I'll leave the code as an exercise - I am reasonably sure all the XNA systems are little-endian.
For your purposes, you can just copy the buffer directly using Marshal.Copy or Buffer.BlockCopy. Both will give you the performance of the platform's native memory copy operation, which will be extremely fast:
// Create this buffer once and reuse it! Don't recreate it each time!
short[] shorts = new short[_buffer.Length / 2];

// Option one:
unsafe
{
    fixed (short* pShorts = shorts)
        Marshal.Copy(_buffer, 0, (IntPtr)pShorts, _buffer.Length);
}

// Option two:
Buffer.BlockCopy(_buffer, 0, shorts, 0, _buffer.Length);
This is a performance question, so: measure it!
It is worth pointing out that for measuring performance in .NET you want to do a release build and run without the debugger attached (this allows the JIT to optimise).
Jodrell's answer is worth commenting on: Using AsParallel is interesting, but it is worth checking if the cost of spinning it up is worth it. (Speculation - measure it to confirm: converting byte to short should be extremely fast, so if your buffer data is coming from shared memory and not a per-core cache, most of your cost will probably be in data transfer not processing.)
Also I am not sure that ToArray is appropriate. First of all, it may not be able to create the correct-sized array directly, having to resize the array as it builds it will make it very slow. Additionally it will always allocate the array - which is not slow itself, but adds a GC cost that you almost certainly don't want.
Edit: Based on your updated question, the code in the rest of this answer is not directly usable, as the format of the data is different. And the technique itself (a loop, safe or unsafe) is not as fast as what you can use. See my other answer for details.
So you want to pre-allocate your array. Somewhere out in your code you want a buffer like this:
short[] shorts = new short[_buffer.Length];
And then simply copy from one buffer to the other:
for (int i = 0; i < _buffer.Length; ++i)
    shorts[i] = (short)_buffer[i];
This should be very fast, and the JIT should be clever enough to skip one if not both of the array bounds checks.
And here's how you can do it with unsafe code: (I haven't tested this code, but it should be about right)
unsafe
{
    int length = _buffer.Length;
    fixed (byte* pSrc = _buffer)
    fixed (short* pDst = shorts)
    {
        byte* ps = pSrc;
        short* pd = pDst;
        while (pd < pDst + length) // compare against the end of the destination, not pd itself
            *(pd++) = (short)(*(ps++));
    }
}
Now the unsafe version has the disadvantage of requiring /unsafe, and also it may actually be slower because it prevents the JIT from doing various optimisations. Once again: measure it.
(Also you can probably squeeze more performance if you try some permutations on the above examples. Measure it.)
Finally: Are you sure you want the conversion to be (short)sample? Shouldn't it be something like ((short)sample-128)*256 to take it from unsigned to signed and extend it to the correct bit-width? Update: seems I was wrong on the format here, see my other answer
The best PLINQ I could come up with is here.
private short[] ConvertBytesToShorts(byte[] bytesBuffer)
{
    // The shorts array should be half the size of the bytes buffer, as each short represents 2 bytes (16 bits)
    var odd = bytesBuffer.AsParallel().Where((b, i) => i % 2 != 0);
    var even = bytesBuffer.AsParallel().Where((b, i) => i % 2 == 0);
    return odd.Zip(even, (o, e) => (short)((o << 8) | e)).ToArray();
}
I'm dubious about the performance, but with enough data and processors, who knows.
If the conversion operation is wrong ((short)((o << 8) | e)), please change it to suit.