Greetings, everybody! I'm making a small tool that encrypts the data of a file by XORing all its bytes and then encodes the XOR result using Base64. The problem begins when looping over each byte contained in the file data. This is how I've done it:
int Initialization_Vector_Size = Initialization_Vector.Length;
string Crypto_String = null;
for (int Index = 0; Index < Input_Data_Size; Index++)
{
    int Single_Index = Index;
    byte Single_Byte = Input_Data[Single_Index];
    char Input_Character = Convert.ToChar(Single_Byte);
    int Input_Character_Code = (int)Input_Character;
    int Key_Index = Single_Index % Cipher_Key_Size;
    char Key_Character = Cipher_Key[Key_Index];
    int Cipher_Key_Code = (int)Key_Character;
    int Xor_Value = Input_Character_Code ^ Cipher_Key_Code;
    int IV_Index = Single_Index % Initialization_Vector_Size;
    char IV_Character = Initialization_Vector[IV_Index];
    int Initialization_Vector_Code = (int)IV_Character;
    int Crypto_Character_Code = Xor_Value ^ Initialization_Vector_Code;
    char Crypto_Character = (char)Crypto_Character_Code;
    char[] Characters_List = {Crypto_Character};
    string Security_Code = new String(Characters_List);
    Crypto_String += Security_Code;
}
byte[] Encrypted_Data = Encoding.ASCII.GetBytes(Crypto_String);
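For reference, a minimal sketch of the same scheme that stays in the byte domain (my own illustration with a made-up method name; it assumes the key and IV are ASCII strings, as above). One thing to watch out for in the code above: Encoding.ASCII.GetBytes replaces every character above 127 with '?', so round-tripping the XOR output through an ASCII string destroys data. XORing the raw bytes and passing them straight to Convert.ToBase64String avoids the char conversions entirely:

using System;
using System.Text;

static class XorCipherSketch
{
    // XOR each input byte with the corresponding key and IV bytes (both cycled),
    // then Base64-encode the result. Since XOR is its own inverse, decryption is
    // Convert.FromBase64String followed by the same loop.
    public static string EncryptToBase64(byte[] inputData, string cipherKey, string initializationVector)
    {
        byte[] key = Encoding.ASCII.GetBytes(cipherKey);
        byte[] iv = Encoding.ASCII.GetBytes(initializationVector);
        byte[] output = new byte[inputData.Length];
        for (int i = 0; i < inputData.Length; i++)
        {
            output[i] = (byte)(inputData[i] ^ key[i % key.Length] ^ iv[i % iv.Length]);
        }
        return Convert.ToBase64String(output);
    }
}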
I created a small, unfinished packet builder class.
AddString() works without problems, but if I use AddInt() the console output looks very weird. Can anyone tell me why the integer doesn't display correctly?
Main
Packet packet = new Packet();
packet.builder.AddString(Constants.Requests.GET_RESOURCES);
packet.builder.AddString("Another_String");
packet.builder.AddInt(500);
byte[] byteArray = packet.builder.GetByteBuffer();
Console.WriteLine(ByteArrayToString(byteArray));
ByteArray Output: Get_Resources:Another_String:?☺:
47-65-74-5F-52-65-73-6F-75-72-63-65-73-00-3A-41-6E-6F-74-68-65-72-5F-53-74-72-69-6E-67-00-3A-F4-01-00-00-00-3A
As you can see, ?☺ is definitely wrong. The functions are almost the same.
Class
class Packet
{
    public Builder builder;

    public Packet()
    {
        builder = new Builder();
    }

    private static string ByteArrayToString(byte[] arr)
    {
        System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding();
        return enc.GetString(arr);
    }

    public static string[] Read(byte[] _recievedData)
    {
        string data = ByteArrayToString(_recievedData).Trim();
        string[] result = data.Split(':');
        return result;
    }

    public class Builder
    {
        private byte[] buffer;
        private int offset;
        //Makes very easy on client to filter packets...
        private byte[] seperator;

        public Builder()
        {
            offset = 0;
            buffer = new byte[4096];
            seperator = BitConverter.GetBytes(':');
        }

        public void AddInt(int intValue)
        {
            byte[] byteArray = BitConverter.GetBytes(intValue);
            for (int x = 0; x < byteArray.Length; x++)
            {
                buffer[x + offset] = byteArray[x];
            }
            for (int y = 0; y < seperator.Length; y++)
            {
                buffer[byteArray.Length + (y + 1) + offset] = seperator[y];
            }
            offset += (byteArray.Length + seperator.Length);
        }

        public void AddString(string str)
        {
            byte[] byteArray = Encoding.ASCII.GetBytes(str);
            for (int x = 0; x < byteArray.Length; x++)
            {
                buffer[x + offset] = byteArray[x];
            }
            for (int y = 0; y < seperator.Length; y++)
            {
                buffer[byteArray.Length + (y + 1) + offset] = seperator[y];
            }
            offset += (byteArray.Length + seperator.Length);
        }

        public byte[] GetByteBuffer()
        {
            return buffer;
        }

        public void Reset()
        {
            buffer = null;
            offset = 0;
        }
    }
}
Your code is working perfectly fine. It is possibly not what you want, but the following line converts an int into 4 bytes, because it is a 32-bit integer:

byte[] byteArray = BitConverter.GetBytes(intValue);

At the end of your output you see those 4 bytes, as expected, in little-endian format: F4-01-00-00, because 500 in hexadecimal is 0x01F4. This explains why you are getting what you are getting.

Now I am assuming that you expected to see 500 instead of ?☺. The following should get you the desired result (note that BitConverter.GetBytes has no string overload, so the textual representation has to come from an Encoding):

byte[] byteArray = Encoding.ASCII.GetBytes(intValue.ToString());

This will add the string representation of the number instead of the binary representation. Based on the return type of your Read function, a string representation seems to be what you need.
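A quick illustration of the difference (my own demo, not from the post):

int value = 500;
byte[] binary = BitConverter.GetBytes(value);            // 4 bytes: F4-01-00-00 (little endian)
byte[] text = Encoding.ASCII.GetBytes(value.ToString()); // 3 bytes: 35-30-30 ("500")
Console.WriteLine(BitConverter.ToString(binary));        // F4-01-00-00
Console.WriteLine(BitConverter.ToString(text));          // 35-30-30

Separately, note that BitConverter.GetBytes(':') encodes the char as two bytes (3A-00), and the (y + 1) in the separator copy loops skips one byte. That is why each field in your dump is followed by 00-3A: the skipped zero byte, then the first separator byte; the separator's second byte is then overwritten by the next field.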
I am trying to read a WAV file as follows:
class Program
{
    struct WavHeader
    {
        public int riffID;
        public int size;
        public int wavID;
        public int fmtID;
        public int fmtSize;
        public int format;
        public int channels;
        public int sampleRate;
        public int bytePerSec;
        public int blockSize;
        public int bit;
        public int dataID;
        public int dataSize;
    }

    static void Main(string[] args)
    {
        WavHeader Header = new WavHeader();
        List<short> lDataList = new List<short>();
        List<short> rDataList = new List<short>();
        using (FileStream fs = new FileStream(@"D:\Test.wav", FileMode.Open, FileAccess.Read))
        using (BinaryReader br = new BinaryReader(fs))
        {
            try
            {
                Header.riffID = br.ReadInt32();
                Header.size = br.ReadInt32();
                Header.wavID = br.ReadInt32();
                Header.fmtID = br.ReadInt32();
                Header.fmtSize = br.ReadInt32();
                Header.format = br.ReadUInt16();
                Header.channels = br.ReadUInt16();
                Header.sampleRate = br.ReadInt32();
                Header.bytePerSec = br.ReadInt32();
                Header.blockSize = br.ReadInt16();
                Header.bit = br.ReadInt16();
                if (Header.fmtSize == 18)
                {
                    // Read any extra values
                    int fmtExtraSize = br.ReadInt16();
                    br.ReadBytes(fmtExtraSize);
                }
                Header.dataID = br.ReadInt32();
                Header.dataSize = br.ReadInt32();

                int bytesForSamp = Header.bit / 8;
                int samps = Header.dataSize / bytesForSamp;

                for (int i = 0; i < samps; i++)
                {
                    lDataList.Add((short)br.ReadUInt16());
                    rDataList.Add((short)br.ReadUInt16());
                }
            }
            finally
            {
                if (br != null)
                {
                    br.Close();
                }
                if (fs != null)
                {
                    fs.Close();
                }
            }
        }
    }
}
But I am getting a runtime error at
lDataList.Add((short)br.ReadUInt16());
rDataList.Add((short)br.ReadUInt16());
{"Unable to read beyond the end of the stream."}
I have seen this SO Q/A and tried to adapt it to my requirements, but that returns float.
Here you correctly calculate the bytes per sample and the number of samples:
int bytesForSamp = Header.bit / 8;
int samps = Header.dataSize / bytesForSamp;
But here you assume that the file has 2 channels and 16-bit samples:
for (int i = 0; i < samps; i++)
{
lDataList.Add((short)br.ReadUInt16());
rDataList.Add((short)br.ReadUInt16());
}
If the file actually has 8-bit samples and/or only 1 channel, then this loop tries to read beyond the end of the input file at some point.
You would either have to assert that the file really is 16-bit, 2-channel after reading the header, or handle the file correctly by reading according to the number of bits per sample and the number of channels specified in the header of the WAV file; see the sketch below.
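A possible shape for that loop (my own sketch, assuming uncompressed PCM and the Header fields as read above). Note that samps, as computed, counts individual sample values across all channels, so even a 16-bit stereo file is over-read by the original loop, which consumes two values on each of its samps iterations:

// Hypothetical replacement for the sample-reading loop, driven by the header.
if (Header.bit == 16)
{
    // samps 16-bit values in total, interleaved L/R when stereo.
    for (int i = 0; i < samps; i++)
    {
        short s = br.ReadInt16();
        if (Header.channels == 2 && (i % 2) == 1)
            rDataList.Add(s);
        else
            lDataList.Add(s);
    }
}
else if (Header.bit == 8)
{
    // 8-bit WAV samples are unsigned; recentre around zero and widen to short.
    for (int i = 0; i < samps; i++)
    {
        short s = (short)((br.ReadByte() - 128) << 8);
        if (Header.channels == 2 && (i % 2) == 1)
            rDataList.Add(s);
        else
            lDataList.Add(s);
    }
}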
This is my code in C#:
public static String MD5Encrypt(String str, Boolean raw_output = false)
{
    // Use input string to calculate MD5 hash
    String output;
    MD5 md5 = System.Security.Cryptography.MD5.Create();
    byte[] inputBytes = System.Text.Encoding.ASCII.GetBytes(str);
    byte[] hashBytes = md5.ComputeHash(inputBytes);

    // Convert the byte array to hexadecimal string
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < hashBytes.Length; i++)
    {
        sb.Append(hashBytes[i].ToString("x2"));
    }
    output = sb.ToString();
    if (raw_output)
    {
        output = pack(output);
    }
    return output;
}

public static String pack(String S)
{
    string MultiByte = "";
    for (int i = 0; i <= S.Length - 1; i += 2)
    {
        MultiByte += Convert.ToChar(HexToDec(S.Substring(i, 2)));
    }
    return MultiByte;
}

private static int HexToDec(String hex)
{
    //Int32.Parse(hexString, System.Globalization.NumberStyles.HexNumber);
    return Convert.ToInt32(hex, 16);
}
I want to reproduce what is done in PHP this way:
md5($str, true);
OR
pack('H*', md5( $str ));
I have tried many things but can't get the same result on both sides for some inputs.
For example, trying this test on the string "8tv7er5j":
PHP side:
9c36ad446f83ca38619e12d9e1b3c39e <= md5("8tv7er5j");
œ6DoƒÊ8ažÙá³Ãž <= md5("8tv7er5j", true) or pack("H*", md5("8tv7er5j"))
C# side:
9c36ad446f83ca38619e12d9e1b3c39e <= MD5Encrypt("8tv7er5j")
6DoÊ8aÙá³Ã <= MD5Encrypt("8tv7er5j", true) or pack( MD5Encrypt("8tv7er5j") )
Why? Is it an encoding problem?
EDIT 1:
I get the right result, but badly encoded, with this function for pack():
if ((hex.Length % 2) == 1) hex += '0';
byte[] bytes = new byte[hex.Length / 2];
for (int i = 0; i < hex.Length; i += 2)
{
    bytes[i / 2] = Convert.ToByte(hex.Substring(i, 2), 16);
}
return bytes;
So, System.Text.Encoding.UTF8.GetString(bytes) gives me:
�6�Do��8a���Þ
And System.Text.Encoding.ASCII.GetString(bytes) gives:
?6?Do??8a??????
...
I encountered the same scenario, where I needed PHP's pack / unpack / md5 functions in C#. Most important was that all 3 functions had to match PHP's output.
I created my own functions and then validated (verified) my output against the functions at onlinephpfunctions.com. The output was the same when I parsed with the default encoding. FYI, I checked my application's encoding (Encoding.Default.ToString()) and it was System.Text.SBCSCodePageEncoding.
Pack
private static string pack(string input)
{
    //only for H32 & H*
    return Encoding.Default.GetString(FromHex(input));
}

public static byte[] FromHex(string hex)
{
    hex = hex.Replace("-", "");
    byte[] raw = new byte[hex.Length / 2];
    for (int i = 0; i < raw.Length; i++)
    {
        raw[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    }
    return raw;
}
MD5
private static string md5(string input)
{
    byte[] asciiBytes = Encoding.Default.GetBytes(input);
    byte[] hashedBytes = MD5CryptoServiceProvider.Create().ComputeHash(asciiBytes);
    string hashedString = BitConverter.ToString(hashedBytes).Replace("-", "").ToLower();
    return hashedString;
}
Unpack
private static string unpack(string p1, string input)
{
    StringBuilder output = new StringBuilder();
    for (int i = 0; i < input.Length; i++)
    {
        // "x2" yields two lowercase hex digits per character, matching PHP's unpack("H*", ...)
        string a = Convert.ToInt32(input[i]).ToString("x2");
        output.Append(a);
    }
    return output.ToString();
}
PS: You can extend these functions with other formats.
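A quick usage sketch of the three functions (my own driver code, reusing the test string from the question above; it assumes the default code page is a single-byte encoding that round-trips every byte value, which is what the SBCSCodePageEncoding observation is about):

string hash = md5("8tv7er5j");           // "9c36ad446f83ca38619e12d9e1b3c39e"
string packed = pack(hash);              // the 16 raw digest bytes, decoded with Encoding.Default
string roundTrip = unpack("H*", packed); // back to the lowercase hex string
Console.WriteLine(hash == roundTrip);    // True when the code page round-trips cleanly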
I guess that PHP defaults to Latin1, so the code should look like:
public static String PhpMd5Raw(string str)
{
    var md5 = System.Security.Cryptography.MD5.Create();
    var inputBytes = System.Text.Encoding.ASCII.GetBytes(str);
    var hashBytes = md5.ComputeHash(inputBytes);
    var latin1Encoding = System.Text.Encoding.GetEncoding("ISO-8859-1");
    return latin1Encoding.GetString(hashBytes);
}
If you are going to feed the result in as the key for HMAC-SHA1 hashing, keep it as a byte[] and initialize the HMACSHA1 with the return value of this function: DO NOT convert it to a string and back to bytes. I have spent hours because of this mistake.
public static byte[] PackH(string hex)
{
    if ((hex.Length % 2) == 1) hex += '0';
    byte[] bytes = new byte[hex.Length / 2];
    for (int i = 0; i < hex.Length; i += 2)
    {
        bytes[i / 2] = Convert.ToByte(hex.Substring(i, 2), 16);
    }
    return bytes;
}
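To illustrate the HMAC warning above, a sketch (hypothetical usage; the key and message strings are made up): the digest bytes go straight into HMACSHA1 without ever becoming a string:

// Keep the MD5 digest as bytes and use it directly as the HMAC-SHA1 key.
byte[] rawKey;
using (var md5 = System.Security.Cryptography.MD5.Create())
    rawKey = md5.ComputeHash(Encoding.ASCII.GetBytes("8tv7er5j"));

using (var hmac = new System.Security.Cryptography.HMACSHA1(rawKey))
{
    byte[] mac = hmac.ComputeHash(Encoding.ASCII.GetBytes("message"));
    Console.WriteLine(BitConverter.ToString(mac));
}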
I know this is an old question. I am posting my answer for anyone who might reach this page searching for it.
The following code is a full conversion of the Perl-style pack("H*") function (the same one PHP exposes) to C#.
public static String Pack(String input)
{
    // Strip any "-" separators so the two-character hex pairs below parse cleanly.
    input = input.Replace("-", "");
    byte[] hashBytes = new byte[input.Length / 2];
    for (int i = 0; i < hashBytes.Length; i++)
    {
        hashBytes[i] = Convert.ToByte(input.Substring(i * 2, 2), 16);
    }
    return Encoding.UTF7.GetString(hashBytes); // for perl/php
}
I'm sorry, I didn't follow the question completely. But if the PHP code is as below,

$testpack = pack("H*", "your value");

and the $testpack value can't be read (due to some unsupported format), then first base64_encode it and echo it, as below:

echo base64_encode($testpack);

Then use Risky Pathak's answer. To complete this answer, I'll post his code with some small modifications, such as Base64-encoding the result:
var hex = "your value";
hex = hex.Replace("-", "");
byte[] raw = new byte[hex.Length / 2];
for (int i = 0; i < raw.Length; i++)
{
    raw[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
}
var res = Convert.ToBase64String(raw);
Console.WriteLine(res);
Now if you compare the two values, they should be identical. And all credit should go to Risky Pathak's answer.
The same result can be achieved in C# with the Hex.Decode() method. And bin2hex() in PHP corresponds to Hex.Encode().
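Assuming the Hex class meant here is BouncyCastle's Org.BouncyCastle.Utilities.Encoders.Hex (the answer does not say, so this is a guess), usage would look like:

using Org.BouncyCastle.Utilities.Encoders;

byte[] raw = Hex.Decode("9c36ad446f83ca38619e12d9e1b3c39e"); // like PHP's pack("H*", ...)
string hex = Hex.ToHexString(raw);                           // like PHP's bin2hex(...)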
I have to write a Vigenère encryption / decryption function that operates on full bytes (to encrypt and send files over TCP and then decrypt them on the other side).
My encrypting function seems to be working (more or less; I can't really test it without the decrypting function).
This is the code of the encrypting function:
public static Byte[] encryptByteVigenere(Byte[] plaintext, string key)
{
    Byte[] result = new Byte[plaintext.Length];
    key = key.Trim().ToUpper();
    int keyIndex = 0;
    int keylength = key.Length;
    for (int i = 0; i < plaintext.Length; i++)
    {
        keyIndex = keyIndex % keylength;
        int shift = (int)key[keyIndex] - 65;
        result[i] = (byte)(((int)plaintext[i] + shift) % 256);
        keyIndex++;
    }
    return result;
}
However, the decrypting function, even though it is written in pretty much the same way, causes an error:
"Attempted to divide by zero."
The code of the decrypting function:
public static Byte[] decryptByteVigenere(Byte[] ciphertext, string key)
{
    Byte[] result = new Byte[ciphertext.Length];
    key = key.Trim().ToUpper();
    int keyIndex = 0;
    int keylength = key.Length;
    for (int i = 0; i < ciphertext.Length; i++)
    {
        keyIndex = keyIndex % keylength;
        int shift = (int)key[keyIndex] - 65;
        result[i] = (byte)(((int)ciphertext[i] + 256 - shift) % 256);
        keyIndex++;
    }
    return result;
}
The error points at the line
keyIndex = keyIndex % keylength;
But what puzzles me is that the code is pretty much the same in the first function, where it doesn't seem to cause any trouble. I'm testing it on the received file, which arrives correctly without encryption. Could anyone help me with that?
EDIT:
The method / thread that uses the decryption function:
public void fileListenThread()
{
    try
    {
        fileServer.Start();
        String receivedFileName = "test.dat";
        String key = (textKlucz.Text).ToUpper();
        while (true)
        {
            fileClient = fileServer.AcceptTcpClient();
            NetworkStream streamFileServer = fileClient.GetStream();
            int thisRead = 0;
            int blockSize = 1024;
            Byte[] dataByte = new Byte[blockSize];
            Byte[] dataByteDecrypted = new Byte[blockSize];
            FileStream fileStream = new FileStream(receivedFileName, FileMode.Create);
            while (true)
            {
                thisRead = streamFileServer.Read(dataByte, 0, blockSize);
                dataByteDecrypted = Program.decryptByteVigenere(dataByte, key);
                fileStream.Write(dataByteDecrypted, 0, thisRead);
                if (thisRead == 0)
                    break;
            }
            fileStream.Close();
        }
    }
    catch (SocketException e)
    {
        // "Wystąpił wyjątek" is Polish for "An exception occurred"
        MessageBox.Show("SocketException: " + e, "Wystąpił wyjątek", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}
OK, the problem was indeed the sending / receiving method, not the function itself. I still don't really know what caused the problem, but rewriting the functions helped. Thanks for your input!
I'm leaving this here in case someone needs such a function in the future... even though it's a rather trivial thing.
Cheers.
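For future readers, one plausible cause worth ruling out (a guess on my part; the poster never identified it): keyIndex % keylength throws exactly this "Attempted to divide by zero." error when keylength is 0, i.e. when the key string is empty, which can happen if textKlucz.Text comes back empty when read from the listener thread. A minimal guard, as a sketch:

key = key.Trim().ToUpper();
if (key.Length == 0)
    throw new ArgumentException("Key must not be empty.", nameof(key)); // prevents the % 0 DivideByZeroException below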