I use an NFC reader to read data. The data is stored as byte[] bytes_credentials. Receiving data works fine, but now I want to decode the 16-byte array to a string. I use string str = Encoding.Default.GetString(new_data); to decode the array. The string output is fine if the byte array is fully "booked up" (16 of 16 bytes used), for example: This is an test! -> 54 68 69 73 20 69 73 20 61 6e 20 74 65 73 74 21. The problem I am facing is that if the NFC array is not fully "booked up", the array looks like this: This is! -> 54 68 69 73 20 69 73 21 00 00 00 00 00 00 00 00, and the console output looks like this: This is!????????.
How can I check for and remove the empty (00) bytes from the array, so that the output is just: This is!
Thank you for helping!!!
I have checked previous posts, but none of them work for me...
You can use LINQ to filter your array further:
using System;
using System.Linq;
using System.Text;

var bytes_credentials = new byte[]
{
    0x54, 0x68, 0x69, 0x73, 0x20, 0x69, 0x73, 0x21,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
};

// Drop every 0x00 byte before decoding.
Console.WriteLine(Encoding.Default.GetString(
    bytes_credentials.Where(b => b != 0).ToArray()
));
To learn more about LINQ, take a look here: https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/
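If the payload could ever contain legitimate 0x00 bytes in the middle, filtering every zero is too aggressive. A small variant (my own sketch, assuming zeros only ever appear as trailing padding, as in your example) is to decode only up to the first zero byte:
using System;
using System.Text;

// bytes_credentials is the 16-byte NFC buffer from above
int length = Array.IndexOf(bytes_credentials, (byte)0);
if (length < 0) length = bytes_credentials.Length;   // no padding present
Console.WriteLine(Encoding.Default.GetString(bytes_credentials, 0, length));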
Can anyone explain how to make a TDES MAC in OpenSSL command line?
I am trying to duplicate some functionality of a working C# program in C with the OpenSSL API, and am having trouble reproducing the .NET MACTripleDES.ComputeHash function in OpenSSL. Here is an example with bogus data and key:
using (MACTripleDES hmac = new MACTripleDES(Utilities.HexStringToByteArray("112233445566778899aabbccddeeff00")))
{
    // Compute the hash of the input file.
    byte[] hashValue = hmac.ComputeHash(Utilities.HexStringToByteArray("001000000000000000000000000000008000000000000000"));
    string signature = Utilities.ByteArrayToHexString(hashValue);
    PrintToFeedback("Bogus Signature = " + signature);
}
The result is "Bogus Signature = A056D11063084B3E" My new C program has to provide the same hash of that data in order to interoperate with its wider environment. But the way to do this in openSSL eludes me. This shows that the openssl data starts out the same as the C# data:
cmd>od -tx1 bsigin
0000000 00 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0000020 80 00 00 00 00 00 00 00
Stringified, 001000000000000000000000000000008000000000000000 matches the C# string.
cmd>openssl dgst -md5 -mac hmac -macopt hexkey:112233445566778899aabbccddeeff00 bsigin
HMAC-MD5(bsigin)= 7071d693451da3f2608531ee43c1bb8a
That output is too long, and my expected value is not a substring of it. The same goes for -sha1 and the others. I tried encrypting and then making the digest separately, with no luck. Microsoft does not say what kind of hash MACTripleDES computes, and I can't find documentation on how to set up a MAC with TDES in OpenSSL.
So I'm hoping someone here knows enough about both platforms to give me a decent hint.
Command line answer:
cmd>openssl enc -des-ede-cbc -K 112233445566778899aabbccddeeff00 -iv 0000000000000000 -in bsigin -out bsigout
cmd>od -tx1 bsigout
0000000 7c de 93 c6 5f b4 03 21 aa c0 89 b8 ae f3 da 5d
0000020 a0 56 d1 10 63 08 4b 3e 4c 03 41 d6 dd 9e e4 32
        ^^^^^^^^^^^^^^^^^^^^^^^
That is, the command line form returns 32 bytes, and bytes 16..23 (marked above) contain the MAC.
API answer:
DES_key_schedule SchKey1, SchKey2;
DES_cblock iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };  /* zero IV */

DES_set_key((C_Block *)Key1, &SchKey1);   /* left  8 bytes of the 16-byte TDES key */
DES_set_key((C_Block *)Key2, &SchKey2);   /* right 8 bytes of the 16-byte TDES key */

/* 2-key TDES (K1, K2, K1) in CBC mode; the MAC is the last 8-byte block of 'cipher' */
DES_ede3_cbc_encrypt((unsigned char *)input_data, (unsigned char *)cipher, inputLength,
                     &SchKey1, &SchKey2, &SchKey1, &iv, DES_ENCRYPT);
Where Key1 is the Lkey or left 8 bytes of the 16 byte TDES key, and Key2 is the Rkey or right 8 bytes of the 16 byte TDES key. This call only populates 24 bytes of cipher, as opposed to the 32 byte return of the command line version. You still take bytes 16..23. Hopefully the supporting declarations are intuitive.
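For completeness, the same relationship can be checked from the C# side. MACTripleDES appears to behave as a plain CBC-MAC, i.e. the MAC is the last 8-byte block of a TripleDES-CBC encryption with a zero IV. Here is a sketch (assuming, as in the example above, that the input length is already a multiple of 8, so no padding is needed); with the bogus key and data from the question it should again produce A0 56 D1 10 63 08 4B 3E:
using System;
using System.Security.Cryptography;

static byte[] ComputeTdesCbcMac(byte[] key16, byte[] data)
{
    using (TripleDES tdes = TripleDES.Create())
    {
        tdes.Key = key16;                // 16-byte (2-key) TDES key
        tdes.IV = new byte[8];           // zero IV
        tdes.Mode = CipherMode.CBC;
        tdes.Padding = PaddingMode.None; // input is already a multiple of 8 bytes
        using (ICryptoTransform enc = tdes.CreateEncryptor())
        {
            byte[] cipher = enc.TransformFinalBlock(data, 0, data.Length);
            byte[] mac = new byte[8];
            Array.Copy(cipher, cipher.Length - 8, mac, 0, 8); // last block is the MAC
            return mac;
        }
    }
}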
Hello, I'm trying to pass data from a pointer into a struct, but the values come out wrong.
struct somestruct
{
    public file header;
    public uint version;
}

unsafe struct file
{
    public fixed char name[8];
    public uint type;
    public uint size;
}
Then in code somewhere..
public unsafe int ReadFile(string filepath)
{
    somestruct f = new somestruct();
    byte[] fdata = System.IO.File.ReadAllBytes(filepath);

    fixed (byte* src = fdata)
    {
        f.header = *(file*)src;
        MessageBox.Show(new string(f.header.name)); // should be 'FILENAME' but it looks like Japanese
    }

    return 0;
}
Offset(h) 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
00000000 46 49 4C 45 4E 41 4D 45 00 00 00 01 00 00 00 30 FILENAME.......0
00000010 74 27 9F EF 74 77 F1 D7 C5 86 93 3D 39 0D 72 A9 t'Ÿïtwñ×ņ“=9.r©
00000020 63 8B 92 CF F6 7D 8A 14 45 9D 68 51 A4 8E A4 EE c‹’Ïö}Š.E.hQ¤Ž¤î
00000030 4E FE D0 66 45 0E C9 8D 96 BB F4 EE 52 1F 89 D3 NþÐfE.É.–»ôîR.‰Ó
00000040 5C 80 1A 71 8A 16 B1 8B 3A A8 1B A4 48 11 B8 E8 \€.qŠ.±‹:¨.¤H.¸è
Do you have any idea what's going on?
Each char is 2 bytes, so a fixed buffer of 8 chars is 16 bytes. You are reading the first 8 bytes of the file as only the first 4 characters in that buffer, and pairing the bytes up like that gives character values in the East Asian Unicode ranges, which is why it looks like Japanese.
I would say: deserialize it at the stream level. Don't do this.
Basically, read (at least) 20 bytes into a buffer, then decode manually, using:
string s = Encoding.ASCII.GetString(buffer, 0, 8);
for the string, and probably shift operations for the unsigned integers.
You could also use unsafe code to read the integers from the buffer, via the other meaning of fixed and a pointer-cast.
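For illustration, here is what that stream-level approach might look like. The field layout (an 8-byte ASCII name, then two uints, then the version) follows the structs in the question; the big-endian byte order is my assumption based on the dump shown, so reverse the shifts if the file is actually little-endian:
using System;
using System.IO;
using System.Text;

struct FileHeader            // plain managed struct, no fixed buffers needed
{
    public string Name;
    public uint Type;
    public uint Size;
    public uint Version;
}

static class HeaderReader
{
    static uint ReadUInt32BigEndian(byte[] buf, int offset)
    {
        // the "shift operations" mentioned above; reverse the order for little-endian data
        return ((uint)buf[offset] << 24) | ((uint)buf[offset + 1] << 16)
             | ((uint)buf[offset + 2] << 8) | buf[offset + 3];
    }

    public static FileHeader Read(string filepath)
    {
        byte[] buffer = new byte[20];            // read (at least) 20 bytes
        using (FileStream fs = File.OpenRead(filepath))
        {
            fs.Read(buffer, 0, buffer.Length);
        }
        return new FileHeader
        {
            Name = Encoding.ASCII.GetString(buffer, 0, 8), // 1 byte per character
            Type = ReadUInt32BigEndian(buffer, 8),
            Size = ReadUInt32BigEndian(buffer, 12),
            Version = ReadUInt32BigEndian(buffer, 16)
        };
    }
}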
A char is UTF-16 and is 2 bytes. You need to convert the UTF-8/ANSI (1 byte) string to a UTF-16 string.
I have been given a task to encrypt data that will be stored in our database and sent to one of our customers. I figured the best way to do this is to use asymmetric encryption, so that once we have encrypted the data with our customer's public key, nobody but the customer (the owner of the private key), not even anyone on our side, will be able to decrypt it.
I would like to store our customer's public key, algorithm type (RSA or DSA), and expiration date in our database instead of managing their certificate. The question is how can I store and use their public key? I have created the following little program to test and I am running into problems.
class Program
{
static void Main(string[] args)
{
const string publicKeyString = "30 81 89 02 81 81 00 c2 6e 7e e8 78 66 3d 74 fd a7 57 21 24 2d c0 ee 53 59 54 14 db f5 cb 5e 8c 64 c8 73 d5 83 d7 12 57 3f e2 92 54 9a 87 94 18 71 04 c8 b5 92 44 27 78 e9 d3 de cb 5f f6 93 75 c0 46 6b 50 c7 45 a8 38 f9 a1 83 8e 26 51 5a 8c 22 95 8e 2b 4c 10 ea c6 85 ed 02 ed 66 81 ef a3 55 15 ad 64 33 d3 bd ca 75 db 35 44 49 54 ef 6a ca 2a d5 90 a7 9b be 03 40 62 16 fd be 39 fb b6 f0 6b f8 f1 00 c0 c5 02 03 01 00 01";
const string stringToEncrypt = "11111111111111111111";
var encoding = new UTF8Encoding();
var encryptedData = Encrypt(encoding.GetBytes(stringToEncrypt), encoding.GetBytes(publicKeyString));
Console.WriteLine("**** Encrypted String ****");
Console.WriteLine(encoding.GetString(encryptedData));
var decryptedData = Decrypt(encryptedData);
Console.WriteLine("**** Decrypted String ****");
Console.WriteLine(encoding.GetString(decryptedData));
Console.ReadKey();
}
static byte[] Encrypt(byte[] dataToEncrypt, byte[] publicKey)
{
var exponent = new byte[] { 1, 0, 1 };
var rsa = new RSACryptoServiceProvider();
rsa.ImportParameters(new RSAParameters() { Modulus = publicKey, Exponent = exponent });
var encryptedData = rsa.Encrypt(dataToEncrypt, false);
return encryptedData;
}
static byte[] Decrypt(byte[] dataToDecrypt)
{
var cert = new X509Certificate2(@"C:\certs\BP_DEV_CERT_1024.p12", "password");
var rsa = (RSACryptoServiceProvider) cert.PrivateKey;
var decryptedData = rsa.Decrypt(dataToDecrypt, false);
return decryptedData;
}
}
When I run this program I get "The data to be decrypted exceeds the maximum for this modulus of 128 bytes." This leads me to believe that the way I am preparing the public key for use is completely wrong.
So I guess I need to know a couple of things:
I can copy the public key from the certificate but how should I store it in the database?
How do I properly convert the public key string to a byte array?
Any other pointers that someone may have.
RSA and other asymmetric algorithms are not suitable for encrypting data in bulk. The maximum message length is a few bytes less than the key modulus. Of course, you could form the data into blocks and apply RSA encryption repeatedly, but this is still horribly slow. Instead, RSA is used to exchange encryption keys for a symmetric cipher.
I recommend you use S/MIME to encrypt your customers' data. It's a standard that has been widely reviewed for security, and you probably already have a library to support the protocol. Most email clients support S/MIME, so your customers probably already have the software that they need.
S/MIME (and PGP) work by generating a key for a symmetric cipher like AES—the "content encryption key". This is used to encrypt the message. Then that symmetric key is encrypted with the public RSA key—the "key encryption key"—of each recipient. The encrypted content encryption key is sent along with the cipher text to each recipient.
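If you do end up rolling it yourself rather than using an S/MIME library, the underlying hybrid pattern looks roughly like this (a sketch only, with illustrative names, not the actual S/MIME format): generate a random AES key, encrypt the data with it, and encrypt that key with the recipient's RSA public key.
using System;
using System.Security.Cryptography;

static class HybridEncryptionSketch
{
    // Returns the RSA-encrypted AES key, the AES IV, and the AES ciphertext.
    // All three must be sent to the recipient.
    public static byte[][] Encrypt(byte[] data, RSACryptoServiceProvider recipientPublicKey)
    {
        using (Aes aes = Aes.Create())
        {
            aes.GenerateKey();                 // content encryption key
            aes.GenerateIV();
            using (ICryptoTransform enc = aes.CreateEncryptor())
            {
                byte[] cipherText = enc.TransformFinalBlock(data, 0, data.Length);
                // key encryption key: only the holder of the private key can recover aes.Key
                byte[] encryptedKey = recipientPublicKey.Encrypt(aes.Key, true); // OAEP padding
                return new[] { encryptedKey, aes.IV, cipherText };
            }
        }
    }
}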
I have this uncompressed byte array:
0E 7C BD 03 6E 65 67 6C 65 63 74 00 00 00 00 00 00 00 00 00 42 52 00 00 01 02 01
00 BB 14 8D 37 0A 00 00 01 00 00 00 00 05 E9 05 E9 00 00 00 00 00 00 00 00 00 00
00 00 00 00 01 00 00 00 00 00 81 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 05 00 00 01 00 00 00
I need to compress it using the deflate algorithm (as implemented in zlib). From what I found, the equivalent in C# would be GZipStream, but I can't match the compressed result at all.
Here is the compressing code:
public byte[] compress(byte[] input)
{
    using (MemoryStream ms = new MemoryStream())
    {
        using (GZipStream deflateStream = new GZipStream(ms, CompressionMode.Compress))
        {
            deflateStream.Write(input, 0, input.Length);
        }
        return ms.ToArray();
    }
}
Here is the result of the above compressing code:
1F 8B 08 00 00 00 00 00 04 00 ED BD 07 60 1C 49 96 25 26 2F 6D CA 7B 7F 4A F5 4A
D7 E0 74 A1 08 80 60 13 24 D8 90 40 10 EC C1 88 CD E6 92 EC 1D 69 47 23 29 AB 2A
81 CA 65 56 65 5D 66 16 40 CC ED 9D BC F7 DE 7B EF BD F7 DE 7B EF BD F7 BA 3B 9D
4E 27 F7 DF FF 3F 5C 66 64 01 6C F6 CE 4A DA C9 9E 21 80 AA C8 1F 3F 7E 7C 1F 3F
22 7E 93 9F F9 FB 7F ED 65 7E 51 E6 D3 F6 D7 30 CF 93 57 BF C6 AF F1 6B FE 5A BF
E6 AF F1 F7 FE 56 7F FC 03 F3 D9 AF FB 5F DB AF 83 E7 0F FE 35 23 1F FE BA F4 FE
AF F1 6B FC 1A FF 0F 26 EC 38 82 5C 00 00 00
Here is the result I am expecting:
78 9C E3 AB D9 CB 9C 97 9A 9E 93 9A 5C C2 00 03 4E 41 0C 0C 8C 4C 8C 0C BB 45 7A
CD B9 80 4C 90 18 EB 4B D6 97 0C 28 00 2C CC D0 C8 C8 80 09 58 21 B2 00 65 6B 08
C8
What am I doing wrong? Could someone help me out here?
First, some information. DEFLATE is the compression algorithm; it is defined in RFC 1951. DEFLATE is used in the ZLIB and GZIP formats, defined in RFC 1950 and RFC 1952 respectively, which are essentially thin wrappers around DEFLATE bytestreams. The wrappers provide metadata such as the name of the file, timestamps, CRC or Adler checksums, and so on.
.NET's base class library implements a DeflateStream that produces a raw DEFLATE bytestream when used for compression; when used for decompression, it consumes a raw DEFLATE bytestream. .NET also provides a GZipStream, which is just a GZIP wrapper around that base. There is no ZlibStream in the .NET base class library, nothing that produces or consumes ZLIB. There are some tricks for doing it; one is sketched below.
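One such trick, sketched here under the assumption that you only need to produce (not consume) ZLIB: per RFC 1950, a ZLIB stream is just a 2-byte header, the raw DEFLATE data, and a big-endian Adler-32 of the uncompressed input, so you can wrap DeflateStream yourself. The DEFLATE body still comes from .NET's encoder, so it won't be byte-for-byte identical to zlib's output, but a compliant decompressor will accept it.
using System.IO;
using System.IO.Compression;

public static byte[] CompressZlib(byte[] input)
{
    using (MemoryStream ms = new MemoryStream())
    {
        ms.WriteByte(0x78);   // CMF: deflate, 32K window
        ms.WriteByte(0x9C);   // FLG: default compression, valid header checksum
        using (DeflateStream deflate = new DeflateStream(ms, CompressionMode.Compress, true))
        {
            deflate.Write(input, 0, input.Length);
        }

        // Adler-32 of the *uncompressed* data, appended big-endian (RFC 1950)
        const uint MOD = 65521;
        uint s1 = 1, s2 = 0;
        foreach (byte b in input)
        {
            s1 = (s1 + b) % MOD;
            s2 = (s2 + s1) % MOD;
        }
        uint adler = (s2 << 16) | s1;
        ms.WriteByte((byte)(adler >> 24));
        ms.WriteByte((byte)(adler >> 16));
        ms.WriteByte((byte)(adler >> 8));
        ms.WriteByte((byte)adler);

        return ms.ToArray();
    }
}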
The deflate logic in .NET exhibits a behavioral anomaly: previously compressed data can actually be inflated, significantly, when "compressed". This was the source of a Connect bug raised with Microsoft, and it has been discussed here on SO. This may be part of what you are seeing, as far as ineffective compression goes. Microsoft rejected the bug because, while it is ineffective for saving space, the compressed stream is not invalid; in other words, it can be "decompressed" by any compliant DEFLATE engine.
In any case, as someone else posted, the compressed bytestreams produced by different compressors are not necessarily the same; they depend on the compressor's default settings and on any application-specified settings. Even though the compressed bytestreams are different, they may still decompress to the same original bytestream. On the other hand, what you used to compress was GZIP, while what you want appears to be ZLIB. While they are related, they are not the same; you cannot use GZipStream to produce a ZLIB bytestream. This is the primary source of the difference you see.
I think you want a ZLIB stream.
The free managed Zlib in the DotNetZip project implements compressing streams for all of the three formats (DEFLATE, ZLIB, GZIP). The DeflateStream and GZipStream work the same way as the .NET builtin classes, and there's a ZlibStream class in there, that does what you think it does. None of these classes exhibit the behavior anomaly I described above.
In code it looks like this:
byte[] original = new byte[] {
0x0E, 0x7C, 0xBD, 0x03, 0x6E, 0x65, 0x67, 0x6C,
0x65, 0x63, 0x74, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x42, 0x52, 0x00, 0x00,
0x01, 0x02, 0x01, 0x00, 0xBB, 0x14, 0x8D, 0x37,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x05, 0xE9, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x81, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00
};
var compressed = Ionic.Zlib.ZlibStream.CompressBuffer(original);
The output is like this:
0000 78 DA E3 AB D9 CB 9C 97 9A 9E 93 9A 5C C2 00 03 x...........\...
0010 4E 41 0C 0C 8C 4C 8C 0C BB 45 7A CD 61 62 AC 2F NA...L...Ez.ab./
0020 19 B0 82 46 46 2C 82 AC 40 FD 40 0A 00 35 25 07 ...FF,..#.#..5%.
0030 CE .
To decompress,
var uncompressed = Ionic.Zlib.ZlibStream.UncompressBuffer(compressed);
You can see the documentation on the static CompressBuffer method.
EDIT
The question was raised: why is DotNetZip producing 78 DA for the first two bytes instead of 78 9C? The difference is immaterial: 78 DA encodes "max compression", while 78 9C encodes "default compression". As you can see in the data, for this small sample the actual compressed bytes are exactly the same whether using BEST or DEFAULT. Also, the compression level information is not used during decompression, so it has no effect in your application.
If you don't want "max" compression, in other words if you are very set on getting 78 9C as the first two bytes, even though it doesn't matter, then you cannot use the CompressBuffer convenience function, which uses the best compression level under the covers. Instead you can do this:
var compress = new Func<byte[], byte[]>( a => {
using (var ms = new System.IO.MemoryStream())
{
using (var compressor =
new Ionic.Zlib.ZlibStream( ms,
CompressionMode.Compress,
CompressionLevel.Default ))
{
compressor.Write(a,0,a.Length);
}
return ms.ToArray();
}
});
var original = new byte[] { .... };
var compressed = compress(original);
The result is:
0000 78 9C E3 AB D9 CB 9C 97 9A 9E 93 9A 5C C2 00 03 x...........\...
0010 4E 41 0C 0C 8C 4C 8C 0C BB 45 7A CD 61 62 AC 2F NA...L...Ez.ab./
0020 19 B0 82 46 46 2C 82 AC 40 FD 40 0A 00 35 25 07 ...FF,..#.#..5%.
0030 CE .
Quite simply, what you got has a GZip header; what you want is the simpler Zlib header. zlib has options for a GZip header, a Zlib header, or no header at all. Typically the Zlib header is used unless the data is associated with a disk file (in which case the GZip header is used). Apparently there is no way with the .NET library to write a zlib header (even though it is by far the most common header used in file formats). Try http://dotnetzip.codeplex.com/.
You can quickly test all the different zlib options using HexEdit (Operations->Compression->Settings). See http://www.hexedit.com. It took me 10 minutes to check your data by simply pasting your compressed bytes into HexEdit and decompressing. I also tried compressing your original bytes with GZip and ZLib headers as a double-check. Note that you may have to fiddle with the settings to get exactly the bytes you were expecting.