Select File APDU command failing with MIFARE 1K card - C#

I'm using PCSC-Sharp to transmit commands to a card. The specific command is:
00 A4 04 0C 0C D2 76 00 01 35 4B 41 4E 4D 30 31 00 00
So I did the following:
var contextFactory = ContextFactory.Instance;
using (var ctx = contextFactory.Establish(SCardScope.System)) {
    using (var isoReader = new IsoReader(ctx, readerName, SCardShareMode.Shared, SCardProtocol.Any, false)) {
        var apdu = new CommandApdu(IsoCase.Case4Short, isoReader.ActiveProtocol) {
            CLA = 0x00,
            Instruction = InstructionCode.SelectFile, // 0xA4
            P1 = 0x04,
            P2 = 0x0C,
            Data = new byte[] { 0x0C, 0xD2, 0x76, 0x00, 0x01, 0x35,
                                0x4B, 0x41, 0x4E, 0x4D, 0x30, 0x31, 0x00, 0x00 },
            Le = 0x00,
        };

        var response = isoReader.Transmit(apdu);
        Console.WriteLine("SW1 SW2 = {0:X2} {1:X2}", response.SW1, response.SW2);
    }
}
But Transmit throws an InvalidApduException when SW1 is read from the response.
Am I missing something when converting the command string into a CommandApdu instance?

MIFARE cards have no file system (just some numbered sectors) and understand no ISO 7816-4 APDUs, so the InvalidApduException seems correct. Readers that are able to handle MIFARE typically understand some pseudo-APDUs (CLA=0xFF, ...) which they translate accordingly. While these depend on the respective reader, some well-established conventions exist, which you should find easily here using the mifare tag.
The SELECT by AID (for file-system based cards) or SELECT application (for Java Cards) you are attempting is unlikely to have such a direct translation, since MIFARE cards have no similar concept.
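For illustration, a reader-level pseudo-APDU in PCSC-Sharp might look like the sketch below. It assumes a reader following the PC/SC 2.01 Part 3 convention for contactless storage cards, where GET DATA with CLA=0xFF returns the card's UID, and it assumes PCSC-Sharp's Response.GetData() helper; check your reader's documentation for the pseudo-APDUs it actually supports.

// Sketch, assuming a PC/SC 2.01 Part 3 compliant reader:
// the pseudo-APDU FF CA 00 00 00 asks for the card UID.
var getUid = new CommandApdu(IsoCase.Case2Short, isoReader.ActiveProtocol)
{
    CLA = 0xFF,                            // reader-level pseudo-APDU class
    Instruction = InstructionCode.GetData, // 0xCA
    P1 = 0x00,                             // P1-P2 = 0x0000 selects the UID
    P2 = 0x00,
    Le = 0x00                              // 0x00 = return full length
};

var uidResponse = isoReader.Transmit(getUid);
if (uidResponse.SW1 == 0x90 && uidResponse.SW2 == 0x00)
    Console.WriteLine("UID = " + BitConverter.ToString(uidResponse.GetData()));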

Related

Problem parsing timing info from an SPS inside an avcC box

I am trying to parse an SPS inside an avcC box in an MP4 file. For some reason, I don't get the expected timing values while everything else is fine. Using a hex editor, I extracted these bytes to work with.
byte[] spsSmall =
{
0x67, 0x42, 0xC0, 0x1E, 0x9E, 0x21, 0x81, 0x18, 0x53, 0x4D, 0x40, 0x40,
0x40, 0x50, 0x00, 0x00, 0x03, 0x00, 0x10, 0x00, 0x00, 0x03, 0x03, 0xC8,
0xF1, 0x62, 0xEE
};
And this is the H264 Analyzer report after converting my clip from .mp4 to .h264:
Nal length 29 start code 4 bytes
ref 3 type 7 Sequence parameter set
profile: 66
constaint_set0_flag: 1
constaint_set1_flag: 1
constaint_set2_flag: 0
constaint_set3_flag: 0
level_idc: 30
seq parameter set id: 0
log2_max_frame_num_minus4: 6
pic_order_cnt_type: 0
log2_max_pic_order_cnt_lsb_minus4: 7
num_ref_frames: 2
gaps_in_frame_num_value_allowed_flag: 0
pic_width_in_mbs_minus1: 34 (560)
pic_height_in_map_minus1: 19
frame_mbs_only_flag: 1
derived height: 320
direct_8x8_inference_flag: 1
frame_cropping_flag: 0
vui_parameters_present_flag: 1
aspect_ratio_info_present_flag: 0
overscan_info_present_flag: 0
video_signal_info_present_flag: 1
video_format: 5
video_full_range_flag: 0
colour_description_present_flag: 1
colour_primaries: 1
transfer_characteristics: 1
matrix_coefficients: 1
chroma_loc_info_present_flag: 0
timing_info_present_flag: 1
num_units_in_tick: 1
time_scale: 60
fixed_frame_scale: 1
nal_hrd_parameters_present_flag: 0
vcl_hrd_parameters_present_flag: 0
pic_struct_present_flag: 0
motion_vectors_over_pic_boundaries_flag: 1
max_bytes_per_pic_denom: 0
max_bits_per_mb_denom: 0
log2_max_mv_length_horizontal: 10
log2_max_mv_length_vertical: 10
num_reorder_frames: 0
max_dec_frame_buffering: 2
So I should expect num_units_in_tick to be 1 and time_scale to be 60, but for some reason I get a num_units_in_tick of 48 and a time_scale of 16777216.
You can find my implementation here
I checked FFmpeg and other implementations to see if I was missing something, but they seem to do the same things as I do. I tried other clips, and I still get everything right except the timing info. The docs don't seem to provide more than what I already know. Not only that, but colour_primaries, transfer_characteristics and matrix_coefficients are all equal to 1 right before the timing info; if I were reading too far ahead or behind, I would get those values wrong too, and the chance of hitting 24 bits with this exact sequence by accident is really low. So I am lost as to what I should do.
I found this post saying:

If you are using field-based video then this will be a field rate, so
you'll have to halve it to get a frame rate.

I'm not sure what that means. Even if I halve the number of bits (32 → 16) or divide the value by 2, I don't get anything close to the expected numbers.
You should remove the emulation_prevention_three_byte values from the NAL unit, i.e. search for byte-aligned 0x00 0x00 0x03 sequences and remove the 0x03. The resulting unescaped spsSmall would be:
byte[] spsSmall =
{
0x67, 0x42, 0xC0, 0x1E, 0x9E, 0x21, 0x81, 0x18, 0x53, 0x4D, 0x40, 0x40,
0x40, 0x50, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x03, 0xC8, 0xF1, 0x62,
0xEE
};
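For reference, a minimal unescaping routine could look like the sketch below (RemoveEmulationPrevention is a hypothetical helper name; full parsers additionally bound the check to the byte following the escape):

using System.Collections.Generic;

// Sketch: strip emulation-prevention bytes (00 00 03 -> 00 00) from a
// NAL unit to recover the RBSP before bit-parsing the SPS.
static byte[] RemoveEmulationPrevention(byte[] nal)
{
    var rbsp = new List<byte>(nal.Length);
    int zeros = 0;
    foreach (byte b in nal)
    {
        if (zeros >= 2 && b == 0x03)
        {
            zeros = 0;     // drop the escape byte itself
            continue;
        }
        zeros = (b == 0x00) ? zeros + 1 : 0;
        rbsp.Add(b);
    }
    return rbsp.ToArray();
}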

Bytes from string from bytes not equal to original bytes

I have an encryption algorithm (RSA) and I'm trying to make it encrypt and decrypt text of any length. The problem I'm facing: after encrypting a block of bytes, if I convert it to a string (using Encoding.ASCII.GetString) and then back (with GetBytes), I don't get the same array of bytes. The same happens with UTF8. I'm not really into encodings; can someone explain how I can solve this, so I can convert the encrypted bytes into a string, pass that to the decryption algorithm, and have it get the proper bytes back?
byte[] bytes = new byte[] { 0xe1, 0xde, 0x4a, 0x10, 0xea, 0x74, 0x8f, 0x18, 0xd7, 0x93, 0x04, 0x7a, 0x10, 0xb2, 0xa8, 0xfa, 0x11, 0x00, 0x7a, 0xfb, 0xcb,
0x19, 0xb7, 0xf5, 0x25, 0x26, 0x6d, 0xa0, 0x0d, 0xdc, 0xe5, 0x0a };
Console.WriteLine(string.Join(" ", bytes));
// 225 222 74 16 234 116 143 24 215 147 4 122 16 178 168 250 17 0 122 251 203 25 183 245 37 38 109 160 13 220 229 10
byte[] bytes2 = Encoding.ASCII.GetBytes(Encoding.ASCII.GetString(bytes));
Console.WriteLine(string.Join(" ", bytes2));
// 63 63 74 16 63 116 63 24 63 4 122 16 63 63 63 17 0 122 63 63 25 63 63 37 38 109 63 13 63 63 10
byte[] bytes3 = Encoding.UTF8.GetBytes(Encoding.ASCII.GetString(bytes));
Console.WriteLine(string.Join(" ", bytes3));
// 63 63 74 16 63 116 63 24 63 4 122 16 63 63 63 17 0 122 63 63 25 63 63 37 38 109 63 13 63 63 10
I got this byte array from encrypting "hello world!" with a 32-byte key in my encryption algorithm. As you can see, neither ASCII nor UTF8, converted to a string and back to bytes, gives me back the original array.
You're not supposed to use the Encoding.XYZ objects to convert sequences of bytes into strings unless those bytes actually make up a string in that encoding. Their purpose is the bytes-to-string conversion when you're reading text from some medium that serves bytes, such as a FileStream or similar, and the bytes have to be correctly encoded for the encoding you choose. You cannot convert arbitrary sequences of bytes to strings using these encodings; as you've already observed, they will mangle the result. You might get lucky for quite a few byte sequences, but if you're using any cryptographically secure encryption algorithm, that luck will run out immediately.
Instead, use something like Base64 or Base85. Base64 is built into .NET, and if you have this code:
byte[] original = ...
string encoded = Encoding.ASCII.GetString(original);
byte[] decoded = Encoding.ASCII.GetBytes(encoded);
all you have to do is change to this:
byte[] original = ...
string encoded = Convert.ToBase64String(original);
byte[] decoded = Convert.FromBase64String(encoded);
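A quick round-trip check (a sketch using the first eight bytes from the question) shows Base64 preserving the bytes exactly:

// Base64 round-trips arbitrary bytes, unlike decoding them as ASCII/UTF-8 text.
byte[] bytes = { 0xE1, 0xDE, 0x4A, 0x10, 0xEA, 0x74, 0x8F, 0x18 };
string encoded = Convert.ToBase64String(bytes);
byte[] decoded = Convert.FromBase64String(encoded);
Console.WriteLine(encoded);                   // 4d5KEOp0jxg=
Console.WriteLine(string.Join(" ", decoded)); // 225 222 74 16 234 116 143 24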

OpenSSL command line for Triple DES MAC like C# MACTripleDES

Can anyone explain how to make a TDES MAC in OpenSSL command line?
I am trying to duplicate some functionality of a working C# program in C using the OpenSSL API, and am having trouble reproducing the .NET MACTripleDES.ComputeHash function in OpenSSL. Here is an example with bogus data and key:
using (MACTripleDES hmac = new MACTripleDES(Utilities.HexStringToByteArray("112233445566778899aabbccddeeff00")))
{
    // Compute the MAC of the input data.
    byte[] hashValue = hmac.ComputeHash(Utilities.HexStringToByteArray("001000000000000000000000000000008000000000000000"));
    string signature = Utilities.ByteArrayToHexString(hashValue);
    PrintToFeedback("Bogus Signature = " + signature);
}
The result is "Bogus Signature = A056D11063084B3E". My new C program has to produce the same MAC over that data in order to interoperate with its wider environment, but the way to do this in OpenSSL eludes me. This shows that the OpenSSL data starts out the same as the C# data:
cmd>od -tx1 bsigin
0000000 00 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0000020 80 00 00 00 00 00 00 00
Stringified, 001000000000000000000000000000008000000000000000 matches the C# string.
cmd>openssl dgst -md5 -mac hmac -macopt hexkey:112233445566778899aabbccddeeff00 bsigin
HMAC-MD5(bsigin)= 7071d693451da3f2608531ee43c1bb8a
That output is too long, and my expected data is not a substring of it. Same for -sha1 etc. I tried encrypting and computing the digest separately; no good. Microsoft doesn't document what kind of MAC this is, and I can't find documentation on how to set up a MAC with TDES in OpenSSL.
So I'm hoping someone here knows enough about both platforms to give me a decent hint.
Command line answer:
cmd>openssl enc -des-ede-cbc -K 112233445566778899aabbccddeeff00 -iv 0000000000000000 -in bsigin -out bsigout
cmd>od -tx1 bsigout
0000000 7c de 93 c6 5f b4 03 21 aa c0 89 b8 ae f3 da 5d
0000020 a0 56 d1 10 63 08 4b 3e 4c 03 41 d6 dd 9e e4 32
        ^^^^^^^^^^^^^^^^^^^^^^^
That is, the command-line form returns 32 bytes, and bytes 16..23 contain the MAC.
API answer:
#include <openssl/des.h>

DES_key_schedule SchKey1, SchKey2;
DES_cblock iv = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };

DES_set_key((const_DES_cblock *)Key1, &SchKey1);
DES_set_key((const_DES_cblock *)Key2, &SchKey2);

/* 2-key TDES is K1,K2,K1: SchKey1 is passed as both the first and third schedule */
DES_ede3_cbc_encrypt((unsigned char *)input_data, (unsigned char *)cipher,
                     inputLength, &SchKey1, &SchKey2, &SchKey1, &iv, DES_ENCRYPT);
Here Key1 is the L key (the left 8 bytes of the 16-byte TDES key) and Key2 is the R key (the right 8 bytes). This call populates only 24 bytes of cipher, as opposed to the 32 bytes returned by the command-line version, which pads the input; you still take bytes 16..23. Hopefully the supporting declarations are intuitive.
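To make the equivalence explicit: MACTripleDES computes a plain CBC-MAC, i.e. it TDES-CBC-encrypts the data with a zero IV and takes the last ciphertext block as the MAC. A minimal C# sketch of that idea (TdesCbcMac is a hypothetical helper name; it assumes block-aligned input, as in the question, and run with the question's key and data it should reproduce A056D11063084B3E):

using System;
using System.Security.Cryptography;

// Sketch: CBC-MAC over TDES, matching MACTripleDES for block-aligned input.
static byte[] TdesCbcMac(byte[] key, byte[] data)
{
    using (var tdes = TripleDES.Create())
    {
        tdes.Key = key;                  // 16-byte key => 2-key TDES (K1,K2,K1)
        tdes.IV = new byte[8];           // zero IV
        tdes.Mode = CipherMode.CBC;
        tdes.Padding = PaddingMode.None; // input is already a multiple of 8 bytes
        using (var enc = tdes.CreateEncryptor())
        {
            byte[] cipher = enc.TransformFinalBlock(data, 0, data.Length);
            byte[] mac = new byte[8];
            Buffer.BlockCopy(cipher, cipher.Length - 8, mac, 0, 8);
            return mac;                  // the MAC is the last ciphertext block
        }
    }
}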

Setting the BluetoothLEAdvertisement.Flags in C# for iBeacon advertisement

MS provides samples to send and receive Bluetooth Low Energy advertisements.
I saw this very helpful answer for breaking down the iBeacon packet. There's also an example for setting BluetoothLEAdvertisement.ManufacturerData per the iBeacon standard.
How can I set the Flags of the BluetoothLEAdvertisement?
For example set the value to:
02-01-06
Thanks
Edit 1:
Here's the code:
using System;
using System.Management;
using System.Text.RegularExpressions;
using Windows.Devices.Bluetooth.Advertisement;
using System.Runtime.InteropServices.WindowsRuntime;

namespace BLE_iBeacon
{
    class IBeacon
    {
        static void Main()
        {
            Console.WriteLine("Advertising as iBeacon. Press Enter to exit");

            // Create and initialize a new publisher instance.
            BluetoothLEAdvertisementPublisher publisher = new BluetoothLEAdvertisementPublisher();

            // Add a manufacturer-specific section:
            var manufacturerData = new BluetoothLEManufacturerData();

            // Set the company ID for the manufacturer data.
            // 0x004C Apple, Inc.
            manufacturerData.CompanyId = 0x004C;

            byte[] dataArray = new byte[] {
                // last 2 bytes of Apple's iBeacon
                0x02, 0x15,
                // UUID E4 C8 A4 FC F6 8B 47 0D 95 9F 29 38 2A F7 2C E7
                0xE4, 0xC8, 0xA4, 0xFC,
                0xF6, 0x8B, 0x47, 0x0D,
                0x95, 0x9F, 0x29, 0x38,
                0x2A, 0xF7, 0x2C, 0xE7,
                // Major
                0x00, 0x00,
                // Minor
                0x00, 0x01,
                // TX power
                0xC5
            };
            manufacturerData.Data = dataArray.AsBuffer();

            // Add the manufacturer data to the advertisement publisher:
            publisher.Advertisement.ManufacturerData.Add(manufacturerData);
            publisher.Advertisement.Flags = BluetoothLEAdvertisementFlags.GeneralDiscoverableMode;

            publisher.Start();
            Console.Read();
            publisher.Stop();
        }
    }
}
Edit 2:
In the C# code, if I do not set the Flags, my Windows laptop advertises a raw packet like:
04 3E 27 02 01
02 01 0D 45 84 D3 68 21 1B 1A FF 4C 00
02 15 E4 C8 A4 FC F6 8B 47 0D 95 9F 29 38 2A F7 2C E7
00 00 00 01 C5 BA
My purpose is to use Raspberry Pis as BLE receivers. I used Radius Networks' code here. You can see in the ibeacon_scan script that they check whether an advertisement packet is an iBeacon with:
if [[ $packet =~ ^04\ 3E\ 2A\ 02\ 01\ .{26}\ 02\ 01\ .{14}\ 02\ 15 ]]; then
So the previous raw packet would not be recognized, since the flags part is missing. I am wondering if I can advertise the packet with the Flags included, like:
04 3E 2A 02 01
02 01 0D 45 84 D3 68 21 1B **02 01 1A** 1A FF 4C 00
02 15 E4 C8 A4 FC F6 8B 47 0D 95 9F 29 38 2A F7 2C E7
00 00 00 01 C5 BA
instead of changing the scan script on the Pi.
iBeacon on Windows
The following code publishes an iBeacon on Windows 10 machines:
// Create and initialize a new publisher instance.
BluetoothLEAdvertisementPublisher publisher = new BluetoothLEAdvertisementPublisher();

// Add a manufacturer-specific section:
var manufacturerData = new BluetoothLEManufacturerData();

// Set the company ID for the manufacturer data.
// 0x004C Apple, Inc.
manufacturerData.CompanyId = 0x004c;

// Create the payload
var writer = new DataWriter();
byte[] dataArray = new byte[] {
    // last 2 bytes of Apple's iBeacon
    0x02, 0x15,
    // UUID e2 c5 6d b5 df fb 48 d2 b0 60 d0 f5 a7 10 96 e0
    0xe2, 0xc5, 0x6d, 0xb5,
    0xdf, 0xfb, 0x48, 0xd2,
    0xb0, 0x60, 0xd0, 0xf5,
    0xa7, 0x10, 0x96, 0xe0,
    // Major
    0x00, 0x00,
    // Minor
    0x00, 0x01,
    // TX power
    0xc5
};
writer.WriteBytes(dataArray);
manufacturerData.Data = writer.DetachBuffer();

// Add the manufacturer data to the advertisement publisher:
publisher.Advertisement.ManufacturerData.Add(manufacturerData);

publisher.Start();
Proximity UUID
While testing this out, my iOS device would not recognize the proximity UUID you provided. I'm guessing this is because you generated it yourself, so the app doesn't know what to look for. Instead, I used the proximity UUID from this answer, which identifies the Windows 10 device as an AirLocate iBeacon.
Flags
Windows 10 does not currently allow developers to set the flags of a Bluetooth LE advertisement. Luckily, for the Windows device to be recognized as an iBeacon, you don't need those flags!
Ideally you would set the flags byte to 0x1A, but other values may still work. The important flag to set is General Discoverable (0x02).
BluetoothLEAdvertisementFlags is an enumeration of those bit values; see here.
My C# is very rusty, but you might try setting the flags value directly with: publisher.Advertisement.Flags = (BluetoothLEAdvertisementFlags)0x1A;
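If the property accepts a value at all on your SDK version, the same thing spelled out with enum members would look like this sketch (0x1A decomposes into the three flag bits shown in the comment):

// 0x1A = 0x02 (general discoverable) | 0x08 (dual-mode controller capable)
//      | 0x10 (dual-mode host capable)
publisher.Advertisement.Flags =
    BluetoothLEAdvertisementFlags.GeneralDiscoverableMode |
    BluetoothLEAdvertisementFlags.DualModeControllerCapable |
    BluetoothLEAdvertisementFlags.DualModeHostCapable;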

zlib compressing byte array?

I have this uncompressed byte array:
0E 7C BD 03 6E 65 67 6C 65 63 74 00 00 00 00 00 00 00 00 00 42 52 00 00 01 02 01
00 BB 14 8D 37 0A 00 00 01 00 00 00 00 05 E9 05 E9 00 00 00 00 00 00 00 00 00 00
00 00 00 00 01 00 00 00 00 00 81 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 05 00 00 01 00 00 00
And I need to compress it using the deflate algorithm (as implemented in zlib). From what I've found, the equivalent in C# would be GZipStream, but I can't match the compressed result at all.
Here is the compressing code:
public byte[] compress(byte[] input)
{
    using (MemoryStream ms = new MemoryStream())
    {
        using (GZipStream deflateStream = new GZipStream(ms, CompressionMode.Compress))
        {
            deflateStream.Write(input, 0, input.Length);
        }
        return ms.ToArray();
    }
}
Here is the result of the above compressing code:
1F 8B 08 00 00 00 00 00 04 00 ED BD 07 60 1C 49 96 25 26 2F 6D CA 7B 7F 4A F5 4A
D7 E0 74 A1 08 80 60 13 24 D8 90 40 10 EC C1 88 CD E6 92 EC 1D 69 47 23 29 AB 2A
81 CA 65 56 65 5D 66 16 40 CC ED 9D BC F7 DE 7B EF BD F7 DE 7B EF BD F7 BA 3B 9D
4E 27 F7 DF FF 3F 5C 66 64 01 6C F6 CE 4A DA C9 9E 21 80 AA C8 1F 3F 7E 7C 1F 3F
22 7E 93 9F F9 FB 7F ED 65 7E 51 E6 D3 F6 D7 30 CF 93 57 BF C6 AF F1 6B FE 5A BF
E6 AF F1 F7 FE 56 7F FC 03 F3 D9 AF FB 5F DB AF 83 E7 0F FE 35 23 1F FE BA F4 FE
AF F1 6B FC 1A FF 0F 26 EC 38 82 5C 00 00 00
Here is the result I am expecting:
78 9C E3 AB D9 CB 9C 97 9A 9E 93 9A 5C C2 00 03 4E 41 0C 0C 8C 4C 8C 0C BB 45 7A
CD B9 80 4C 90 18 EB 4B D6 97 0C 28 00 2C CC D0 C8 C8 80 09 58 21 B2 00 65 6B 08
C8
What am I doing wrong? Could someone help me out here?
First, some information: DEFLATE is the compression algorithm; it is defined in RFC 1951. DEFLATE is used in the ZLIB and GZIP formats, defined in RFC 1950 and RFC 1952 respectively, which are essentially thin wrappers around DEFLATE bytestreams. The wrappers provide metadata such as the name of the file, timestamps, CRCs or Adlers, and so on.
.NET's base class library implements a DeflateStream that produces a raw DEFLATE bytestream when used for compression; when used for decompression, it consumes a raw DEFLATE bytestream. .NET also provides a GZipStream, which is just a GZIP wrapper around that base. There is no ZlibStream in the .NET base class library, i.e. nothing that produces or consumes ZLIB. There are some tricks to doing it; one is sketched below.
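The usual trick is to write the two-byte ZLIB header and the trailing Adler-32 checksum yourself around DeflateStream's raw output. A sketch, assuming the classic behavior where DeflateStream emits a raw RFC 1951 stream (ZlibCompress is a hypothetical helper name):

using System.IO;
using System.IO.Compression;

// Sketch: ZLIB (RFC 1950) = 2-byte header + raw DEFLATE + Adler-32 of the input.
static byte[] ZlibCompress(byte[] input)
{
    using (var ms = new MemoryStream())
    {
        ms.WriteByte(0x78); // CMF: deflate, 32K window
        ms.WriteByte(0x9C); // FLG: default compression ((0x78*256 + 0x9C) % 31 == 0)
        using (var deflate = new DeflateStream(ms, CompressionMode.Compress, leaveOpen: true))
        {
            deflate.Write(input, 0, input.Length);
        }

        // Adler-32 over the uncompressed data, stored big-endian.
        uint a = 1, b = 0;
        foreach (byte t in input) { a = (a + t) % 65521; b = (b + a) % 65521; }
        uint adler = (b << 16) | a;
        ms.WriteByte((byte)(adler >> 24));
        ms.WriteByte((byte)(adler >> 16));
        ms.WriteByte((byte)(adler >> 8));
        ms.WriteByte((byte)adler);
        return ms.ToArray();
    }
}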
The deflate logic in .NET exhibits a behavioral anomaly: previously compressed data can actually be inflated, significantly, when "compressed". This was the source of a Connect bug raised with Microsoft, and it has been discussed here on SO. It may be what you are seeing, as far as ineffective compression goes. Microsoft rejected the bug because, while the output is ineffective at saving space, the compressed stream is not invalid; in other words, it can be "decompressed" by any compliant DEFLATE engine.
In any case, as someone else posted, the compressed bytestream produced by different compressors will not necessarily be the same; it depends on their default settings and the application-specified settings for the compressor. Even though the compressed bytestreams differ, they may still decompress to the same original bytestream. On the other hand, the thing you used to compress was GZIP, while what you want appears to be ZLIB. While the formats are related, they are not the same; you cannot use GZipStream to produce a ZLIB bytestream. This is the primary source of the difference you see.
I think you want a ZLIB stream.
The free managed Zlib in the DotNetZip project implements compressing streams for all three formats (DEFLATE, ZLIB, GZIP). Its DeflateStream and GZipStream work the same way as the built-in .NET classes, and there's a ZlibStream class in there that does what you think it does. None of these classes exhibits the behavioral anomaly described above.
In code it looks like this:
byte[] original = new byte[] {
0x0E, 0x7C, 0xBD, 0x03, 0x6E, 0x65, 0x67, 0x6C,
0x65, 0x63, 0x74, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x42, 0x52, 0x00, 0x00,
0x01, 0x02, 0x01, 0x00, 0xBB, 0x14, 0x8D, 0x37,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x05, 0xE9, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x81, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00
};
var compressed = Ionic.Zlib.ZlibStream.CompressBuffer(original);
The output is like this:
0000 78 DA E3 AB D9 CB 9C 97 9A 9E 93 9A 5C C2 00 03 x...........\...
0010 4E 41 0C 0C 8C 4C 8C 0C BB 45 7A CD 61 62 AC 2F NA...L...Ez.ab./
0020 19 B0 82 46 46 2C 82 AC 40 FD 40 0A 00 35 25 07 ...FF,..#.#..5%.
0030 CE .
To decompress,
var uncompressed = Ionic.Zlib.ZlibStream.UncompressBuffer(compressed);
You can see the documentation on the static CompressBuffer method.
EDIT
The question was raised: why does DotNetZip produce 78 DA for the first two bytes instead of 78 9C? The difference is immaterial: 78 DA encodes "max compression" while 78 9C encodes "default compression". As you can see in the data, for this small sample the actual compressed bytes are exactly the same whether using BEST or DEFAULT, and the compression-level information is not used during decompression anyway; it has no effect in your application.
If you are nevertheless set on getting 78 9C as the first two bytes, even though it doesn't matter, then you cannot use the CompressBuffer convenience function, which uses the best compression level under the covers. Instead you can do this:
var compress = new Func<byte[], byte[]>(a => {
    using (var ms = new System.IO.MemoryStream())
    {
        using (var compressor =
               new Ionic.Zlib.ZlibStream(ms,
                                         CompressionMode.Compress,
                                         CompressionLevel.Default))
        {
            compressor.Write(a, 0, a.Length);
        }
        return ms.ToArray();
    }
});
var original = new byte[] { .... };
var compressed = compress(original);
The result is:
0000 78 9C E3 AB D9 CB 9C 97 9A 9E 93 9A 5C C2 00 03 x...........\...
0010 4E 41 0C 0C 8C 4C 8C 0C BB 45 7A CD 61 62 AC 2F NA...L...Ez.ab./
0020 19 B0 82 46 46 2C 82 AC 40 FD 40 0A 00 35 25 07 ...FF,..#.#..5%.
0030 CE .
Quite simply, what you got has a GZIP header, and what you want is the simpler ZLIB header. zlib has options for a GZIP header, a ZLIB header, or no header at all. Typically the ZLIB header is used unless the data is associated with a disk file (in which case the GZIP header is used). Apparently there is no way with the .NET library to write a ZLIB header, even though this is by far the most common header used in file formats. Try http://dotnetzip.codeplex.com/.
You can quickly test all the different zlib options using HexEdit (Operations->Compression->Settings); see http://www.hexedit.com. It took me 10 minutes to check your data by simply pasting your compressed bytes into HexEdit and decompressing. I also tried compressing your original bytes with GZIP and ZLIB headers as a double-check. Note that you may have to fiddle with the settings to get exactly the bytes you were expecting.
