Sorry for the lengthy post; I'm trying to give as much information as I can, and I did my best to format everything to be as readable as possible. I've been trying to decompress GIFs in C# and seem to have everything down except the LZW decompression. I am reading the GIF in from a URL. For this example I will be using this animated GIF, which I know has 35 frames, but I only want to look at the first one.
HEADER: 474946383961
GIF Version: 89a
LOGICAL DESCRIPTOR: 41003D00F60000
Width : 65
Height: 61
Sorted Color Table: False
Global Color Table Size: 128
Background Color Index: 0
Pixel Bits: 0
Below is the global color table, something I am slightly confused about: only 123 of the 128 entries are filled and the rest are 000000s. Do I disregard those while filling the code table?
Sorry about the formatting here; I'm only showing it for the question regarding the 000000s.
Global Color Table:
141414 181818 1C1C1C 242424 262626 282828 2A2A2A 2C2C28 302C2C 30302C 3430303 434303
434344 030185 030284 03C346 038306 438306 43C305 044245 048245 448285 C50285C 502C5C
542C5C 543054 503860 542C68 50286C 50286C 542870 542870 542C70 582874 582870 582C745
82C745 C2C785 C2C605 830645 830645 C30645 C34685 C307C6 02C6C6 0346C6 438746 434786
030786 838786 C3C006 464806 02C806 42C806 430846 C34886 C348C7 034847 43C887 0388C7
43C887 83C8C7 83C907 034947 034947 434987 434947 438987 4389C7 8389C7 C38A07 838A07
C388C7 C40A46 050A48 03C948 048A08 848A08 C48A89 04CA89 44CAC9 44CAC9 850B09 C50B49
C50B49 C54B4A 054BCA 050B8A 054B8A 454BCA 454BCA 458BCA C5CC0A 454C0A 854C0A 858C4A
858C4A C58C4A C5CC8A C58C8A C5CC4B 058C8B 058C8B 05CC89 C64C8B 060CCB 460D0B 860D0B
864D4B C64D4C 068D8C 068D8C 468DCC 468DCC 86CE8D 070ECD 874F0D C78F4D C78F4D C7CF4E
078F8E 07CF8E 47C000 000000 000000 000000 000000 000000
Graphics Control Extension
GRAPHICS CONTROL EXTENSION: 21F9040D03007B00
Block Size: 4
Has Transparency: True
Delay: 768
Transparency Color Index: 123
Image Descriptor
IMAGE DESCRIPTOR: 2C0000000041003D0000
Left: 0
Top: 0
Width : 65
Height: 61
Local Color Table: False
Interlace: False
Finally, the part where I am confused: the image data.
LZW minimum code size: 7
IMAGE DATA SUBBLOCK 1 HEXDATA:
80 7B 82 83 84 85 86 87 88 89 8A 8B 8C 8D 8E 8F 86 06 06 90 94 84 07
95 86 05 0C 00 98 94 05 01 07 05 9D 7B 05 0B 9B A3 8F 07 0C 0C 02 05
93 94 07 08 0A 09 9C A8 8D 05 08 B2 04 02 AF A9 09 0B 0A B5 B6 8B 05
00 C0 AB 01 A2 8E 05 BF C0 C2 C3 8A 07 0A 08 AB 0F 19 CA 8C A5 C0 0B
CF D0 88 C5 0C 09 0C 08 6E 29 CB 0C 0B A6 DD DE 87 9A E9 4C 67 2A D8
89 06 0C 0A E9 08 EB EC 85 06 00 BF 58 63 C6 C8 5B 64 40 4B BA 05 B4
F6 11 03 F0 20 60 18 37 03 E9 85 41 90 4E 43 05 85 C4 AA 9C 39 43 E6
8C 0C 15 8A 0A 98 E9 58 A6 0A 05 8C 21 29 90 21 33 86 0C 98 88 99 C0
8C D9 38 E6 22 CA 90 13 58 86 21 23 05 45 81 9F 40 81 5A 09 C8 F2 E4
CD 94 1D 37 5C D0 93 A7 A9 D3 A6 2A 7C 8C 34 63 F2 28 B1 09 14 EE D8
A9 B3 B5 8E 57 AE 76 B8 AC 68 49 66 82 D5 6C 75 E2 A8 55 0B 67 6D 1C
For all intents and purposes we only need to look at the first few binary bits.
IMAGE BLOCK BINARY:
10000000 clearcode, 01111011, 10000010, 10000011, 10000100,
10000101, 10000110, 10000111, 10001000, 10001001, 10001010,
10001011, 10001100, 10001101, 10001110, 10001111,
Codes:
1000000, 0011110, 1110000, 01010000... etc.
My main question is: how do I use LSB packing order when reading these codes? Secondly, how does this make sense for each pixel considering the background is transparent; how do I get the index of the first non-transparent pixel? Finally, at what point do I increase the code size for adding codes to the table to the LZW minimum code size + 1 (8)? Thank you for any advice.
LSB packing order just means to read the data as little-endian and right shift the data as you "eat" the bits.
Here's an example in C; C# makes accessing memory more painful, but the logic would be the same:
uint32_t ulBits;
unsigned char *pData;      // points at the packed LZW data
int codelen, code, bitnum;
int mask;
int clearcode, nextcode;
bool decoding = true;

codelen = 8;                      // LZW minimum code size (7) + 1
mask = (1 << codelen) - 1;        // 0xFF while codes are 8 bits wide
clearcode = (mask >> 1) + 1;      // 128
nextcode = clearcode + 2;         // 130 = first free dictionary slot
ulBits = *(uint32_t *)pData;      // read 32 bits as little-endian
bitnum = 0;
#define WORDLEN 32
// To read the variable-length codes you would do the following:
while (decoding == true)
{
    if ((bitnum + codelen) > WORDLEN)    // need to read more data
    {
        pData += (bitnum >> 3);          // adjust source pointer
        ulBits = *(uint32_t *)pData;     // read another 32 bits
        bitnum &= 7;                     // reset bit offset
    }
    code = (ulBits >> bitnum);
    code &= mask;
    bitnum += codelen;
    // some logic here to increment nextcode (and widen codelen/mask) is beyond the scope of this answer
    /* <the rest of your logic here> */
}
As you decompress the codes, you add a new item to your dictionary and increment your "next code" value. When this value can't fit in the current code size, you increase the code size by one bit, up to a maximum of 4096 entries (12 bits), at which point you usually start over with a clear code to reset the dictionary. There is a rarely used option called a "deferred clear code"; in that case the full dictionary stays in use until a clear code is received. There are plenty of sample LZW decoders that you can look at, so it's not necessary to post an entire one here.
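To make the third part of the question (when to grow the code size) concrete, here is a minimal C# sketch of just that bookkeeping, assuming a minimum code size of 7 as in this image; the class and member names are my own, not part of any library:

// Minimal sketch of GIF LZW code-size bookkeeping (not a full decoder).
class LzwCodeSize
{
    readonly int minCodeSize;   // 7 for this image
    public int CodeSize;        // current code width in bits
    public int ClearCode;       // 1 << minCodeSize
    public int EndCode;         // ClearCode + 1
    public int NextCode;        // next free dictionary slot

    public LzwCodeSize(int minCodeSize)
    {
        this.minCodeSize = minCodeSize;
        Reset();
    }

    // Call when a clear code is read: dictionary and width go back to the start state.
    public void Reset()
    {
        CodeSize = minCodeSize + 1;    // 8 bits for a 7-bit minimum code size
        ClearCode = 1 << minCodeSize;  // 128
        EndCode = ClearCode + 1;       // 129
        NextCode = ClearCode + 2;      // 130
    }

    // Call after each new dictionary entry is added while decoding.
    public void EntryAdded()
    {
        NextCode++;
        if (NextCode == (1 << CodeSize) && CodeSize < 12)   // 12 bits is the GIF maximum
            CodeSize++;
    }
}

Reading codes at the current CodeSize width, the very first code in this image is 10000000 = 128, i.e. the clear code, which matches the breakdown above.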
Related
I'm trying to generate a public key with BouncyCastle (because I'm using Unity and do not have access to ECDiffieHellmanCng), and then I transfer the public key to the server, which is using ECDiffieHellmanCng for its key handling.
The server is rejecting my key, apparently because of its length: ECDiffieHellmanCng generates a public key that is much larger than the one Bouncy Castle generates.
Is there a way to generate a larger key in bouncy castle?
I tried changing the key bit size, but I get an error saying: InvalidParameterException: unknown key size.
Key that BouncyCastle generates:
3059301306072A8648CE3D020106082A8648CE3D03010703420004272F71C1D8B3DC0A7FCB1E9650EEF64EA8F639BEC97D49F8848455C2F5869F7324332D188129C84727F834EE7EE7D8EB7DFC8D40CD4ED219A4FBCEF6C15200F3
Key that ECDiffieHellmanCng generates:
45434B35420000000055CC8665A66A7CDF2E9BF7C69A25B322C72CDBDB1EA8F348050B0A7CF32F9AAD890EA513583367977D5157B2F7FBF55661C9AE2DBAF09B1DC1EA8F193688C3C09501BEE326867ABCB41CA1029F66AF888649F0A6C0674D19670CF32461BA7B3867C1623D68829A7A9A7F1CFC6F5DB99E13C8D960AEF6F5CDAB5B3B62ED6CBEC7222C9F
Here is the code that's generating the Bouncy Castle key:
const string Algorithm = "ECDH";
const int KeyBitSize = 256;
const int NonceBitSize = 128;
const int MacBitSize = 128;
const int DefaultPrimeProbability = 30;
IAsymmetricCipherKeyPairGenerator aliceKeyGen = GeneratorUtilities.GetKeyPairGenerator(Algorithm);
DHParametersGenerator aliceGenerator = new DHParametersGenerator();
aliceGenerator.Init(KeyBitSize, DefaultPrimeProbability, new SecureRandom());
DHParameters aliceParameters = aliceGenerator.GenerateParameters();
KeyGenerationParameters aliceKGP = new DHKeyGenerationParameters(new SecureRandom(), aliceParameters);
aliceKeyGen.Init(aliceKGP);
AsymmetricCipherKeyPair aliceKeyPair = aliceKeyGen.GenerateKeyPair();
IBasicAgreement aliceKeyAgree = AgreementUtilities.GetBasicAgreement(Algorithm);
aliceKeyAgree.Init(aliceKeyPair.Private);
SubjectPublicKeyInfo publicKeyInfo = SubjectPublicKeyInfoFactory.CreateSubjectPublicKeyInfo(aliceKeyPair.Public);
byte[] serializedPublicBytes = publicKeyInfo.ToAsn1Object().GetDerEncoded();
string serializedPublic = AsString(serializedPublicBytes);
public static string AsString(byte[] bytes, bool keepDashes = false)
{
    string hex = BitConverter.ToString(bytes);
    return (keepDashes ? hex : hex.Replace("-", ""));
}
I also tried the Mentalis.org DH library, which gives me a larger key, but still just a hair too short.
// create a new DH instance
DiffieHellman dh1 = new DiffieHellmanManaged();
// generate the public key of the first DH instance
byte[] ke1 = dh1.CreateKeyExchange();
string publicKeyString = AsString(ke1);
Key from mentalis.org library:
5F4542F9A8F5636ECCBBAC38238C97ABE757B8F65E25B181BCF41C58985E699EFD6B9606B99F7074717E83F7AC1B5E97DFF6DBA94876F74645F25F0D7FAA1528898C1BD0BB568DF15A98724093766B213769893A05B47E40410B0F395C834F68F57B2EE01852895D912C1D56675A7D8C5367B5E06DE08AAA18CBB4C69F3AE142
If you were to decode the BouncyCastle version you'd see that it is
30 59
SEQUENCE
30 13
SEQUENCE
06 07 2A 86 48 CE 3D 02 01
OBJECT IDENTIFIER 1.2.840.10045.2.1 (id-ecPublicKey)
06 08 2A 86 48 CE 3D 03 01 07
OBJECT IDENTIFIER 1.2.840.10045.3.1.7 (id-secp256r1)
03 42 00
BIT STRING
04 27 2F 71 C1 D8 B3 DC 0A 7F CB 1E 96 50 EE F6
4E A8 F6 39 BE C9 7D 49 F8 84 84 55 C2 F5 86 9F
73 24 33 2D 18 81 29 C8 47 27 F8 34 EE 7E E7 D8
EB 7D FC 8D 40 CD 4E D2 19 A4 FB CE F6 C1 52 00
F3
The BIT STRING's payload is the encoded value of an ecPublicKey whose curve is secp256r1.
Then, following 2.3.3 Elliptic-Curve-Point-to-Octet-String Conversion from the SEC-1 paper we see that it's encoded as
04
Uncompressed Point
X = 27 2F 71 C1 D8 B3 DC 0A 7F CB 1E 96 50 EE F6 4E
A8 F6 39 BE C9 7D 49 F8 84 84 55 C2 F5 86 9F 73
Y = 24 33 2D 18 81 29 C8 47 27 F8 34 EE 7E E7 D8 EB
7D FC 8D 40 CD 4E D2 19 A4 FB CE F6 C1 52 00 F3
Following the logic from the .NET Core import/export ECC feature we see that the equivalent CNG blob is
// BCRYPT_ECDH_PUBLIC_P256_MAGIC (little-endian)
45 43 4B 31
// cbKey=(DWORD)32 (little-endian)
20 00 00 00
// The X bytes (big-endian):
27 2F 71 C1 D8 B3 DC 0A 7F CB 1E 96 50 EE F6 4E
A8 F6 39 BE C9 7D 49 F8 84 84 55 C2 F5 86 9F 73
// The Y bytes (big-endian):
24 33 2D 18 81 29 C8 47 27 F8 34 EE 7E E7 D8 EB
7D FC 8D 40 CD 4E D2 19 A4 FB CE F6 C1 52 00 F3
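To illustrate that mapping, here is a rough C# sketch (my own, not the .NET Core implementation) that copies the SEC1 uncompressed point at the end of this SubjectPublicKeyInfo into the CNG public blob layout shown above; the method name and the fixed P-256 sizes are assumptions for this specific key:

using System;

// Rough sketch: build a CNG ECC public blob (P-256) from the DER-encoded
// SubjectPublicKeyInfo shown above. Assumes an uncompressed secp256r1 point.
static byte[] ToCngP256PublicBlob(byte[] spki)
{
    // For secp256r1 the uncompressed point (0x04 || X || Y) is the last 65 bytes.
    byte[] point = new byte[65];
    Array.Copy(spki, spki.Length - 65, point, 0, 65);
    if (point[0] != 0x04)
        throw new ArgumentException("Expected an uncompressed EC point.");

    byte[] blob = new byte[8 + 64];
    // BCRYPT_ECDH_PUBLIC_P256_MAGIC = 0x314B4345 ("ECK1"), written little-endian.
    blob[0] = 0x45; blob[1] = 0x43; blob[2] = 0x4B; blob[3] = 0x31;
    // cbKey = 32 (bytes per coordinate), little-endian DWORD; blob[5..7] stay zero.
    blob[4] = 0x20;
    // X then Y, big-endian, copied straight from the point.
    Array.Copy(point, 1, blob, 8, 64);
    return blob;
}

Such a blob could then be imported on the Windows side with CngKey.Import(blob, CngKeyBlobFormat.EccPublicBlob). Note that the server-generated key shown in the question appears to be a P-521 blob ("ECK5" magic with 66-byte coordinates), so the curve choice would also have to match before the server accepts the key.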
Hello, I'm trying to pass data from a pointer to a struct, but the values seem to be different.
struct somestruct
{
    public file header;
    public uint version;
}

unsafe struct file
{
    public fixed char name[8];
    public uint type;
    public uint size;
}
Then in code somewhere..
public unsafe int ReadFile(string filepath)
{
    somestruct f = new somestruct();
    byte[] fdata = System.IO.File.ReadAllBytes(filepath);
    fixed (byte* src = fdata)
    {
        f.header = *(file*)src;
        MessageBox.Show(new string(f.header.name)); // should be 'FILENAME' but it looks like Japanese
    }
    return 0;
}
Offset(h) 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
00000000 46 49 4C 45 4E 41 4D 45 00 00 00 01 00 00 00 30 FILENAME.......0
00000010 74 27 9F EF 74 77 F1 D7 C5 86 93 3D 39 0D 72 A9 t'Ÿïtwñ×ņ“=9.r©
00000020 63 8B 92 CF F6 7D 8A 14 45 9D 68 51 A4 8E A4 EE c‹’Ïö}Š.E.hQ¤Ž¤î
00000030 4E FE D0 66 45 0E C9 8D 96 BB F4 EE 52 1F 89 D3 NþÐfE.É.–»ôîR.‰Ó
00000040 5C 80 1A 71 8A 16 B1 8B 3A A8 1B A4 48 11 B8 E8 \€.qŠ.±‹:¨.¤H.¸è
Do you have any idea what's going on?
Each char is 2 bytes - a fixed buffer of 8 chars is 16 bytes. You are reading the first 8 bytes as only the first 4 characters in that buffer, and the high bytes will make it look like the eastern Unicode ranges.
I would say: don't do this; deserialize it at the stream level instead.
Basically, read (at least) 20 bytes into a buffer, then decode manually, using:
string s = Encoding.ASCII.GetString(buffer, 0, 8);
For the string, and probably shift operations for the unsigned integers.
You could also use unsafe code to read the integers from the buffer, via the other meaning of fixed and a pointer-cast.
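A minimal sketch of that stream-level approach (my own illustration, assuming the 20-byte layout implied by the structs: 8 single-byte name characters, then type, size, and version as unsigned integers):

using System;
using System.IO;
using System.Text;

static void ReadHeader(string filepath)
{
    using (var reader = new BinaryReader(File.OpenRead(filepath)))
    {
        byte[] buffer = reader.ReadBytes(20);                    // name + type + size + version
        string name = Encoding.ASCII.GetString(buffer, 0, 8);    // "FILENAME"
        // BitConverter assumes the integers are stored little-endian; use shifts if not.
        uint type    = BitConverter.ToUInt32(buffer, 8);
        uint size    = BitConverter.ToUInt32(buffer, 12);
        uint version = BitConverter.ToUInt32(buffer, 16);
        Console.WriteLine($"{name} type={type} size={size} version={version}");
    }
}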
A char is UTF-16 and is 2 bytes. You need to convert the UTF-8/ANSI (1 byte) string to a UTF-16 string.
I have a string variable from which I get the following bytes with the following loop:
Bytes I get: 1e 05 55 3c *e2 *91 6f 03 *fe 1a 1d *f4 51 6a 5e 3a *ce *d1 04 *8c
With that loop:
byte[] temp = new byte[source.Length];
string x = "";
for (int i = 0; i != source.Length; i++)
{
    temp[i] = ((byte) source[i]);
}
Now I want to simplify that operation and use Encoding's GetBytes.
The problem is that I cannot find an appropriate encoding; for example, I get several bytes wrong:
Encoding.ASCII.GetBytes(source): 1e 05 55 3c *3f *3f 6f 03 *3f 1a 1d *3f 51 6a 5e 3a *3f *3f 04 *3f
Encoding.Default.GetBytes(source): 1e 05 55 3c e2 3f 6f 03 3f 1a 1d f4 51 6a 5e 3a ce 4e 04 3f
How can I get rid of that loop and use Encoding's GetBytes?
Here is the summary:
Loop(correct bytes): 1e 05 55 3c *e2 *91 6f 03 *fe 1a 1d *f4 51 6a 5e 3a *ce *d1 04 *8c
Encoding.ASCII.GetBytes(source): 1e 05 55 3c *3f *3f 6f 03 *3f 1a 1d *3f 51 6a 5e 3a *3f *3f 04 *3f
Encoding.Default.GetBytes(source): 1e 05 55 3c e2 3f 6f 03 3f 1a 1d f4 51 6a 5e 3a ce 4e 04 3f
Thanks!
Addition:
I have a string input in hex, something like: "B1807869C20CC1788018690341"
I then convert this into a string with the following method:
private static string hexToString(string sText)
{
    int i = 0;
    string plain = "";
    while (i < sText.Length)
    {
        plain += Convert.ToChar(Convert.ToInt32(sText.Substring(i, 2), 16));
        i += 2;
    }
    return plain;
}
Your hexToString is transferring byte values (via hex) directly to Unicode code-points in the range 0-255. As it happens, that maps onto code page 28591 (ISO-8859-1), so if you use:
Encoding enc = Encoding.GetEncoding(28591);
and use that enc, you should get the right data; however, a more important point here is that binary data is not the same as text data, and you should not use a string to hold arbitrary binary.
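For completeness, here is a small sketch of skipping the string entirely and converting the hex input straight to bytes (HexToBytes is just a hypothetical helper, not a framework method):

using System;

static byte[] HexToBytes(string hex)
{
    byte[] result = new byte[hex.Length / 2];
    for (int i = 0; i < result.Length; i++)
        result[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    return result;
}

// "B1807869C20CC1788018690341" -> B1 80 78 69 C2 0C C1 78 80 18 69 03 41

With that, the Encoding round-trip (and the lossy ASCII/Default conversions) disappears entirely.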
Presuming that you are trying to "decode" a string literal:
C# stores the strings as Unicode internally.
So you might want to use an encoding that (correctly) supports Unicode,
such as:
Encoding.UTF8.GetBytes(source)
Encoding.Unicode.GetBytes(source)
Note the caution given for Encoding.Default in MSDN
I'm writing a C#.NET app to run on Windows that needs to take an image of a removable disk and chuck it onto a Linux Live USB. The Live USB is then inserted into the target machine and booted; on start-up it runs a script which uses the dd command like so to flash the image onto another drive:
dd if=/path/to/file/from/csharp/program of=/dev/sdX
The problem I am having is creating the image on the windows side. I have tried my Live Linux out with files I have created on a Linux system using dd and that works fine, but I need to be able to create these files from within a C#.Net application on Windows. I'd rather not have to rely on cygwin or some other dependency so tried to use the Win32 CreateFile function to open the physical device.
CreateFile is called with the first arg set to "\\.\F:" (if F: is the drive I want to image), like so:
SafeFileHandle TheDevice = CreateFile(_DevicePath, (uint)FileAccess.Read, (uint)(FileShare.Write | FileShare.Read | FileShare.Delete), IntPtr.Zero, (uint)FileMode.Open, (uint)FILE_ATTRIBUTE_SYSTEM | FILE_FLAG_SEQUENTIAL_SCAN, IntPtr.Zero);
if (TheDevice.IsInvalid)
{
    throw new IOException("Unable to access drive. Win32 Error Code " + Marshal.GetLastWin32Error());
}
FileStream Dest = System.IO.File.Open(_SaveFile, FileMode.Create);
FileStream Src = new FileStream(TheDevice, FileAccess.Read);
Src.CopyTo(Dest);
Dest.Flush();
Src.Close();
Dest.Close();
But when the output file is dd'd back onto a disk using the Live Linux USB the result is not as expected (the disk isn't bootable etc, but from examining the output file in a hex editor, it looks like there is an MBR at the beginning etc).
Is this a problem with endianness, or should I use something other than a FileStream to copy the data into the file?
Alternatively, is there an example of dd-for-Windows source code (C# or C++; I've looked at the Delphi source for http://www.chrysocome.net/dd but don't totally understand it, and don't have a decent Delphi IDE to pick the code apart) so I can see how it works?
UPDATE/EDIT:
Here is a hex string of the first 512 Bytes that the dd output contains:
33 C0 FA 8E D8 8E D0 BC 00 7C 89 E6 06 57 8E C0 FB FC BF 00 06 B9 00 01 F3 A5 EA 1F 06
00 00 52 52 B4 41 BB AA 55 31 C9 30 F6 F9 CD 13 72 13 81 FB 55 AA 75 0D D1 E9 73 09 66
C7 06 8D 06 B4 42 EB 15 5A B4 08 CD 13 83 E1 3F 51 0F B6 C6 40 F7 E1 52 50 66 31 C0 66
99 E8 66 00 E8 21 01 4D 69 73 73 69 6E 67 20 6F 70 65 72 61 74 69 6E 67 20 73 79 73 74
65 6D 2E 0D 0A 66 60 66 31 D2 BB 00 7C 66 52 66 50 06 53 6A 01 6A 10 89 E6 66 F7 36 F4
7B C0 E4 06 88 E1 88 C5 92 F6 36 F8 7B 88 C6 08 E1 41 B8 01 02 8A 16 FA 7B CD 13 8D 64
10 66 61 C3 E8 C4 FF BE BE 7D BF BE 07 B9 20 00 F3 A5 C3 66 60 89 E5 BB BE 07 B9 04 00
31 C0 53 51 F6 07 80 74 03 40 89 DE 83 C3 10 E2 F3 48 74 5B 79 39 59 5B 8A 47 04 3C 0F
74 06 24 7F 3C 05 75 22 66 8B 47 08 66 8B 56 14 66 01 D0 66 21 D2 75 03 66 89 C2 E8 AC
FF 72 03 E8 B6 FF 66 8B 46 1C E8 A0 FF 83 C3 10 E2 CC 66 61 C3 E8 62 00 4D 75 6C 74 69
70 6C 65 20 61 63 74 69 76 65 20 70 61 72 74 69 74 69 6F 6E 73 2E 0D 0A 66 8B 44 08 66
03 46 1C 66 89 44 08 E8 30 FF 72 13 81 3E FE 7D 55 AA 0F 85 06 FF BC FA 7B 5A 5F 07 FA
FF E4 E8 1E 00 4F 70 65 72 61 74 69 6E 67 20 73 79 73 74 65 6D 20 6C 6F 61 64 20 65 72
72 6F 72 2E 0D 0A 5E AC B4 0E 8A 3E 62 04 B3 07 CD 10 3C 0A 75 F1 CD 18 F4 EB FD 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 19 16 9F 29 00 00 80 01 01 00 06 FE 3F 0E 3F 00 00 00 61 C8 03 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 AA
and here is what my code produces:
EB 76 90 4D 53 44 4F 53 35 2E 30 00 02 04 04 00 02 00 02 00 00 F8 F2 00 3F 00 FF 00 3F
00 00 00 61 C8 03 00 80 00 29 7A E8 21 04 4E 4F 20 4E 41 4D 45 20 20 20 20 46 41 54 31
36 20 20 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 E9 05 01 B4 0E 53 33 DB CD 10 5B C3 8A 07 3C 00 74 06 E8 EE FF 43 EB F4 C3
0D 4E 6F 20 42 50 42 3A 20 43 61 6E 27 74 20 62 6F 6F 74 20 75 73 69 6E 67 20 43 48 53
20 66 75 6E 63 74 69 6F 6E 73 00 50 B0 2E E8 BC FF 58 33 DB 8E 06 E4 01 F6 06 DC 01 02
75 42 F6 06 DC 01 04 75 07 80 3E E8 01 80 72 34 53 53 52 50 06 53 55 6A 10 8B F4 52 50
8A 16 E8 01 B8 00 42 F9 CD 13 8A EC 58 5A 8D 64 10 72 14 80 FD 00 75 0F 03 C5 83 D2 00
C3 BB 91 00 E8 78 FF F4 EB FD 83 3E 18 00 00 74 F0 52 50 8B CD F7 36 18 00 8B F2 03 D1
3B 16 18 00 76 06 8B 0E 18 00 2B CE 33 D2 F7 36 1A 00 88 16 E9 01 8B F8 8B D7 51 8A C1
8D 4C 01 C0 E6 06 0A CE 8A EA 8B 16 E8 01 B4 02 CD 13 59 73 15 80 FC 09 75 0A 49 EB DE
8A C4 04 30 E8 18 FF B4 00 CD 13 EB D1 58 5A 03 C1 83 D2 00 2B E9 74 07 C1 E1 09 03 D9
EB 94 C3 00 00 00 00 FA FC E8 00 00 5E 81 EE 85 01 2E 8B 84 E4 01 8E D8 8E C0 8E D0 2E
C7 84 7C 01 AF 01 2E 89 84 7E 01 B9 00 01 BF 00 00 F3 2E A5 2E FF AC 7C FF BC 00 0A FB
80 3E E8 01 FF 75 04 88 16 E8 01 83 06 E4 01 20 A1 E0 01 8B 16 E2 01 BD 02 00 E8 E9 FE
50 52 EB 74 90 00 00 00 00 00 00 00 00 00 00 00 D3 20 00 00 00 30 80 00 FF 00 68 41 00
40 09 FF 40 5A AC 04 00 00 AC 04 00 00 00 00 12 00 55 AA
This was taken from exactly the same CF card without any editing/writing happening, so I'm confused as to why they are so different, but both end with the correct 55 AA bytes. Does Windows mangle the MBRs on cards when they're accessed this way, or is some other weird under-the-hood stuff happening that I'm not aware of?
I think what you have should work - I've tried this myself using a bootable floppy disk image (mounted as a virtual drive using ImDisk) and the resulting file is binary identical to the original image.
For completeness here is the code I used (in its entirety):
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

namespace ConsoleApplication1
{
    public class Program
    {
        const int FILE_ATTRIBUTE_SYSTEM = 0x4;
        const int FILE_FLAG_SEQUENTIAL_SCAN = 0x08000000;

        [DllImport("Kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
        public static extern SafeFileHandle CreateFile(string fileName, [MarshalAs(UnmanagedType.U4)] FileAccess fileAccess, [MarshalAs(UnmanagedType.U4)] FileShare fileShare, IntPtr securityAttributes, [MarshalAs(UnmanagedType.U4)] FileMode creationDisposition, int flags, IntPtr template);

        [STAThread]
        static void Main()
        {
            using (SafeFileHandle device = CreateFile(@"\\.\E:", FileAccess.Read, FileShare.Write | FileShare.Read | FileShare.Delete, IntPtr.Zero, FileMode.Open, FILE_ATTRIBUTE_SYSTEM | FILE_FLAG_SEQUENTIAL_SCAN, IntPtr.Zero))
            {
                if (device.IsInvalid)
                {
                    throw new IOException("Unable to access drive. Win32 Error Code " + Marshal.GetLastWin32Error());
                }

                using (FileStream dest = File.Open("TempFile.bin", FileMode.Create))
                using (FileStream src = new FileStream(device, FileAccess.Read))
                {
                    src.CopyTo(dest);
                }
            }
        }
    }
}
If this doesn't work then it seems to indicate that:
1. There is a problem with the original image.
2. The problem is with whatever is using the disk image that you've just written.
3. There is some subtle difference in dealing with the specific device you are accessing (although I can't think what).
The most likely culprit is step 2. What exactly is it that you are doing with the resulting disk image?
Update: This is written in the comments, but for completeness I thought I'd add it to my answer - it looks like what's happening is that the contents of the first partition of the disk are being written, when instead what is wanted is the contents of the entire disk.
When you take a look at the second hex string (the one produced by sample code) in something like HxD we see this:
ëv.MSDOS5.0..........øò.?.ÿ.?...aÈ..€.)zè!.NO NAME FAT16 ..
........................................................é..´.S3Û
Í.[Ê.<.t.èîÿCëôÃ.No BPB: Can't boot using CHS functions.P°.è¼ÿX
3ÛŽ.ä.ö.Ü..uBö.Ü..u.€>è.€r4SSRP.SUj.‹ôRPŠ.è.¸.BùÍ.ŠìXZ.d.r.€ý.u.
.ŃÒ.û‘.èxÿôëýƒ>...tðRP‹Í÷6..‹ò.Ñ;...v.‹...+Î3Ò÷6..ˆ.é.‹ø‹×QŠÁ.
L.Àæ..Ίê‹.è.´.Í.Ys.€ü.u.IëÞŠÄ.0è.ÿ´.Í.ëÑXZ.ÁƒÒ.+ét.Áá..Ùë”Ã....
úüè..^.î…..‹„ä.ŽØŽÀŽÐ.Ç„|.¯..‰„~.¹..¿..ó.¥.ÿ¬|ÿ¼..û€>è.ÿu.ˆ.è.ƒ.
ä. ¡à.‹.â.½..èéþPRët............Ó ...0€.ÿ.hA.#.ÿ#Z¬...¬.......Uª
This looks to me like the boot sector of a FAT16 partition - the presence of the strings "MSDOS5.0", "NO NAME" and "FAT16" near the start is a dead giveaway.
Compare this to the output of the first hex string (the one produced by dd):
3ÀúŽØŽÐ¼.|‰æ.WŽÀûü¿..¹..ó¥ê....RR´A»ªU1É0öùÍ.r..ûUªu.Ñés.fÇ...´B
ë.Z´.Í.ƒá?Q.¶Æ#÷áRPf1Àf™èf.è!.Missing operating system...f`f1Ò».
|fRfP.Sj.j.‰æf÷6ô{Àä.ˆáˆÅ’ö6ø{ˆÆ.áA¸..Š.ú{Í..d.faÃèÄÿ¾¾}¿¾.¹ .ó¥
Ãf`‰å»¾.¹..1ÀSQö.€t.#‰ÞƒÃ.âóHt[y9Y[ŠG.<.t.$.<.u"f‹G.f‹V.f.Ðf!Òu.
f‰Âè¬ÿr.è¶ÿf‹F.è ÿƒÃ.âÌfaÃèb.Multiple active partitions...f‹D.f.
F.f‰D.è0ÿr..>þ}Uª.….ÿ¼ú{Z_.úÿäè..Operating system load error...^
¬´.Š>b.³.Í.<.uñÍ.ôëý......................................Ÿ)..€.
...þ?.?...aÈ..................................................Uª
And we see something that looks to me a lot like a master boot record. Why? Because in the MBR all of the first 440 bytes are boot code, unlike a FAT boot sector, which contains the distinctive BIOS parameter block (it looks like garbage above, but if you put it through a disassembler you get something that looks like valid 16-bit code).
Also, both of those look like valid and completely different boot sectors (complete with error messages). There is no way that a programming error could have "mangled" one to look like the other - it must just be that the wrong thing is being read.
In order to get CreateFile to return the disk instead of the partition it looks like you just need to pass it a different string, for example @"\\.\PhysicalDrive0" opens the first physical disk.
See:
Low Level Disk Access
INFO: Direct Drive Access Under Win32
This is what I've written to get the \\.\PhysicalDriveX path for a given drive letter. If I pass the drive letter into this and pass the return value into CreateFile as the first parameter, I should now get something similar to dd under Linux.
using System.Management; //Add in a reference to this as well in the project settings
public static string GetPhysicalDevicePath(char DriveLetter)
{
    ManagementClass devs = new ManagementClass(@"Win32_DiskDrive");
    ManagementObjectCollection moc = devs.GetInstances();
    foreach (ManagementObject mo in moc)
    {
        foreach (ManagementObject b in mo.GetRelated("Win32_DiskPartition"))
        {
            foreach (ManagementBaseObject c in b.GetRelated("Win32_LogicalDisk"))
            {
                string DevName = string.Format("{0}", c["Name"]);
                if (DevName[0] == DriveLetter)
                    return string.Format("{0}", mo["DeviceId"]);
            }
        }
    }
    return "";
}
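For example, a rough usage sketch that reuses the CreateFile declaration and constants from the answer above ("disk.img" is just a placeholder output path):

string devicePath = GetPhysicalDevicePath('F');   // e.g. "\\.\PHYSICALDRIVE2"
if (string.IsNullOrEmpty(devicePath))
    throw new IOException("Could not map the drive letter to a physical drive.");

using (SafeFileHandle device = CreateFile(devicePath, FileAccess.Read,
    FileShare.Write | FileShare.Read | FileShare.Delete, IntPtr.Zero,
    FileMode.Open, FILE_ATTRIBUTE_SYSTEM | FILE_FLAG_SEQUENTIAL_SCAN, IntPtr.Zero))
using (FileStream src = new FileStream(device, FileAccess.Read))
using (FileStream dest = File.Open("disk.img", FileMode.Create))
{
    src.CopyTo(dest);   // images the whole physical disk, not just one partition
}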
I'm writing an app to get a better understanding of DKIM. The spec says I retrieve an "ASN.1 DER-encoded" public key from the domain's TXT record. I can see the key on "s1024._domainkey.yahoo.com" = "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDrEee0Ri4Juz+QfiWYui/E9UGSXau/2P8LjnTD8V4Unn+2FAZVGE3kL23bzeoULYv4PeleB3gfm".
How can I use this key from .net? The examples I've seen get the key from a X509Certificate2, or an XML file containing the RSAParameters.
CORRECTION: I copy/pasted the key above from the network-tools.com DNS tool, which must've cut it short. nslookup gives me the full key:
s1024._domainkey.yahoo.com text =
"k=rsa; t=y; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDrEee0Ri4Juz+QfiWYui/E9UGSXau2P8LjnTD8V4Unn+2FAZVGE3kL23bzeoULYv4PeleB3gfm"
"JiDJOKU3Ns5L4KJAUUHjFwDebt0NP+sBK0VKeTATL2Yr/S3bTxhy+1xtj4RkdV7fVxTn56Lb4udUnwuxK4V5b5PdOKj+XcwIDAQAB; n=A 1024 bit key;"
So abelenky was on the right track with BASE64..
This is the base64-encoding of the DER-encoding of an ASN.1 PublicKeyInfo containing an RSA public key.
Here is a translation:
0 30 159: SEQUENCE {
3 30 13: SEQUENCE {
5 06 9: OBJECT IDENTIFIER '1 2 840 113549 1 1 1'
16 05 0: NULL
: }
18 03 141: BIT STRING 0 unused bits, encapsulates {
22 30 137: SEQUENCE {
25 02 129: INTEGER
: 00 EB 11 E7 B4 46 2E 09 BB 3F 90 7E 25 98 BA 2F
: C4 F5 41 92 5D AB BF D8 FF 0B 8E 74 C3 F1 5E 14
: 9E 7F B6 14 06 55 18 4D E4 2F 6D DB CD EA 14 2D
: 8B F8 3D E9 5E 07 78 1F 98 98 83 24 E2 94 DC DB
: 39 2F 82 89 01 45 07 8C 5C 03 79 BB 74 34 FF AC
: 04 AD 15 29 E4 C0 4C BD 98 AF F4 B7 6D 3F F1 87
: 2F B5 C6 D8 F8 46 47 55 ED F5 71 4E 7E 7A 2D BE
: 2E 75 49 F0 BB 12 B8 57 96 F9 3D D3 8A 8F FF 97
: 73
157 02 3: INTEGER 65537
: }
: }
: }
The OBJECT IDENTIFIER indicates that the following BIT STRING contains the encoding of an RSAPublicKey. The INTEGERs are the modulus and the public exponent.
You can decode the base64 with Convert.FromBase64String, but I don't think .NET has built-in functionality for parsing PublicKeyInfos, so you need to use a 3rd party tool like BouncyCastle.
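A short sketch of that with BouncyCastle (publicKeyBase64 is assumed to hold the reassembled p= value from the TXT record):

using System;
using System.Security.Cryptography;
using Org.BouncyCastle.Crypto.Parameters;
using Org.BouncyCastle.Security;

string publicKeyBase64 = "MIGfMA0G...";   // the reassembled p= value from the DNS record
byte[] der = Convert.FromBase64String(publicKeyBase64);
var rsaKey = (RsaKeyParameters)PublicKeyFactory.CreateKey(der);

// Hand the modulus/exponent to the built-in RSA implementation if needed.
var rsaParams = new RSAParameters
{
    Modulus  = rsaKey.Modulus.ToByteArrayUnsigned(),
    Exponent = rsaKey.Exponent.ToByteArrayUnsigned()
};
using (var rsa = new RSACryptoServiceProvider())
{
    rsa.ImportParameters(rsaParams);
    // rsa.VerifyData(...) can now be used to check the DKIM signature
}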
For anyone interested in this matter, I would suggest System.Security.Cryptography.X509Certificates.PublicKey, which can be used to read a DER-encoded public key.
That string looks like it's some sort of base-64 encoding.
If you convert that string from base-64 to a BLOB, it should then be in valid ASN.1 format.
Try the BouncyCastle library; it provides great functionality for such cases.