Encryption/Encoding Algorithm - C#

I have an unencrypted/unencoded string - "565040574". I also have the encrypted/encoded string for this string - "BSubW2AUWrSCL7dk9ucoiA==".
It looks like this string has been Base64ed after encryption, but I don't know which encryption algorithm has been used. If I convert "BSubW2AUWrSCL7dk9ucoiA==" string to bytes using Convert.FromBase64String("BSubW2AUWrSCL7dk9ucoiA=="), I get 16 bytes.
Is there any way to determine what type of encryption was used to turn "565040574" into "BSubW2AUWrSCL7dk9ucoiA=="?

No, there is nothing to tell you how it was encrypted. If you don't have the key to decrypt it then you will be out of luck anyway.
If the plan was to save this to a file or send it in email then it would be base-64 encoded, so that was a good guess.
You may be able to narrow down what it is not, for example from the fact that there are probably 7 bytes of padding, but whether it was IDEA, Blowfish, or AES, there is no way to know.

Looking at it, off the top of my head I would say AES, and more specifically Rijndael.
EDIT:
Just to add, as I said in my comment, without the key you will never know what this is. I am making a best guess, based on implementations that could be termed "more common", which could also be a complete oversight on my part.
Remember that if you can ever outright say what algorithm a ciphertext is in, never, ever use that algorithm.

What can you tell from the data you have? Well, the most concrete bit of information you have is that 9 bytes of cleartext encrypts to 16 bytes of ciphertext. Since it is unlikely that a data compression algorithm is being used on such a small chunk of data, this means we can make an educated guess that:
It is encrypted with a block cipher, with a block size <= 128 bits.
The encryption mode is ECB, since there is no room for an IV.
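To make that reasoning concrete, here is a minimal C# sketch of the same inspection; the conclusions in the comments are the same educated guesses made above, not a definitive identification.

using System;

class CiphertextInspection
{
    static void Main()
    {
        // Decode the Base64 string exactly as described in the question.
        byte[] cipherBytes = Convert.FromBase64String("BSubW2AUWrSCL7dk9ucoiA==");

        // Prints 16: a single 16-byte block for 9 bytes of plaintext.
        Console.WriteLine(cipherBytes.Length);

        // A lone 128-bit block with no room left for an IV suggests a block
        // cipher with a block size of at most 128 bits, probably in ECB mode,
        // with around 7 bytes of padding. It cannot tell you which cipher.
    }
}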


Shortening DataProtection encryption length in .NET?

I'm quite new to .NET and had a question regarding DataProtector.
When using DataProtector.Protect without any configuration, the resulting output is too long for the API I need to pass it to, so I was wondering whether the configuration methods (as seen here) would help. I tried the following in the class where I need to protect the data:
var serviceCollection = new ServiceCollection();
serviceCollection.AddDataProtection()
    .UseCustomCryptographicAlgorithms(new ManagedAuthenticatedEncryptionSettings()
    {
        // a type that subclasses SymmetricAlgorithm
        EncryptionAlgorithmType = typeof(Aes),
        // specified in bits
        EncryptionAlgorithmKeySize = 128,
        // a type that subclasses KeyedHashAlgorithm
        ValidationAlgorithmType = typeof(HMACSHA256)
    });
var services = serviceCollection.BuildServiceProvider();
_protector = services.GetDataProtector("MyClass.v1");
var protect = _protector.Protect(JsonConvert.SerializeObject(myData));
However, even after changing EncryptionAlgorithmKeySize from the default 256 to the minimum 128, protect was still the same length, which makes me think that either the configuration isn't being applied or the key size doesn't affect the output length.
Does anyone know if this is being done the right way or if there is a better way to reduce encryption length?
For example a simple 9 character string gets encrypted to 134 characters.
Any help is much appreciated, thanks!
DPAPI is meant to secure data-at-rest, not data for transmission.
Ryan Dobbs is correct above (or below? I can't figure out how Stack Overflow sorts unaccepted answers...): weakening your encryption to attain a smaller payload is a very bad idea. The right way to address this is to secure the connection (SSL/TLS); then you can just send things in plaintext, or (as Ryan suggests) drop a properly encrypted payload somewhere that both sender and receiver can access it.
But to answer your question more directly, the payload size is driven by the hashing function, not the key size. The encryption key size only tells you the cryptographic strength of the encryption algorithm -- how hard the encryption is to break. The part that says HMACSHA256 is an HMAC built on SHA-256, which means it produces a 256-bit (32-byte) output.
MD5 is 128-bit but it's generally insecure (only good for checksums).
The documentation says the key size and hash size must be equivalent, so you can't go down to 128 bits with SHA. The shortest SHA available is the old SHA1 algorithm (HMACSHA1), which is 160 bits, but the expectation is that anything less than 256 bits will be insecure relatively soon. The SHA-2 family gives you HMACSHA256 and HMACSHA512.
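To illustrate the point about the hash driving the output size, here is a small hedged sketch: an HMAC-SHA256 tag is 32 bytes no matter how short the input is, and Base64 inflates everything by roughly a third on top of that. The protected payload also carries other data such as a key identifier, so this is only one contributor to the final length.

using System;
using System.Security.Cryptography;
using System.Text;

class HmacSizeDemo
{
    static void Main()
    {
        byte[] message = Encoding.UTF8.GetBytes("123456789"); // a 9-character input

        // The parameterless constructor generates a random key; the tag is
        // always 32 bytes for HMAC-SHA256 regardless of input or key length.
        using (var hmac = new HMACSHA256())
        {
            byte[] tag = hmac.ComputeHash(message);
            Console.WriteLine(tag.Length);                         // 32
            Console.WriteLine(Convert.ToBase64String(tag).Length); // 44 after Base64
        }
    }
}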

Decrypting message hashed with SHA256

I was given a 16-byte key (used to encrypt a message with RC4). The first 8 bytes are unknown to me. I know that the key was created by hashing a message with SHA256(secret) and taking the first 16 characters of the string obtained from this hashing function. Unfortunately I don't see a way to get the first 8 bytes of this key. As far as I know, SHA256 is a one-way hashing function (we can't reverse it). So how can I use half of the key to get the whole thing? I would be grateful for any advice.
You answered your own question. The point of a hash is that it's very hard to get the original value, and that the hash changes completely when even a single bit is different.
The 8 bytes you're looking for could be anything, dependent solely on the original value that was being hashed. If you don't know the original value, there is no way to determine what the first 8 bytes of the hash are.
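For completeness, here is a hedged sketch of the derivation the question seems to describe, assuming "first 16 characters" means the first 16 bytes of the raw SHA-256 digest; the secret string and encoding below are placeholders. It only works if you already know the original secret, which is exactly the answer's point: without it, the missing 8 bytes cannot be recovered.

using System;
using System.Security.Cryptography;
using System.Text;

class KeyDerivationSketch
{
    // Hypothetical reconstruction: SHA-256 of the secret, truncated to 16 bytes.
    static byte[] DeriveRc4Key(string secret)
    {
        using (var sha256 = SHA256.Create())
        {
            byte[] digest = sha256.ComputeHash(Encoding.UTF8.GetBytes(secret));
            byte[] key = new byte[16];
            Array.Copy(digest, key, 16);
            return key;
        }
    }

    static void Main()
    {
        byte[] key = DeriveRc4Key("example-secret"); // placeholder secret
        Console.WriteLine(BitConverter.ToString(key));
    }
}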

clarification for crypt SHA-512 algorithm (C#)

EDIT: Sorry, I forgot to mention that I'm not using the built-in SHA512 implementation because, as far as I can tell, it doesn't involve a salt value or a specified number of rounds to compute the hash with.
Okay, so I'm coding the SHA-512 crypt in C# and I'm following the steps found here...
http://people.redhat.com/drepper/SHA-crypt.txt
This is my first time doing anything encryption related, so I want to make sure I'm understanding the steps correctly... I don't understand C code well enough to do a direct translation from C to C# :/
I have assumed that finishing a digest is the same as computing the hash. In this case, I've also assumed that when the steps refer to a finished digest, they are referring to the computed hash, rather than the pre-hash computed digest bytes. Correct me if I'm wrong please!
Assuming everything has been done correctly for steps 1-8, my doubts start at step 9
9. For each block of 32 or 64 bytes in the password string (excluding
the terminating NUL in the C representation), add digest B to digest A
Since I'm using SHA-512, I have block sizes of 64 bytes.
Would the following code produce the desired result?
// FYI, temp = digestA from steps 1-3 (before expanding digestA for step 9)
// alt_result = computed digestB hash (64 byte hash)
for (cnt = key.Length; cnt > 64; cnt -= 64) // 9
{
    int i = 0;
    ctx.TransformBlock(alt_result, 0, 64, digestA, temp.Length + 64 * i);
    i++;
}
If anyone can clarify that what I've stated is correct, I would appreciate it. Thanks!
Salting is as simple as appending a fixed byte string on the end of your input string. Essentially providing a known "homegrown" transform to your input.
About the algorithm itself: you seem to be starting at a disadvantage. As a neophyte, you're making a lot of "assumptions" about basic crypto terminology that themselves need clarification. If the CLR implementation won't work for you, I think your time would be better spent finding a good C implementation and figuring out how to integrate with that. Figuring out the interop (extern) calls will be far easier than diving into the intricacies of crypto, the results will be more efficient, and the knowledge you gain about native interop will be far more useful/reusable.
I'll add some important clarification for others who might come across this later.
First:
SHA512 and SHA512Crypt are two distinct algorithms for two different purposes. SHA512 is a general purpose hashing algorithm (see this). SHA512Crypt is a password storage or password based key derivation algorithm that uses SHA512 (hash) internally (see this). SHA512Crypt is based on the earlier Crypt function that used MD5 instead of SHA512.
The password storage/key generation algorithms have been specifically created to make brute forcing orders of magnitude more expensive. The typical way this is done is by iterating over the underlying hash algorithm in some fashion. However, you don't want to do this yourself... which brings us to...
Second:
Do NOT write your own cryptography methods. (see this) There are tons of ways to screw it up, even if you know exactly what you are doing.
If you don't want to use the built-in Rfc2898DeriveBytes because it is based on SHA1, then you could look at bcrypt or some other public, reviewed implementation of a known cryptographic algorithm.
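For reference, a minimal sketch of the built-in Rfc2898DeriveBytes (PBKDF2) mentioned above; on this API the underlying PRF is HMAC-SHA1, which is the caveat the answer refers to, and the salt size, iteration count, and output length below are illustrative only.

using System;
using System.Security.Cryptography;

class Pbkdf2Sketch
{
    static void Main()
    {
        // Random per-password salt (illustrative size).
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }

        // PBKDF2 with HMAC-SHA1 by default; iteration count is illustrative.
        using (var kdf = new Rfc2898DeriveBytes("p@ssw0rd", salt, 100000))
        {
            byte[] derived = kdf.GetBytes(32); // e.g. a stored verifier or a key
            Console.WriteLine(Convert.ToBase64String(derived));
        }
    }
}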

RSA private key encryption

Is there any way to perform private key encryption in C#?
I know about the standard RSACryptoServiceProvider in System.Security.Cryptography, but these classes provide only public key encryption and private key decryption. They also provide digital signature functionality, which internally uses private key encryption, but there aren't any publicly accessible functions to perform private key encryption and public key decryption.
I've found this article on CodeProject, which is a very good starting point for performing this kind of encryption; however, I was looking for some ready-to-use code, as the code in the article can hardly encrypt arbitrary-length byte arrays containing arbitrary values (that means any values, including zeroes).
Do you know some good components (preferably free) to perform private key encryption?
I use .NET 3.5.
Note: I know this is generally considered a bad way of using asymmetric encryption (encrypting with the private key and decrypting with the public key), but I just need to use it that way.
Additional Explanation
Consider you have
var bytes = new byte[30] { /* ... */ };
and you want to use 2048-bit RSA to ensure no one has changed anything in this array.
Normally, you would use a digital signature (i.e. over a RIPEMD160 hash), which you then attach to the original bytes and send to the receiver.
So, you have 30 bytes of original data plus an additional 256 bytes of digital signature (because it is 2048-bit RSA), which is 286 bytes overall. However, only 160 bits of those 256 bytes are actually the hash, so there are exactly 1888 bits (236 bytes) unused.
So, my idea was this:
Take the 30 bytes of original data, attach the hash to it (20 bytes), and now encrypt these 50 bytes. You get a 256-byte message, which is much shorter than 286 bytes, because "you were able to push the actual data inside the digital signature".
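As a point of reference for the sizes being discussed, here is a small sketch showing that a 2048-bit RSA signature is 256 bytes no matter how small the signed data is. SHA1 is used here only because RIPEMD160 support varies by provider; all values are illustrative.

using System;
using System.Security.Cryptography;

class RsaSignatureSizeSketch
{
    static void Main()
    {
        byte[] data = new byte[30]; // 30 bytes of data; contents irrelevant here

        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            // The signature is always as long as the modulus: 256 bytes for 2048-bit RSA.
            byte[] signature = rsa.SignData(data, "SHA1");
            Console.WriteLine(data.Length + " + " + signature.Length + " = "
                + (data.Length + signature.Length) + " bytes total");
        }
    }
}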
ECDSA Resources
MSDN
Eggheadcafe.com
c-plusplus.de
MSDN Blog
Wiki
DSA Resources
CodeProject
MSDN 1
MSDN 2
MSDN 3
Final Solution
If anyone is interested in how I solved this problem: I'm going to use 1024-bit DSA and SHA1, which is widely supported on many different versions of Windows (Windows 2000 and newer), the security is good enough (I'm not signing orders, I just need to ensure that some child can't crack the signature on his iPhone :-D), and the signature is only 40 bytes long.
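A minimal sketch of that final approach, assuming the plain DSACryptoServiceProvider that ships with the framework; it signs with SHA-1 internally and the signature is 40 bytes (the 20-byte r and s values concatenated).

using System;
using System.Security.Cryptography;
using System.Text;

class DsaSignatureSketch
{
    static void Main()
    {
        byte[] data = Encoding.UTF8.GetBytes("some small payload");

        using (var dsa = new DSACryptoServiceProvider(1024))
        {
            byte[] signature = dsa.SignData(data);      // 40 bytes, SHA-1 based
            Console.WriteLine(signature.Length);

            bool ok = dsa.VerifyData(data, signature);  // verification uses the public part
            Console.WriteLine(ok);
        }
    }
}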
What you are trying to design is known as a "Signature scheme with message recovery".
Designing a new signature scheme is hard. Designing a new signature scheme with message recovery is harder. I don't know all the details about your design, but there is a good chance that it is susceptible to a chosen message attack.
One proposal for signature schemes with message recovery is RSA PSS-R. Unfortunately, this proposal is covered by a patent.
The IEEE P1363 standardization group once discussed the addition of signature schemes with message recovery. I'm not sure about the current state of this effort, but it might be worth checking out.
Your public key is a subset of your private key. You can use your private key as a public key, as it will only use the components of the full key that it requires.
In .NET both your private & public keys are stored in the RSAParameters struct. The struct contains fields for:
D
DP
DQ
Exponent
InverseQ
Modulus
P
Q
If you're at the point where the data is so small that the digital signature is huge in comparison, then you have excess signature. The solution isn't to roll your own algorithm, but to cut down what's there. You definitely don't want to try to combine a key with the hash in an amateurish way: this has been broken already, which is why we have HMACs.
So here's the basic idea:
Create a session key using a cryptographically strong RNG.
Transmit it via PKE.
Use the session key to generate an HMAC-SHA1 (or HMAC-RIPEMD160, or whatever).
If the size of the hash is absurdly large for the given data, cut it in half by XORing the top with the bottom. Repeat as needed.
Send the data and the (possibly cut-down) hash.
The receiver uses the data and the session key to regenerate the hash and then compares it with the one transmitted (possibly after first cutting it down.)
Change session keys often.
This is a compromise between the insanity of rolling your own system and using an ill-fitting one.
I'm wide open to constructive criticism...
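A rough sketch of the HMAC-and-fold idea from the list above, assuming the session key has already been exchanged via public-key encryption; whether truncating a MAC this way is acceptable depends entirely on your threat model.

using System;
using System.Security.Cryptography;
using System.Text;

class TruncatedHmacSketch
{
    // XOR the top half of the MAC with the bottom half, as suggested above.
    static byte[] FoldInHalf(byte[] mac)
    {
        int half = mac.Length / 2;
        byte[] folded = new byte[half];
        for (int i = 0; i < half; i++)
        {
            folded[i] = (byte)(mac[i] ^ mac[i + half]);
        }
        return folded;
    }

    static void Main()
    {
        // Session key from a cryptographically strong RNG.
        byte[] sessionKey = new byte[20];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(sessionKey);
        }

        byte[] data = Encoding.UTF8.GetBytes("small message");

        using (var hmac = new HMACSHA1(sessionKey))
        {
            byte[] mac = hmac.ComputeHash(data); // 20 bytes
            byte[] shortMac = FoldInHalf(mac);   // 10 bytes
            Console.WriteLine(mac.Length + " -> " + shortMac.Length);
        }
    }
}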
I get it now, after reading the comments.
The answer is: don't do it.
Cryptographic signature algorithms are not algorithms from which you can pick and choose - or modify - steps. In particular, supposing a signature sig looks something like encrypt(hash), orig + sig is not the same as encrypt(orig + hash). Further, even outdated signature algorithms like PKCS v1.5 are not as simple as encrypt(hash) in the first place.
A technique like the one you describe sacrifices security for the sake of cleverness. If you don't have the bandwidth for a 256 byte signature, then you need one of:
a different algorithm,
more bandwidth, or
a smaller key.
And if you go with (1), please be sure it's not an algorithm you made up! The simple fact is that crypto is hard.

AES output, is it smaller than input?

I want to encrypt a string and embed it in a URL, so I want to make sure the encrypted output isn't bigger than the input.
Is AES the way to go?
It's impossible to create any algorithm which will always create a smaller output than the input, but can reverse any output back to the input. If you allow "no bigger than the input" then basically you're just talking about isomorphic algorithms, where the output is always the same size as the input. This is due to the pigeonhole principle.
Added to that, encryption usually has a little bit of padding (e.g. "to the nearest 8 bytes, rounded up" - in AES, that's 16 bytes). Oh, and on top of that you've got the issue of converting between text and binary. Encryption algorithms usually work in binary, but URLs are in text. Even if you assume ASCII, you could end up with an encrypted binary value which isn't ASCII. The simplest way of representing arbitrary binary data in text is to use base64. There are other alternatives which would be highly fiddly, but the general "convert text to binary, encrypt, convert binary to text" pattern is the simplest one.
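Here is a hedged sketch of that "text to binary, encrypt, binary to text" pattern in C#; the key and IV handling is deliberately simplistic (real code needs proper key management and a way to transmit or derive the IV), and Uri.EscapeDataString is used because plain Base64 contains characters that are not URL-safe.

using System;
using System.Security.Cryptography;
using System.Text;

class UrlSafeEncryptionSketch
{
    static string EncryptForUrl(string plaintext, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            aes.IV = iv;
            using (var encryptor = aes.CreateEncryptor())
            {
                // Text -> bytes, encrypt (output is padded up to a 16-byte multiple),
                // then bytes -> text so the result can live in a URL.
                byte[] input = Encoding.UTF8.GetBytes(plaintext);
                byte[] cipher = encryptor.TransformFinalBlock(input, 0, input.Length);
                return Uri.EscapeDataString(Convert.ToBase64String(cipher));
            }
        }
    }

    static void Main()
    {
        byte[] key = new byte[16];
        byte[] iv = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(key);
            rng.GetBytes(iv);
        }

        Console.WriteLine(EncryptForUrl("565040574", key, iv));
    }
}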
Simple answer is no.
Any symmetric encryption algorithm (AES included) will produce output that is at minimum the same size as the input, and often slightly larger. As Jon Skeet points out, that is usually because of padding or alignment.
Of course you could compress your string using zlib and encrypt but you'd need to decompress after decrypting.
Disclaimer - compressing the string with zlib will not guarantee it comes out smaller though
What matters is not really the cipher that you use, but the encryption mode that you use. For example, the CTR mode has no length expansion, but every encryption needs a new, distinct starting point for the counter. Other modes like OFB, CFB (or CBC with ciphertext stealing) also don't need to be padded to a multiple of the block length of the cipher, but they need an IV. It is unclear from your question whether there is some information available from which an IV could be derived pseudorandomly, and whether any of these modes would be appropriate. It is also unclear whether you need authentication, or whether you need semantic security, i.e. is it a problem if you encrypt the same string twice and get the same ciphertext twice?
If we are talking about symmetric encryption, where the original string must be recoverable from the ciphertext, then it is not possible. I think that unless you use hashes (SHA1, SHA256...) you will never obtain a ciphered string smaller than the original text. The problem with hashes is that they are not a solution for retrieving the original string, because they are one-way algorithms.
When using AES, the output data will be rounded up to a specific length (e.g. a length divisible by 16).
If you want to transfer secret data to another website, an HTTP POST may do better than embedding the data in the URL.
Also just another thing to clarify:
Not only is it true that symmetric encryption algorithms produce an output that is at least as large as the input, the same is true of asymmetric encryption.
"Asymmetric encryption" and "cryptographic hashes" are two different things.
Asymmetric encryption (e.g. RSA) means that given the output (i.e. the ciphertext), you can get the input (i.e. the plaintext) back if you have the right key, it's just that decrypting requires a different key than the key used for encrypting. For asymmetric encryption, the same "pigeonhole principle" argument applies.
Cryptographic hashes (e.g. SHA-1) mean that given the output (i.e. the hash) you can't get the input back, and you can't even find a different input that hashes to the same value (assuming the hash is secure). For cryptographic hashes, the hash can be shorter than the input. (In fact, the hash is the same size regardless of the length of the input.)
And also one more thing: In any secure encryption system the ciphertext will be longer than the plaintext. This is because there are multiple possible ciphertexts that any given plaintext could encrypt to (e.g. using different IVs.) If this were not the case then the cipher would leak information because if two identical plaintexts were encrypted, they would encrypt to identical ciphertexts, and an adversary would then know that the plaintexts were the same.
