Searched all over on this. When using the Rust openssl crate, is it possible to return the encrypted Vec<u8> as a UTF-8 string, or does the PKCS1 padding prevent this entirely? I would be sending the encrypted data back to the user from a .NET/C# web API in an HTTP call, so a string would be preferable.
#[no_mangle]
pub extern "C" fn rsa_encrypt(public_key: *const c_char, data_to_encrypt: *const c_char) -> *mut c_char {
    let public_key_string = unsafe {
        assert!(!public_key.is_null());
        CStr::from_ptr(public_key)
    }.to_str().unwrap();
    let data_to_encrypt_string = unsafe {
        assert!(!data_to_encrypt.is_null());
        CStr::from_ptr(data_to_encrypt)
    }.to_str().unwrap();
    let rsa = Rsa::public_key_from_pem(public_key_string.as_bytes()).unwrap();
    let mut buf: Vec<u8> = vec![0; rsa.size() as usize];
    rsa.public_encrypt(data_to_encrypt_string.as_bytes(), &mut buf, Padding::PKCS1).unwrap();
    // This is the failing step: the encrypted bytes are arbitrary binary data and are
    // almost never valid UTF-8, so String::from_utf8(buf).unwrap() panics here.
    return CString::new(String::from_utf8(buf).unwrap()).unwrap().into_raw()
}
I also tried exporting a Vec instead, but that doesn't make it any easier to turn the data into a string on the C# side.
I was able to resolve this with the base64 encoding you suggested, which is what I suspected would work. This works over FFI from C# calling into Rust, and a string is how I wanted the data sent back to the user calling the API. (Note that this version uses the pure-Rust rsa crate rather than openssl.)
#[no_mangle]
pub extern "C" fn rsa_encrypt(pub_key: *const c_char, data_to_encrypt: *const c_char) -> *mut c_char {
    let pub_key_string = unsafe {
        assert!(!pub_key.is_null());
        CStr::from_ptr(pub_key)
    }.to_str().unwrap();
    let data_to_encrypt_bytes = unsafe {
        assert!(!data_to_encrypt.is_null());
        CStr::from_ptr(data_to_encrypt)
    }.to_str().unwrap().as_bytes();
    let public_key = RsaPublicKey::from_pkcs1_pem(pub_key_string).unwrap();
    let mut rng = rand::thread_rng();
    let encrypted_bytes = public_key.encrypt(&mut rng, PaddingScheme::new_pkcs1v15_encrypt(), data_to_encrypt_bytes).unwrap();
    return CString::new(base64::encode(encrypted_bytes)).unwrap().into_raw();
}
#[no_mangle]
pub extern "C" fn rsa_decrypt(priv_key: *const c_char, data_to_decrypt: *const c_char) -> *mut c_char {
    let priv_key_string = unsafe {
        assert!(!priv_key.is_null());
        CStr::from_ptr(priv_key)
    }.to_str().unwrap();
    let data_to_decrypt_string = unsafe {
        assert!(!data_to_decrypt.is_null());
        CStr::from_ptr(data_to_decrypt)
    }.to_str().unwrap();
    let data_to_decrypt_bytes = base64::decode(data_to_decrypt_string).unwrap();
    let private_key = RsaPrivateKey::from_pkcs8_pem(priv_key_string).unwrap();
    let decrypted_bytes = private_key.decrypt(PaddingScheme::new_pkcs1v15_encrypt(), &data_to_decrypt_bytes).expect("failed to decrypt");
    return CString::new(decrypted_bytes).unwrap().into_raw()
}
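For completeness, here is roughly what the consuming side can look like from C#. This is only a minimal sketch: it assumes the Rust crate is built as a cdylib whose library name is rust_rsa (a placeholder), and it ignores the fact that the Rust functions hand ownership of the returned CString to the caller, so a matching free function should really be exported and called to avoid leaking memory.
using System;
using System.Runtime.InteropServices;

static class RsaNative
{
    // "rust_rsa" is a placeholder; use the actual name of the compiled cdylib/DLL.
    [DllImport("rust_rsa", CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr rsa_encrypt(string pub_key, string data_to_encrypt);

    [DllImport("rust_rsa", CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr rsa_decrypt(string priv_key, string data_to_decrypt);

    // PtrToStringAnsi is fine for the Base64 ciphertext; for decrypted text containing
    // non-ASCII characters you would want to copy the bytes and decode them as UTF-8 instead.
    public static string Encrypt(string publicKeyPem, string plaintext) =>
        Marshal.PtrToStringAnsi(rsa_encrypt(publicKeyPem, plaintext));

    public static string Decrypt(string privateKeyPem, string base64Ciphertext) =>
        Marshal.PtrToStringAnsi(rsa_decrypt(privateKeyPem, base64Ciphertext));
}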
Scenario
I've written a simple keylogger using the modern RawInput technique registering the desired device for event/data interception.
About Raw Input
Then, I'm using basically all these Windows API member definitions:
Raw Input Functions
Raw Input Structures
Problem/Question
I'm using a non-English keyboard with a non-English OS, and my problem begins when I try to parse a special key of this keyboard, such as the ñ/Ñ character, which is recognized as a System.Windows.Forms.Keys.OemTilde key,
or the ç/Ç character, which is recognized as a System.Windows.Forms.Keys.OemQuestion key.
I would like to make my keylogger language-aware (or at least able to recognize characters properly for my current culture, es-ES), but I'm stuck because I lack the knowledge to start retrieving those characters properly.
Please note that my intention is to learn how to do this in an efficient/automated way, like the OS does with my keyboard: when I press the Ñ key it types that Ñ. What I mean is that I'm fully aware of solutions that involve manually mapping special characters, for example:
Select Case MyKey
    Case Keys.OemTilde
        char = "ñ"c
End Select
That is not the behavior I'm looking for. I can understand that maybe I need additional "things" to reproduce proper recognition/translation of those characters for each kind of keyboard, but what "things" do I need?
Research
I'm not sure how to proceed because, as I said, I don't have the knowledge to answer this problem myself (that's why I'm asking), but I imagine that knowledge of the current keyboard layout will be involved; I know that I can retrieve the current keyboard layout with the CultureInfo.CurrentCulture.KeyboardLayoutId property.
I know that the keyboard layout ID for culture en-US is 1033, and for culture es-ES it is 3082.
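Just to illustrate that property (this is only a quick check, not a solution to the mapping problem):
using System;
using System.Globalization;

class LayoutCheck
{
    static void Main()
    {
        // Prints the keyboard layout id associated with the current culture,
        // e.g. 1033 for en-US or 3082 for es-ES.
        Console.WriteLine(CultureInfo.CurrentCulture.KeyboardLayoutId);
    }
}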
Also, note the documentation of the MakeCode member of the RAWKEYBOARD structure; maybe it is a hint for what I intend to do, I don't know:
MakeCode
Type: USHORT
The scan code from the key depression.
The scan code for keyboard overrun is KEYBOARD_OVERRUN_MAKE_CODE.
but actually that is just guesswork on my part.
Here is the code I found.
The correct solution is the ToUnicode WinAPI function:
[DllImport("user32.dll")]
public static extern int ToUnicode(uint virtualKeyCode, uint scanCode,
byte[] keyboardState,
[Out, MarshalAs(UnmanagedType.LPWStr, SizeConst = 64)]
StringBuilder receivingBuffer,
int bufferSize, uint flags);
static string GetCharsFromKeys(Keys keys, bool shift, bool altGr)
{
    var buf = new StringBuilder(256);
    var keyboardState = new byte[256];
    if (shift)
        keyboardState[(int) Keys.ShiftKey] = 0xff;
    if (altGr)
    {
        keyboardState[(int) Keys.ControlKey] = 0xff;
        keyboardState[(int) Keys.Menu] = 0xff;
    }
    WinAPI.ToUnicode((uint) keys, 0, keyboardState, buf, 256, 0);
    return buf.ToString();
}
Console.WriteLine(GetCharsFromKeys(Keys.E, false, false)); // prints e
Console.WriteLine(GetCharsFromKeys(Keys.E, true, false)); // prints E
// Assuming British keyboard layout:
Console.WriteLine(GetCharsFromKeys(Keys.E, false, true)); // prints é
Console.WriteLine(GetCharsFromKeys(Keys.E, true, true)); // prints É
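In a live keylogger you would normally not synthesize the modifier state by hand as above; one option (an illustrative sketch that is not part of the original answer) is to ask Windows for the real keyboard state with GetKeyboardState and pass that to the same ToUnicode import:
[DllImport("user32.dll")]
static extern bool GetKeyboardState(byte[] lpKeyState);

static string GetCharsFromCurrentState(Keys key)
{
    var buf = new StringBuilder(256);
    var keyboardState = new byte[256];
    // Capture the actual modifier/dead-key state at the moment of the keystroke.
    GetKeyboardState(keyboardState);
    WinAPI.ToUnicode((uint)key, 0, keyboardState, buf, 256, 0);
    return buf.ToString();
}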
I'm developing an online game which has to exchange packets with a C++ server. I'm using the Unity 5 engine. Life started getting hard when I began writing the packet structures in C# using marshaling. Unity has serious bugs here, but OK, for all of the Unity-related bugs I'm used to implementing some sort of workaround; the bug I'm facing right now, though, may be a limitation of the Marshal class itself.
I have this C# struct:
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public struct MSG_SendNotice : IGamePacket
{
    public const PacketFlag Opcode = (PacketFlag)1 | PacketFlag.Game2Client;

    private PacketHeader m_Header;

    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 96)]
    public string Text;

    public PacketHeader Header { get { return m_Header; } }
}
It should work fine when calling Marshal.PtrToStructure. The problem is when some non-ASCII character is sent in Text: the marshaler fails the conversion and assigns null to Text. If I manually change the non-ASCII character to any ASCII character before converting the packet buffer, the marshaling works. The point is that I can't reformat all the packets server-side to avoid sending these non-ASCII characters; I actually need them to display the correct string. Is there a way to set the encoding of this marshaled string (Text) in its definition?
Any ideas are appreciated, thanks very much.
I would encode/decode the string manually:
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 96)]
public byte[] m_Text;

public string Text
{
    get
    {
        return m_Text != null ? Encoding.UTF8.GetString(m_Text).TrimEnd('\0') : string.Empty;
    }
    set
    {
        m_Text = Encoding.UTF8.GetBytes(value ?? string.Empty);
        Array.Resize(ref m_Text, 96);
        m_Text[95] = 0;
    }
}
Note that on set I'm manually adding a final 0 terminator (m_Text[95] = 0) to be sure the string will be a correctly-terminated C string.
I've done a test: it works even in the case that m_Text is null.
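For what it's worth, a tiny usage sketch under the same assumptions (the byte-backed field and the Text property live inside MSG_SendNotice as declared above; the string value is only illustrative):
var packet = new MSG_SendNotice();

// The setter UTF-8 encodes, pads/truncates to 96 bytes and NUL-terminates.
packet.Text = "Bienvenido, señor";

// After Marshal.PtrToStructure has filled m_Text from the native buffer,
// the getter decodes it back and strips the trailing NULs.
Console.WriteLine(packet.Text); // Bienvenido, señor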
I'm working with the Google DoubleClick ad exchange API. Their examples are in C++ and, well, I'm pretty awful at C++. I'm trying to convert this to C# for something I'm working on, and really, I think I just need an explanation of what is actually happening in certain blocks of this code sample. Honestly, I roughly know what should happen overall, but I'm not sure I'm getting it 'right', and with encryption/decryption there is no 'sort of right'.
This is the full example from their API site:
bool DecryptByteArray(
    const string& ciphertext, const string& encryption_key,
    const string& integrity_key, string* cleartext) {
  // Step 1. find the length of initialization vector and clear text.
  const int cleartext_length =
      ciphertext.size() - kInitializationVectorSize - kSignatureSize;
  if (cleartext_length < 0) {
    // The length can't be correct.
    return false;
  }
  string iv(ciphertext, 0, kInitializationVectorSize);

  // Step 2. recover clear text
  cleartext->resize(cleartext_length, '\0');
  const char* ciphertext_begin = string_as_array(ciphertext) + iv.size();
  const char* const ciphertext_end = ciphertext_begin + cleartext->size();
  string::iterator cleartext_begin = cleartext->begin();
  bool add_iv_counter_byte = true;
  while (ciphertext_begin < ciphertext_end) {
    uint32 pad_size = kHashOutputSize;
    uchar encryption_pad[kHashOutputSize];
    if (!HMAC(EVP_sha1(), string_as_array(encryption_key),
              encryption_key.length(), (uchar*)string_as_array(iv),
              iv.size(), encryption_pad, &pad_size)) {
      printf("Error: encryption HMAC failed.\n");
      return false;
    }
    for (int i = 0;
         i < kBlockSize && ciphertext_begin < ciphertext_end;
         ++i, ++cleartext_begin, ++ciphertext_begin) {
      *cleartext_begin = *ciphertext_begin ^ encryption_pad[i];
    }
    if (!add_iv_counter_byte) {
      char& last_byte = *iv.rbegin();
      ++last_byte;
      if (last_byte == '\0') {
        add_iv_counter_byte = true;
      }
    }
    if (add_iv_counter_byte) {
      add_iv_counter_byte = false;
      iv.push_back('\0');
    }
  }
Step 1 is quite obvious. This block is what I am really not sure how to interpret:
if (!HMAC(EVP_sha1(), string_as_array(encryption_key),
          encryption_key.length(), (uchar*)string_as_array(iv),
          iv.size(), encryption_pad, &pad_size)) {
  printf("Error: encryption HMAC failed.\n");
  return false;
}
What exactly is happening in that if body? What would that look like in C#? There are a lot of parameters doing SOMETHING, but it seems like an awful lot crammed into a small spot. Is there some standard-library HMAC class? If I knew more about that, I might better understand what's happening.
The equivalent C# code for that block is:
using (var hmac = new HMACSHA1(encryption_key))
{
    var encryption_pad = hmac.ComputeHash(iv);
}
It's computing the SHA1 HMAC of the initialization vector (IV), using the given encryption key.
The HMAC function here comes from OpenSSL.
Just as a comment, I think it would be easier to implement this from their pseudocode description rather than from their C++ code.
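To make the rest of the loop concrete as well, here is a hedged C# sketch of the whole decryption routine, translated from the C++ sample above. It assumes the usual DoubleClick constants (16-byte IV, 4-byte signature, 20-byte SHA-1 output) and, like the excerpt, it skips the integrity check; treat it as illustrative rather than as a drop-in implementation.
using System;
using System.Collections.Generic;
using System.Security.Cryptography;

static class DoubleClickCrypto
{
    const int InitializationVectorSize = 16; // assumed value of kInitializationVectorSize
    const int SignatureSize = 4;             // assumed value of kSignatureSize
    const int BlockSize = 20;                // SHA-1 output size (kBlockSize / kHashOutputSize)

    public static byte[] Decrypt(byte[] ciphertext, byte[] encryptionKey)
    {
        // Step 1: the cleartext is whatever is left after the IV and the signature.
        int cleartextLength = ciphertext.Length - InitializationVectorSize - SignatureSize;
        if (cleartextLength < 0)
            throw new ArgumentException("Ciphertext is too short.");

        // Working copy of the IV; the loop appends/increments a counter byte on it,
        // exactly as the C++ code does with iv.push_back('\0') and ++last_byte.
        var ivWithCounter = new List<byte>();
        for (int i = 0; i < InitializationVectorSize; i++)
            ivWithCounter.Add(ciphertext[i]);

        var cleartext = new byte[cleartextLength];
        int ciphertextPos = InitializationVectorSize;
        int cleartextPos = 0;
        bool addIvCounterByte = true;

        using (var hmac = new HMACSHA1(encryptionKey))
        {
            while (cleartextPos < cleartextLength)
            {
                // Same role as the HMAC(...) call in the C++ sample: the pad for this
                // block is HMAC-SHA1(encryption_key, iv-with-counter).
                byte[] pad = hmac.ComputeHash(ivWithCounter.ToArray());

                for (int i = 0; i < BlockSize && cleartextPos < cleartextLength; i++, cleartextPos++, ciphertextPos++)
                    cleartext[cleartextPos] = (byte)(ciphertext[ciphertextPos] ^ pad[i]);

                // Reproduce the counter handling on the trailing byte of the IV.
                if (!addIvCounterByte)
                {
                    int last = ivWithCounter.Count - 1;
                    ivWithCounter[last]++;
                    if (ivWithCounter[last] == 0)
                        addIvCounterByte = true;
                }
                if (addIvCounterByte)
                {
                    addIvCounterByte = false;
                    ivWithCounter.Add(0);
                }
            }
        }
        return cleartext;
    }
}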
I have a class in managed C++ that has all its member variables initialized inside the constructor. The member of interest is an array. I am calling it from the .cs file of a C# project, having linked the two projects via the first project's DLL. However, the function reports that one or more of the parameters are incorrect and therefore cannot be used successfully.
The class declaration and the function declaration is as follows. Both are in the .h file.
Now, I would like to call the function in the .cs file as follows:
var Driver = new Driver();
long status = Driver.Config2("CAN0", 8, Driver.AttrIdList, Driver.AttrValueList);
Console.WriteLine(status);
If the function Config executes correctly, it should output 0. However, I am getting a negative number, and looking it up in the table provided by the vendor, it states that one or more of the parameters are not set up correctly. I have no idea how to get past this point since I'm a newbie to managed C++. All help would be greatly appreciated.
Thanks.
The code declaration is as follows:
public ref class Driver
{
public:
    NCTYPE_STATUS Status;
    NCTYPE_OBJH TxHandle;
    MY_NCTYPE_CAN_FRAME Transmit;
    array<NCTYPE_UINT32>^ AttrIdList;
    array<NCTYPE_UINT32>^ AttrValueList;
    array<char>^ Data;
    NCTYPE_UINT32 Baudrate;

public:
    Driver()
    {
        Baudrate = 1000000;
        TxHandle = 0;
        AttrIdList = gcnew array<NCTYPE_UINT32>(8);
        AttrValueList = gcnew array<NCTYPE_UINT32>(8);
        AttrIdList[0] = NC_ATTR_BAUD_RATE;
        AttrValueList[0] = Baudrate;
        AttrIdList[1] = NC_ATTR_START_ON_OPEN;
        AttrValueList[1] = NC_TRUE;
        AttrIdList[2] = NC_ATTR_READ_Q_LEN;
        AttrValueList[2] = 0;
        AttrIdList[3] = NC_ATTR_WRITE_Q_LEN;
        AttrValueList[3] = 1;
        AttrIdList[4] = NC_ATTR_CAN_COMP_STD;
        AttrValueList[4] = 0;
        AttrIdList[5] = NC_ATTR_CAN_MASK_STD;
        AttrValueList[5] = NC_CAN_MASK_STD_DONTCARE;
        AttrIdList[6] = NC_ATTR_CAN_COMP_XTD;
        AttrValueList[6] = 0;
        AttrIdList[7] = NC_ATTR_CAN_MASK_XTD;
        AttrValueList[7] = NC_CAN_MASK_XTD_DONTCARE;
        interior_ptr<NCTYPE_UINT32> pstart (&AttrIdList[0]);
        interior_ptr<NCTYPE_UINT32> pend (&AttrIdList[7]);
        Data = gcnew array<char>(8);
        for (int i = 0; i < 8; i++)
            Data[i] = i * 2;
    }
I also have another method right underneath the Config function that is declared as follows:
NCTYPE_STATUS Config2(String^ objName, int numAttrs, array<unsigned long>^ AttrIdList, array<unsigned long>^ AttrValueList)
{
    msclr::interop::marshal_context^ context = gcnew msclr::interop::marshal_context();
    const char* name = context->marshal_as<const char*>(objName);
    char* name_unconst = const_cast<char*>(name);
    return ncConfig(name_unconst, 8, nullptr, nullptr);
    delete context; // note: never reached because of the return above
}
The program compiles and builds; this is a run-time error. I am guessing it has something to do with the two nullptr arguments passed in the function Config2, but if I replace these with the parameters AttrIdList and AttrValueList, the compiler gives an error:
cannot convert parameter 3 from 'cli::array^' to 'NCTYPE_ATTRID_P'
BTW: NCTYPE_STATUS is unsigned long, while NCTYPE_ATTRID_P is unsigned long*.
cannot convert parameter 3 from 'cli::array^' to 'NCTYPE_ATTRID_P'
NCTYPE_ATTRID_P is unsigned long*
You can't pass a managed array to a pure native C++ function, you first need to 'convert' it to a fixed unsigned long* pointer.
Here's a way to do it:
unsigned long* ManagedArrayToFixedPtr(array<unsigned long>^ input)
{
    pin_ptr<unsigned long> pinned = &input[0];
    unsigned long* bufferPtr = pinned;
    // Copy into an unmanaged buffer (the caller is responsible for delete[]).
    // Note: memcpy_s takes sizes in bytes, not element counts.
    unsigned long* output = new unsigned long[input->Length];
    memcpy_s(output, input->Length * sizeof(unsigned long), bufferPtr, input->Length * sizeof(unsigned long));
    return output;
}
Testing the function:
array<unsigned long>^ larray = gcnew array<unsigned long> {2,4,6,8,10,12,14,16};
unsigned long* lptr = ManagedArrayToFixedPtr(larray); //returns pointer to 2
Edit:
Remember to #include "windows.h" to be able to use the memcpy_s function!
I'm embedding Mono in a Mac OS X app written in Objective-C.
I'm accessing a C# library (DLL) which only contains a bunch of static methods returning different types.
So far I can successfully get returned int, double and string values, but I'm having trouble retrieving a returned array...
For example, here's how I retrieve an int:
MonoDomain *domain = mono_jit_init("TestDomain");
NSBundle* mainBundle = [NSBundle mainBundle];
NSString* dll = [mainBundle pathForResource:@"TestLib86" ofType:@"dll"];
MonoAssembly* assembly = mono_domain_assembly_open(domain, [dll UTF8String]);
MonoImage* image = mono_assembly_get_image(assembly);

// Get INTEGER
// get a method handle to whatever you like
const char* descAsString = "MiniLib86.Show:GetInt()";
MonoMethodDesc* description = mono_method_desc_new(descAsString, TRUE);
MonoMethod* method = mono_method_desc_search_in_image(description, image);

// call it
void* args[0];
MonoObject *result = mono_runtime_invoke(method, NULL, args, NULL);
int int_result = *(int*)mono_object_unbox(result);

// See the result in log
NSLog(@"int result %i", int_result);
The C# method that returns a List<int> looks like this:
public static List<int> GetListInt()
{
    return new System.Collections.Generic.List<int> { 1, 2, 3, 4, 5 };
}
Any help would be really appreciated !
Take a look at mono_runtime_invoke_array.