C# marshaling a non-ASCII string?

I'm developing an online game that has to exchange packets with a C++ server, using the Unity 5 engine. Things got difficult when I started writing the packet structures in C# using marshaling. Unity has serious bugs in this area, and while I'm used to implementing workarounds for Unity-related bugs, the one I'm facing now may actually be a limitation of the Marshal class.
I have this C# struct:
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public struct MSG_SendNotice : IGamePacket
{
    public const PacketFlag Opcode = (PacketFlag)1 | PacketFlag.Game2Client;

    private PacketHeader m_Header;

    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 96)]
    public string Text;

    public PacketHeader Header { get { return m_Header; } }
}
It should work fine when calling Marshal.PtrToStructure. The problem is when a non-ASCII character is sent in Text: the marshaler fails the conversion and assigns null to Text. If I manually change the non-ASCII character to any ASCII character before converting the packet buffer, the marshaling works. The point is that I cannot reformat all the packets server-side to avoid sending these non-ASCII characters; I actually need them to display the correct string. Is there a way to set the encoding of this marshaled string (Text) in its definition?
Any ideas are appreciated, thanks very much.

I would encode/decode the string manually:
[MarshalAs(UnmanagedType.ByValArray, SizeConst = 96)]
public byte[] m_Text;

public string Text
{
    get
    {
        // Decode the fixed-size buffer as UTF-8 and strip the trailing padding zeros.
        return m_Text != null ? Encoding.UTF8.GetString(m_Text).TrimEnd('\0') : string.Empty;
    }
    set
    {
        // Encode as UTF-8 and pad (or truncate) to the fixed 96-byte field.
        m_Text = Encoding.UTF8.GetBytes(value ?? string.Empty);
        Array.Resize(ref m_Text, 96);
        // Force a terminating zero so the C++ side always sees a valid C string.
        m_Text[95] = 0;
    }
}
Note that in the setter I manually add a final zero terminator (m_Text[95] = 0) to be sure the string is a correctly terminated C string.
I've tested it: it works even when m_Text is null.
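For completeness, here is roughly how the struct would be read from a received buffer. This is a minimal sketch, assuming a byte[] packet already received from the server; the ReceivePacket call and the GCHandle-based pinning are mine, not part of the original code:

// Sketch: deserialize a received packet into MSG_SendNotice.
// 'packetBytes' is assumed to be the raw buffer read from the socket.
byte[] packetBytes = ReceivePacket(); // hypothetical receive call
GCHandle handle = GCHandle.Alloc(packetBytes, GCHandleType.Pinned);
try
{
    MSG_SendNotice msg = (MSG_SendNotice)Marshal.PtrToStructure(
        handle.AddrOfPinnedObject(), typeof(MSG_SendNotice));
    string text = msg.Text; // UTF-8 decoded by the property above
}
finally
{
    handle.Free();
}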

Related

How to remove exposed secure string from memory

Currently I have an issue on a project where a secure string is exposed like this:
IntPtr unmanagedString = IntPtr.Zero;
try
{
    // Copies the SecureString content into unmanaged memory as a plain Unicode string.
    unmanagedString = Marshal.SecureStringToGlobalAllocUnicode(secureString);
    string str = Marshal.PtrToStringUni(unmanagedString);
    ...
    ...
}
finally
{
    // Zeroes and frees the unmanaged copy.
    Marshal.ZeroFreeGlobalAllocUnicode(unmanagedString);
}
After the Marshal.SecureStringToGlobalAllocUnicode(secureString) call, a copy of the secure string's content sits in unmanaged memory. Even after Marshal.ZeroFreeGlobalAllocUnicode(unmanagedString) is called, the string can easily be found with memory tools, simply by searching for all strings.
Is there a way to completely remove it, or at least work around it somehow, for example by overwriting it?
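One likely culprit in the snippet above is the intermediate managed string created by Marshal.PtrToStringUni: it is immutable and stays in memory until the garbage collector reclaims it, so zeroing the unmanaged buffer alone is not enough. Below is a minimal sketch of one way to avoid that copy by reading characters straight from the unmanaged buffer instead of materializing a System.String; the ProcessChar callback is a hypothetical placeholder:

IntPtr unmanagedString = IntPtr.Zero;
try
{
    unmanagedString = Marshal.SecureStringToGlobalAllocUnicode(secureString);

    // Walk the UTF-16 buffer character by character; no managed string is ever created.
    for (int offset = 0; ; offset += 2)
    {
        char c = (char)Marshal.ReadInt16(unmanagedString, offset);
        if (c == '\0')
            break;
        ProcessChar(c); // hypothetical per-character handler
    }
}
finally
{
    Marshal.ZeroFreeGlobalAllocUnicode(unmanagedString);
}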

Represent a special keyboard key in the current culture?

Scenario
I've written a simple keylogger using the modern RawInput technique registering the desired device for event/data interception.
About Raw Input
Then, I'm using basically all these Windows API member definitions:
Raw Input Functions
Raw Input Structures
Problem/Question
I'm using a non-English keyboard with a non-English OS. My problem begins when I try to parse a special key of this keyboard, like the ñ/Ñ character, which is recognized as a System.Windows.Forms.Keys.OemTilde key,
or the ç/Ç character, which is recognized as a System.Windows.Forms.Keys.OemQuestion key.
I would like to make my keylogger language aware (or at least give it proper character recognition for my current culture, es-ES), but I'm stuck because I lack the knowledge to start retrieving those characters properly.
Please note that my intention is to learn how I can do it in an efficient/automated way, like the OS does with my keyboard: when I press an Ñ it types that Ñ. What I mean is that I'm fully aware of the kind of solution that performs a manual mapping of special characters, for example:
Select Case MyKey
    Case Keys.OemTilde
        char = "ñ"c
End Select
That is not the behavior I'm looking for, but I can understand that maybe I need additional "things" to reproduce a correct recognition/translation of those characters for each kind of keyboard; but what "things" do I need?
Research
I'm not sure how to proceed because, as I said, I don't have the knowledge to answer this problem myself (that's why I'm asking), but I imagine that knowledge of the current keyboard layout will be involved. I know that I can retrieve the current keyboard layout with the CultureInfo.CurrentCulture.KeyboardLayoutId property.
I know that the keyboard layout for culture en-US is 1033, and for culture es-ES is 3082.
Also, note the documentation of the MakeCode member of the RAWKEYBOARD structure; maybe it is a hint for what I intend to do, I don't know:
MakeCode
Type: USHORT
The scan code from the key depression.
The scan code for keyboard overrun is KEYBOARD_OVERRUN_MAKE_CODE.
but honestly that is just a guess on my part.
Here is the code I found. The correct solution is the ToUnicode WinAPI function:
[DllImport("user32.dll")]
public static extern int ToUnicode(uint virtualKeyCode, uint scanCode,
byte[] keyboardState,
[Out, MarshalAs(UnmanagedType.LPWStr, SizeConst = 64)]
StringBuilder receivingBuffer,
int bufferSize, uint flags);
static string GetCharsFromKeys(Keys keys, bool shift, bool altGr)
{
var buf = new StringBuilder(256);
var keyboardState = new byte[256];
if (shift)
keyboardState[(int) Keys.ShiftKey] = 0xff;
if (altGr)
{
keyboardState[(int) Keys.ControlKey] = 0xff;
keyboardState[(int) Keys.Menu] = 0xff;
}
WinAPI.ToUnicode((uint) keys, 0, keyboardState, buf, 256, 0);
return buf.ToString();
}
Console.WriteLine(GetCharsFromKeys(Keys.E, false, false)); // prints e
Console.WriteLine(GetCharsFromKeys(Keys.E, true, false)); // prints E
// Assuming British keyboard layout:
Console.WriteLine(GetCharsFromKeys(Keys.E, false, true)); // prints é
Console.WriteLine(GetCharsFromKeys(Keys.E, true, true)); // prints É
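To hook this up to the raw input data, one could feed the virtual-key code reported in RAWKEYBOARD straight into ToUnicode. A minimal sketch follows, assuming a handler that already receives the parsed virtual key; the ProcessRawKey name is mine, and GetKeyboardState only reflects the state of the thread that owns the input queue, so treat this as a starting point rather than a finished solution:

[DllImport("user32.dll")]
static extern bool GetKeyboardState(byte[] lpKeyState);

// Sketch: translate the RAWKEYBOARD.VKey value into the character(s)
// produced by the active layout, using the real modifier state.
static string ProcessRawKey(ushort vKey)
{
    var keyboardState = new byte[256];
    GetKeyboardState(keyboardState);   // current Shift/Ctrl/AltGr state

    var buf = new StringBuilder(64);
    WinAPI.ToUnicode(vKey, 0, keyboardState, buf, 64, 0);
    return buf.ToString();             // e.g. "ñ" on an es-ES layout when Ñ is pressed
}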

Google C++ code example explanation, translating to C#

I'm working with the Google DoubleClick ad exchange API. Their examples are in C++ and, well, I'm pretty awful at C++. I'm trying to convert this to C# for something I'm working on, and really I think I just need an explanation of what is actually happening in certain blocks of this code sample. Honestly, I roughly know what should happen overall, but I'm not sure I'm getting it right, and with encryption/decryption there is no "sort of right".
This is the full example from their API site:
bool DecryptByteArray(
    const string& ciphertext, const string& encryption_key,
    const string& integrity_key, string* cleartext) {
  // Step 1. find the length of initialization vector and clear text.
  const int cleartext_length =
      ciphertext.size() - kInitializationVectorSize - kSignatureSize;
  if (cleartext_length < 0) {
    // The length can't be correct.
    return false;
  }
  string iv(ciphertext, 0, kInitializationVectorSize);
  // Step 2. recover clear text
  cleartext->resize(cleartext_length, '\0');
  const char* ciphertext_begin = string_as_array(ciphertext) + iv.size();
  const char* const ciphertext_end = ciphertext_begin + cleartext->size();
  string::iterator cleartext_begin = cleartext->begin();
  bool add_iv_counter_byte = true;
  while (ciphertext_begin < ciphertext_end) {
    uint32 pad_size = kHashOutputSize;
    uchar encryption_pad[kHashOutputSize];
    if (!HMAC(EVP_sha1(), string_as_array(encryption_key),
              encryption_key.length(), (uchar*)string_as_array(iv),
              iv.size(), encryption_pad, &pad_size)) {
      printf("Error: encryption HMAC failed.\n");
      return false;
    }
    for (int i = 0;
         i < kBlockSize && ciphertext_begin < ciphertext_end;
         ++i, ++cleartext_begin, ++ciphertext_begin) {
      *cleartext_begin = *ciphertext_begin ^ encryption_pad[i];
    }
    if (!add_iv_counter_byte) {
      char& last_byte = *iv.rbegin();
      ++last_byte;
      if (last_byte == '\0') {
        add_iv_counter_byte = true;
      }
    }
    if (add_iv_counter_byte) {
      add_iv_counter_byte = false;
      iv.push_back('\0');
    }
  }
Step 1 is quite obvious. This block is what I am really not sure how to interpret:
if (!HMAC(EVP_sha1(), string_as_array(encryption_key),
          encryption_key.length(), (uchar*)string_as_array(iv),
          iv.size(), encryption_pad, &pad_size)) {
  printf("Error: encryption HMAC failed.\n");
  return false;
}
What exactly is happening in that if body? What would it look like in C#? There are a lot of parameters that do SOMETHING, but it seems like an awful lot crammed into a small spot. Is there some stdlib HMAC class? If I knew more about that, I might better understand what's happening.
The equivalent C# code for that block is:
using (var hmac = new HMACSHA1(encryption_key))
{
    // encryption_key and iv are byte arrays here.
    var encryption_pad = hmac.ComputeHash(iv);
}
It's computing the SHA1 HMAC of the initialization vector (IV), using the given encryption key.
The HMAC function here is OpenSSL's one-shot HMAC routine, not anything from the C++ standard library.
Just as a comment, I think it would be easier to implement this from their pseudocode description rather than from their C++ code.
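If a fuller translation helps, here is a rough C# rendering of the whole decryption loop above. It is only a sketch, assuming the constants from Google's example (16-byte IV, 4-byte signature, 20-byte HMAC-SHA1 block) and omitting the integrity check that the C++ excerpt also leaves out:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;

static byte[] DecryptByteArray(byte[] ciphertext, byte[] encryptionKey)
{
    const int IvSize = 16;         // kInitializationVectorSize
    const int SignatureSize = 4;   // kSignatureSize
    const int BlockSize = 20;      // kBlockSize == kHashOutputSize (SHA-1)

    int cleartextLength = ciphertext.Length - IvSize - SignatureSize;
    if (cleartextLength < 0)
        throw new ArgumentException("Ciphertext is too short.");

    // The IV is the first 16 bytes; it also acts as the per-block HMAC "counter" input.
    var iv = new List<byte>(ciphertext.Take(IvSize));
    var cleartext = new byte[cleartextLength];

    int inPos = IvSize;   // read position in ciphertext
    int outPos = 0;       // write position in cleartext
    bool addIvCounterByte = true;

    using (var hmac = new HMACSHA1(encryptionKey))
    {
        while (outPos < cleartextLength)
        {
            // pad = HMAC-SHA1(encryption_key, iv), the same call the C++ code makes.
            byte[] pad = hmac.ComputeHash(iv.ToArray());

            // XOR one block of ciphertext against the pad.
            for (int i = 0; i < BlockSize && outPos < cleartextLength; ++i, ++inPos, ++outPos)
                cleartext[outPos] = (byte)(ciphertext[inPos] ^ pad[i]);

            // Advance the IV counter exactly as the C++ code does.
            if (!addIvCounterByte)
            {
                iv[iv.Count - 1]++;
                if (iv[iv.Count - 1] == 0)
                    addIvCounterByte = true;
            }
            if (addIvCounterByte)
            {
                addIvCounterByte = false;
                iv.Add(0);
            }
        }
    }
    return cleartext;
}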

Add an attachment to Bugzilla using XML-RPC in VBA

I am currently developing an Excel macro which allows creating Bugs in a Bugzilla instance.
After some trial and error this now turns out to work fine.
I wanted to enhance the client so that it's also possible to add screenshots to the newly created bug.
The environment I'm using is a little bit tricky:
I have to use MS Excel for my task.
As Excel does not understand XML-RPC, I downloaded an interface DLL (CookComputing.XmlRpcV2.dll from xml-rpc.net) which makes the XML-RPC interface accessible from .NET.
Then I created an additional DLL which can be called from Excel macros (using COM interop).
As already mentioned, this is working fine for tasks like browsing or adding new bugs.
But when adding an attachment to the bug, the image has to be converted to the Base64 data type. Although this seems to work, and although the attachment appears to be created successfully, the image ends up corrupted and cannot be displayed.
Here's what I do to add the image:
The Bugzilla add_attachment method accepts a struct as input:
http://www.bugzilla.org/docs/4.0/en/html/api/Bugzilla/WebService/Bug.html#add_attachment.
This type was defined in C# and is visible also in VBA.
This is the struct definition:
[ClassInterface(ClassInterfaceType.AutoDual)]
public class TAttachmentInputData
{
    public string[] ids;
    public byte[] data;          // Base64-encoded data
    public string file_name;
    public string summary;
    public string content_type;
    public string comment;
    public bool is_patch;
    public bool is_private;

    public void addId(int id)
    {
        ids = new string[1];
        ids[0] = id.ToString();
    }

    public void addData(string strData)
    {
        try
        {
            // Interpret the (binary) input as ASCII text, Base64-encode it,
            // and store the Base64 string as ASCII bytes.
            byte[] encData_byte = System.Text.Encoding.ASCII.GetBytes(strData);
            string encodedData = Convert.ToBase64String(encData_byte);
            data = System.Text.Encoding.ASCII.GetBytes(encodedData);
        }
        catch (Exception e)
        {
            throw new Exception("Error in base64Encode: " + e.Message);
        }
    }
}
This is the part in my macro where I would like to add the attachment:
Dim attachmentsStruct As New TAttachmentInputData
fname = attachmentFileName
attachmentsStruct.file_name = GetFilenameFromPath(fname)
attachmentsStruct.is_patch = False
attachmentsStruct.is_private = False
'multiple other definitions
Open fname For Binary As #1
attachmentsStruct.addData (Input(LOF(1), #1))
Close #1
attachmentsStruct.file_name = GetFilenameFromPath(fname)
Call BugzillaClass.add_attachment(attachmentsStruct)
where BugzillaClass is the class exposed from my DLL to Excel VBA.
The method add_attachment refers to the XML-RPC method add_attachment.
I assume that my problem is the conversion from the binary file into base64.
This is done using the addData method in my C# DLL.
Is the conversion done correctly there?
Any idea why the images are corrupted?
I think the issue is that you are reading in binary data in the macro, but the addData method is expecting a string. Try declaring the parameter in addData as byte[].
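As a sketch of that suggestion (untested against Bugzilla, and keeping the original idea of storing the Base64 text as ASCII bytes), addData could take the raw bytes and encode them directly, so the binary data never passes through a string:

// Hypothetical revision: accept the file content as raw bytes and
// Base64-encode those bytes directly, avoiding the lossy ASCII round trip.
public void addData(byte[] rawBytes)
{
    string encodedData = Convert.ToBase64String(rawBytes ?? new byte[0]);
    data = System.Text.Encoding.ASCII.GetBytes(encodedData);
}

On the VBA side the file would then be read into a Byte() buffer (for example with Get on a binary-mode file) instead of Input(), and that buffer passed to addData.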

Why do I have an empty (but working) table index band in UITableView?

Using MonoTouch to develop my very first iPhone application for a customer, I've overridden the UITableViewDataSource.SectionIndexTitles method to provide a string[] array with what I thought would become the letters in the vertical index band.
Currently I get a working index band, but without any characters displayed:
(I do think the UITableViewDataSource.SectionIndexTitles has the native counterpart sectionIndexTitlesForTableView).
My question:
Can someone give me a hint on what I might be doing wrong here?
I do not have all A-Z characters; some are missing. Maybe that could be an issue?
This is a bug in MonoTouch. A workaround is to create a new method in your table source class and decorate it with the Export attribute, passing the native ObjC method to override (sectionIndexTitlesForTableView:):
string[] sectionIndexArray;
//..

[Export ("sectionIndexTitlesForTableView:")]
public NSArray SectionTitles (UITableView tableview)
{
    return NSArray.FromStrings(sectionIndexArray);
}
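For context, here is a minimal sketch of how this workaround might sit inside a complete table source; the class name, data, and the other overrides are illustrative, not taken from the original post:

using MonoTouch.Foundation;
using MonoTouch.UIKit;

// Illustrative MonoTouch data source using the exported-selector workaround above.
public class CountryTableSource : UITableViewDataSource
{
    string[] items = { "Argentina", "Brazil", "Canada" };
    string[] sectionIndexArray = { "A", "B", "C" };   // letters for the index band

    public override int RowsInSection(UITableView tableView, int section)
    {
        return items.Length;
    }

    public override UITableViewCell GetCell(UITableView tableView, NSIndexPath indexPath)
    {
        var cell = tableView.DequeueReusableCell("cell")
                   ?? new UITableViewCell(UITableViewCellStyle.Default, "cell");
        cell.TextLabel.Text = items[indexPath.Row];
        return cell;
    }

    // Workaround: export the native selector instead of overriding SectionIndexTitles.
    [Export("sectionIndexTitlesForTableView:")]
    public NSArray SectionTitles(UITableView tableview)
    {
        return NSArray.FromStrings(sectionIndexArray);
    }
}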
I would like to get back to my point...
Can you show the way you build the returned value for sectionIndexTitlesForTableView? I just tried the SimpleSectionedTableView sample app from Apple (http://developer.apple.com/library/ios/#samplecode/TableViewSuite/Introduction/Intro.html#//apple_ref/doc/uid/DTS40007318)
with this code:
- (NSArray *)sectionIndexTitlesForTableView:(UITableView *)tableView {
    NSMutableArray *a = [[NSMutableArray alloc] initWithCapacity:10];
    for (int i = 0; i < [regions count]; i++) {
        Region *r = [regions objectAtIndex:i];
        NSString *s = r.name;
        [a addObject:s];
    }
    NSArray *ax = [NSArray arrayWithArray:a];
    return ax;
}
And everything works fine...
