IntPtr to String and long? - c#

I am using C# and the .NET 2.0 framework.
I have this method here to get a string out of an IntPtr:
void* cfstring = __CFStringMakeConstantString(StringToCString(name));
IntPtr result = AMDeviceCopyValue_Int(device, unknown, cfstring);
if (result != IntPtr.Zero)
{
byte length = Marshal.ReadByte(result, 8);
if (length > 0)
{
string s = Marshal.PtrToStringAnsi(new IntPtr(result.ToInt64() + 9L), length);
return s;
}
}
return String.Empty;
It works for all data that is stored as a string on my USB device (the return data of the AMDeviceCopyValue method).
Now I get this:
£\rº\0\0¨\t\0\0\0ˆ\0\0\0\0\0\0x>Ô\0\0\0\0Ũ\t\0\0\0€1ÔxÕ͸MÔ\0\0\0\0Ȩ\t\0\0\0€)\0fŒ\a\0Value\0\0˨\t\0\0\0€7fŒ\a\0Result\0Ψ\t\0\0\0ˆTÅfŒ\a\0\0Key\0\0\0\0ñ¨\t\0\0\0€+fŒ\a\0Port\0\0\0ô¨\t\0\0\0€%fŒ\a\0Key\0\0\0\0÷¨\t\0\0\0€:\0\0\0\0\0\0\0\0\0\0\0\0\0\0ú¨\t\0"
This value is stored as a long - so how can I get a long, rather than a string, out of this IntPtr?

If you simply want to read a long instead of a string, use:
long l = Marshal.ReadInt64(result, 9);
return l;
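Putting it together with the null check from the question, the whole read might look like this (a sketch; it assumes the 64-bit payload really does start at offset 9, mirroring the string case above):
IntPtr result = AMDeviceCopyValue_Int(device, unknown, cfstring);
if (result != IntPtr.Zero)
{
    // Read eight bytes starting at the assumed payload offset.
    long value = Marshal.ReadInt64(result, 9);
    return value;
}
return 0L;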

Related

C# extract data from start of StringBuilder

I want to extract some chars from the start of a StringBuilder variable.
I wrote the code below:
private string getPart(StringBuilder data, int len)
{
string s = data.ToString(0, len);
data.Remove(0, len);
return s;
}
Any suggestions for better coding?
If you want character extraction in many parts of your code,
you can try implementing the algorithm as an extension method:
public static class StringBuilderExtensions {
public static String Extract(this StringBuilder source, int length) {
if (Object.ReferenceEquals(null, source))
throw new ArgumentNullException("source");
else if ((length < 0) || (length > source.Length))
throw new ArgumentOutOfRangeException("length");
// Your actual algorithm
String result = source.ToString(0, length);
source.Remove(0, length);
return result;
}
}
...
StringBuilder data = ...
String s = data.Extract(len); // <- Just extract
You can use the CopyTo method:
private string getPart(StringBuilder data, int len)
{
var output = new char[len];
data.CopyTo(0, output, 0, len);
data.Remove(0, len);
return new string(output);
}
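Usage of the CopyTo-based version is the same as for the original method (a quick sketch with illustrative values):
var data = new StringBuilder("Hello, world");
string head = getPart(data, 5); // "Hello"
// data now contains ", world"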

Converting byte arrays (from readfile) to string

So, I am using ReadFile from kernel32 to read the file. Here is my code for reading files with the help of SetFilePointer and ReadFile.
public long ReadFileMe(IntPtr filehandle, int startpos, int length, byte[] outdata)
{
IntPtr filea = IntPtr.Zero;
long ntruelen = GetFileSize(filehandle, filea);
int nRequestStart;
uint nRequestLen;
uint nApproxLength;
int a = 0;
if (ntruelen <= -1)
{
return -1;
}
else if (ntruelen == 0)
{
return -2;
}
if (startpos > ntruelen)
{
return -3;
}
else if (length <= 0)
{
return -5;
}
else if (length > ntruelen)
{
return -6;
}
else
{
nRequestStart = startpos;
nRequestLen = (uint)length;
outdata = new byte[nRequestLen - 1];
SetFilePointer(filehandle, (nRequestStart - 1), ref a, 0);
ReadFile(filehandle, outdata, nRequestLen, out nApproxLength, IntPtr.Zero);
return nApproxLength; //just for telling how many bytes are read in this function
}
}
When I used this function (for another purpose) it worked, so this code is tested and works.
The main problem is that I now need to convert outdata, the parameter the function puts the bytes into, to a string.
I tried Encoding.Unicode and so on (all the UTF encodings), but it doesn't work.
Try the Encoding.GetString(Byte[], Int32, Int32) method. This decodes a sequence of bytes from the specified byte array into a string.
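For example (a sketch; ASCII is an assumption here, substitute whatever encoding the file was actually written with):
// Assuming outdata now holds the bytes that were read:
string text = Encoding.ASCII.GetString(outdata, 0, outdata.Length);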
Hmm... Encoding.<name of encoding>.GetString should work...
Try something like this:
var convertedBuffer = Encoding.Convert(
Encoding.GetEncoding( /*name of encoding*/),Encoding.UTF8, outdata);
var str = Encoding.UTF8.GetString(convertedBuffer);
UPDATE:
and what about this?:
using (var streamReader = new StreamReader(@"C:\test.txt", true))
{
var currentEncoding = streamReader.CurrentEncoding.EncodingName;
Console.WriteLine(currentEncoding);
}
You might need to add the out modifier to the outdata parameter:
Passing Arrays Using ref and out
public long ReadFileMe(IntPtr filehandle, int startpos, int length, out byte[] outdata)
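With that change the caller receives the array the method allocated, and the decoding can then be done on it (a sketch; the arguments and the ASCII encoding are just for illustration):
byte[] data;
long bytesRead = ReadFileMe(filehandle, 1, 100, out data);
if (bytesRead > 0)
{
    // Decode only the bytes that were actually read.
    string text = Encoding.ASCII.GetString(data, 0, (int)Math.Min(bytesRead, data.Length));
    Console.WriteLine(text);
}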

How to initialize byte array as an OUT parameter in C# before proper array length is known

I'm having trouble with a webmethod used to download a file to a calling HTTPHandler.ashx file. The handler calls the webmethod as follows:
byte[] docContent;
string fileType;
string fileName;
string msgInfo = brokerService.DownloadFile(trimURL, recNumber, out docContent, out fileType, out fileName);
In the called webmethod, I have to initialize the byte array before using it or I get compiler errors on all the return statements:
The out parameter 'docContents' must be assigned to before control leaves the current method
I tried setting it to an empty array but that causes the Buffer.BlockCopy method to fail:
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
mscorlib
at System.Buffer.BlockCopy(Array src, Int32 srcOffset, Array dst, Int32 dstOffset, Int32 count)
I know I need to initialize it but I don't know the length of the array needed until I access the database. Through debugging I've verified all the code works except for the Buffer.BlockCopy:
public string DownloadFile(string trimURL
, string TrimRecordNumber
, out byte[] docContents
, out string returnFiletype
, out string returnFilename)
{
docContents = new byte[0];
returnFiletype = null; returnFilename = null;
try
{
ConnectToTrim(trimURL);
if (!db.IsValid)
return "CRITICAL: Database Is NOT Valid";
Record myRec = db.GetRecord(TrimRecordNumber);
if (myRec == null)
return "CRITICAL: Record not found.";
uint bufferSize = 10000;
int documentLength = myRec.DocumentSize;
byte[] result = new byte[documentLength];
byte[] buffer = new byte[bufferSize];
uint bytesRead;
uint totalBytesRead = 0;
TRIMSDK.IStream docStream = myRec.GetDocumentStream(string.Empty, false, string.Empty);
while (totalBytesRead < documentLength)
{
docStream.RemoteRead(out buffer[0], 10000, out bytesRead);
for (int i = 0; i < bytesRead; i++)
{
result[totalBytesRead] = buffer[i];
totalBytesRead += 1;
}
}
returnFiletype = myRec.Extension;
returnFilename = myRec.SuggestedFileName;
Buffer.BlockCopy(result, 0, docContents, 0, result.Length);
return string.Format("OK-Document for recordnumber({0}): {1}.{2} - size: {3} bytes",
TrimRecordNumber, returnFilename, returnFiletype, Convert.ToString(documentLength));
}
catch (Exception ex)
{
return LogException(ex, "CRITICAL: Exception in DownloadFile method has been logged:", trimURL, 100);
}
}
You can start by initializing it to null. Just prior to calling Buffer.BlockCopy, allocate and initialize it to the proper length.
This would look like:
public string DownloadFile(string trimURL
, string TrimRecordNumber
, out byte[] docContents
, out string returnFiletype
, out string returnFilename)
{
docContents = null;
//...
returnFiletype = myRec.Extension;
returnFilename = myRec.SuggestedFileName;
docContents = new byte[result.Length]; // Allocate appropriately here...
Buffer.BlockCopy(result, 0, docContents, 0, result.Length);
return ...
Alternatively, you can just directly allocate and copy the results into docContents - eliminating the need for result entirely. You will still need to initialize it to null at the start if you want to leave your overall flow control alone, however.
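A sketch of that alternative, reusing the read loop from the question but writing straight into docContents (names and sizes are taken from the question's code):
docContents = new byte[documentLength];
byte[] buffer = new byte[10000];
uint bytesRead;
uint totalBytesRead = 0;
TRIMSDK.IStream docStream = myRec.GetDocumentStream(string.Empty, false, string.Empty);
while (totalBytesRead < documentLength)
{
    docStream.RemoteRead(out buffer[0], 10000, out bytesRead);
    for (int i = 0; i < bytesRead; i++)
    {
        docContents[totalBytesRead] = buffer[i];
        totalBytesRead += 1;
    }
}
// No separate result array and no Buffer.BlockCopy needed.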

How do I read the names from a PE module's export table?

I've successfully read a PE header from an unmanaged module loaded into memory by another process. What I'd like to do now is read the names of this module's exports. Basically, this is what I have so far (I've left out a majority of the PE parsing code, because I already know it works):
Extensions
public static IntPtr Increment(this IntPtr ptr, int amount)
{
return new IntPtr(ptr.ToInt64() + amount);
}
public static T ToStruct<T>(this byte[] data)
{
GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
T result = (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
handle.Free();
return result;
}
public static byte[] ReadBytes(this Process process, IntPtr baseAddress, int size)
{
int bytesRead;
byte[] bytes = new byte[size];
Native.ReadProcessMemory(process.Handle, baseAddress, bytes, size, out bytesRead);
return bytes;
}
public static T ReadStruct<T>(this Process process, IntPtr baseAddress)
{
byte[] bytes = ReadBytes(process, baseAddress, Marshal.SizeOf(typeof(T)));
return bytes.ToStruct<T>();
}
public static string ReadString(this Process process, IntPtr baseAddress, int size)
{
byte[] bytes = ReadBytes(process, baseAddress, size);
return Encoding.ASCII.GetString(bytes);
}
GetExports()
Native.IMAGE_DATA_DIRECTORY dataDirectory =
NtHeaders.OptionalHeader.DataDirectory[Native.IMAGE_DIRECTORY_ENTRY_EXPORT];
if (dataDirectory.VirtualAddress > 0 && dataDirectory.Size > 0)
{
Native.IMAGE_EXPORT_DIRECTORY exportDirectory =
_process.ReadStruct<Native.IMAGE_EXPORT_DIRECTORY>(
_baseAddress.Increment((int)dataDirectory.VirtualAddress));
IntPtr namesAddress = _baseAddress.Increment((int)exportDirectory.AddressOfNames);
IntPtr nameOrdinalsAddress = _baseAddress.Increment((int)exportDirectory.AddressOfNameOrdinals);
IntPtr functionsAddress = _baseAddress.Increment((int)exportDirectory.AddressOfFunctions);
for (int i = 0; i < exportDirectory.NumberOfFunctions; i++)
{
Console.WriteLine(_process.ReadString(namesAddress.Increment(i * 4), 64));
}
}
When I run this, all I get is a pattern of double question marks, then completely random characters. I know the header is being read correctly, because the signatures are correct. The problem has to lie in the way that I'm iterating over the function list.
The code at this link seems to suggest that the names and ordinals form a matched pair of arrays, counted up to NumberOfNames, and that the functions are separate. So your loop may be iterating the wrong number of times, but that doesn't explain why you're seeing bad strings from the very beginning.
For just printing names, I'm having success with a loop like the one shown below. I think the call to ImageRvaToVa may be what you need to get the correct strings? However I don't know whether that function will work unless you've actually loaded the image by calling MapAndLoad -- that's what the documentation requests, and the mapping did not seem to work in some quick experiments I did using LoadLibrary instead.
Here's the pInvoke declaration:
[DllImport("DbgHelp.dll", CallingConvention = CallingConvention.StdCall), SuppressUnmanagedCodeSecurity]
public static extern IntPtr ImageRvaToVa(
IntPtr NtHeaders,
IntPtr Base,
uint Rva,
IntPtr LastRvaSection);
and here's my main loop:
LOADED_IMAGE loadedImage = ...; // populated with MapAndLoad
IMAGE_EXPORT_DIRECTORY* pIID = ...; // populated with ImageDirectoryEntryToData
uint* pFuncNames = (uint*)
ImageRvaToVa(
loadedImage.FileHeader,
loadedImage.MappedAddress,
pIID->AddressOfNames,
IntPtr.Zero);
for (uint i = 0; i < pIID->NumberOfNames; i++ )
{
uint funcNameRVA = pFuncNames[i];
if (funcNameRVA != 0)
{
char* funcName =
(char*) (ImageRvaToVa(loadedImage.FileHeader,
loadedImage.MappedAddress,
funcNameRVA,
IntPtr.Zero));
var name = Marshal.PtrToStringAnsi((IntPtr) funcName);
Console.WriteLine(" funcName: {0}", name);
}
}
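For comparison, staying entirely in managed code with the question's own extension methods, the loop could be reworked roughly like this (a sketch; it assumes the Native.IMAGE_EXPORT_DIRECTORY mapping exposes NumberOfNames, and that AddressOfNames points at an array of 32-bit RVAs, each of which in turn points at a null-terminated ASCII name):
for (int i = 0; i < exportDirectory.NumberOfNames; i++)
{
    // Each slot holds the RVA of the name string, not the string itself.
    byte[] rvaBytes = _process.ReadBytes(namesAddress.Increment(i * 4), 4);
    uint nameRva = BitConverter.ToUInt32(rvaBytes, 0);
    string rawName = _process.ReadString(_baseAddress.Increment((int)nameRva), 64);
    Console.WriteLine(rawName.Split('\0')[0]); // trim at the terminating null
}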

Fastest way to convert a possibly-null-terminated ascii byte[] to a string?

I need to convert a (possibly) null-terminated array of ASCII bytes to a string in C# and the fastest way I've found to do it is by using my UnsafeAsciiBytesToString method shown below. This method uses the String.String(sbyte*) constructor which contains a warning in its remarks:
"The value parameter is assumed to point to an array representing a string encoded using the default ANSI code page (that is, the encoding method specified by Encoding.Default).
Note: * Because the default ANSI code page is system-dependent, the string created by this constructor from identical signed byte arrays may differ on different systems. * ...
* If the specified array is not null-terminated, the behavior of this constructor is system dependent. For example, such a situation might cause an access violation. *
"
Now, I'm positive that the way the string is encoded will never change... but the default codepage on the system that my app is running on might change. So, is there any reason that I shouldn't run screaming from using String.String(sbyte*) for this purpose?
using System;
using System.Text;
namespace FastAsciiBytesToString
{
static class StringEx
{
public static string AsciiBytesToString(this byte[] buffer, int offset, int maxLength)
{
int maxIndex = offset + maxLength;
for( int i = offset; i < maxIndex; i++ )
{
/// Skip non-nulls.
if( buffer[i] != 0 ) continue;
/// First null we find, return the string.
return Encoding.ASCII.GetString(buffer, offset, i - offset);
}
/// Terminating null not found. Convert the entire section from offset to maxLength.
return Encoding.ASCII.GetString(buffer, offset, maxLength);
}
public static string UnsafeAsciiBytesToString(this byte[] buffer, int offset)
{
string result = null;
unsafe
{
fixed( byte* pAscii = &buffer[offset] )
{
result = new String((sbyte*)pAscii);
}
}
return result;
}
}
class Program
{
static void Main(string[] args)
{
byte[] asciiBytes = new byte[]{ 0, 0, 0, (byte)'a', (byte)'b', (byte)'c', 0, 0, 0 };
string result = asciiBytes.AsciiBytesToString(3, 6);
Console.WriteLine("AsciiBytesToString Result: \"{0}\"", result);
result = asciiBytes.UnsafeAsciiBytesToString(3);
Console.WriteLine("UnsafeAsciiBytesToString Result: \"{0}\"", result);
/// Non-null terminated test.
asciiBytes = new byte[]{ 0, 0, 0, (byte)'a', (byte)'b', (byte)'c' };
result = asciiBytes.UnsafeAsciiBytesToString(3);
Console.WriteLine("UnsafeAsciiBytesToString Result: \"{0}\"", result);
Console.ReadLine();
}
}
}
One-liner (assuming the buffer actually contains ONE well-formed null-terminated string):
String MyString = Encoding.ASCII.GetString(MyByteBuffer).TrimEnd((Char)0);
Any reason not to use the String(sbyte*, int, int) constructor? If you've worked out which portion of the buffer you need, the rest should be simple:
public static string UnsafeAsciiBytesToString(byte[] buffer, int offset, int length)
{
unsafe
{
fixed (byte* pAscii = buffer)
{
return new String((sbyte*)pAscii, offset, length);
}
}
}
If you need to look first:
public static string UnsafeAsciiBytesToString(byte[] buffer, int offset)
{
int end = offset;
while (end < buffer.Length && buffer[end] != 0)
{
end++;
}
unsafe
{
fixed (byte* pAscii = buffer)
{
return new String((sbyte*)pAscii, offset, end - offset);
}
}
}
If this truly is an ASCII string (i.e. all bytes are less than 128) then the codepage problem shouldn't be an issue unless you've got a particularly strange default codepage which isn't based on ASCII.
Out of interest, have you actually profiled your application to make sure that this is really the bottleneck? Do you definitely need the absolute fastest conversion, instead of one which is more readable (e.g. using Encoding.GetString for the appropriate encoding)?
I'm not sure of the speed, but I found it easiest to use LINQ to remove the nulls before encoding:
string s = myEncoding.GetString(bytes.TakeWhile(b => !b.Equals(0)).ToArray());
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace TestProject1
{
class Class1
{
static public string cstr_to_string( byte[] data, int code_page)
{
Encoding Enc = Encoding.GetEncoding(code_page);
int inx = Array.FindIndex(data, 0, (x) => x == 0);//search for 0
if (inx >= 0)
return (Enc.GetString(data, 0, inx));
else
return (Enc.GetString(data));
}
}
}
s = s.Substring(0, s.IndexOf((char) 0));
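Note that this assumes a terminating null is actually present; if there is none, IndexOf returns -1 and Substring throws. A guarded variant:
int nul = s.IndexOf((char)0);
if (nul >= 0)
    s = s.Substring(0, nul);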
Just for completeness, you can also use built-in methods of the .NET framework to do this:
var handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
try
{
return Marshal.PtrToStringAnsi(handle.AddrOfPinnedObject());
}
finally
{
handle.Free();
}
Advantages:
It doesn't require unsafe code (i.e., you can also use this method for VB.NET) and
it also works for "wide" (UTF-16) strings, if you use Marshal.PtrToStringUni instead.
One possibility to consider: check that the default code-page is acceptable and use that information to select the conversion mechanism at run-time.
This could also take into account whether the string is in fact null-terminated, but once you've done that, of course, the speed gains may vanish.
An easy/safe/fast way to convert byte[] objects to strings containing their ASCII equivalent, and vice versa, is to use the .NET class System.Text.Encoding. The class has a static property that returns an ASCII encoder:
From String to byte[]:
string s = "Hello World!"
byte[] b = System.Text.Encoding.ASCII.GetBytes(s);
From byte[] to string:
byte[] byteArray = new byte[] {0x41, 0x42, 0x09, 0x00, 0xFF};
string s = System.Text.Encoding.ASCII.GetString(byteArray);
This is a bit ugly but you don't have to use unsafe code:
string result = "";
for (int i = 0; i < data.Length && data[i] != 0; i++)
result += (char)data[i];
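The repeated concatenation allocates a new string on every pass; a StringBuilder keeps it linear while still avoiding unsafe code (a sketch):
var sb = new StringBuilder();
for (int i = 0; i < data.Length && data[i] != 0; i++)
    sb.Append((char)data[i]);
string result = sb.ToString();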
