Size of addrinfo structure - C#

I want to duplicate an addrinfo structure (by simply copying all of its bytes), but my changes result in memory corruption in the host application when hooking.
My code is as simple as:
byte[] addressInfoBytes = new byte[32];
Marshal.Copy(addressInfoAddress, addressInfoBytes, 0, addressInfoBytes.Length);
newAddressInfoAddress = GCHandle.Alloc(addressInfoBytes, GCHandleType.Pinned).AddrOfPinnedObject();
I think it happens because 32 is not the correct size of this structure.
I calculated this number based on this definition from MSDN:
typedef struct addrinfo {
    int              ai_flags;
    int              ai_family;
    int              ai_socktype;
    int              ai_protocol;
    size_t           ai_addrlen;
    char            *ai_canonname;
    struct sockaddr *ai_addr;
    struct addrinfo *ai_next;
} ADDRINFOA, *PADDRINFOA;
Do you have any idea what the correct size of this structure is, and what I am doing incorrectly?
Thank you very much for your time.

I solved this problem a long time ago, so I thought posting it here might help someone else as well.
using System;
using System.Net;
using System.Net.Sockets;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
internal struct AddressInfo
{
    internal AddressInfoHints Flags;
    internal AddressFamily Family;
    internal SocketType SocketType;
    internal ProtocolFamily Protocol;
    internal int AddressLen; // natively size_t; on x64 the 4 bytes of alignment padding after this int keep the following pointers at the right offsets
    internal IntPtr CanonName; // sbyte array (char*)
    internal IntPtr Address;   // byte array (struct sockaddr*)
    internal IntPtr Next;      // next element in the addrinfo linked list

    [Flags]
    internal enum AddressInfoHints
    {
        None = 0,
        Passive = 0x01,
        Canonname = 0x02,
        Numerichost = 0x04,
        All = 0x0100,
        Addrconfig = 0x0400,
        V4Mapped = 0x0800,
        NonAuthoritative = 0x04000,
        Secure = 0x08000,
        ReturnPreferredNames = 0x010000,
        Fqdn = 0x00020000,
        Fileserver = 0x00040000,
    }
}
AddressInfo addressInfo = (AddressInfo)Marshal.PtrToStructure(handle, typeof(AddressInfo));
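To answer the original question directly: ADDRINFOA is 32 bytes in a 32-bit process, but 48 bytes in a 64-bit one (size_t and the three pointers grow to 8 bytes each), so hardcoding 32 corrupts memory under x64. Here is a minimal sketch of duplicating the structure without hardcoding the size, reusing addressInfoAddress from the question:
// Let the marshaler compute the platform-correct size instead of hardcoding 32.
int size = Marshal.SizeOf(typeof(AddressInfo));

// Read the native struct, then write a copy into unmanaged memory that the
// host application can keep using (release it with Marshal.FreeHGlobal later).
// Note this is a shallow copy: CanonName, Address and Next still point at the
// original buffers, just as the byte-for-byte copy in the question did.
AddressInfo copy = (AddressInfo)Marshal.PtrToStructure(addressInfoAddress, typeof(AddressInfo));
IntPtr newAddressInfoAddress = Marshal.AllocHGlobal(size);
Marshal.StructureToPtr(copy, newAddressInfoAddress, false);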


Efficient reading of structured binary data from a file

I have the following code fragment that reads a binary file and validates it:
FileStream f = File.OpenRead("File.bin");
MemoryStream memStream = new MemoryStream();
memStream.SetLength(f.Length);
f.Read(memStream.GetBuffer(), 0, (int)f.Length);

f.Seek(0, SeekOrigin.Begin);
var r = new BinaryReader(f);
Single prevVal = 0;
do
{
    r.ReadUInt32();
    var val = r.ReadSingle();
    if (prevVal != 0)
    {
        var diff = Math.Abs(val - prevVal) / prevVal;
        if (diff > 0.25)
            Console.WriteLine("Bad!");
    }
    prevVal = val;
}
while (f.Position < f.Length);
It unfortunately works very slowly, and I am looking to improve this. In C++, I would simply read the file into a byte array and then recast that array as an array of structures:
struct S {
    int a;
    float b;
};
How would I do this in C#?
Define a struct (possibly a readonly struct) with explicit layout ([StructLayout(LayoutKind.Explicit)]) that is precisely the same as your C++ code, then do one of the following:
1. Open the file as a memory-mapped file and get the pointer to the data; use either unsafe code on the raw pointer, or Unsafe.AsRef<YourStruct> on the data and Unsafe.Add<> to iterate.
2. Open the file as a memory-mapped file and get the pointer to the data; create a custom memory over the pointer (of your T), and iterate over the span.
3. Open the file as a byte[]; create a Span<byte> over the byte[], then use MemoryMarshal.Cast<,> to create a Span<YourType>, and iterate over that.
4. Open the file as a byte[]; use fixed to pin the byte* and get a pointer; use unsafe code to walk the pointer.
5. Something involving "pipelines" - a Pipe that is the buffer, maybe using StreamConnection on a FileStream for filling the pipe, and a worker loop that dequeues from the pipe. Complication: the buffers can be discontiguous and may split at inconvenient places; solvable, but subtle code is required whenever the first span isn't at least 8 bytes.
(or some combination of those concepts)
Any of those should work much like your C++ version. The 4th is simple, but for very large data you probably want to prefer memory-mapped files; a sketch of the memory-mapped approach follows.
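For illustration, here is a minimal sketch of the memory-mapped route (option 1), assuming the same { UInt32, Single } record layout as the question; the Record struct and Scan method names are mine, not a library API:
using System;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
readonly struct Record
{
    public readonly uint Dummy;  // the UInt32 that is read and skipped
    public readonly float Val;   // the Single being validated
}

static unsafe void Scan(string path)
{
    long length = new FileInfo(path).Length;
    using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open, null, 0, MemoryMappedFileAccess.Read))
    using (var view = mmf.CreateViewAccessor(0, length, MemoryMappedFileAccess.Read))
    {
        byte* ptr = null;
        view.SafeMemoryMappedViewHandle.AcquirePointer(ref ptr);
        try
        {
            // Treat the mapped bytes as an array of Record and walk it.
            Record* records = (Record*)ptr;
            long count = length / sizeof(Record);
            float prev = 0;
            for (long i = 0; i < count; i++)
            {
                float val = records[i].Val;
                if (prev != 0 && Math.Abs(val - prev) / prev > 0.25)
                    Console.WriteLine("Bad!");
                prev = val;
            }
        }
        finally
        {
            view.SafeMemoryMappedViewHandle.ReleasePointer();
        }
    }
}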
This is what we use (compatible with older versions of C#):
public static T[] FastRead<T>(FileStream fs, int count) where T : struct
{
    int sizeOfT = Marshal.SizeOf(typeof(T));
    long bytesRemaining = fs.Length - fs.Position;
    long wantedBytes = (long)count * sizeOfT; // cast avoids int overflow for large requests
    long bytesAvailable = Math.Min(bytesRemaining, wantedBytes);
    long availableValues = bytesAvailable / sizeOfT;
    long bytesToRead = availableValues * sizeOfT;

    if ((bytesRemaining < wantedBytes) && ((bytesRemaining - bytesToRead) > 0))
    {
        Debug.WriteLine("Requested data exceeds available data and partial data remains in the file.");
    }

    T[] result = new T[availableValues];
    GCHandle gcHandle = GCHandle.Alloc(result, GCHandleType.Pinned);
    try
    {
        uint bytesRead;
        if (!ReadFile(fs.SafeFileHandle, gcHandle.AddrOfPinnedObject(), (uint)bytesToRead, out bytesRead, IntPtr.Zero))
        {
            throw new IOException("Unable to read file.", new Win32Exception(Marshal.GetLastWin32Error()));
        }
        Debug.Assert(bytesRead == bytesToRead);
    }
    finally
    {
        gcHandle.Free();
    }
    GC.KeepAlive(fs);
    return result;
}

[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Interoperability", "CA1415:DeclarePInvokesCorrectly")]
[DllImport("kernel32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool ReadFile
(
    SafeFileHandle hFile,
    IntPtr lpBuffer,
    uint nNumberOfBytesToRead,
    out uint lpNumberOfBytesRead,
    IntPtr lpOverlapped
);
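Hypothetical usage might look like this; the Rec layout and the record count are assumptions for illustration:
[StructLayout(LayoutKind.Explicit, Size = 8)]
struct Rec
{
    [FieldOffset(0)] public uint Dummy;
    [FieldOffset(4)] public float Val;
}

using (FileStream fs = File.OpenRead("File.bin"))
{
    // Reads up to 1000 records; fewer are returned if the file is shorter.
    Rec[] records = FastRead<Rec>(fs, 1000);
}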
NOTE: This only works for structs that contain only blittable types, of course. And you must use [StructLayout(LayoutKind.Explicit)] and declare the packing to ensure that the struct layout is identical to the binary format of the data in the file.
For recent versions of C#, you can use Span as mentioned by Marc in the other answer!
Thank you everyone for the very helpful comments and answers. Given this input, this is my preferred solution:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct Data
{
    public UInt32 dummy;
    public Single val;
};

static void Main(string[] args)
{
    byte[] byteArray = File.ReadAllBytes("File.bin");
    ReadOnlySpan<Data> dataArray = MemoryMarshal.Cast<byte, Data>(new ReadOnlySpan<byte>(byteArray));
    Single prevVal = 0;
    foreach (var v in dataArray)
    {
        if (prevVal != 0)
        {
            var diff = Math.Abs(v.val - prevVal) / prevVal;
            if (diff > 0.25)
                Console.WriteLine("Bad!");
        }
        prevVal = v.val;
    }
}
It indeed works much faster than the original implementation.
You are actually not using the MemoryStream at all currently. Your BinaryReader accesses the file directly. To have the BinaryReader use the MemoryStream instead:
Replace
f.Seek(0, SeekOrigin.Begin);
var r = new BinaryReader(f);
...
while (f.Position < f.Length);
with
memStream.Seek(0, SeekOrigin.Begin);
var r = new BinaryReader(memStream);
...
while(r.BaseStream.Position < r.BaseStream.Length)
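Put together, the reworked fragment reads:
FileStream f = File.OpenRead("File.bin");
MemoryStream memStream = new MemoryStream();
memStream.SetLength(f.Length);
f.Read(memStream.GetBuffer(), 0, (int)f.Length);

memStream.Seek(0, SeekOrigin.Begin);
var r = new BinaryReader(memStream);
Single prevVal = 0;
do
{
    r.ReadUInt32();
    var val = r.ReadSingle();
    if (prevVal != 0)
    {
        var diff = Math.Abs(val - prevVal) / prevVal;
        if (diff > 0.25)
            Console.WriteLine("Bad!");
    }
    prevVal = val;
}
while (r.BaseStream.Position < r.BaseStream.Length);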

How to get a byte** from managed byte[] buffer

I've been using the FFmpeg.AutoGen wrapper (https://github.com/Ruslan-B/FFmpeg.AutoGen) to decode my H264 video for some time with great success, and now I have to add AAC audio decoding (previously I was using G.711 and NAudio for this).
I have the AAC stream decoding using avcodec_decode_audio4; however, the output buffer/frame is in the floating-point format FLT, and I need it in S16. For this I have found unmanaged examples using swr_convert, and FFmpeg.AutoGen does have this function P/Invoked as:
[DllImport(SWRESAMPLE_LIBRARY, EntryPoint = "swr_convert", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
public static extern int swr_convert(SwrContext* s, byte** @out, int out_count, byte** @in, int in_count);
My trouble is that I can't find a successful way of converting/fixing/casting my managed byte[] into a byte** to provide as the destination buffer.
Has anyone done this before?
My non-working code...
packet.ResetBuffer(m_avFrame->linesize[0] * 2);
fixed (byte* pData = packet.Payload)
{
    byte** src = &m_avFrame->data_0;
    //byte** dst = *pData;
    IntPtr d = new IntPtr(pData);
    FFmpegInvoke.swr_convert(m_pConvertContext, (byte**)d.ToPointer(), packet.Length, src, (int)m_avFrame->linesize[0]);
}
Thanks for any help.
Cheers
Dave
The function you are trying to call is documented here: http://www.ffmpeg.org/doxygen/2.0/swresample_8c.html#a81af226d8969df314222218c56396f6a
The out_arg parameter is declared like this:
uint8_t* out_arg[SWR_CH_MAX]
That is an array of SWR_CH_MAX byte arrays. Your translation renders that as byte** and so forces you to use unsafe code. Personally, I think I would avoid that. I would declare the parameter like this:
[MarshalAs(UnmanagedType.LPArray)]
IntPtr[] out_arg
Declare the array like this:
IntPtr[] out_arg = new IntPtr[channelCount];
I am guessing that the CH in SWR_CH_MAX is short-hand for channel.
Then you need to allocate memory for the output buffer. I'm not sure how you want to do that. You could allocate one byte array per channel and pin those arrays to get hold of a pointer to pass down to the native code. That would be my preferred approach because upon return you'd have your channels in nice managed arrays. Another way would be a call to Marshal.AllocHGlobal.
The input buffer would need to be handled in the same way.
I would not use the automated P/Invoke translation that you are currently using. It seems hell-bent on forcing you to use pointers and unsafe code, which is not massively helpful. I'd translate it by hand.
I'm sorry not to give more specific details, but it's a little hard because your question did not contain any information about the types used in your code samples. I hope the general advice is useful.
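A hand-written declaration along these lines might look as follows; the DLL name and the IntPtr context handle are assumptions, since the question doesn't show the library's exact types:
[DllImport("swresample-0.dll", EntryPoint = "swr_convert",
    CallingConvention = CallingConvention.Cdecl)]
public static extern int swr_convert(
    IntPtr s,                                            // SwrContext*
    [MarshalAs(UnmanagedType.LPArray)] IntPtr[] out_arg, // one buffer pointer per output channel
    int out_count,
    [MarshalAs(UnmanagedType.LPArray)] IntPtr[] in_arg,  // one buffer pointer per input channel
    int in_count);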
Thanks to @david-heffernan's answer, I've managed to get the following working, and I'm posting it as an answer since examples of managed use of FFmpeg are very rare.
fixed (byte* pData = packet.Payload)
{
    IntPtr[] in_buffs = new IntPtr[2];
    in_buffs[0] = new IntPtr(m_avFrame->data_0);
    in_buffs[1] = new IntPtr(m_avFrame->data_1);
    IntPtr[] out_buffs = new IntPtr[1];
    out_buffs[0] = new IntPtr(pData);
    FFmpegInvoke.swr_convert(m_pConvertContext, out_buffs, m_avFrame->nb_samples, in_buffs, m_avFrame->nb_samples);
}
And in the complete context of decoding a buffer of AAC audio...
protected override void DecodePacket(MediaPacket packet)
{
    int frameFinished = 0;
    AVPacket avPacket = new AVPacket();
    FFmpegInvoke.av_init_packet(ref avPacket);
    byte[] payload = packet.Payload;
    fixed (byte* pData = payload)
    {
        avPacket.data = pData;
        avPacket.size = packet.Length;
        if (packet.KeyFrame)
        {
            avPacket.flags |= FFmpegInvoke.AV_PKT_FLAG_KEY;
        }
        int in_len = packet.Length;
        int count = FFmpegInvoke.avcodec_decode_audio4(CodecContext, m_avFrame, out frameFinished, &avPacket);
        if (count != packet.Length)
        {
            // partial consume; not handled here
        }
        if (count < 0)
        {
            throw new Exception("Can't decode frame!");
        }
    }
    FFmpegInvoke.av_free_packet(ref avPacket);
    if (frameFinished > 0)
    {
        if (!mConverstionContextInitialised)
        {
            InitialiseConverstionContext();
        }
        packet.ResetBuffer(m_avFrame->nb_samples * 4); // need to find a better way of getting the out buff size
        fixed (byte* pData = packet.Payload)
        {
            IntPtr[] in_buffs = new IntPtr[2];
            in_buffs[0] = new IntPtr(m_avFrame->data_0);
            in_buffs[1] = new IntPtr(m_avFrame->data_1);
            IntPtr[] out_buffs = new IntPtr[1];
            out_buffs[0] = new IntPtr(pData);
            FFmpegInvoke.swr_convert(m_pConvertContext, out_buffs, m_avFrame->nb_samples, in_buffs, m_avFrame->nb_samples);
        }
        packet.Type = PacketType.Decoded;
        if (mFlushRequest)
        {
            //mRenderQueue.Clear();
            packet.Flush = true;
            mFlushRequest = false;
        }
        mFirstFrame = true;
    }
}

ReadProcessMemory into string

I know everything about the process and which address I want to read, but I don't know how to use the ReadProcessMemory function. Do I need to add some usings or something?
I made this in C++, but how can I do it in C#?
char* ReadMemoryText(DWORD address, int size)
{
    char* ret = new char[size]; // heap-allocated so the buffer remains valid after returning
    DWORD processId;
    HWND hwnd = FindWindow("WindowX", NULL);
    if (hwnd != NULL)
    {
        GetWindowThreadProcessId(hwnd, &processId);
        HANDLE phandle = OpenProcess(PROCESS_VM_READ, 0, processId);
        if (!phandle)
        {
            cout << GetLastError() << endl;
            cout << "Could not get handle!\n";
            cin.get();
        }
        ReadProcessMemory(phandle, (LPVOID)address, ret, size, 0);
        for (int i = 0; i < size && ret[i] != 0; ++i)
            cout << ret[i];
        return ret;
    }
    return NULL;
}
Here is a C# example that reads a char array from memory. In this case it's the local player's name string from Assault Cube.
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool ReadProcessMemory(
IntPtr hProcess, IntPtr lpBaseAddress, byte[] lpBuffer, Int32 nSize, out IntPtr lpNumberOfBytesRead);
var nameAddr = ghapi.FindDMAAddy(hProc, (IntPtr)(modBase2 + 0x10f4f4), new int[] { 0x225 });
byte[] name = new byte[16];
ghapi.ReadProcessMemory(hProc, nameAddr, name, 16, out _);
Console.WriteLine(Encoding.Default.GetString(name));
We use P/Invoke to get access to ReadProcessMemory, which is exported from kernel32.dll.
We use FindDMAAddy to get the address of the name variable. The char array is a fixed size of 16 bytes.
We call ReadProcessMemory with the source and destination variables and a size of 16; for the last argument we just pass "out _" because we don't care about the bytesRead value.
Then we convert that char array to a string with the proper encoding, using Encoding.Default.GetString().
Then we write that line to the console.
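For completeness, hProc above comes from OpenProcess; here is a minimal sketch of obtaining it (the target process name is an assumption):
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr OpenProcess(uint dwDesiredAccess, bool bInheritHandle, int dwProcessId);

const uint PROCESS_VM_READ = 0x0010;

// Assault Cube's process is assumed to be named "ac_client".
Process proc = Process.GetProcessesByName("ac_client")[0];
IntPtr hProc = OpenProcess(PROCESS_VM_READ, false, proc.Id);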

CKR_BUFFER_TOO_SMALL = 0x00000150

I want to P/Invoke C_Encrypt() (PKCS#11) from a DLL:
[DllImport("cryptoki.dll", SetLastError = true)]
private static extern UInt32 C_Encrypt(CK_SESSION_HANDLE hSession,IntPtr pData,CK_ULONG ulDataLen,out IntPtr pEncryptedData,out CK_ULONG pulEncryptedData);
/*
.... Main
in which I initialize the encryption parametrs with C_EncyptInit
*/
CK_BYTE[] text = new CK_BYTE[] { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x08, 0x09 };
System.UInt32 t, tt = (System.UInt32)text.Length;
IntPtr pdata = Marshal.AllocHGlobal(text.Length);
Marshal.Copy(text, 0, pdata, text.Length);
IntPtr chif = IntPtr.Zero;
tt = (System.UInt32)Marshal.SizeOf(pdata);
rv = C_Encrypt(h, pdata, tt, out chif, out t);
help please
There's a variety of different problems here.
Your P/Invoke signature is wrong. The final two parameters are not out parameters. The C_Encrypt function will write the encrypted data to those parameters, but you need to allocate and pass them yourself.
You need to allocate data for chif, and then pass the size that you allocated for chif as the final param t. This is the root cause of the error that you're seeing.
Nit: Your variable names are confusing, and you seem to have mixed up tt and t somewhere, since you assign to tt twice.
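A sketch of the corrected pattern, assuming the CK_* aliases map to UInt32 as in the question; PKCS#11 lets you pass a null output buffer first to query the required length:
[DllImport("cryptoki.dll", SetLastError = true)]
private static extern UInt32 C_Encrypt(
    UInt32 hSession,                 // CK_SESSION_HANDLE
    byte[] pData, UInt32 ulDataLen,
    byte[] pEncryptedData,           // caller-allocated; pass null to query the required size
    ref UInt32 pulEncryptedDataLen); // in: buffer size, out: bytes written

byte[] data = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x08, 0x09 };
UInt32 encLen = 0;
rv = C_Encrypt(h, data, (UInt32)data.Length, null, ref encLen);  // first call: query the size
byte[] enc = new byte[encLen];
rv = C_Encrypt(h, data, (UInt32)data.Length, enc, ref encLen);   // second call: encrypt into enc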
I resolved the problem by myself:
[DllImport("D:/Program Files/Eracom/ProtectToolkit C SDK/bin/sw/cryptoki.dll", SetLastError = true)]
private static extern UInt32 C_Encrypt(CK_SESSION_HANDLE hSession, CK_BYTE[] pData, CK_ULONG ulDataLen, CK_BYTE[] pEncryptedData, ref CK_ULONG pulEncryptedData);
Enjoy.

PInvoke and EntryPointNotFoundException

I can't understand what is wrong with the P/Invoke below, which results in an EntryPointNotFoundException:
A function in C with a structure declaration:
extern "C"__declspec (dllimport) __stdcall
LONG NET_DVR_Login_V30 (char *sDVRIP, WORD wDVRPort, char *sUserName,
char *sPassword, LPNET_DVR_DEVICEINFO_V30 lpDeviceInfo);
typedef struct
{
    BYTE sSerialNumber[48];
    BYTE byAlarmInPortNum;
    BYTE byAlarmOutPortNum;
    BYTE byDiskNum;
    BYTE byDVRType;
    BYTE byChanNum;
    BYTE byStartChan;
    BYTE byAudioChanNum;
    BYTE byIPChanNum;
    BYTE byZeroChanNum;
    BYTE byMainProto;
    BYTE bySubProto;
    BYTE bySupport;
    BYTE byRes1[20];
} NET_DVR_DEVICEINFO_V30, *LPNET_DVR_DEVICEINFO_V30;
The import in C#, the structure declaration and the pinvoke:
[DllImport("SDK.dll", SetLastError = true,
CallingConvention = CallingConvention.StdCall)]
public extern static int NET_DVR_Login_V30(
[MarshalAs(UnmanagedType.LPStr)] string sDVRIP,
ushort wDVRPort,
[MarshalAs(UnmanagedType.LPStr)] string sUserName,
[MarshalAs(UnmanagedType.LPStr)] string sPassword,
ref NET_DVR_DEVICEINFO_V30 lpDeviceInfo);
[StructLayout(LayoutKind.Sequential,
CharSet = CharSet.Ansi)]
public struct NET_DVR_DEVICEINFO_V30
{
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 48)]
public string sSerialNumber;
public byte byAlarmOutPortNum;
public byte byDiskNum;
public byte byDVRType;
public byte byChanNum;
public byte byStartChan;
public byte byAudioChanNum;
public byte byIPChanNum;
public byte byZeroChanNum;
public byte byMainProto;
public byte bySubProto;
public byte bySupport;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 20)]
public string byRes1;
}
NET_DVR_DEVICEINFO_V30 deviceInfo = new NET_DVR_DEVICEINFO_V30();
int result = Functions.NET_DVR_Login_V30(ip, port, user,
password, ref deviceInfo);
I inspected the function name via dumpbin and it is not mangled. So I wonder why an EntryPointNotFoundException occurs; if anything were wrong with the parameters, for example, a PInvokeStackImbalance error would occur instead.
Any ideas what could be wrong with this P/Invoke?
There is a tool called Dependency Walker (depends.exe) that will help debug this issue by displaying the import/export table of your SDK.dll - I'd take a look at that. One other thing that might be happening (this seems suspect to me) is that, since you're using char*, .NET is adding an "A" to the end of your function name. That could be balderdash, though.
Clearly there is a name mismatch. You therefore need to make sure that both sides of the interface use the same name:
When exporting the function from the DLL as stdcall it will be decorated. You can avoid this decoration by using a .def file.
When importing using P/Invoke you need to suppress the addition of a W or A suffix. Do so by setting the ExactSpelling field of the DllImportAttribute to true, as in the sketch below.
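For instance, keeping the question's declaration with only the attribute changed:
[DllImport("SDK.dll", SetLastError = true, ExactSpelling = true,
    CallingConvention = CallingConvention.StdCall)]
public extern static int NET_DVR_Login_V30(
    [MarshalAs(UnmanagedType.LPStr)] string sDVRIP,
    ushort wDVRPort,
    [MarshalAs(UnmanagedType.LPStr)] string sUserName,
    [MarshalAs(UnmanagedType.LPStr)] string sPassword,
    ref NET_DVR_DEVICEINFO_V30 lpDeviceInfo);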
