OK, I have this crazy idea: since PHP does not play nice with G-WAN, maybe the solution is to use Phalanger to compile PHP code to a C#/Mono assembly and then use it from G-WAN?
Does anyone have experience with this combination and could help?
Or maybe I'm wrong and G-WAN can run PHP?
Has anyone tried PH7?
PH7 is a PHP engine which allows the host application to compile and execute PHP scripts in-process.
As an embedded interpreter, it allows multiple interpreter states to coexist in the same program, without any interference between them.
PH7 is thread-safe, but only if it is compiled with the PH7_ENABLE_THREADS compile-time directive defined.
Well, I did contact the people behind Phalanger (and a few other solutions) about adding support for PHP, and their reply (at the time) was that Phalanger was no longer developed.
Now that it has been re-implemented as a CLR language, this might give PHP a second life. While I have used the G-WAN 3.9 beta, I have not yet tried the various languages supported by the Mono runtime.
Regarding the genuine PHP library, I wrote the code below to make it run:
// ----------------------------------------------------------------------------
// php.c: G-WAN using PHP scripts
//
// To build PHP5:
//
// CFLAGS="-O3" ./configure --enable-embed --enable-maintainer-zts --with-tsrm-pthreads --without-pear
// make clean
// make
// sudo make install
/* Installing PHP SAPI module: embed
Installing PHP CLI binary: /usr/local/bin/
Installing PHP CLI man page: /usr/local/php/man/man1/
Installing PHP CGI binary: /usr/local/bin/
Installing build environment: /usr/local/lib/php/build/
Installing header files: /usr/local/include/php/
Installing helper programs: /usr/local/bin/
program: phpize
program: php-config
Installing man pages: /usr/local/php/man/man1/
page: phpize.1
page: php-config.1
Installing PEAR environment: /usr/local/lib/php/
[PEAR] Archive_Tar - already installed: 1.3.7
[PEAR] Console_Getopt - already installed: 1.3.0
[PEAR] Structures_Graph- already installed: 1.0.4
[PEAR] XML_Util - already installed: 1.2.1
[PEAR] PEAR - already installed: 1.9.4
Wrote PEAR system config file at: /usr/local/etc/pear.conf
You may want to add: /usr/local/lib/php to your php.ini include_path
/home/pierre/Downloads/PHP/php5.4-20/build/shtool install -c ext/phar/phar.phar /usr/local/bin
ln -s -f /usr/local/bin/phar.phar /usr/local/bin/phar
Installing PDO headers: /usr/local/include/php/ext/pdo/ */
/*
enabling the 'thread safety' --enable-maintainer-zts option results in:
error: 'tsrm_ls' undeclared (first use in this function)
*/
/*
tsrm_ls
TSRM local storage - This is the actual variable name being passed around
inside the TSRMLS_* macros when ZTS is enabled. It acts as a pointer to
the start of that thread's independent data storage block.
TSRM
Thread Safe Resource Manager - This is an oft overlooked, and seldom if
ever discussed layer hiding in the /TSRM directory of the PHP source code.
By default, the TSRM layer is only enabled when compiling a SAPI which
requires it (e.g. apache2-worker). All Win32 builds have this layer
enabled regardless of SAPI choice.
ZTS
Zend Thread Safety - Often used synonymously with the term TSRM.
Specifically, ZTS is the term used by ./configure
( --enable-experimental-zts for PHP4, --enable-maintainer-zts for PHP5),
and the name of the #define'd preprocessor token used inside the engine
to determine if the TSRM layer is being used.
TSRMLS_??
A quartet of macros designed to make the differences between ZTS and
non-ZTS mode as painless as possible. When ZTS is not enabled, all
four of these macros evaluate to nothing. When ZTS is enabled however,
they expand out to the following definitions:
TSRMLS_C tsrm_ls
TSRMLS_D void ***tsrm_ls
TSRMLS_CC , tsrm_ls
TSRMLS_DC , void ***tsrm_ls
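Illustration (hypothetical helper, not taken from PHP itself): with these
macros, the same declaration and call compile with and without ZTS:
    static void set_my_var(char *name TSRMLS_DC);   under ZTS, ", void ***tsrm_ls" is appended
    set_my_var("foo" TSRMLS_CC);                    under ZTS, ", tsrm_ls" is appended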
PHP relies on global variables for everything from resource type identifiers,
to function callback pointers, to request-specific information such as
the symbol tables used to store userspace variables. Attempting to
pass these values around on the parameter stack would be more than
unruly; it would be impossible for an application like PHP, where it is
often necessary to register callbacks with external libraries which
don't support context data.
So common information, like the execution stack, the function and
class tables, and extension registries all sit up in the global
scope where they can be picked up and used at any point in the
application.
For single-threaded SAPIs like CLI, Apache1, or even Apache2-prefork,
this is perfectly fine. Request specific structures are initialized
during the RINIT/Activation phase, and reset back to their original
values during the RSHUTDOWN/Deactivation phase in preparation for
the next request. A given webserver like Apache1 can serve up multiple
pages at once because it spawns multiple processes each in their own
process space with their own independent copies of global data.
The trouble starts with threaded webservers like Apache2-worker or IIS,
where two or more threads try to run a request at the same time.
Each thread wants to use the global scope to store its request-specific
information, and tries to do so by writing to the same
storage space. At the least, this would result in userspace variables
declared in one script showing up in another. In practice, it leads to
quick and disastrous segfaults and completely unpredictable behavior as
memory is double freed or written with conflicting information by separate
threads.
*/
#pragma include "/usr/local/include/php"
#pragma include "/usr/local/include/php/main"
#pragma include "/usr/local/include/php/TSRM"
#pragma include "/usr/local/include/php/Zend"
#pragma link "/usr/local/lib/libphp5.so"
#include "gwan.h" // G-WAN exported functions
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <php/sapi/embed/php_embed.h>
#include <php/Zend/zend_stream.h>
static pid_t gettid(void) { return syscall(__NR_gettid); }
// PHP
static int ub_write(const char *str, unsigned int str_len TSRMLS_DC)
{
puts(str); // this is the stdout output of a PHP script
return 0;
}
static void log_message(char * message)
{
printf("log_message: %s\n", message);
}
static void sapi_error(int type, const char * fmt, ...) { }
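// store a string value under 'varname' in PHP's global symbol table ($GLOBALS)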
static void php_set_var(char *varname, char *varval)
{
zval *var;
MAKE_STD_ZVAL(var);
ZVAL_STRING(var, varval, 1);
zend_hash_update(&EG(symbol_table), varname, strlen(varname) + 1,
&var, sizeof(zval*), NULL);
}
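// fetch the string value of a PHP global variable, or "" if it is not found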
static char *php_get_var(char *varname)
{
zval **data = NULL;
char *ret = NULL;
if(zend_hash_find(&EG(symbol_table), varname, strlen(varname) + 1,
(void**)&data) == FAILURE)
{
printf("Name not found in $GLOBALS\n");
return "";
}
if(!data)
{
printf("Value is NULL (not possible for symbol_table?)\n");
return "";
}
ret = Z_STRVAL_PP(data);
return ret;
}
static int php_init(void)
{
static int once = 0;
if(once) return 0;
once = 1;
static char *myargv[2] = {"toto.php", NULL};
php_embed_module.log_message = log_message;
php_embed_module.sapi_error = sapi_error;
php_embed_module.ub_write = ub_write;
if(php_embed_init(1, myargv PTSRMLS_CC) == FAILURE)
{
printf("php_embed_init error\n");
return 1;
}
return 0;
}
static void php_shutdown()
{
php_embed_shutdown(TSRMLS_C);
}
static int php_exec(char *str)
{
zval ret_value;
int exit_status;
zend_first_try
{
PG(during_request_startup) = 0;
// run the specified PHP script file
// sprintf(str, "include(\"%s\");", scriptname);
zend_eval_string(str, &ret_value, "toto.php" TSRMLS_CC);
exit_status = Z_LVAL(ret_value);
} zend_catch
{
exit_status = EG(exit_status);
}
zend_end_try();
return exit_status;
}
__thread char reply_num[8] = {0};
__thread pid_t tid = 0;
int main(int argc, char *argv[])
{
if(!tid)
{
tid = gettid();
s_snprintf(reply_num, 8, "%u", tid);
php_init();
}
xbuf_t *reply = get_reply(argv);
//php_set_var("argv", argv[0]);
php_set_var(reply_num, "");
char fmt[] = //"print(\"from php [$test]\n\");\n"
"$reply%s = \"Hello World (PHP)\";\n";
char php[sizeof(fmt) + sizeof(reply_num) + 2];
s_snprintf(php, sizeof(php), fmt, reply_num);
php_exec(php);
xbuf_cat(reply, php_get_var(reply_num));
return 200;
}
If anybody can make this code work with more than one worker thread without crashing the PHP runtime, then PHP will be added to G-WAN.
Here is what G-WAN produces with one single worker thread:
-----------------------------------------------------
weighttp -n 100000 -c 100 -t 1 -k "http://127.0.0.1:8080/?php.c"
finished in 0 sec, 592 millisec, **168744 req/s**, 48283 kbyte/s
requests: 100000 total/started/done/succeeded, 0 failed/errored
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 29299985 bytes total, 27599985 bytes http,
1700000 bytes data
-----------------------------------------------------
It would be great to resolve this PHP threading issue. Thanks to anyone who can help!
Related
Recently I have been trying to get some Point Cloud Library functionality going in my .NET Framework application, and considering that there is no completely functional PCL wrapper for C#, I made my own for a few functions as a test. Something like this:
[DllImport(DllFilePath, CallingConvention = CallingConvention.Cdecl)]
public extern static IntPtr StatisticalOutlierFilter(IntPtr data, int length, int meanK = 50, float mulThresh = 1.0f);
Which calls a function from a C++ library, such as this:
EXPORT VectorXYZ* StatisticalOutlierFilter(VectorXYZ* data, int length, int meanK, float mulThresh) {
auto processedCloud = process.StatisticalOutlierFilter(data, length, meanK, mulThresh);
auto processedVector = convert.ToVectorXYZ(processedCloud);
return processedVector;
}
Where EXPORT is defined as follows for gcc:
#define EXPORT extern "C" __attribute__ ((visibility ("default")))
And the relevant processing function from PCL is implemented as follows in a class (note that the return value is a boost shared pointer):
PointCloud<PointXYZ>::Ptr Processors::StatisticalOutlierFilter(VectorXYZ* data, int length, int meanK, float mulThresh) {
auto cloud = PrepareCloud(data, length);
PointCloud<PointXYZ>::Ptr cloud_filtered(new PointCloud<PointXYZ>);
StatisticalOutlierRemoval<PointXYZ> sor;
sor.setInputCloud(cloud);
sor.setMeanK(meanK);
sor.setStddevMulThresh(mulThresh);
sor.filter(*cloud_filtered);
return cloud_filtered;
}
This procedure works well with a DLL built with MSVC, running the whole thing on Windows. However, the final target is gcc/Linux/Mono, where I get several errors of the following type (this is from the mono debug output):
'libavpcl_dll.so': '/usr/lib/libavpcl_dll.so: undefined symbol: _ZN3pcl7PCLBaseINS_8PointXYZEE13setInputCloudERKN5boost10shared_ptrIKNS_10PointCloudIS1_EEEE'.
I have investigated quite a bit so far and have set my CMakeLists.txt to set(CMAKE_CXX_VISIBILITY_PRESET hidden), so, I imagine, only the functions I defined as EXPORT should be visible and imported. However, that is not the case, and I get the aforementioned errors. PCL was installed on Windows via vcpkg and on Xubuntu via apt. I am somewhat stumped as to the source of the error, considering the code runs well on Windows and builds without issue on Linux. Thanks.
I've been running into the same issue as you. I solved it by adding each referenced library to the CMakeLists.txt file (I was missing the referenced libraries, which gave me similar missing-symbol issues).
I'm at the 'I don't know why this worked' stage, but I can give you a step-by-step implementation (I'm also trying to use DllImport from .NET on Linux).
Started with this:
https://medium.com/@xaviergeerinck/how-to-bind-c-code-with-dotnet-core-157a121c0aa6
Then I added my in-scope files, thanks to the main comment here: How to create a shared library with cmake?:
add_library(mylib SHARED
sources/animation.cpp
sources/buffers.cpp
[...]
)
Run cmake .
Run make -j$(grep -c ^processor /proc/cpuinfo)
Copy the path to the .so file
DllImport the path from above in my C# app
I want to integrate Python with C#. I found two approaches: interprocess communication and IronPython.
Interprocess communication requires Python.exe to be installed on all client machines, so it is not a viable solution.
We started using IronPython, but it only supports Python 2.7 for now. We are using version 3.7.
Here is the code we tried using IronPython:
private void BtnJsonPy_Click(object sender, EventArgs e)
{
// 1. Create Engine
var engine = Python.CreateEngine();
//2. Provide script and arguments
var script = @"C:\Users\user\source\path\repos\SamplePy\SamplePy2\SamplePy2.py"; // provide full path
var source = engine.CreateScriptSourceFromFile(script);
// dummy parameters to send Python script
int x = 3;
int y = 4;
var argv = new List<string>();
argv.Add("");
argv.Add(x.ToString());
argv.Add(y.ToString());
engine.GetSysModule().SetVariable("argv", argv);
//3. redirect output
var eIO = engine.Runtime.IO;
var errors = new MemoryStream();
eIO.SetErrorOutput(errors, Encoding.Default);
var results = new MemoryStream();
eIO.SetOutput(results, Encoding.Default);
//4. Execute script
var scope = engine.CreateScope();
var lib = new[]
{
"C:\\Users\\user\\source\\repos\\SamplePy\\CallingWinForms\\Lib",
"C:\\Users\\user\\source\\repos\\SamplePy\\packages\\IronPython.2.7.9\\lib",
"C:\\Users\\user\\source\\repos\\SamplePy\\packages\\IronPython.2.7.9",
"C:\\Users\\user\\source\\repos\\SamplePy\\packages\\IronPython.StdLib.2.7.9"
//"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python37 - 32\\Lib",
//"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python37-32\\python.exe",
//"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python37 - 32",
//"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python37-32\\DLLs"
};
engine.SetSearchPaths(lib);
engine.ExecuteFile(script, scope);
//source.Execute(scope);
//5. Display output
string str(byte[] x1) => Encoding.Default.GetString(x1);
Console.WriteLine("Errrors");
Console.WriteLine(str(errors.ToArray()));
Console.WriteLine();
Console.WriteLine("Results");
Console.WriteLine(str(results.ToArray()));
lblAns.Text = str(results.ToArray());
}
The problem is that for heavy machine learning programming we need to add modules, and these modules depend on other modules. This bloats point 4, the "Execute script" part of the code, as the path of every such module has to be added to var lib = new[], and some modules are not supported by IronPython at all (e.g. modules for OCR operations).
Due to these limitations I found pythonnet, which also helps integrate .NET applications with Python. But I am new to it, so I would like some ideas on implementing the same thing, any available code samples, and to know whether it is feasible or recommended to use with Python 3.7.
I found that setting up pythonnet is cumbersome initially, so I would like help or steps on how to set it up. I would also like to know whether IronPython will support Python 3.x in the future.
I am not familiar with IronPython, but I use pythonnet quite a lot for the same purpose - integrating Python with C# - so I can elaborate on that.
The advantage of using pythonnet for your purposes is having all the CPython packages available for you to use (numpy, scipy, pandas, Theano, Keras, scikit-learn etc), but avoiding the overhead of calling python.exe as separate process (pythonnet works by loading pythonXY.dll into your process).
Note that pythonnet also requires a stand-alone Python to be available, but you can use the Embeddable Python package, which is very lightweight and can be distributed with your application.
pythonnet supports Python 3.7, but the published NuGet packages are only for Python 3.5. You have several choices to obtain pythonnet for Python 3.7:
Download pythonnet wheel package from PyPi and extract Python.Runtime.dll from it
Download NuGet package from pythonnet appveyor build artifacts, as advised on pythonnet installation wiki
Build from sources
Important note: pythonnet version has to match your Python version and bitness. For example, if you are using Python 3.7 32-bit, download pythonnet-2.4.0-cp37-cp37m-win32.whl. If your Python is 64-bit, download pythonnet-2.4.0-cp37-cp37m-win_amd64.whl. Your C# project platform target also has to match (x86 for 32-bit or x64 for 64-bit).
Code sample with similar functionality to what you have posted, using pythonnet (tested with Python 3.7.4 on Windows 7 and pythonnet NuGet from latest build artifacts):
private void Test()
{
// Setup all paths before initializing Python engine
string pathToPython = @"C:\Users\user\AppData\Local\Programs\Python\Python37-32";
string path = pathToPython + ";" +
Environment.GetEnvironmentVariable("PATH", EnvironmentVariableTarget.Machine);
Environment.SetEnvironmentVariable("PATH", path, EnvironmentVariableTarget.Process);
Environment.SetEnvironmentVariable("PYTHONHOME", pathToPython, EnvironmentVariableTarget.Process);
var lib = new[]
{
#"C:\Users\user\source\path\repos\SamplePy\SamplePy2",
Path.Combine(pathToPython, "Lib"),
Path.Combine(pathToPython, "DLLs")
};
string paths = string.Join(";", lib);
Environment.SetEnvironmentVariable("PYTHONPATH", paths, EnvironmentVariableTarget.Process);
using (Py.GIL()) //Initialize the Python engine and acquire the interpreter lock
{
try
{
// import your script into the process
dynamic sampleModule = Py.Import("SamplePy");
// It is more maintainable to communicate with the script with
// function parameters and return values, than using argv
// and input/output streams.
int x = 3;
int y = 4;
dynamic results = sampleModule.sample_func(x, y);
Console.WriteLine("Results: " + results);
}
catch (PythonException error)
{
// Communicate errors with exceptions from within python script -
// this works very nice with pythonnet.
Console.WriteLine("Error occured: ", error.Message);
}
}
}
SamplePy.py:
def sample_func(x, y):
return x*y
I have a 32-bit DLL (no source code) that I need to access from a 64-bit C# application. I've read this article and took a look at the corresponding code from here. I've also read this post.
I'm not sure that I'm asking the right question, so please help me.
There are 3 projects: dotnetclient, x86Library and x86x64. The x86x64 has x86LibraryProxy.cpp which loads the x86library.dll and calls the GetTemperature function:
STDMETHODIMP Cx86LibraryProxy::GetTemperature(ULONG sensorId, FLOAT* temperature)
{
*temperature = -1;
typedef float (__cdecl *PGETTEMPERATURE)(int);
PGETTEMPERATURE pFunc;
TCHAR buf[256];
HMODULE hLib = LoadLibrary(L"x86library.dll");
if (hLib != NULL)
{
pFunc = (PGETTEMPERATURE)GetProcAddress(hLib, "GetTemperature");
if (pFunc != NULL)
    *temperature = pFunc(sensorId);
FreeLibrary(hLib);
}
return S_OK;
}
dotnetclient calls that GetTemperature function and prints the result:
static void Main(string[] args)
{
float temperature = 0;
uint sensorId = 2;
var svc = new x86x64Lib.x86LibraryProxy();
temperature = svc.GetTemperature(sensorId);
Console.WriteLine($"temperature of {sensorId} is {temperature}, press any key to exit...");
This all works if I build all projects either as x86 or x64; the temperature result I get is 20. But the whole idea was to use the 32-bit x86x64Lib.dll. That means that dotnetclient should be built as x64, and x86Library and x86x64 as x86, right? If I do this, I get -1 as a result.
Should I build x86Library and x86x64 as x86 and dotnetclient as x64? If so, what could be the problem that makes me get -1?
CLARIFICATION
It seems that the provided example only works when both client and server are built as 32-bit or both as 64-bit, but not when the client is built as 64-bit and the server as 32-bit. Can someone take a look, please?
IMHO, the easiest way to do this is to use COM+ (Component Services), which has been part of Windows for 20 years or so (previous versions were called MTS...). It provides the surrogate infrastructure for you, with tools, UI, and everything you need.
But that means you'll have to use COM, so it's good to know a bit of COM for this.
First create an x86 COM DLL. I've used ATL for that. Created an ATL project, added an ATL simple object to it, added the method to the IDL and implementation.
.idl (note the [out, retval] attributes so the temperature is considered a return value for higher level languages including .NET):
import "oaidl.idl";
import "ocidl.idl";
[
object,
uuid(f9988875-6bf1-4f3f-9ad4-64fa220a5c42),
dual,
nonextensible,
pointer_default(unique)
]
interface IMyObject : IDispatch
{
HRESULT GetTemperature(ULONG sensorId, [out, retval] FLOAT* temperature);
};
[
uuid(2de2557f-9bc2-42ef-8c58-63ba77834d0f),
version(1.0),
]
library x86LibraryLib
{
importlib("stdole2.tlb");
[
uuid(b20dcea2-9b8f-426d-8d96-760276fbaca9)
]
coclass MyObject
{
[default] interface IMyObject;
};
};
import "shobjidl.idl";
Method implementation for testing purposes:
STDMETHODIMP GetTemperature(ULONG sensorId, FLOAT* temperature)
{
*temperature = sizeof(void*); // should be 4 in x86 :-)
return S_OK;
}
Now, you must register this component in the 32-bit registry (in fact, if you're running Visual Studio w/o admin rights, it will complain at compile time that the component cannot be registered, that's expected), so on a 64-bit OS, you must run something like this (note SysWow64) with admin rights:
c:\Windows\SysWOW64\regsvr32 x86Library.dll
Once you've done that, run "Component Services", browse "Computers/My Computer/COM+ Applications", right click and create a New Application. Choose a name and a "Server application". It means your component will be hosted in COM+ surrogate process.
Once you've done that, browse "Components", right click and create a New Component. Make sure you select "32-bit registry". You should see your object's ProgId. In my case, when I created my ATL project I added "MyObject" as a ProgId, but otherwise it could be named something like "x86Library.MyObject" or "x86LibraryLib.MyObject"... If it's not there, then you made some mistake earlier.
That's it. Now, this .NET program will always be able to run, compiled as AnyCpu or x86 or x64:
class Program
{
static void Main(string[] args)
{
var type = Type.GetTypeFromProgID("MyObject"); // the same progid
dynamic o = Activator.CreateInstance(type);
Console.WriteLine(o.GetTemperature(1234)); // always displays 4
}
}
You can use Component Services UI to configure your surrogate (activation, shutdown, etc.). It also has an API so you can create COM+ apps programmatically.
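As a rough illustration of that API (just a sketch; the application name and DLL path below are placeholders, and it relies on the "COMAdmin.COMAdminCatalog" ProgId that ships with Windows), the same steps can be scripted from C#:
// Sketch only: create a COM+ server application and install the 32-bit DLL
// into it through the COMAdmin catalog. "x86LibraryHost" and the path are placeholders.
var catType = Type.GetTypeFromProgID("COMAdmin.COMAdminCatalog");
dynamic catalog = Activator.CreateInstance(catType);
dynamic apps = catalog.GetCollection("Applications");
apps.Populate();
dynamic app = apps.Add();
app.Value["Name"] = "x86LibraryHost";   // placeholder application name
app.Value["Activation"] = 1;            // 1 = server application (surrogate process)
apps.SaveChanges();
catalog.InstallComponent("x86LibraryHost", @"C:\path\to\x86Library.dll", "", "");
Run it elevated; writing to the COM+ catalog needs admin rights, just like the regsvr32 step above.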
You are not going to be able to directly call 32-bit code from 64-bit code (or the other way around); it simply is not going to happen.
There are alternatives, such as creating a 32-bit COM host program that then forwards calls to the DLL. Coupled with that you use DCOM standard marshalling so your 64-bit process can connect to the 32-bit host.
But if recompiling the 32-bit DLL is at all an option that is almost certainly your best option.
I would like to create a library out of Go code and use it inside a C# WinForms project.
For the error, scroll to the bottom.
Setup
GO 1.10.2
tdm-gcc-5.1.0-3
Windows 10 / x64
Go-project called exprt
What I've tried
I've created a minimal Go tool that creates a file in the working directory:
package main
import (
"os"
"C"
)
func main() {
// nothing here
}
//export Test
func Test() {
os.OpenFile("created_file.txt", os.O_RDONLY|os.O_CREATE, 0666);
}
The next steps were taken from Building a dll with Go 1.7.
I've then compiled it to a C archive with the following command: go build -buildmode=c-archive, which gives me exprt.a and exprt.h.
After that I've created a file called goDLL.c (1:1 as in the link above) and inserted this code:
#include <stdio.h>
#include "exprt.h"
// force gcc to link in go runtime (may be a better solution than this)
void dummy() {
Test();
}
int main() {
}
Lastly I've run this command to create my final dll:
gcc -shared -pthread -o goDLL.dll goDLL.c exprt.a -lWinMM -lntdll -lWS2_32
which gave me "goDLL.dll".
My problem
In C# I've created a WinForms project with one button that calls this declared function (I copied the DLL to the debug folder):
[DllImport("goDLL.dll")]
private static extern void Test();
Error
System.BadImageFormatException: "An attempt was made to load a program with an incorrect format. (HRESULT: 0x8007000B)"
Sorry for the big block of text, but this was the most minimal test I could think of.
I appreciate any help here.
Well, in the answer given here https://social.msdn.microsoft.com/Forums/vstudio/en-US/ee3df896-1d33-451b-a8a3-716294b44b2b/socket-programming-on-64bit-machine?forum=vclanguage the following is written:
The implementation is in a file called ws2_32.dll and there are 32-bit and 64-bit versions of the DLL in 64-bit Windows.
So the build as described in my question is correct.
Solution
The C# project has to be explicitly set to x64. AnyCPU won't work and will throw the error shown in the question above.
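If you want to double-check at runtime which bitness your process actually runs with before the DllImport call, a quick check with plain .NET APIs (nothing Go-specific) is:
// A 32-bit process cannot load the 64-bit goDLL.dll produced by the 64-bit gcc
// and throws BadImageFormatException, so this should print True / 8.
Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}, IntPtr.Size: {IntPtr.Size}");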
Everything is working now. I'm leaving the question and answer up as a full explanation of how to get Go code running from C#.
I have a problem with the Boost.Interprocess (v1.66) library, which I use in my C/C++ library, which in turn I use from C# through marshalling (calling native C code from C#).
The problem appears when I use a Boost.Interprocess named_semaphore (in open_or_create mode) for synchronization between processes.
If I use my C/C++ lib from other native C/C++ code, everything works fine (under the newest Windows 10, Linux (4+ kernel) and even Mac OS X (>=10.11)).
The problem occurs under Windows: for C# I have a C wrapper around the C++ code. If I use marshalling from a simple self-built EXE, everything works! But if I use the same C# code (with the same C lib) in a third-party application as a DLL plugin, I get a segfault from get_bootstamp in named_semaphore.
So I have third-party C# software for which I create plugins (C# DLLs). In such a plugin I use my C library through marshalling. Marshalling works fine in a test C# project (which just calls C functions from the C lib), but the same code segfaults in the third-party software.
C Library workflow:
Init all necessary C structures
Start desired TCP server (native C/C++ app) using Boost.Process
Wait for server (through named_semaphore) <-- segfault
Connect to the server...
The C# code has the same workflow (a rough sketch of the P/Invoke layer is below).
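To give an idea of what "through marshalling" means here, the P/Invoke layer looks roughly like this (the function names below are made up for this post; the real exports differ):
// Illustrative only - invented export names, real signatures differ.
[DllImport("mynativelib", CallingConvention = CallingConvention.Cdecl)]
static extern int native_init();              // 1. init all necessary C structures
[DllImport("mynativelib", CallingConvention = CallingConvention.Cdecl)]
static extern int native_start_server();      // 2. start the TCP server via Boost.Process
[DllImport("mynativelib", CallingConvention = CallingConvention.Cdecl)]
static extern int native_wait_for_server();   // 3. wait on the named_semaphore  <-- segfault
[DllImport("mynativelib", CallingConvention = CallingConvention.Cdecl)]
static extern int native_connect();           // 4. connect to the server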
Found the problem
The problem occurs in boost::interprocess::ipcdetail::get_bootstamp (which is called by named_semaphore), here:
struct windows_bootstamp
{
windows_bootstamp()
{
//Throw if bootstamp not available
if(!winapi::get_last_bootup_time(stamp)){
error_info err = system_error_code();
throw interprocess_exception(err);
}
}
//Use std::string. Even if this will be constructed in shared memory, all
//modules/dlls are from this process so internal raw pointers to heap are always valid
std::string stamp;
};
inline void get_bootstamp(std::string &s, bool add = false)
{
const windows_bootstamp &bootstamp = windows_intermodule_singleton<windows_bootstamp>::get();
if(add){
s += bootstamp.stamp;
}
else{
s = bootstamp.stamp;
}
}
If I debug to the line
const windows_bootstamp &bootstamp = windows_intermodule_singleton<windows_bootstamp>::get()
bootstamp.stamp is not readable. The size is set to 31, capacity is set to some weird value (like 19452345) and the data is not readable. If I step over to
s += bootstamp.stamp;
the segfault occurs!
Found the reason
I debugged once more, set a breakpoint at the windows_bootstamp constructor entry, and got no hit, so the stamp is never initialized (I guess).
Confirmation
If I change get_bootstamp to
inline void get_bootstamp(std::string &s, bool add = false)
{
const windows_bootstamp &bootstamp = windows_intermodule_singleton<windows_bootstamp>::get();
std::string stamp;
winapi::get_last_bootup_time(stamp);
if(add){
s += stamp;
}
else{
s = stamp;
}
}
After recompiling my lib and EXE, everything works fine (without any problem).
My question is: what am I doing wrong? I read the Boost.Interprocess documentation really thoroughly, but there is no advice/warning about my problem (yes, there is a "COM Initialization" section in the Interprocess docs, but it does not seem helpful).
Or is it just a bug in Boost.Interprocess that I should report to the Boost bug tracker?
Note: if I start the server manually (before I run the C# code), it works without segfaults.