I have one EXE, which is nothing but a Windows Forms app that runs continuously in the background and watches my serial port, and I have a DataReceived event that fires as the serial port receives data.
As soon as I receive data in this event, I pass it to another event handler, which saves it to a database through a Web API method.
But data arrives at my serial port frequently, so I want to save it to the database independently, so that the database insert operation doesn't block the incoming serial port data.
This is my code:
void _serialPort_DataReceived(object sender, SerialDataReceivedEventArgs e) // fires as my serial port receives data
{
    int dataLength = _serialPort.BytesToRead;
    byte[] data = new byte[dataLength];
    int nbrDataRead = _serialPort.Read(data, 0, dataLength);
    if (nbrDataRead == 0)
        return;

    // Send data to whoever is interested
    if (NewSerialDataRecieved != null)
    {
        NewSerialDataRecieved(this, new SerialDataEventArgs(data)); // pass serial port data to the event handler below
    }
}
void _spManager_NewSerialDataRecieved(object sender, SerialDataEventArgs e) // I want this event handler to run independently so that the database save operation doesn't block incoming serial port data
{
    if (this.InvokeRequired)
    {
        // Using this.Invoke causes a deadlock when closing the serial port, and BeginInvoke is good practice anyway.
        this.BeginInvoke(new EventHandler<SerialDataEventArgs>(_spManager_NewSerialDataRecieved), new object[] { sender, e });
        return;
    }

    // data is converted to text
    string str = Encoding.ASCII.GetString(e.Data);
    if (!string.IsNullOrEmpty(str))
    {
        // This is where I save the data through my Web API method.
        RunAsync(str).Wait();
    }
}
static async Task RunAsync(string data)
{
    using (var client = new HttpClient())
    {
        client.BaseAddress = new Uri("http://localhost:33396/");
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var content = new StringContent(data);
        var response = await client.PostAsJsonAsync<StringContent>("api/Service/Post", content); // nothing happens after this line
    }
}
Web API controller:
public class MyController : ApiController
{
    [HttpPost]
    public HttpResponseMessage Post(HttpRequestMessage request)
    {
        var someText = request.Content.ReadAsStringAsync().Result;
        return new HttpResponseMessage() { Content = new StringContent(someText) };
    }
}
But here the problem is:
var response = await client.PostAsJsonAsync<StringContent>("api/Service/Post", content);
Nothing happens after this line; the operation blocks there.
So can anybody guide me with this?
By "independently" we determined in the SO C# chat room that you really mean "asynchronously".
Your solution is the code above, saving this data to a Web API endpoint, so any solution to the problem needs to be in two parts.
PART 1: The Client Part
On the client, all we need to do is make the call asynchronously in order to free up the current thread to carry on receiving data from the serial port. We can do that like so:
// build the API client; you may want to keep this in a higher scope to avoid recreating it on each message
var api = new HttpClient();
api.BaseAddress = new Uri(someConfigVariable);

// asynchronously make the call and handle the result
api.PostAsJsonAsync("api/My", str)
    .ContinueWith(t => HandleResponseAsync(t.Result))
    .Unwrap();
...
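If you prefer async/await over ContinueWith, a roughly equivalent client-side sketch could look like the following. It uses the same PostAsJsonAsync extension (from the Microsoft.AspNet.WebApi.Client package) as the question; the method name PostReadingAsync and the reuse of a single HttpClient are illustrative choices, not part of the original code.

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Reuse one HttpClient for the lifetime of the app instead of creating one per message.
static readonly HttpClient api = new HttpClient { BaseAddress = new Uri("http://localhost:33396/") };

static async Task PostReadingAsync(string str)
{
    // Awaiting here frees the calling thread while the HTTP round trip is in flight.
    HttpResponseMessage response = await api.PostAsJsonAsync("api/My", str);
    response.EnsureSuccessStatusCode();
}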
PART 2: The Server Part
Since you have Web API, I'm also going to assume you are using EF too. The common and "clean" way to do this, with all the extras stripped out (like model validation / error handling), might look something like this:
// in your EF code you will have something like this ...
public async Task<User> SaveUser(User userModel)
{
    var newUser = context.Users.Add(userModel);
    await context.SaveChangesAsync();
    return newUser;
}
// and in your Web API controller something like this ...
[HttpPost]
public async Task<IHttpActionResult> Post(User newUser)
{
    return Ok(await SaveUser(newUser));
}
...
Disclaimer:
The concepts involved here go much deeper, and as I hinted above, much has been left out (like validation, error checking, etc.), but this is the core of getting your serial port data into a database using the technologies I believe you are using.
Key things to read up on for anyone wanting to achieve this kind of thing include: Tasks, event handling, Web API, EF, async operations, streaming.
From what you describe, it seems like you might want a setup like this:
1) Your Windows Forms app listens to the serial port.
2) When new data comes in on the port, the app saves it to some kind of queue (MSMQ, for example).
3) A separate Windows service checks the queue and, as it finds new messages, sends a request to the Web API (see the sketch below).
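A minimal sketch of that hand-off, assuming a local private MSMQ queue named serialdata and plain string messages (both the queue name and the message shape are illustrative, not part of the original answer):

using System.Messaging; // reference System.Messaging.dll

const string QueuePath = @".\Private$\serialdata";

// In the Windows Forms app: push each reading into the queue and return immediately.
static void EnqueueReading(string reading)
{
    if (!MessageQueue.Exists(QueuePath))
        MessageQueue.Create(QueuePath);

    using (var queue = new MessageQueue(QueuePath))
        queue.Send(reading);
}

// In the Windows service: block until a message arrives, then post it to the Web API.
static void DrainQueue()
{
    using (var queue = new MessageQueue(QueuePath))
    {
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
        while (true)
        {
            var message = queue.Receive();      // blocks until a message is available
            var reading = (string)message.Body;
            // send `reading` to the Web API from here
        }
    }
}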
The best solution for this problem is to use ConcurrentQueue.
Just search on Google and you will get plenty of samples.
ConcurrentQueue is thread safe and supports writing and reading from multiple threads.
So the component listening to the serial port can write data to the queue, and you can have two or more tasks running in parallel that listen to this queue and update the DB as soon as they receive data.
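A minimal sketch of that arrangement, assuming a SaveToDatabaseAsync method that wraps the Web API call (the method name is an illustrative placeholder):

using System.Collections.Concurrent;
using System.Threading.Tasks;

static readonly ConcurrentQueue<string> Readings = new ConcurrentQueue<string>();

// Producer: called from the SerialPort DataReceived handler; never blocks on the database.
static void OnSerialData(string reading)
{
    Readings.Enqueue(reading);
}

// Consumer: start two or more of these tasks at application start-up.
static async Task ConsumeAsync()
{
    while (true)
    {
        if (Readings.TryDequeue(out string reading))
            await SaveToDatabaseAsync(reading);   // your Web API / EF call
        else
            await Task.Delay(50);                 // queue empty, back off briefly
    }
}

If you would rather have the consumers block instead of polling, wrapping the queue in a BlockingCollection<string> gives you that behaviour for free.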
Not sure if it's the problem, but you shouldn't block on async code. You are doing RunAsync(str).Wait(); and I believe that's the problem. Have a look at this blog post by Stephen Cleary:
http://blog.stephencleary.com/2012/07/dont-block-on-async-code.html
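In this case the fix is to await the call inside the event handler instead of blocking on it; event handlers are one of the few places where async void is acceptable. A minimal sketch based on the code in the question (the UI-marshalling branch is omitted here because nothing in this path touches the form):

async void _spManager_NewSerialDataRecieved(object sender, SerialDataEventArgs e)
{
    string str = Encoding.ASCII.GetString(e.Data);
    if (!string.IsNullOrEmpty(str))
    {
        // await instead of RunAsync(str).Wait(), so the serial port thread is never blocked
        await RunAsync(str);
    }
}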
Related
My client is attempting to send messages to the receiver. However, I noticed that the receiver sometimes does not receive all the messages sent by the client, thus missing a few messages (I am not sure whether the problem is on the client or the receiver).
Any suggestions on why that might be happening? This is what I am currently doing.
On the receiver side, this is what I am doing.
This is the Event Processor
async Task IEventProcessor.ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
    }
}
This is how the client connects to the event hub
var StrBuilder = new EventHubsConnectionStringBuilder(eventHubConnectionString)
{
EntityPath = eventHubName,
};
this.eventHubClient = EventHubClient.CreateFromConnectionString(StrBuilder.ToString());
How do I direct my messages to specific consumers?
I'm using this sample code from the Event Hubs official doc for sending and receiving.
And I have 2 consumer groups: $Default and newcg. Suppose you have 2 clients: client_1 is using the default consumer group ($Default), and client_2 is using the other consumer group (newcg).
First, after creating the send client, in the SendMessagesToEventHub method we need to add a property with a value; the value should be the consumer group name. Sample code like below:
private static async Task SendMessagesToEventHub(int numMessagesToSend)
{
for (var i = 0; i < numMessagesToSend; i++)
{
try
{
var message = "444 Message";
Console.WriteLine($"Sending message: {message}");
EventData mydata = new EventData(Encoding.UTF8.GetBytes(message));
//here we add a property named "cg"; its value is the consumer group name. By setting this property, we can then read this message via the specified consumer group.
mydata.Properties.Add("cg", "newcg");
await eventHubClient.SendAsync(mydata);
}
catch (Exception exception)
{
Console.WriteLine($"{DateTime.Now} > Exception: {exception.Message}");
}
await Task.Delay(10);
}
Console.WriteLine($"{numMessagesToSend} messages sent.");
}
Then in client_1, after creating the receiver project, which uses the default consumer group ($Default), in the SimpleEventProcessor class -> ProcessEventsAsync method, we can filter out the unnecessary event data. Sample code for the ProcessEventsAsync method:
public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
foreach (var eventData in messages)
{
//filter the data here
if (eventData.Properties["cg"].ToString() == "$Default")
{
var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
Console.WriteLine(context.ConsumerGroupName);
}
}
return context.CheckpointAsync();
}
And in another client, like client_2, which uses another consumer group (say its name is newcg), we can follow the same steps as in client_1, with just a small change in the ProcessEventsAsync method, like below:
public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
foreach (var eventData in messages)
{
//filter the data here, using another consumer group name
if (eventData.Properties["cg"].ToString() == "newcg")
{
//other code
}
}
return context.CheckpointAsync();
}
This happens only when there are 2 or more Event Processor Hosts reading from the same consumer group.
If you have an event hub with 32 partitions and 2 Event Processor Hosts reading from the same consumer group, then each Event Processor Host will read from 16 partitions, and so on.
Similarly, if 4 Event Processor Hosts are reading in parallel from the same consumer group, each will read from 8 partitions.
Check whether you have 2 or more Event Processor Hosts running on the same consumer group.
I have tested your code and slightly modified it (a different overload of the EventProcessorHost constructor, and added CheckpointAsync after consuming the messages), and then did some tests.
Using the default implementation and the default EventProcessorOptions (EventProcessorOptions.DefaultOptions), I can say that I did experience some latency when it comes to consuming messages, but all messages were processed successfully.
So, sometimes it seems like I am not getting the messages from a certain partition, but after a certain period of time all messages arrive.
Here you can find the actual modified code that worked for me. It is a simple console app that prints to the console when something arrives.
string processorHostName = Guid.NewGuid().ToString();
var Options = new EventProcessorOptions()
{
MaxBatchSize = 1, //not required to make it working, just for testing
};
Options.SetExceptionHandler((ex) =>
{
System.Diagnostics.Debug.WriteLine($"Exception : {ex}");
});
var eventHubCS = "event hub connection string";
var storageCS = "storage connection string";
var containerName = "test";
var eventHubname = "test2";
EventProcessorHost eventProcessorHost = new EventProcessorHost(eventHubname, "$Default", eventHubCS, storageCS, containerName);
eventProcessorHost.RegisterEventProcessorAsync<MyEventProcessor>(Options).Wait();
For sending the messages to the event hub and testing I used this message publisher app.
I am a beginner in using SignalR and am checking out some examples.
Is it possible to send a message to the client from the server and wait for a return from it? Or is it possible to guarantee that after the answer the same session will be used?
My question is because, in a given process, within a transaction, I need to ask the user whether he wants to continue with the changes. I have not been able to ask this question beforehand because the validations have to be done in the same session where the changes have been made (but not yet committed).
Reiterating the comment from Jaime Yule: WebSockets are bidirectional communication and do not follow the request/response architecture for messaging. Given the very fluid nature of communication over WebSockets, these points are good to keep in mind for your current (and future) scenarios:
SignalR is great if you're going to use it for fire & forget (display a pop-up to a user and that's it).
It's not designed around request/response like you're asking, and trying to use it as such is an anti-pattern.
Messages may be sent from either end of the connection at any time, and there is no native support for one message to indicate it is related to another. This makes the protocol poorly suited for transactional requirements.
It is possible, but I would not recommend (relying on) it.
And it's not a pretty solution (using a static event and being pretty complex for such a simple thing).
The story goes like this:
Make sure the client and server know the connectionId. They probably know it already, but I could not figure out a way to access it.
Await NotificationService.ConfirmAsync
... which will call confirm on the client
... which will await the user supplied answer
... and send it back to the server using Callback from The hub.
... which will notify the Callback from the NotificationService over a static event
... which will hand off the message back to ConfirmAsync (using a AutoResetEvent)
... which is hopefully still waiting :)
Client and server both have a 10 second timeout set.
The hub:
// Setup as /notification-hub
public class NotificationHub : Hub {
public string ConnectionId() => Context.ConnectionId;
public static event Action<string, string> Response;
public void Callback(string connectionId, string message) {
Response?.Invoke(connectionId, message);
}
}
Service:
// Wire it up using DI
public class NotificationService {
private readonly IHubContext<NotificationHub> _notificationHubContext;
public NotificationService(IHubContext<NotificationHub> notificationHubContext) {
_notificationHubContext = notificationHubContext;
}
public async Task<string> ConfirmAsync(string connectionId, string text, IEnumerable<string> choices) {
await _notificationHubContext.Clients.Client(connectionId)
.SendAsync("confirm", text, choices);
var are = new AutoResetEvent(false);
string response = null;
void Callback(string connId, string message) {
if (connectionId == connId) {
response = message;
are.Set();
}
}
NotificationHub.Response += Callback;
are.WaitOne(TimeSpan.FromSeconds(10));
NotificationHub.Response -= Callback;
return response;
}
}
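Calling it from, say, a controller or another service would then look something like this (the injected _notificationService field and the choice strings are assumptions, not part of the snippet above):

// connectionId is the value posted back from the .connection-id-input field shown below
var answer = await _notificationService.ConfirmAsync(
    connectionId,
    "Do you want to continue with the changes?",
    new[] { "Yes", "No" });

if (answer == "Yes")
{
    // commit the pending changes; a null answer means the 10-second timeout elapsed
}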
Client side js:
var conn = new signalR.HubConnectionBuilder().withUrl("/notification-hub").build();
var connId;
// using Noty v3 (https://ned.im/noty/)
function confirm(text, choices) {
return new Promise(function (resolve, reject) {
var n = new Noty({
text: text,
timeout: 10000,
buttons: choices.map(function (b) {
return Noty.button(b, 'btn', function () {
resolve(b);
n.close();
});
})
});
n.show();
});
}
conn.on('confirm', function(text, choices) {
confirm(text, choices).then(function(choice) {
conn.invoke("Callback", connId, choice);
});
});
conn.start().then(function() {
conn.invoke("ConnectionId").then(function (connectionId) {
connId = connectionId;
// Picked up by a form and posted to the server
document.querySelector(".connection-id-input").value = connectionId;
});
});
For me this is way too complex to put into the project I am working on.
It really looks like something that will come back and bite you later...
Is it possible to send a message to the client from the server and wait for a return from it? Or is it possible to guarantee that after the answer the same session will be used?
None of this is possible. Currently there's no way to wait for the client's response, or even to know whether the client received the message. There's some discussion about implementing this on GitHub. Also, here's the feature request.
Until then, the workaround is to send a "notification" from the server with a fire-and-forget attitude and let the client get the required data via an HTTP request to the server.
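The server half of that workaround is a single one-way SendAsync; the client handles the event and then calls an ordinary HTTP endpoint to fetch or confirm whatever it needs. A minimal sketch (the hub type, the "ChangesPending" event name, and the injected context are assumptions):

// _hubContext is an injected IHubContext<NotificationHub>; "ChangesPending" is just an agreed-upon event name.
public async Task NotifyClientAsync(string connectionId)
{
    // Fire and forget from the server's point of view: no reply is awaited from the client.
    await _hubContext.Clients.Client(connectionId).SendAsync("ChangesPending");
}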
This is now possible with .NET 7 using Client Results.
Today I highlighted this issue on dotnet's GitHub page and got a good response from one of the developers of SignalR.
This requires the server to use ISingleClientProxy.InvokeAsync to make a request to the client and wait for the response.
Quote from the documentation:
In addition to making calls to clients, the server can request a result from a client. This requires the server to use ISingleClientProxy.InvokeAsync and the client to return a result from its .On handler.
From the client (js/ts)
hubConnection.on("GetMessage", async () => {
let promise = new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ data: "message" });
}, 100);
});
return promise;
});
From the server (C#)
//By calling Client(...) on an instance of IHubContext<T>
async Task<object> SomeMethod(IHubContext<MyHub> context)
{
string result = await context.Clients.Client(connectionID).InvokeAsync<string>(
"GetMessage");
return result;
}
//---------------------------//
//Or by calling Client(...) or Caller on the Clients property in a Hub method
public class ChatHub : Hub
{
public async Task<string> WaitForMessage(string connectionId)
{
var message = await Clients.Client(connectionId).InvokeAsync<string>(
"GetMessage");
return message;
}
}
Using the following form with Invoke waits for and returns the response directly (just like a "real" synchronous method call)
var persons = hubProxy.Invoke<IEnumerable<Person>>("GetPersonsSynchronous", SearchCriteria, noteFields).Result;
foreach (Person person in persons)
{
Console.WriteLine($"{person.LastName}, {person.FirstName}");
}
I'm trying to refactor some ultra-complex legacy code that sends data from a handheld device to an app running on a PC, to which the handheld device is connected.
There is a "conversation" that goes on between the two apps that follows a protocol; the server (the app running on the PC) responds based on what the client tells it, and vice versa. Actually, the "conversation" can be seen about two thirds of the way down here.
Anyway, my problem is: how can I let the client wait for the server to respond without interrupting it, or thinking it's not going to respond and failing to continue? This is what I have right now:
public class FileXferLegacy : IFileXfer
{
private SerialPort cereal;
private String lastDataReceived;
private String receivedData;
. . .
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
// This method will be called when there is data waiting in the port's buffer
try
{
receivedData += cereal.ReadLine();
lastDataReceived = receivedData;
ExceptionLoggingService.Instance.WriteLog(String.Format("Received {0} in FileXferLegacy.SendDataContentsAsXML", receivedData));
}
catch (Exception ex)
{
//MessageBox.Show(ex.Message);
}
}
#region IFileFetchSend Members
. . .
public void SendDataContentsAsXML(string destinationPath, string data)
{
byte[] stuff;
ExceptionLoggingService.Instance.WriteLog("Reached FileXferLegacy.SendDataContentsAsXML");
cereal.Open();
stuff = System.Text.UTF8Encoding.UTF8.GetBytes("PING" + "\n");
cereal.Write(stuff, 0, stuff.Length);
if (lastDataReceived.Contains("PING")) // Expecting "PING|ACKNOWLEDGE|"
{
stuff = System.Text.UTF8Encoding.UTF8.GetBytes("LOGIN|foo|000003|LOC_HOST|PPP_PEER|1.4.0.42|bar" + "\n");
// TODO: replace this test data with dynamic data
cereal.Write(stuff, 0, stuff.Length);
}
if (lastDataReceived.Contains("JOIN|LEVEL")) // Expecting something like "JOIN|LEVEL|1 SETTING|ALT_ID|FALSE"
{
stuff = System.Text.UTF8Encoding.UTF8.GetBytes("HHTCOMMAND|GETHHTSUPDATE|");
cereal.Write(stuff, 0, stuff.Length);
}
. . .
String lastResponse = lastDataReceived; // Expecting something like "RESULT|FILECOMPLETE|INV_000003_whatever(not identical to what was sent earlier!).XML"
// Parse out and do something with the filename ("INV_000003_whatever(not identical to what was sent earlier!).XML" above)
}
As you can see, the client/handheld sends a string; it then reads "lastDataReceived" which is assigned in the DataReceived method. But what if there has been a delay, and "lastDataReceived" is null? What do I need to do to force a delay (without going to an extreme that would cause the app to appear slothlike in its slowness)? Or what is the way this should be done, if I'm totally off base?
A typical approach is to use a reader thread that pulls bytes off the port with blocking reads (though it can be done with async notification instead) and, once it detects that an entire message has been delivered, either:
1. puts the message into a blocking queue (with the consumer blocking on calls to dequeue until either a message is added or a timeout is reached), or
2. notifies a listener with an event that contains the message.
Which of those two depends a lot on the consumer of those messages. Your code above would benefit from #1 (see the sketch below), though if the consumer is the UI thread then you should look at #2.
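A minimal sketch of option #1, using a dedicated reader thread and a BlockingCollection. The line-per-message framing mirrors the ReadLine call already in the question; everything else (names, the timeout handling) is an assumption:

using System;
using System.Collections.Concurrent;
using System.IO.Ports;
using System.Threading;

BlockingCollection<string> inbox = new BlockingCollection<string>();

// Reader thread: blocking reads, one complete protocol message (line) at a time.
void StartReader(SerialPort port)
{
    var reader = new Thread(() =>
    {
        while (port.IsOpen)
        {
            string line = port.ReadLine();   // blocks until a full line arrives
            inbox.Add(line);
        }
    }) { IsBackground = true };
    reader.Start();
}

// Consumer: wait up to the given timeout for the expected response instead of reading a field that may still be null.
string WaitForResponse(TimeSpan timeout)
{
    string response;
    return inbox.TryTake(out response, timeout) ? response : null;
}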
The protocol seems to be half-duplex, so rewriting it with synchronous calls to Write/ReadLine seems to be the simplest way to handle it.
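A sketch of that synchronous style, built around the PING step from the question above (the 5-second timeout is arbitrary):

cereal.ReadTimeout = 5000;               // give the device 5 seconds to answer
cereal.WriteLine("PING");
try
{
    string reply = cereal.ReadLine();    // blocks until a full line arrives or the timeout expires
    if (reply.Contains("PING"))          // expecting "PING|ACKNOWLEDGE|"
    {
        cereal.WriteLine("LOGIN|foo|000003|LOC_HOST|PPP_PEER|1.4.0.42|bar");
    }
}
catch (TimeoutException)
{
    // no answer within 5 seconds: decide whether to retry or abort the conversation
}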
I created an MVC 4 app that gets data from some external sensors and then shows data depending on the values received. The external sensors expose their values through an HTTP page (e.g. http:///CheckValue). The MVC app must be continuously checking those values, let's say every 5 seconds.
The basic idea is that this process must run in the background in an infinite loop, with each sensor on a different thread.
The problem is that I don't know where the best place to do this is. As of now, I just create a new Task for each sensor in the Application_Start method in Global.asax.
protected void Application_Start()
{
foreach (var sensor in sensors)
{
Task.Factory.StartNew(() => sensor.readValue(5000));
}
}
This is the code for readValue():
public async Task readValue(int timespan)
{
    try
    {
        using (HttpClient client = new HttpClient())
        {
            while (true)
            {
                try
                {
                    string result = await client.GetStringAsync(url);
                    // result validation logic
                }
                catch (Exception)
                {
                    // exception handling
                }
                await Task.Delay(timespan);
            }
        }
    }
    catch (Exception e)
    {
        Debug.Write(e.Message);
    }
}
I'm new to ASP.NET, so I really don't know whether this should be in the Application_Start method, or whether it maybe shouldn't be in the MVC app at all and should be done in a separate Windows service instead (if so, how do I send the values back to the MVC app?).
I'm having an issue with ZeroMQ, which I believe is because I'm not very familiar with it.
I'm trying to build a very simple service where multiple clients connect to a server and sends a query. The server responds to this query.
When I use REQ-REP socket combination (client using REQ, server binding to a REP socket) I'm able to get close to 60,000 messages per second at server side (when client and server are on the same machine). When distributed across machines, each new instance of client on a different machine linearly increases the messages per second at the server and easily reaches 40,000+ with enough client instances.
Now REP socket is blocking, so I followed ZeroMQ guide and used the rrbroker pattern (http://zguide.zeromq.org/cs:rrbroker):
REQ (client) <----> [server ROUTER -- DEALER --- REP (workers running on different threads)]
However, this completely screws up the performance. I'm getting only around 4000 messages per second at the server when running across machines. Not only that, each new client started on a different machine reduces the throughput of every other client.
I'm pretty sure I'm doing something stupid. I'm wondering if ZeroMQ experts here can point out any obvious mistakes. Thanks!
Edit: Adding code as per advice. I'm using the clrzmq nuget package (https://www.nuget.org/packages/clrzmq-x64/)
Here's the client code. A timer counts how many responses are received every second.
for (int i = 0; i < numTasks; i++) { Task.Factory.StartNew(() => Client(), TaskCreationOptions.LongRunning); }
void Client()
{
using (var ctx = new Context())
{
Socket socket = ctx.Socket(SocketType.REQ);
socket.Connect("tcp://192.168.1.10:1234");
while (true)
{
socket.Send("ping", Encoding.Unicode);
string res = socket.Recv(Encoding.Unicode);
}
}
}
Server - case 1: The server keeps track of how many requests are received per second
using (var zmqContext = new Context())
{
Socket socket = zmqContext.Socket(SocketType.REP);
socket.Bind("tcp://*:1234");
while (true)
{
string q = socket.Recv(Encoding.Unicode);
if (q.CompareTo("ping") == 0) {
socket.Send("pong", Encoding.Unicode);
}
}
}
With this setup, at server side, I can see around 60,000 requests received per second (when client is on the same machine). When on different machines, each new client increases number of requests received at server as expected.
Server Case 2: This is essentially rrbroker from ZMQ guide.
void ReceiveMessages(Context zmqContext, string zmqConnectionString, int numWorkers)
{
List<PollItem> pollItemsList = new List<PollItem>();
routerSocket = zmqContext.Socket(SocketType.ROUTER);
try
{
routerSocket.Bind(zmqConnectionString);
PollItem pollItem = routerSocket.CreatePollItem(IOMultiPlex.POLLIN);
pollItem.PollInHandler += RouterSocket_PollInHandler;
pollItemsList.Add(pollItem);
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("{0}", ze.Message);
return;
}
dealerSocket = zmqContext.Socket(SocketType.DEALER);
try
{
dealerSocket.Bind("inproc://workers");
PollItem pollItem = dealerSocket.CreatePollItem(IOMultiPlex.POLLIN);
pollItem.PollInHandler += DealerSocket_PollInHandler;
pollItemsList.Add(pollItem);
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("{0}", ze.Message);
return;
}
// Start the worker pool; cant connect
// to inproc socket before binding.
workerPool.Start(numWorkers);
while (true)
{
zmqContext.Poll(pollItemsList.ToArray());
}
}
void RouterSocket_PollInHandler(Socket socket, IOMultiPlex revents)
{
RelayMessage(routerSocket, dealerSocket);
}
void DealerSocket_PollInHandler(Socket socket, IOMultiPlex revents)
{
RelayMessage(dealerSocket, routerSocket);
}
void RelayMessage(Socket source, Socket destination)
{
bool hasMore = true;
while (hasMore)
{
byte[] message = source.Recv();
hasMore = source.RcvMore;
destination.Send(message, message.Length, hasMore ? SendRecvOpt.SNDMORE : SendRecvOpt.NONE);
}
}
Where the worker pool's start method is:
public void Start(int numWorkerTasks=8)
{
for (int i = 0; i < numWorkerTasks; i++)
{
QueryWorker worker = new QueryWorker(this.zmqContext);
Task task = Task.Factory.StartNew(() =>
worker.Start(),
TaskCreationOptions.LongRunning);
}
Console.WriteLine("Started {0} with {1} workers.", this.GetType().Name, numWorkerTasks);
}
public class QueryWorker
{
Context zmqContext;
public QueryWorker(Context zmqContext)
{
this.zmqContext = zmqContext;
}
public void Start()
{
Socket socket = this.zmqContext.Socket(SocketType.REP);
try
{
socket.Connect("inproc://workers");
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("Could not create worker, error: {0}", ze.Message);
return;
}
while (true)
{
try
{
string message = socket.Recv(Encoding.Unicode);
if (message.CompareTo("ping") == 0)
{
socket.Send("pong", Encoding.Unicode);
}
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("Could not receive message, error: " + ze.ToString());
}
}
}
}
Could you post some source code or at least a more detailed explanation of your test case? In general the way to build out your design is to make one change at a time, and measure at each change. You can always move stepwise from a known working design to more complex ones.
Most probably the 'ROUTER' is the bottleneck.
Check out these related questions on this:
Client maintenance in ZMQ ROUTER
Load testing ZeroMQ (ZMQ_STREAM) for finding the maximum simultaneous users it can handle
ROUTER (and ZMQ_STREAM, which is just a variant of ROUTER) internally has to maintain the client mapping, hence IMO it can accept a limited number of connections from a particular client. It looks like ROUTER can multiplex multiple clients only as long as each client has only one active connection.
I could be wrong here, but I am not seeing much proof to the contrary (simple working code that scales to multiple clients with multiple connections with ROUTER or STREAM).
There certainly is a very severe restriction on concurrent connections with ZeroMQ, though it looks like no one knows what is causing it.
I have done performance testing on calling a native unmanaged DLL function from C# with various methods:
1. C++/CLI wrapper
2. PInvoke
3. ZeroMQ/clrzmq
The last might be interesting for you.
My finding at the end of my performance test was that using the ZMQ binding clrzmq was not useful and produced a factor-of-100 performance overhead, even after I tried to optimize the PInvoke calls within the source code of the binding. Therefore I used ZMQ without a binding, but with direct PInvoke calls. These calls must be done with the cdecl convention and with the option "SuppressUnmanagedCodeSecurity" to get the most speed.
I had to import just 5 functions, which was fairly easy.
In the end the speed was a bit slower than a plain PInvoke call, but with ZMQ in between (in my case over "inproc").
This may give you a hint to try it without the binding, if speed matters to you.
This is not a direct answer to your question, but it may help you increase performance in general.
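For reference, the shape of such a direct binding might look like the sketch below. The function names are the standard libzmq C API, but treat the exact native DLL name and the set of functions you actually need as something to verify against your own build:

using System;
using System.Runtime.InteropServices;
using System.Security;

[SuppressUnmanagedCodeSecurity]            // skips the stack-walk security demand on every call
internal static class LibZmq
{
    private const string Lib = "libzmq";   // adjust to the actual native DLL name you deploy

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr zmq_ctx_new();

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr zmq_socket(IntPtr context, int type);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern int zmq_connect(IntPtr socket, string endpoint);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern int zmq_send(IntPtr socket, byte[] buffer, UIntPtr length, int flags);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern int zmq_recv(IntPtr socket, byte[] buffer, UIntPtr length, int flags);
}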