SignalR Persistent Connection Memory Leak - C#

I have a product that uses SignalR (the .NET Framework version) for its communications. So that it works everywhere, customers can talk to the client either via a normal Server > Cloud > Client connection or directly via a self-hosted SignalR connection.
Everything works fine, except that the self-hosted connection appears to have a memory leak when sending data directly to the client.
The cloud connection version is this:
var _connection = new Connection(ServerUrl);
_connection.Received += OnReceived;
IClientTransport connectionType = new WebSocketTransport();
_connection.Start(connectionType);
Data is sent from it like this:
private void HandleResult(HeaderPacket headerPacket)
{
    _connection.Send(headerPacket); //Send data to Client via Cloud
}
(The code above has no memory leak.)
The self-hosted version is this:
var url = "http://*:8889";
using (WebApp.Start<Startup>(url))
{}
public class Startup
{
public void Configuration(IAppBuilder app)
{
GlobalHost.Configuration.MaxIncomingWebSocketMessageSize = int.MaxValue;
app.MapSignalR();
app.MapSignalR<ServerEndPoint>("/Server");
}
}
public class ServerEndPoint : PersistentConnection
{
    private void HandleResult(HeaderPacket response)
    {
        var data = JsonConvert.SerializeObject(response);
        Connection.Send(response.ClientConnectionId, data);
    }

    protected override Task OnReceived(IRequest request, string connectionId, string data)
    { ... }
}
It is the
Connection.Send(response.ClientConnectionId, data);
call that makes memory grow with every call, even with only a single connection. The process starts out at a few MB and quickly (after about 100 calls) is using 50 MB.
If I comment out only the send call, there is no memory leak (indicating the leak is not in the rest of the code).
I've tried upgrading to the latest NuGet packages, but the leak persists, so I assume it must be some odd setting I'm missing :-(
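One setting that may be worth checking (an assumption on my part, not a confirmed cause): self-hosted SignalR keeps every sent message in a per-connection ring buffer controlled by DefaultMessageBufferSize (1000 messages by default), so large serialized payloads sent through Connection.Send can accumulate there and look like a leak. A minimal sketch of lowering it in the existing Startup class:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Assumption: large HeaderPacket payloads are being retained in SignalR's
        // per-connection message buffer; a smaller buffer keeps fewer of them alive.
        GlobalHost.Configuration.DefaultMessageBufferSize = 32;

        GlobalHost.Configuration.MaxIncomingWebSocketMessageSize = int.MaxValue;
        app.MapSignalR();
        app.MapSignalR<ServerEndPoint>("/Server");
    }
}

If memory stabilizes at a lower plateau after this change, the growth was buffering rather than a true leak.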


Azure Cosmos DB throwing Socket Exceptions

I am using Azure Cosmos DB with a .NET Core 2.1 application, together with the Gremlin driver. It works fine, but every few days it starts throwing socket exceptions on the server and we have to recycle the IIS pool. We average about 10,000 hits per day.
We are currently using the default gateway mode. Should we switch to direct mode, since it might be a firewall issue?
Here is the implementation:
private DocumentClient GetDocumentClient(CosmosDbConnectionOptions configuration)
{
    _documentClient = new DocumentClient(
        new Uri(configuration.Endpoint),
        configuration.AuthKey,
        new ConnectionPolicy());

    //create database if not exists
    _documentClient.CreateDatabaseIfNotExistsAsync(new Database { Id = configuration.Database });

    return _documentClient;
}
and in startup.cs:
services.AddSingleton(x => GetDocumentClient(cosmosDBConfig));
and here is how we are communicating with Cosmos DB:
private DocumentClient _documentClient;
private DocumentCollection _documentCollection;
private CosmosDbConnectionOptions _cosmosDBConfig;

public DocumentCollectionFactory(DocumentClient documentClient, CosmosDbConnectionOptions cosmosDBConfig)
{
    _documentClient = documentClient;
    _cosmosDBConfig = cosmosDBConfig;
}

public async Task<DocumentCollection> GetProfileCollectionAsync()
{
    if (_documentCollection == null)
    {
        _documentCollection = await _documentClient.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri(_cosmosDBConfig.Database),
            new DocumentCollection { Id = _cosmosDBConfig.Collection },
            new RequestOptions { OfferThroughput = _cosmosDBConfig.Throughput });
        return _documentCollection;
    }
    return _documentCollection;
}
and then:
public async Task CreateProfile(Profile profile)
{
    var graphCollection = await _graphCollection.GetProfileCollectionAsync();
    var createQuery = GetCreateQuery(profile);
    IDocumentQuery<dynamic> query = _documentClient.CreateGremlinQuery<dynamic>(graphCollection, createQuery);
    if (query.HasMoreResults)
    {
        await query.ExecuteNextAsync();
    }
}
I'm assuming that for communication with Cosmos DB you are using HttpClient. The application should share a single instance of HttpClient.
Every time you make a connection after an HttpClient has been disposed, a bunch of connections are left in the TIME_WAIT state. This means the connection was closed on one side (by the OS) but is still "waiting for additional packets".
By default, Windows may hold a connection in this state for 240 seconds, and there is a limit to how quickly the OS can open new sockets. All of this can lead to a System.Net.Sockets.SocketException.
There is a very good article that explains in detail why and how this problem appears, digging into the TCP state diagram.
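A minimal sketch of the "one shared instance" pattern described above (the field name is illustrative):

// Create the client once and reuse it for the lifetime of the application,
// instead of newing up and disposing an HttpClient per request.
private static readonly HttpClient SharedHttpClient = new HttpClient();

Registering DocumentClient as a singleton, as the startup code above already does, serves the same purpose for the Cosmos DB calls themselves.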
UPDATED
Possible solution.
You are using the default ConnectionPolicy object. That object has a property called IdleTcpConnectionTimeout which controls the amount of idle time after which unused connections are closed. By default, idle connections are kept open indefinitely. The value must be greater than or equal to 10 minutes.
So the code could look like:
private DocumentClient GetDocumentClient(CosmosDbConnectionOptions configuration)
{
    _documentClient = new DocumentClient(
        new Uri(configuration.Endpoint),
        configuration.AuthKey,
        new ConnectionPolicy()
        {
            IdleTcpConnectionTimeout = new TimeSpan(0, 0, 10, 0)
        });

    //create database if not exists
    _documentClient.CreateDatabaseIfNotExistsAsync(new Database { Id = configuration.Database });

    return _documentClient;
}
Here is a link to ConnectionPolicy Class documentation
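As an aside, if you do experiment with direct mode (which is also where TCP-level settings such as IdleTcpConnectionTimeout apply), the change is two more ConnectionPolicy properties. This is only a sketch of what that would look like, not a recommendation for your workload:

// Sketch only: the same client switched to direct mode over TCP.
// Whether this helps depends on your firewall rules, since direct mode
// uses a wider port range than the gateway's single HTTPS endpoint.
var policy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Direct,
    ConnectionProtocol = Protocol.Tcp,
    IdleTcpConnectionTimeout = TimeSpan.FromMinutes(10)
};

_documentClient = new DocumentClient(
    new Uri(configuration.Endpoint),
    configuration.AuthKey,
    policy);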

Can I terminate an HTTP transaction on the server WITHOUT sending a response to the client?

I'm writing a public-facing transaction processor. Naturally, we run on https:// and the payload carries all relevant detail, so we'll only process legitimate transactions. However, as a public interface, any number of nefarious actors will no doubt be throwing shade at my server, if for no other reason than to be annoying.
When I detect such a request, is there any way I can terminate processing at my end (not wasting time on the transaction) but NOT send a response to the client? Basically, I'd like to force the nefarious clients into a timeout situation so that, if nothing else, it diminishes their capacity to annoy my server.
Here's the code:
public class Webhook : IHttpModule
{
    /// <summary>
    /// You will need to configure this module in the Web.config file of your
    /// web and register it with IIS before being able to use it. For more information
    /// see the following link: http://go.microsoft.com/?linkid=8101007
    /// </summary>
    private bool m_sslRequired = false;

    #region IHttpModule Members
    <snip...>
    #endregion

    private void OnBeginRequest(object sender, EventArgs e)
    {
        WriteTrace("Begin OnBeginRequest");
        HttpContext ctx = HttpContext.Current;
        try
        {
            string processor = ctx.Request.Params["p"];
            if (processor != null && processor != "")
            {
                PluginProcessor(processor, ctx);
            }
        }
        catch (Exception ex)
        {
            ctx.Response.StatusCode = 500;
            ctx.Response.Write("ERROR");
        }
        ctx.ApplicationInstance.CompleteRequest();
        WriteTrace("End OnBeginRequest");
    }

    private void PluginProcessor(string processor, HttpContext ctx)
    {
        string pluginSpec = AppConfig.GetAppSetting(processor.Trim().ToLower());
        if (pluginSpec != "")
        {
            IWebhookProcessor proc = CreateProcessor(pluginSpec, ctx);
            proc.Process(ctx);
        }
    }

    private IWebhookProcessor CreateProcessor(string Processor, HttpContext ctx)
    {
        string assembly;
        string typeName;
        typeName = Processor.Substring(0, Processor.IndexOf(",")).Trim();
        assembly = Path.Combine(ctx.Request.PhysicalApplicationPath, "bin", Processor.Substring(Processor.IndexOf(",") + 1).Trim());
        var obj = Activator.CreateInstanceFrom(assembly, typeName);
        return (Interfaces.IWebhookProcessor)obj.Unwrap();
    }
}
So if the request doesn't map to a transaction handler, I'd like to 'hang' the client, but not in a way which will tie up resources on the server.
Thanks for your advice!
I think the best thing you can do is use HttpRequest.Abort(), which doesn't leave the client hanging, but it does immediately sever the TCP connection. Even the docs say it is for this kind of scenario:
You might use this method in response to an attack by a malicious HTTP client.
You would use it like this:
ctx.Request.Abort();
In a browser, you see a "connection reset" error.
Another option is to send back an unexpected HTTP status, like 400, or my personal favourite, 418.
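A quick sketch of that option inside OnBeginRequest from the code above (assuming you want headers only and no body):

// Reply with an unexpected status and no body, then stop the pipeline.
ctx.Response.StatusCode = 418;              // or 400
ctx.Response.SuppressContent = true;        // send headers only
ctx.ApplicationInstance.CompleteRequest();  // skip the rest of the pipeline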
Update: If you reaaallly want to make the client wait, you could implement your own HttpModule so that you can register an asynchronous BeginRequest handler and then use Task.Delay().
The HttpModule class would look something like this:
public class AsyncHttpModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication app)
    {
        var wrapper = new EventHandlerTaskAsyncHelper(DoAsyncWork);
        app.AddOnBeginRequestAsync(wrapper.BeginEventHandler, wrapper.EndEventHandler);
    }

    private async Task DoAsyncWork(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        var ctx = app.Context;
        if (shouldDie) //whatever your criteria is
        {
            await Task.Delay(60000); //wait for one minute
            ctx.Request.Abort();     //kill the connection without replying
        }
    }
}
Then add the module in your web.config (replace the namespace with your app's namespace):
<system.webServer>
  <modules>
    <add name="AsyncHttpModule" type="MyNamespace.AsyncHttpModule" />
  </modules>
</system.webServer>
Since this is asynchronous, it is not holding up a thread while it waits. Other requests that come in will use the same thread (I tested this).
However, it does keep the request context in memory, because the request is still in progress. So if they hit you with 1000+ requests, all of those requests are held in memory for 60 seconds, whereas if you call HttpRequest.Abort() right away, they are removed from memory immediately.

Communication between an Azure web application and a Windows Forms app on an Azure VM

This is my first project using Azure, so if I am asking a very basic question, please be patient with me.
I have a web application that runs on Azure. I also have a Windows Forms app hosted on an Azure VM. The requirement is that the web app establishes a connection with the Windows Forms app whenever needed, sends it a notification, receives a response, and then closes the connection. So here the web app acts as the client and the Windows Forms app acts as the server.
I tried using SignalR: I activated an endpoint and a port for the Windows Forms app in the Azure portal and was able to establish the connection, but I never get confirmation of that connection back from the Windows Forms app.
Am I using the proper technique, or is there a better way to do this? I hope someone can suggest a proper solution.
Here is what I tried.
Server-side code in the Windows Forms app:
Installed the Microsoft.AspNet.SignalR package via NuGet
Activated the VM endpoint and port 12345 in the Azure portal
DNS name of VM: abc.xyz.net
Endpoint port number: 12345
public partial class FormsServer : Form
{
    private IDisposable SignalR { get; set; }
    const string ServerURI = "http://abc.xyz.net:12345";

    private void btnStart_Click(object sender, EventArgs e)
    {
        btnStart.Enabled = false;
        Task.Run(() => StartServer());
    }

    private void StartServer()
    {
        try
        {
            SignalR = WebApp.Start<Startup>(ServerURI);
        }
        catch (TargetInvocationException)
        { }
    }
}

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseCors(CorsOptions.AllowAll);
        app.MapSignalR("/CalcHub", new HubConfiguration());
    }
}

public class CalcHub : Hub
{
    public async Task<int> AddNumbers(int no1, int no2)
    {
        MessageBox.Show("Add Numbers");
        return no1 + no2;
    }
}
Client-side code in the web application:
Installed the Microsoft.AspNet.SignalR.Client package via NuGet
public class NotificationAppClient
{
    Microsoft.AspNet.SignalR.Client.HubConnection connectionFr;
    IHubProxy userHubProxy;

    public void InitiateServerConnection()
    {
        connectionFr = new Microsoft.AspNet.SignalR.Client.HubConnection("http://abc.xyz.net:12345/CalcHub", useDefaultUrl: false);
        connectionFr.TransportConnectTimeout = new TimeSpan(0, 0, 60);
        connectionFr.DeadlockErrorTimeout = new TimeSpan(0, 0, 60);

        userHubProxy = connectionFr.CreateHubProxy("CalcHub");
        userHubProxy.On<string>("addMessage", param =>
        {
            Console.WriteLine(param);
        });

        connectionFr.Error += async (error) =>
        {
            await Task.Delay(new Random().Next(0, 5) * 1000);
            await connectionFr.Start();
        };
    }

    public async Task<int> AddNumbers()
    {
        try
        {
            int result = -1;
            connectionFr.Start().Wait(30000);
            if (connectionFr.State == ConnectionState.Connecting)
            {
                connectionFr.Stop();
            }
            else
            {
                int num1 = 2;
                int num2 = 3;
                result = await userHubProxy.Invoke<int>("AddNumbers", num1, num2);
            }
            connectionFr.Stop();
            return result;
        }
        catch (Exception ex)
        { }
        return 0;
    }
}
There is actually no need to connect and disconnect constantly. The persistent connection will work as well.
Thanks for the reply
So the code works, even if it is messy. Usually this is a firewall issue, so I would make absolutely sure the port is open all the way between the two services. Check both the Windows firewall and the Azure Network Security Group to make sure the port is open. I recommend double-checking the "Effective Security Rules": if there are multiple security groups in play, it is easy to open the port in one group but forget the other.
To rule out a DNS issue, you can change const string ServerURI = "http://abc.xyz.net:12345"; to "http://*:12345" and try connecting over the public IP.
Finally, if the catch blocks are actually empty (as opposed to just being shortened for the question), either remove them or add something that lets you see errors. As it is, any errors are just being swallowed with no way to know whether they are happening. I didn't get any while running your code, but it would be good to be sure.
As far as the method of communication goes, if you are going to stick with SignalR I would move opening the connection into InitiateServerConnection() on the client and leave it open as long as the client is active, as sketched below. That is how SignalR is designed to work, as opposed to opening and closing the connection each time. If your end goal is to push information in real time from your Forms app to the web app, rather than the web app pulling data, this is fine; for what you are currently doing, it is not ideal.
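A rough sketch of that change, reusing the field names from the question (connectionFr, userHubProxy) and assuming the caller can await it:

public async Task InitiateServerConnection()
{
    connectionFr = new Microsoft.AspNet.SignalR.Client.HubConnection("http://abc.xyz.net:12345/CalcHub", useDefaultUrl: false);
    userHubProxy = connectionFr.CreateHubProxy("CalcHub");

    // Register handlers before Start(), then keep the connection open for the
    // lifetime of the client; later calls just reuse userHubProxy.Invoke(...).
    userHubProxy.On<string>("addMessage", param => Console.WriteLine(param));

    await connectionFr.Start();
}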
For this sort of use case, I would strongly suggest looking at Web API instead of SignalR. If you are going to add more endpoints, SignalR will get increasingly difficult to work with compared to Web API. You can absolutely use both in parallel if you need to push data to the web client but also want to be able to request information on demand.
The Startup method on the server changes just a bit, as Web API (Microsoft.AspNet.WebApi) is what is being set up instead of SignalR:
class Startup
{
    public void Configuration(IAppBuilder app)
    {
        HttpConfiguration config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
        app.UseCors(CorsOptions.AllowAll);
        app.UseWebApi(config);
    }
}
Instead of a Hub you create a controller.
public class AddController : ApiController
{
    // GET api/add?num1=1&num2=2
    public HttpResponseMessage Get(int num1, int num2)
    {
        var response = new HttpResponseMessage(System.Net.HttpStatusCode.OK);
        response.Content = new StringContent((num1 + num2).ToString());
        return response;
    }
}
The client side is where things get a lot simpler, as you no longer need to manage what is usually a persistent connection. InitiateServerConnection() can go away completely, and AddNumbers() becomes a simple HTTP call:
public static async Task<int> AddNumbers(int num1, int num2)
{
    try
    {
        using (var client = new HttpClient())
        {
            return Int32.Parse(await client.GetStringAsync($"http://<sitename>:12345/api/add?num1={num1}&num2={num2}"));
        }
    }
    catch (Exception e)
    {
        //Do something with the exception
    }
    return 0;
}
If that doesn't end up resolving the issue let me know and we can continue to troubleshoot.

How to perform a database operation independently?

I have one exe, which is nothing but a Windows Forms app that runs continuously in the background and watches my serial port, and I have a data-received event that fires as my serial port receives data.
As soon as I receive data in this event, I pass it to another event handler, which saves it to a database through a Web API method.
But data arrives at my serial port frequently, so I want to save it to the database independently, so that the database insert operation doesn't block my incoming serial port data.
This is my code:
void _serialPort_DataReceived(object sender, SerialDataReceivedEventArgs e) // Fires as my serial port receives data
{
    int dataLength = _serialPort.BytesToRead;
    byte[] data = new byte[dataLength];
    int nbrDataRead = _serialPort.Read(data, 0, dataLength);
    if (nbrDataRead == 0)
        return;

    // Send data to whoever is interested
    if (NewSerialDataRecieved != null)
    {
        NewSerialDataRecieved(this, new SerialDataEventArgs(data)); // pass serial port data to the event handler below
    }
}

void _spManager_NewSerialDataRecieved(object sender, SerialDataEventArgs e) // I want this event handler to run independently so that the database save operation doesn't block incoming serial port data
{
    if (this.InvokeRequired)
    {
        // Using this.Invoke causes deadlock when closing serial port, and BeginInvoke is good practice anyway.
        this.BeginInvoke(new EventHandler<SerialDataEventArgs>(_spManager_NewSerialDataRecieved), new object[] { sender, e });
        return;
    }

    // data is converted to text
    string str = Encoding.ASCII.GetString(e.Data);
    if (!string.IsNullOrEmpty(str))
    {
        // This is where I save data through my Web API method.
        RunAsync(str).Wait();
    }
}

static async Task RunAsync(string data)
{
    using (var client = new HttpClient())
    {
        client.BaseAddress = new Uri("http://localhost:33396/");
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        var content = new StringContent(data);
        var response = await client.PostAsJsonAsync<StringContent>("api/Service/Post", content); // nothing happens after this line.
    }
}
Web API controller:
public class MyController : ApiController
{
    [HttpPost]
    public HttpResponseMessage Post(HttpRequestMessage request)
    {
        var someText = request.Content.ReadAsStringAsync().Result;
        return new HttpResponseMessage() { Content = new StringContent(someText) };
    }
}
But here the problem is:
var response = await client.PostAsJsonAsync<StringContent>("api/Service/Post", content);
Nothing happens after this line; that is, the operation blocks on this line.
So can anybody guide me with this?
By "independently" we determined in the SO C# chat room that you really mean "asynchronously".
Your current solution is the code above, which saves this data to a Web API endpoint, so any solution to the problem needs to be in two parts...
PART 1: The Client Part
On the client, all we need to do is make the call asynchronously in order to free up the current thread to carry on receiving data on the incoming serial port. We can do that like so:
// build the api client, you may want to keep this in a higher scope to avoid recreating on each message
var api = new HttpClient();
api.BaseAddress = new Uri(someConfigVariable);

// asynchronously make the call and handle the result
api.PostAsJsonAsync("api/My", str)
   .ContinueWith(t => HandleResponseAsync(t.Result))
   .Unwrap();
...
PART 2: The Server Part
Since you have Web API, I'm also going to assume you are using EF too. The common and "clean" way to do this, with all the extras stripped out (like model validation / error handling), might look something like this:
// in your EF code you will have something like this ...
public async Task<User> SaveUser(User userModel)
{
    try
    {
        // AddAsync returns an EntityEntry<User> (EF Core), so unwrap .Entity
        var newUser = await context.Users.AddAsync(userModel);
        await context.SaveChangesAsync();
        return newUser.Entity;
    }
    catch (Exception ex) { }
    return null;
}

// and in your WebAPI controller something like this ...
[HttpPost]
public async Task<IHttpActionResult> Post(User newUser)
{
    return Ok(await SaveUser(newUser));
}
...
Disclaimer:
The concepts involved here go much deeper, and as I hinted above, much has been left out (validation, error checking, etc.), but this is the core of getting your serial port data into a database using the technologies I believe you are using.
Key things to read up on for anyone wanting to achieve this kind of thing: Tasks, event handling, Web API, EF, async operations, streaming.
From what you describe, it seems like you might want a setup like this:
1) your Windows Forms app listens to the serial port
2) when new data comes in on the port, the app saves it to some kind of queue (MSMQ, for example)
3) a separate Windows service checks the queue and, as it finds new messages, sends requests to the Web API
The best solution for this problem is to use ConcurrentQueue. Just do a search on Google and you will find plenty of samples.
ConcurrentQueue is thread safe and supports writing and reading from multiple threads.
So the component listening to the serial port can write data to the queue, and you can have two or more tasks running in parallel that listen to this queue and update the database as soon as data arrives; a sketch of this pattern follows.
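A minimal sketch of that producer/consumer idea using BlockingCollection (which is backed by a ConcurrentQueue by default); the class and member names are illustrative, not taken from the question:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class SerialDataPipeline
{
    // Thread safe; supports multiple producers and consumers.
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();

    // Called from the serial port event handler: cheap and non-blocking.
    public void Enqueue(string data) => _queue.Add(data);

    // Start N consumers that drain the queue and save each item (e.g. via the existing RunAsync).
    public void StartConsumers(int count, Func<string, Task> saveAsync)
    {
        for (int i = 0; i < count; i++)
        {
            Task.Run(async () =>
            {
                foreach (var item in _queue.GetConsumingEnumerable())
                {
                    await saveAsync(item);
                }
            });
        }
    }
}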
Not sure if it's the problem, but you shouldn't block on async code. You are doing RunAsync(str).Wait(); and I believe that's the problem. Have a look at this blog post by Stephen Cleary:
http://blog.stephencleary.com/2012/07/dont-block-on-async-code.html
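For illustration, one way to apply that advice to the handler from the question is to make it async and await the call instead of blocking (a sketch; the InvokeRequired marshalling and error handling from the original are omitted):

// async void is acceptable for event handlers; the thread is freed while the
// HTTP call runs instead of blocking on .Wait().
async void _spManager_NewSerialDataRecieved(object sender, SerialDataEventArgs e)
{
    string str = Encoding.ASCII.GetString(e.Data);
    if (!string.IsNullOrEmpty(str))
    {
        await RunAsync(str); // no .Wait(), so no deadlock on the captured context
    }
}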

ZeroMQ performance issue

I'm having an issue with ZeroMQ, which I believe is because I'm not very familiar with it.
I'm trying to build a very simple service where multiple clients connect to a server and send a query, and the server responds to the query.
When I use the REQ-REP socket combination (client using REQ, server binding to a REP socket), I'm able to get close to 60,000 messages per second at the server side (when client and server are on the same machine). When distributed across machines, each new instance of the client on a different machine linearly increases the messages per second at the server, easily reaching 40,000+ with enough client instances.
Now, the REP socket is blocking, so I followed the ZeroMQ guide and used the rrbroker pattern (http://zguide.zeromq.org/cs:rrbroker):
REQ (client) <----> [server ROUTER -- DEALER --- REP (workers running on different threads)]
However, this completely screws up the performance. I'm getting only around 4000 messages per second at the server when running across machines. Not only that, each new client started on a different machine reduces the throughput of every other client.
I'm pretty sure I'm doing something stupid. I'm wondering if ZeroMQ experts here can point out any obvious mistakes. Thanks!
Edit: Adding code as per the advice. I'm using the clrzmq NuGet package (https://www.nuget.org/packages/clrzmq-x64/).
Here's the client code. A timer counts how many responses are received every second.
for (int i = 0; i < numTasks; i++)
{
    Task.Factory.StartNew(() => Client(), TaskCreationOptions.LongRunning);
}

void Client()
{
    using (var ctx = new Context())
    {
        Socket socket = ctx.Socket(SocketType.REQ);
        socket.Connect("tcp://192.168.1.10:1234");
        while (true)
        {
            socket.Send("ping", Encoding.Unicode);
            string res = socket.Recv(Encoding.Unicode);
        }
    }
}
Server - case 1: The server keeps track of how many requests are received per second
using (var zmqContext = new Context())
{
    Socket socket = zmqContext.Socket(SocketType.REP);
    socket.Bind("tcp://*:1234");
    while (true)
    {
        string q = socket.Recv(Encoding.Unicode);
        if (q.CompareTo("ping") == 0)
        {
            socket.Send("pong", Encoding.Unicode);
        }
    }
}
With this setup, at the server side, I can see around 60,000 requests received per second (when the client is on the same machine). When on different machines, each new client increases the number of requests received at the server, as expected.
Server - case 2: This is essentially the rrbroker pattern from the ZMQ guide.
void ReceiveMessages(Context zmqContext, string zmqConnectionString, int numWorkers)
{
    List<PollItem> pollItemsList = new List<PollItem>();

    routerSocket = zmqContext.Socket(SocketType.ROUTER);
    try
    {
        routerSocket.Bind(zmqConnectionString);
        PollItem pollItem = routerSocket.CreatePollItem(IOMultiPlex.POLLIN);
        pollItem.PollInHandler += RouterSocket_PollInHandler;
        pollItemsList.Add(pollItem);
    }
    catch (ZMQ.Exception ze)
    {
        Console.WriteLine("{0}", ze.Message);
        return;
    }

    dealerSocket = zmqContext.Socket(SocketType.DEALER);
    try
    {
        dealerSocket.Bind("inproc://workers");
        PollItem pollItem = dealerSocket.CreatePollItem(IOMultiPlex.POLLIN);
        pollItem.PollInHandler += DealerSocket_PollInHandler;
        pollItemsList.Add(pollItem);
    }
    catch (ZMQ.Exception ze)
    {
        Console.WriteLine("{0}", ze.Message);
        return;
    }

    // Start the worker pool; can't connect
    // to the inproc socket before binding.
    workerPool.Start(numWorkers);

    while (true)
    {
        zmqContext.Poll(pollItemsList.ToArray());
    }
}

void RouterSocket_PollInHandler(Socket socket, IOMultiPlex revents)
{
    RelayMessage(routerSocket, dealerSocket);
}

void DealerSocket_PollInHandler(Socket socket, IOMultiPlex revents)
{
    RelayMessage(dealerSocket, routerSocket);
}

void RelayMessage(Socket source, Socket destination)
{
    bool hasMore = true;
    while (hasMore)
    {
        byte[] message = source.Recv();
        hasMore = source.RcvMore;
        destination.Send(message, message.Length, hasMore ? SendRecvOpt.SNDMORE : SendRecvOpt.NONE);
    }
}
Where the worker pool's start method is:
public void Start(int numWorkerTasks = 8)
{
    for (int i = 0; i < numWorkerTasks; i++)
    {
        QueryWorker worker = new QueryWorker(this.zmqContext);
        Task task = Task.Factory.StartNew(() =>
            worker.Start(),
            TaskCreationOptions.LongRunning);
    }
    Console.WriteLine("Started {0} with {1} workers.", this.GetType().Name, numWorkerTasks);
}
public class QueryWorker
{
    Context zmqContext;

    public QueryWorker(Context zmqContext)
    {
        this.zmqContext = zmqContext;
    }

    public void Start()
    {
        Socket socket = this.zmqContext.Socket(SocketType.REP);
        try
        {
            socket.Connect("inproc://workers");
        }
        catch (ZMQ.Exception ze)
        {
            Console.WriteLine("Could not create worker, error: {0}", ze.Message);
            return;
        }

        while (true)
        {
            try
            {
                string message = socket.Recv(Encoding.Unicode);
                if (message.CompareTo("ping") == 0)
                {
                    socket.Send("pong", Encoding.Unicode);
                }
            }
            catch (ZMQ.Exception ze)
            {
                Console.WriteLine("Could not receive message, error: " + ze.ToString());
            }
        }
    }
}
Could you post some source code or at least a more detailed explanation of your test case? In general the way to build out your design is to make one change at a time, and measure at each change. You can always move stepwise from a known working design to more complex ones.
Most probably the 'ROUTER' is the bottleneck.
Check out these related questions on this:
Client maintenance in ZMQ ROUTER
Load testing ZeroMQ (ZMQ_STREAM) for finding the maximum simultaneous users it can handle
ROUTER (and ZMQ_STREAM, which is just a variant of ROUTER) internally has to maintain the client mapping, hence IMO it can accept only a limited number of connections from a particular client. It looks like ROUTER can multiplex multiple clients only as long as each client has only one active connection.
I could be wrong here, but I am not seeing much proof to the contrary (simple working code that scales to multiple clients with multiple connections per client with ROUTER or STREAM).
There certainly is a very severe restriction on concurrent connections with ZeroMQ, though it looks like no one knows what is causing it.
I have done performance testing on calling a native unmanaged DLL function from C# with various methods:
1. C++/CLI wrapper
2. P/Invoke
3. ZeroMQ/clrzmq
The last might be interesting for you.
My finding at the end of my performance test was that using the clrzmq binding was not useful and produced a factor of 100 performance overhead, even after I tried to optimize the P/Invoke calls within the source code of the binding. Therefore I used ZeroMQ without a binding, via direct P/Invoke calls. These calls must be made with the cdecl calling convention and with the "SuppressUnmanagedCodeSecurity" attribute to get the most speed.
I had to import just 5 functions, which was fairly easy.
In the end the speed was only a bit slower than a plain P/Invoke call, but it went through ZeroMQ, in my case over "inproc".
This may give you a hint to try it without the binding, if speed matters to you.
This is not a direct answer to your question, but it may help you increase performance in general; a sketch of the kind of P/Invoke declarations involved follows.
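For illustration only (not the answer's actual code, and the native library name is an assumption), the declarations the answer describes might look roughly like this:

using System;
using System.Runtime.InteropServices;
using System.Security;

[SuppressUnmanagedCodeSecurity] // skip the per-call unmanaged code security check
internal static class LibZmq
{
    private const string Lib = "libzmq"; // assumed native library name

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr zmq_ctx_new();

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr zmq_socket(IntPtr context, int type);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern int zmq_connect(IntPtr socket, string endpoint);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern int zmq_send(IntPtr socket, byte[] buf, UIntPtr len, int flags);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern int zmq_recv(IntPtr socket, byte[] buf, UIntPtr len, int flags);
}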
