Starting multiple threads in a for loop has no effect - C#

I'm trying to read messages off a WebSphere MQ queue and dump them into another queue.
Below is the code I have to do it:
private void transferMessages()
{
    MQQueueManager sqmgr = connectToQueueManager(S_SERVER_NAME, S_QMGR_NAME, S_PORT_NUMBER, S_CHANNEL_NAME);
    MQQueueManager dqmgr = connectToQueueManager(D_SERVER_NAME, D_QMGR_NAME, D_PORT_NUMBER, D_CHANNEL_NAME);
    if (sqmgr != null && dqmgr != null)
    {
        MQQueue sq = openSourceQueueToGet(sqmgr, S_QUEUE_NAME);
        MQQueue dq = openDestQueueToPut(dqmgr, D_QUEUE_NAME);
        if (sq != null && dq != null)
        {
            setPutMessageOptions();
            setGetMessageOptions();
            processMessages(sqmgr, sq, dqmgr, dq);
        }
    }
}
And I'm calling the above method in a for loop and creating separate threads as below.
int NO_OF_THREADS = 5;
Thread[] ts = new Thread[NO_OF_THREADS];
for (int i = 0; i < NO_OF_THREADS; i++)
{
    ts[i] = new Thread(() => transferMessages());
    ts[i].Start();
}
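If the surrounding code lets the process exit right after starting the threads, the transfers may be cut short; a minimal sketch of waiting for them, using the ts array above:

// Block until every transfer thread has finished before the process exits.
foreach (Thread t in ts)
{
    t.Join();
}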
As you can see, I'm making a fresh connection to the queue manager inside the transferMessages method, yet for some reason the program makes only one connection to MQ.
The custom method to connect to the queue manager is below.
private MQQueueManager connectToQueueManager(string MQServerName, string MQQueueManagerName, string MQPortNumber, string MQChannel)
{
    try
    {
        mqErrorString = "";
        MQQueueManager qmgr;
        Hashtable mqProps = new Hashtable();
        mqProps.Add(MQC.HOST_NAME_PROPERTY, MQServerName);
        mqProps.Add(MQC.CHANNEL_PROPERTY, MQChannel);
        mqProps.Add(MQC.PORT_PROPERTY, Convert.ToInt32(MQPortNumber));
        mqProps.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_CLIENT);
        qmgr = new MQQueueManager(MQQueueManagerName, mqProps);
        return qmgr;
    }
    catch (MQException mqex)
    {
        // catch and log MQException here
        return null;
    }
}
Any advice on what I'm missing?

That is because of the Shared Conversations (SHARECNV) feature of MQ, where multiple conversations from one application to a queue manager share the same socket. The value is negotiated between the client and the queue manager when the connection is established; by default, 10 conversations are shared over one socket.
If you increase the number of threads in your application to 11, you will see a second connection being opened. If you want one socket per connection instead, set SHARECNV(1) on the server-connection channel, for example: ALTER CHANNEL(MY.SVRCONN) CHLTYPE(SVRCONN) SHARECNV(1). More details on SHARECNV are in the IBM MQ documentation.
UPDATE
Channel status when running 6 threads each for put and get, i.e. 12 conversations in total, so two channel instances appear. Note that I am connecting to the same queue manager for both (test purposes only). SHARECNV is set to 10.
2 : dis chstatus(MY.SVRCONN)
AMQ8417: Display Channel Status details.
CHANNEL(MY.SVRCONN) CHLTYPE(SVRCONN)
CONNAME(127.0.0.1) CURRENT
STATUS(RUNNING) SUBSTATE(RECEIVE)
AMQ8417: Display Channel Status details.
CHANNEL(MY.SVRCONN) CHLTYPE(SVRCONN)
CONNAME(127.0.0.1) CURRENT
STATUS(RUNNING) SUBSTATE(RECEIVE)
When running 5 threads each, the 10 conversations fit on one socket, so only one channel instance appears:
3 : dis chstatus(MY.SVRCONN)
AMQ8417: Display Channel Status details.
CHANNEL(MY.SVRCONN) CHLTYPE(SVRCONN)
CONNAME(127.0.0.1) CURRENT
STATUS(RUNNING) SUBSTATE(RECEIVE)

Related

TCP server connection causing processor overload

I have a TCP/IP server that is supposed to allow a connection to remain open as messages are sent across it. However, it seems that some clients open a new connection for each message, which causes the CPU usage to max out. I tried to fix this by adding a time-out but still seem to have the problem occasionally. I suspect that my solution was not the best choice, but I'm not sure what would be.
Below is my basic code with logging, error handling and processing removed.
private void StartListening()
{
    try
    {
        _tcpListener = new TcpListener(IPAddress.Any, _settings.Port);
        _tcpListener.Start();
        while (DeviceState == State.Running)
        {
            var incomingConnection = _tcpListener.AcceptTcpClient();
            var processThread = new Thread(ReceiveMessage);
            processThread.Start(incomingConnection);
        }
    }
    catch (Exception e)
    {
        // Unfortunately, a SocketException is expected when stopping AcceptTcpClient
        if (DeviceState == State.Running) { throw; }
    }
    finally { _tcpListener?.Stop(); }
}
I believe the actual issue is that multiple process threads are being created, but are not being closed. Below is the code for ReceiveMessage.
private void ReceiveMessage(object IncomingConnection)
{
    var buffer = new byte[_settings.BufferSize];
    int bytesReceived = 0;
    var messageData = String.Empty;
    bool isConnected = true;
    using (TcpClient connection = (TcpClient)IncomingConnection)
    using (NetworkStream netStream = connection.GetStream())
    {
        netStream.ReadTimeout = 1000;
        try
        {
            while (DeviceState == State.Running && isConnected)
            {
                // An IOException will be thrown and captured if no message comes in each second. This is the
                // only way to send a signal to close the connection when shutting down. The exception is caught,
                // and the connection is checked to confirm that it is still open. If it is, and the Router has
                // not been shut down, the server will continue listening.
                try { bytesReceived = netStream.Read(buffer, 0, buffer.Length); }
                catch (IOException e)
                {
                    if (e.InnerException is SocketException se && se.SocketErrorCode == SocketError.TimedOut)
                    {
                        bytesReceived = 0;
                        if (GlobalSettings.IsLeaveConnectionOpen)
                            isConnected = GetConnectionState(connection);
                        else
                            isConnected = false;
                    }
                    else
                        throw;
                }
                if (bytesReceived > 0)
                {
                    messageData += Encoding.UTF8.GetString(buffer, 0, bytesReceived);
                    string ack = ProcessMessage(messageData);
                    var writeBuffer = Encoding.UTF8.GetBytes(ack);
                    if (netStream.CanWrite) { netStream.Write(writeBuffer, 0, writeBuffer.Length); }
                    messageData = String.Empty;
                }
            }
        }
        catch (Exception e) { ... }
        finally { FileLogger.Log("Closing the message stream.", Verbose.Debug, DeviceName); }
    }
}
For most clients the code runs correctly, but a few seem to create a new connection for each message. I suspect the issue lies in how I handle the IOException. For the systems that fail, the code does not seem to reach the finally block until 30 seconds after the first message comes in, and each message creates a new ReceiveMessage thread. So the logs show messages coming in, and 30 seconds in they start to show multiple messages about the message stream being closed.
Below is how I check the connection, in case this is important.
public static bool GetConnectionState(TcpClient tcpClient)
{
    var state = IPGlobalProperties.GetIPGlobalProperties()
        .GetActiveTcpConnections()
        .FirstOrDefault(x => x.LocalEndPoint.Equals(tcpClient.Client.LocalEndPoint)
                          && x.RemoteEndPoint.Equals(tcpClient.Client.RemoteEndPoint));
    return state != null && state.State == TcpState.Established;
}
You're reinventing the wheel (in a worse way) at quite a few levels:
1. You're doing pseudo-blocking sockets. Combined with creating a whole new thread for every connection, that gets expensive fast. Instead, create a purely blocking socket with no read timeout (Timeout.Infinite) and just listen on it. Unlike UDP, TCP will detect the connection being terminated by the client without you needing to poll for it.
2. The reason you seem to be doing the above is that you are reinventing the standard TCP keep-alive mechanism. It's already written and works efficiently, so simply use it. As a bonus, keep-alive probing is done by the TCP stack rather than your application code, so even less processing for you.
3. (Edit) You really need to cache the threads you so painstakingly created. The system thread pool won't suffice if you have that many long-term connections with a single socket communication per thread, but you can build your own expandable thread pool. You can also multiplex multiple sockets on one thread using select, but that's going to change your logic quite a bit.
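For point 2, enabling OS-level keep-alive on the accepted socket is a one-liner. A minimal sketch, reusing the names from the question's code (whether the OS default keep-alive timings suit you is an assumption to verify):

// Enable TCP keep-alive on the accepted client socket; the OS then
// probes the idle peer, and the connection errors out once it is dead.
connection.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

// With keep-alive in place, a plain blocking read is enough:
netStream.ReadTimeout = Timeout.Infinite;
int bytesReceived = netStream.Read(buffer, 0, buffer.Length);
if (bytesReceived == 0)
{
    // A zero-byte read means the client closed the connection gracefully.
}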

How to read from multiple EventHub partitions simultaneously with high throughput?

My one role instance needs to read data from 20-40 EventHub partitions at the same time (context: this is our internal virtual partitioning scheme, where 20-40 partitions represent a scale-out unit).
In my prototype I use the code below, but I get a throughput of 8 MB/s at most. If I run the same console app multiple times, the throughput (per the perfmon counter) multiplies accordingly, so I don't think this is either a VM network limit or an EventHub service-side limit.
I wonder whether I am creating the clients correctly here...
Thank you!
Zaki
const string EventHubName = "...";
const string ConsumerGroupName = "...";

var connectionStringBuilder = new ServiceBusConnectionStringBuilder();
connectionStringBuilder.SharedAccessKeyName = "...";
connectionStringBuilder.SharedAccessKey = "...";
connectionStringBuilder.Endpoints.Add(new Uri("sb://....servicebus.windows.net/"));
connectionStringBuilder.TransportType = TransportType.Amqp;
var clientConnectionString = connectionStringBuilder.ToString();

var eventHubClient = EventHubClient.CreateFromConnectionString(clientConnectionString, EventHubName);
var runtimeInformation = await eventHubClient.GetRuntimeInformationAsync().ConfigureAwait(false);
var consumerGroup = eventHubClient.GetConsumerGroup(ConsumerGroupName);
var offStart = DateTime.UtcNow.AddMinutes(-10);
var offEnd = DateTime.UtcNow.AddMinutes(-8);
var workUnitManager = new WorkUnitManager(runtimeInformation.PartitionCount);
var readers = new List<PartitionReader>();
for (int i = 0; i < runtimeInformation.PartitionCount; i++)
{
    var reader = new PartitionReader(
        consumerGroup,
        runtimeInformation.PartitionIds[i],
        i,
        offStart,
        offEnd,
        workUnitManager);
    readers.Add(reader);
}
internal async Task Read()
{
    try
    {
        Console.WriteLine("Creating a receiver for '{0}' with offset {1}", this.partitionId, this.startOffset);
        EventHubReceiver receiver = await this.consumerGroup.CreateReceiverAsync(this.partitionId, this.startOffset).ConfigureAwait(false);
        Console.WriteLine("Receiver for '{0}' has been created.", this.partitionId);
        var stopWatch = new Stopwatch();
        stopWatch.Start();
        while (true)
        {
            var message =
                (await receiver.ReceiveAsync(1, TimeSpan.FromSeconds(10)).ConfigureAwait(false)).FirstOrDefault();
            if (message == null)
            {
                continue;
            }
            if (message.EnqueuedTimeUtc >= this.endOffset)
            {
                break;
            }
            this.processor.Push(this.partitionIndex, message);
        }
        this.Duration = TimeSpan.FromMilliseconds(stopWatch.ElapsedMilliseconds);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
        throw;
    }
}
The code snippet you provided effectively creates one connection to the Service Bus service and then runs all receivers on that single connection (at the protocol level, it creates multiple AMQP links on the same connection).
To achieve high throughput for receive operations, you instead need to create multiple connections and tune your receiver-to-connection ratio. That is exactly what happens when you run the above code in multiple processes.
Here's how:
You will need to go one layer down in the .NET client SDK API and code at the MessagingFactory level; you can start with one MessagingFactory per EventHubClient. A MessagingFactory is what represents one connection to the Event Hubs service. Code to create a dedicated connection per EventHubClient:
var connStr = new ServiceBusConnectionStringBuilder("Endpoint=sb://servicebusnamespacename.servicebus.windows.net/;SharedAccessKeyName=saskeyname;SharedAccessKey=sakKey");
connStr.TransportType = TransportType.Amqp;
var msgFactory = MessagingFactory.CreateFromConnectionString(connStr.ToString());
var ehClient = msgFactory.CreateEventHubClient("teststream");
I added connStr to the sample only to emphasize setting TransportType to Amqp.
You will end up with multiple connections, each using outgoing port 5671.
If you rewrite your code with one MessagingFactory per EventHubClient (or at some reasonable ratio), you are all set; in your code, you will need to move the EventHubClient creation into the reader, as sketched below.
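A minimal sketch of that change, carrying over the names from the question (clientConnectionString, EventHubName, ConsumerGroupName and the PartitionReader fields are assumptions taken from the code above):

// Each reader builds its own MessagingFactory, i.e. its own AMQP connection.
internal async Task Read()
{
    var factory = MessagingFactory.CreateFromConnectionString(clientConnectionString);
    var client = factory.CreateEventHubClient(EventHubName);
    var consumerGroup = client.GetConsumerGroup(ConsumerGroupName);
    EventHubReceiver receiver =
        await consumerGroup.CreateReceiverAsync(this.partitionId, this.startOffset).ConfigureAwait(false);
    // ... the receive loop stays as in the question ...
}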
The only extra criterion to consider when creating multiple connections is the bill: only 100 connections (senders and receivers combined) are included in the Basic SKU. I guess you are already on Standard (as you have more than 1 TU), which includes 1,000 connections, so no need to worry - but mentioning it just in case.
~Sree
A good option is to create a Task for each partition.
This is a copy of my implementation, which is able to process a rate of 2.5k messages per second per partition. The rate will also depend on your downstream processing speed.
static void EventReceiver()
{
    // Note: partition IDs run from 0 to EventHubPartitionCount - 1,
    // so the loop must use "<", not "<=".
    for (int i = 0; i < EventHubPartitionCount; i++)
    {
        Task.Factory.StartNew((state) =>
        {
            Console.WriteLine("Starting worker to process partition: {0}", state);
            var factory = MessagingFactory.Create(ServiceBusEnvironment.CreateServiceUri("sb", "tests-eventhub", ""), new MessagingFactorySettings()
            {
                TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("Listen", "PGSVA7L="),
                TransportType = TransportType.Amqp
            });
            var client = factory.CreateEventHubClient("eventHubName");
            var group = client.GetConsumerGroup("customConsumer");
            Console.WriteLine("Group: {0}", group.GroupName);
            var receiver = group.CreateReceiver(state.ToString(), DateTime.Now);
            while (true)
            {
                // cts and counter are fields of the surrounding class (not shown).
                if (cts.IsCancellationRequested)
                {
                    receiver.Close();
                    break;
                }
                var messages = receiver.Receive(20);
                messages.ToList().ForEach(aMessage =>
                {
                    // Process your event
                });
                Console.WriteLine(counter);
            }
        }, i);
    }
}

.Net RabbitMQ client Subscriber.Next hangs

I am using the RabbitMQ .NET client in a Windows service. I have millions of messages coming in bulk, which get processed, and then the output is put on another queue. I create the connection factory with a heartbeat of 30 and then create a connection whenever a connection or subscriber is lost. In production my code probably works in most cases; in my integration tests, however, I know it fails most of the time. Here is my code:
public void ReceiveAll(Func<IDictionary<ulong, byte[]>, IOnStreamWatchResult> onReceiveAllCallback, int batchSize, CancellationToken cancellationToken)
{
    IModel channel = null;
    Subscription subscription = null;
    while (!cancellationToken.IsCancellationRequested)
    {
        if (subscription == null || subscription.Model.IsClosed)
        {
            channel = _channelFactory.CreateChannel(ref _connection, _messageQueueConfig, _connectionFactory);
            // This instructs the channel to not prefetch more than the batch count into the shared queue
            channel.BasicQos(0, Convert.ToUInt16(batchSize), false);
            subscription = new Subscription(channel, _messageQueueConfig.Queue, false);
        }
        try
        {
            BasicDeliverEventArgs message;
            var dequeuedMessages = new Dictionary<ulong, byte[]>();
            do
            {
                if (subscription.Next(_messageQueueConfig.DequeueTimeout.Milliseconds, out message))
                {
                    if (message == null)
                    {
                        // This means the channel is closed and the messages in the shared queue
                        // will be moved back to the ready state
                        DisposeChannelAndSubscription(ref channel, ref subscription);
                        ReceiveAll(onReceiveAllCallback, batchSize, cancellationToken);
                    }
                    else
                    {
                        dequeuedMessages.Add(message.DeliveryTag, message.Body);
                    }
                }
            } while (message != null && batchSize > dequeuedMessages.Count && !cancellationToken.IsCancellationRequested);
            if (cancellationToken.IsCancellationRequested)
            {
                if (dequeuedMessages.Any())
                {
                    NackUnProcessedMessages(subscription, dequeuedMessages.Keys);
                }
                DisposeChannelAndSubscription(ref channel, ref subscription);
                dequeuedMessages.Clear();
                break;
            }
            try
            {
                var onStreamWatchResult = onReceiveAllCallback(dequeuedMessages);
                AckProcessedMessages(subscription, onStreamWatchResult.Processed);
                NackUnProcessedMessages(subscription, onStreamWatchResult.UnProcessed);
                dequeuedMessages.Clear();
            }
            catch (Exception unhandledException)
            {
                NackUnProcessedMessages(subscription, dequeuedMessages.Keys);
            }
        }
        catch (EndOfStreamException endOfStreamException)
        {
            DisposeChannelAndSubscription(ref channel, ref subscription);
        }
        catch (OperationInterruptedException operationInterruptedException)
        {
            DisposeChannelAndSubscription(ref channel, ref subscription);
        }
    }
}
The batch size is set to 4 because I put 4 messages on the queue in my integration test, which is just a Windows service that I run after the unit tests.
The issue here is that almost always the subscriber prefetches 4 messages, as expected, and returns true for the first two .Next iterations, but after that it returns false. I believe that is happening because my messages are not getting nacked properly. In my integration test, I ack 2 and nack 2 messages, and then read the 2 nacked messages again to clear the queue. However, after nacking, the messages are not returned to the ready state, and hence the test hangs. What am I doing wrong here? Am I misunderstanding something in the nacking documentation? Here is my nacking code:
subscription.Model.BasicNack(deliveryTag, false, true);
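For reference, the AckProcessedMessages and NackUnProcessedMessages helpers called above are presumably thin wrappers along these lines (a sketch inferred from the calls; only the BasicNack line is quoted from the question):

private void AckProcessedMessages(Subscription subscription, IEnumerable<ulong> deliveryTags)
{
    foreach (var deliveryTag in deliveryTags)
        subscription.Model.BasicAck(deliveryTag, false); // multiple: false
}

private void NackUnProcessedMessages(Subscription subscription, IEnumerable<ulong> deliveryTags)
{
    foreach (var deliveryTag in deliveryTags)
        subscription.Model.BasicNack(deliveryTag, false, true); // multiple: false, requeue: true
}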

C# TCP Server stop receiving client messages, resumes when service is restarted

I'm working on a managed Windows service written in C#. It keeps receiving messages from several clients connected over TCP/IP. The client is basically a router that receives messages from thermometers and resends them to the server. The server parses the messages and stores them in a SQL Server database.
The problem I am facing is that some clients suddenly stop sending messages. But as soon as the service is restarted, they connect again and resume sending. I don't have the code of the client, since it is a third-party device, and I'm pretty sure the problem is with the server.
I managed to reduce the problem by implementing a timer that keeps checking whether each client is still connected (see code below). I also added keep-alive to the socket, using the socket.IOControl(IOControlCode.KeepAliveValues, ...) method, but the problem still happens.
I'm posting code from the specific parts I consider relevant, but if more snippets are needed to understand the problem, please ask and I'll edit the post. All the try/catch blocks were removed to reduce the amount of code.
I'm not expecting a perfect solution; any guidance will be appreciated.
private Socket _listener;
private ConcurrentDictionary<int, ConnectionState> _connections;

public TcpServer(TcpServiceProvider provider, int port)
{
    this._provider = provider;
    this._port = port;
    this._listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    this._connections = new ConcurrentDictionary<int, ConnectionState>();
    ConnectionReady = new AsyncCallback(ConnectionReady_Handler);
    AcceptConnection = new WaitCallback(AcceptConnection_Handler);
    ReceivedDataReady = new AsyncCallback(ReceivedDataReady_Handler);
}

public bool Start()
{
    this._listener.Bind(new IPEndPoint(IPAddress.Any, this._port));
    this._listener.Listen(10000);
    this._listener.BeginAccept(ConnectionReady, null);
    return true; // the removed try/catch presumably returned false on failure
}
// Check every 5 minutes for clients that have not sent any message in the past 30 minutes
// MSG_RESTART is a command that the devices accepts to restart
private void CheckForBrokenConnections()
{
    foreach (var entry in this._connections)
    {
        ConnectionState conn = entry.Value;
        if (conn.ReconnectAttemptCount > 3)
        {
            DropConnection(conn);
            continue;
        }
        if (!conn.Connected || (DateTime.Now - conn.LastResponse).TotalMinutes > 30)
        {
            byte[] message = HexStringToByteArray(MSG_RESTART);
            if (!conn.WaitingToRestart && conn.Write(message, 0, message.Length))
            {
                conn.WaitingToRestart = true;
            }
            else
            {
                DropConnection(conn);
            }
        }
    }
}
private void ConnectionReady_Handler(IAsyncResult ar)
{
    lock (thisLock)
    {
        if (this._listener == null)
            return;
        ConnectionState connectionState = new ConnectionState();
        connectionState.Connection = this._listener.EndAccept(ar);
        connectionState.Server = this;
        connectionState.Provider = (TcpServiceProvider)this._provider.Clone();
        connectionState.Buffer = new byte[4];
        Util.SetKeepAlive(connectionState.Connection, KEEP_ALIVE_TIME, KEEP_ALIVE_TIME);
        int newID = (this._connections.Count == 0 ? 0 : this._connections.Max(x => x.Key)) + 1;
        connectionState.ID = newID;
        this._connections.TryAdd(newID, connectionState);
        ThreadPool.QueueUserWorkItem(AcceptConnection, connectionState);
        this._listener.BeginAccept(ConnectionReady, null);
    }
}
private void AcceptConnection_Handler(object state)
{
    ConnectionState st = state as ConnectionState;
    st.Provider.OnAcceptConnection(st);
    if (st.Connection.Connected)
        st.Connection.BeginReceive(st.Buffer, 0, 0, SocketFlags.None, ReceivedDataReady, st);
}

private void ReceivedDataReady_Handler(IAsyncResult result)
{
    ConnectionState connectionState = null;
    lock (thisLock)
    {
        connectionState = result.AsyncState as ConnectionState;
        connectionState.Connection.EndReceive(result);
        if (connectionState.Connection.Available == 0)
            return;
        // Here the message is parsed
        connectionState.Provider.OnReceiveData(connectionState);
        if (connectionState.Connection.Connected)
            connectionState.Connection.BeginReceive(connectionState.Buffer, 0, 0, SocketFlags.None, ReceivedDataReady, connectionState);
    }
}
internal void DropConnection(ConnectionState connectionState)
{
    lock (thisLock)
    {
        if (this._connections.Values.Contains(connectionState))
        {
            ConnectionState conn;
            this._connections.TryRemove(connectionState.ID, out conn);
        }
        if (connectionState.Connection != null && connectionState.Connection.Connected)
        {
            connectionState.Connection.Shutdown(SocketShutdown.Both);
            connectionState.Connection.Close();
        }
    }
}
Two things I think I see...
1. If this is a connection you keep open for multiple messages, you probably should not return from ReceivedDataReady_Handler when connectionState.Connection.Available == 0. IIRC, a 0-length data packet can be received, so if the connection is still open you should call connectionState.Connection.BeginReceive(...) before leaving the handler.
2. (I hesitate to put this here because I do not remember the specifics.) There is an event you can handle that tells you when things happen to your underlying connection, including errors and failures when connecting or closing a connection. For the life of me I cannot remember the name(s). This would likely be more efficient than a polling timer, and it also gives you a way to break out of connections stuck in the connecting or closing states.
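A sketch of the first point, re-arming the receive even when nothing is available yet (it keeps the question's zero-byte-receive pattern and field names; detecting a genuinely dead peer still needs the keep-alive/timer logic):

private void ReceivedDataReady_Handler(IAsyncResult result)
{
    var connectionState = result.AsyncState as ConnectionState;
    connectionState.Connection.EndReceive(result);
    if (connectionState.Connection.Available > 0)
    {
        // Here the message is parsed
        connectionState.Provider.OnReceiveData(connectionState);
    }
    // Re-arm the receive instead of returning: a completion with zero
    // bytes available is not necessarily a closed connection.
    if (connectionState.Connection.Connected)
        connectionState.Connection.BeginReceive(connectionState.Buffer, 0, 0, SocketFlags.None, ReceivedDataReady, connectionState);
}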
Add try/catch blocks around all the IO calls, and write the errors to a log file. As it is, it can't recover on error.
Also, be careful with any lock that doesn't have a timeout. These operations should be given a reasonable TTL.
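For example, the bare lock (thisLock) blocks could become timed acquisitions (a sketch; the 30-second timeout is an arbitrary assumption):

if (!Monitor.TryEnter(thisLock, TimeSpan.FromSeconds(30)))
{
    // Could not get the lock in a reasonable time: log it and bail out
    // instead of blocking this handler forever.
    return;
}
try
{
    // ... critical section ...
}
finally
{
    Monitor.Exit(thisLock);
}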
I have experienced this kind of situation many times. The problem is probably not with your code at all, but with the network and the way Windows (on both ends) or the routers handle it. What happens quite often is that a temporary network outage "breaks" the socket, but Windows isn't aware of it, so it doesn't close the socket.
The only way to overcome this is exactly what you did: sending keep-alives and monitoring connection health. Once you recognize that a connection is down, you need to restart it. However, in your code you don't restart the listener socket, which may also be broken and unable to accept new connections. That's why restarting the service helps: it restarts the listener.
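A minimal sketch of restarting the listener (field names taken from the question's TcpServer; the error handling is an assumption):

private void RestartListener()
{
    lock (thisLock)
    {
        try { this._listener.Close(); } catch { /* already broken; ignore */ }
        this._listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        this._listener.Bind(new IPEndPoint(IPAddress.Any, this._port));
        this._listener.Listen(10000);
        this._listener.BeginAccept(ConnectionReady, null);
    }
}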

How to optimize SOA requests in HPC

I want to use HPC to do some simulations, using SOA. I started from some sample code and modified it (I added the outer for loop). Currently I've stumbled upon an optimization/poor-performance problem. This basic sample does nothing except call a service method that returns the value it receives as a parameter; even so, my example is slow. I have 60 computers with 4-core processors and a 1 Gb network. The first phase of sending messages takes about 2 seconds, and then I have to wait another 7 seconds for the return values. All values arrive more or less at the same time. Another problem is that I cannot reuse the session object, which is why the outer for loop is outside the using block; when I move it inside the using block, I get a timeout, or an error saying the BrokerClient has ended.
Can I reuse a BrokerClient or DurableSession object?
How can I speed up this whole process of message passing?
static void Main(string[] args)
{
    const string headnode = "Head-Node.hpcCluster.edu.edu";
    const string serviceName = "EchoService";
    const int numRequests = 1000;
    SessionStartInfo info = new SessionStartInfo(headnode, serviceName);
    for (int j = 0; j < 100; j++)
    {
        using (DurableSession session = DurableSession.CreateSession(info))
        {
            Console.WriteLine("done session id = {0}", session.Id);
            NetTcpBinding binding = new NetTcpBinding(SecurityMode.Transport);
            using (BrokerClient<IService1> client = new BrokerClient<IService1>(session, binding))
            {
                for (int i = 0; i < numRequests; i++)
                {
                    EchoRequest request = new EchoRequest("hello world!");
                    client.SendRequest<EchoRequest>(request, i);
                }
                client.EndRequests();
                foreach (var response in client.GetResponses<EchoResponse>())
                {
                    try
                    {
                        string reply = response.Result.EchoResult;
                        Console.WriteLine("\tReceived response for request {0}: {1}", response.GetUserData<int>(), reply);
                    }
                    catch (Exception ex)
                    {
                    }
                }
            }
            session.Close();
        }
    }
}
A second version, with Session instead of DurableSession, works better, but I have a problem with Session reuse:
using (Session session = Session.CreateSession(info))
{
    for (int i = 0; i < 100; i++)
    {
        count = 0;
        Console.WriteLine("done session id = {0}", session.Id);
        NetTcpBinding binding = new NetTcpBinding(SecurityMode.Transport);
        using (BrokerClient<IService1> client = new BrokerClient<IService1>(session, binding))
        {
            // set response handler
            client.SetResponseHandler<EchoResponse>((item) =>
            {
                try
                {
                    Console.WriteLine("\tReceived response for request {0}: {1}",
                        item.GetUserData<int>(), item.Result.EchoResult);
                }
                catch (SessionException ex)
                {
                    Console.WriteLine("SessionException while getting responses in callback: {0}", ex.Message);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Exception while getting responses in callback: {0}", ex.Message);
                }
                if (Interlocked.Increment(ref count) == numRequests)
                    done.Set();
            });
            // start to send requests
            Console.Write("Sending {0} requests...", numRequests);
            for (int j = 0; j < numRequests; j++)
            {
                EchoRequest request = new EchoRequest("hello world!");
                client.SendRequest<EchoRequest>(request, j); // was "i": the per-request index is the intended user data
            }
            client.EndRequests();
            Console.WriteLine("done");
            Console.WriteLine("Retrieving responses...");
            // The main thread blocks here waiting for the retrieval process
            // to complete. The thread that receives the numRequests-th
            // response does a Set() on the event, so done.WaitOne() will pop.
            done.WaitOne();
            Console.WriteLine("Done retrieving {0} responses", numRequests);
        }
    }
    // Close connections and delete messages stored in the system
    session.Close();
}
I get an exception during the second call to EndRequests: "The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error."
Don't use DurableSession for computations where the individual requests are shorter than about 30 seconds. A DurableSession is backed by an MSMQ queue in the broker, so your requests and responses may be round-tripped to disk; this causes performance problems when the amount of computation per request is small. You should use Session instead.
In general, for performance reasons, don't use DurableSession unless you absolutely need the durable behavior in the broker. In this case, since you are calling GetResponses immediately after SendRequests, Session will work fine for you.
You can reuse a Session or DurableSession object to create any number of BrokerClient objects, as long as you haven't called Session.Close.
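A minimal sketch of that reuse pattern, based on the question's types (EchoRequest, EchoResponse, IService1; the batch count of 100 is arbitrary):

using (Session session = Session.CreateSession(info))
{
    var binding = new NetTcpBinding(SecurityMode.Transport);
    for (int batch = 0; batch < 100; batch++)
    {
        // A fresh BrokerClient per batch, all sharing one Session.
        using (var client = new BrokerClient<IService1>(session, binding))
        {
            for (int i = 0; i < numRequests; i++)
                client.SendRequest<EchoRequest>(new EchoRequest("hello world!"), i);
            client.EndRequests();
            foreach (var response in client.GetResponses<EchoResponse>())
                Console.WriteLine(response.Result.EchoResult);
        }
    }
    session.Close(); // only after the last batch
}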
If it's important to process the responses in parallel on the client side, use BrokerClient.SetResponseHandler to set a callback function that handles responses asynchronously (rather than BrokerClient.GetResponses, which handles them synchronously). Look at the HelloWorldR2 sample code for details.
