I have several questions about using MassTransit with RabbitMQ.
I have two queues: one for normal messages and one for priority messages.
Priority ones must be handled before the normal ones.
Let's say I'm configuring the bus this way:
public void ConfigureRabbitMq(IBusRegistrationContext context, IRabbitMqBusFactoryConfigurator configurator)
{
    var rabbitConfig = RabbitMqConfig.Get<RabbitMqConfiguration>();

    configurator.Host(rabbitConfig.Host, rabbitConfig.VirtualHost, hfg =>
    {
        hfg.Password(rabbitConfig.Password);
        hfg.Username(rabbitConfig.UserName);
    });

    configurator.ConcurrentMessageLimit = 8;

    configurator.ReceiveEndpoint(rabbitConfig.SendQueue, endpoint =>
    {
        endpoint.Durable = true;
        endpoint.ConcurrentMessageLimit = 5;
        endpoint.PrefetchCount = 25;
        endpoint.UseMessageRetry(r => r.Incremental(5, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1)));
        endpoint.ConfigureConsumer<Service.MessageService.Send>(context);
    });

    configurator.ReceiveEndpoint(rabbitConfig.SendPriorityQueue, endpoint =>
    {
        endpoint.Durable = true;
        endpoint.ConcurrentMessageLimit = 5;
        endpoint.PrefetchCount = 25;
        endpoint.UseMessageRetry(r => r.Incremental(5, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1)));
        endpoint.ConfigureConsumer<Service.MessageService.Send>(context);
    });
}
What will 'configurator.ConcurrentMessageLimit = 8;' do?
Will it limit the number of concurrent messages for the entire app, or set the limit for every endpoint to 8?
Can I somehow make sure that the messages from 'SendPriorityQueue' are handled before the ones from 'SendQueue'?
configurator.ConcurrentMessageLimit = 8;
Sets the default endpoint concurrent message limit to 8. That’s it.
Since you are specifying both ConcurrentMessageLimit and PrefetchCount on each receive endpoint, the default concurrent message limit is overridden, essentially unused in this configuration. Each receive endpoint will prefetch up to 25 messages and process up to 5 concurrently (up to 10 concurrent messages total across both receive endpoints).
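For example, a minimal sketch (the queue name is illustrative): an endpoint that doesn't set its own limits inherits the bus-level default of 8.

configurator.ConcurrentMessageLimit = 8;

configurator.ReceiveEndpoint("some-other-queue", endpoint =>
{
    // No endpoint-level ConcurrentMessageLimit or PrefetchCount here,
    // so this endpoint processes up to 8 messages concurrently.
    endpoint.ConfigureConsumer<Service.MessageService.Send>(context);
});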
In the legacy Azure Service Bus (ASB) library I can use MessageWaitTimeout in SessionHandlerOptions to control the timeout between two messages. For example, if I set the timeout to 5 seconds, then after completing the first message the queue waits 5 s before picking up the next one.
In the new Azure.Messaging.ServiceBus library, the queue has to wait around 1 minute to pick up the next message. I only need to process messages one by one; there is no need for concurrent message processing.
I followed this example and can't find any way to set the timeout like in the old version.
Does anyone know how to do it?
var options = new ServiceBusSessionProcessorOptions
{
    AutoCompleteMessages = false,
    MaxConcurrentSessions = 1,
    MaxConcurrentCallsPerSession = 1,
    MaxAutoLockRenewalDuration = TimeSpan.FromMinutes(2),
};
EDIT:
I found the solution. It is RetryOptions in ServiceBusClientOptions:
var client = new ServiceBusClient("connectionString", new ServiceBusClientOptions
{
    RetryOptions = new ServiceBusRetryOptions
    {
        TryTimeout = TimeSpan.FromSeconds(5)
    }
});
With the latest stable release, 7.2.0, this can be configured with the SessionIdleTimeout property.
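For example, a minimal sketch of the processor options (the other values are copied from the question):

var options = new ServiceBusSessionProcessorOptions
{
    AutoCompleteMessages = false,
    MaxConcurrentSessions = 1,
    MaxConcurrentCallsPerSession = 1,
    // Maximum time to wait for the next message in the current session
    // before the processor gives up on that session.
    SessionIdleTimeout = TimeSpan.FromSeconds(5),
};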
I want to use MassTransit to send messages that may have different structures (in terms of message.Data) to different Azure Service Bus queues. As long as router.Name keeps its initial value, it works well. But whenever the destination Uri of EndpointConvention.Map<ManyToOneTransferMessage> changes, MassTransit throws an exception: "The endpoint convention has already been created and can no longer be modified". Is there any way to remap the message type to another destination so I can use MassTransit with multiple queues?
public class AzureServiceBusManager
{
    string ServiceBusConnectionString = string.Empty;

    public AzureServiceBusManager()
    {
        ServiceBusConnectionString = ConfigurationManager.AppSettings["AppSettings:ServiceBusConnectionString"];
    }

    public async Task SendMessageAsyncN1(TransferMessage transferMessage, Router router)
    {
        var message = new ManyToOneTransferMessage
        {
            BlobFileName = transferMessage.BlobFileName,
            Compressed = transferMessage.Compressed,
            Data = transferMessage.Data,
            MessageId = transferMessage.MessageId,
            TransferId = transferMessage.TransferId,
            TransferType = transferMessage.TransferType
        };

        var queueBusControl = Bus.Factory.CreateUsingAzureServiceBus(
            cfg =>
            {
                cfg.Host(ServiceBusConnectionString);
                EndpointConvention.Map<ManyToOneTransferMessage>(new Uri("queue:" + router.Name));

                cfg.ReceiveEndpoint(router.Name, e =>
                {
                    e.RequiresSession = true;
                    e.MaxConcurrentCalls = 500;
                });
            });

        await queueBusControl.Send(message);
    }
}
So, first of all, do not use EndpointConvention.Map<ManyToOneTransferMessage>(new Uri("queue:" + router.Name)). It isn't useful and only adds to the confusion.
You can resolve the endpoint from the bus, but you have to realize that creating a bus for each call is a bad idea. It is best to start the bus at startup (you aren't even starting it in the code above), and stop it at application shutdown.
Then, for each call, you can use that bus to resolve the send endpoint and send the message.
var endpoint = await bus.GetSendEndpoint(new Uri("queue:" + router.Name));
await endpoint.Send(message);
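Putting that together, a minimal sketch of the long-lived-bus pattern (the variable names are illustrative):

// At application startup: create and start the bus once.
var bus = Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Host(serviceBusConnectionString);
});
await bus.StartAsync();

// For each call: resolve the send endpoint from the running bus and send.
var endpoint = await bus.GetSendEndpoint(new Uri("queue:" + router.Name));
await endpoint.Send(message);

// At application shutdown: stop the bus once.
await bus.StopAsync();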
Also, you should remove this since it will cause all messages to be moved to the _skipped queue:
cfg.ReceiveEndpoint(router.Name, e =>
{
    e.RequiresSession = true;
    e.MaxConcurrentCalls = 500;
});
You'll likely need to configure the queues separately, in advance, if you require sessions. That said, I don't see you setting a SessionId on the message, so it likely won't work without one anyway.
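If the queue does require sessions, a sketch of setting one at send time (SetSessionId is the MassTransit send-context extension for Azure Service Bus; using TransferId as the session id is only an illustrative assumption):

await endpoint.Send(message, sendContext =>
    sendContext.SetSessionId(message.TransferId.ToString()));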
I have a simple REST service, and am calling it with WCF via WebChannelFactory.
When I set the binding to use TransferMode.Streamed, the connections do not seem to be re-used. After several requests (usually ServicePointManager.DefaultConnectionLimit, but sometimes a few more), I run out of connections: the request call hangs, and then I get a TimeoutException.
[ServiceContract]
public interface IInviteAPI {
    [OperationContract]
    [WebGet(UriTemplate = "invites/{id}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    Invite GetInvite(string id);
}

[STAThread]
static int Main(string[] args) {
    ServicePointManager.DefaultConnectionLimit = 16; // make a larger default

    WebHttpBinding binding = new WebHttpBinding();
    binding.TransferMode = TransferMode.Streamed;

    try {
        WebChannelFactory<IInviteAPI> factory = new WebChannelFactory<IInviteAPI>(binding, new Uri("http://example.com/invite"));
        IInviteAPI channel = factory.CreateChannel();
        for (int i = 0; i < 100; i++) {
            Invite data = channel.GetInvite("160"); // fails on i==16
        }
        ((IChannel)channel).Close();
    }
    catch (Exception ex) {
        Debug.WriteLine(ex);
    }
    return 0;
}
System.TimeoutException: The request channel timed out while waiting for a reply after 00:00:59.9969999.
There are many posts on the net about not closing the channel - that is not the problem here, as I am simply making the same request multiple times on the same channel.
If I remove the binding.TransferMode = TransferMode.Streamed; line it works perfectly.
I can also create and close the channel inside the loop, and it has the same issue:
for (int i = 0; i < 100; i++) {
    IInviteAPI channel = factory.CreateChannel();
    Invite data = channel.GetInvite("160"); // fails on i==20
    ((IChannel)channel).Close();
}
Interestingly, if I add a GC.Collect() in the loop, it does work! After much detailed tracing through the .NET code, this seems to be because the ServicePoint is only held with a weak reference in the ServicePointManager. Calling GC.Collect() then finalizes the ServicePoint and closes all the current connections.
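To illustrate that workaround (just the observation above expressed in code, not a recommended fix):

for (int i = 0; i < 100; i++) {
    Invite data = channel.GetInvite("160");
    GC.Collect(); // finalizes the weakly-referenced ServicePoint, closing idle connections
}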
Is there something I am missing? How can I keep TransferMode.Streamed and be able to call the service multiple times, with a reasonable ServicePointManager.DefaultConnectionLimit?
(I need TransferMode.Streamed because other calls on the service are for transferring huge archives of data up to 1GB)
Update:
If I run netstat -nb I can see that there are 16 connections to the server in ESTABLISHED state. After 30 seconds or so, they change to CLOSE_WAIT (presumably the server closes the idle connection), but these CLOSE_WAIT connections never disappear after that, no matter how big I set the timeouts.
It seems like a bug in .NET: the connections should be re-used, but are not. The 17th request is just queued forever.
I know that there is a default limit of 20 concurrent inbound connections in WCF (see "WCF configuration default limits, concurrency and scalability").
You can increase the concurrent connection limit, or slow the client down:
for (int i = 0; i < 100; i++) {
    Invite data = channel.GetInvite("160");
    Thread.Sleep(1000); // avoid concurrent connections to the WCF service
}
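If you'd rather raise the server-side limits instead, a sketch using ServiceThrottlingBehavior (the service type and numbers are illustrative assumptions):

var host = new ServiceHost(typeof(InviteService)); // InviteService: hypothetical implementation of IInviteAPI
var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (throttle == null) {
    throttle = new ServiceThrottlingBehavior();
    host.Description.Behaviors.Add(throttle);
}
throttle.MaxConcurrentCalls = 64;
throttle.MaxConcurrentSessions = 64;
host.Open();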
My single role instance needs to read data from 20-40 EventHub partitions at the same time (context: this is our internal virtual partitioning scheme - 20-40 partitions represent a scale-out unit).
In my prototype I use the code below, but I get at most 8 MB/s of throughput. Since running the same console app multiple times multiplies the throughput (perfmon counter) accordingly, I don't think this is a VM network limit or an EventHub service-side limit.
I wonder whether I am creating the clients correctly here...
Thank you!
Zaki
const string EventHubName = "...";
const string ConsumerGroupName = "...";

var connectionStringBuilder = new ServiceBusConnectionStringBuilder();
connectionStringBuilder.SharedAccessKeyName = "...";
connectionStringBuilder.SharedAccessKey = "...";
connectionStringBuilder.Endpoints.Add(new Uri("sb://....servicebus.windows.net/"));
connectionStringBuilder.TransportType = TransportType.Amqp;
var clientConnectionString = connectionStringBuilder.ToString();

var eventHubClient = EventHubClient.CreateFromConnectionString(clientConnectionString, EventHubName);
var runtimeInformation = await eventHubClient.GetRuntimeInformationAsync().ConfigureAwait(false);
var consumerGroup = eventHubClient.GetConsumerGroup(ConsumerGroupName);

var offStart = DateTime.UtcNow.AddMinutes(-10);
var offEnd = DateTime.UtcNow.AddMinutes(-8);

var workUnitManager = new WorkUnitManager(runtimeInformation.PartitionCount);

var readers = new List<PartitionReader>();
for (int i = 0; i < runtimeInformation.PartitionCount; i++)
{
    var reader = new PartitionReader(
        consumerGroup,
        runtimeInformation.PartitionIds[i],
        i,
        offStart,
        offEnd,
        workUnitManager);
    readers.Add(reader);
}
internal async Task Read()
{
    try
    {
        Console.WriteLine("Creating a receiver for '{0}' with offset {1}", this.partitionId, this.startOffset);
        EventHubReceiver receiver = await this.consumerGroup.CreateReceiverAsync(this.partitionId, this.startOffset).ConfigureAwait(false);
        Console.WriteLine("Receiver for '{0}' has been created.", this.partitionId);

        var stopWatch = new Stopwatch();
        stopWatch.Start();

        while (true)
        {
            var message =
                (await receiver.ReceiveAsync(1, TimeSpan.FromSeconds(10)).ConfigureAwait(false)).FirstOrDefault();
            if (message == null)
            {
                continue;
            }
            if (message.EnqueuedTimeUtc >= this.endOffset)
            {
                break;
            }
            this.processor.Push(this.partitionIndex, message);
        }

        this.Duration = TimeSpan.FromMilliseconds(stopWatch.ElapsedMilliseconds);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
        throw;
    }
}
The code snippet above effectively creates one connection to the Service Bus service and then runs all receivers on that single connection (at the protocol level, it creates multiple AMQP links on the same connection).
To achieve high throughput for receive operations, you will instead need to create multiple connections and tune your receiver-to-connection ratio. That's exactly what happens when you run the above code in multiple processes.
Here's how:
You will need to go one layer down in the .NET client SDK API and code at the MessagingFactory level - you can start with one MessagingFactory per EventHubClient. A MessagingFactory is what represents one connection to the Event Hubs service. Code to create a dedicated connection per EventHubClient:
var connStr = new ServiceBusConnectionStringBuilder("Endpoint=sb://servicebusnamespacename.servicebus.windows.net/;SharedAccessKeyName=saskeyname;SharedAccessKey=sakKey");
connStr.TransportType = TransportType.Amqp;
var msgFactory = MessagingFactory.CreateFromConnectionString(connStr.ToString());
var ehClient = msgFactory.CreateEventHubClient("teststream");
I just added connStr in my sample to emphasize assigning TransportType to Amqp.
You will end up with multiple connections on outgoing port 5671.
If you rewrite your code with one MessagingFactory per EventHubClient (or a reasonable ratio), you are all set. (In your code, you will need to move the EventHubClient creation into the Reader.)
The only extra criterion to consider when creating multiple connections is the bill - only 100 connections (senders and receivers combined) are included in the basic SKU. I guess you are already on standard (as you have more than 1 TU), which includes 1000 connections, so no need to worry - but mentioning it just in case.
~Sree
A good option is to create a Task for each partition.
This is a copy of my implementation, which is able to process around 2.5k messages per second per partition. The rate will also depend on your downstream speed.
// Assumption: cts, counter and EventHubPartitionCount are class-level fields,
// e.g. something like the following.
static readonly CancellationTokenSource cts = new CancellationTokenSource();
static int counter;

static void EventReceiver()
{
    for (int i = 0; i <= EventHubPartitionCount; i++)
    {
        Task.Factory.StartNew((state) =>
        {
            Console.WriteLine("Starting worker to process partition: {0}", state);

            // One MessagingFactory (i.e. one AMQP connection) per worker.
            var factory = MessagingFactory.Create(ServiceBusEnvironment.CreateServiceUri("sb", "tests-eventhub", ""), new MessagingFactorySettings()
            {
                TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("Listen", "PGSVA7L="),
                TransportType = TransportType.Amqp
            });

            var client = factory.CreateEventHubClient("eventHubName");
            var group = client.GetConsumerGroup("customConsumer");
            Console.WriteLine("Group: {0}", group.GroupName);

            var receiver = group.CreateReceiver(state.ToString(), DateTime.Now);
            while (true)
            {
                if (cts.IsCancellationRequested)
                {
                    receiver.Close();
                    break;
                }
                var messages = receiver.Receive(20);
                messages.ToList().ForEach(aMessage =>
                {
                    // Process your event
                });
                Console.WriteLine(counter);
            }
        }, i);
    }
}
Looking at EasyNetQ as a replacement for our current library for MQ communication.
For testing I'm trying to simply publish a number of messages to an exchange, using a custom naming strategy.
My publishing code is in the small test method below:
public void PublishTest()
{
    var advancedBus = RabbitHutch.CreateBus("host=localhost;virtualHost=Test;username=guest;password=guest;").Advanced;
    var routingKey = "SimpleMessage";

    // declare some objects
    var queue = advancedBus.QueueDeclare("Q.TestQueue.SimpleMessage");
    var exchange = advancedBus.ExchangeDeclare("E.TestExchange.SimpleMessage", ExchangeType.Direct);
    var binding = advancedBus.Bind(exchange, queue, routingKey);

    var message = new SimpleMessage() { Test = "HELLO" };

    for (int i = 0; i < 100; i++)
    {
        advancedBus.Publish(exchange, routingKey, true, true, new Message<SimpleMessage>(message));
    }

    advancedBus.Dispose();
}
The problem is that even though the exchange and queue are created and bound properly, publishing does not produce anything.
No messages hit the queue.
The graph in the RabbitMQ management interface does not even show any activity on the exchange.
Am I missing something here? The code is basically taken straight from the documentation.
If I use the simple bus and just publish, an exchange is created and I can see via the management interface that messages are being published.
Since the simple bus uses the advanced API to publish, I assume there is a setup issue that I am missing.
I hope someone can bring some insight:-)
/Thomas
I finally tracked down what was causing the problem.
It turns out that setting the immediate parameter to true causes the system to throw exceptions.
The parameter is apparently no longer supported by the RabbitMQ client; see the discussion here: https://github.com/mikehadlow/EasyNetQ/issues/112
So the code below works just fine; note the change from true to false in the Publish call:
public void PublishTest()
{
    var advancedBus = RabbitHutch.CreateBus("host=localhost;virtualHost=Test;username=guest;password=guest;").Advanced;
    var routingKey = "SimpleMessage";

    // declare some objects
    var queue = advancedBus.QueueDeclare("Q.TestQueue.SimpleMessage");
    var exchange = advancedBus.ExchangeDeclare("E.TestExchange.SimpleMessage", ExchangeType.Direct);
    var binding = advancedBus.Bind(exchange, queue, routingKey);

    var message = new SimpleMessage() { Test = "HELLO" };

    for (int i = 0; i < 100; i++)
    {
        advancedBus.Publish(exchange, routingKey, true, false, new Message<SimpleMessage>(message));
    }

    advancedBus.Dispose();
}