How to discard faulted messages when using the ConfigureEndpoints method in MassTransit? - c#

This article provides an example of how to do this when configuring an endpoint manually.
Just like this:
cfg.ReceiveEndpoint("input-queue", ec =>
{
    ec.DiscardFaultedMessages();
});
But I've got a lot of consumers and I don't want to configure each of them manually, so I use the AddConsumers and ConfigureEndpoints methods.
Like this:
services.AddMassTransit(cfg =>
{
    cfg.AddConsumers(Assembly.GetExecutingAssembly());
    cfg.AddBus(sp => Bus.Factory.CreateUsingRabbitMq(x => x.ConfigureEndpoints(sp)));
});
If I additionally call the ReceiveEndpoint method (before or after calling ConfigureEndpoints), the exception "A receive endpoint with the same key was already added" is thrown.
Is there a way to configure a specific endpoint when using ConfigureEndpoints method?

When using ConfigureEndpoints, consumers, sagas, and activities are configured on receive endpoints automatically. To configure the receive endpoint for a specific consumer, create a consumer definition. If scanning an assembly for consumers, matching consumer definitions will also be discovered.
public class SubmitOrderConsumerDefinition :
    ConsumerDefinition<SubmitOrderConsumer>
{
    protected override void ConfigureConsumer(IReceiveEndpointConfigurator endpointConfigurator,
        IConsumerConfigurator<SubmitOrderConsumer> consumerConfigurator)
    {
        endpointConfigurator.DiscardFaultedMessages();
    }
}
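If you register consumers explicitly instead of scanning an assembly, the definition can be added alongside the consumer. A minimal sketch, assuming your MassTransit version has the AddConsumer overload that accepts a definition type:
services.AddMassTransit(cfg =>
{
    // Registers the consumer together with its definition; ConfigureEndpoints
    // then applies the definition when creating the receive endpoint.
    cfg.AddConsumer<SubmitOrderConsumer, SubmitOrderConsumerDefinition>();

    cfg.AddBus(sp => Bus.Factory.CreateUsingRabbitMq(x => x.ConfigureEndpoints(sp)));
});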

Related

MassTransit custom Query/Command topology for Request/Response mechanism

I'm currently reworking a microservices-based solution into a modular monolith with four APIs (pro, cyclist, management, thirdparty). One of the changes that needs to be made is adapting the topology of our broker (RabbitMQ) so it fits our requirements, which are described below.
The idea is that we currently always use the Request/Response mechanism for all our commands and queries and the Publish mechanism for events, meaning that we always expect a response when issuing a query (obviously) or a command.
We want the topology to support scaling, so that if API1 (any instance of this executable) has multiple instances:
- commands/queries issued by any instance of API1 will be executed by the consumers running in any instance of API1 - this means that if both the API1 and API2 executables have the same consumer, API2's consumers cannot execute commands/queries issued by API1
- when scaling, queues for commands and queries should not be scaled; new consumers will just be added and round robin should kick in
- events are always received by all registered consumers, so when scaling, new queues are created
Right now I'm trying to figure out how to create this topology in MassTransit, but I can't seem to get rid of the default publish exchange of type fanout. Here's the code that I use for automatic registration of command/query endpoints and queues:
private static IRabbitMqBusFactoryConfigurator AddNonEventConsumer<TConsumer>(
    IRabbitMqBusFactoryConfigurator config,
    IRegistration context)
    where TConsumer : class, IConsumer
{
    var routingKey = Assembly.GetEntryAssembly().GetName().Name;

    var messageType = typeof(TConsumer)
        .GetInterfaces()
        ?.First(i => i.IsGenericType)
        ?.GetGenericArguments()
        ?.First();

    if (messageType == null)
    {
        throw new InvalidOperationException(
            $"Message type could not be extracted from the consumer type. ConsumerTypeName=[{typeof(TConsumer).Name}]");
    }

    config.ReceiveEndpoint(e =>
    {
        // var exchangeName = new StringBuilder(messageType.FullName)
        //     .Replace($".{messageType.Name}", string.Empty)
        //     .Append($":{messageType.Name}")
        //     .ToString();
        var exchangeName = messageType.FullName;

        e.ConfigureConsumeTopology = false;
        e.ExchangeType = ExchangeType.Direct;
        e.Consumer<TConsumer>(context);
        e.Bind(exchangeName, b =>
        {
            e.ExchangeType = ExchangeType.Direct;
            b.RoutingKey = routingKey;
        });
    });

    config.Send<TestCommand>(c =>
    {
        c.UseRoutingKeyFormatter(x => routingKey);
    });

    config.Publish<TestCommand>(c =>
    {
        c.ExchangeType = ExchangeType.Direct;
    });

    return config;
}
Again, we do want to use the Request/Response mechanism for queries/commands and the Publish mechanism for events (events are not part of this question, they're a topic of their own; just queries/commands).
The question is - how do I configure endpoints and queues in this method in order to achieve the desired topology?
Alternative question - how else can I achieve my goal?
Cyclist? Pro? What kind of modular monolith is this anyway??
You're almost there, but need to configure a couple of additional items. First, when publishing, you'll need to set the routing key, which can be done using a routing key formatter. Also, configure the message type to use a direct exchange.
config.Send<TestCommand>(x =>
{
    x.UseRoutingKeyFormatter(context => /* something that gets your string, pro/cyclist */);
});
config.Publish<TestCommand>(c =>
{
    c.ExchangeType = ExchangeType.Direct;
});
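For example, reusing the entry-assembly routing key from the question (purely illustrative; any stable per-module string works):
var routingKey = Assembly.GetEntryAssembly().GetName().Name;

config.Send<TestCommand>(x =>
{
    x.UseRoutingKeyFormatter(context => routingKey);
});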
Also, if you're using custom exchange names, I'd add a custom entity name formatter. This will change the exchange names used for message types, so you can stick with message types in your application – keeping all the magic string stuff in one place.
class CustomEntityNameFormatter :
    IEntityNameFormatter
{
    public string FormatEntityName<T>()
        where T : class
    {
        return new StringBuilder(typeof(T).FullName)
            .Replace($".{typeof(T).Name}", string.Empty)
            .Append($":{typeof(T).Name}")
            .ToString();
    }
}

config.MessageTopology
    .SetEntityNameFormatter(new CustomEntityNameFormatter());
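As a quick illustration of what this produces (the namespace below is made up), the formatter turns a full type name into the "Namespace:TypeName" form built manually in the commented-out code from the question:
var formatter = new CustomEntityNameFormatter();

// For a type MyCompany.Contracts.TestCommand this prints "MyCompany.Contracts:TestCommand"
Console.WriteLine(formatter.FormatEntityName<TestCommand>());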
Then, when configuring your receive endpoint, do not change the endpoint's exchange type; only change the bound exchange to match the publish topology. Using an endpoint name formatter, custom to your application, you can configure it manually as shown.
var routingKey = Assembly.GetEntryAssembly().GetName().Name;
var endpointNameFormatter = new CustomEndpointNameFormatter();

config.ReceiveEndpoint(endpointNameFormatter.Message<TMessage>(), e =>
{
    e.ConfigureConsumeTopology = false;
    e.Bind<TMessage>(b =>
    {
        b.ExchangeType = ExchangeType.Direct;
        b.RoutingKey = routingKey;
    });
    e.Consumer<TConsumer>(context);
});
This is just a rough sample to get you started. There is a direct exchange sample on GitHub that you can look at as well to see how various things are done there. You could likely clean up the message type detection too, to avoid all the type-based reflection, but that's more complex.

Mass Transit Filter for all Consumers/All Message Types

I'm trying to create a filter that would be executed for all message types. Ideally, you would only have to register the filter once, instead of doing it for each consumer. (I want to do the same thing on the publish side as well.) I would need it to be within a lifetime scope. It's just going to pop a value out of a header and assign it to a lifetime-scoped object that my DI container will provide (the publish side does the reverse).
I watched Chris Patterson's Twitch video on middleware, and I think it comes close to what I want around the 38 minute mark, but he registers the filter for a specific consumer. On the consumer side, I think I need a filter off of the ConsumeContext, but I just don't know how to register the filter in a way that it will be used for all consumers. I'm using MT 7 and Autofac. Can anyone show me some example code on how to register a scoped filter that will work for all consumers (and, if it's very different, one that will work for all publishers)?
If you need a filter that is in the lifetime scope, you need to use scoped filters (requires MassTransit v7). This will register the filter for any consumer, so that it is executed. You do need to make your filter generic, with T as a message type, which you can choose to use or ignore.
public class MyFilter<T> :
    IFilter<ConsumeContext<T>>
    where T : class
{
    SomeScopedObject _obj;

    public MyFilter(SomeScopedObject obj)
    {
        _obj = obj;
    }

    public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        // do your thing with _obj

        await next.Send(context);
    }

    public void Probe(ProbeContext context)
    {
    }
}
Then, on your receive endpoint(s), configure the filter before the consumers.
e.UseConsumeFilter(typeof(MyFilter<>));
This will configure for every consumer/message a version of your filter that executes within the container scope of the consumer.
You can do the same for publish/send.
Documentation is on the site.
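As a rough sketch of where that registration can live with container-based configuration (the overloads below follow the MassTransit v7 documentation, but treat the exact signatures as an assumption and check the docs for your version; MyPublishFilter<T> is a hypothetical publish-side filter implementing IFilter<PublishContext<T>>):
services.AddMassTransit(x =>
{
    x.AddConsumers(Assembly.GetExecutingAssembly());

    x.UsingRabbitMq((context, cfg) =>
    {
        // Applied to every consumer on every receive endpoint, resolved from the consumer's container scope
        cfg.UseConsumeFilter(typeof(MyFilter<>), context);

        // Same idea on the publish side
        cfg.UsePublishFilter(typeof(MyPublishFilter<>), context);

        cfg.ConfigureEndpoints(context);
    });
});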
Have you checked the docs? They include a configuration example with retries configured at both the endpoint level and the consumer level.
Bus.Factory.CreateUsingInMemory(cfg =>
{
    cfg.ReceiveEndpoint("input-queue", e =>
    {
        e.UseMessageRetry(r =>
        {
            r.Immediate(5);
            r.Handle<DataException>(x => x.Message.Contains("SQL"));
        });

        e.Consumer<MyConsumer>(c => c.UseMessageRetry(r =>
        {
            r.Interval(10, TimeSpan.FromMilliseconds(200));
            r.Ignore<ArgumentNullException>();
            r.Ignore<DataException>(x => x.Message.Contains("SQL"));
        }));
    });
});

Applying custom request options for a remote SPARQL connector in dotNetRdf

I'm trying to add custom headers to the HTTP requests a SPARQL endpoint connector issues. The connector can use a custom remote endpoint, which inherits an ApplyCustomRequestOptions method I can override. The documentation for that method says
[...] add any additional custom request options/headers to the request.
However my overridden method is never called (so my custom options are not applied, so I can't add the headers).
The following code works as expected, except that my ApplyCustomRequestOptions is never invoked:
using System;
using System.Net;
using VDS.RDF.Query;
using VDS.RDF.Storage;

class Program
{
    static void Main(string[] args)
    {
        var endpointUri = new Uri("https://query.wikidata.org/sparql");
        var endpoint = new CustomEndpoint(endpointUri);

        using (var connector = new SparqlConnector(endpoint))
        {
            var result = connector.Query("SELECT * WHERE {?s ?p ?o} LIMIT 1");
        }
    }
}

public class CustomEndpoint : SparqlRemoteEndpoint
{
    public CustomEndpoint(Uri endpointUri) : base(endpointUri) { }

    protected override void ApplyCustomRequestOptions(HttpWebRequest httpRequest)
    {
        // This is never executed.
        base.ApplyCustomRequestOptions(httpRequest);
        // Implementation omitted.
    }
}
Is this the correct way to use these methods? If it isn't, what is it?
BTW this is dotNetRdf 1.0.12, .NET 4.6.1. I've tried multiple SPARQL endpoints, multiple queries (SELECT & CONSTRUCT) and multiple invocations of SparqlConnector.Query.
This is a bug. I've found the problem and fixed it and submitted a PR. You can track the status of the issue here: https://github.com/dotnetrdf/dotnetrdf/issues/103

EasyNetQ - Consuming messages from IAdvancedBus by non-generic message type

We could use the auto subscriber functionality and the normal Subscribe method with a subscription id, but that solution is a bit ugly in terms of the RabbitMQ queues and exchanges it creates; it was difficult to follow and analyze the messages.
I use the advanced bus and created my own exchanges and queues. Publishing works fine, but the consuming part is a bit of a disappointment. Currently it works this way:
var consumer = bus.Advanced.Consume(queue, registration =>
{
    registration.Add<MESSAGE1>((message, info) => ProcessMessage(message));
    registration.Add<MESSAGE2>((message, info) => ProcessMessage(message));
    registration.Add<MESSAGE3>((message, info) => ProcessMessage(message));
    registration.Add<MESSAGE4>((message, info) => ProcessMessage(message));
    registration.Add<MESSAGE5>((message, info) => ProcessMessage(message));
});
This is fine, but it becomes a problem if you have a hundred listeners.
I checked the registration type IHandlerRegistration and it only exposes generic Add methods; can we have a non-generic way?
Like this:
var consumer = bus.Advanced.Consume(queue, registration =>
{
    registration.Add(typeof(MESSAGE1), (message, info) => ProcessMessage(message));
    registration.Add(typeof(MESSAGE2), (message, info) => ProcessMessage(message));
    registration.Add(typeof(MESSAGE3), (message, info) => ProcessMessage(message));
});
That way we could scan an assembly for the types that use these messages.
Separately, I tried registering my own handler collection when constructing the bus:
RabbitHutch.CreateBus(connectionString, registeredServices =>
{
    IEasyNetQLogger logger;
    var myHandlers = myDIContainer.Resolve<IMyHandlers>();

    registeredServices.Register<IHandlerCollection>(s =>
    {
        logger = s.Resolve<IEasyNetQLogger>();
        return myHandlers;
    });

    registeredServices.Register<IHandlerRegistration>(s => myHandlers);
});
But it does not respect my registration: when I look at the advanced bus consume code, it creates the handler collection from a factory and does NOT resolve it from the container. I believe this is the root cause.
To work around this, I used this method from IAdvancedBus:
IDisposable Consume(IQueue queue, Func<byte[], MessageProperties, MessageReceivedInfo, Task> onMessage);
I rolled my own message dispatcher and deserialize any message from the queue. The dispatcher determines the message type and dispatches to the specific handler, which is created using reflection.
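A minimal sketch of such a dispatcher, assuming JSON-serialized message bodies (Newtonsoft.Json here) and a hypothetical IRawMessageHandler abstraction; reading the type name from MessageProperties.Type is also an assumption about how the messages were published:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using EasyNetQ;
using Newtonsoft.Json;

public interface IRawMessageHandler
{
    Type MessageType { get; }
    Task Handle(object message, MessageReceivedInfo info);
}

public class MessageDispatcher
{
    readonly Dictionary<string, IRawMessageHandler> _handlersByTypeName;

    public MessageDispatcher(IEnumerable<IRawMessageHandler> handlers)
    {
        // Key handlers by the message type's full name, which we expect to find
        // at the start of MessageProperties.Type
        _handlersByTypeName = handlers.ToDictionary(h => h.MessageType.FullName);
    }

    public Task Dispatch(byte[] body, MessageProperties properties, MessageReceivedInfo info)
    {
        var typeName = properties.Type?.Split(',')[0];
        if (typeName == null || !_handlersByTypeName.TryGetValue(typeName, out var handler))
            return Task.CompletedTask; // unknown message type: log or dead-letter as needed

        var message = JsonConvert.DeserializeObject(Encoding.UTF8.GetString(body), handler.MessageType);
        return handler.Handle(message, info);
    }
}
It plugs straight into the raw Consume overload quoted above:
var consumer = bus.Advanced.Consume(queue, dispatcher.Dispatch);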

C# WCF closing channels and using functions Func<T>

Here's the point: I have a WCF service and it is working now, so I began to work on the client side. When the application was running, an exception showed up: timeout. So I began to read; there are many examples of how to keep the connection alive, but I also found that the best approach is to create the channel, use it, and dispose of it. Honestly, I liked that. So now, reading about the best way to close the channel, there are two links that could be useful to anybody who needs them:
1. Clean up clients, the right way
2. Using Func
In the first link, this is the example:
IIdentityService _identitySvc;
...
if (_identitySvc != null)
{
    ((IClientChannel)_identitySvc).Close();
    ((IDisposable)_identitySvc).Dispose();
    _identitySvc = null;
}
So, if the channel is not null, it is closed, disposed, and assigned null. But I have a little question. In this example the channel has a .Close() method, but in my case IntelliSense is not showing a Close() method; it only exists on the factory object. So I believe I have to write it. But where: in the interface that has the contracts, or in the class that implements it? And what should this method do?
Now, the next link has something I haven't tried before: Func<T>. After reading about the goal, it's quite interesting. It creates a function that, with lambdas, creates the channel, uses it, closes it, and disposes of it. This example implements that function like a using statement. It's really good, and an excellent improvement. But I need a little help; to be honest, I can't understand the function, so a little explanation from an expert would be very useful. This is the function:
TReturn UseService<TChannel, TReturn>(Func<TChannel, TReturn> code)
{
    var chanFactory = GetCachedFactory<TChannel>();
    TChannel channel = chanFactory.CreateChannel();
    bool error = true;
    try
    {
        TReturn result = code(channel);
        ((IClientChannel)channel).Close();
        error = false;
        return result;
    }
    finally
    {
        if (error)
        {
            ((IClientChannel)channel).Abort();
        }
    }
}
And this is how is being used:
int a = 1;
int b = 2;
int sum = UseService((ICalculator calc) => calc.Add(a, b));
Console.WriteLine(sum);
Yep, I think it's really, really good, and I'd like to understand it so I can use it in my project.
And, as always, I hope this could be helpful to a lot of people.
The UseService method accepts a delegate, which uses the channel to send the request. The delegate has a parameter (the channel) and a return value; you put the call to the WCF service inside the delegate.
Inside UseService, the channel is created and passed to the delegate you provide. After the call finishes, it closes the channel.
The proxy object implements more than just your contract - it also implements IClientChannel, which allows control of the proxy's lifetime.
The code in the first example is not reliable - it will leak if the channel is already busted (e.g. the service has gone down in a session-based interaction). As you can see in the second version, in the case of an error it calls Abort on the proxy, which still cleans up the client side.
You can also do this with an extension method as follows:
enum OnError
{
    Throw,
    DontThrow
}

static class ProxyExtensions
{
    public static void CleanUp(this IClientChannel proxy, OnError errorBehavior)
    {
        try
        {
            proxy.Close();
        }
        catch
        {
            proxy.Abort();
            if (errorBehavior == OnError.Throw)
            {
                throw;
            }
        }
    }
}
However, the usage of this is a little cumbersome
((IClientChannel)proxy).CleanUp(OnError.DontThrow);
But you can make this more elegant if you make your own proxy interface that extends both your contract and IClientChannel
interface IPingProxy : IPing, IClientChannel
{
}
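A hypothetical usage sketch of that combined interface (the endpoint name and the Ping operation are assumptions, and you should verify that ChannelFactory picks up the inherited service contract in your WCF version):
var factory = new ChannelFactory<IPingProxy>("WSHttpBinding_IPing");
IPingProxy proxy = factory.CreateChannel();
try
{
    proxy.Ping(); // operation defined on the IPing contract
}
finally
{
    // No cast needed: IPingProxy already extends IClientChannel
    proxy.CleanUp(OnError.DontThrow);
}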
To answer the question left in the comment on Jason's answer, a simple example of GetCachedFactory may look like the code below. The example looks up the endpoint to create by finding the endpoint in the config file whose "contract" attribute equals the ConfigurationName of the service the factory is to create.
ChannelFactory<T> GetCachedFactory<T>()
{
    var endPointName = EndPointNameLookUp<T>();
    return new ChannelFactory<T>(endPointName);
}

// Determines the name of the endpoint the factory will create by finding the endpoint
// in the config file which matches the type of the service the factory is to create
string EndPointNameLookUp<T>()
{
    var contractName = LookUpContractName<T>();
    foreach (ChannelEndpointElement serviceElement in ConfigFileEndPoints)
    {
        if (serviceElement.Contract == contractName) return serviceElement.Name;
    }
    return string.Empty;
}

// Retrieves the list of endpoints in the config file
ChannelEndpointElementCollection ConfigFileEndPoints
{
    get
    {
        return ServiceModelSectionGroup.GetSectionGroup(
            ConfigurationManager.OpenExeConfiguration(
                ConfigurationUserLevel.None)).Client.Endpoints;
    }
}

// Retrieves the ConfigurationName of the service being created by the factory
string LookUpContractName<T>()
{
    var attributeNamedArguments = typeof(T).GetCustomAttributesData()
        .Select(x => x.NamedArguments.SingleOrDefault(ConfigurationNameQuery));
    var contractName = attributeNamedArguments.Single(ConfigurationNameQuery).TypedValue.Value.ToString();
    return contractName;
}

Func<CustomAttributeNamedArgument, bool> ConfigurationNameQuery
{
    get { return x => x.MemberInfo != null && x.MemberInfo.Name == "ConfigurationName"; }
}
A better solution, though, is to let an IoC container manage the creation of the client for you. For example, using Autofac it would look like the following. First you need to register the service like so:
var builder = new ContainerBuilder();
builder.Register(c => new ChannelFactory<ICalculator>("WSHttpBinding_ICalculator"))
    .SingleInstance();
builder.Register(c => c.Resolve<ChannelFactory<ICalculator>>().CreateChannel())
    .UseWcfSafeRelease();
container = builder.Build();
Where "WSHttpBinding_ICalculator" is the name of the endpoint in the config file. Then later you can use the service like so:
using (var lifetime = container.BeginLifetimeScope())
{
    var calc = lifetime.Resolve<ICalculator>();
    var sum = calc.Add(a, b);
    Console.WriteLine(sum);
}
