I'm currently reworking a microservices-based solution into a modular monolith with four APIs (pro, cyclist, management, thirdparty). One of the changes that needs to be made is adapting the topology of our broker (RabbitMQ) so that it fits our requirements. These requirements are shown in the diagram below.
The idea is that we currently use the Request/Response mechanism for all our commands and queries, and the Publish mechanism for events, meaning that we always expect a response when issuing a query (obviously) or a command.
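For context, a minimal request/response round trip in MassTransit looks something like this (TestQuery and TestQueryResult are hypothetical contract types, and bus is the IBusControl instance; this is only an illustration of the mechanism):

// issue a query and await the response; per the requirements below, the
// consumer that executes it must live in an instance of the same executable
var client = bus.CreateRequestClient<TestQuery>();
var response = await client.GetResponse<TestQueryResult>(new TestQuery());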
We want the topology to support scaling, so that if any API executable (say, API1) runs multiple instances:
commands/queries issued by any instance of API1 will be executed by a consumer running in any instance of API1; this means that if both the API1 and API2 executables host the same consumer, API2's consumers must not execute commands/queries issued by API1
when scaling out, queues for commands and queries are not duplicated; new consumers are simply added and round-robin delivery kicks in
events are always received by all registered consumers, so scaling out creates new queues
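To make these requirements concrete, here is a sketch of the broker layout we are aiming for (all exchange, queue, and routing-key names are illustrative only):

SubmitOrder (direct exchange, one per command/query type)
    routing key "pro"     -> queue "submit-order-pro"     <- all pro instances, round robin
    routing key "cyclist" -> queue "submit-order-cyclist" <- all cyclist instances, round robin
OrderSubmitted (fanout exchange, one per event type)
    -> queue "order-submitted-pro-1" <- pro instance 1
    -> queue "order-submitted-pro-2" <- pro instance 2 (a new queue per scaled instance)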
Right now I'm trying to figure out how to create this topology in MassTransit, but I can't seem to get rid of the default publish exchange of type fanout. Here's the code that I use for automatic registration of command/query endpoints and queues:
private static IRabbitMqBusFactoryConfigurator AddNonEventConsumer<TConsumer>(
    IRabbitMqBusFactoryConfigurator config,
    IRegistration context)
    where TConsumer : class, IConsumer
{
    var routingKey = Assembly.GetEntryAssembly().GetName().Name;

    // extract the message type from the consumer's IConsumer<TMessage> interface
    var messageType = typeof(TConsumer)
        .GetInterfaces()
        .FirstOrDefault(i => i.IsGenericType)
        ?.GetGenericArguments()
        .First();

    if (messageType == null)
    {
        throw new InvalidOperationException(
            $"Message type could not be extracted from the consumer type. ConsumerTypeName=[{typeof(TConsumer).Name}]");
    }

    config.ReceiveEndpoint(e =>
    {
        // var exchangeName = new StringBuilder(messageType.FullName)
        //     .Replace($".{messageType.Name}", string.Empty)
        //     .Append($":{messageType.Name}")
        //     .ToString();
        var exchangeName = messageType.FullName;

        e.ConfigureConsumeTopology = false;
        e.ExchangeType = ExchangeType.Direct;
        e.Consumer<TConsumer>(context);

        e.Bind(exchangeName, b =>
        {
            e.ExchangeType = ExchangeType.Direct;
            b.RoutingKey = routingKey;
        });
    });

    config.Send<TestCommand>(c =>
    {
        c.UseRoutingKeyFormatter(x => routingKey);
    });

    config.Publish<TestCommand>(c =>
    {
        c.ExchangeType = ExchangeType.Direct;
    });

    return config;
}
Again, we do want to use the Request/Response mechanism for queries/commands and the Publish mechanism for events (events are not part of this question, that's a topic of its own; just queries/commands here).
The question is: how do I configure the endpoints and queues in this method to achieve the desired topology?
Alternative question: how else can I achieve my goal?
Cyclist? Pro? What kind of modular monolith is this anyway??
You're almost there, but you need to configure a couple of additional items. First, when sending, you'll need to set the routing key, which can be done using a routing key formatter. Also, configure the message type to use a direct exchange.
config.Send<TestCommand>(x =>
{
    x.UseRoutingKeyFormatter(context => /* something that gets your string, pro/cyclist */);
});

config.Publish<TestCommand>(c =>
{
    c.ExchangeType = ExchangeType.Direct;
});
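In this case, since commands and queries are supposed to round-trip back to the executable that issued them, the routing key formatter can simply return the entry assembly name, mirroring the binding in the question's code (a sketch under that assumption):

config.Send<TestCommand>(x =>
{
    // route to the queue bound with this executable's name, so the command is
    // consumed by an instance of the same API that issued it
    x.UseRoutingKeyFormatter(context => Assembly.GetEntryAssembly().GetName().Name);
});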
Also, if you're using custom exchange names, I'd add a custom entity name formatter. This will change the exchange names used for message types, so you can stick with message types in your application – keeping all the magic string stuff in one place.
class CustomEntityNameFormatter :
    IEntityNameFormatter
{
    public string FormatEntityName<T>()
        where T : class
    {
        // e.g. "Some.Namespace.TestCommand" becomes "Some.Namespace:TestCommand"
        return new StringBuilder(typeof(T).FullName)
            .Replace($".{typeof(T).Name}", string.Empty)
            .Append($":{typeof(T).Name}")
            .ToString();
    }
}
config.MessageTopology
    .SetEntityNameFormatter(new CustomEntityNameFormatter());
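With this formatter in place, a message type such as MyApp.Contracts.TestCommand (a hypothetical name) is published to an exchange named MyApp.Contracts:TestCommand instead of the default full type name, which matches the commented-out naming logic in the question.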
Then, when configuring your receive endpoint, do not change the endpoint's exchange type; only change the bound exchange to match the publish topology. Using an endpoint name formatter custom to your application, you can configure it manually as shown.
var routingKey = Assembly.GetEntryAssembly().GetName().Name;
var endpointNameFormatter = new CustomEndpointNameFormatter();

config.ReceiveEndpoint(endpointNameFormatter.Message<TMessage>(), e =>
{
    e.ConfigureConsumeTopology = false;

    e.Bind<TMessage>(b =>
    {
        b.ExchangeType = ExchangeType.Direct;
        b.RoutingKey = routingKey;
    });

    e.Consumer<TConsumer>(context);
});
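The answer assumes a CustomEndpointNameFormatter that you provide yourself. A minimal sketch of just the Message<T>() method used above (this is an illustration, not MassTransit's full IEndpointNameFormatter interface, and the naming scheme is an assumption):

class CustomEndpointNameFormatter
{
    public string Message<T>()
        where T : class
    {
        // derive the queue name from the message type; lower-casing the type
        // name is just one possible convention
        return typeof(T).Name.ToLowerInvariant();
    }
}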
This is just a rough sample to get you started. There is a direct exchange sample on GitHub that you can look at as well, to see how various things are done in there. You could likely clean up the message type detection as well, to avoid all the type-based reflection, but that's more complex.
I'm trying to create a filter that would be executed for all message types. Ideally, you would only have to register the filter once, instead of doing it for each consumer (I want to do the same thing on the publish side as well). I need it to be within a lifetime scope. It's just going to pop a value out of a header and assign it to a lifetime-scoped object that my DI container will provide (the publish side does the reverse).
I watched Chris Patterson's Twitch video on middleware, and I think it comes close to what I want around the 38-minute mark, but he registers the filter for a specific consumer. On the consume side, I think I need a filter on the ConsumeContext, but I just don't know how to register the filter in a way that makes it apply to all consumers. I'm using MT 7 and Autofac. Can anyone show me some example code for registering a scoped filter that will work for all consumers (and, if it's very different, one that will work for all publishers)?
If you need a filter that lives in the lifetime scope, you need to use scoped filters (requires MassTransit v7). This will register the filter for every consumer, so that it is always executed. You do need to make your filter generic, with T as the message type, which you can choose to use or ignore.
public class MyFilter<T> :
    IFilter<ConsumeContext<T>>
    where T : class
{
    readonly SomeScopedObject _obj;

    public MyFilter(SomeScopedObject obj)
    {
        _obj = obj;
    }

    public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        // do your thing with _obj

        await next.Send(context);
    }

    public void Probe(ProbeContext context)
    {
    }
}
Then, on your receive endpoint(s), configure the filter before the consumers.
e.UseConsumeFilter(typeof(MyFilter<>), context);
This will configure, for every consumer and message type, a version of your filter that executes within the container scope of the consumer.
You can do the same for publish/send.
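A sketch of what the publish side could look like, assuming MassTransit v7's scoped filter support (the x-my-header name and the Value property on SomeScopedObject are hypothetical):

public class MyPublishFilter<T> :
    IFilter<PublishContext<T>>
    where T : class
{
    readonly SomeScopedObject _obj;

    public MyPublishFilter(SomeScopedObject obj)
    {
        _obj = obj;
    }

    public Task Send(PublishContext<T> context, IPipe<PublishContext<T>> next)
    {
        // the reverse of the consume side: copy the scoped value into a header
        context.Headers.Set("x-my-header", _obj.Value);

        return next.Send(context);
    }

    public void Probe(ProbeContext context)
    {
    }
}

It would then be registered on the bus configurator with config.UsePublishFilter(typeof(MyPublishFilter<>), context);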
Documentation is on the site.
Have you checked the docs? They include a configuration example with retries configured at both the endpoint level and the consumer level.
Bus.Factory.CreateUsingInMemory(cfg =>
{
    cfg.ReceiveEndpoint("input-queue", e =>
    {
        e.UseMessageRetry(r =>
        {
            r.Immediate(5);
            r.Handle<DataException>(x => x.Message.Contains("SQL"));
        });

        e.Consumer<MyConsumer>(c => c.UseMessageRetry(r =>
        {
            r.Interval(10, TimeSpan.FromMilliseconds(200));
            r.Ignore<ArgumentNullException>();
            r.Ignore<DataException>(x => x.Message.Contains("SQL"));
        }));
    });
});
I'm using the MassTransit + RabbitMQ combo in an ASP.NET Core app. The relevant configuration part is below:
public IBusControl CreateBus(IServiceProvider serviceProvider)
{
    var options = serviceProvider.GetService<IConfiguration>().GetOptions<RabbitMqOptions>("rabbitmq");

    return Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        cfg.Host($"rabbitmq://{options.Host}:{options.Port}");

        cfg.ReceiveEndpoint("ingest-products", ep =>
        {
            ep.PrefetchCount = 16;
            ep.UseMessageRetry(r => r.Interval(2, 1000));

            ep.Bind<CreateProducts>(x =>
            {
                x.RoutingKey = "marketplace";
                x.ExchangeType = ExchangeType.Direct;
                x.AutoDelete = false;
                x.Durable = true;
            });

            ep.ConfigureConsumer<CreateProductsConsumer>(serviceProvider);
        });
    });
}
When I run the application, I'm getting this exception:

ArgumentException: The MassTransit.RabbitMqTransport.Topology.Entities.ExchangeEntity entity settings did not match the existing entity
What am I doing wrong here? Am I not supposed to configure a consumer with the IServiceProvider after I bind an exchange to a receive endpoint? If not, how do I configure it properly (I still want dependencies injected into my consumers)?
If you are binding message types to the receive endpoint that are the same as the message types consumed by the consumer, you need to disable the automatic exchange binding.
// for MassTransit v6 and earlier
endpoint.BindMessageExchanges = false;

// for MassTransit v7 and onward
endpoint.ConfigureConsumeTopology = false;
This will prevent MassTransit from binding the consumer's message types on the endpoint.
I spent most of my day trying to find this. In v7+ it was renamed to:
endpoint.ConfigureConsumeTopology = false;
You need to disable automatic message exchange binding in order for your custom message binding to work.
endpoint.ConfigureConsumeTopology = false;
Following the source code on GitHub, we can see that the ConfigureConsumeTopology property supersedes the previous options, such as BindMessageTopics, BindMessageExchanges, and SubscribeMessageTopics.
How does one get the results of a "Saved Search" of type "Deleted Record" in NetSuite? Other search types are obvious (CustomerSearchAdvanced, ItemSearchAdvanced, etc...), but this one seems to have no reference online, just documentation around deleting records, not running saved searches on them.
Update 1
I should clarify a little bit more what I'm trying to do. In NetSuite you can run (and save) Saved Searches on the record type "Deleted Record". I believe you are able to access at least five columns (excluding user-defined ones) through this process from the web interface:
Date Deleted
Deleted By
Context
Record Type
Name
You are also able to set up search criteria as part of the Saved Search. I would like to access a series of these Saved Searches already present in my system, utilizing their existing search criteria and retrieving data from all five of their displayed columns.
The Deleted Record record isn't supported in SuiteTalk as of version 2016_2, which means you can't run a Saved Search and pull down the results.
This is not uncommon when integrating with NetSuite. :(
What I've always done in these situations is create a RESTlet (NetSuite's wannabe RESTful API framework) SuiteScript that will run the search (or do whatever is possible with SuiteScript and not possible with SuiteTalk) and return the results.
From the documentation:

You can deploy server-side scripts that interact with NetSuite data following RESTful principles. RESTlets extend the SuiteScript API to allow custom integrations with NetSuite. Some benefits of using RESTlets include the ability to:

Find opportunities to enhance usability and performance by implementing a RESTful integration that is more lightweight and flexible than SOAP-based web services.
Support stateless communication between client and server.
Control client and server implementation.
Use built-in authentication based on token or user credentials in the HTTP header.
Develop mobile clients on platforms such as iPhone and Android.
Integrate external Web-based applications such as Gmail or Google Apps.
Create backends for Suitelet-based user interfaces.

RESTlets offer ease of adoption for developers familiar with SuiteScript and support more behaviors than NetSuite's SOAP-based web services, which are limited to those defined as SuiteTalk operations. RESTlets are also more secure than Suitelets, which are made available to users without login. For a more detailed comparison, see RESTlets vs. Other NetSuite Integration Options.
In your case this would be a near-trivial script to create; it would gather the results and return them JSON-encoded (easiest) or in whatever format you need.
You will likely spend more time getting the Token Based Authentication (TBA) working than you will writing the script.
[Update] Adding some code samples related to what I mentioned in the comments below:
Note that the SuiteTalk proxy object model is frustrating in that it lacks the inheritance it could make such good use of. So you end up with code like your SafeTypeCastName(). Reflection is one of the best tools in my toolbox when it comes to working with SuiteTalk proxies. For example, all *RecordRef types have common fields/props, so reflection saves you type checking all over the place to work with the object you suspect you have.
// assumed constant: the SuiteTalk Record property holding custom fields
const string CustomFieldPropertyName = "customFieldList";

public static TType GetProperty<TType>(object record, string propertyID)
{
    PropertyInfo pi = record.GetType().GetProperty(propertyID);
    return (TType)pi.GetValue(record, null);
}

public static string GetInternalID(Record record)
{
    return GetProperty<string>(record, "internalId");
}

public static string GetInternalID(BaseRef recordRef)
{
    PropertyInfo pi = recordRef.GetType().GetProperty("internalId");
    return (string)pi.GetValue(recordRef, null);
}

public static CustomFieldRef[] GetCustomFieldList(Record record)
{
    return GetProperty<CustomFieldRef[]>(record, CustomFieldPropertyName);
}
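For example, combined with the DeletedRecord results shown further down, the BaseRef overload reads the internal ID off either a RecordRef or a CustomRecordRef without any casting:

// deletedRecord.record is a BaseRef; the concrete proxy type doesn't matter
string internalId = GetInternalID(deletedRecord.record);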
Credit to @SteveK for both his revised and final answer. I think long-term I'm going to have to implement what he suggests; short-term, I tried implementing his first solution (getDeleted), and I'd like to add some more detail in case anyone needs to use this method in the future:
//private NetSuiteService nsService = new DataCenterAwareNetSuiteService("login");
//private TokenPassport createTokenPassport() { ... }

private IEnumerable<DeletedRecord> DeletedRecordSearch()
{
    List<DeletedRecord> results = new List<DeletedRecord>();
    int totalPages = Int32.MaxValue;
    int currentPage = 1;

    while (currentPage <= totalPages)
    {
        //You may need to reauthenticate here
        nsService.tokenPassport = createTokenPassport();

        var queryResults = nsService.getDeleted(new GetDeletedFilter
        {
            //Add any filters here...
            //Example
            /*
            deletedDate = new SearchDateField()
            {
                @operator = SearchDateFieldOperator.after,
                operatorSpecified = true,
                searchValue = DateTime.Now.AddDays(-49),
                searchValueSpecified = true,
                predefinedSearchValueSpecified = false,
                searchValue2Specified = false
            }
            */
        }, currentPage);

        currentPage++;
        totalPages = queryResults.totalPages;
        results.AddRange(queryResults.deletedRecordList);
    }

    return results;
}
private Tuple<string, string> SafeTypeCastName(
    Dictionary<string, string> customList,
    BaseRef input)
{
    if (input.GetType() == typeof(RecordRef)) {
        return new Tuple<string, string>(((RecordRef)input).name,
            ((RecordRef)input).type.ToString());
    }
    //Not sure why "Last Sales Activity Record" doesn't return a type...
    else if (input.GetType() == typeof(CustomRecordRef)) {
        return new Tuple<string, string>(((CustomRecordRef)input).name,
            customList.ContainsKey(((CustomRecordRef)input).internalId) ?
                customList[((CustomRecordRef)input).internalId] :
                "Last Sales Activity Record");
    }
    else {
        return new Tuple<string, string>("", "");
    }
}
public Dictionary<string, string> GetListCustomTypeName()
{
    //You may need to reauthenticate here
    nsService.tokenPassport = createTokenPassport();

    return nsService.search(new CustomListSearch())
        .recordList.Select(a => (CustomList)a)
        .ToDictionary(a => a.internalId, a => a.name);
}
//Main code starts here
var results = DeletedRecordSearch();
var customList = GetListCustomTypeName();

var demoResults = results.Select(a => new
{
    DeletedDate = a.deletedDate,
    Type = SafeTypeCastName(customList, a.record).Item2,
    Name = SafeTypeCastName(customList, a.record).Item1
}).ToList();
I have to apply all the filters API-side, and this only returns three columns:
Date Deleted
Record Type (not formatted in the same way as the web UI)
Name
I'm currently using SignalR to communicate between a server and multiple separate processes spawned by the server itself.
Both Server & Client are coded in C#. I'm using SignalR 2.2.0.0
On the server side, I use OWIN to run the server.
I am also using LightInject as an IoC container.
Here is my code:
public class AgentManagementStartup
{
    public void ConfigurationOwin(IAppBuilder app, IAgentManagerDataStore dataStore)
    {
        var serializer = new JsonSerializer
        {
            PreserveReferencesHandling = PreserveReferencesHandling.Objects,
            TypeNameHandling = TypeNameHandling.Auto,
            TypeNameAssemblyFormat = FormatterAssemblyStyle.Simple
        };

        var container = new ServiceContainer();
        container.RegisterInstance(dataStore);
        container.RegisterInstance(serializer);
        container.Register<EventHub>();
        container.Register<ManagementHub>();

        var config = container.EnableSignalR();
        app.MapSignalR("", config);
    }
}
On the client side, I register this way:
public async Task Connect()
{
    try
    {
        m_hubConnection = new HubConnection(m_serverUrl, false);
        m_hubConnection.Closed += OnConnectionClosed;
        m_hubConnection.TraceLevel = TraceLevels.All;
        m_hubConnection.TraceWriter = Console.Out;

        var serializer = m_hubConnection.JsonSerializer;
        serializer.TypeNameHandling = TypeNameHandling.Auto;
        serializer.PreserveReferencesHandling = PreserveReferencesHandling.Objects;

        m_managementHubProxy = m_hubConnection.CreateHubProxy(AgentConstants.ManagementHub.Name);
        m_managementHubProxy.On("closeRequested", CloseRequestedCallback);

        await m_hubConnection.Start();
    }
    catch (Exception e)
    {
        m_logger.Error("Exception encountered in Connect method", e);
    }
}
On the server side I send a close request the following way:
var managementHub = GlobalHost.ConnectionManager.GetHubContext<ManagementHub>();
managementHub.Clients.All.closeRequested();
I never receive any callback in CloseRequestedCallback, and I don't get any errors in the logs on either the client side or the server side.
What did I do wrong here?
EDIT 09/10/15
After some research and modifications, I found out it was linked to the replacement of the IoC container. When I removed everything linked to LightInject and used SignalR as-is, everything worked. I was surprised by this, since LightInject documents its integration with SignalR.
After I found this, I realised that the GlobalHost.DependencyResolver was not the same as the one I was supplying to the HubConfiguration. Once I added
GlobalHost.DependencyResolver = config.Resolver;
before
app.MapSignalR("", config);
I am now receiving callbacks within CloseRequestedCallback. Unfortunately, I get the following error as soon as I call a method from the Client to the Server:
Microsoft.AspNet.SignalR.Client.Infrastructure.SlowCallbackException: Possible deadlock detected. A callback registered with "HubProxy.On" or "Connection.Received" has been executing for at least 10 seconds.
I am not sure about the fix I found and what impact it could have on the system. Is it OK to replace GlobalHost.DependencyResolver with my own without registering all of its default content?
EDIT 2 09/10/15
According to this, changing the GlobalHost.DependencyResolver is the right thing to do. I'm still left with no explanation for the SlowCallbackException, since I do nothing in any of my callbacks (yet).
Issue 1: IoC Container + Dependency Injection
If you want to change the IoC container for your HubConfiguration, you also need to change the one on GlobalHost so that it returns the same hub when you request it outside of a hub context.
Issue 2: Unexpected SlowCallbackException
This exception was caused by the fact that I was using SignalR within a console application. The entry point of the app cannot be an async method, so to call my initial configuration asynchronously I did the following:
private static int Main()
{
    var t = InitAsync();
    t.Wait();
    return t.Result;
}
Unfortunately for me, this causes a lot of issues, as described here and in more detail here.
By starting my InitAsync as follows:
private static int Main()
{
    Task.Factory.StartNew(async () => await InitAsync());
    m_waitInitCompletedRequest.WaitOne(TimeSpan.FromSeconds(30));
    return (int)EndpointErrorCode.Ended;
}
Everything now runs fine and I don't get any deadlocks.
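For what it's worth, a common alternative that avoids both the fire-and-forget call and the manual wait handle is to block on a thread-pool task (a sketch, assuming InitAsync returns the exit code):

private static int Main()
{
    // InitAsync runs entirely on the thread pool, so its awaits never try to
    // resume on the blocked Main thread; GetResult() rethrows the original exception
    return Task.Run(() => InitAsync()).GetAwaiter().GetResult();
}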
For more details on the issues & answers, you may also refer to the edits in my question.
We have a worker role that processes records and sends Azure Service Bus messages as needed, based on the results of a query; it's basically a queue-processing service. As part of the best practices for using SQL Azure, we have wrapped all of our query statements in a retry policy (this detects transient errors and retries based on the defined policy). Note that we actually send the message from within the using statement, so there is no 'leak' of the db variable.
Inside our using statement, ReSharper raises the 'Access to Disposed Closure' warning, most likely because we are passing our DataContext as a func parameter of the retry policy.
My question is: am I OK in my assumption that ReSharper is not detecting this pattern correctly, or are there alternative ways to write these functions that would prevent the warning above?
The Code
The db variable in the retryPolicy.ExecuteAction calls is what is getting flagged.
using (var db = new MyEntities())
{
    var thingsToUpdate = retryPolicy.ExecuteAction(() =>
        db.QueueTable.Where(x => x.UpdateType == "UpdateType" && x.DueNext < DateTime.UtcNow)
            .Take(30)
            .ToList());

    if (!thingsToUpdate.Any())
    {
        return;
    }

    while (thingsToUpdate.Any())
    {
        var message = new ServiceMessage
        {
            Type = "UpdateType",
            Requests = thingsToUpdate.Select(x => new ServiceMessageRequest
            {
                LastRan = x.LastRan,
                ParentItemId = x.ThingId,
                OwnerId = x.Thing.ForiegnKeyid
            }).ToList()
        };

        SendMessage("UpdateType", message);

        foreach (var thing in thingsToUpdate)
        {
            thing.LastRan = DateTime.UtcNow;
            thing.DueNext = DateTime.UtcNow.AddMinutes(10);
        }

        retryPolicy.ExecuteAction(() => db.SaveChanges());

        thingsToUpdate = retryPolicy.ExecuteAction(() =>
            db.QueueTable.Where(x => x.UpdateType == "UpdateType" && x.DueNext < DateTime.UtcNow)
                .Take(30)
                .ToList());
    }
}
Additional Information
I also posted this to the ReSharper forums for a broader audience, and this particular issue was addressed in a little more detail over there. For posterity, you can find the question here.
I guess your ExecuteAction executes your lambda immediately. In that case, you should annotate the lambda parameter of your ExecuteAction method with ReSharper's [InstantHandle] attribute.
For example:
public void ExecuteAction([InstantHandle] Action action)
{
...
}
You can either reference JetBrains.Annotations.dll to get this attribute or just copy all of the attributes into your project. See more info on the JetBrains site here and here.
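If ExecuteAction belongs to a library you can't annotate (as with the Transient Fault Handling retry policy from the question), one option is a thin wrapper you do own. A hedged sketch, assuming the library exposes ExecuteAction<TResult>(Func<TResult>):

public class AnnotatedRetryPolicy
{
    private readonly RetryPolicy m_policy;

    public AnnotatedRetryPolicy(RetryPolicy policy)
    {
        m_policy = policy;
    }

    // [InstantHandle] tells ReSharper the delegate is invoked before this call
    // returns, so capturing a disposable (like the DataContext) is safe
    public TResult ExecuteAction<TResult>([InstantHandle] Func<TResult> func)
    {
        return m_policy.ExecuteAction(func);
    }
}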