I have a fully working MassTransit saga, which runs some commands and then executes a request/response call to query a database and then ultimately return a response to the calling controller.
Locally this all works 99% of the time now (thanks to a lot of support I've received on here). However, when deployed to my Azure VM, which has a local copy of RabbitMQ and the two ASP.NET Core services running on it, the first call to the saga goes through straight away but all subsequent calls time out.
I feel like it might be related to the fact that I'm using an InMemorySagaRepository (which in theory should be fine for my use case).
The saga is configured initially like so:
InstanceState(s => s.CurrentState);
Event(() => RequestLinkEvent, x => x.CorrelateById(context => context.Message.LinkId));
Event(() => LinkCreatedEvent, x => x.CorrelateById(context => context.Message.LinkId));
Event(() => CreateLinkGroupFailedEvent, x => x.CorrelateById(context => context.Message.LinkId));
Event(() => CreateLinkFailedEvent, x => x.CorrelateById(context => context.Message.LinkId));
Event(() => RequestLinkFailedEvent, x => x.CorrelateById(context => context.Message.LinkId));
Request(() => LinkRequest, x => x.UrlRequestId, cfg =>
{
cfg.ServiceAddress = new Uri($"{hostAddress}/{nameof(SelectUrlByPublicId)}");
cfg.SchedulingServiceAddress = new Uri($"{hostAddress}/{nameof(SelectUrlByPublicId)}");
cfg.Timeout = TimeSpan.FromSeconds(30);
});
It's worth noting that my LinkId is ALWAYS a unique Guid as it is created in the controller before the message is sent.
Also, when I restart the app pool, it works again for the first call and then starts timing out again.
I feel like something might be locking somewhere but I can't reproduce it locally!
So I wanted to post the solution to my own problem here in the hope that it will aid others in the future.
I made three fundamental changes which, either in isolation or in combination, solved this issue. Everything now flies and works 100% of the time, whether I use an InMemorySagaRepository, Redis, or MongoDB.
Issue 1
As detailed in another question I posted here:
MassTransit saga with Redis persistence gives Method Accept does not have an implementation exception
In my SagaStateMachineInstance class, I had mistakenly declared the CurrentState property as the 'State' type when it should have been a string, like so:
public string CurrentState { get; set; }
This was a fundamental issue, and it came to light as soon as I started adding persistence, so it may have been causing trouble when using the InMemorySagaRepository too.
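For context, a minimal corrected instance class might look like this (a sketch; the class name and any properties beyond CorrelationId and CurrentState are illustrative, not from the original code):

```csharp
// Sketch of the corrected saga instance class.
public class LinkSagaInstance : SagaStateMachineInstance
{
    // Required by SagaStateMachineInstance.
    public Guid CorrelationId { get; set; }

    // Must be a string (not the Automatonymous 'State' type)
    // so the repository can serialize and persist it.
    public string CurrentState { get; set; }
}
```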
Issue 2
In hindsight I suspect this was probably my main issue. I'm not completely convinced I've solved it in the best way, but I'm happy with how things are.
I made sure my final event is handled in all states. I think what was happening was that my request/response was finishing before the saga's CurrentState had been updated. I realised this by experimenting with MongoDB as my persistence and seeing sagas that never completed, stuck in the penultimate state.
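One way to sketch "handled in all states" with Automatonymous is the DuringAny construct, which dispatches the event regardless of the current state, so a fast response can't arrive "between" states and be discarded (event and handler names reuse those from the question; treat this as a sketch, not the author's exact code):

```csharp
// Sketch: handle the terminal event in any state, so a response that
// arrives before the state transition is persisted is still processed
// instead of being dropped.
DuringAny(
    When(LinkCreatedEvent)
        .ThenAsync(HandleLinkCreatedEventAsync)
        .Finalize());
```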
Issue 3
This should be unnecessary but I wanted to add it as something to consider/try for those having issues.
I removed the request/response step from my saga and replaced it with publish/subscribe. To do this, I published an event to my consumer, which on completion publishes an event carrying the CorrelationId (as suggested by @alexey-zimarev in my other question). So in the consumer that does the query (i.e. the request), I do the following after it completes:
await context.Publish(new LinkCreatedEvent { ... , CorrelationId = context.Message.CorrelationId });
Because the CorrelationId is in there my saga picks it up and handles the event as such:
When(LinkCreatedEvent)
.ThenAsync(HandleLinkCreatedEventAsync)
.TransitionTo(LinkCreated)
I'm really happy with how it all works now and feel confident about putting the solution live.
Related
I'm currently trying to update an application that was originally .NET Core 3.1 using MassTransit 6.3.2. It is now configured to use .NET 6.0 and MassTransit 7.3.0.
Our application uses MassTransit to send messages via Azure Service Bus, publishing messages to Topics, which then have other subscribers listening to those Topics.
Cut down, it was implemented like so:
// Program.cs
services.AddMassTransit(config =>
{
config.AddConsumer<AppointmentBookedMessageConsumer>();
config.AddBus(BusControlFactory.ConfigureAzureServiceBus);
});
// BusControlFactory.cs
public static class BusControlFactory
{
public static IBusControl ConfigureAzureServiceBus(IRegistrationContext<IServiceProvider> context)
{
var config = context.Container.GetService<AppConfiguration>();
var azureServiceBus = Bus.Factory.CreateUsingAzureServiceBus(busFactoryConfig =>
{
busFactoryConfig.Host("Endpoint=sb://REDACTED-queues.servicebus.windows.net/;SharedAccessKeyName=MyMessageQueuing;SharedAccessKey=MyKeyGoesHere");
busFactoryConfig.Message<AppointmentBookedMessage>(m => m.SetEntityName("appointment-booked"));
busFactoryConfig.SubscriptionEndpoint<AppointmentBookedMessage>(
"my-subscriber-name",
configurator =>
{
configurator.UseMessageRetry(r => r.Interval(5, TimeSpan.FromSeconds(60)));
configurator.Consumer<AppointmentBookedMessageConsumer>(context.Container);
});
});
return azureServiceBus;
}
}
It has now been changed and upgraded to the latest MassTransit and is implemented like:
// Program.cs
services.AddMassTransit(config =>
{
config.AddConsumer<AppointmentBookedMessageConsumer, AppointmentBookedMessageConsumerDefinition>();
config.UsingAzureServiceBus((context, cfg) =>
{
cfg.Host("Endpoint=sb://REDACTED-queues.servicebus.windows.net/;SharedAccessKeyName=MyMessageQueuing;SharedAccessKey=MyKeyGoesHere");
cfg.Message<AppointmentBookedMessage>(m => m.SetEntityName("appointment-booked"));
cfg.ConfigureEndpoints(context);
});
});
// AppointmentBookedMessageConsumerDefinition.cs
public class AppointmentBookedMessageConsumerDefinition : ConsumerDefinition<AppointmentBookedMessageConsumer>
{
public AppointmentBookedMessageConsumerDefinition()
{
EndpointName = "testharness.subscriber";
}
protected override void ConfigureConsumer(IReceiveEndpointConfigurator endpointConfigurator, IConsumerConfigurator<AppointmentBookedMessageConsumer> consumerConfigurator)
{
endpointConfigurator.UseMessageRetry(r => r.Interval(5, TimeSpan.FromSeconds(60)));
}
}
The issue, if it can be considered one, is that I can't bind to a subscription that already exists.
In the example above, you can see that the EndpointName is set to "testharness.subscriber". There was already a subscription to the Topic "appointment-booked" from before I upgraded. However, when the application runs, it does not error, but it receives no messages.
If I change the EndpointName to "testharness.subscriber2", another subscription appears on the Azure Service Bus topic (via the Azure Portal) and I start receiving messages. I can see no difference in the names, other than the change I made (in this case, the "2" suffix).
Am I missing something here? Is there something else I need to do to get these to bind? Is my configuration wrong? Was it wrong before? While I'm sure I can work around this by managing the release more closely and removing unneeded queues once the new ones are in use, it feels like the wrong approach.
With Azure Service Bus, ForwardTo on a subscription can be a bit opaque.
While the subscription may indeed visually indicate that it is forwarding to the correctly named queue, it might be that the queue was deleted and recreated at some point without deleting the subscription. This results in a subscription that will build up messages, as it is unable to forward them to a queue that no longer exists.
Why? Internally, a subscription stores the ForwardTo as an object id. After the queue is deleted, that id points to an object that no longer exists, resulting in messages building up in the subscription.
If you have messages in the subscription, you may need to go into the portal and update that subscription to point to the new queue (even though it has the same name), at which point the messages should flow through to the queue.
If there aren't any messages in the subscription (or if they aren't important), you can just delete the subscription and it will be recreated by MassTransit when you restart the bus.
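If you prefer scripting the deletion, the subscription can be removed with the Azure CLI; note the resource group and namespace names below are placeholders you'd replace with your own:

```shell
# Placeholders: substitute your own resource group and namespace.
az servicebus topic subscription delete \
  --resource-group my-resource-group \
  --namespace-name my-namespace \
  --topic-name appointment-booked \
  --name testharness.subscriber
```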
We have AutoMapper v4.2, which mostly works fine; however, every 4-6 weeks we get this weird error and the mapping stops working:
Mapping types:
DynamicContentSearchResultItem -> Full
Models.Messages.Search.DynamicContentSearchResultItem -> Models.Alerts.Views.Client.Full
Destination path:
List`1[0]
It doesn't even complain about any specific property; it just plainly stops working until we reset the app pool.
All the mappings are registered and initialized in Application_Start, like so:
Mapper.CreateMap<DynamicContentSearchResultItem, ClientFull>()
.IncludeBase<DynamicContentSearchResultItem, ClientTraveller>()
.ForMember(d => d.Assessment, AssessmentTransformer)
.ForMember(d => d.ManagerAdvice, ManagerAdviceTransformer);
and it's invoked in the code, when the mapping is about to run after a search, as:
var alerts = Mapper.Map<List<InternalFull>>(results, optionsParam).ToList<IArticle>();
I would appreciate any help. Thanks.
So I have a website written in .NET Core C#, and I would like to run a background process that makes API calls to another website and saves the data in a database.
I have created an ApiAccessor class and would like to invoke its method from the controller (which uses dependency injection for its database connection). But if I pass those dependencies to the ApiAccessor (the call is async), the connection is already disposed of by the time it finishes. I've tried injecting them from the get-go, but it still says the interfaces are disposed by the time it completes. I can only await the call, but that would make the user wait too long. What approach should I take here? I am a newbie at DI. Maybe some singleton class? But I still wouldn't know how to pass injected dependencies to a singleton.
ApiAccessor:
IConfiguration _configuration;
IUserAccount _userAccounts;
public ApiAccessor(IConfiguration configuration, IUserAccount userAccounts)
{
_configuration = configuration;
_userAccounts = userAccounts;
}
//...
void MethodToPollApi()
{
var newUserIdToAdd = // just some kind of new data from the API
_userAccounts.Add(newUserIdToAdd); // accessing the DB, which causes errors
}
Controller:
IActionResult Index()
{
MethodToPollApi();
return View();
}
I would consider using so-called background jobs. There are a few popular frameworks for this type of solution, among them: a custom implementation based on IHostedService, Quartz.NET, Hangfire, and many more.
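For the disposed-dependency problem specifically, the usual pattern with a custom IHostedService is to resolve scoped services (like your database access) from a fresh DI scope inside the worker, rather than capturing the controller's scoped instances. A minimal sketch, assuming the IUserAccount interface from the question (the class name, delay, and loop shape are illustrative):

```csharp
// Sketch: a BackgroundService that creates its own DI scope per iteration,
// so scoped services are not disposed out from under it.
public class ApiPollingService : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public ApiPollingService(IServiceScopeFactory scopeFactory)
        => _scopeFactory = scopeFactory;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using (var scope = _scopeFactory.CreateScope())
            {
                // Resolved from this scope, so it stays alive for the whole call.
                var userAccounts = scope.ServiceProvider.GetRequiredService<IUserAccount>();
                // call the external API and save results via userAccounts here
            }

            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}

// Registered at startup with:
// services.AddHostedService<ApiPollingService>();
```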
I have played with many of them and personally prefer Hangfire, as it is self-bootstrapping, provides a nice UI for a jobs dashboard, and is really easy to use. For instance, this is how triggering jobs looks with it:
Run once immediately:
var jobId = BackgroundJob.Enqueue(() => Console.WriteLine("Fire-and-forget!"));
Run delayed:
BackgroundJob.Schedule(() => Console.WriteLine("Delayed!"), TimeSpan.FromDays(7));
Run repeating:
var jobId = RecurringJob.AddOrUpdate(() => Console.WriteLine("Recurring!"), Cron.Daily);
Pick up completed job and continue:
BackgroundJob.ContinueWith(jobId, () => Console.WriteLine("Continuation!"));
Continuing the answer from @Dmitry: with Hangfire you can do something like this.
services.AddHangfire(x => x.UseSqlServerStorage("<Your connection string>"));
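To expand slightly: besides the storage registration, you also need the processing server (and optionally the dashboard) registered. Roughly like this, assuming the Hangfire.AspNetCore package and its default /hangfire dashboard route:

```csharp
// Startup.cs sketch (Hangfire.AspNetCore)
public void ConfigureServices(IServiceCollection services)
{
    services.AddHangfire(x => x.UseSqlServerStorage("<Your connection string>"));
    services.AddHangfireServer(); // hosts the background job processing server
}

public void Configure(IApplicationBuilder app)
{
    app.UseHangfireDashboard(); // jobs dashboard UI, at /hangfire by default
}
```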
Hope this helps.
I have a .NET project that needs to read messages from a given queue.
I have several producers writing the same type of message into the queue.
I want my consumer app to have several threads reading messages and handling them so that the load will not be on a single thread.
Any ideas or sample code on how to achieve this?
Again, note:
Each message should be processed once and not several times, and the work should be balanced between the worker threads.
You are going to need a bit of plumbing to get that done.
I have an open-source service bus called Shuttle.Esb and there are many other service bus options available that you may wish to consider.
But if you do not want to go that route, you could still look at some of the code and implementations to get ideas. I have a RabbitMQ implementation that may be of assistance.
Take a look at the MassTransit project: http://masstransit-project.com/MassTransit/usage/message-consumers.html
It has configuration options like prefetch count and concurrency limit, which let you consume messages in parallel.
It is also very simple to set up:
IBusControl busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
IRabbitMqHost host = cfg.Host(new Uri(RabbitMQConstants.RabbitMQUri),
hst =>
{
hst.Username(RabbitMQConstants.RabbitMQUserName);
hst.Password(RabbitMQConstants.RabbitMQPassword);
});
cfg.ReceiveEndpoint(host,
RabbitMQConstants.YourQueueName,
endPointConfigurator => {
endPointConfigurator.Consumer<SomeConsumer>();
endPointConfigurator.UseConcurrencyLimit(4);
});
});
busControl.Start();
public class SomeConsumer :
IConsumer<YourMessageClass>
{
public async Task Consume(ConsumeContext<YourMessageClass> context)
{
await Console.Out.WriteLineAsync($"Message consumed: {context.Message.YourValue}");
}
}
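Since the answer mentions the prefetch count but the snippet only shows the concurrency limit, here is roughly how both would be set on the same endpoint (the values 16 and 4 are arbitrary examples, not recommendations):

```csharp
cfg.ReceiveEndpoint(host,
    RabbitMQConstants.YourQueueName,
    endPointConfigurator => {
        endPointConfigurator.PrefetchCount = 16;     // messages fetched from the broker at a time
        endPointConfigurator.UseConcurrencyLimit(4); // messages processed concurrently
        endPointConfigurator.Consumer<SomeConsumer>();
    });
```

The prefetch count controls how many messages RabbitMQ delivers to the consumer before acknowledgements, while the concurrency limit caps how many are actually processed at once.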
I have a MassTransit saga state machine (derived from Automatonymous.MassTransitStateMachine) and I'm trying to work around an issue that only manifests when I set the endpoint configuration prefetchCount to a value greater than 1.
The issue is that the 'StartupCompletedEvent' is published and then immediately handled before the saga state is persisted to the database.
The state machine is configured as follows:
State(() => Initialising);
State(() => StartingUp);
State(() => GeneratingFiles);
Event(() => Requested, x => x.CorrelateById(ctx => ctx.Message.CorrelationId).SelectId(ctx => ctx.Message.CorrelationId));
Event(() => StartupCompleted, x => x.CorrelateById(ctx => ctx.Message.CorrelationId));
Event(() => InitialisationCompleted, x => x.CorrelateById(ctx => ctx.Message.CorrelationId));
Event(() => FileGenerationCompleted, x => x.CorrelateById(ctx => ctx.Message.CorrelationId));
Initially(
When(Requested)
.ThenAsync(async ctx =>
{
Console.WriteLine("Starting up...");
await ctx.Publish(new StartupCompletedEvent() { CorrelationId = ctx.Instance.CorrelationId });
Console.WriteLine("Done starting up...");
})
.TransitionTo(StartingUp)
);
During(StartingUp,
When(StartupCompleted)
.ThenAsync(InitialiseSagaInstanceData)
.TransitionTo(Initialising)
);
// snip...!
What happens when my saga receives the Requested event is:
The ThenAsync handler of the Initially block gets hit. At this point, no saga data is persisted to the repo (as expected).
StartupCompletedEvent is published to the bus. No saga data is persisted to the repo here either.
The ThenAsync block of the Initially declaration completes. After this, the saga data is finally persisted.
Nothing else happens.
At this point, there are no messages in the queue, and the StartupCompletedEvent is lost. However, there is a saga instance in the database.
I've played about with the startup and determined that one of the other threads (since my prefetch is > 1) has picked up the event, not found any saga with the CorrelationId in the database, and discarded the event. So the event is being published and handled before the saga has a chance to be persisted.
If I add the following to the Initially handler:
When(StartupCompleted)
.Then(ctx => Console.WriteLine("Got the startup completed event when there is no saga instance"))
Then I get the Console.WriteLine executing. My understanding of this is that the event has been received, but routed to the Initially handler since there is no saga that exists with the correlationId. If I put a break point in at this point and check the saga repo, there is no saga yet.
It's possibly worth mentioning a few other points:
I have my saga repo context set to use IsolationLevel.Serializable
I'm using EntityFrameworkSagaRepository
Everything works as expected when the Prefetch count is set to 1
I'm using Ninject for DI, and my SagaRepository is Thread scoped, so I imagine each handler that the prefetch count permits has its own copy of the saga repository
If I publish the StartupCompletedEvent in a separate thread with a 1000ms sleep before it, then things work properly. I presume this is because the saga repo has completed persisting the saga state so when the event is eventually published and picked up by a handler, the saga state is retrieved from the repo correctly.
Please let me know if I've left anything out; I've tried to provide everything I think worthwhile without making this question too long...
I had this issue too, and I would like to post Chris's comment as an answer so people can find it.
The solution is to enable the outbox so messages are held until the saga is persisted:
c.ReceiveEndpoint("queue", e =>
{
e.UseInMemoryOutbox();
// other endpoint configuration here
});