MassTransit and Broadcasting - C#

I am trying to get a messaging system up and running between multiple applications. I have an instance of RabbitMQ running and it appears to be fine. I can connect multiple subscribers/publishers to the RabbitMQ instance without issue. However, when I publish a message from one publisher, only one subscriber receives it.
I believe it has to do with the way I am establishing the queues. I've looked at the RabbitMQ tutorial, https://www.rabbitmq.com/tutorials/tutorial-three-dotnet.html, but I don't know how this translates to the MassTransit library.
For the life of me I cannot work out what I am doing wrong.
NuGets:
MassTransit.Extensions.DependencyInjection 5.3.2
MassTransit.RabbitMQ 5.3.2
Can anyone help?
// Register MassTransit
services.AddMassTransit(mtCfg =>
{
    mtCfg.AddConsumer<DomainMessageConsumer>();
    mtCfg.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(rbCfg =>
    {
        var host = rbCfg.Host(settings.RabbitMq.Host, settings.RabbitMq.VirtualHost, h =>
        {
            h.Username(settings.RabbitMq.Username);
            h.Password(settings.RabbitMq.Password);
        });

        rbCfg.ReceiveEndpoint(host, settings.RabbitMq.ConnectionName, ep =>
        {
            ep.PrefetchCount = 16;
            ep.UseMessageRetry(x => x.Interval(2, 100));
            ep.ConfigureConsumer<DomainMessageConsumer>(provider);
        });
    }));
});

The problem you are having is that you are using the same queue name on all consumers. When multiple consumers share a queue, RabbitMQ treats them as competing consumers and delivers each message to only one of them (round-robin load balancing, but that is going off topic). If you want broadcasting to all consumers, you should make the queue name unique for each consumer; in your code example, that means making the settings.RabbitMq.ConnectionName value unique per consumer. Each unique queue is still bound to the same message exchange, so every queue (and therefore every consumer) receives its own copy of each published message.
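A minimal sketch of that fix, assuming the same configuration as in the question (the suffix logic is an illustration, not part of the original code): each subscriber instance derives its own queue name, for example from the machine name or a GUID.

```csharp
// Hypothetical: make the receive-endpoint queue name unique per subscriber
// instance, so each instance gets its own queue bound to the message
// exchange, and therefore its own copy of every published message.
var queueName = $"{settings.RabbitMq.ConnectionName}-{Environment.MachineName}";

rbCfg.ReceiveEndpoint(host, queueName, ep =>
{
    ep.PrefetchCount = 16;
    ep.UseMessageRetry(x => x.Interval(2, 100));
    ep.ConfigureConsumer<DomainMessageConsumer>(provider);
});
```

Note that a machine-name suffix assumes one instance per machine; a GUID suffix works in general but leaves orphaned queues behind on restarts unless the endpoint is configured as auto-delete.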

How to filter Redis Keyspace notifications

I am trying to figure out how to properly use the key-event notification system in Redis using Azure Cache for Redis and the StackExchange.Redis package.
Following the documentation from different places, I have been able to get notifications with a setup like this:
var configurationOptions = ConfigurationOptions.Parse(<connection_string>);
_connectionMultiplexer = ConnectionMultiplexer.Connect(configurationOptions);
...
var subscriber = _connectionMultiplexer.GetSubscriber();
subscriber.Subscribe("__keyevent#0__:set", (_, key) =>
{
    _logger.LogWarning("Received '{CacheKey}'", key);
});
Running the code above, after having configured Redis with the Eg$ notification option, I do receive a notification every time a key is written to or deleted from the cache.
I would like to get notifications for certain keys only. For instance, I would like to be able to only receive notifications for keys that start with the characters 'my.key'.
Initial Approach
The naive approach would be something like this:
subscriber.Subscribe("__keyevent#0__:set", (_, key) =>
{
    if (!key.StartsWith("my.key"))
        return;

    _logger.LogWarning("Received '{CacheKey}'", key);
});
It works, but it puts the burden of filtering on the clients, which may not be optimal, especially considering that the (rather expensive) Redis instance is shared across multiple services, so my service receives notifications that are not of interest to it.
Question
I tried something like the following but it does not seem to work:
"__keyevent#0__:set my.key*"
Is there a way to specify a filter to get certain notifications from Redis?
The notification type that allows me to do the filtering is keyspace (channel prefix __keyspace#0__:) rather than keyevent, because keyspace channels embed the key name and can therefore be pattern-matched.
With the configuration AKE and the following code, I can get notifications only for the keys my service is interested in:
subscriber.Subscribe("__keyspace#0__:my.key*", (channel, message) =>
{
    // For __keyspace__ channels the affected key is part of the channel name,
    // and the message carries the event name (e.g. "set").
    _logger.LogWarning("Received '{Channel}': {Event}", channel, message);
});

Consuming _error queue in masstransit

For each queue MassTransit has consumers on, it automatically creates a [queuename]_error queue and moves messages that could not be processed there (after retries, etc.).
I'm trying to create a consumer that takes errors from that queue and writes them to a database.
In order to consume those messages, I had to create a handler/consumer for the error queue, receiving the original message.
cfg.ReceiveEndpoint(host, "myqueuename", e =>
{
    e.Handler<MyMessage>(ctx =>
    {
        throw new Exception("Not expected");
    });
});

cfg.ReceiveEndpoint(host, "myqueuename_error", e =>
{
    e.BindMessageExchanges = false;
    e.Handler<MyMessage>(ctx =>
    {
        Console.WriteLine("Handled");
        // do whatever
        return ctx.CompleteTask;
    });
});
All that works fine; the problem is retrieving the actual exception that occurred.
I was actually able to do that with a serious hack...
e.Handler<MyMessage>(m =>
{
    var buffer = m.ReceiveContext.TransportHeaders
        .GetAll().Single(s => s.Key == "MT-Fault-Message").Value as byte[];
    var errorText = new StreamReader(new MemoryStream(buffer)).ReadToEnd();
    Console.WriteLine($"Handled, Error={errorText}");
    return m.CompleteTask;
});
That just feels wrong, though.
PS: I know I could subscribe to a Fault event, but in this particular case it is a RequestClient (request-response) pattern, and MT redirects the FaultAddress back to the client, and I can't guarantee it is still running.
Request/reply should only be used for getting data. If the requestor goes down, there is no reason to reply with data or with a fault, so you have no interest in consuming faults.
So the request client's use of a temporary (non-durable) queue instead of the receive endpoint queue is by design. It encourages you to understand that the scope of your replies is only within the request waiting time.
If you send commands and need to be informed whether the command has been processed, you should publish events reporting the outcome of the command processing. Using message metadata (initiator id and conversation id) allows you to find out how events correlate with commands.
So, only use request/reply for requesting information (queries) using the decoupled invocation SOA pattern, where the reply only has meaning in correlation with the request; if the requestor goes down, the reply is no longer needed, no matter whether it was a success or a failure.
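A minimal sketch of that event-publishing approach (the ProcessOrder and OrderProcessed contracts and names are illustrations, not from the original question):

```csharp
// Hypothetical outcome event published by the command consumer instead of
// replying to the requestor. Any interested service can consume it later,
// whether or not the original requestor is still running.
public interface OrderProcessed
{
    Guid CorrelationId { get; }
    bool Succeeded { get; }
    string Error { get; }
}

public class ProcessOrderConsumer : IConsumer<ProcessOrder>
{
    public async Task Consume(ConsumeContext<ProcessOrder> context)
    {
        try
        {
            // ... do the actual work ...
            await context.Publish<OrderProcessed>(new
            {
                CorrelationId = context.Message.CorrelationId,
                Succeeded = true,
                Error = (string)null
            });
        }
        catch (Exception ex)
        {
            await context.Publish<OrderProcessed>(new
            {
                CorrelationId = context.Message.CorrelationId,
                Succeeded = false,
                Error = ex.Message
            });
        }
    }
}
```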

Send message to specific channel/routing key with MassTransit/RabbitMQ in C#

I've been working on an application that starts some worker roles based on messaging.
This is the way I want the application to work:
Client sends a request for work (RPC).
One of the worker roles accepts the work, generates a random id, and responds to the RPC with the new id.
The worker will post its debug logs on a log channel with the id.
The client will subscribe to this channel so users can see what's going on.
The RPC is working fine, but I can't seem to figure out how to implement the log-sending.
This is the code that accepts work (simplified)
var bus = Bus.Factory.CreateUsingRabbitMq(sbc =>
{
    var host = sbc.Host(new Uri("rabbitmq://xxxxxx.nl"), h =>
    {
        h.Username("xxx");
        h.Password("xxxx");
    });

    sbc.ReceiveEndpoint(host, "post_work_item", e =>
    {
        e.Consumer<CreateWorkItemCommand>();
    });

    sbc.ReceiveEndpoint(host, "list_work_items", e =>
    {
        e.Consumer<ListWorkItemsCommand>();
    });
});
The CreateWorkItemCommand will create the thread, do the work, etc. Now, how would I implement the log-sending with MassTransit? I was thinking something like this (pseudocode):
bus.Publish(
    obj: new WorkUpdate { Message = "Hello world!" },
    channel: $"work/{work_id}"
);
And the client will do something like this:
bus.ReceiveFromEvented($"work/{rpc.work_id}").OnMessage += { more_pseudo_code() }
I can't seem to find out how to do this.
Can anyone help me out?
Thanks!
It looks like both a saga and Turnout. The current Turnout implementation monitors the job itself, and I doubt you can really subscribe to that message flow. And it is still not really done.
You might solve this using a saga. Some external trigger (a command) starts the first saga, which uses Request/Response to start the process that will do the work, and gets its correlation id (job id). The long-running job can publish progress reports using the same correlation id, and the saga will consume them, doing what it needs to do.
The "work/{rpc.work_id}" channel would then be replaced by the correlation id.
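A minimal sketch of the progress-report side of that idea (the WorkProgress contract and class names are illustrations; only Publish and IConsumer are actual MassTransit APIs):

```csharp
// Hypothetical progress event carrying the job's correlation id. The worker
// publishes it; the interested party filters on CorrelationId instead of
// subscribing to a per-job channel or routing key.
public interface WorkProgress
{
    Guid CorrelationId { get; }
    string Message { get; }
}

// Worker side: publish progress with the job's correlation id.
public static class WorkerSide
{
    public static Task ReportProgress(IBus bus, Guid jobId, string message) =>
        bus.Publish<WorkProgress>(new { CorrelationId = jobId, Message = message });
}

// Client side: a consumer that reacts only to the job it cares about.
public class WorkProgressConsumer : IConsumer<WorkProgress>
{
    private readonly Guid _jobId;
    public WorkProgressConsumer(Guid jobId) => _jobId = jobId;

    public Task Consume(ConsumeContext<WorkProgress> context)
    {
        if (context.Message.CorrelationId == _jobId)
            Console.WriteLine(context.Message.Message);
        return Task.CompletedTask;
    }
}
```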

Keep WebSocket and some event handlers alive while app is active

I use Caliburn.Micro to build a Windows 8.1 Universal app. The app connects to a web service using a WebSocket. I would like this connection, once established, to be kept alive as long as the app is active, no matter what page the user is on.
Currently I'm doing it like this:
container = new WinRTContainer();
container.Singleton<IConnectionService, ConnectionService>();
and it seems to work as I want to. I can inject it in my viewmodels and the connection is still open and it does receive messages even when a view model that does not inject the service is active. I am however a bit curious if this is the correct way (and if it's actually doing what I'm expecting)?
Secondly, I'm using the connection manager to parse the JSON returned from the WebSocket connection, creating corresponding classes like RandomThingHappened and broadcasting these using the event aggregator service from Caliburn.Micro. Views interested in these can subscribe and do what they want. However, there are some messages that I would like handled regardless of which view the user is on. Is this possible? I've thought about creating singletons for this as well, and just making sure to instantiate these somewhere. That does however seem a bit... risky.
Suggestions?
Not really sure about WebSockets, but I am using the following approach for my WCF service (the dumb terminal must always be connected, because the WCF service pushes messages to connected terminals using a callback):
[OperationContract(IsOneWay = true)]
void KeepConnection();
and in your client use a timer to keep calling the service
var timer = new DispatcherTimer { Interval = new TimeSpan(0, 0, 50) };
timer.Start();
timer.Tick += (sender, args) =>
{
    try
    {
        if (this.client.State == CommunicationState.Faulted)
        {
            this.RegisterTerminal();
        }
        this.client.KeepConnection();
    }
    catch
    {
        throw new Exception("Failed to establish connection with server");
    }
};
As for the broadcasting, you can use the EventAggregator to publish your event to all the listening classes.
You can read more about it here: Event Aggregator
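A minimal sketch of that EventAggregator pattern in Caliburn.Micro (the RandomThingHappened message type is taken from the question; the handler class name is an assumption):

```csharp
// A message type published by the connection service (from the question).
public class RandomThingHappened
{
    public string Payload { get; set; }
}

// Any class can handle it by implementing IHandle<T> and subscribing itself
// to the (singleton) IEventAggregator. A long-lived singleton stays
// subscribed for the app's lifetime, regardless of which view is active.
public class GlobalMessageHandler : IHandle<RandomThingHappened>
{
    public GlobalMessageHandler(IEventAggregator events)
    {
        events.Subscribe(this);
    }

    public void Handle(RandomThingHappened message)
    {
        // react here regardless of the current view
    }
}
```

Registering GlobalMessageHandler as a singleton in the container (like the IConnectionService in the question) and resolving it once at startup avoids the "instantiate it somewhere" risk mentioned above.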

Finding Connection by UserId in SignalR

I have a webpage that uses ajax polling to get stock market updates from the server. I'd like to use SignalR instead, but I'm having trouble understanding how/if it would work.
ok, it's not really stock market updates, but the analogy works.
The SignalR examples I've seen send messages to either the current connection, all connections, or groups. In my example the stock updates happen outside of the current connection, so there's no such thing as the 'current connection'. And a user's account is associated with a few stocks, so sending a stock notification to all connections or to groups doesn't work either. I need to be able to find a connection associated with a certain userId.
Here's a fake code example:
foreach(var stock in StockService.GetStocksWithBigNews())
{
var userIds = UserService.GetUserIdsThatCareAboutStock(stock);
var connections = /* find connections associated with user ids */;
foreach(var connection in connections)
{
connection.Send(...);
}
}
In this question on filtering connections, they mention that I could keep current connections in memory but (1) it's bad for scaling and (2) it's bad for multi node websites. Both of these points are critically important to our current application. That makes me think I'd have to send a message out to all nodes to find users connected to each node >> my brain explodes in confusion.
THE QUESTION
How do I find a connection for a specific user that is scalable? Am I thinking about this the wrong way?
I created a little project last night to learn this too. I used 1.0 alpha and it was straightforward. I created a Hub and from there on it just worked :)
In my project I have N compute units (some servers processing work); when they start up they invoke ComputeUnitRegister:
await HubProxy.Invoke("ComputeUnitRegister", _ComputeGuid);
and every time they do something they call:
HubProxy.Invoke("Running", _ComputeGuid);
where HubProxy is :
HubConnection Hub = new HubConnection(RoleEnvironment.IsAvailable
    ? RoleEnvironment.GetConfigurationSettingValue("SignalREndPoint")
    : "http://taskqueue.cloudapp.net/");
IHubProxy HubProxy = Hub.CreateHubProxy("ComputeUnits");
I used RoleEnvironment.IsAvailable because I can now run this as an Azure Role, a console app, or whatever in .NET 4.5. The Hub is placed in an MVC4 website project and is started like this:
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(50);
RouteTable.Routes.MapHubs();
public class ComputeUnits : Hub
{
    public Task Running(Guid MyGuid)
    {
        return Clients.Group(MyGuid.ToString()).ComputeUnitHeartBeat(MyGuid,
            DateTime.UtcNow.ToEpochMilliseconds());
    }

    public Task ComputeUnitRegister(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, "ComputeUnits").Wait();
        return Clients.Others.ComputeUnitCameOnline(new { Guid = MyGuid,
            HeartBeat = DateTime.UtcNow.ToEpochMilliseconds() });
    }

    public void SubscribeToHeartBeats(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, MyGuid.ToString());
    }
}
My clients are JavaScript clients that have methods for this (let me know if you need to see that code too). Basically, they listen for ComputeUnitCameOnline, and when it fires they call SubscribeToHeartBeats on the server. This means that whenever the server compute unit is doing some work it will call Running, which will trigger a ComputeUnitHeartBeat on the JavaScript clients.
I hope you can use this to see how Groups and Connections can be used. And finally, it is also scaled out over multiple Azure roles by adding a few lines of code:
GlobalHost.HubPipeline.EnableAutoRejoiningGroups();
GlobalHost.DependencyResolver.UseServiceBus(
    serviceBusConnectionString,
    2,
    3,
    GetRoleInstanceNumber(),
    topicPathPrefix /* the prefix applied to the name of each topic used */
);
You can get the connection string from the Service Bus on Azure; remember Provider=SharedSecret. When you add the NuGet package, the connection string syntax is also pasted into your web.config.
2 is how many topics to split it across. Topics can contain 1 GB of data, so depending on performance you can increase it.
3 is the number of nodes to split it out on. I used 3 because I have 2 Azure instances plus my localhost. You can get the role number like this (note that I hard-coded my localhost to 2):
private static int GetRoleInstanceNumber()
{
    if (!RoleEnvironment.IsAvailable)
        return 2;

    var roleInstanceId = RoleEnvironment.CurrentRoleInstance.Id;
    var li1 = roleInstanceId.LastIndexOf(".");
    var li2 = roleInstanceId.LastIndexOf("_");
    var roleInstanceNo = roleInstanceId.Substring(Math.Max(li1, li2) + 1);
    return Int32.Parse(roleInstanceNo);
}
You can see it all live at : http://taskqueue.cloudapp.net/#/compute-units
When using SignalR, after a client has connected to the server it is given a connection ID (this is essential to providing real-time communication). Yes, this is stored in memory, but SignalR can also be used in multi-node environments. You can use the Redis or even SQL Server backplane (more to come), for example. So, long story short, the scale-out scenarios are taken care of for you via backplanes/service bus without you having to worry about it.
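One common way to handle the "find a connection for a user" part without tracking connection ids yourself is to add each connection to a group named after its user id; a minimal sketch against the SignalR 1.x-era Hub API (StockHub, Stock, and the method names are assumptions, not from the question):

```csharp
// Hypothetical hub: each connection joins a group named after its user id,
// so messages can be addressed to a user without tracking connection ids.
// Groups are maintained across nodes when a backplane is configured.
public class StockHub : Hub
{
    public Task Register(string userId)
    {
        return Groups.Add(Context.ConnectionId, userId);
    }
}

// Server-side broadcast from outside the hub: address each interested
// user's group directly, covering all of that user's connections.
public static class StockNotifier
{
    public static void NotifyBigNews(Stock stock, IEnumerable<string> userIds)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<StockHub>();
        foreach (var userId in userIds)
        {
            context.Clients.Group(userId).stockUpdate(stock);
        }
    }
}
```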