How to filter Redis keyspace notifications - C#

I am trying to figure out how to properly use the key-event notification system in Redis using Azure Cache for Redis and the StackExchange.Redis package.
Following the documentation from different places, I have been able to get notifications with a setup like this:
var configurationOptions = ConfigurationOptions.Parse(<connection_string>);
_connectionMultiplexer = ConnectionMultiplexer.Connect(configurationOptions);
...
var subscriber = _connectionMultiplexer.GetSubscriber();
subscriber.Subscribe("__keyevent@0__:set", (_, key) =>
{
    _logger.LogWarning("Received '{CacheKey}'", key);
});
Running the code above, after having configured Redis with the notify-keyspace-events option Eg$, I do receive a notification every time a key is written to the cache or deleted from the cache.
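For reference, a minimal sketch of enabling that configuration from code via StackExchange.Redis. Note that Azure Cache for Redis blocks the CONFIG command, so there the notify-keyspace-events value has to be set through the portal's advanced settings instead; the sketch applies to a plain Redis server:
var server = _connectionMultiplexer.GetServer(configurationOptions.EndPoints[0]);
// E = keyevent notifications, g = generic commands (DEL, EXPIRE, ...), $ = string commands (SET, ...)
server.ConfigSet("notify-keyspace-events", "Eg$");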
I would like to get notifications for certain keys only. For instance, I would like to be able to only receive notifications for keys that start with the characters 'my.key'.
Initial Approach
The naive approach would be something like this.
subscriber.Subscribe("__keyevent@0__:set", (_, key) =>
{
    if (!key.StartsWith("my.key"))
        return;
    _logger.LogWarning("Received '{CacheKey}'", key);
});
It works, but it puts the burden of filtering on the clients, which may not be optimal, especially considering that the (rather expensive) Redis instance is shared across multiple services: my service ends up receiving notifications that are of no interest to it.
Question
I tried something like the following, but it does not seem to work:
"__keyevent@0__:set my.key*"
Is there a way to specify a filter so that Redis only delivers certain notifications?

The correct notification class that allows me to do the filtering is keyspace. Keyspace notifications embed the key in the channel name (rather than in the message), so a pattern subscription can filter on the key itself.
With the configuration AKE and the following code, I get notifications only for the keys my service is interested in:
subscriber.Subscribe("__keyspace@0__:my.key*", (channel, message) =>
{
    // For keyspace notifications, the key is part of the channel name
    // (e.g. "__keyspace@0__:my.key.1") and the message is the event (e.g. "set").
    _logger.LogWarning("Received '{Event}' for '{Channel}'", message, channel);
});
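As a side note, recent StackExchange.Redis releases (2.6.x and later) mark the implicit string-to-channel conversion as obsolete; if you are on one of those versions, a sketch of the same subscription with the explicit pattern factory avoids both the warning and any ambiguity about whether the channel is a pattern:
subscriber.Subscribe(RedisChannel.Pattern("__keyspace@0__:my.key*"), (channel, message) =>
{
    _logger.LogWarning("Received '{Event}' for '{Channel}'", message, channel);
});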

Related

MassTransit and Broadcasting

I am trying to get a messaging system up and running between multiple applications. I have an instance of RabbitMQ running and that appears to be fine. I can connect multiple subscribers/publishers to the RabbitMQ instance and they appear to be fine. I can then publish a message from one publisher, but only one subscriber gets the message.
I believe it has to do with the way I am establishing the queues. I've looked at the tutorial on RabbitMQ, https://www.rabbitmq.com/tutorials/tutorial-three-dotnet.html, but I don't know how this translates into the MassTransit library.
For the life of me, I cannot work out what I am doing wrong.
NuGets:
MassTransit.Extensions.DependencyInjection 5.3.2
MassTransit.RabbitMQ 5.3.2
Can anyone help?
// Register MassTransit
services.AddMassTransit(mtCfg =>
{
    mtCfg.AddConsumer<DomainMessageConsumer>();
    mtCfg.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(rbCfg =>
    {
        var host = rbCfg.Host(settings.RabbitMq.Host, settings.RabbitMq.VirtualHost, h =>
        {
            h.Username(settings.RabbitMq.Username);
            h.Password(settings.RabbitMq.Password);
        });

        rbCfg.ReceiveEndpoint(host, settings.RabbitMq.ConnectionName, ep =>
        {
            ep.PrefetchCount = 16;
            ep.UseMessageRetry(x => x.Interval(2, 100));
            ep.ConfigureConsumer<DomainMessageConsumer>(provider);
        });
    }));
});
The problem you are having is that you are using the same queue name for all consumers. If you want broadcasting to all consumers, you should make all queue names unique. In your code example, it's the settings.RabbitMq.ConnectionName variable that you should make unique for each consumer.
Imagine two consumers sharing the queue settings.RabbitMq.ConnectionName you've set: only one of them receives each message (it's actually round-robin balancing between the consumers on that queue, but that is going off topic). If you want broadcasting, you create a separate queue (a separate settings.RabbitMq.ConnectionName in your example) for each consumer, as sketched below.
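A minimal sketch of that fix, slotted into the question's own MassTransit 5.x AddBus configuration; the machine-name suffix is just one hypothetical way to make the queue name unique per instance:
// Each instance gets its own queue, so the exchange binding delivers
// a copy of every published message to every instance.
var queueName = $"{settings.RabbitMq.ConnectionName}-{Environment.MachineName}";

rbCfg.ReceiveEndpoint(host, queueName, ep =>
{
    ep.PrefetchCount = 16;
    ep.UseMessageRetry(x => x.Interval(2, 100));
    ep.ConfigureConsumer<DomainMessageConsumer>(provider);
});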

Send message to specific channel/routing key with MassTransit/RabbitMQ in C#

I've been working on an application that starts some worker roles based on messaging.
This is the way I want the application to work:
Client sends a request for work (RPC).
One of the worker roles accepts the work, generates a random id, and responds to the RPC with the new id.
The worker will post its debug logs on a log channel with the id.
The client will subscribe to this channel so users can see what's going on.
The RPC is working fine, but I can't seem to figure out how to implement the log-sending.
This is the code that accepts work (simplified)
var bus = Bus.Factory.CreateUsingRabbitMq(sbc =>
{
    var host = sbc.Host(new Uri("rabbitmq://xxxxxx.nl"), h =>
    {
        h.Username("xxx");
        h.Password("xxxx");
    });

    sbc.ReceiveEndpoint(host, "post_work_item", e =>
    {
        e.Consumer<CreateWorkItemCommand>();
    });

    sbc.ReceiveEndpoint(host, "list_work_items", e =>
    {
        e.Consumer<ListWorkItemsCommand>();
    });
});
The CreateWorkItemCommand will create the thread, do the work, etc. Now, how would I implement the log-sending with MassTransit? I was thinking of something like:
bus.Publish(
    obj: WorkUpdate{ Message = "Hello world!" },
    channel: $"work/{work_id}"
)
And the client will do something like this:
bus.ReceiveFromEvented($"work/{rpc.work_id}").OnMessage += { more_psuedo_code() }
I can't seem to find out how to do this.
Can anyone help me out?
Thanks!
This looks like a combination of a saga and Turnout. The current Turnout implementation monitors the job itself, and I doubt you can really subscribe to that message flow. It is also still not really done.
You might solve this using a saga. Some external trigger (a command) starts the saga, which uses Request/Response to start the process that does the work, and gets back its correlation id (the job id). The long-running job can publish progress reports carrying the same correlation id, and the saga will consume them, doing whatever it needs to do.
The "work/{rpc.work_id}" channel will then be replaced by the correlation id.
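A rough sketch of what that could look like, reusing the question's WorkUpdate type; CorrelatedBy<Guid> is MassTransit's standard correlation interface, and workId here stands in for the id generated when the job was accepted:
// The progress event carries the job id as its correlation id, so a saga
// (or any other consumer) can match it to the right job instance.
public class WorkUpdate : CorrelatedBy<Guid>
{
    public Guid CorrelationId { get; set; } // the job id handed back over the RPC
    public string Message { get; set; }
}

// Published by the worker while it runs:
await bus.Publish(new WorkUpdate
{
    CorrelationId = workId,
    Message = "Hello world!"
});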

Any downsides to replacing REST endpoints with SignalR?

I'm building a fairly simple single page app. It's basically a list of items, where each item has some details, an activity log, and a current status along with some buttons to trigger actions on the server to advance the status along a workflow.
It was originally written using MVC and REST/Web API but I got stuck on the problem of keeping concurrent users up to date. For example, if User A adds an item, we want the list on User B's screen to now update to include it.
To solve this I looked into SignalR which works great. But I had a problem.
When adding an item (using POST), the callback adds the item on the requesting client. This is fine.
I then triggered a SignalR broadcast on the server to tell all clients about the new item. This worked fine, except for the local client, which now has two items.
I was looking into filtering the duplicate id client-side, or sending the connection id with the POST and then broadcasting to all clients except the requester, but it seems a bit needlessly complicated.
Instead I'm just doing this.
public class UpdateHub : Hub
{
    public void AddNewItem(NewItem item)
    {
        // do some server-side stuff, persist in the data store, etc.
        item.trackingID = Guid.NewGuid(); // note: new Guid() would yield an empty (all-zero) GUID
        item.addLogEntry("new item");
        // ...
        dataStore.addItem(item);

        // send message type and data payload
        Clients.All.broadcastMessage("add", item);
    }
}
It seems a lot simpler to just get rid of all the REST stuff altogether, so am I missing anything important?
It'll run on an intranet for a handful of users using IE11+ and I guess we do lose some commonly-understood semantics around HTTP response codes for error handling, but I don't think that's a huge deal in this situation.
To avoid the duplicate, you can use Clients.Others inside the Hub class, or Clients.AllExcept(connectionId) if you are not in the Hub class:
Clients.Others.broadcastMessage("add", item);
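For the case where the broadcast is triggered outside the hub (for example from a Web API action), a sketch using the classic ASP.NET SignalR 2.x hub context; callerConnectionId is assumed to be sent along by the requesting client:
// Resolve the hub context and exclude the caller's own connection.
var hubContext = GlobalHost.ConnectionManager.GetHubContext<UpdateHub>();
hubContext.Clients.AllExcept(callerConnectionId).broadcastMessage("add", item);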
In your case, using SignalR shouldn't have any downsides.

Couchbase Lite Xamarin pull replication with Sync Gateway

I want to pull only the documents whose username attribute matches the current user: user1 should get the documents with username user1, and likewise for each user.
This is my replication code.
private void setupreplication()
{
    Console.WriteLine("Setting up replication");
    Uri Server = new Uri("http://192.168.1.213:4984/aussie-coins-syncgw/");
    var pull = _db.CreatePullReplication(Server);
    var push = _db.CreatePushReplication(Server);
    pull.Filter = "byUser";
    pull.FilterParams = new Dictionary<string, object> { { "type", "user1" } };
    pull.Continuous = true;
    push.Continuous = true;
    pull.Start();
    push.Start();
}
This is my SetFilter code:
_couchBaseLiteLocal.SetFilter("byUser", (revision, filterParams) =>
{
    var typeParam = filterParams["type"].ToString();
    return (typeParam != null) && typeParam.Equals("user1");
});
With the above code, even the generic pull itself is not working.
I just tried to do what is given in the documentation.
I do not understand how the SetFilter function works to filter data from the server. It would be great if someone could help me understand how SetFilter works and how to make the above code work.
Thanks in advance.
The filter function in pull replications can indeed return only the specific documents you are interested in. But it's not very efficient: the filter function has to run over all the documents on the remote database to determine which ones to pull, every time a pull replication is started.
Instead, Sync Gateway introduces the concept of a sync function that incrementally routes documents to channels and computes access control rules on them. That way, when the pull replication starts, it's fast and straightforward for Sync Gateway to return the specific documents the user has access to.
You can specify individual channels in a pull replication from Sync Gateway if needed (see the sketch below). But the thing to remember is that filtered pull replication between Sync Gateway and Couchbase Lite is not based on filter functions; it's based on the sync function, with channel-based filtering on top if needed.
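For reference, a sketch of a channel-filtered pull using the Couchbase Lite 1.x .NET API from the question; the channel name "user1" is an assumption and has to match whatever channel your sync function routes that user's documents into:
var pull = _db.CreatePullReplication(Server);
// Pull only the documents the sync function routed into this user's channel,
// instead of filtering on the client after the fact.
pull.Channels = new List<string> { "user1" };
pull.Continuous = true;
pull.Start();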
In a P2P scenario (replication between two Couchbase Lite instances), the filter function model is used.

Finding Connection by UserId in SignalR

I have a webpage that uses ajax polling to get stock market updates from the server. I'd like to use SignalR instead, but I'm having trouble understanding how/if it would work.
ok, it's not really stock market updates, but the analogy works.
The SignalR examples I've seen send messages to either the current connection, all connections, or groups. In my example the stock updates happen outside of the current connection, so there's no such thing as the 'current connection'. And a user's account is associated with a few stocks, so sending a stock notification to all connections or to groups doesn't work either. I need to be able to find a connection associated with a certain userId.
Here's a fake code example:
foreach (var stock in StockService.GetStocksWithBigNews())
{
    var userIds = UserService.GetUserIdsThatCareAboutStock(stock);
    var connections = /* find connections associated with user ids */;
    foreach (var connection in connections)
    {
        connection.Send(...);
    }
}
In this question on filtering connections, they mention that I could keep current connections in memory, but (1) it's bad for scaling and (2) it's bad for multi-node websites. Both of these points are critically important to our current application. That makes me think I'd have to send a message out to all nodes to find users connected to each node >> my brain explodes in confusion.
THE QUESTION
How do I find a connection for a specific user that is scalable? Am I thinking about this the wrong way?
I created a little project last night to learn this too. I used 1.0 alpha and it was straightforward. I created a Hub and from there on it just worked :)
In my project I have N compute units (some servers processing work); when they start up, they invoke ComputeUnitRegister:
await HubProxy.Invoke("ComputeUnitRegister", _ComputeGuid);
and every time they do something they call
HubProxy.Invoke("Running", _ComputeGuid);
where HubProxy is:
HubConnection Hub = new HubConnection(RoleEnvironment.IsAvailable
    ? RoleEnvironment.GetConfigurationSettingValue("SignalREndPoint")
    : "http://taskqueue.cloudapp.net/");
IHubProxy HubProxy = Hub.CreateHubProxy("ComputeUnits");
I used RoleEnvironment.IsAvailable because I can now run this as an Azure role, a console app, or whatever else in .NET 4.5. The Hub is placed in an MVC4 website project and is started like this:
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(50);
RouteTable.Routes.MapHubs();
public class ComputeUnits : Hub
{
    public Task Running(Guid MyGuid)
    {
        return Clients.Group(MyGuid.ToString()).ComputeUnitHeartBeat(MyGuid,
            DateTime.UtcNow.ToEpochMilliseconds());
    }

    public Task ComputeUnitRegister(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, "ComputeUnits").Wait();
        return Clients.Others.ComputeUnitCameOnline(new { Guid = MyGuid,
            HeartBeat = DateTime.UtcNow.ToEpochMilliseconds() });
    }

    public void SubscribeToHeartBeats(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, MyGuid.ToString());
    }
}
My clients are JavaScript clients that have methods for these calls (let me know if you need to see the code for this as well). Basically, they listen for ComputeUnitCameOnline, and when it fires they call SubscribeToHeartBeats on the server. This means that whenever a server compute unit is doing some work it will call Running, which will trigger a ComputeUnitHeartBeat on the JavaScript clients.
I hope you can use this to see how groups and connections can be used. And lastly, it's also scaled out over multiple Azure roles by adding a few lines of code:
GlobalHost.HubPipeline.EnableAutoRejoiningGroups();
GlobalHost.DependencyResolver.UseServiceBus(
    serviceBusConnectionString,
    2,
    3,
    GetRoleInstanceNumber(),
    topicPathPrefix /* the prefix applied to the name of each topic used */
);
You can get the connection string from the Service Bus section on Azure; remember the Provider=SharedSecret part. When you add the NuGet package, the connection string syntax is also pasted into your web.config.
2 is how many topics to split it across. Topics can contain 1 GB of data, so depending on performance you can increase it.
3 is the number of nodes to split it out over. I used 3 because I have two Azure instances plus my localhost. You can get the role number like this (note that I hard-coded my localhost to 2):
private static int GetRoleInstanceNumber()
{
    if (!RoleEnvironment.IsAvailable)
        return 2;

    var roleInstanceId = RoleEnvironment.CurrentRoleInstance.Id;
    var li1 = roleInstanceId.LastIndexOf(".");
    var li2 = roleInstanceId.LastIndexOf("_");
    var roleInstanceNo = roleInstanceId.Substring(Math.Max(li1, li2) + 1);
    return Int32.Parse(roleInstanceNo);
}
You can see it all live at: http://taskqueue.cloudapp.net/#/compute-units
When using SignalR, after a client has connected to the server it is assigned a connection id (this is essential to providing real-time communication). Yes, this is stored in memory, but SignalR can also be used in multi-node environments. You can use the Redis or even the SQL Server backplane (more to come), for example. So, long story short, we take care of your scale-out scenarios for you via backplanes/service bus without you having to worry about it.
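To tie that back to the question, a sketch of the common pattern for targeting a user across nodes: put every connection a user opens into a group named after the user id, then send to that group; group sends are resolved through the backplane, so this works multi-node. The hub name, the client-side notifyStock method, and the use of the authenticated user name as the id are all assumptions for illustration:
public class StockHub : Hub
{
    public override Task OnConnected()
    {
        // One group per user id; all of a user's connections land in their group.
        Groups.Add(Context.ConnectionId, Context.User.Identity.Name);
        return base.OnConnected();
    }
}

// From the stock update loop, outside the hub:
var hubContext = GlobalHost.ConnectionManager.GetHubContext<StockHub>();
foreach (var userId in userIds)
{
    hubContext.Clients.Group(userId).notifyStock(stock);
}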
