MassTransit request/response: get caller timeout in consumer (C#)

Last year I started using the actor model with Akka.NET. Now I started using MassTransit (v3.5.7) with RabbitMQ and I really love both!
In the request/response scenario, my request consumer executes its business logic by wrapping the request in a new message and Asking an actor to do the actual job.
So basically the consumer awaits on an actor's Ask method. This (extension) method accepts the message and a timeout as arguments.
I'd like to use the same timeout value used by the originator of the request.
Is there a simple way to obtain, in the consumer context, the original timeout used by the caller in order to pass it to the actor's Ask method?
Note: I'd like to avoid adding the timeout to the request interface.

Finally I found a solution! It's quite easy (once you've investigated the MassTransit source code :-) and works for me, but if someone has advice or hints, please let me know.
So, basically, I created a support library for MassTransit, where I added a class with two extension methods:
The CreateRequestClientWithTimeoutHeader() method creates a client and stores the string representation of the passed timeout (expressed in seconds) in the message header.
This will be used by the client.
The GetClientTimeout() method retrieves the value from the message header and converts it to a TimeSpan. This will be used in the consumer.
Here's the code:
using System;
using System.Globalization;
using MassTransit;

public static class MassTransitExtMethods
{
    private const string ClientTimeoutHeaderKey = "__ClientTimeout__";

    public static IRequestClient<TRequest, TResponse> CreateRequestClientWithTimeoutHeader<TRequest, TResponse>
    (
        this IBus bus,
        Uri address,
        TimeSpan timeout,
        TimeSpan? ttl = default(TimeSpan?),
        Action<SendContext<TRequest>> callback = null
    )
        where TRequest : class
        where TResponse : class
    {
        return
            bus
                .CreateRequestClient<TRequest, TResponse>
                (
                    address,
                    timeout,
                    ttl,
                    context =>
                    {
                        context
                            .Headers
                            .Set
                            (
                                ClientTimeoutHeaderKey,
                                timeout.TotalSeconds.ToString(CultureInfo.InvariantCulture)
                            );

                        callback?.Invoke(context);
                    }
                );
    }

    public static TimeSpan? GetClientTimeout(this ConsumeContext consumeContext)
    {
        string headerValue =
            consumeContext
                .Headers
                .Get<string>(ClientTimeoutHeaderKey);

        if (string.IsNullOrEmpty(headerValue))
        {
            return null;
        }

        double timeoutInSeconds;
        if (double.TryParse(headerValue, NumberStyles.Any, CultureInfo.InvariantCulture, out timeoutInSeconds))
        {
            return TimeSpan.FromSeconds(timeoutInSeconds);
        }

        return null;
    }
}
To use it, create the client using the new extension method:
var client =
    mybus
        .CreateRequestClientWithTimeoutHeader<IMyRequest, IMyResponse>
        (
            new Uri(serviceAddress),
            TimeSpan.FromSeconds(10.0)
        );
And here is a very simple example of a consumer using an Akka.NET actor, which implements the business logic (please note that the implementation is not complete):
public class MyReqRespProcessor : IConsumer<IMyRequest>
{
    private readonly IActorRef _myActor;

    public async Task Consume(ConsumeContext<IMyRequest> context)
    {
        TimeSpan? clientTimeout = context.GetClientTimeout();

        var response = await
            _myActor
                .Ask<IMyResponse>(context.Message, clientTimeout ?? PredefinedTimeout)
                .ConfigureAwait(false);

        await
            context
                .RespondAsync<IMyResponse>(response)
                .ConfigureAwait(false);
    }
}
In a real scenario, with a lot of requests, the actor may be a router, configured according to the endpoint configuration (for example the prefetch count value).
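For illustration, a minimal sketch of what such a router might look like (the actor type MyRequestActor, the round-robin choice, and the prefetch value are assumptions, not part of the original answer):

// A round-robin pool router sized to match the endpoint's prefetch count,
// so the number of concurrently processed requests matches what the broker
// delivers to this consumer. 'system' is the ActorSystem; MyRequestActor is
// a hypothetical actor implementing the business logic.
int prefetchCount = 16; // assumption: read this from your endpoint configuration
IActorRef router =
    system.ActorOf(
        Props.Create<MyRequestActor>().WithRouter(new RoundRobinPool(prefetchCount)),
        "request-router");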
I know this is not a perfect solution, but on the server side it gives a bound on the maximum processing time.
In case of network delays, the client may receive the timeout before the actor stops processing the request. Anyway, the actor will work on that request at most for the time specified by the client. And this is what I wanted to achieve.
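One caveat worth noting: when the Ask times out, Akka.NET faults the returned task with an AskTimeoutException, so the consumer should be prepared for it. A minimal sketch of how the Consume body above could guard against it (the decision not to respond on timeout is an assumption):

try
{
    var response = await
        _myActor
            .Ask<IMyResponse>(context.Message, clientTimeout ?? PredefinedTimeout)
            .ConfigureAwait(false);

    await context.RespondAsync<IMyResponse>(response).ConfigureAwait(false);
}
catch (AskTimeoutException)
{
    // The caller has almost certainly given up by now, so responding is
    // pointless; log the timeout and let the request expire instead.
}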

Related

Azure Service Bus throwing MessageLockLostException for non-partitioned queue when lock duration has not passed

I'm trying to receive a message using the PeekLock receive mode, process the message (the processing time is well below message lock duration), and then complete the message. I've checked all around for others having similar issues but I think I have ruled out all proposed solutions.
The queue is not partitioned.
There is only one message on the queue when I'm testing and I've even tried to set PrefetchCount to 1 anyway.
There is only one receiver active.
I've increased the LockDuration to 5 minutes and the processing time is ~20 seconds. When inspecting the received message I can see that the LockedUntil property is much later than the CompleteAsync call.
Even so, when calling the CompleteAsync method I get the MessageLockLostException every time.
This is (the relevant part of) the wrapper class I'm using:
public class ServiceBusClient
{
    private readonly ServiceBusConnectionManager manager;

    public ServiceBusClient(string connectionString)
    {
        manager = new ServiceBusConnectionManager(connectionString);
    }

    public async Task<Message> GetNextDownlinkMessage(string queueName)
    {
        var messageReceiver = new MessageReceiver(manager.Connection, queueName, ReceiveMode.PeekLock);
        var message = await messageReceiver.ReceiveAsync();
        await messageReceiver.CloseAsync();
        return message;
    }

    public async Task Complete(string queueName, string lockToken)
    {
        var queueClient = new QueueClient(manager.Connection, queueName, ReceiveMode.PeekLock, null);
        await queueClient.CompleteAsync(lockToken);
    }
}
I've checked that the lockToken is actually the one set on the received message. I'm really running out of ideas.
Edit: I later realized that the error is because I instantiate a new MessageReceiver object each time and the message needs to be completed by the exact same instance used to fetch the message.
Sounds like you're using an older version of the Service Bus SDK. There's a newer version coming out at the end of this month for general availability; you can find it here.
With the new library, you can complete a message by calling .CompleteMessageAsync(msg) on a ServiceBusReceiver (sample link):
string connectionString = "<connection_string>";
string queueName = "<queue_name>";
// since ServiceBusClient implements IAsyncDisposable we create it with "await using"
await using var client = new ServiceBusClient(connectionString);
// create a receiver that we can use to receive and settle the message
ServiceBusReceiver receiver = client.CreateReceiver(queueName);
// the received message is a different type as it contains some service set properties
ServiceBusReceivedMessage receivedMessage = await receiver.ReceiveMessageAsync();
// complete the message, thereby deleting it from the service
await receiver.CompleteMessageAsync(receivedMessage);
I finally found the issue. Every time I completed a message I created a new QueueClient instance. It turns out that you need to complete or abandon the message using the exact same MessageReceiver instance that received it. You can't create a new instance of MessageReceiver even if the queue is the same.
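For anyone staying on the older SDK, a minimal sketch of the corrected pattern: keep one MessageReceiver per queue and complete through that same instance (ServiceBusConnectionManager and the method shapes are carried over from the question; the receiver caching strategy is an assumption):

// using Microsoft.Azure.ServiceBus;
// using Microsoft.Azure.ServiceBus.Core;
// using System.Collections.Concurrent;
public class ServiceBusClient
{
    private readonly ServiceBusConnectionManager manager;
    private readonly ConcurrentDictionary<string, MessageReceiver> receivers =
        new ConcurrentDictionary<string, MessageReceiver>();

    public ServiceBusClient(string connectionString)
    {
        manager = new ServiceBusConnectionManager(connectionString);
    }

    private MessageReceiver GetReceiver(string queueName) =>
        receivers.GetOrAdd(queueName,
            name => new MessageReceiver(manager.Connection, name, ReceiveMode.PeekLock));

    public Task<Message> GetNextDownlinkMessage(string queueName) =>
        GetReceiver(queueName).ReceiveAsync();

    // Complete through the SAME receiver that fetched the message;
    // completing through a different instance is what triggers
    // MessageLockLostException.
    public Task Complete(string queueName, string lockToken) =>
        GetReceiver(queueName).CompleteAsync(lockToken);
}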

MVC Core, Web Sockets and threading

I am working on a solution that uses the web socket protocol to notify the client (web browser) when some event happens on the server (an MVC Core web app). I use the Microsoft.AspNetCore.WebSockets NuGet package.
Here is my client-side code:
$(function () {
    var socket = new WebSocket("ws://localhost:61019/data/openSocket");
    socket.onopen = function () {
        $(".socket-status").css("color", "green");
    };
    socket.onmessage = function (message) {
        $("body").append(document.createTextNode(message.data));
    };
    socket.onclose = function () {
        $(".socket-status").css("color", "red");
    };
});
When this view is loaded the socket request is immediately sent to the MVC Core application. Here is the controller action:
[Route("data")]
public class DataController : Controller
{
[Route("openSocket")]
[HttpGet]
public ActionResult OpenSocket()
{
if (HttpContext.WebSockets.IsWebSocketRequest)
{
WebSocket socket = HttpContext.WebSockets.AcceptWebSocketAsync().Result;
if (socket != null && socket.State == WebSocketState.Open)
{
while (!HttpContext.RequestAborted.IsCancellationRequested)
{
var response = string.Format("Hello! Time {0}", System.DateTime.Now.ToString());
var bytes = System.Text.Encoding.UTF8.GetBytes(response);
Task.Run(() => socket.SendAsync(new System.ArraySegment<byte>(bytes),
WebSocketMessageType.Text, true, CancellationToken.None));
Thread.Sleep(3000);
}
}
}
return new StatusCodeResult(101);
}
}
This code works very well. The WebSocket here is used exclusively for sending and doesn't receive anything. The problem, however, is that the while loop keeps holding the DataController thread until a cancellation request is detected.
Web socket here is bound to the HttpContext object. As soon as HttpContext for the web request is destroyed the socket connection is immediately closed.
Question 1: Is there any way that the socket can be preserved outside of the controller thread?
I tried putting it into a singleton that lives in the MVC Core Startup class running on the main application thread. Is there any way to keep the socket open, or to establish the connection again from within the main application thread, rather than holding the controller thread with a while loop?
Even if it is deemed OK to hold up the controller thread for the socket connection to remain open, I cannot think of any good code to put inside OpenSocket's while loop. What do you think about having a manual reset event in the controller and waiting for it to be set inside the while loop within the OpenSocket action?
Question 2: If it is not possible to separate HttpContext and WebSocket objects in MVC, what other alternative technologies or development patterns can be utilized to achieve socket connection reuse? If anyone thinks that SignalR or a similar library has some code allowing to have socket independent from HttpContext, please share some example code. If someone thinks there is a better alternative to MVC for this particular scenario, please provide an example, I do not mind switching to pure ASP.NET or Web API, if MVC does not have capabilities to handle independent socket communication.
Question 3: The requirement is to keep socket connection alive or be able to reconnect until explicit timeout or cancel request by the user. The idea is that some independent event happens on the server that triggers established socket to send data.
If you think that some technology other than web sockets would be more useful for this scenario (like HTML/2 or streaming), could you please describe the pattern and frameworks you would use?
P.S. A possible solution would be to send AJAX requests every second to ask whether there is new data on the server. This is the last resort.
After lengthy research I ended up going with a custom middleware solution. Here is my middleware class:
public class SocketMiddleware
{
    private static ConcurrentDictionary<string, SocketMiddleware> _activeConnections =
        new ConcurrentDictionary<string, SocketMiddleware>();

    private string _packet;
    private ManualResetEvent _send = new ManualResetEvent(false);
    private ManualResetEvent _exit = new ManualResetEvent(false);

    private readonly RequestDelegate _next;

    public SocketMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public void Send(string data)
    {
        _packet = data;
        _send.Set();
    }

    public async Task Invoke(HttpContext context)
    {
        if (context.WebSockets.IsWebSocketRequest)
        {
            string connectionName = context.Request.Query["connectionName"];
            if (!_activeConnections.ContainsKey(connectionName))
            {
                WebSocket socket = await context.WebSockets.AcceptWebSocketAsync();
                if (socket == null || socket.State != WebSocketState.Open)
                {
                    await _next.Invoke(context);
                    return;
                }

                Thread sender = new Thread(() => StartSending(socket));
                sender.Start();

                if (!_activeConnections.TryAdd(connectionName, this))
                {
                    _exit.Set();
                    await _next.Invoke(context);
                    return;
                }

                while (true)
                {
                    // Receiving is the only reliable way to detect the client disconnecting.
                    WebSocketReceiveResult result = socket.ReceiveAsync(new ArraySegment<byte>(new byte[1]), CancellationToken.None).Result;
                    if (result.CloseStatus.HasValue)
                    {
                        _exit.Set();
                        break;
                    }
                }

                SocketMiddleware dummy;
                _activeConnections.TryRemove(connectionName, out dummy);
            }
        }

        await _next.Invoke(context);

        string data = context.Items["Data"] as string;
        if (!string.IsNullOrEmpty(data))
        {
            string name = context.Items["ConnectionName"] as string;
            SocketMiddleware connection;
            if (name != null && _activeConnections.TryGetValue(name, out connection))
            {
                connection.Send(data);
            }
        }
    }

    private void StartSending(WebSocket socket)
    {
        WaitHandle[] events = new WaitHandle[] { _send, _exit };
        while (true)
        {
            if (WaitHandle.WaitAny(events) == 1)
            {
                break;
            }
            if (!string.IsNullOrEmpty(_packet))
            {
                SendPacket(socket, _packet);
            }
            _send.Reset();
        }
    }

    private void SendPacket(WebSocket socket, string packet)
    {
        byte[] buffer = Encoding.UTF8.GetBytes(packet);
        ArraySegment<byte> segment = new ArraySegment<byte>(buffer);
        Task.Run(() => socket.SendAsync(segment, WebSocketMessageType.Text, true, CancellationToken.None));
    }
}
This middleware is going to run on every request. When Invoke is called it checks if it is a web socket request. If it is, the middleware checks if such connection was already opened and if it wasn't, the handshake is accepted and the middleware adds it to the dictionary of connections. It's important that the dictionary is static so that it is created only once during application lifetime.
Now if we stop here and move up the pipeline, HttpContext will eventually get destroyed and, since the socket is not properly encapsulated, it will be closed too. So we must keep the middleware thread running. This is done by asking the socket to receive some data.
You may ask why we need to receive anything if the requirement is just to send? The answer is that it is the only way to reliably detect client disconnecting. HttpContext.RequestAborted.IsCancellationRequested works only if you constantly send within the while loop. If you need to wait for some server event on a WaitHandle, cancellation flag is never true. I tried to wait for HttpContext.RequestAborted.WaitHandle as my exit event, but it is never set either. So we ask socket to receive something and if that something sets CloseStatus.HasValue to true, we know that client disconnected. If we receive something else (client side code is unsafe) we will ignore it and start receiving again.
Sending is done in a separate thread. The reason is the same, it's not possible to detect disconnection if we wait on the main middleware thread. To notify the sender thread that client disconnected we use _exit synchronization variable. Remember, it is fine to have private members here since SocketMiddleware instances are saved in a static container.
Now, how do we actually send anything with this setup? Let's say an event occurs on the server and some data becomes available. For simplicity's sake, let's assume this data arrives inside a normal HTTP request to some controller action. SocketMiddleware will run for every request, but since it is not a web socket request, _next.Invoke(context) is called and the request reaches a controller action which may look something like this:
[Route("ProvideData")]
[HttpGet]
public ActionResult ProvideData(string data, string connectionName)
{
if (!string.IsNullOrEmpty(data) && !string.IsNullOrEmpty(connectionName))
{
HttpContext.Items.Add("ConnectionName", connectionName);
HttpContext.Items.Add("Data", data);
}
return Ok();
}
The controller populates the Items collection, which is used to share data between components. Then the pipeline returns to the SocketMiddleware, where we check whether there is anything interesting inside context.Items. If there is, we select the respective connection from the dictionary and call its Send() method, which sets the data string, sets the _send event, and allows a single run of the while loop inside the sender thread.
And voila, we have a socket connection that sends on a server-side event. This example is very primitive and is there just to illustrate the concept. Of course, to use this middleware you will need to add the following lines to your Startup class before you add MVC:
app.UseWebSockets();
app.UseMiddleware<SocketMiddleware>();
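For context, a minimal sketch of where these lines sit in a typical ASP.NET Core Startup.Configure method (the UseMvc call is a stand-in for however you already register MVC):

public void Configure(IApplicationBuilder app)
{
    // WebSockets and the custom middleware must be registered before MVC,
    // otherwise MVC handles (and terminates) the upgrade request first.
    app.UseWebSockets();
    app.UseMiddleware<SocketMiddleware>();
    app.UseMvc();
}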
Code is very strange and hopefully we'll be able to write something much nicer when SignalR for dotnetcore is finally out. Hopefully this example will be useful for someone. Comments and suggestions are welcome.

Consuming rate-limiting API [duplicate]

I have access to an API that accepts a maximum rate of calls per second. If the rate is exceeded, an exception is thrown.
I would like to wrap this call in an abstraction that does what is necessary to keep the call rate under the limit. It would act like a network router: handling multiple calls and returning the results to the correct caller while respecting the call rate. The goal is to make the calling code as unaware as possible of that limitation; otherwise, every part of the code making this call would have to be wrapped in a try-catch!
For example: imagine that you can call a method from an external API that can add 2 numbers. This API can be called 5 times per second. Anything higher than this will result in an exception.
To illustrate the problem, the external service that limits the call rate is like the one in the answer to this question
How to build a rate-limiting API with Observables?
ADDITIONAL INFO:
Since you don't want to worry about that limit every time you call this method from any part of your code, you think about designing a wrapper method that you can call without worrying about the rate limit. On the inside you care about the limit, but on the outside you expose a simple async method.
It's similar to a web server. How does it return the correct set of results to the correct client?
Multiple callers will call this method, and they will get the results as they come. This abstraction should act like a proxy.
How could I do it?
I'm sure the signature of the wrapper method should be something like:
public async Task<Results> MyMethod()
And inside the method it will perform the logic, maybe using Reactive Extensions (Buffer). I don't know.
But how? I mean, multiple calls to this method should return the results to the correct caller. Is this even possible?
Thank you a lot!
There are rate limiting libraries available (see Esendex's TokenBucket Github or Nuget).
Usage is very simple; this example would limit polling to once per second:
// Create a token bucket with a capacity of 1 token that refills at a fixed interval of 1 token/sec.
ITokenBucket bucket = TokenBuckets.Construct()
    .WithCapacity(1)
    .WithFixedIntervalRefillStrategy(1, TimeSpan.FromSeconds(1))
    .Build();

// ...

while (true)
{
    // Consume a token from the token bucket. If a token is not available this method will block until
    // the refill strategy adds one to the bucket.
    bucket.Consume(1);

    Poll();
}
I have also needed to make it async for a project of mine, I simply made an extension method:
public static class TokenBucketExtensions
{
    public static Task ConsumeAsync(this ITokenBucket tokenBucket)
    {
        return Task.Factory.StartNew(tokenBucket.Consume);
    }
}
Using this, you wouldn't need to throw/catch exceptions, and writing a wrapper becomes fairly trivial.
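For illustration, a minimal sketch of such a wrapper (the bucket sizing and the externalApi.Add call are hypothetical stand-ins, not part of the original answer):

public class RateLimitedCalculator
{
    // Allow at most 5 calls per second, matching the example API's limit.
    private readonly ITokenBucket bucket =
        TokenBuckets.Construct()
            .WithCapacity(5)
            .WithFixedIntervalRefillStrategy(5, TimeSpan.FromSeconds(1))
            .Build();

    // Callers simply await this method; the bucket transparently spaces
    // the underlying calls so the limit is never exceeded.
    public async Task<int> AddAsync(int a, int b)
    {
        await bucket.ConsumeAsync();
        return await externalApi.Add(a, b); // hypothetical rate-limited API call
    }
}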
What exactly you should do depends on your goals and limitations. My assumptions:
you want to avoid making requests while the rate limiter is in effect
you can't predict whether a specific request will be denied or how long it will take for another request to be allowed again
you don't need to make multiple requests concurrently, and when multiple requests are waiting, it does not matter in which order they are completed
If these assumptions are valid, you could use AsyncAutoResetEvent from AsyncEx: wait for it to be set before making the request, set it after successfully making a request and set it after a delay when it's rate limited.
The code can look like this:
class RateLimitedWrapper<TException> where TException : Exception
{
    private readonly AsyncAutoResetEvent autoResetEvent = new AsyncAutoResetEvent(set: true);

    public async Task<T> Execute<T>(Func<Task<T>> func)
    {
        while (true)
        {
            try
            {
                await autoResetEvent.WaitAsync();
                var result = await func();
                autoResetEvent.Set();
                return result;
            }
            catch (TException)
            {
                var ignored = Task.Delay(500).ContinueWith(_ => autoResetEvent.Set());
            }
        }
    }
}
Usage:
public static Task<int> Add(int a, int b)
{
    return rateLimitedWrapper.Execute(() => rateLimitingCalculator.Add(a, b));
}
A variant is to enforce a minimum time between calls, something like the following:
private readonly Object syncLock = new Object();
private readonly TimeSpan minTimeout = TimeSpan.FromSeconds(5);
private DateTime nextCallDate = DateTime.MinValue; // guarded by syncLock (volatile is not valid on DateTime)

public async Task<Result> RequestData(...) {
    DateTime possibleCallDate = DateTime.Now;
    lock (syncLock) {
        // When is it possible to make the next call?
        if (nextCallDate > possibleCallDate) {
            possibleCallDate = nextCallDate;
        }
        nextCallDate = possibleCallDate + minTimeout;
    }

    TimeSpan waitingTime = possibleCallDate - DateTime.Now;
    if (waitingTime > TimeSpan.Zero) {
        await Task.Delay(waitingTime);
    }

    return await ... /* the actual call to API */ ...;
}
Older thread, but in this context I would also like to mention the Polly library (https://github.com/App-vNext/Polly), which can be very helpful in this scenario.
It can do the same (and more) as TokenBucket (https://github.com/esendex/TokenBucket) mentioned in another answer, but that library is a bit older and doesn't support .NET Standard by default, for example.
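For example, a minimal Polly sketch that retries with a delay whenever the rate limit is hit (RateLimitException and externalApi.Add are hypothetical stand-ins for whatever exception and call your API actually uses):

// using Polly;
// Retry up to 5 times, waiting 250 ms between attempts, whenever the
// (hypothetical) RateLimitException signals that the limit was exceeded.
var policy = Policy
    .Handle<RateLimitException>()
    .WaitAndRetryAsync(5, attempt => TimeSpan.FromMilliseconds(250));

int sum = await policy.ExecuteAsync(() => externalApi.Add(a, b));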

Replacing TaskCompletionSource with Observable

In my .NET 4.0 library I have a piece of code that sends data over the network and waits for a response. In order to not block the calling code the method returns a Task<T> that completes when the response is received so that the code can call the method like this:
// Send the 'message' to the given 'endpoint' and then wait for the response
Task<IResult> task = sender.SendMessageAndWaitForResponse(endpoint, message);
task.ContinueWith(
    t =>
    {
        // Do something with t.Result ...
    });
The underlying code uses a TaskCompletionSource so that it can wait for the response message without having to spin up a thread only to have it sit there idling until the response comes in:
private readonly Dictionary<int, TaskCompletionSource<IResult>> m_TaskSources
    = new Dictionary<int, TaskCompletionSource<IResult>>();

public Task<IResult> SendMessageAndWaitForResponse(int endpoint, object message)
{
    var source = new TaskCompletionSource<IResult>(TaskCreationOptions.None);
    m_TaskSources.Add(endpoint, source);

    // Send the message here ...

    return source.Task;
}
When the response is received it is processed like this:
public void CompleteWaitForResponseResponse(int endpoint, IResult value)
{
    if (m_TaskSources.ContainsKey(endpoint))
    {
        var source = m_TaskSources[endpoint];
        source.SetResult(value);
        m_TaskSources.Remove(endpoint);
    }
}
Now I want to add a time-out so that the calling code won't wait indefinitely for the response. However on .NET 4.0 that is somewhat messy because there is no easy way to time-out a task. So I was wondering if Rx would be able to do this easier. So I came up with the following:
private readonly Dictionary<int, Subject<IResult>> m_SubjectSources
    = new Dictionary<int, Subject<IResult>>();

private Task<IResult> SendMessageAndWaitForResponse(int endpoint, object message, TimeSpan timeout)
{
    var source = new Subject<IResult>();
    m_SubjectSources.Add(endpoint, source);

    // Send the message here ...

    return source.Timeout(timeout).ToTask();
}

public void CompleteWaitForResponseResponse(int endpoint, IResult value)
{
    if (m_SubjectSources.ContainsKey(endpoint))
    {
        var source = m_SubjectSources[endpoint];
        source.OnNext(value);
        source.OnCompleted();
        m_SubjectSources.Remove(endpoint);
    }
}
This all seems to work without issue, however I've seen several questions stating that Subject should be avoided so now I'm wondering if there is a more Rx-y way to achieve my goal.
The advice to avoid using Subject in Rx is often overstated. There has to be a source for events in Rx, and it's fine for it to be a Subject.
The issue with Subject is generally when it is used in between two Rx queries that could otherwise be joined, or where there is already a well-defined conversion to IObservable<T> (such as Observable.FromEventXXX or Observable.FromAsyncXXX, etc.).
If you want, you can do away with the Dictionary and multiple Subjects with the approach below. This uses a single subject and returns a filtered query to the client.
It's not "better" per se, Whether this makes sense will depend on the specifics of your scenario, but it saves spawning lots of subjects, and gives you a nice option for monitoring all results in a single stream. If you were dispatching results serially (say from a message queue) this could make sense.
// you only need to synchronize if you are receiving results in parallel;
// not readonly, so it can be reinitialised after an OnError (see below)
private ISubject<Tuple<int,IResult>, Tuple<int,IResult>> results =
    Subject.Synchronize(new Subject<Tuple<int,IResult>>());

private Task<IResult> SendMessageAndWaitForResponse(
    int endpoint, object message, TimeSpan timeout)
{
    // your message processing here, I'm just echoing a second later
    Task.Delay(TimeSpan.FromSeconds(1)).ContinueWith(t =>
    {
        CompleteWaitForResponseResponse(endpoint, new Result { Value = message });
    });

    return results.Where(r => r.Item1 == endpoint)
                  .Select(r => r.Item2)
                  .Take(1)
                  .Timeout(timeout)
                  .ToTask();
}

public void CompleteWaitForResponseResponse(int endpoint, IResult value)
{
    results.OnNext(Tuple.Create(endpoint, value));
}
Where I defined a class for results like this:
public class Result : IResult
{
    public object Value { get; set; }
}

public interface IResult
{
    object Value { get; set; }
}
EDIT - In response to additional questions in the comments.
No need to dispose of the single Subject - it won't leak and will be garbage collected when it goes out of scope.
ToTask does accept a cancellation token - but that's really for cancellation from the client side.
If the remote side disconnects, you can send the error to all clients with results.OnError(exception); you'll want to instantiate a new subject instance at the same time.
Something like:
private void OnRemoteError(Exception e)
{
    results.OnError(e);
}
This will manifest as a faulted task to all clients in the expected manner.
It's pretty thread safe too because clients subscribing to a subject that has previously sent OnError will get an error back immediately - it's dead from that point. Then when ready you can reinitialise with:
private void OnInitialiseConnection()
{
    // ... your connection logic

    // reinitialise the subject...
    results = Subject.Synchronize(new Subject<Tuple<int,IResult>>());
}
For individual client errors, you could consider extending your IResult interface to include errors as data. You can then optionally project this to a fault for just that client by extending the Rx query in SendMessageAndWaitForResponse. For example, add an Exception and a HasError property to IResult so that you can do something like:
return results.Where(r => r.Item1 == endpoint)
              .SelectMany(r => r.Item2.HasError
                  ? Observable.Throw<IResult>(r.Item2.Exception)
                  : Observable.Return(r.Item2))
              .Take(1)
              .Timeout(timeout)
              .ToTask();
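For completeness, from the caller's side the Timeout operator surfaces as a faulted task carrying a TimeoutException, so a minimal usage sketch in the question's .NET 4.0 continuation style might look like this (the handling choices are assumptions):

Task<IResult> task = SendMessageAndWaitForResponse(endpoint, message, TimeSpan.FromSeconds(5));
task.ContinueWith(t =>
{
    if (t.IsFaulted && t.Exception.InnerException is TimeoutException)
    {
        // No response arrived within the timeout window.
    }
    else if (t.Status == TaskStatus.RanToCompletion)
    {
        // Do something with t.Result ...
    }
});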
