I have a (classic) SignalR hub which returns the state of some value, with an additional client callback contract that sends updates to the clients. What I'm basically trying to create on the client side (C#) is an observable stream that invokes a task when the first observer connects and then keeps listening for changes on the callback contract.
Server contracts (pseudo):
interface IMyHub
{
    Task<int> GetInitialInt();
}

interface IMyHubClient
{
    void SendFurtherInts(int value);
}
Client proxy:
var client = hubConnection.CreateObservableHubProxy<IMyHub, IMyHubClient>();
I had imagined something like this:
public IObservable<int> Value { get; }
var initialData = Observable.FromAsync(async () =>
await _client.CallAsync(client => client.GetInitialInt));
var stream = _client.Observe(client => client.SendFurtherInts)
.StartWith(initialData)
.Publish()
.RefCount(1);
stream.Connect();
Value = stream.AsObservable();
StartWith(), however, expects already-available values, not a task or another observable. .Prepend() also doesn't accept a stream. I could technically invoke the task first and then create the stream, but that's neither really functional nor does it fit my needs (invoke only when the first observer connects).
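One variant I considered (a sketch against the hypothetical CallAsync/Observe proxy shown above; note that updates arriving while the initial call is still in flight would be lost) is to defer the initial call and concatenate the update stream:
var initial = Observable.FromAsync(() => _client.CallAsync(client => client.GetInitialInt()));
var updates = _client.Observe(client => client.SendFurtherInts);

// FromAsync defers the hub call until subscription, and Publish().RefCount()
// makes that first subscription happen when the first observer connects.
Value = initial
    .Concat(updates)
    .Publish()
    .RefCount();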
What would be the proper way to solve this?
I am using the GraphQL.NET client to subscribe to data on a remote service. The client returns an Observable so when the subscription is created you, as expected, receive new messages in onNext and get errors (both initial connection errors, reconnection errors, and anything else) in onError. The GraphQL client has the ability to automatically reconnect if the initial connection fails or when an established connection drops.
I know that, by convention, any message coming in on onError is supposed to terminate the sequence. However, somehow they are able to continue sending to onNext and onError after that first onError. I have tried reading through the code but it is confusing. There seem to be multiple layers of nested Observables, and I suspect they are creating a new sequence when they encounter an error.
To clarify my issue, suppose I had the following pseudo Event based wrapper class.
public class PubSubSubscription
{
...
public void CreateSubscription<TResponse>(string topic) {
// GraphQL client
var stream = client
.CreateSubscriptionStream<FixConnectionChangedSubscriptionResult>(...);
stream
.Subscribe(
response => {
// Do stuff with incoming data (validation, mapping, logging, etc.)
// send it on the UI
DataReceived?.Invoke(this, new DataReceivedEventArgs { Message = response });
},
ex => {
// ******************************
// Note that the Observable created by CreateSubscriptionStream()
// will call `onError` over-and-over since it _seems_ like it is
// creating (and re-creating) nested Observables in its own
// classes. In the event of an initial connection failure or
// re-connect it will raise an error and then automatically
// try again.
// ******************************
// send it on to UI
ErrorDetected?.Invoke(this, new ErrorDetectedEventArgs { Exception = ex });
});
}
...
}
I would then call it as follows (or close enough)...
...
var orders = ordersPubSub.CreateSubscription("/orders");
orders.DataReceived += OnDataReceived;
orders.ErrorDetected += OnErrorDetected;
void OnErrorDetected(object sender, ErrorDetectedEventArgs e) {
// Can be called multiple times
// Display message in UI
}
...
I am having trouble converting that event-based wrapper approach to an Observable wrapper approach.
public class PubSubSubscription
{
...
public IObservable<TResponse> CreateSubscription<TResponse>(string topic) {
// Observable that I give back to my UI
var eventSubject = new Subject<TResponse>();
// GraphQL client
var stream = client
.CreateSubscriptionStream<FixConnectionChangedSubscriptionResult>(...);
stream
.Subscribe(
response => {
// Do stuff with incoming data (validation, mapping, logging, etc.)
// send it on the UI
eventSubject.OnNext(response);
},
ex => {
// ******************************
// Note that the Observable created by CreateSubscriptionStream()
// will call `onError` over-and-over since it _seems_ like it is
// creating (and re-creating) nested Observables in its own
// classes. In the event of an initial connection failure or
// re-connect it will raise an error and then automatically
// try again.
// ******************************
// send it on to UI
eventSubject.OnError(ex);
});
return eventSubject.AsObservable();
}
...
}
I would then call this as follows (or close enough)...
...
var orders = ordersPubSub.CreateSubscription("/orders");
orders
// Things I have tried...
// Do() by itself does not stop the exception from hitting onError (which makes sense)
.Do(
    _ => { },
    ex => { /* display in UI */ })
// Retry() seems to cause the GraphQL subscription to "go away" because I no longer see connection attempts
.Retry()
// Stops the exception from hitting onError, but the sequence still stops since I need to return _something_ from this method
.Catch((Exception ex) =>
{
    // display in UI
    return Observable.Empty<T>();
})
.Subscribe(
    msg => { /* do something with data */ },
    ex => { /* display in UI */ });
...
Bottom line is what is the proper approach to dealing with sequences that can be "temporarily interrupted"?
I am also unsure of the idea of pushing the responsibility of retries onto the observer. This means that I would need to duplicate the logic each time CreateSubscription() is called. Yet, if I move it into the CreateSubscription() method, I am still unsure how to let the observer know the interruption happened so the UI can be updated.
One approach I am playing with (after reading about it as a possible solution) is to wrap my TResponse in a "fake" SubscriptionResponse<TResponse> which has T Value and Exception Error properties so the outer Observable only has onNext called. Then in my Subscribe I add if/else logic to check if Error is non-null and react accordingly. But this just feels ugly... I would almost want to go back to using events...
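Roughly, the wrapper I have in mind would look like this (a sketch; the names are just placeholders):
// Sketch only: carries either a value or an error so the outer
// observable only ever calls OnNext.
public class SubscriptionResponse<T>
{
    public T Value { get; set; }
    public Exception Error { get; set; }
    public bool HasError => Error != null;
}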
If you have an unruly observable - one that produces multiple errors without ending - you can make it workable by doing this:
IObservable<int> unruly = ...;
IObservable<Notification<int>> workable =
unruly
.Materialize();
The Materialize operator turns the IObservable<int> into an IObservable<Notification<int>> where the OnCompleted, OnError, and OnNext messages all get converted to OnNext messages that you can inspect like this:
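(A sketch of that inspection; the console output just stands in for whatever handling you actually need.)
workable.Subscribe(notification =>
{
    switch (notification.Kind)
    {
        case NotificationKind.OnNext:
            Console.WriteLine($"Value: {notification.Value}");
            break;
        case NotificationKind.OnError:
            Console.WriteLine($"Error: {notification.Exception.Message}");
            break;
        case NotificationKind.OnCompleted:
            Console.WriteLine("Completed");
            break;
    }
});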
Now you can deal with the errors without the sequence ending. When you've cleared them you can restore the sequence with Dematerialize like so:
IObservable<int> ruly =
workable
.Where(x => x.Kind != NotificationKind.OnError)
.Dematerialize();
Last year I started using the actor model with Akka.NET. Now I started using MassTransit (v3.5.7) with RabbitMQ and I really love both!
In the request/response scenario, my request consumer executes its business logic by wrapping the request in a new message and Asking an actor to do the actual job.
So basically the consumer awaits on an actor's Ask method. This (extension) method accepts the message and a timeout as arguments.
I'd like to use the same timeout value used by the originator of the request.
Is there a simple way to obtain, in the consumer context, the original timeout used by the caller in order to pass it to the actor's Ask method?
Note: I'd like to avoid adding the timeout to the request interface.
Finally I found a solution! It's quite easy (once you dig into the MassTransit source code :-)) and it works for me, but if someone has advice or a hint, please let me know.
So, basically, I created a support library for MassTransit, where I added a class with two extension methods:
The CreateRequestClientWithTimeoutHeader() method creates a client and stores the string representation of the passed timeout (expressed in seconds) in the message header.
This will be used by the client.
The GetClientTimeout() method retrieves the value from the message header and converts it to a TimeSpan. This will be used in the consumer.
Here's the code:
public static class MassTransitExtMethods
{
private const string ClientTimeoutHeaderKey = "__ClientTimeout__";
public static IRequestClient<TRequest, TResponse> CreateRequestClientWithTimeoutHeader<TRequest, TResponse>
(
this IBus bus,
Uri address,
TimeSpan timeout,
TimeSpan? ttl = default(TimeSpan?),
Action<SendContext<TRequest>> callback = null
)
where TRequest : class
where TResponse : class
{
return
bus
.CreateRequestClient<TRequest, TResponse>
(
address,
timeout,
ttl,
context =>
{
context
.Headers
.Set
(
ClientTimeoutHeaderKey,
timeout.TotalSeconds.ToString(CultureInfo.InvariantCulture)
);
callback?.Invoke(context);
}
);
}
public static TimeSpan? GetClientTimeout(this ConsumeContext consumeContext)
{
string headerValue =
consumeContext
.Headers
.Get<string>(ClientTimeoutHeaderKey);
if (string.IsNullOrEmpty(headerValue))
{
return null;
}
double timeoutInSeconds;
if (double.TryParse(headerValue, NumberStyles.Any, CultureInfo.InvariantCulture, out timeoutInSeconds))
{
return TimeSpan.FromSeconds(timeoutInSeconds);
}
return null;
}
}
To use it, create the client using the new extension method:
var client =
mybus
.CreateRequestClientWithTimeoutHeader<IMyRequest, IMyResponse>
(
new Uri(serviceAddress),
TimeSpan.FromSeconds(10.0)
);
And here is a very simple example of a consumer using an Akka.NET actor, which implements the business logic (please note that the implementation is not complete):
public class MyReqRespProcessor : IConsumer<IMyRequest>
{
private readonly IActorRef _myActor;
public async Task Consume(ConsumeContext<IMyRequest> context)
{
TimeSpan? clientTimeout = context.GetClientTimeout();
var response = await
_myActor
.Ask<IMyResponse>(context.Message, clientTimeout ?? PredefinedTimeout)
.ConfigureAwait(false);
await
context
.RespondAsync<IMyResponse>(response)
.ConfigureAwait(false);
}
}
In a real scenario, with a lot of requests, the actor may be a router, configured according to the endpoint configuration (for example the prefetch count value).
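For example, something along these lines (a sketch only; the worker actor type and the prefetch value are placeholders, not taken from the code above):
// Hypothetical: size a round-robin pool of workers to match the
// receive endpoint's prefetch count.
int prefetchCount = 16;
IActorRef workers = actorSystem.ActorOf(
    Props.Create<MyRequestWorker>().WithRouter(new RoundRobinPool(prefetchCount)),
    "request-workers");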
I know this is not a perfect solution but it helps to give, on the server side, a measure of the max processing time.
In case of network delays, the client may receive the timeout before the actor stops processing the request. Anyway, the actor will work on the request for at most the time specified by the client. And this is what I wanted to achieve.
I am looking at using IObservable to get a response in a request-response environment within C# async methods, replacing some older callback-based code. However, I am finding that if a value is pushed to the observable (Subject.OnNext) before the code has reached the await for FirstAsync, then FirstAsync is never given that message.
Is there a straightforward way to make it work, without a 2nd task/thread plus synchronisation?
public async Task<ResponseMessage> Send(RequestMessage message)
{
var id = Guid.NewGuid();
var ret = Inbound.FirstAsync((x) => x.id == id).Timeout(timeout); // Never even gets invoked if response is too fast
await DoSendMessage(id, message);
return await ret; // Will sometimes miss the event/message
}
// somewhere else reading the socket in a loop
// may or may not be the thread calling Send
Inbound = subject.AsObservable();
while (cond)
{
...
subject.OnNext(message);
}
I can't simply put the await for the FirstAsync before I send the request, as that would prevent the request from being sent.
The await will subscribe to the observable. You can separate the subscription from the await by calling ToTask:
public async Task<ResponseMessage> Send(RequestMessage message)
{
var id = Guid.NewGuid();
var ret = Inbound.FirstAsync((x) => x.id == id).Timeout(timeout).ToTask();
await DoSendMessage(id, message);
return await ret;
}
I took a closer look and there is a very easy solution to your problem: just convert the hot observable into a cold one by replacing the Subject with a ReplaySubject. Here is the article: http://www.introtorx.com/content/v1.0.10621.0/14_HotAndColdObservables.html.
Here is the explanation:
The Replay extension method allows you to take an existing observable sequence and give it 'replay' semantics as per ReplaySubject. As a reminder, the ReplaySubject will cache all values so that any late subscribers will also get all of the values.
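Applied to the code in the question, the change is essentially just the subject type (a sketch; ResponseMessage is assumed from the question, and the replay window is an arbitrary choice to bound memory):
// A ReplaySubject buffers recent messages, so a FirstAsync subscriber that
// arrives after OnNext has already fired still receives them.
var subject = new ReplaySubject<ResponseMessage>(TimeSpan.FromSeconds(30));
Inbound = subject.AsObservable();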
I am working on a solution that uses web socket protocol to notify client (web browser) when some event happened on the server (MVC Core web app). I use Microsoft.AspNetCore.WebSockets nuget.
Here is my client-side code:
$(function () {
var socket = new WebSocket("ws://localhost:61019/data/openSocket");
socket.onopen = function () {
$(".socket-status").css("color", "green");
}
socket.onmessage = function (message) {
$("body").append(document.createTextNode(message.data));
}
socket.onclose = function () {
$(".socket-status").css("color", "red");
}
});
When this view is loaded the socket request is immediately sent to the MVC Core application. Here is the controller action:
[Route("data")]
public class DataController : Controller
{
[Route("openSocket")]
[HttpGet]
public ActionResult OpenSocket()
{
if (HttpContext.WebSockets.IsWebSocketRequest)
{
WebSocket socket = HttpContext.WebSockets.AcceptWebSocketAsync().Result;
if (socket != null && socket.State == WebSocketState.Open)
{
while (!HttpContext.RequestAborted.IsCancellationRequested)
{
var response = string.Format("Hello! Time {0}", System.DateTime.Now.ToString());
var bytes = System.Text.Encoding.UTF8.GetBytes(response);
Task.Run(() => socket.SendAsync(new System.ArraySegment<byte>(bytes),
WebSocketMessageType.Text, true, CancellationToken.None));
Thread.Sleep(3000);
}
}
}
return new StatusCodeResult(101);
}
}
This code works very well. WebSocket here is used exclusively for sending and doesn't receive anything. The problem, however, is that the while loop keeps holding the DataController thread until a cancellation request is detected.
Web socket here is bound to the HttpContext object. As soon as HttpContext for the web request is destroyed the socket connection is immediately closed.
Question 1: Is there any way that socket can be preserved outside of the controller thread?
I tried putting it into a singleton that lives in the MVC Core Startup class that is running on the main application thread. Is there any way to keep the socket open or establish connection again from within the main application thread rather than keep holding the controller thread with a while loop?
Even if it is deemed to be OK to hold up the controller thread for the socket connection to remain open, I cannot think of any good code to put inside the OpenSocket's while loop. What do you think about having a manual reset event in the controller and waiting for it to be set inside the while loop within the OpenSocket action?
Question 2: If it is not possible to separate HttpContext and WebSocket objects in MVC, what other alternative technologies or development patterns can be utilized to achieve socket connection reuse? If anyone thinks that SignalR or a similar library has some code allowing to have socket independent from HttpContext, please share some example code. If someone thinks there is a better alternative to MVC for this particular scenario, please provide an example, I do not mind switching to pure ASP.NET or Web API, if MVC does not have capabilities to handle independent socket communication.
Question 3: The requirement is to keep socket connection alive or be able to reconnect until explicit timeout or cancel request by the user. The idea is that some independent event happens on the server that triggers established socket to send data.
If you think that some technology other than web sockets would be more useful for this scenario (like HTML/2 or streaming), could you please describe the pattern and frameworks you would use?
P.S. Possible solution would be to send AJAX requests every second to ask if there was new data on the server. This is the last resort.
After lengthy research I ended up going with a custom middleware solution. Here is my middleware class:
public class SocketMiddleware
{
private static ConcurrentDictionary<string, SocketMiddleware> _activeConnections = new ConcurrentDictionary<string, SocketMiddleware>();
private string _packet;
private ManualResetEvent _send = new ManualResetEvent(false);
private ManualResetEvent _exit = new ManualResetEvent(false);
private readonly RequestDelegate _next;
public SocketMiddleware(RequestDelegate next)
{
_next = next;
}
public void Send(string data)
{
_packet = data;
_send.Set();
}
public async Task Invoke(HttpContext context)
{
if (context.WebSockets.IsWebSocketRequest)
{
string connectionName = context.Request.Query["connectionName"];
if (!_activeConnections.Any(ac => ac.Key == connectionName))
{
WebSocket socket = await context.WebSockets.AcceptWebSocketAsync();
if (socket == null || socket.State != WebSocketState.Open)
{
await _next.Invoke(context);
return;
}
Thread sender = new Thread(() => StartSending(socket));
sender.Start();
if (!_activeConnections.TryAdd(connectionName, this))
{
_exit.Set();
await _next.Invoke(context);
return;
}
while (true)
{
WebSocketReceiveResult result = await socket.ReceiveAsync(new ArraySegment<byte>(new byte[1]), CancellationToken.None);
if (result.CloseStatus.HasValue)
{
_exit.Set();
break;
}
}
SocketMiddleware dummy;
_activeConnections.TryRemove(connectionName, out dummy);
}
}
await _next.Invoke(context);
string data = context.Items["Data"] as string;
if (!string.IsNullOrEmpty(data))
{
string name = context.Items["ConnectionName"] as string;
SocketMiddleware connection;
_activeConnections.TryGetValue(name, out connection);
if (connection != null)
{
connection.Send(data);
}
}
}
private void StartSending(WebSocket socket)
{
WaitHandle[] events = new WaitHandle[] { _send, _exit };
while (true)
{
if (WaitHandle.WaitAny(events) == 1)
{
break;
}
if (!string.IsNullOrEmpty(_packet))
{
SendPacket(socket, _packet);
}
_send.Reset();
}
}
private void SendPacket(WebSocket socket, string packet)
{
byte[] buffer = Encoding.UTF8.GetBytes(packet);
ArraySegment<byte> segment = new ArraySegment<byte>(buffer);
Task.Run(() => socket.SendAsync(segment, WebSocketMessageType.Text, true, CancellationToken.None));
}
}
This middleware is going to run on every request. When Invoke is called it checks if it is a web socket request. If it is, the middleware checks if such connection was already opened and if it wasn't, the handshake is accepted and the middleware adds it to the dictionary of connections. It's important that the dictionary is static so that it is created only once during application lifetime.
Now if we stop here and move up the pipeline, HttpContext will eventually get destroyed and, since the socket is not properly encapsulated, it will be closed too. So we must keep the middleware thread running. It is done by asking socket to receive some data.
You may ask why we need to receive anything if the requirement is just to send. The answer is that it is the only way to reliably detect the client disconnecting. HttpContext.RequestAborted.IsCancellationRequested works only if you constantly send within the while loop. If you need to wait for some server event on a WaitHandle, the cancellation flag never becomes true. I tried waiting on HttpContext.RequestAborted.WaitHandle as my exit event, but it is never set either. So we ask the socket to receive something, and if that something sets CloseStatus.HasValue to true, we know the client disconnected. If we receive anything else (the client-side code can't be trusted), we ignore it and start receiving again.
Sending is done in a separate thread. The reason is the same, it's not possible to detect disconnection if we wait on the main middleware thread. To notify the sender thread that client disconnected we use _exit synchronization variable. Remember, it is fine to have private members here since SocketMiddleware instances are saved in a static container.
Now, how do we actually send anything with this setup? Let's say an event occurs on the server and some data becomes available. For simplicity's sake, let's assume this data arrives inside a normal HTTP request to some controller action. SocketMiddleware will run for every request, but since it is not a web socket request, _next.Invoke(context) is called and the request reaches the controller action, which may look something like this:
[Route("ProvideData")]
[HttpGet]
public ActionResult ProvideData(string data, string connectionName)
{
if (!string.IsNullOrEmpty(data) && !string.IsNullOrEmpty(connectionName))
{
HttpContext.Items.Add("ConnectionName", connectionName);
HttpContext.Items.Add("Data", data);
}
return Ok();
}
The controller populates the Items collection, which is used to share data between components. Then the pipeline returns to the SocketMiddleware again, where we check whether there is anything interesting inside context.Items. If there is, we select the respective connection from the dictionary and call its Send() method, which sets the data string, sets the _send event, and allows a single run of the while loop inside the sender thread.
And voila, we have a socket connection that sends on a server-side event. This example is very primitive and is there just to illustrate the concept. Of course, to use this middleware you will need to add the following lines to your Startup class before you add MVC:
app.UseWebSockets();
app.UseMiddleware<SocketMiddleware>();
The code is very strange, and hopefully we'll be able to write something much nicer when SignalR for .NET Core is finally out. Hopefully this example will be useful for someone. Comments and suggestions are welcome.
In my .NET 4.0 library I have a piece of code that sends data over the network and waits for a response. In order to not block the calling code the method returns a Task<T> that completes when the response is received so that the code can call the method like this:
// Send the 'message' to the given 'endpoint' and then wait for the response
Task<IResult> task = sender.SendMessageAndWaitForResponse(endpoint, message);
task.ContinueWith(
t =>
{
// Do something with t.Result ...
});
The underlying code uses a TaskCompletionSource so that it can wait for the response message without having to spin up a thread only to have it sit there idling until the response comes in:
private readonly Dictionary<int, TaskCompletionSource<IResult>> m_TaskSources
= new Dictionary<int, TaskCompletionSource<IResult>>();
public Task<IResult> SendMessageAndWaitForResponse(int endpoint, object message)
{
var source = new TaskCompletionSource<IResult>(TaskCreationOptions.None);
m_TaskSources.Add(endpoint, source);
// Send the message here ...
return source.Task;
}
When the response is received it is processed like this:
public void CompleteWaitForResponseResponse(int endpoint, IResult value)
{
if (m_TaskSources.ContainsKey(endpoint))
{
var source = m_TaskSources[endpoint];
source.SetResult(value);
m_TaskSources.Remove(endpoint);
}
}
Now I want to add a time-out so that the calling code won't wait indefinitely for the response. However on .NET 4.0 that is somewhat messy because there is no easy way to time-out a task. So I was wondering if Rx would be able to do this easier. So I came up with the following:
private readonly Dictionary<int, Subject<IResult>> m_SubjectSources
= new Dictionary<int, Subject<IResult>>();
private Task<IResult> SendMessageAndWaitForResponse(int endpoint, object message, TimeSpan timeout)
{
var source = new Subject<IResult>();
m_SubjectSources.Add(endpoint, source);
// Send the message here ...
return source.Timeout(timeout).ToTask();
}
public void CompleteWaitForResponseResponse(int endpoint, IResult value)
{
if (m_SubjectSources.ContainsKey(endpoint))
{
var source = m_SubjectSources[endpoint];
source.OnNext(value);
source.OnCompleted();
m_SubjectSources.Remove(endpoint);
}
}
This all seems to work without issue, however I've seen several questions stating that Subject should be avoided so now I'm wondering if there is a more Rx-y way to achieve my goal.
The advice to avoid using Subject in Rx is often overstated. There has to be a source for events in Rx, and it's fine for it to be a Subject.
The issue with Subject generally arises when it is used in between two Rx queries that could otherwise be joined, or where there is already a well-defined conversion to IObservable<T> (such as Observable.FromEventXXX or Observable.FromAsyncXXX, etc.).
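For instance (illustrative only; button and FetchAsync are placeholders), those built-in conversions already give you an IObservable<T> without a hand-rolled Subject:
// Wrap an existing .NET event...
var clicks = Observable.FromEventPattern(
    h => button.Click += h,
    h => button.Click -= h);

// ...or an existing asynchronous call.
var responses = Observable.FromAsync(() => FetchAsync());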
If you want, you can do away with the Dictionary and multiple Subjects with the approach below. This uses a single subject and returns a filtered query to the client.
It's not "better" per se, Whether this makes sense will depend on the specifics of your scenario, but it saves spawning lots of subjects, and gives you a nice option for monitoring all results in a single stream. If you were dispatching results serially (say from a message queue) this could make sense.
// you only need to synchronize if you are receiving results in parallel
// (not readonly, because it is re-created after a remote error - see below)
private ISubject<Tuple<int, IResult>> results =
    Subject.Synchronize(new Subject<Tuple<int, IResult>>());
private Task<IResult> SendMessageAndWaitForResponse(
int endpoint, object message, TimeSpan timeout)
{
// your message processing here, I'm just echoing a second later
Task.Delay(TimeSpan.FromSeconds(1)).ContinueWith(t => {
CompleteWaitForResponseResponse(endpoint, new Result { Value = message });
});
return results.Where(r => r.Item1 == endpoint)
.Select(r => r.Item2)
.Take(1)
.Timeout(timeout)
.ToTask();
}
public void CompleteWaitForResponseResponse(int endpoint, IResult value)
{
results.OnNext(Tuple.Create(endpoint,value));
}
Where I defined a class for results like this:
public class Result : IResult
{
public object Value { get; set; }
}
public interface IResult
{
object Value { get; set; }
}
EDIT - In response to additional questions in the comments.
No need to dispose of the single Subject - it won't leak and will be garbage collected when it goes out of scope.
ToTask does accept a cancellation token - but that's really for cancellation from the client side.
If the remote side disconnects, you can send the error to all clients with results.OnError(exception); - you'll want to instantiate a new subject instance at the same time.
Something like:
private void OnRemoteError(Exception e)
{
results.OnError(e);
}
This will manifest as a faulted task to all clients in the expected manner.
It's pretty thread safe too because clients subscribing to a subject that has previously sent OnError will get an error back immediately - it's dead from that point. Then when ready you can reinitialise with:
private void OnInitialiseConnection()
{
// ... your connection logic
// reinitialise the subject...
results = Subject.Synchronize(new Subject<Tuple<int,IResult>>());
}
For individual client errors, you could consider:
Extending your IResult interface to include errors as data
You can then optionally project this to a fault for just that client by extending the Rx query in SendMessageAndWaitForResponse. For example, add an Exception and a HasError property to IResult so that you can do something like:
return results.Where(r => r.Item1 == endpoint)
.SelectMany(r => r.Item2.HasError
? Observable.Throw<IResult>(r.Item2.Exception)
: Observable.Return(r.Item2))
.Take(1)
.Timeout(timeout)
.ToTask();
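For reference, the extended interface implied by that query might look like this (a sketch; the two extra members are assumptions based on the snippet above):
public interface IResult
{
    object Value { get; set; }

    // Added so a per-client failure can travel as data and be projected
    // back into an Rx error for just that client.
    bool HasError { get; }
    Exception Exception { get; }
}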