Using SignalR scaleout, how can I broadcast a message from a client to all the servers attached to my backplane? I thought this would work by default; however, only one server's hub is receiving the message.
Setup: I have 4 virtual machines behind a load balancer and I am using SignalR with a Redis backplane. I have the following hub:
public class ProgressHub : Hub
{
public void StartProcessing(string clientId)
{
// ...
}
}
And on the client side, I am invoking this method with:
$.connection.hub.start().done(function() {
proghub.server.startProcessing(me.clientId);
});
I've enabled tracing on the Message bus and the message is received on all the servers:
SignalR.ScaleoutMessageBus Information: 0 : OnReceived(0, 54, 1)
However, the Hub method is invoked on only one server. How can I make this call execute the StartProcessing method on all servers?
It is not possible out of the box: the backplane distributes messages to connected clients, it does not fan out hub method invocations to every server. The best approach is to add some synchronization mechanism between the servers yourself. Since Redis is already used as the backplane, it can also serve as that mechanism, for example via its pub/sub support.
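If the goal is simply that every server reacts when a client calls StartProcessing, a minimal sketch using StackExchange.Redis pub/sub could look like the following; the channel name, the connection string, and the ProcessLocally helper are illustrative assumptions, not part of the original setup:
using Microsoft.AspNet.SignalR;
using StackExchange.Redis;

public class ProgressHub : Hub
{
    // Reuse one multiplexer per process (the connection string is an assumption).
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("your-redis-host:6379");

    // Call once per server at startup so every node listens for processing requests.
    public static void SubscribeToProcessingRequests()
    {
        Redis.GetSubscriber().Subscribe("start-processing", (channel, clientId) =>
        {
            ProcessLocally(clientId); // hypothetical per-server handler
        });
    }

    public void StartProcessing(string clientId)
    {
        // Publish instead of doing the work directly; every subscribed server picks it up.
        Redis.GetSubscriber().Publish("start-processing", clientId);
    }

    private static void ProcessLocally(string clientId)
    {
        // ... each server's local work for this client goes here ...
    }
}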
How can I detect whether the message broker configuration is valid, or whether the connection to the message broker has been lost, when using MassTransit with RabbitMQ? When publishing messages, MassTransit does not seem to complain right away if there is no broker connection, and it seems to recover when the RabbitMQ server comes up. Is there a way to listen in on the connection events and warn if the configuration is not valid?
If you use .NET Core and configure MassTransit as per the docs, you can resolve the IBusHealth instance and use it in your service.
The AddMassTransit method registers the default instance, which you can ask for the bus health status at any time. Here is the CheckHealth method's code:
public HealthResult CheckHealth()
{
var endpointHealthResult = _endpointHealth.CheckHealth();
var data = new Dictionary<string, object> {["Endpoints"] = endpointHealthResult.Data};
return _healthy && endpointHealthResult.Status == BusHealthStatus.Healthy
? HealthResult.Healthy("Ready", data)
: HealthResult.Unhealthy($"Not ready: {_failureMessage}", data: data);
}
As you can see, if you call busHealth.CheckHealth() it will return either Healthy or Unhealthy, and in the latter case it will also give you the list of failing endpoints.
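A rough usage sketch, assuming a MassTransit 7.x-style registration and a Microsoft.Extensions.DependencyInjection container (exact namespaces and the logging are illustrative):
using System;
using MassTransit;
using Microsoft.Extensions.DependencyInjection;

public static class BusHealthCheck
{
    // serviceProvider is whatever container AddMassTransit was registered in.
    public static void LogBusHealth(IServiceProvider serviceProvider)
    {
        var busHealth = serviceProvider.GetRequiredService<IBusHealth>();
        var result = busHealth.CheckHealth();

        if (result.Status != BusHealthStatus.Healthy)
        {
            // Description should carry the "Not ready: ..." text built in the method above.
            Console.WriteLine($"Bus is not healthy: {result.Description}");
        }
    }
}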
Since BusHealth only monitors the bus itself and all its receive endpoints, you might not get notified when your service fails to publish messages.
You can use the diagnostics listener or create your own publish or send observer, which is called before and after publish/send and on any failure.
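For the publish side, a minimal sketch of such a publish observer might look like this; IPublishObserver is MassTransit's interface, while the console logging is just illustrative:
using System;
using System.Threading.Tasks;
using MassTransit;

// Called around every Publish; PublishFault fires when a publish fails,
// which is exactly the case BusHealth alone will not surface.
public class PublishFailureObserver : IPublishObserver
{
    public Task PrePublish<T>(PublishContext<T> context) where T : class
        => Task.CompletedTask;

    public Task PostPublish<T>(PublishContext<T> context) where T : class
        => Task.CompletedTask;

    public Task PublishFault<T>(PublishContext<T> context, Exception exception) where T : class
    {
        Console.WriteLine($"Publishing {typeof(T).Name} failed: {exception.Message}");
        return Task.CompletedTask;
    }
}

// Attach it once the bus is available:
// busControl.ConnectPublishObserver(new PublishFailureObserver());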
I'm trying to build a system consisting of one main server and at least two gRPC services (hosted using HTTP.Sys) that can both be clients of the main server and accept connections from it; I will refer to these as the "workers". Every element of the system is inside a local network. Once started, a worker connects to the main server, and the main server should add the connection to a pool from which the worker can be reached later to make gRPC calls. Below is an example of how I'm expecting this to work:
public class MainServer : MainServer.MainServerGrpcBase
{
public List<GrpcChannel> ChannelPool { get; set; }
public MainServer()
{
ChannelPool = new List<GrpcChannel>();
}
public override async Task<Confirmation> Connect(ConnectionRequest request, ServerCallContext context)
{
ChannelPool.Add(context.Channel);
return new Confirmation { Success = true };
}
// Having a channel pool allows me to use methods like this.
public void BroadcastMessage(string message)
{
foreach(var channel in ChannelPool)
{
var client = new Worker.WorkerClient(channel);
client.SendMessage(message);
}
}
}
What I have tried:
Determining the worker's IP on the client side and sending it to the main server, then creating a channel using GrpcChannel.ForAddress(...). As a result, I discovered that I can't acquire an appropriate worker address to create a channel from.
Creating a channel from ServerCallContext.Peer; that property returns a strange IPv6 address that is not suitable for creating a channel.
TL;DR: I need a gRPC server able to store client gRPC services for further use, like broadcasting messages.
If I understood your problem correctly, you are trying to send messages from the server to a client after the client's original request has already been processed. This is not possible with plain request/response calls, because HTTP allows sending requests from a client to a server and not the other way around. Since gRPC uses HTTP/2, you need another approach to solve your problem.
Consider creating a subscription service. It's a server-streaming service: the client sends one message to the server, and the server can then send a stream of response messages whenever needed. Check out this post for more info:
https://stackoverflow.com/a/52939764/9742876
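A rough sketch of that idea in C#: each worker opens a long-lived Subscribe call, and the main server pushes messages down the open streams instead of dialing the workers back. The proto messages (SubscriptionRequest, ServerMessage), the WorkerId field, and the generated base class name are assumptions here:
using System.Collections.Concurrent;
using System.Threading.Channels;
using System.Threading.Tasks;
using Grpc.Core;

// Assumed proto:
//   rpc Subscribe (SubscriptionRequest) returns (stream ServerMessage);
public class MainServerService : MainServer.MainServerBase
{
    // One in-memory queue per connected worker; the stream stays open until the worker drops.
    private static readonly ConcurrentDictionary<string, Channel<ServerMessage>> Subscribers =
        new ConcurrentDictionary<string, Channel<ServerMessage>>();

    public override async Task Subscribe(SubscriptionRequest request,
        IServerStreamWriter<ServerMessage> responseStream, ServerCallContext context)
    {
        var queue = System.Threading.Channels.Channel.CreateUnbounded<ServerMessage>();
        Subscribers[request.WorkerId] = queue;
        try
        {
            // Keep the call open and forward queued messages to this worker.
            await foreach (var message in queue.Reader.ReadAllAsync(context.CancellationToken))
            {
                await responseStream.WriteAsync(message);
            }
        }
        finally
        {
            Subscribers.TryRemove(request.WorkerId, out _);
        }
    }

    // "Broadcasting" becomes writing to every open subscription stream.
    public static void BroadcastMessage(string text)
    {
        foreach (var queue in Subscribers.Values)
        {
            queue.Writer.TryWrite(new ServerMessage { Text = text });
        }
    }
}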
I have an Azure Cloud Service with a worker role that starts an OWIN web app on startup, which uses SignalR.
Separately, I have a console project that uses the SignalR client library to connect to this worker role and listen for events.
Everything is working when I run the client and the service locally using the Azure emulators.
When I publish the cloud service and point the console application to it and try to connect, I get the following in the SignalR trace logs:
WS Connecting to: ws://myapp.cloudapp.net/signalr/connect?clientProtocol=1.4&transport=webSockets&connectionData=[{"Name":"MessageBusHub"}]&connectionToken=...
OnError(System.Net.WebSockets.WebSocketException (0x80004005): An internal WebSocket error occurred. Please see the innerException, if present, for more details. ---> System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host
It then proceeds to try again using server sent events and long polling with the same error each time.
I'm using the following endpoint in my Cloud service config:
<Endpoints>
<InputEndpoint name="SignalREndpoint" protocol="http" port="80" localPort="80" />
</Endpoints>
And here is how I create my OWIN web app:
var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["SignalREndpoint"];
string webAppUrl = $"{endpoint.Protocol}://{endpoint.IPEndpoint}";
_webApp = WebApp.Start<Startup>(webAppUrl);
Finally, here's how I configure SignalR:
public class Startup
{
public void Configuration(IAppBuilder app)
{
app.UseCors(CorsOptions.AllowAll);
app.UseServerAuthentication();
GlobalHost.DependencyResolver.UseServiceBus(CloudConfigurationManager.GetSetting("ServiceBusConnectionString"), "SignalRMessageBus");
app.MapSignalR(new HubConfiguration()
{
EnableDetailedErrors = true,
});
}
}
In the client project I am simply using a HubConnection to connect, using http://localhost:80 for local testing and http://myapp.cloudapp.net for connecting to the cloud instance.
I'm not sure what's different between the actual Azure instance and my local emulator that's causing it to not work in the cloud.
Interestingly, if I use the browser to connect to the URL http://myapp.cloudapp.net/signalr/hubs, it works and returns the JS proxy file.
Have you tried using TCP instead of HTTP as a protocol?
I am not a SignalR expert in any way, but I know about it. When we host our server (XSockets.NET) on Azure worker roles we configure the protocol to be TCP (not HTTP).
Have no idea why it would work on localhost though.
Another thing to consider is whether the worker role supports WebSockets. SignalR requires IIS 8+ for WebSocket support, and I have no idea if you have access to that in a worker role. There are no options in Azure to turn WebSockets on or off for a worker role (from what I can see), so my guess is that Microsoft WebSockets is not available in the worker role. But I might be wrong here!
EDIT: I looked at one of my instances and saw that I can change the OS and that the default is Windows Server 2012, so Microsoft WebSockets should be available!
I'm currently using the .NET 4.5 WebSocket package to support a WebSocket service on Windows Server 2012.
Using the WebSocketCollection object, I'm successfully able to broadcast a message to all the clients.
private static WebSocketCollection m_clients = new WebSocketCollection();
m_clients.Broadcast("Hello all");
How can I be sure that all clients have received the broadcast message? If some clients aren't able to receive it, how can I track those errors? What kind of error handling mechanism do I need to use?
There is an OnError virtual function, but I'm not sure how it behaves when a broadcast message fails.
public virtual void OnError();
Could you not get the clients to "acknowledge" receipt of a packet? If a client does not acknowledge the packet, then chances are it wasn't received correctly.
If you just want to keep the connection alive, send a "ping" message to the client before the timeout. Likewise, you could code the client to ping the server at a scheduled time; if no ping is received, that could indicate a problem.
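One way to sketch that acknowledgement idea, assuming the Microsoft.Web.WebSockets-style WebSocketHandler/WebSocketCollection API from the question; the message format and the bookkeeping here are purely illustrative:
using System;
using System.Collections.Concurrent;
using Microsoft.Web.WebSockets;

public class AckingHandler : WebSocketHandler
{
    private static readonly WebSocketCollection m_clients = new WebSocketCollection();

    // messageId -> clients that still owe an acknowledgement (our own bookkeeping).
    private static readonly ConcurrentDictionary<string, ConcurrentDictionary<AckingHandler, byte>> Pending =
        new ConcurrentDictionary<string, ConcurrentDictionary<AckingHandler, byte>>();

    // Our own list of live handlers, so we know who should acknowledge each broadcast.
    private static readonly ConcurrentDictionary<AckingHandler, byte> Connected =
        new ConcurrentDictionary<AckingHandler, byte>();

    public override void OnOpen()
    {
        m_clients.Add(this);
        Connected[this] = 0;
    }

    public override void OnClose()
    {
        Connected.TryRemove(this, out _);
    }

    // Broadcast with an id that clients are expected to echo back as their ack.
    public static string BroadcastWithAck(string text)
    {
        var id = Guid.NewGuid().ToString("N");
        Pending[id] = new ConcurrentDictionary<AckingHandler, byte>(Connected);
        m_clients.Broadcast(id + "|" + text);
        return id; // later, Pending[id] still lists the clients that never acknowledged
    }

    public override void OnMessage(string message)
    {
        // Treat a bare id coming back from a client as its acknowledgement.
        if (Pending.TryGetValue(message, out var waiting))
            waiting.TryRemove(this, out _);
    }
}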
I would like to run a WebSocket server off a worker role in Azure.
This works fine locally on the emulator, but there is a windows firewall prompt the first time the socket server runs.
I'm wondering if anyone would know how to overcome the connection issues with regards to sockets on Azure.
My socket server implementation: OnStart
var server = new WebSocketServer("ws://theappname.cloudapp.net:8080/");
server.Start(socket =>
{
socket.OnOpen = () =>
{
Trace.WriteLine("Connected to " + socket.ConnectionInfo.ClientIpAddress,"Information");
_sockets.Add(socket);
};
});
.... etc
The client implementation:
var socket = new WebSocket("ws://theappname.cloudapp.net:8080");
socket.onopen = function () {
status.html("Connection Opened");
};
socket.onclose = function () {
status.html("Connection Closed");
}
The status changes to closed a few seconds after loading the page.
My endpoint for the worker role below:
Name: WebSocket, Type: Input, Protocol: http, Public Port: 8080, Local Port: <Not Set>
I have now tried to bind to the internal IP address using the following:
RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["WebSocket"].IPEndpoint.ToString();
SOLUTION
For the sake of anyone else facing this when implementing WebSockets on Azure:
Your firewall will probably deny the connection if it is not on port 80 or 8080, so create a separate deployment for it.
The endpoint must be set to TCP and not HTTP for the correct firewall rules to be created.
Just for the sake of trial, why don't you change your input endpoint from the "http" to the "tcp" protocol, and explicitly set the local port to 8080 (which in your case is not set)? Also keep in mind that the Windows Azure load balancer will kill any connection that is idle for more than 60 seconds, so you might want to implement some kind of "ping" solution to keep the connection open.
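A minimal keep-alive sketch, assuming the Fleck-style server from the question and a _sockets list populated in OnOpen; the interval and the "ping" payload are arbitrary choices:
using System;
using System.Collections.Generic;
using System.Threading;
using Fleck;

public class WorkerRole // : RoleEntryPoint in the real project
{
    private static readonly List<IWebSocketConnection> _sockets = new List<IWebSocketConnection>();
    private static Timer _keepAliveTimer;

    // Call this after server.Start(...) in OnStart.
    private static void StartKeepAlive()
    {
        _keepAliveTimer = new Timer(_ =>
        {
            // Copy the list so Add/Remove from OnOpen/OnClose doesn't break the loop.
            foreach (var socket in _sockets.ToArray())
            {
                try
                {
                    socket.Send("ping"); // clients can simply ignore this frame
                }
                catch
                {
                    // A failed send usually means the socket is already gone; drop it here if you track removals.
                }
            }
        }, null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
    }
}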
You might want to take a look at this introductory video that Steve Marx (#smarx) put together on how to run node.js on Windows Azure.