Multi Client/Server Discovery in C#

I'm developing a multiple-client/multiple-server program in C#, and before I get down to the nitty-gritty, I was wondering if anyone has ever worked on a similar project and might be able to share their tips/ideas for implementation.
The servers will sit on many PCs, and listen for incoming connections from clients (Or should the Servers broadcast, and the clients listen?).
When a client starts, it should populate a list of potential server IP addresses automatically.
When a server closes, the client should remove that server from its list.
When a new server starts, the clients should be notified and have it added to their list.
A server may also act as a client, and should be able to see itself, as well as all other servers.
A message sent from a client to the server, that affects the server, should broadcast the change to all connected clients.
Should my server be a Windows Service? What advantages/disadvantages does that present?
Any ideas on how I might go about getting started on this? I've been looking into UDP Multicast, and LAN Scans. I'm using C# and .NET 4.0
EDIT: Found this: http://code.google.com/p/lidgren-network-gen3/ Does anyone have any experience with it and can recommend/not recommend it?

I would suggest NetPeerTcpBinding WCF communications to create a peer mesh. Clients and servers would all join the mesh using a peer resolver. You can use PNRP or create a custom peer resolver (.NET actually provides you with an implementation called CustomPeerResolverService). See the Peer to Peer Networking documentation.
You can also implement a discovery service using DiscoveryProxy. With a discovery service, services can announce their endpoints, and the discovery service can then answer find requests (see FindCriteria), returning the endpoints that match. This is referred to as Managed Discovery. The other mode is Ad Hoc Discovery: each service announces its endpoints via UDP, and discovery clients probe the network for them.
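For the ad hoc mode, much of this is built into WCF 4.0. Below is a minimal sketch, assuming a placeholder IChatService contract; it needs a reference to System.ServiceModel.Discovery. The service adds a UdpDiscoveryEndpoint so it answers probes, and the client uses DiscoveryClient to populate its server list:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Discovery;

    [ServiceContract]
    public interface IChatService
    {
        [OperationContract]
        void Ping();
    }

    public class ChatService : IChatService
    {
        public void Ping() { }
    }

    class DiscoveryDemo
    {
        static void Main()
        {
            // Server side: host the service and make it discoverable via UDP multicast.
            var host = new ServiceHost(typeof(ChatService),
                new Uri("net.tcp://localhost:9000/chat"));
            host.AddServiceEndpoint(typeof(IChatService), new NetTcpBinding(), string.Empty);
            host.Description.Behaviors.Add(new ServiceDiscoveryBehavior()); // answer probes
            host.AddServiceEndpoint(new UdpDiscoveryEndpoint());            // standard discovery endpoint
            host.Open();

            // Client side: probe the LAN for every IChatService endpoint.
            var discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());
            FindResponse found = discoveryClient.Find(new FindCriteria(typeof(IChatService)));
            foreach (EndpointDiscoveryMetadata ep in found.Endpoints)
                Console.WriteLine(ep.Address); // populate the client's server list
        }
    }

Adding announcement endpoints (see AnnouncementService) on top of this lets clients be notified when a service comes online or goes offline, which maps directly onto the add/remove-from-list requirements in the question.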
I have actually implemented a Managed Discovery service in combination with peer-to-peer WCF networking to provide a redundant mesh of discovery services that all share published service endpoints via P2P. In my experience Managed Discovery performs far better: Ad Hoc Discovery's UDP probing is slower and has trouble crossing some network boundaries, while Managed Discovery leverages a centralized repository of announced service endpoints.
Either/both technologies I think can lead to your solution.

So this is effectively a peer-to-peer style network (almost like BitTorrent), where all servers are clients but not all clients are servers, and the requirement is that every client holds a list of all the servers (which are, in turn, clients).
The problem lies in getting the server IPs to the clients in the first place. You can use a master server that has a fixed DNS to act as a kind of tracker, which all of the servers check in to, and the clients check periodically.
Another option (or an additional method) is to use a peer-exchange-style system, where the clients and servers use UDP broadcast packets over a local network to discover each other and then exchange the servers they know of, kind of like a routing protocol. However, if the PCs are spread out over a non-local network such as the internet, there's little chance they will ever discover each other on their own, making this method useful only in conjunction with other ways of finding servers. Also, you will probably have to deal with router UPnP to let clients connect to each other through each other's router NAT, so this method is probably too complex for the gains you get. (However, if you're just on a LAN, this is all you need!)
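If the LAN-only route is enough, the broadcast part is only a few lines with UdpClient. A rough sketch (the port number and message strings are arbitrary choices for illustration):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class UdpDiscovery
    {
        const int Port = 8888;

        // Server side: listen for probes and reply so the client learns our address.
        public static void RunResponder()
        {
            using (var listener = new UdpClient(Port))
            {
                while (true)
                {
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    byte[] request = listener.Receive(ref remote);
                    if (Encoding.UTF8.GetString(request) == "DISCOVER")
                    {
                        byte[] reply = Encoding.UTF8.GetBytes("SERVER_HERE");
                        listener.Send(reply, reply.Length, remote); // remote is the probing client
                    }
                }
            }
        }

        // Client side: broadcast a probe, then collect replies for a short window.
        public static void Probe()
        {
            using (var client = new UdpClient { EnableBroadcast = true })
            {
                byte[] probe = Encoding.UTF8.GetBytes("DISCOVER");
                client.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, Port));

                client.Client.ReceiveTimeout = 2000; // stop collecting after 2 seconds
                try
                {
                    while (true)
                    {
                        var server = new IPEndPoint(IPAddress.Any, 0);
                        client.Receive(ref server);
                        Console.WriteLine("Found server at " + server.Address);
                    }
                }
                catch (SocketException) { /* timeout: done collecting replies */ }
            }
        }
    }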
A third option (and again, this sounds a lot like torrent technology), is to use Distributed Hash Tables to store information about the IPs of your servers in the cloud, without having to rely on a central master server.
I have had a shot at a project like this before (a pure P2P, server-less messaging system) but could never get it to work. Without a huge number of peers, or a master server to track all of the other servers, it is very difficult to reliably retrieve the IPs of all the servers.

Related

Secure requests to Windows Application behind NAT

I have a Windows application that will be deployed on multiple PCs in different networks. This application needs to launch some actions upon receiving an appropriate request from an external service.
For this, I use an HttpListener that waits for requests and performs the required actions.
The issue is with NAT and security. When the Windows application starts, it needs to tell the external service that it's alive and how it can be reached (being behind NAT, this is not trivial; is some kind of tunneling needed?). When the external service needs something to be executed on the Windows application, it sends a request, and the application should carry out the actions and send a response back.
What is the best way to expose my Windows application behind NAT to the external service (tunneling?), and how do I make it secure (HTTPS?)? Or is there perhaps a better solution for this kind of remote call (RPC?)?
Sending HTTP requests to clients behind NAT means that you have to manually create a forwarding rule in the NAT router for each client, mapping a port on the external IP to a fixed internal IP.
To make it secure, you have two options:
Use TLS (HTTPS): each client needs a certificate and must accept HTTPS.
Leave the requests insecure but keep them in a "secure environment". This could be done with a VPN connection between your server and the NAT network (while defining the local network as secure).
Such a setup works fine if you want to run a server application (one host) and are willing to invest some time. It will typically shorten your life if you try to do that with a large number of client applications in a company network.
There are technologies available to send messages to clients without having to configure anything on the routers. For .NET there is SignalR, which combines multiple transport approaches and uses the best one available: https://dotnet.microsoft.com/apps/aspnet/signalr
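To make the idea concrete, here is a rough sketch of the client side, assuming an ASP.NET Core SignalR hub at a hypothetical URL; the hub method names ("Register", "Execute", "Report") are made up for illustration. The key point is that the application dials out, so no NAT or router configuration is needed, and HTTPS secures the channel:

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.SignalR.Client;

    class SignalRWorker
    {
        static async Task Main()
        {
            var connection = new HubConnectionBuilder()
                .WithUrl("https://external.example.com/hubs/commands") // outbound HTTPS only
                .WithAutomaticReconnect()
                .Build();

            // Server-initiated call: execute the requested action locally.
            connection.On<string>("Execute", async action =>
            {
                Console.WriteLine("Running: " + action);
                // ... perform the action, then report the result back ...
                await connection.InvokeAsync("Report", action, "done");
            });

            await connection.StartAsync();
            await connection.InvokeAsync("Register", Environment.MachineName); // announce "I'm alive"
            await Task.Delay(-1); // keep the process (and the connection) alive
        }
    }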

Commands between two applications in different networks

I want to send commands from one application (e.g. running on mobile device) to another application (e.g. running on embedded device) which is located in a different network.
I don't want to use VPN or something like port forwarding. So after some research I found some other ways to do that, for example via a cloud messaging service like Azure Service Bus.
Sending commands/messages from the first application to the service bus is not a problem for me. But I don't really understand how to get a connection from the cloud service to the second device. I know I can also send a message from the second device to a cloud service, e.g. via HTTPS, and then the cloud service can keep that connection alive. As long as the connection is alive, I can send messages to the second device.
But there are some points I can't understand:
When I have thousands of devices, isn't it a problem to keep thousands of connections alive?
How can the second device listen on the connection for new messages? Doesn't that require too many resources on the embedded device?
I also read about using "long polling" techniques and WebSockets. I know too little to understand what the advantages and disadvantages of those concepts are. Which technique should I use for my problem?
To be more platform agnostic, I don't want to use services like Azure IoT Hub.
Edit:
Maybe I can use a web service and implement an MQTT broker?
I think the mentioned MQTT broker will get you there, especially as your use case is exactly what MQTT and its implementations (brokers and clients) have been built for.
The simplified story is the following:
An MQTT client running in your application 'publishes' an MQTT message under a 'topic' (think routing key) to the MQTT broker. MQTT clients running on your devices hold a subscription to the same 'topic' on the broker. This enables the broker to route the message from the application to the devices without them having to know about each other.
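As a rough illustration of both sides, here is a sketch using the MQTTnet client library (my choice for the example, since you are likely on .NET; Paho has equivalents). The broker address, client id, and topic are placeholders, and MQTTnet's API differs between major versions (this follows v3):

    using System;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;
    using MQTTnet;
    using MQTTnet.Client;
    using MQTTnet.Client.Options;

    class DeviceSide
    {
        static async Task Main()
        {
            var client = new MqttFactory().CreateMqttClient();

            var options = new MqttClientOptionsBuilder()
                .WithTcpServer("broker.example.com", 1883)
                .WithClientId("device-42")
                .WithCleanSession(false) // persistent session: broker queues messages while offline
                .Build();

            client.UseApplicationMessageReceivedHandler(e =>
            {
                string command = Encoding.UTF8.GetString(e.ApplicationMessage.Payload);
                Console.WriteLine("Command received: " + command);
            });

            await client.ConnectAsync(options, CancellationToken.None);

            // QoS 1 ("at least once"): the broker redelivers until acknowledged.
            await client.SubscribeAsync(new MqttTopicFilterBuilder()
                .WithTopic("devices/device-42/commands")
                .WithAtLeastOnceQoS()
                .Build());

            await Task.Delay(-1);
        }

        // Application side: publish a command to the device's topic.
        static Task SendCommandAsync(IMqttClient connected)
        {
            return connected.PublishAsync(new MqttApplicationMessageBuilder()
                .WithTopic("devices/device-42/commands")
                .WithPayload("reboot")
                .WithAtLeastOnceQoS()
                .Build());
        }
    }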
As far as I understand your question your concerns are the following:
can all the devices be connected at the same time (thousands of open TCP connections) and therefore receive messages published from your first application via the broker in 'real time'.
assuming the devices will disconnect for whatever reason, e.g. due to network problems or to decrease energy consumption, how is it ensured that the devices will eventually receive the messages.
how will the devices connect to the broker.
Regarding 1: MQTT brokers are built to handle (and keep) a massive number of TCP connections. For example VerneMQ, an MQTT broker I can speak for, as I am one of the core devs, is able to handle over a million connections on one node (with proper server configuration it's actually mainly a matter of available RAM). However, we'd only recommend such a setup if the devices are mostly sleeping. Using VerneMQ you can also add more nodes to the cluster and balance the connections among all your cluster nodes.
Regarding 2: An MQTT broker typically implements offline storage for messages that haven't been sent out to a client or haven't been acknowledged by a client. This allows your device to go offline for hours and receive the messages upon reconnect.
Regarding 3: This is specific to your use case. In the simplest case you configure a fixed IP:port on every device, and the MQTT client running on the device uses it to connect to the broker. Depending on your ability to reconfigure the devices, it may make sense to use DNS lookups, or even to provide a 'backchannel' for reconfiguration.
For standards-compliant MQTT client software, have a look at Eclipse Paho. For an up-to-date list of available MQTT brokers, consult the list of MQTT brokers.

Automatically forward ports from clients

My system has a server and multiple clients. The server has a service for the clients, and each client has a service too, for talking to other clients.
I forward the server's service port manually on the router, but a future client cannot do that by itself after installation.
Is there a way to forward ports automatically, by code from the client side, during the installation?
My main question is: is this approach wise? Does the system need to be built differently?
Project details:
C# - WCF, Communication - NetTcpBinding.
The server is on my computer (Home network). Server's service port : 8080.
The clients can be installed everywhere. Client's service port: 8081.
*I'm not familiar with IIS; can it help in this scenario?
The model you're describing sounds like a mesh network; generally you do not want clients to forward ports, be it automatically or not.
If it's absolutely necessary you could implement UPnP, there is an elaborate article here describing how to do so in .NET with a library. Note that you will have to select a different port.
I would strongly recommend going for a different option though: having the server manage connections between clients is more manageable and safer. There are very few valid arguments in favor of a model where a server is present but clients bypass it at times:
Bandwidth: the server might not be able to handle all the data with reasonable throughput (e.g. a torrent client)
Security: the server might only be there for client updates (e.g. a P2P chat client with an updater)
From the sound of it, your project does not apply to either.
EDIT: Because you have indicated the project is basically a torrent client, I would recommend reading up on the UPnP article.
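For reference, with a library the UPnP mapping itself is short. A sketch using the Open.NAT library (an assumption on my part; the linked article takes a similar approach), which only works when UPnP is enabled on the router:

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Open.Nat;

    class PortForwarder
    {
        static async Task Main()
        {
            var discoverer = new NatDiscoverer();
            var cts = new CancellationTokenSource(10000); // give up after 10 seconds

            // Find the UPnP-capable router on the local network.
            NatDevice router = await discoverer.DiscoverDeviceAsync(PortMapper.Upnp, cts);

            // Ask the router to forward external port 8081 to this machine's port 8081.
            await router.CreatePortMapAsync(
                new Mapping(Protocol.Tcp, 8081, 8081, "Client WCF service"));

            Console.WriteLine("External IP: " + await router.GetExternalIPAsync());
        }
    }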

Access data on client from server

Please actually read my post before placing it on hold!!
Let me start by saying I've been searching for a solution all afternoon and so far I have seen plenty of examples for WCF but none that would do what I need.
I have developed an application in C# that will be installed on customer servers and accesses a SQL Server on the customer's local network. The application also has the ability to control network relays on the customer's local network and records their status in SQL. I am trying to figure out a way to have the customer's server establish a connection to our datacenter so that we can issue commands back to the customer's server (retrieve datasets from SQL, control the network relays, etc.). I have found plenty of ways to have a client call classes on a server but have so far been unsuccessful in finding the reverse. One consideration was writing a web service as part of the application on the customer's server, but I need a way to establish this connection for customers with dynamic IP addresses and without having to publish through firewalls, etc.
Have you considered using
a VPN (virtual private network),
or
configuring a port-forwarding redirect on the ADSL modem and using a solution like www.noip.com?
If I understand correctly, you want to get information from the customer's database, which is behind a firewall and has no known static IP; in addition, there might be several hundred customers, so a dedicated VPN to each customer is not viable.
First of all: you should not contact the customer database directly. Databases are not designed for this scenario and would probably be left open to attack if exposed directly to the internet.
So you need a service on top of the database. There are two main options you can use for this service:
Polling service
The service is actually a client calling some web service on your network and asking for instructions.
Benefits: easy to implement and deploy.
Downsides: with polling there is always a trade-off between scalability/bandwidth use and speed of service. There are also some considerations in selecting the polling schedule to prevent all the clients polling at the same time.
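A minimal sketch of the polling variant (the endpoint URL and command format are hypothetical). Note the jittered interval, which addresses the simultaneous-polling concern above:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PollingWorker
    {
        static readonly HttpClient Http = new HttpClient();

        static async Task Main()
        {
            var rng = new Random(); // jitter so all clients don't poll in lockstep
            while (true)
            {
                try
                {
                    // Hypothetical endpoint on your datacenter service.
                    string commands = await Http.GetStringAsync(
                        "https://datacenter.example.com/api/commands?site=customer-42");
                    // ... parse the commands, run them, POST the results back ...
                }
                catch (HttpRequestException)
                {
                    // Network hiccup: skip this cycle and retry on the next one.
                }
                await Task.Delay(TimeSpan.FromSeconds(30 + rng.Next(0, 15)));
            }
        }
    }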
The service is a TCP server
This can be a regular web service (or RESTful service) or some other service. The only difference is that it needs to advertise itself. For that you need a known directory server. When the service starts, it connects to the directory service and tells it the port it can be contacted on (the directory knows the IP from the connection). It then needs to contact the directory periodically to let it know it is still alive, so that any change of IP is detected.
A client on your network would now query the directory to find the address of the client and connect directly to it to issue commands.
Benefit: Scalable and bandwidth efficient.
Downside: More difficult to implement. Requires firewall traversal solutions (UPnP or firewall exceptions).
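The announce side of the directory variant then reduces to a heartbeat from the customer's server, something like the following sketch (the directory URL and payload shape are hypothetical). The directory records the source IP of each announcement, so dynamic IP changes are picked up automatically:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class DirectoryHeartbeat
    {
        static readonly HttpClient Http = new HttpClient();

        static async Task Main()
        {
            while (true)
            {
                var payload = new StringContent(
                    "{\"serviceId\":\"customer-42\",\"port\":8443}",
                    Encoding.UTF8, "application/json");

                // The directory pairs this port with the source IP it sees.
                await Http.PostAsync("https://directory.example.com/api/announce", payload);

                await Task.Delay(TimeSpan.FromMinutes(1)); // re-announce so IP changes are detected
            }
        }
    }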

WCF (or alternative) Controller-worker setup on machines across the internet

We have a .NET monitoring service that runs on several PCs installed across the UK at client locations. We need to be able to communicate with these PCs from a central web application in order to send them individual commands and request data from them.
These PCs all have internet connectivity but may be behind firewalls. Because these PCs may not be contactable directly via a URL, we need some way for these "workers" to connect to the centralised server, identify themselves, and then respond to commands from the server.
We are looking at WCF P2P as a solution, but have a few concerns about it (can you target an individual worker with it, and will we suffer problems with NATs and firewalls?). We have also considered using XMPP as the communication protocol.
Is P2P the way forward, or is there a better solution (WCF or otherwise)?
Thanks
I suggest using netPeerTcpBinding.
There is a good article here.
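If the peer mesh runs into the NAT/firewall concerns you mention, a plainer WCF alternative that fits the "workers dial out" requirement is a duplex contract over netTcpBinding: each worker opens an outbound connection, registers itself, and the server targets an individual worker through its stored callback channel. A sketch of the contracts (the names are illustrative):

    using System.ServiceModel;

    [ServiceContract(CallbackContract = typeof(IWorkerCallback))]
    public interface IControllerService
    {
        // The worker calls this after connecting; the server stores
        // OperationContext.Current.GetCallbackChannel<IWorkerCallback>()
        // keyed by workerId so it can address workers individually.
        [OperationContract]
        void Register(string workerId);
    }

    public interface IWorkerCallback
    {
        // Server -> worker call, travelling back over the worker's
        // own outbound connection, so no inbound firewall rules are needed.
        [OperationContract(IsOneWay = true)]
        void ExecuteCommand(string command);
    }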
