I am trying to make a C# socket application that listens on a certain port and takes any message sent to it.
The problem is, I want to make it in a non-binding way. I mean, when I use TcpListener and accept a socket, once a connection comes in, a socket for that connection is returned and it deals ONLY with that connection.
What I want is different: I want any connection (from any remote socket) to be able to send, and to get their messages without being tied to any of them.
What I am doing is using 2 background workers, one for sending and the other for receiving (i.e. they are on different ports). The app will send some commands through the sending worker, depending on the pre-defined message format received on the receiving worker.
I hope it was clear enough
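Since being tied to individual connections is the sticking point, a connectionless UDP socket may fit this design better than TCP: one UdpClient can receive datagrams from any remote sender. A minimal sketch of such a receive loop (the port number is an assumption):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class AnySenderReceiver
{
    static void Main()
    {
        // One socket bound to port 11000 (hypothetical) receives from everyone;
        // no per-connection socket is ever created.
        using (var client = new UdpClient(11000))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                // Receive blocks until a datagram arrives and fills 'remote'
                // with whichever endpoint sent it.
                byte[] data = client.Receive(ref remote);
                Console.WriteLine($"{remote}: {Encoding.UTF8.GetString(data)}");
            }
        }
    }
}
```

A second UdpClient on the sending port could run the same way from the sending worker.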
Related
I am trying to poll a connection from a client connected to my server.
When you use Poll you need to give it a socket connection, but on the server's side the socket is bound to its own IP address and a specific port. Making another socket to connect on the same port but with the client's IP address won't work, since you can't have multiple connections on the same socket.
I am just wondering what would be a good way to constantly check whether a client is still connected to the server, and to detect when it disconnects?
I was thinking some sort of timeout check or something. I just wanted to know if there was any generic or proper way of achieving this.
I have tried Socket.Poll but it does not seem to achieve what I want.
To restate my question, how do you check if a client is connected on the server side using TCP sockets in C#?
socket.Receive will just return 0.
From MSDN:
If you are using a connection-oriented Socket, the Receive method will read as much data as is available, up to the size of the buffer. If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes.
There is also a Connected property on the Socket class if you need it.
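A common way to combine Poll with that zero-byte Receive behavior is the following extension-method sketch. Note it only detects an orderly shutdown or a reset, not a silently dead link (unplugged cable):

```csharp
using System.Net.Sockets;

static class SocketExtensions
{
    // Heuristic disconnect check: SelectRead returns true when data is
    // available OR the connection has been closed/reset. If it returns
    // true but no data is actually available, the peer has shut down.
    public static bool IsDisconnected(this Socket socket)
    {
        try
        {
            return socket.Poll(1000 /* microseconds */, SelectMode.SelectRead)
                   && socket.Available == 0;
        }
        catch (SocketException)
        {
            // The socket is already in an error state.
            return true;
        }
    }
}
```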
There are two kinds of sockets: for listening and for connections. If a client has connected, you must have an instance of Socket that is associated with that connection.
Use this socket to periodically send data and receive an acknowledgement back. This is the only way to be 100% sure that the connection is still open.
In particular, the Connected property cannot be used to detect a lost connection, because if the network cable is unplugged, neither of the two parties is notified in any way. The property cannot be accurate in principle.
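A periodic ping along those lines might look like the sketch below. The interval, the one-byte ping, and the absence of an application-level acknowledgement are all simplifications; a real protocol would define its own keepalive message and reply:

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

class Heartbeat
{
    // Periodically send a 1-byte ping on the connected socket; a failed
    // Send indicates the connection is gone (possibly only after the
    // TCP retransmission timeout has expired).
    public static void Run(Socket connection, CancellationToken token)
    {
        byte[] ping = { 0x00 };
        while (!token.IsCancellationRequested)
        {
            try
            {
                connection.Send(ping);
            }
            catch (SocketException)
            {
                Console.WriteLine("Connection lost.");
                return;
            }
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}
```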
I have a UDP server in C on a Linux VM and a UDP client in C# in the host Windows 7 machine.
The UDP server listens for connections. UDP client connects then sends a request. The server receives the request, processes it, then sends back a reply (of less than 100 bytes). The UDP client receives the reply and does some work. This process repeats over and over again, at the rate of about 10 request/reply pairs per second continuously.
Currently, I have the UDP server listening and receiving on port 11000 and sending on port 10001, and the client listening and receiving on port 10001 and sending on port 11000. The socket that is being used to listen is kept open on both sides. With sending, each side is opening the send socket, sending data, then closing until the next request is received. So far, this is working.
I understand that it should be possible to use the SAME socket for both sending and receiving. I haven't been able to get this to work yet, but that isn't my question. My question is, is there an appreciable advantage, in my situation, to using the same socket, if it's working as it currently stands? Is there any disadvantage? Or any advantage to having two separate sockets as in my current implementation?
Thank you.
Of course there are penalties for doing what you are doing: resource waste.
Each time you create a socket, send the data, and destroy it, you are allocating and deallocating resources unnecessarily.
Suppose you have a high message rate: each time you send a message you create and destroy one socket, and sockets are not destroyed immediately (at least in TCP; maybe for UDP I'm wrong).
If you can use just one socket, do it. When you are talking to someone on your cell phone, you don't buy a new one each time you want to say something in the conversation and then throw it in the trash, right? ;)
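Applied to the setup in the question, a single UdpClient can serve both directions for the whole session; a sketch (the port and the Process handler are placeholders):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class SingleSocketServer
{
    static void Main()
    {
        // One socket, bound once, used for both receiving requests and
        // sending replies. Port 11000 is an assumption.
        using (var client = new UdpClient(11000))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] request = client.Receive(ref remote); // fills in the sender
                byte[] reply = Process(request);             // hypothetical handler
                client.Send(reply, reply.Length, remote);    // reply on the SAME socket
            }
        }
    }

    // Placeholder for the real request processing: here, just echo.
    static byte[] Process(byte[] request) => request;
}
```

This also means the client learns the reply's source from the datagram itself, so the second port becomes unnecessary.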
Preface:
I have an asynchronous socket server where we receive telemetry data, and when the remote devices do not send us data, we have the ability to send commands to request data. The listener and command processing are done on separate threads. The listener listens on one port, while the commands send on a different port.
My overall question is: Is it possible with C# to check if a socket is connected without having to call a "connect" method in the first place? Our customers' devices will establish a connection to the server and will remain connected always (unless service coverage drops, the battery drains, etc.). I'd like to avoid having to keep track of all the connected socket objects in memory if possible.
To be honest I'm not even sure if what I'm asking is feasible. I'd like to hear people's thoughts.
If you know the socket information, you could probably invoke GetExtendedTcpTable and get the state of the socket ("established" or not).
For an example of P/Invoking this function, see:
http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/5b8eccd3-4db5-44a1-be29-534ed34a588d/
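A sketch of the P/Invoke declaration involved, with the usual two-call pattern described in a comment (the enum values follow the Windows SDK; walking the returned buffer into MIB_TCPROW_OWNER_PID structs is omitted here):

```csharp
using System;
using System.Runtime.InteropServices;

static class TcpTable
{
    // Call once with pTcpTable = IntPtr.Zero to learn the required buffer
    // size, allocate that much unmanaged memory, call again, then walk the
    // MIB_TCPTABLE_OWNER_PID rows and inspect each row's state
    // (an established connection has state MIB_TCP_STATE_ESTAB == 5).
    [DllImport("iphlpapi.dll", SetLastError = true)]
    public static extern uint GetExtendedTcpTable(
        IntPtr pTcpTable,
        ref int pdwSize,
        bool bOrder,
        int ulAf,                  // AF_INET = 2 for IPv4
        TCP_TABLE_CLASS tableClass,
        uint reserved);

    public enum TCP_TABLE_CLASS
    {
        TCP_TABLE_BASIC_LISTENER,
        TCP_TABLE_BASIC_CONNECTIONS,
        TCP_TABLE_BASIC_ALL,
        TCP_TABLE_OWNER_PID_LISTENER,
        TCP_TABLE_OWNER_PID_CONNECTIONS,
        TCP_TABLE_OWNER_PID_ALL,
        TCP_TABLE_OWNER_MODULE_LISTENER,
        TCP_TABLE_OWNER_MODULE_CONNECTIONS,
        TCP_TABLE_OWNER_MODULE_ALL
    }
}
```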
I've no idea how I would go about this but I'm assuming that it is possible in some way, shape or form.
If I have a server that allows multiple connections to it through one port, is there a way I can keep some sort of log of the connections, so that I could choose a certain connection to send a message to?
Also, if this is possible, can I do the same with connections through different ports?
How would I go about this? I'm fairly new to C# so not very experienced - any help is greatly appreciated!
Basically I want 3 clients to connect to a server. The clients will all send a message to the server, and the server will wait for a message from each client before replying to them, in the order in which the messages were sent.
I hope this makes more sense now.
If you are using TCP/IP, this is very much possible: the Socket that listens for incoming connections only does that; it does not handle the communication with each individual client. Instead, the Accept() and BeginAccept() methods return a new Socket instance for each client that connects.
So the Socket instance you get when a client connects only receives messages from that client, and sending a message on that socket sends it to only that client.
Keeping track of which connection sent what, and which came first, will be more of a challenge, but it is definitely possible.
If you are using UDP, though, things are a bit different: you would need a custom means of identifying each client.
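One way to keep that "log of connections" is a dictionary keyed by each client's remote endpoint, populated as Accept() returns each new socket. A sketch (the registry class and its method names are inventions for illustration):

```csharp
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Text;

class ClientRegistry
{
    // Map each accepted connection by its remote endpoint so a specific
    // client can be looked up later and messaged directly.
    private readonly ConcurrentDictionary<EndPoint, Socket> clients =
        new ConcurrentDictionary<EndPoint, Socket>();

    // Call this with each socket returned by Accept()/BeginAccept().
    public void Register(Socket accepted) =>
        clients[accepted.RemoteEndPoint] = accepted;

    // Send a message to one chosen client; returns false if unknown.
    public bool SendTo(EndPoint who, string message)
    {
        if (!clients.TryGetValue(who, out Socket socket))
            return false;
        socket.Send(Encoding.UTF8.GetBytes(message));
        return true;
    }
}
```

Sockets accepted from listeners on different ports can go into the same registry, since the dictionary only cares about the remote endpoint.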
I am supposed to connect to an external server using UDP sockets in C#.
I could not understand these 2 lines in the server's usage notes:
"Use of dedicated sockets is enforced."
and
"If the server looses UDP connectivity with the client, it will ..."
I thought that UDP sockets were connectionless!
So what does "looses connectivity" mean, and how do I avoid it?
Is there a known way to ensure "dedicated sockets"?
Thanks
"Use of dedicated sockets is enforced."
To me this says, create one unique socket for each connection and use it throughout that connection.
EDIT: Just to expand on this, from the servers point of view.
UDP sockets are not identified by the remote address, but only by the local address, although each message has an associated remote address. (source)
That way the server can distinguish which client each message came from. Because the remote address is made up of an IP address and port combination, you should use the same socket throughout your communication with the server. If you don't, you could be assigned a different port the next time you recreate the underlying socket.
"If the server looses UDP connectivity with the client, it will ..."
It is possible to lose UDP connectivity, e.g. when either of the endpoints in the conversation goes away; say I go to the server and pull the plug.
EDIT2:
Dan Bryant makes an excellent point in the comments that ties in with what I was saying above.
One thing worth noting is that it's possible for a call to a UDP socket to throw a SocketException with SocketError.ConnectionReset as the error code. UDP does not have any sort of session with structured connect/disconnect, but it does use a dynamically-assigned remote port to allow replies, which is a kind of 'connection'.
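That ConnectionReset behavior can be handled explicitly; a sketch, assuming a UdpClient-based receive loop (on Windows the exception surfaces when an earlier send triggered an ICMP "port unreachable" reply):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class ResetAwareReceive
{
    // Wraps UdpClient.Receive so a ConnectionReset does not kill the loop.
    static byte[] TryReceive(UdpClient client, ref IPEndPoint remote)
    {
        try
        {
            return client.Receive(ref remote);
        }
        catch (SocketException ex)
            when (ex.SocketErrorCode == SocketError.ConnectionReset)
        {
            // A previous datagram we sent was rejected by the peer;
            // UDP itself has no connection to reset.
            Console.WriteLine("Remote endpoint reported unreachable.");
            return null;
        }
    }
}
```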
After 2 hours of trying different, maybe random, solutions:
The server wants you to introduce yourself on a port other than the one you will use to actually send data: "dedicated sockets".
You know which IP and port you are sending the start info on, but you do not know which will be used for the actual data transmission.
Solution
1- Create your socket with the known IPEndPoint, and send the 'start message' on it.
2- Then wait to receive from any IP...
3- The server will respond with a 'welcome message', stating the endpoint it will use (by changing the ref remoteEP parameter of Socket.ReceiveFrom()).
4- You must then change the port you are sending on to the remote endpoint's port + 1 (why? a standard convention or something?).
5- Finally, you can send and receive normally using these ports.
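The steps above can be sketched as follows. The server address, port numbers, and message contents are placeholders; the port + 1 rule is specific to this particular server, not a general convention:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class DedicatedSocketClient
{
    static void Main()
    {
        // Step 1: send the 'start message' to the server's known endpoint.
        var server = new IPEndPoint(IPAddress.Parse("192.0.2.1"), 11000);
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Dgram, ProtocolType.Udp);
        byte[] hello = Encoding.UTF8.GetBytes("START");
        socket.SendTo(hello, server);

        // Steps 2-3: wait for the welcome message from ANY endpoint;
        // ReceiveFrom fills 'remoteEP' with the endpoint the server used.
        EndPoint remoteEP = new IPEndPoint(IPAddress.Any, 0);
        byte[] buffer = new byte[1024];
        int received = socket.ReceiveFrom(buffer, ref remoteEP);
        Console.WriteLine($"Welcome from {remoteEP}");

        // Step 4: data goes to the announced endpoint's port + 1
        // (the rule this server imposes, per the steps above).
        var welcomeEP = (IPEndPoint)remoteEP;
        var dataEP = new IPEndPoint(welcomeEP.Address, welcomeEP.Port + 1);

        // Step 5: send and receive normally on the data endpoint.
        socket.SendTo(Encoding.UTF8.GetBytes("DATA"), dataEP);
    }
}
```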