I'm working on a C# WebSocket server (currently targeting https://datatracker.ietf.org/doc/html/draft-ietf-hybi-thewebsocketprotocol-17).
The server uses the .NET Socket class both to listen and, for each client, to send and receive messages.
I built a web client that connects to the server. It connects successfully and I can send messages between clients.
Everything is working great!
Now, if I connect to the server and leave the client idle for a while without sending messages, the server throws an exception that says:
Int32 Send(Byte[], Int32, Int32, System.Net.Sockets.SocketFlags): An existing connection was forcibly closed by the remote host.
The exception, as you can see, comes from the Send method of the client socket on the server. This looks very weird, because I didn't send any data from the client and no one is sending data back to this client, so how can the Send method throw an exception, and why is this exception thrown?
It's called a timeout!
WebSockets are just a wrapper around raw TCP/IP sockets (the Socket class in .NET) - which time out if nothing is sent and nothing is keeping the connection alive.
AFAIK the WebSocket API currently isn't very well defined as far as keeping the connection alive goes. I was experiencing the same thing and had to switch to sending a ping (an empty message) to keep the connection alive (I'm using the Microsoft sockets implementation).
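As an illustration of that ping idea, here is a minimal sketch over a raw Socket, assuming the hybi-17 framing (a server-to-client ping is the unmasked two-byte control frame 0x89 0x00); the names are mine, not from the question:

```csharp
// Minimal sketch: periodically send a WebSocket ping control frame so the
// idle TCP connection is not dropped. Assumes "clientSocket" is a connected
// System.Net.Sockets.Socket and the hybi-17 handshake is already done.
using System;
using System.Net.Sockets;
using System.Threading;

static class KeepAlive
{
    // FIN = 1, opcode = 0x9 (ping), no mask, zero-length payload.
    // Server-to-client frames are unmasked in hybi-17.
    static readonly byte[] PingFrame = { 0x89, 0x00 };

    public static Timer Start(Socket clientSocket, TimeSpan interval)
    {
        return new Timer(_ =>
        {
            try
            {
                clientSocket.Send(PingFrame, 0, PingFrame.Length, SocketFlags.None);
            }
            catch (SocketException)
            {
                // The peer really is gone; close and clean up this client here.
            }
        }, null, interval, interval);
    }
}
```

A conforming client answers each ping with a pong frame, which doubles as a liveness check.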
If you're reinventing the wheel for a non-final spec, just remember that you'll have to keep reinventing it every time the spec changes. I specifically chose to use the Microsoft sockets preview so that when it's released I'm pretty much not going to have to change any code. I don't run in IIS - I run as a console app, and it's working mostly great so far, but I have very, very few users.
Note: the problem I was having that led me to find this question was that if I send 10 messages without receiving a reply, the connection is closed. I'm still looking into why this is - whether it's a bug/feature of WebSockets or a feature of the Socket class. It's possible I'm hitting a 65 KB limit, but my messages are small and I don't think that's why. Just be aware of this when testing whatever you're working on, because it gives the same error you got.
I assume that you have excluded the use of different protocols between the server and the clients (silly assumption, but you never know).
If your code reaches the Send method without a prior Receive from the client, then something is obviously wrong with the server code. Use trace and/or log output to get more information, even for ABCs like entering the receive wait, receiving, received, exiting receive, etc.
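For what that tracing could look like, a minimal sketch around a blocking Receive; the names are illustrative:

```csharp
using System.Diagnostics;
using System.Net.Sockets;

static class ReceiveTracing
{
    // Wraps a blocking Receive with the "ABC" trace points suggested above.
    public static int TracedReceive(Socket clientSocket, byte[] buffer, string clientId)
    {
        Trace.WriteLine($"[{clientId}] entering wait to receive");
        int read = clientSocket.Receive(buffer);   // blocks until data or close
        Trace.WriteLine($"[{clientId}] received {read} bytes");
        if (read == 0)
            Trace.WriteLine($"[{clientId}] peer closed the connection");
        return read;
    }
}
```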
I'm writing a Windows-based client (C++) and server (C#) application which communicate with each other via TCP. The client sends data and the server needs to acknowledge it.
For this purpose I make a single socket() and connect() call during the client's lifetime, at startup. Some error checking and retries have been built into the send() and recv() calling methods. Note that a client sends one set (multiple packets) of data at a time and then quits.
Now my questions are:
1. If the server is running continuously (e.g. as a Windows service) on some PC, do I really need to account for connection breakdown (network failure) and create a new socket and connect accordingly from the client?
2. If so, do I need to resend the data from the beginning, or from the point where the client last failed to communicate?
I want to know the general methods people around the world use to deal with this kind of situation in network applications.
do I really need to account for connection breakdown and create a new socket and connect accordingly from the client?
Depends on how precious your data is. If you want to make sure it ended up at the server, and an error occurred while sending, then you can consider it "not sent".
If so, do I need to resend the data from the beginning, or from the point where the client last failed to communicate?
That depends entirely on how your application logic and application protocol work. From your description we can't know how you send your data or how the server would recognize data it has already seen.
do I really need to account for connection breakdown (network failure) and create a new socket and connect accordingly from the client?
You certainly do not need to create a new socket after a connection shutdown; you can use the existing socket to connect anew.
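To illustrate the reconnect part of the question, here is a minimal sketch of a client-side retry loop; for simplicity it creates a fresh Socket per attempt, and the backoff and attempt count are placeholder choices, not anything prescribed above:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

static class RetryingClient
{
    // Tries to connect and send; on failure waits briefly and retries.
    public static bool SendWithRetry(EndPoint server, byte[] payload, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            var socket = new Socket(AddressFamily.InterNetwork,
                                    SocketType.Stream, ProtocolType.Tcp);
            try
            {
                socket.Connect(server);
                socket.Send(payload);
                return true;
            }
            catch (SocketException)
            {
                Thread.Sleep(TimeSpan.FromSeconds(attempt)); // simple backoff
            }
            finally
            {
                socket.Close();
            }
        }
        return false;
    }
}
```

Whether "true" here really means "the server processed it" is exactly the application-protocol question raised above; an application-level acknowledgement is what settles that.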
I have a TCP server and clients written in C#. Since my connection is over Wi-Fi, which is not reliable, I resend the same packet to handle packet loss.
For example, a bank account platform: the user deposits money and the client sends this message to the server; if the server receives the message, it replies to the client that the operation was successful. If the client doesn't receive the reply, it sends the message again after a period of time.
This looks simple, but I faced a situation where the Wi-Fi stalled, the client didn't receive a reply and kept sending the same message to the server. In the end those messages were all received by the server at about the same time. As a result, the server thought the user had deposited money 100 times.
I would like to know how people usually handle such cases in TCP server/client programs, especially when the application is not just a chat application but handles sensitive information like money. My first thought is adding a transaction ID to each message so the server will not handle messages with the same transaction ID twice, which would prevent the above case. But I'm not sure if there is a better solution or if .NET has some built-in function for this.
Thank you.
When you code in C#, you are mostly working within the Application layer of the OSI model. The TCP protocol works at the Transport layer, which is below the Application layer.
The reliability you want is already embedded in the TCP protocol itself. This means it will automatically attempt to resend packets if some are lost, without any additional requests from you, and this happens before control is returned to your application-layer program. There are other guarantees as well, such as ordered delivery of packets.
This means the functionality you need is already implemented at the layers below, and you don't need to worry about it.
Note that if you were to use UDP, you would need to handle these reliability problems yourself.
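One caveat worth adding: TCP's retransmission only covers packets lost in flight on a single connection; it does not deduplicate application-level retries like the repeated deposit messages described above, so the transaction-ID idea from the question is still needed for that. A minimal sketch of such server-side deduplication, with illustrative types and names:

```csharp
using System;
using System.Collections.Generic;

class DepositHandler
{
    private readonly HashSet<Guid> _processed = new HashSet<Guid>();

    // Returns true if the deposit was applied, false if it was a duplicate.
    // Either way the server should send its reply, so a client that missed
    // the first reply still gets acknowledged.
    public bool Handle(Guid transactionId, decimal amount)
    {
        if (!_processed.Add(transactionId))
            return false;            // already seen: do not re-apply

        ApplyDeposit(amount);        // hypothetical business logic
        return true;
    }

    private void ApplyDeposit(decimal amount) { /* ... */ }
}
```

A real system would persist the processed IDs so duplicates are still recognized after a server restart.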
I am having big trouble with a C# TCP client and server application. Everything works fine... but in some cases, when the TCP server sends two responses to the TCP client back to back, the client can treat both messages sent by the server as a single message. I don't know why this happens... if anyone knows, please help me. My TCP client and server are written in C#.
This is normal behavior for TCP. It guarantees you the sequence (if the server sends A, then B, client will never receive B, then A), but it knows nothing about your "messages".
To break data into messages at client side, you need some application protocol over TCP. E.g., HTTP uses CRLFCRLF to determine the end of the HTTP message.
You may use an existing one or make your own, depending on your needs.
There's no guarantee of a 1:1 correspondence between calls to Write on one end of a TCP connection and calls to Read on the other end. You may receive no data, part of a message, an entire message, or multiple messages for each call to Read.
It is up to you to perform any appropriate work to turn these blobs of data back into messages - or to switch to a higher level technology (e.g. WCF) if you want something else to do the hard work.
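As an example of such an application protocol, here is a minimal sketch of length-prefixed framing over a raw Socket; the 4-byte prefix and the names are my own choices, not anything mandated by TCP:

```csharp
using System;
using System.Net.Sockets;

static class Framing
{
    // Sends a 4-byte length prefix (BitConverter host order; assumes both
    // ends are little-endian .NET, which holds on x86/x64) then the payload.
    public static void SendMessage(Socket socket, byte[] payload)
    {
        socket.Send(BitConverter.GetBytes(payload.Length));
        socket.Send(payload);
    }

    public static byte[] ReadMessage(Socket socket)
    {
        int length = BitConverter.ToInt32(ReadExactly(socket, 4), 0);
        return ReadExactly(socket, length);
    }

    // Receive may return fewer bytes than asked for, so loop until done.
    private static byte[] ReadExactly(Socket socket, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0)
                throw new SocketException((int)SocketError.ConnectionReset);
            offset += read;
        }
        return buffer;
    }
}
```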
I have written a sample client and server. The server keeps listening while the client connects, sends requests and then disconnects. I have a scenario where the client connects to the server and, before it sends any requests, the server is shut down forcefully or by some other means. My question is: how can I handle this? Can I keep the server from disconnecting unless it notifies its connected clients? Can I write such a method? How?
EDIT: by server and client I mean the server and client applications I have written myself.
Thanks
Please clarify your situation. Does "the server" mean your server application or the physical server itself? If it means the OS/machine itself, then there is nothing you can do except perform thorough software and hardware troubleshooting.
UPDATE:
OK, if it is your application that is the problem, then you can implement a try..catch statement in your code and learn more from the exception being raised.
The point is that you should try to prevent the exception in the first place rather than look for a solution after it happens.
Since you are in control of both the server and the client application, you can use a comet approach to monitor the status of the server application, i.e. whether it is still running or has shut down.
For more information about the comet approach, here is the link: http://www.codeproject.com/KB/aspnet/CometAsync.aspx
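For the try..catch suggestion, a minimal sketch of a client receive loop that distinguishes an orderly server shutdown (a zero-byte read) from a forced one (a SocketException); the names are illustrative:

```csharp
using System;
using System.Net.Sockets;

static class ClientReceive
{
    public static void ReceiveLoop(Socket server)
    {
        var buffer = new byte[4096];
        try
        {
            while (true)
            {
                int read = server.Receive(buffer);
                if (read == 0)
                {
                    Console.WriteLine("Server closed the connection gracefully.");
                    break;
                }
                // ... handle 'read' bytes of data ...
            }
        }
        catch (SocketException ex)
        {
            Console.WriteLine($"Server went away unexpectedly: {ex.SocketErrorCode}");
        }
    }
}
```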
Unfortunate short answer: no. Lots of things can forcefully and unexpectedly shut down your server -- whether it be a network error, a system administrator, or a state-wide power failure.
The best you can do is ensure your client is able to handle sudden server disconnections.
I don't think there is anything you can do if the server is forcefully shut down. The best you can do is have the client check that the server is still up before it sends any commands. This will at least prevent the client from crashing.
If your client is always connected and able to receive commands from the server, there is nothing stopping you from sending some kind of shutdown notification to the clients when the server is shut down in an orderly fashion.
I have a client-server app where the client is on a Windows Mobile 6 device, written in C++ and the server is on full Windows and written in C#.
Originally, I only needed it to send messages from the client to the server, with the server only ever sending back an acknowledgement that it received the message. Now I would like to update it so that the server can actually send a message to the client to request data. As I currently have it set up, the client is only in receive mode after it sends data to the server, so the server cannot send a request at any time; it would have to wait for client data. My first thought was to create another thread on the client with a separate open socket, listening for server requests... just like the server already does for the client. Is there a way, within the same thread and using the same socket, to allow the server to send requests at any time?
Can you use something to the effect of WaitForMultipleObjects() and pass it a receive buffer and an event that tells it there is data to be sent?
When I needed to write an application with a client-server model where the clients could leave and join whenever they wanted (I assume that's also the case for your application, since you use mobile devices), I made sure that the clients sent an "online" message to the server, indicating they were connected and ready to do whatever needed doing.
At that point the server could send messages back to the client through the same open connection.
Also, though I don't know if this is applicable for you, I had a sort of heartbeat the clients sent to the server, letting it know they were still online. That way the server knew when a client was forcibly disconnected from the network and could mark that client as offline.
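A minimal sketch of that heartbeat bookkeeping on the server side; the types, names, and timeout are illustrative:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

class HeartbeatMonitor
{
    private readonly ConcurrentDictionary<string, DateTime> _lastSeen =
        new ConcurrentDictionary<string, DateTime>();
    private readonly TimeSpan _timeout = TimeSpan.FromSeconds(30);

    // Call whenever a heartbeat (or any message) arrives from a client.
    public void Beat(string clientId) => _lastSeen[clientId] = DateTime.UtcNow;

    // Call periodically (e.g. from a timer) to mark silent clients offline.
    public string[] SweepOffline()
    {
        DateTime cutoff = DateTime.UtcNow - _timeout;
        return _lastSeen.Where(kv => kv.Value < cutoff)
                        .Select(kv => kv.Key)
                        .ToArray();
    }
}
```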
Using asynchronous communication is entirely possible in a single thread!
There is a common design pattern in network software development called the reactor pattern (look at this book). Some well-known network libraries provide an implementation of this pattern (look at ACE).
Briefly, the reactor is an object in which you register all your sockets, and then you wait for something to happen. If something happens (new data arrives, a connection closes...) the reactor notifies you. And of course, you can use a single socket to send and receive data asynchronously.
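Since the server side of this question is C#, here is a minimal sketch of the same reactor idea using Socket.Select, which blocks one thread until any registered socket is readable; the names and timeout are illustrative:

```csharp
using System.Collections.Generic;
using System.Net.Sockets;

static class Reactor
{
    public static void Loop(List<Socket> sockets)
    {
        var buffer = new byte[4096];
        while (sockets.Count > 0)
        {
            // Select trims the list to the readable sockets, so pass a copy.
            var readable = new List<Socket>(sockets);
            Socket.Select(readable, null, null, 1000000); // 1 s, in microseconds

            foreach (Socket s in readable)
            {
                int read = s.Receive(buffer);
                if (read == 0)
                    sockets.Remove(s);               // peer closed; deregister
                else
                    HandleData(s, buffer, read);     // hypothetical dispatch
            }
        }
    }

    static void HandleData(Socket s, byte[] data, int count) { /* ... */ }
}
```

The same socket can also be checked for writability via Select's second list when you have queued outgoing data.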
I'm not clear on whether you want to add the asynchronous bits to the server in C# or the client in C++.
If you're talking about doing this in C++, desktop Windows platforms can do socket I/O asynchronously through the APIs that use overlapped I/O. For sockets, WSASend and WSARecv both allow async I/O (read the documentation on their LPOVERLAPPED parameters, which you can populate with events that get set when the I/O completes).
I don't know if Windows Mobile platforms support these functions, so you might have to do some additional digging.
Check out asio. It is a cross-platform C++ library for asynchronous IO. I am not sure whether it would be useful for the server (I have never tried to link a standard C++ DLL to a C# project), but for the client it would be useful.
We use it with our application, and it solved most of our IO concurrency problems.