Windows Phone 8 - Keep socket open and receive data of unknown length - c#

I have a socket on which I want to receive multiple messages of unknown lengths: text, media, etc.
I saw how it works with Windows.Networking.Sockets, and it seems the sender is expected to send the length first, which is not my case.
I saw a few improvements in System.Net.Sockets but didn't find any event that listens for incoming data.
My question is: do I have to poll the socket every now and then for data, or is there a better implementation?

Windows Phone 8 Approach
Asynchronous networking operations are supported through the ReceiveAsync method of the System.Net.Sockets.Socket class. All you need to do is configure the SocketAsyncEventArgs properly before making the call, and your handler will be raised only when data has been received.
The example from this MSDN article shows how to perform a Read operation synchronously, but internally it is using an asynchronous pattern. If you remove the ManualResetEvent part from the code, then the calls will fall through, and the event will only fire when data is ready to be processed.
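For illustration, here is a minimal sketch of that idea (not the MSDN sample itself); it assumes an already-connected Socket named socket, a 4 KB buffer, and the usual System.Net.Sockets using:

var args = new SocketAsyncEventArgs();
args.SetBuffer(new byte[4096], 0, 4096);
args.Completed += (sender, e) =>
{
    if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
    {
        // process e.BytesTransferred bytes from e.Buffer here,
        // then post the next receive to keep listening
        ((Socket)sender).ReceiveAsync(e);
    }
};

// ReceiveAsync returns false when the operation completed synchronously;
// in that case the Completed event is not raised, so handle the data inline.
if (!socket.ReceiveAsync(args))
{
    // data was already available - process args.Buffer here as well
}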
Desktop / Full .NET Approach
Use the TcpClient class instead of Socket and you will be able to get a NetworkStream object which implements BeginRead - an asynchronous method which will invoke a callback function only when there is new data to be read (or the remote side has closed).
Note: the MSDN example for BeginRead is continued on the page for EndRead; use those two code snippets together to get working code.
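A rough combination of those two snippets might look like the following (the host, port, and buffer size are placeholders, not the article's values):

TcpClient client = new TcpClient("example.com", 9000);
NetworkStream stream = client.GetStream();
byte[] buffer = new byte[4096];

AsyncCallback onRead = null;
onRead = ar =>
{
    int bytesRead = stream.EndRead(ar);   // 0 means the remote side closed the connection
    if (bytesRead > 0)
    {
        // process buffer[0..bytesRead) here, then queue the next read
        stream.BeginRead(buffer, 0, buffer.Length, onRead, null);
    }
};

stream.BeginRead(buffer, 0, buffer.Length, onRead, null);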

You can use the select function to keep track of your open descriptors and receive the data that has arrived over the socket.

Related

Is multithreading dangerous in sockets writing?

I'm creating a network protocol based on TCP, and I am using Berkeley sockets via C#.
Will the socket buffer get mixed up when two threads try to send data via the Socket.Send method at the same time?
Should I use a lock so that only one thread accesses it at a time?
According to MSDN, the Socket class is thread safe. That means you don't need to lock the Send method, and you can safely send and receive on different threads. Be aware, though, that the order is not guaranteed, so send each message in a single call to Send instead of splitting it into chunked calls; that way the data won't get interleaved.
In addition to that, I would recommend locking and flushing in case you don't want the server interleaving responses across multiple concurrent requests, but that doesn't seem to be the case here.
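As a hedged sketch of that advice, something like the following keeps each message in one logical send and serializes writers with a lock (_socket and _sendLock are assumed fields of your own connection class):

private readonly object _sendLock = new object();

public void SendMessage(byte[] message)
{
    lock (_sendLock)
    {
        int sent = 0;
        // Socket.Send can report fewer bytes than requested, so loop
        // defensively until the whole message has gone out while holding the lock.
        while (sent < message.Length)
        {
            sent += _socket.Send(message, sent, message.Length - sent, SocketFlags.None);
        }
    }
}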

C# System.IO.NetworkStream Read/Write Race Condition

I have a WebSocket server that uses the System.IO.Stream class to communicate (one Stream per connection). The server needs to send and receive (C# .NET 2.0), and the Stream object is created from the TcpClient produced when I accept a connection.
The desired setup is I have Stream.Read on one thread handling all the incoming messages. It's a loop where Stream.Read() is expected to block as messages come in.
On another thread, I need to occasionally send messages back to the client using Stream.Write().
My question is: would there ever be a race condition? Is it possible, when I fire off a Stream.Write() while Stream.Read() is waiting/reading, that I could muddle up the incoming read data? Or is Stream smart enough to lock up the resources for me? Is there any case where having these two sitting on Read() and Write() at the same time could break things?
After some more research, it turns out it's a NetworkStream object, which does indeed handle simultaneous read/write without a race condition:
https://msdn.microsoft.com/en-us/library/system.net.sockets.networkstream(v=vs.110).aspx
"Read and write operations can be performed simultaneously on an instance of the NetworkStream class without the need for synchronization. As long as there is one unique thread for the write operations and one unique thread for the read operations, there will be no cross-interference between read and write threads and no synchronization is required."
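An illustrative (assumed) layout of that guarantee, with one dedicated reader thread and one writer thread on the same NetworkStream obtained from the accepted TcpClient (System.Threading and System.Text usings assumed):

var reader = new Thread(() =>
{
    byte[] buffer = new byte[4096];
    int read;
    // this thread owns all Read calls
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // decode and handle the incoming frame bytes here
    }
});
reader.IsBackground = true;
reader.Start();

// any single other thread owns all Write calls; no extra locking is needed
byte[] outgoing = Encoding.UTF8.GetBytes("pong");
stream.Write(outgoing, 0, outgoing.Length);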

Sockets terminology - what does "blocking" mean?

When talking sockets programming in C# what does the term blocking mean?
I need to build a server component (possibly a Windows service) that will receive data, do some processing and return data back to the caller. The caller can wait for the reply but I need to ensure that multiple clients can call in at the same time.
If client 1 connects and I take say 10 seconds to process their request, will the socket be blocked for client 2 calling in 2 seconds later? Or will the service start processing a second request on a different thread?
In summary, my clients can wait for a response but I must be able to handle multiple requests simultaneously.
Blocking means that the call you make (send/receive) does not return ('blocks') until the underlying socket operation has completed.
For a read, that means until some data has been received or the socket has been closed.
For a write, it means that all data in the buffer has been sent out.
To deal with multiple clients, start a new thread for each client, or hand the work to a thread in a thread pool.
Connected TCP sockets cannot be shared, so it must be one socket per client anyway.
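A minimal thread-per-client sketch of that approach (port number and echo logic are placeholders, not your protocol):

var listener = new TcpListener(IPAddress.Any, 9000);
listener.Start();

while (true)
{
    TcpClient client = listener.AcceptTcpClient();   // blocks until a client connects
    new Thread(() =>
    {
        using (client)
        using (NetworkStream stream = client.GetStream())
        {
            byte[] buffer = new byte[4096];
            int read = stream.Read(buffer, 0, buffer.Length);   // blocks this worker only
            // ...take as long as needed here; other clients are served on their own threads...
            stream.Write(buffer, 0, read);                       // placeholder echo response
        }
    }) { IsBackground = true }.Start();
}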
This means you can't use the socket for anything else on the currently executing thread.
It has nothing to do with the server side.
It means the thread pauses whilst it waits for a response from the socket.
If you don't want it to pause, use the async methods.
Read more: http://www.developerfusion.com/article/28/introduction-to-tcpip/8/
A blocking call will hold the currently executing thread until the call completes.
For example, if you wish to read 10 bytes from a network stream, call the Read method as follows:
byte[] buf = new byte[10];
int bytesRead = stream.Read(buf, 0, buf.Length);
The currently executing thread will block on the Read call until at least one byte has been received (or the ReadTimeout has expired); note that Read may return fewer than the 10 bytes requested, so check the return value.
There are Async variants of Read and Write to prevent blocking the current thread. These follow the standard APM pattern in .NET. The Async variants save you from having to dedicate a thread (which would be blocked) to each client, which increases your scalability.
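Continuing the snippet above, an APM sketch would be (scheduling of the next read is left out):

stream.BeginRead(buf, 0, buf.Length, ar =>
{
    int n = stream.EndRead(ar);   // runs on a threadpool thread when data arrives
    // buf[0..n) is now valid; issue the next BeginRead here to keep reading
}, null);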
Blocking operations are usually those that send or receive data and those that establish connections (i.e. listen for new clients or connect to other listeners).
To answer your question, blocking basically means that control stays within a function or block of code (such as readfile() in C++) until it returns, and does not move on to the code following that block.
This can happen in either a single-threaded or a multi-threaded context, though having blocking calls in single-threaded code is basically a recipe for disaster.
Solution:
To solve this in C#, you can simply use asynchronous methods, for example BeginInvoke() and EndInvoke() in the sockets context, which will not block your calls. This is called the asynchronous programming model.
You can call BeginInvoke() and EndInvoke() either on a delegate or on a control, depending on which asynchronous approach you follow.
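For illustration only, the delegate flavour of that pattern (classic .NET Framework APM; the socket and buffer names are assumptions) could look like:

Func<byte[], int> blockingReceive = b => socket.Receive(b);

byte[] buffer = new byte[1024];
blockingReceive.BeginInvoke(buffer, ar =>
{
    int received = blockingReceive.EndInvoke(ar);   // completes on a threadpool thread
    // buffer[0..received) is ready; the calling thread was never blocked
}, null);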
You can use the function Socket.Select()
Select(IList checkRead, IList checkWrite, IList checkError, int microSeconds)
to check multiple Sockets for both readability or writability. The advantage is that this is simple. It can be done from a single thread and you can specify how long you want to wait, either forever (-1 microseconds) or a specific duration. And you don't have to make your sockets asynchronous (i.e.: keep them blocking).
It also works for listening sockets: a listening socket is reported as readable when there is a connection to accept. From experimenting, I can say that it also reports readability for graceful disconnects.
It's probably not as fast as asynchronous sockets. It's also not ideal for detecting errors: I haven't had much use for the third parameter, because it doesn't detect an ungraceful disconnect.
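A hedged single-threaded Select loop along those lines (the sockets list and the one-second timeout are assumptions):

var readable = new List<Socket>(sockets);
Socket.Select(readable, null, null, 1000000);   // microseconds; -1 waits forever

foreach (Socket s in readable)                  // Select trims the list to the ready sockets
{
    byte[] buffer = new byte[4096];
    int n = s.Receive(buffer);
    if (n == 0)
    {
        // a graceful disconnect also shows up as readable, with 0 bytes to read
    }
}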
You should use one socket per thread. Blocking sockets (synchronous) wait for a response before returning. Non-blocking sockets (asynchronous) can peek to see whether any data has been received and return if nothing has arrived yet.

C# TCP socket, asynchronous read and write at the "same" time (.NET CF), how?

At first glance, when using asynchronous methods at the socket level, there shouldn't be any problem sending and receiving data on the associated network stream, or 'directly' on the socket. But, as you probably already know, there is.
The problem is particularly pronounced on the Windows Mobile platform and the Compact Framework.
I use the asynchronous methods: BeginReceive, plus a callback function that ends the pending asynchronous read (EndReceive) using the async result.
As I need to receive data from the socket constantly, there is a loop that waits for the data.
The problem begins when I want to send data. For that purpose, before sending anything through the socket, I'm "forcing" the asynchronous read to end with EndReceive. There is a long delay on that call (sometimes you just can't wait for this timeout). The timeout is too long, and I need to send the data immediately. How? I don't want to close the socket and reconnect.
I use the synchronous Send method for sending data (although the results are the same with the async methods BeginSend/EndSend). When sending has finished, I start receiving data again.
Resources that i know about:
stackoverflow.com...properly-handling-network-timeouts-on-windows-ce - comment about timeouts,
developerfusion.com...socket-programming-in-c-part-2/ - solution for simple client/server using asynchronous methods for receiving and synchronous method Send for sending data.
P.S.: I tried to send the data without ending the asynchronous receive, but then I got a SocketException: A blocking operation is currently executing (ErrorCode: 10036).
Thanks in advance! I hope I'm not alone with this 'one little problem'. :)
Have you considered using the Poll method (or the static Select method for multiple sockets) instead of BeginReceive to check whether there is data to read? In my opinion, the pending BeginReceive is what is causing you the trouble.
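A small sketch of the Poll idea (a zero timeout makes the check non-blocking; the socket variable is assumed):

if (socket.Poll(0, SelectMode.SelectRead))
{
    if (socket.Available > 0)
    {
        byte[] buffer = new byte[socket.Available];
        int n = socket.Receive(buffer);
        // hand the received bytes off, then send whenever you need to
    }
    else
    {
        // Poll reported readable with nothing available: the peer closed the connection
    }
}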

issue/response serial port processing in C#

Okay here's the problem (this is related to a previous post of mine)
I need to be able to have an issue/response system for serial comms that works something like this:
issue: hello
response: world?
issue: no, hello nurse
response: well, you're no fun.
This means that when I say "hello", the remote unit is expected to send back "world?" within some timeframe; if it doesn't, I should have a way to access that buffer. So here's what I'm thinking; please give me feedback:
a ReaderWriterLock'd 'readBuffer'
an Issue method that will write to the stream
a Response method that will watch the readBuffer until it contains what I'm expecting, or until the timeout expires.
First, how would the Stack Overflow community design this class? Second, how would they write the DataReceived event handler? Third, how would they make this code robust enough that multiple instances of the class can exist on parallel threads for simultaneous communications?
This is basically a producer-consumer problem, so that should be the basis for the general design.
Here are some thoughts on that:
a) FIFO buffer (Queue)
First of all, you should have an instance of a thread-safe Queue (a FIFO buffer) for each instance of your class. One thread would receive the data and fill it, while the other one would read the data in a thread-safe manner. This only means you would have to use a lock on each enqueue/dequeue operation.
FIFO Queue would enable you to simultaneously process the data in the worker thread, while filling it from the communication thread. If you need to receive lots of data, you could dequeue some data in the worker thread and parse it before all of it has been received. Otherwise you would need to wait until all data has been received to parse it all at once. In most cases, you don't know how much data you are supposed to get, until you start to parse it.
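A minimal locked FIFO in the spirit of point (a) might look like this (on newer frameworks a ConcurrentQueue<T> would replace the explicit lock):

class ByteQueue
{
    private readonly Queue<byte[]> _queue = new Queue<byte[]>();
    private readonly object _sync = new object();

    // called from the communication (port) thread
    public void Enqueue(byte[] chunk)
    {
        lock (_sync) { _queue.Enqueue(chunk); }
    }

    // called from the worker thread; returns false when the buffer is empty
    public bool TryDequeue(out byte[] chunk)
    {
        lock (_sync)
        {
            if (_queue.Count > 0) { chunk = _queue.Dequeue(); return true; }
            chunk = null;
            return false;
        }
    }
}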
b) Worker thread waiting for data
I would create a worker thread which would wait for a signal that new data has been received. You could use ManualResetEvent.WaitOne(timeOut) to have a timeout in case nothing happens for a while. When the data is received, you would have to parse it, based on your current state -- so that would be an implementation of a state machine.
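A sketch of that worker loop, assuming the ByteQueue above as _buffer, a _running flag, and a two-second timeout (all of these names are assumptions):

private readonly ManualResetEvent _dataArrived = new ManualResetEvent(false);

private void WorkerLoop()
{
    while (_running)
    {
        bool signalled = _dataArrived.WaitOne(2000);   // wake on new data or after the timeout
        _dataArrived.Reset();

        if (!signalled)
        {
            // timeout branch: e.g. report that the expected response never arrived
            continue;
        }

        byte[] chunk;
        while (_buffer.TryDequeue(out chunk))
        {
            // feed the chunk into the protocol state machine here
        }
    }
}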
c) Port abstraction
To handle different types of ports, you could wrap your serial port inside an interface, which could have at least these methods (I might have forgotten something):
interface IPort
{
    void Open();
    void Close();
    event EventHandler<DataEventArgs> DataReceived;
    void Write(Data data);
}
This would help you separate the specific communication code from the state machine.
NOTE:
(According to Microsoft) The DataReceived event is not guaranteed to be raised for every byte received. Use the BytesToRead property to determine how much data is left to be read in the buffer. So you could create your own implementation of IPort that polls the SerialPort at regular intervals to ensure that you don't miss a byte (there is a question on SO which already addresses this).
d) Receiving data
To receive the data, you would have to attach a handler for the IPort.DataReceived event (or SerialPort.DataReceived, if you're not wrapping it), and enqueue the received data to the Queue inside the handler. In that handler you would also set the mentioned ManualResetEvent to notify the worker thread that new data has been received.
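Tying the earlier sketches together, an assumed handler for the unwrapped SerialPort.DataReceived case could be (wired up with port.DataReceived += OnDataReceived after the port is opened):

private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    var port = (SerialPort)sender;
    int count = port.BytesToRead;     // may cover data from more than one raised event
    if (count <= 0) return;

    byte[] chunk = new byte[count];
    port.Read(chunk, 0, count);

    _buffer.Enqueue(chunk);           // the thread-safe FIFO from point (a)
    _dataArrived.Set();               // wake the worker thread from point (b)
}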
I had a similar design issue writing an asynchronous socket listener for formatted data.
A translation of what I came up with into the SerialPort / DataReceived model would be something like this:
The main class encapsulates the issue/response system - it will contain the logic for generating the response based on input. Each instance of the class will be bound to a single serial port that can be set during or after construction. There will be a StartCommunications type method - it will wire up the DataReceived event to another method in the class. This method is responsible for grabbing the data from the port, and determining if a full message has arrived. If so, it raises its own event (defined on the class), which will have an appropriate method wired to it. You could also have it call a predefined method on your class instead of raising an event - I defined an event to improve flexibility.
That basic design is working just fine in a production system, and can handle more input than the rest of the systems connected to it can.
